Over the course of four weeks beginning in late March, Maine students in grades 3-8 will slog through seven-plus hours of standardized tests. These reading, writing and math tests will seriously disrupt school schedules, steal valuable instructional time, cause stress and anxiety, and ultimately produce data that is at best nearly meaningless and at worst suspect.
Last year our students slogged through a different exam: Supposedly, the Smarter Balanced assessment was designed to determine whether our students are meeting the Common Core Standards, which, as of 2011, are part of the Maine Learning Results. The test proved so problematic (technology issues galore, including lost answers, wrong tests, and connectivity failures) that in June the legislature directed the Maine Department of Education (DOE) to withdraw from the Smarter Balanced Consortium and to replace the test.
Although we educators had been promised a quick turnaround on Smarter Balanced results, they arrived in September, just in time to be rendered useless, but try telling that to the Pollyannas at the DOE.
“These results are reflective of a more rigorous assessment as the world is changing rapidly,” stated a September 11, 2015 Department of Education post, “and Maine is poised to improve in this educational shift to better prepare our students for future success.” Apparently this means that although 25% of all Maine students who took Smarter Balanced (many opted out) did NOT meet the standards for ELA/literacy and 32% did NOT meet the mathematics standards, these baseline scores indicate that Maine is well on its way to preparing our students for the future.
Fast forward a few months and we’re poised to take a brand new, hastily designed test. Weirdly known as the 2016 eMPower™ME test, it was developed by Measured Progress, the company that held the contract for the Maine Educational Assessment (MEA) for many years prior to Smarter Balanced. While the 2015 Smarter Balanced test couldn’t be compared with previous Maine Educational Assessments because of differing platforms and standards, Measured Progress is conducting a study that will use MetaMetrics’ Lexile® and Quantile® scales to link last year’s debacle to this year’s assessments.
Whatever all that means.
In the meantime, apparently because of time constraints, Measured Progress does not have enough backup items in its test bank to provide sample writing prompt responses to teachers. In past years, such prompts and responses have provided rich practice for students and a framework for instruction for teachers.
It gets worse.
For the sample items Measured Progress was kind enough to provide, there are NO ANSWERS. If the answers were clear-cut and unambiguous, maybe this wouldn’t be an issue, but they are not. I took several of these sample tests along with my students and I am not kidding when I say I was almost in tears by the end of it, because on SEVERAL QUESTIONS, I did not (do not) know the correct responses.
When I asked the DOE why the answers were missing, the quick emailed response was, “DOE was not provided with an answer key.”
Huh? No one at the DOE thought to ask Measured Progress for the answers? No one at the DOE thought it might be helpful to save teachers time and energy by providing correct responses for sample questions? No one at the DOE recognized that when students are given models, exemplars, and responses to practice with before taking the test, they are way-y-y more likely to be successful?
Another huge challenge with the reading part of the test is that students have to scroll back and forth between the text and the questions. Unlike a paper assessment, where students can view a text in its entirety right next to the questions, the online version requires kids to hold information in their heads as they scramble backward and forward for textual evidence. For students with reading deficits, this is torturous and almost impossible. It’s a challenge even for students who do well in reading.
Go to this link and try it yourself.
Then there’s the whole issue of trying to assess a multi-faceted, comprehensive standard in one writing prompt. The 8th grade reading sample asks students to read a Hans Christian Andersen fairy tale and a selection from Call of the Wild and then “Analyze the ways in which both Passage 1 and Passage 2 reinforce a lesson or moral. Identify the lesson each Passage teaches and then analyze how that lesson is supported by the Passage. Cite strong and thorough textual evidence to support your answer.”
Never mind that the directions themselves are redundant; asking eighth-grade students to ANALYZE two disparate texts in one essay in one sitting while scrolling back and forth looking for textual evidence is ludicrous. If such a prompt indeed appears on the upcoming tests, I predict the percentage of students who meet or exceed the standards will be minuscule, while the percentage of students who are dazed, confused, and stressed will be high.
When people put too much stock in the results of standardized tests, we educators often respond that these assessments don’t give the whole picture, that such tests offer only one snapshot of student performance, and that it’s the day-to-day formal and informal assessments, the multiple measures of learning teachers collect over time, that provide a far more accurate evaluation.
I totally agree with that view, so when I get my classes’ results in June, as promised (or September, as is more likely), I’ll do what I usually do with distorted snapshots. I’ll check them out for a bit, I’ll try to ascertain if there’s anything I can learn from them, and then, I’ll slip them in an envelope and tuck them away in the bottom of my filing cabinet.