LAST week, Education Secretary Arne Duncan acknowledged standardized tests are flawed measures of student progress. But the problem is not so much the tests themselves -- it's the people scoring them.
Many people remember those tests as lots of multiple-choice questions answered by marking bubbles with a No. 2 pencil, but today's exams nearly always include "open-ended" items in which students fill the blank pages of a test booklet with their own thoughts and words. On many tests, a substantial share of the points comes from such open-ended items, and that's where the real trouble begins.
Multiple-choice items are scored by machines, but open-ended items are scored by humans, who are subjective and prone to error. I know because I was one of them. In 1994, I was a graduate student looking for part-time work. After a five-minute interview, I got a job scoring statewide fourth-grade reading comprehension tests. The for-profit testing company that hired me paid almost $8 an hour, not bad money for me at the time.
One of the tests I scored had students read a passage about bicycle safety. They were then instructed to draw a poster illustrating one of the rules from the text. We awarded one point for a poster that included a correct rule and zero for a drawing that did not.