Are core tests written for robo-readers?

Sacramento teacher Alice Mercer questions the Common Core tests her students will be taking. A sample essay prompt by the Smarter Balanced Assessment Consortium gives students a list of arguments to use.
[Image: SBAC sample essay prompt]

The new standards are supposed to promote higher-order thinking. So why not let students think up their own arguments? Evaluating students’ writing is time-consuming and expensive — unless it can be automated, Mercer points out. Listing the arguments seems designed for the convenience of a robo-reader.

Robo-readers have limitations, she observes. It’s hard for computers to score open-ended questions.

Basically, the programs can judge grammar and usage errors (though I suspect this will lead to a very stilted form of writing that only a computer could love), but they are not in a position to judge the facts, assertions, or content of an essay. The only way to do that is to limit which “facts” students can use by giving them a list.

Computer grading could explain Common Core’s hostility to background knowledge, Mercer adds.

Computer-scored tests train children to think like the computer, writes Anthony Cody in Ed Week. “If we are sacrificing intelligence, creativity and critical thinking for the sake of the efficiency and standardization provided by a computer, this seems a very poor trade.”