My standard advice for learning how to write can be boiled down to six words: Read a lot. Write a lot. If brevity is essential, three words are enough: Write a lot. I can even make do with one word: Write.
So I’m sympathetic to the argument that students will write better if they write more, with feedback on their efforts. But teachers don’t have the time to read and respond to every draft of every paper.
Automated essay scoring lets teachers assign more writing and focus their own time on “higher order feedback,” argues Tom Vander Ark on Getting Smart. In response to an attack on scoring engines in the New York Times, Vander Ark summarizes and links to the case for automation.
Measurement is a friend to creativity, he writes in another post.
The online scoring engines use the same rubrics to score essays as human graders. Any ‘standardization’ of writing is not a function of the method of scoring but the nature of the prompt, i.e., if a state requires every 8th grader to write a five-paragraph essay every year, it may lead to formulaic teaching—that’s a teaching issue driven by a testing issue, not a scoring issue.
People are sick of standardized tests “because most states are using old psychometric technology to administer inexpensive tests with little real performance assessment.”
. . . we’ve been using these tests for more than they were designed for—to hold schools accountable, to manage student matriculation, to evaluate teachers, and to improve instruction. But remember the state of the sector in the early 90s before state tests were widely used. There was no data, chronic failure was accepted, and the achievement gap was largely unrecognized. Measurement is key to improvement.
“Essay graders will soon be incorporated into word processors and will be used as commonly as spell-check,” Vander Ark predicts. Students will get more assessment to help them improve.
Update: Machines Shouldn’t Grade Student Writing — Yet, writes Dana Goldstein on Slate.