Teachers hit computer-scored writing exams

New tests aligned to the new common standards will ask students to do more writing and provide quick feedback to teachers on their students’ skills. But that’s cost-effective only if students’ writing is scored by computers, writes Catherine Gewertz on Ed Week‘s Curriculum Matters. Not surprisingly, the National Council of Teachers of English thinks machines can’t evaluate writing.

In its statement, the NCTE says that artificial intelligence assesses student writing by only “a few limited surface features,” ignoring important elements such as logic, clarity, accuracy, quality of evidence, and humor or irony. Computers’ ability to judge student writing also gets worse as essays get longer, the NCTE says. The organization argues for considering other ways of judging student writing, such as portfolio assessment, teacher-assessment teams, and more localized classroom- or district-based assessments.

If essays are scored by humans — usually teachers working over the summer — the costs will go way up, tempting states to require less writing.

Comments

  1. Cardinal Fang says:

    “Artificial intelligence assesses student writing by only ‘a few limited surface features,’ ignoring important elements such as logic, clarity, accuracy, quality of evidence, and humor or irony.”

    That is absolutely true of current AI. And it is also absolutely true of the humans who actually evaluate essays for high-stakes tests like the SAT and the state tests that K-12 students take.

    Probably everyone reading this comment would do a much better job than a computer at evaluating an essay. And we would also do much better than someone who spends 60 seconds per essay. Let’s compare apples with apples here.