These days, more tests ask students to write short essays, not just answer multiple-choice questions. But it’s slow and expensive to hire humans to do the grading. Essay-grading bots are cheap and fast, but are they any good?
It’s easy to fool a robot grader, Les Perelman, a former writing director at MIT, tells the Chronicle of Higher Education.
His Basic Automatic B.S. Essay Language Generator, or Babel, can crank out robot-fooling essays using one to three keywords. Each sentence is grammatically correct, structurally sound and meaningless. Robots can’t tell the difference, says Perelman.
He fed in “privacy.” Babel wrote:
“Privateness has not been and undoubtedly never will be lauded, precarious, and decent. Humankind will always subjugate privateness.”
MY Access!, an online writing-instruction product, graded the essay immediately: 5.4 points out of 6, with “advanced” ratings for “focus and meaning” and “language use and style.”
Robots and human graders awarded similar scores to 22,000 essays by high school and middle school students, concluded a 2012 study by Mark D. Shermis, a former dean of the College of Education at the University of Akron.
Computer scientists at edX, the nonprofit online-course provider co-founded by MIT, are working on the Enhanced AI Scoring Engine, or EASE. “The software can learn and imitate the grading styles of particular professors,” reports the Chronicle.
Some of edX’s university partners have used EASE to provide feedback to students in massive open online courses (MOOCs).