NY principals: Common Core tests fail

New York’s new state exams are supposed to be aligned with the new Common Core Standards, but a group of principals says they’re poorly aligned and unbalanced, take too much time, and often confuse students.

The English Language Arts tests focused mostly on one skill — “analyzing specific lines, words and structures of informational text” — while ignoring other “deep and rich” skills.

. . . the testing sessions—two weeks of three consecutive days of 90-minute (and longer for some) periods—were unnecessarily long, requiring more stamina of a 10-year-old special education student than of a high school student taking an SAT exam. Yet, for some sections of the exams, the time was insufficient for the length of the test.

Students faced more multiple-choice questions than ever before, the principals complain. “For several multiple choice questions the distinction between the right answer and the next best right answer was paltry at best.”

The math tests contained 68 multiple-choice problems often repeatedly assessing the same skills. The language of these math questions was often unnecessarily confusing.

The principals also object to “putting the fate of so many in the education community in the hands of Pearson – a company with a history of mistakes.”

Anti-testing rebellion grows

Resistance to testing is growing as schools introduce tougher tests linked to Common Core standards, writes Marc Tucker of the National Center on Education and the Economy on Ed Week‘s Top Performers blog. Parents — and sometimes teachers — are opting out of state exams.

Test resistance isn’t seen in high-performing countries, Tucker writes. All except Finland have tests that match their standards. But they’re not like U.S. tests.

First, they are designed to match the curriculum, to find out whether and to what degree students have mastered the curriculum the teacher has been teaching. American tests, for many years, have been designed to be curriculum neutral, meaning unrelated to the curriculum.  So American teachers have seen the basic skills tests they are familiar with as their enemy, testing things that they did not necessarily teach, and often don’t believe should be taught.

Common Core State Standards will fix this, if teachers can teach a standards-based curriculum aligned to the tests.

Second, American tests have been designed to be, first and foremost, cheap.  . . .  (Multiple-choice) tests are great at testing the rudiments of the basic skills and not very good at testing complex skills, deep understanding, critical thinking or creativity, the things teachers want most to teach, another reason for them to detest the typical test.  In the top-performing countries, there is very little use of multiple-choice, computer-based testing.  Most tests are essay-based.  They are scored by teachers trained to score them and teachers generally feel that these examinations are testing the things they think really matter.

Our top competitors give statewide or national exams two or three times in a student’s school career, often in 10th grade and the end of high school.  Testing to monitor school quality is done by sampling a few students in a few schools.  They can afford expensive, high-quality tests because they do less testing.

No top-performing country has an accountability system like No Child Left Behind, which mandates annual testing in grades three through eight.  No other country is using test scores to evaluate teachers.

American teachers “see cheap tests, unrelated to what they teach and incapable of measuring the things they really care about, being used to determine their fate and that of their students,” Tucker writes. If Common Core tests are cheap, low-quality tests, “millions of American teachers may rebel.”

Testing could be Common Core’s fatal flaw, writes Peg Tyre in the final part of her four-part series on what Common Core means for American education.

What tests are best?

Under No Child Left Behind, tests don’t measure what’s important, writes Susan Engel, director of the teaching program at Williams College, in a New York Times op-ed.

Instead, we should come up with assessments that truly measure the qualities of well-educated children: The ability to understand what they read; an interest in using books to gain knowledge; the capacity to know when a problem calls for mathematics and quantification; the agility to move from concrete examples to abstract principles and back again; the ability to think about a situation in several different ways; and a dynamic working knowledge of the society in which they live.

Hooey, responds Katharine Beals of Out in Left Field.

Completely absent from Engel’s proposals is content knowledge — unless “dynamic working knowledge of the society in which they live” includes things like world geography, American history, and current events in Pakistan. This, despite the fact that the latest cognitive science research indicates that “higher level” skills neither develop nor apply independently of structured, information-rich content.

Also absent are such specific skills as penmanship, decoding, sentence construction, foreign language fluency, balancing chemical equations, and finding the roots to quadratic equations.

A good multiple-choice test can measure “specific skills and rich, structured, factual knowledge,” Beals writes.

But Engel wants to measure students’ vocabulary and grammatical complexity by sampling their writing. These are developmental skills, not academic skills taught by teachers, argues Beals.

Engel suggests having children “Write a description of yourself from your mother’s point of view” in order to “gauge the child’s ability to understand the perspectives of others.”

Again, it’s not clear what purpose this assessment serves – beyond identifying who is and who isn’t on the autistic spectrum.

Similarly problematic is Engel’s proposal to measure reading comprehension levels by having children do an oral reconstruction of a story to a “trained examiner.” What about shy children? What about children who struggle to express themselves orally?

Engel’s proposal to measure literacy levels by “testing a child’s ability to identify the names of actual authors amid the names of non-authors” makes sense only if all students are taught a core curriculum including these authors, Beals writes. Otherwise, this testing penalizes socio-economically disadvantaged children.

It seems to me that school tests should measure what’s taught in school to see if children are getting it. Jaden doesn’t enjoy reading and doesn’t know L. Frank Baum from Franklin Roosevelt. Is this actionable information?

Race to new tests

Competition has opened for $350 million in Race to the Top funding for new assessments linked to common standards, reports Education Week. That means less multiple-choice testing and more “essays, multidisciplinary projects, and other more nuanced measures of achievement.”

(The Education Department) wants tests that show not only what students have learned, but also how that achievement has grown over time and whether they are on track to do well in college. And all that, the regulations say, requires assessments that elicit “complex student demonstrations or applications” of what they’ve learned.

There is money for “comprehensive assessment systems” measuring mastery of a “common set of college- and career-ready” standards. Applicants get points for working with state universities to design the tests and guarantee that students who score above a certain level will be able to enroll in for-credit college classes.

Another pot of money will fund end-of-course high school exams.

Stanford Education Professor Linda Darling-Hammond, who leads a group representing a majority of states, believes performance assessments can improve the way teachers teach, notes John Fensterwald on Educated Guess.

The alternative is performance assessments, which require students to construct their own responses to questions. These can take the form of supplying short phrases or sentences to questions, writing essays or conducting complex and time-consuming activities, such as a lab experiment. “By tapping into students’ advanced thinking skills and abilities to explain their thinking, performance assessments yield a more complete picture of students’ strengths and weaknesses,” Darling-Hammond wrote.

“Performance assessments face obstacles of cost, reliability and testing time,” Fensterwald writes. He links to a critique of Darling-Hammond’s paper by Doug McRae, a retired publisher for the testing division of McGraw-Hill.

Because multiple-choice questions are cheap and easy to score, it’s possible to ask students a wide range of questions. As tests get more complex — write an essay, design an experiment, stage a debate — students spend more time being assessed on far fewer prompts. Grading is subjective. Todd Farley’s Making the Grades explains how tough it is for a group of people to score short answers and essays with consistency and fairness.