Common Core tests may not pass

The two Common Core testing groups — Smarter Balanced Assessment Consortium and the Partnership for the Assessment of Readiness for College and Careers (PARCC) — made big promises when they bid for $350 million in federal funding, notes Education Week. The vision has “collided with reality.” Due to “political, technical, and financial constraints,” some ambitious plans have been scaled back.

. . . most students will take the exams on computers, rather than use bubble sheets, for instance. The Smarter Balanced assessment will adapt in difficulty to each student’s skill level, potentially providing better information about strengths and weaknesses.

In addition, students taking the PARCC test will write essays drawing on multiple reading sources. And to a level not seen since the 1990s, students taking both exams will be engaged in “performance” items that ask them to analyze and apply knowledge, explain their mathematical reasoning, or conduct research.

Performance-based assessment requires “longer, more expensive exams,” reports Ed Week. That’s a tough sell. Both consortia have reduced the length or complexity of some test elements.

Both groups will continue to use some multiple-choice or machine-scored questions, but many of those items have been enhanced — allowing students to select multiple answers, for instance, or to drag and drop text from reading passages to cite evidence.

Both consortia will hire teachers to score written answers after deciding that robot scorers aren’t yet up to the job.

Both consortia promised to develop tools and supports for teachers, but help for teachers has “lagged,” reports Ed Week.

Are core tests written for robo-readers?

Sacramento teacher Alice Mercer questions the Common Core tests her students will be taking.  A sample essay prompt by the Smarter Balanced Assessment Consortium gives students a list of arguments to use.

The new standards are supposed to promote higher-order thinking. So why not let students think up their own arguments? Evaluating students’ writing is time-consuming and expensive — unless it can be automated, Mercer points out. Listing the arguments seems designed for the convenience of a robo-reader.

Robo-readers have limitations, she observes. It’s hard for computers to score open-ended questions.

Basically, the programs can judge grammar and usage errors (although I suspect this will lead to a very stilted form of writing that only a computer could love), but they’re in no position to judge the facts, assertions, or content of an essay. The only way to do that is to limit the “facts” students use by giving them a list.

Computer grading could explain Common Core’s hostility to background knowledge, Mercer adds.

Computer-scored tests train children to think like the computer, writes Anthony Cody in Ed Week. “If we are sacrificing intelligence, creativity and critical thinking for the sake of the efficiency and standardization provided by a computer, this seems a very poor trade.”

Common tests lose support

Forty-five states and the District of Columbia are moving forward on Common Core Standards, but support for common testing is eroding, reports StateImpact.

Georgia will use its own exam, instead of the costlier test developed by the Partnership for Assessment of Readiness for College and Careers (PARCC).

Two of Florida’s top elected leaders want the state to leave PARCC, even though Florida serves as the testing consortium’s fiscal agent.

Already Alabama, North Dakota and Pennsylvania have left the consortium. Oklahoma plans to design its own test, and Indiana isn’t participating in PARCC governing board meetings right now. State education officials say they’re waiting until after a mandatory legislative review of the Common Core academic standards.

That brings the number of states participating in PARCC down to 18 plus the District of Columbia.

Pennsylvania, Utah and Alabama quit the other testing group, Smarter Balanced Assessment Consortium, which now has 24 members. (Some states had joined both groups.)

The crumbling of the testing consortia is a “disaster,” writes Andy Smarick on Flypaper.

At this point, I won’t be surprised if we end up with 20 or more different testing systems in 2014–15. So much for commonness, so much for comparability. Rigor and alignment with tough standards are likely the next to fall.

Blinded by “technocratic hubris,” common assessment advocates “underestimated how difficult it would be to undo decades of state policy and practice on tests,” writes Smarick. Governors and state chiefs will be reluctant to spend lots of money for a testing system that will make their schools and teachers look bad, he predicted six months ago.

The Common Core sky isn’t falling, responds Checker Finn, also a Fordhamite. This is “right sizing.”

The forty-five-state thing was always artificial, induced by Race to the Top greed and perhaps a crowd mentality. Never in a million years were we going to see forty-five states truly embrace these rigorous academic expectations for their students, teachers, and schools, meet all the implementation challenges (curriculum, textbooks, technology, teacher prep, etc.), deploy new assessments, install the results of those assessments in their accountability systems, and live with the consequences of zillions of kids who, at least in the near term, fail to clear the higher bar.

It’s “better for states to drop out in advance than to fake it, pretending to use the Common Core standards but never really implementing them,” Finn writes. “That’s long-standing California-style behavior (fine standards, wretched implementation), in contrast with Massachusetts-style behavior (exemplary standards and serious implementation—and results to show for it).”

Most of the drop-out states will keep the standards, but write their own tests or sign up with ACT. They’ll give up comparability, “one of the major benefits of commonality,” Finn writes. Some may change their minds later “or face up to the fact that (like Texas and Virginia) they don’t really want to use the Common Core at all.”

New ‘core’ test will be (a bit) shorter, simpler

One group developing tests aligned to new standards will make exams shorter and simpler — but less capable of providing detailed feedback on students’ performance, reports Ed Week.

The Smarter Balanced Assessment Consortium, one of two groups crafting tests for the Common Core State Standards, will include only one lengthy “performance task” in each subject—mathematics and English/language arts. The test will also include multiple-choice, short-response, and “technology-enhanced” questions. But it won’t be all that quick: SBAC estimates seven hours of testing in grades 3-5, 7½ hours in grades 6-8, and 8½ hours in grade 11.

A performance item might ask students to “tackle longer, more complex math problems and write essays based on reading multiple texts,” reports Ed Week.

In this version, students will be evaluated in math on concepts and procedures, communicating reasoning, and problem solving/modeling/data analysis, and in English/language arts on reading, writing, listening, and research.

“Is it about getting data for instruction? Or is it about measuring the results of instruction? In a nutshell, that’s what this is all about,” said Douglas J. McRae, a retired test designer who helped shape California’s assessment system. “You cannot adequately serve both purposes with one test.”

That’s because the more-complex, nuanced items and tasks that make assessment a more valuable educational experience for students, and yield information detailed and meaningful enough to help educators adjust instruction to students’ needs, also make tests longer and more expensive, Mr. McRae and other experts said.

Separate formative and interim tests can help teachers figure out what students need to learn, while the end-of-the-year test is used for accountability, say SBAC designers. Teachers will be able to use an online bank of test questions and tasks and a bank of “formative” tools to judge students’ learning.