What is a typical sixth grader?

According to Meredith Kolodner at Insideschools, many principals and teachers have been raising concerns over the rubrics and scoring procedures for this year’s standardized tests in New York State.

Sometimes the rubrics (for the written portions of the tests) are ambiguous. Sometimes they work against good judgment. Sometimes the writing prompt itself puts students and scorers alike in a quandary.

Here’s an example of the last of these:

In addition, a listening passage about a kid who loved music asked students to write about how the child in the passage is like and unlike a “typical 6th grader.” Teachers debated what would lead to a high score: Does a typical 6th grader really like music? Does a typical 6th grader attend after-school? Take the bus? There was no consensus on what details would be considered “meaningful and relevant examples,” as dictated by the scoring guide.

Assuming that the description is accurate, I wonder what the test makers had in mind. What is the point of asking students to compare a character to a “typical” sixth grader? Is there such a thing? Are children supposed to know (or care) what a “typical” sixth grader is?

To receive a high score, a student must fulfill all the requirements of the task. Here, an intellectually advanced student could easily get sidetracked by definitions of “typical” and fail to write the essay as required.

Rubrics have inherent limitations; you can’t standardize good judgment. When applied on a massive scale, they become more limiting still. But they are here to stay, at least for now. Given that state of affairs, it’s all the more important to write good test questions. This, apparently, is not one of them.

I scored tests this year but signed a confidentiality agreement. I am not allowed to discuss what I saw on the tests or in student writing. Thus I am limiting myself to commenting on what others have reported. In the past, New York State tests were released to the public after they had been administered and scored. This is good practice; we should all have the opportunity to see and comment on them. After all, they presumably reflect what students are expected to learn.

Comments

  1. Many have this big hangup about state tests being meaningful on some sort of critical-thinking and problem-solving basis, as if teachers can’t do this better in their own classrooms using homework, projects, tests, and directly knowing and evaluating each student. Are state tests supposed to evaluate and fix this level of academic achievement if the schools and teachers can’t do it themselves?

    What is 6*7? What does “cogent” mean? Heaven forbid that we have a state test that just checks for basic skills and knowledge. I was on a committee once that evaluated our middle school’s state math test results. It said that our problem-solving score was dropping. Heavens! We’d better spend more time on problem solving. … whatever that means. That’s a pretty poor feedback loop. If we had a state test of the basics, then we might know that too many kids are getting to fifth grade not knowing the times table. Of course, many think that is an inauthentic or unimportant skill; that it doesn’t matter as long as kids can show they can problem solve. … whatever that is. Then again, if they can’t problem solve, we can’t separate the issues to fix the problem.

    State tests are not the ultimate feedback tool for high achievement. They are just a cutoff to check for some low level of proficiency. Stick with the basics.

  2. A “typical” 6th grader in New York is going to be very different in an urban ghetto, a chi-chi suburb, and a rural upstate town. Why not just have the student write on something covered in the 6th grade curriculum, like ancient history?

  3. Ponderosa says:

    The whole business of teaching and testing writing in American schools is a sham. We drill kids over and over on the writing process, and what do we get? The kids who already were decent writers remain decent writers, and the kids who couldn’t write still can’t write. Writing ability doesn’t come from writing workshop; it comes from a complex array of sources, including genes and background knowledge. We cannot teach the central components of writing skill directly. And tests that purport to measure how much writing skill school has imparted are really measuring a random slice of a kid’s background knowledge, his exposure to texts in that genre, his genetic endowment… SteveH is correct that these tests don’t create a useful feedback loop. If we teach content well, writing ability will develop as a byproduct. We should test content, not mercurial skills.