What is a typical sixth grader?

According to Meredith Kolodner at Insideschools, many principals and teachers have been raising concerns over the rubrics and scoring procedures for this year’s standardized tests in New York State.

Sometimes the rubrics (for the written portions of the tests) are ambiguous. Sometimes they work against good judgment. Sometimes the writing prompt itself puts students and scorers alike in a quandary.

Here’s an example of the last of these:

In addition, a listening passage about a kid who loved music asked students to write about how the child in the passage is like and unlike a “typical 6th grader.” Teachers debated what would lead to a high score: does a typical 6th grader really like music? Does a typical 6th grader attend after-school? Take the bus? There was no consensus on what details would be considered “meaningful and relevant examples,” as dictated by the scoring guide.

Assuming that the description is accurate, I wonder what the test makers had in mind. What is the point of asking students to compare a character to a “typical” sixth grader? Is there such a thing? Are children supposed to know (or care) what a “typical” sixth grader is?

In order to receive a high score, a student must fulfill all the requirements of the task. Here an intellectually advanced student could easily get sidetracked by the question of what “typical” means and fail to write the essay as required.

Rubrics have inherent limitations; you can’t standardize good judgment. When applied on a massive scale, they become more limiting still. But they are here to stay, at least for now. Given that state of affairs, it is all the more important to write good test questions. This, apparently, is not one of them.

I scored tests this year, but I signed a confidentiality agreement and am not allowed to discuss what I saw on the tests or in student writing. Thus I limit myself to commenting on what others have reported. In the past, New York State tests were released to the public after they had been administered and scored. This is good practice; we should all have the opportunity to see and comment on them. After all, they presumably reflect what students are expected to learn.

The Danielson Framework: what is engagement?

I look forward to the next twelve days of guest-blogging with Michael Lopez. I will begin with some thoughts about the Danielson Framework for Teaching and its assumptions about student responsibility. A question for readers: is an “engaged” student one who starts projects, initiates groups, and selects materials? Or do you have other definitions of engagement?

The Danielson Framework (created by Charlotte Danielson, an education policy adviser and consultant) is now the standard teacher evaluation rubric in New York City and hundreds of other districts around the country. It will be used with a point scale, Danielson’s discomfort notwithstanding. (She told Peter DeWitt in an interview, “In general, I don’t like numbers of any kind. Teaching is enormously complex work and it is very hard to just reduce it to a number of any kind. However, it’s important to capture, in a short-hand manner, the relative skills of different teachers, so I suppose numbers or ratings of some kind are inevitable.”)

As reading material, the Framework generally preens my feathers instead of ruffling them (though the two are not necessarily at odds). It consists of 22 components distributed across four domains: Planning and Preparation, Classroom Environment, Instruction, and Professional Responsibilities. The explanatory text fills in some of the subtleties and caveats. As a rubric, though, it affects not my feathers but my gut; some of its key premises seem shaky at best. For instance, it assumes that student “engagement” is essential to learning and that students manifest such engagement overtly, through initiative and leadership. The first part makes sense; how can you learn unless you put some effort into it? It is the second part that leaves me uneasy.

Let us consider the Framework’s third domain, “Instruction,” and the domain’s third component, “Engaging Students in Learning.”

How innovative are you, teacher?

Yesterday I wrote about the NYC public school requirement that every student have a “learning goal” in every subject. Today I will talk about teacher goals. (What, did you think teachers could slip away without goals? Everyone must have goals!) In setting these goals for themselves, teachers must follow the Continuum of Teacher Development (you have to buy it to see it), a rubric devised by the New Teacher Center at the University of California, Santa Cruz. Based on constructivist assumptions, it was originally intended for new teachers. Now all teachers must use it to evaluate themselves. Apparently that has been deemed such a success (in advance) that Quality Reviewers use it to observe lessons and rate schools (see slide 13).

I first encountered the Continuum of Teacher Development as a new teacher. My official mentor from the Department of Education, a gracious and knowledgeable woman, would help me fill out the sheets for each category. This took up much of our meeting time, but it had to be done. My mentor also spent much time with me in the classroom and at play rehearsals, so it wasn’t all paperwork. I reconciled myself to the paperwork requirement, thinking that after my first year I would not have to deal with the Continuum again.

I was wrong. The Continuum is now for everyone. And a strange rubric it is. Each category and subcategory contains descriptions for the levels Beginning, Emerging, Applying, Integrating, and Innovating. What does it take to be an “innovative” teacher, according to the rubric? First of all, it takes a willingness to hand over authority to the kids. Second, it takes… er, well, I don’t know what it takes. The descriptions of the “Innovating” level are sometimes hard to understand.

Here is a sample of the levels from the subcategory “Facilitating learning experiences that promote autonomy, interaction, and choice”: