Knowing when to stop

In a paper delivered at the 2010 conference of the European Association of Conservatoires (AEC) in Ghent, Samuel Hope, executive director of the National Association of Schools of Music, spoke about the complexities of assessment in higher music education. His speech emphasizes the “centrality of content” in educational policy, particularly assessment policy.

Assessment at the higher levels must involve the language of the field; musicians in an orchestra, for instance, assess themselves continually as they play but have no need to document such assessment. (Samuel Hope is not disparaging documented assessment; he’s saying that in this particular context, at this level, it would burden the work instead of lifting it.)

Which aspects of musical composition and performance require highly advanced knowledge and judgment? Which are particularly resistant to standardized assessment? Hope draws attention to one in particular: knowing when to stop.

This means knowledge of when to stop doing something and begin doing something else and how to work effectively with relationships among stasis and change, and speed and time. Knowing when to stop is an aspect of mastering many relationships and balances in music. Mozart, Beethoven, and other great composers are consummate masters of knowing when to stop, when a chord or key or musical figure has been continued long enough, and when there is time for a variation or a change altogether. The performer of such music has thousands of choices about how to make the structural decisions of the composer come alive in performance. Great performers are also masters in this area. In many artistic dimensions, knowing when to stop is an essential determiner of the line between fine works of art and kitsch.

Knowing when to stop is important in all fields, but it isn’t a transferable skill. You may have a general sense of what is excessive (in art, music, or poetry), but you cannot make fine decisions about stopping, or assess the decisions of others, unless you know art, music, or poetry itself.

Hope points out that knowing when to stop is also essential to institutional review. You can establish frameworks for music instruction at the higher levels, but how detailed should they become? When should the frameworks stop and leave the remaining decisions to the individual institutions? It is essential that review and accreditation organizations such as AEC and NASM take on these questions, according to Hope, because they have the requisite knowledge and understanding.

One of the problems I see in K–12 education reform is precisely the lack of a sense of when to stop. Let’s take group work as an example. It’s one thing to say that certain kinds of group work, used in the right contexts, can foster certain kinds of learning. It’s another to require group work in every lesson (or even in most lessons). Similarly, it’s one thing to regard test scores as limited measures of intellectual attainment of a particular kind. It’s another to treat them like numerical oracles.

To know when to stop, one must consider the subject matter itself. For instance, the Common Core State Standards have specified a ratio of informational and literary text for each grade span. But the proper ratio depends on what the students are learning. The ratio should not precede the content; if the content is well planned, then there’s no need to worry about the ratio. It could vary from year to year, for good reasons.

Formulas are important, useful, even beautiful things, but they only do what they say they’ll do. You can somehow calculate a curriculum of 70 percent informational text and 30 percent literature, and that’s all it will be. It will not be, by virtue of this ratio, a good curriculum. It might coincide with some good curricula and conflict with others.

Back to music: in Beethoven’s “Waldstein” sonata, there’s a syncopated passage near the end of the third movement. It is twelve measures long and has an evanescent, ethereal quality. When I was a teenager, I would listen to the sonata every day and wait eagerly for that passage. Once it came, I wanted it to go on longer but knew that it couldn’t.

But its beauty cannot be attributed to its length alone, or to its syncopation, or to its key changes, or to its place in the movement and in the sonata; it is all of these things and many more.

You can listen to this passage as performed by Jacob Lateiner. (It starts at 8:52, but I recommend listening to the full second and third movements, which are included in this clip.) This recording and Vladimir Ashkenazy’s were my favorites for many years. Lateiner plays the first movement too fast, I’d say, but his rendition of the third movement has something like a third ear to it, a sense of something beyond the notes. I have started listening to more renditions of the sonata; Claudio Arrau’s has something remarkable as well.

New test for new teachers: Can she teach?

More than 10,000 teachers-in-training in 25 states will field-test a new way to evaluate classroom competence, writes Sarah Butrymowicz on the Hechinger Report. Eventually, states may use the Teacher Performance Assessment to decide who qualifies for a teaching license.

Currently, most states require would-be teachers to take pencil-and-paper exams — usually multiple choice — covering basic skills and knowledge of specific subjects, writes Butrymowicz. “Some states also include tests that focus on teaching strategies.”

The TPA follows candidates through a classroom lesson over the course of a few days, complete with detailed pre-lesson plans from teacher candidates, in-class video, and post-lesson reflection.

Aspiring teachers will be graded on a scale of 1 to 5 by national reviewers, who will look for evidence of student learning. Developers of the assessment recommend making the lowest passing score a 3, but states will be free to set their own passing mark.

Stanford is working with Pearson Education to develop the assessment. Ray Pecheone, co-executive director of the Stanford School Redesign Network, streamlined his model for evaluating already-certified teachers. He predicts 10 to 20 percent of would-be teachers will fail the field test, but that will fall to under 10 percent with time.

University of Massachusetts teacher candidates are refusing to send the classroom videos for evaluation, reports Michael Winerip in the New York Times.

The UMass students say that their professors and the classroom teachers who observe them for six months in real school settings can do a better job judging their skills than a corporation that has never seen them.

Lily Waites, 25, who is getting a master’s degree to teach biology, found that the process of reducing 270 minutes of recorded classroom teaching to 20 minutes of video was demeaning and frustrating, made worse because she had never edited video before. “I don’t think it showed in any way who I am as a teacher,” she said. “It felt so stilted.”

Pearson advertises that it is paying scorers $75 per assessment, with work “available seven days a week” for current or retired licensed teachers or administrators. This makes Amy Lanham wonder how thorough the grading will be. “I don’t think you can have a genuine reflective process from a calibrated scorer,” said Ms. Lanham, 28, who plans to teach English.

In traditional evaluations of student teachers, nearly everybody passes.

New York, Illinois, Minnesota, Ohio, Tennessee and Washington plan to adopt TPA in the next few years. Other states are waiting to see how it works.

Are students learning? Colleges don’t know

Many college students aren’t working very hard or learning very much, according to recent studies, writes New York Times columnist David Brooks, who suggests value-added assessments to show how much graduates have gained.

At some point, parents are going to decide that $160,000 is too high a price if all you get is an empty credential and a fancy car-window sticker.

. . . Colleges and universities have to be able to provide prospective parents with data that will give them some sense of how much their students learn.

In 2006, the Spellings Commission recommended using the Collegiate Learning Assessment. There are many other ideas out there, Brooks writes.

Some schools like Bowling Green and Portland State are doing portfolio assessments — which measure the quality of student papers and improvement over time. Some, like Worcester Polytechnic Institute and Southern Illinois University Edwardsville, use capstone assessment, creating a culminating project in which the students display their skills in a way that can be compared and measured.

Colleges could pick an assessment method that “suits their vision,” writes Brooks.

Then they could broadcast the results to prospective parents, saying, “We may not be prestigious or as expensive as X, but here students actually learn.”

. . . If you’ve got a student at or applying to college, ask the administrators these questions: “How much do students here learn? How do you know?”

With many different learning assessment schemes, it would be difficult to compare schools — or to add a do-they-learn metric to the all-powerful U.S. News college rankings.

Technology and testing

Technology can transform testing in ways that will dramatically improve teaching and learning, writes Bill Tucker of Education Sector.

Using multiple forms of media that allow for both visual and graphical representations, we can present complex, multi-step problems for students to solve, and we can collect detailed information about an individual student’s approach to problem solving. This information may allow educators to better comprehend how students arrive at their answers and learn what those pathways reveal about students’ grasp of underlying concepts, as well as to discover how they can alter their instruction to help move students forward. Most importantly, the new research projects have produced assessments that reflect what cognitive research tells us about how people learn, providing an opportunity to greatly strengthen the quality of instruction in the nation’s classrooms.

Technology-enabled assessment already is used in military training and medical education, Tucker writes.