Don’t grade schools on grit

Don’t grade schools on grit, writes Penn psychologist Angela Duckworth, who practically invented grit, in the New York Times.

Character traits such as self-control affect students’ success, she writes. Schools can help students develop these traits.

But character measures aren’t accurate enough to be used for accountability.

Encouraged by ESSA, the new federal education law, nine California districts are experimenting with using measures of “soft skills” to evaluate school effectiveness.

Duckworth’s research has identified three clusters of character strengths.

One includes strengths like grit, self-control and optimism. They help you achieve your goals. The second includes social intelligence and gratitude; these strengths help you relate to, and help, other people. The third includes curiosity, open-mindedness and zest for learning, which enable independent thinking.

Educators and researchers are looking for ways to assess these traits, raise students’ awareness of their shortcomings and provide “strategies for what to do differently,” she writes. Turning that research into a high-stakes assessment would be a mistake.

Non-cognitive measures aren’t reliable and may never be good enough to use for accountability, writes Jay Greene. For a new study, his team tested students with different measures of “non-cognitive” skills. They wanted “to see if we get consistent results. We didn’t.”

We need “hard thinking on soft skills,” writes Brookings’ Russ Whitehurst. These skills are “far too important to suffer the fad-like fate” of other education reforms.

Teach, test, reteach, succeed

In “a desert of school failure,” a Watts elementary school is soaring, writes Jill Stewart for the LA Weekly. At 96th Street Elementary, teachers assess students’ progress and their own teaching, they tell Stewart.

Kailee Brown, 5, visited 96th Street Elementary in Watts with her mom, Desiree, who hoped to enroll her there even though they don't live in the area. Photo by Ted Soqui

Principal Luis Heckmuller, now in his eighth year at the school, encourages teachers at each grade level to work as a team. If one first-grade teacher sees a problem, the first-grade team works together to find a solution.

“This is a school of many veteran teachers who are here because they love it, who believe in our group approach of assessing the students regularly and assessing the success of their own teaching,” says Tracy Mack, the “intervention coordinator.”

“At 96th, we do a lot of assessment and data analysis of how the kids are doing,” says David Owens, a sixth-grade teacher. Sometimes that shows the teacher is the problem.

“You can say, ‘OK, I am looking at all these student scores for reading comprehension and literary analysis, and there is a point where all of my kids took a dip. So there is the point where I, the teacher, need to do better.'”

“Many of our teachers enroll their own children right here, in the middle of Watts,” says Sandra DeLucas, who teaches third grade.

From Mario Kart to the classroom

Video games like Mario Kart and World of Warcraft could be making their way into classrooms, writes Georgia Perry in The Atlantic.

Some top game designers are high school or college dropouts, writes Perry. They were bored in school. They’re experts at engaging players. 

GlassLab, a joint effort by Electronic Arts and the Educational Testing Service (the SAT people), pairs commercial and educational game designers to create games kids will want to play. Use Your Brainz is based on the popular Plants vs. Zombies. It adds a tracker to assess players’ problem-solving skills.

Games are great at assessment, says Richard Culatta, the director of the Department of Education’s Office of Educational Technology.

They assess their players constantly—that’s how they determine when a player is ready to move to the next level. Similarly, feedback is provided instantly in games; unlike having to wait a week for a grade on an assignment, students playing a game can, say, look at the top of the screen and see a bunch of cartoon hearts letting them know how many lives they have left.

Another major challenge facing American education, according to Culatta, involves “[holding] students—almost like a surfboard—right on the wave of their ability.” In other words, schools often struggle to give them tasks that they are capable of doing but for which they also need to work and stay vigilant.

Games do this expertly.
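To make the mechanism concrete, here is a minimal sketch in Python of an assess-every-attempt, advance-on-mastery loop of the kind Culatta describes. It is only an illustration: the class name, thresholds, and feedback messages are invented for this post and are not drawn from GlassLab, Use Your Brainz, or any actual game.

```python
from collections import deque

class AdaptiveLevelTracker:
    """Toy model of a game that assesses every attempt and advances the
    player only when recent performance shows mastery. All names and
    thresholds here are hypothetical."""

    def __init__(self, mastery_threshold=0.8, window=5):
        self.level = 1
        self.window = window                        # recent attempts to consider
        self.mastery_threshold = mastery_threshold  # success rate needed to level up
        self.recent = deque(maxlen=window)          # rolling record of outcomes

    def record_attempt(self, solved: bool) -> str:
        """Score one attempt and return instant feedback for the player."""
        self.recent.append(solved)
        if len(self.recent) == self.window:
            success_rate = sum(self.recent) / self.window
            if success_rate >= self.mastery_threshold:
                self.level += 1                     # ready for harder problems
                self.recent.clear()
                return f"Level up! Now on level {self.level}."
            if success_rate <= 0.2:
                self.level = max(1, self.level - 1) # ease off so the task stays doable
                self.recent.clear()
                return f"Let's practice level {self.level} a bit more."
        return "Nice try!" if solved else "Not quite; try again."

# Feedback arrives after every attempt, not at the end of a unit.
tracker = AdaptiveLevelTracker()
for outcome in [True, True, False, True, True, True, True, True, True, True]:
    print(tracker.record_attempt(outcome))
```

The point is the shape of the loop: the game scores constantly, responds instantly, and moves the difficulty with the player’s recent performance rather than waiting a week for a grade.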

The DOE, the National Science Foundation and foundations such as Gates and MacArthur are spending “upwards of $100 million to promote educational gaming,” writes Greg Toppo in The Game Believes In You: How Digital Play Can Make Our Kids Smarter.

Nearly three-quarters of K-8 teachers use digital games as a teaching tool, according to a 2014 study by the Joan Ganz Cooney Center.

How does a teacher figure out which education apps are worth trying? asks Kaycie Gillette-Mallard on EdCentral. Her post, “Putting the Education in ‘Educational’ Apps,” has advice on how to evaluate apps.

Teaching without grading

When Mark Barnes decided to stop grading students’ work, it changed everything, he writes on Education Week Teacher.  “I’ll never put a number, percentage, or letter on any activity or project you complete,” he told his seventh graders.

Students who had only experienced traditional grades throughout their school lives were asked to discuss learning, to reflect and, ultimately, to evaluate themselves. Many were shocked when, after we discussed an activity, I asked them to return to prior learning, to rethink what they had done, and to rework the activity for further discussion. An amazing and enriching ongoing conversation about learning was born.

I would review each student’s work, summarize and explain what I had observed, and ask questions. “Did you consider doing it this way?” I might inquire. “What would it look like if you tried this instead?” Soon, students had these informative conversations with each other, as they grew into enthusiastic, independent learners, who never feared a bad grade, because there were no grades.

The school required grades on the report card. At the end of the grading period, Barnes asked students to discuss their in-class activities and projects and suggest what grade they’d earned.

Here are Barnes’ 7 reasons teachers should stop grading their students, from his blog, Brilliant or Insane.

Starr Sackstein, a writing and journalism teacher, co-teaches a publications elective with two math teachers. They discuss letting students assess their own learning.

Knowing when to stop

In a paper delivered at the 2010 conference of the European Association of Conservatoires (AEC) in Ghent, Samuel Hope, executive director of the National Association of Schools of Music, spoke about the complexities of assessment in higher music education. His speech emphasizes the “centrality of content” in educational policy, particularly assessment policy.

Assessment at the higher levels must involve the language of the field; musicians in an orchestra, for instance, assess themselves continually as they play but have no need to document such assessment. (Samuel Hope is not disparaging documented assessment; he’s saying that in this particular context, at this level, it would burden the work instead of lifting it.)

Which aspects of musical composition and performance require highly advanced knowledge and judgment? Which are particularly resistant to standardized assessment? Hope draws attention to one in particular: knowing when to stop.

This means knowledge of when to stop doing something and begin doing something else and how to work effectively with relationships among stasis and change, and speed and time. Knowing when to stop is an aspect of mastering many relationships and balances in music. Mozart, Beethoven, and other great composers are consummate masters of knowing when to stop, when a chord or key or musical figure has been continued long enough, and when there is time for a variation or a change altogether. The performer of such music has thousands of choices about how to make the structural decisions of the composer come alive in performance. Great performers are also masters in this area. In many artistic dimensions, knowing when to stop is an essential determiner of the line between fine works of art and kitsch.

Knowing when to stop is important in all fields, but it isn’t a transferable skill. You may have a general sense of what is excessive (in art, music, or poetry), but you cannot make fine decisions about stopping, or assess the decisions of others, unless you know art, music, or poetry itself.

Hope points out that knowing when to stop is also essential to institutional review. You can establish frameworks for music instruction at the higher levels, but how detailed should they become? When should the frameworks stop and leave the remaining decisions to the individual institutions? It is essential that review and accreditation organizations such as AEC and NASM take on these questions, according to Hope, because they have the requisite knowledge and understanding.

One of the problems I see in K–12 education reform is precisely the lack of a sense of when to stop. Let’s take group work as an example. It’s one thing to say that certain kinds of group work, used in the right contexts, can foster certain kinds of learning. It’s another to require group work in every lesson (or even in most lessons). Similarly, it’s one thing to regard test scores as limited measures of intellectual attainment of a particular kind. It’s another to treat them like numerical oracles.

To know when to stop, one must consider the subject matter itself. For instance, the Common Core State Standards have specified a ratio of informational and literary text for each grade span. But the proper ratio depends on what the students are learning. The ratio should not precede the content; if the content is well planned, then there’s no need to worry about the ratio. It could vary from year to year, for good reasons.

Formulas are important, useful, even beautiful things, but they only do what they say they’ll do. You can somehow calculate a curriculum of 70 percent informational text and 30 percent literature, and that’s all it will be. It will not be, by virtue of this ratio, a good curriculum. It might coincide with some good curricula and conflict with others.
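A toy calculation makes the point. In the sketch below (Python, with invented titles and word counts, used purely for illustration), the 70/30 split is trivially easy to compute, and the number it produces says nothing about whether the texts are worth reading or how they fit together.

```python
# Hypothetical reading list: (title, kind, word count). The titles and
# counts are invented for illustration only.
reading_list = [
    ("The Story of Salt",         "informational", 2400),
    ("Volcanoes!",                "informational", 1800),
    ("Charlotte's Web (excerpt)", "literary",      1500),
    ("A Wrinkle in Time (ch. 1)", "literary",       300),
]

total = sum(words for _, _, words in reading_list)
informational = sum(words for _, kind, words in reading_list if kind == "informational")

print(f"informational share: {informational / total:.0%}")      # 70%
print(f"literary share: {(total - informational) / total:.0%}")  # 30%
```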

Back to music: in Beethoven’s “Waldstein” sonata, there’s a syncopated passage near the end of the third movement. It is twelve measures long and has an evanescent, ethereal quality. When I was a teenager, I would listen to the sonata every day and wait eagerly for that passage. Once it came, I wanted it to go on longer but knew that it couldn’t.

But its beauty cannot be attributed to its length alone, or to its syncopation, or to its key changes, or to its place in the movement and in the sonata; it is all of these things and many more.

You can listen to this passage as performed by Jacob Lateiner. (It starts at 8:52, but I recommend listening to the full second and third movements, which are included in this clip). This recording and Vladimir Ashkenazy’s were my favorites for many years. Lateiner plays the first movement too fast, I’d say, but his rendition of the third movement has something like a third ear to it, a sense of something beyond the notes. I have started listening to more renditions of the sonata; Claudio Arrau’s has something remarkable as well.

New test for new teachers: Can she teach?

More than 10,000 teachers-in-training in 25 states will field-test a new way to evaluate classroom competence, writes Sarah Butrymowicz on the Hechinger Report. Eventually, states may use the Teacher Performance Assessment to decide who qualifies for a teaching license.

Currently, most states require would-be teachers to take pencil-and-paper exams — usually multiple choice — covering basic skills and knowledge of specific subjects, writes Butrymowicz. “Some states also include tests that focus on teaching strategies.”

[The TPA follows] candidates through a classroom lesson over the course of a few days, complete with detailed pre-lesson plans from teacher candidates, in-class video, and post-lesson reflection.

Aspiring teachers will be graded on a scale of 1 to 5 by national reviewers, who will look for evidence of student learning. Developers of the assessment recommend making the lowest passing score a 3, but states will be free to set their own passing mark.

Stanford is working with Pearson Education to develop the assessment. Ray Pecheone, co-executive director of the Stanford School Redesign Network, streamlined his model for evaluating already-certified teachers. He predicts 10 to 20 percent of would-be teachers will fail the field test, but that will fall to under 10 percent with time.

University of Massachusetts teacher candidates are refusing to send the classroom videos for evaluation, reports Michael Winerip in the New York Times.

The UMass students say that their professors and the classroom teachers who observe them for six months in real school settings can do a better job judging their skills than a corporation that has never seen them.

Lily Waites, 25, who is getting a master’s degree to teach biology, found that the process of reducing 270 minutes of recorded classroom teaching to 20 minutes of video was demeaning and frustrating, made worse because she had never edited video before. “I don’t think it showed in any way who I am as a teacher,” she said. “It felt so stilted.”

Pearson advertises that it is paying scorers $75 per assessment, with work “available seven days a week” for current or retired licensed teachers or administrators. This makes Amy Lanham wonder how thorough the grading will be. “I don’t think you can have a genuine reflective process from a calibrated scorer,” said Ms. Lanham, 28, who plans to teach English.

In traditional evaluations of student teachers, nearly everybody passes.

New York, Illinois, Minnesota, Ohio, Tennessee and Washington plan to adopt TPA in the next few years. Other states are waiting to see how it works.

Are students learning? Colleges don’t know

Many college students aren’t working very hard or learning very much, according to recent studies, writes New York Times columnist David Brooks, who suggests value-added assessments to show how much graduates have gained.

At some point, parents are going to decide that $160,000 is too high a price if all you get is an empty credential and a fancy car-window sticker.

. . . Colleges and universities have to be able to provide prospective parents with data that will give them some sense of how much their students learn.

In 2006, the Spellings Commission recommended using the Collegiate Learning Assessment. There are many other ideas out there, Brooks writes.

Some schools like Bowling Green and Portland State are doing portfolio assessments — which measure the quality of student papers and improvement over time. Some, like Worcester Polytechnic Institute and Southern Illinois University Edwardsville, use capstone assessment, creating a culminating project in which the students display their skills in a way that can be compared and measured.

Colleges could pick an assessment method that “suits their vision,” writes Brooks.

Then they could broadcast the results to prospective parents, saying, “We may not be prestigious or as expensive as X, but here students actually learn.”

. . . If you’ve got a student at or applying to college, ask the administrators these questions: “How much do students here learn? How do you know?”

With many different learning assessment schemes, it would be difficult to compare schools — or to add a do-they-learn metric to the all-powerful U.S. News college rankings.

Technology and testing

Technology can transform testing in ways that will dramatically improve teaching and learning, writes Bill Tucker of Education Sector.

Using multiple forms of media that allow for both visual and graphical representations, we can present complex, multi-step problems for students to solve, and we can collect detailed information about an individual student’s approach to problem solving. This information may allow educators to better comprehend how students arrive at their answers and learn what those pathways reveal about students’ grasp of underlying concepts, as well as to discover how they can alter their instruction to help move students forward. Most importantly, the new research projects have produced assessments that reflect what cognitive research tells us about how people learn, providing an opportunity to greatly strengthen the quality of instruction in the nation’s classrooms.

Technology-enabled assessment already is used in military training and medical education, Tucker writes.