Measuring 'soft skills'

How do we tell if students are learning “soft skills,” such as “the ability to work with others, process information from disparate sources, communicate persuasively, or work reliably”? On Taking Note, John Merrow and Arnold Packer look at the challenge of creating valid, reliable assessments.

With a Kellogg Foundation grant, they’re asking mentors at 28 community-based organizations to assess middle and high school students on “responsibility, work ethic, collaboration, communication, problem-solving, critical thinking, and creativity.”

We ask the mentors to write a two-sentence description of the context in which each of the traits was demonstrated. Was the teenager responsible about picking up trash in the park or helping out on the surgical ward? Communicating with a friend about the homework assignment requires a different skill level than communicating about obesity to a large community audience. There is no reasonable rubric that will cover this amount of variation.

Finally, mentors grade the students’ performance on a scale of one (“cannot do it”) to five (“does it well enough to teach others”).

This produces a Verified Resume of performance that could be used for job applications and college admissions.

The pilot project will survey employers to see if the mentors’ evaluations match new hires’ performance.

We got into this because we believe performance traits like responsibility, tolerance for diversity, ability to communicate, and work ethic matter. Because they matter, we must also figure out how to measure them reliably.

Recommendations are supposed to serve this purpose, but it’s difficult to judge whether recommenders are setting the bar high or low. And many people are afraid to be honest in recommendations.