Preschool won’t close achievement gap

President Obama’s $75 billion preschool proposal won’t close the achievement gap, predict Brookings scholars. Sound research doesn’t show preschool makes much difference, write Russ Whitehurst and David J. Armor.

The most credible recent study of pre-K outcomes, the federal Head Start Impact Study, found only small differences at the end of the Head Start year between the performance of children randomly assigned to Head Start vs. the control group, e.g., about a month’s superiority in vocabulary for the Head Start group. There were virtually no differences between Head Start and the control group once the children were in elementary school.

Nationwide, the number of children enrolled in state pre-K programs is only weakly associated with later academic performance, they write. Fourth-grade reading and math achievement “would increase by no more than about a 10th of a standard deviation if state pre-K enrollments increased dramatically.”
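To put that effect size in rough perspective, here is a back-of-the-envelope illustration; the 35-point standard deviation is an assumed number chosen purely for illustration, not a figure from the Brookings analysis.

```latex
% An effect size expresses the gap between two groups in standard-deviation units:
\[ d = \frac{\bar{x}_{\text{pre-K}} - \bar{x}_{\text{comparison}}}{s} \]
% Purely for illustration, if a fourth-grade test had a standard deviation of about
% 35 scale points (an assumed value), an effect of d = 0.1 would work out to
\[ 0.1 \times 35 = 3.5 \ \text{scale points}. \]
```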

Advocates cite the Perry Preschool experiment from a half-century ago, which is “so different in many important ways from current state pre-K programs that findings . . . can’t be confidently generalized to the present day,” write Whitehurst and Armor.

Pre-K advocates also rely heavily on studies that don’t use random assignment of children to pre-K or a control group. “Age-cutoff regression discontinuity” studies, which show large impacts for pre-K, are “problematic,” the Brookings researchers conclude.

“There are reasons to doubt that we yet know how to design and deliver a government funded pre-K program that produces sufficiently large benefits to justify prioritizing pre-K over other investments in education.”

In Shanghai, all teachers have mentors

In high-scoring Shanghai, all teachers have mentors — not just novices — and teachers collaborate in lesson and research groups, writes Marc Tucker, drawing on an interview with Ben Jensen of Australia’s Grattan Institute, in Ed Week’s Top Performers blog. (A longer version is here.)

Every teacher has a mentor, and new teachers have two: one for subject matter and one for teaching, says Jensen. The mentors observe and provide feedback.

Only 0.2 percent of teachers reach the “master teacher” level; at that point they no longer have mentors, but they still work together and have their work evaluated and appraised.

In Shanghai, you will struggle to get promoted if you receive poor feedback from the people you mentored. That means the people who get promoted are collaborative and committed to helping teachers, and they have a proven track record in this area.

In most schools in Shanghai, teachers form lesson groups that discuss students’ progress and research groups that explore new strategies, says Jensen.

In Shanghai, you don’t get promoted as a teacher unless you are also a researcher. You have to have published articles, not in academic journals but in professional journals or even school journals. In fact, one of the first stages in a promotion evaluation is to have one of your articles peer reviewed. Every teacher will work in a research group with about half a dozen other teachers, often of the same subject area but not always. If there is a young teacher, that teacher’s mentor will often be in that group as well. They will meet for about 2 hours every 2 weeks.

At the start of the year, the group chooses a topic—a new curriculum or pedagogical technique or determining how to help out a particular student—and the principal will approve that topic. The first third of the year is spent on a literature review. The second third of the year is spent trying out strategies in the classroom that the group identified as promising during the literature review. As they try these strategies in the classroom, other members of the research group will observe.

Senior teachers with strong research experience serve as leaders.

About 30 percent of Shanghai teachers’ salary is performance pay, reports the New York Times. “Teacher salaries are modest, about $750 a month before bonuses and allowances — far less than what accountants, lawyers or other professionals earn.”

Teachers vs. bad research, evidence-free fads

Tom Bennett’s new book, Teacher Proof: Why Research in Education Doesn’t Always Mean What it Claims, and What You Can Do about It, is the work of “one pissed off teacher,” writes cognitive scientist Daniel Willingham.

Bennett, who’s taught in Britain for 10 years, feels cheated of the time he’s spent in training sessions being urged to adopt some “evidence-free theory,” and cheated of respect as “researchers with no classroom experience presume to tell him his job, and blame him (or his students) if their magic beans don’t grow a beanstalk.” Researchers are actively getting in his way, to the extent “their cockamamie ideas infect districts and schools,” Bennett believes.

Social sciences aspire to the precision of the “hard” sciences but are just “walking around in mother’s heels and pearls,” he charges.

His advice: “Researchers need to take a good long look in the mirror; media outlets need to be less gullible, and teachers should appear to comply with the district’s latest lunacy, but once the door closes stick to the basics.”

Willingham writes:

This section offers a merciless, overdue, and often funny skewering of speculative ideas in education: multiple intelligences, Brain Gym, group work, emotional intelligence, 21st century skills, technology in education, learning styles, learning through games. Bennett has an unerring eye for the two key problems in these fads: in some cases, the proposed “solutions” are pure theory, sprouting from bad (or absent) science (e.g., learning styles, Brain Gym); others are perfectly sensible ideas transmogrified into terrible practice when people become too dogmatic about their application (group learning, technology).

In addition, schools of education should raise their standards for education research, writes Willingham.

Another new book, The Anti-education Era: Creating Smarter Students through Digital Learning, by Arizona State Professor James Paul Gee, is disappointing, writes Willingham. “There is very little solid advice here about how to change education.”

NAPCS: Charters boost achievement

Charter school students outperform similar students in district-run public schools, according to a National Alliance for Public Charter Schools analysis of research in the last three years.

Three national studies and ten studies from major regions across the country since 2010 found positive academic performance results for students in public charter schools compared to their traditional public school peers, suggesting a strong upward trend . . .

Since 2010, only one study, conducted in Utah, has found neutral or negative results for charter schools, NAPCS reports.

CREDO: Indiana charter students do well

Students at Indiana charter schools outperformed similar students at traditional public schools in math and reading, concludes a new report from Stanford’s CREDO. Indianapolis charter students did especially well, reports Ed Week.

The study tracked 15,297 charter school students at 64 schools from grades 3-8. On average, students in charter schools ended the year having made the equivalent of 1.5 more months of learning gains in both reading and math than their traditional public school counterparts did. Students in charter schools in Indianapolis ended the year ahead of their traditional public school counterparts by two months in reading and three months in math.

Charter students and the control group were matched by demographic and performance data (gender, race/ethnicity, special education status, English language proficiency, free-or-reduced lunch participation, grade level, and prior test scores on state achievement tests).
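CREDO’s matching procedure isn’t spelled out here, but the general idea of building a matched comparison group can be sketched roughly as follows. This is a simplified, hypothetical illustration — the DataFrames and column names are made up — not CREDO’s actual algorithm.

```python
import pandas as pd

# Illustrative sketch only (assumed data layout, not CREDO's method):
# `charter` and `tps` each hold one row per student, with demographic columns
# and a prior-year test score.
MATCH_COLS = ["gender", "race", "sped", "ell", "frl", "grade"]

def build_comparison_group(charter: pd.DataFrame, tps: pd.DataFrame) -> pd.DataFrame:
    """For each charter student, pick a traditional-public-school student with the
    same demographic profile and the closest prior test score."""
    # Index TPS students by their demographic profile for quick lookup.
    tps_groups = {key: grp for key, grp in tps.groupby(MATCH_COLS)}
    matches = []
    for _, student in charter.iterrows():
        key = tuple(student[c] for c in MATCH_COLS)
        pool = tps_groups.get(key)
        if pool is None:
            continue  # no exact demographic match; a real study handles this more carefully
        # Choose the TPS student whose prior score is closest to the charter student's.
        closest = (pool["prior_score"] - student["prior_score"]).abs().idxmin()
        matches.append(pool.loc[closest])
    return pd.DataFrame(matches)
```

A real analysis would handle unmatched students, ties, and matching with or without replacement far more carefully; the sketch only shows the basic exact-match-plus-nearest-prior-score idea.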

In Indiana, 58 percent of charter students are black, compared to 11 percent of the state’s students. Eleven percent of charter students are in special education compared to 15 percent in traditional public schools.

In a wrap-up on education research in 2012, Matthew Di Carlo notes that CREDO’s research on charter gains in Indiana and New Jersey shows that most of the progress comes in the big cities, Indianapolis and Newark. By contrast, rural charter students tend to underperform similar students.

One contentious variation on this question is whether charter schools “cream” higher-performing students, and/or “push out” lower-performing students, in order to boost their results. Yet another Mathematica supplement to their 2010 report examining around 20 KIPP middle schools was released, addressing criticisms that KIPP admits students with comparatively high achievement levels, and that the students who leave are lower-performing than those who stay. This report found little evidence to support either claim (also take a look at our post on attrition and charters).

Another analysis, presented in a conference paper, “found that low-performing students in a large anonymous district did not exit charters at a discernibly higher rate than their counterparts in regular public schools,” Di Carlo adds.

On the flip side of the entry/exit equation, this working paper found that students who won charter school lotteries (but had not yet attended the charter) saw immediate “benefits” in the form of reduced truancy rates, an interesting demonstration of the importance of student motivation.

Di Carlo has more on the research this year on charter management organizations, merit pay and teacher evaluations using value-added and growth measures.

Neuro-garbage in education

Pop neuroscience — silly and scientifically inaccurate — has spurred a backlash, writes Alissa Quart in a New York Times op-ed. Among the critics are Neurocritic, Neurobonkers, Neuroskeptic, Mind Hacks and Dorothy Bishop’s Blog.

There’s a lot of neuro-garbage in education, writes Daniel Willingham, a cognitive scientist.

Sometimes it’s the use of accurate but ultimately pointless neuro-talk that’s mere window dressing for something that teachers already know (e.g., explaining the neural consequences of exercise to persuade teachers that recess is a good idea for third-graders).

Other times the neuroscience is simply inaccurate (exaggerations regarding the differences between the left and right hemispheres, for example).

Even when the neuroscience is solid, “we can’t take lab findings and pop them right into the classroom,” Willingham writes.

. . . the outcomes we care about are behavioral: reading, analyzing, calculating, remembering.

. . . Likewise, most of the things that we can change are behavioral. We’re not going to plant electrodes in the child’s brain to get her to learn–we’re going to change her environment and encourage certain behaviors. . . . Neuroscience is out of the loop.

It’s possible to use neuroscience to improve education, writes Willingham. But it isn’t easy.

Teachers who know the most about neuroscience believe the most things that aren’t true, writes Cedar Riener, a psychology professor, in Cedar’s Digest, citing this study. These teachers’ belief in myths is rooted in their values, he writes. People want to believe low achievers just haven’t found the right way to tap their “unlimited reservoir of intelligence” properly. “To dismiss the learning styles myth, we have to let go of equating cognitive ability (or intelligence) with some sort of larger social value.”

Teachers: Technology cuts attention spans

Diverted and distracted by technology, students can’t focus or persevere, say teachers in two new surveys.

In a Pew Internet Project survey, nearly 90 percent of teachers said digital technologies are creating “an easily distracted generation with short attention spans.” Although teachers said the Internet helps students develop better research skills, 64 percent felt digital technologies “do more to distract students than to help them academically.”

Seventy-three percent of teachers said entertainment media has cut students’ attention spans, according to Common Sense Media, a San Francisco nonprofit. A majority said it hurt students’ writing and speaking skills.

“Distraction” could be seen as a judgment call, Pew’s Kristen Purcell told the New York Times. Some teachers think education “must adjust to better accommodate the way students learn.”

But teachers worry about that too, the Times reports.

“I’m an entertainer. I have to do a song and dance to capture their attention,” said Hope Molina-Porter, 37, an English teacher at Troy High School in Fullerton, Calif., who has taught for 14 years. She teaches accelerated students, but has noted a marked decline in the depth and analysis of their written work.

She said she did not want to shrink from the challenge of engaging them, nor did other teachers interviewed, but she also worried that technology was causing a deeper shift in how students learned. She also wondered if teachers were adding to the problem by adjusting their lessons to accommodate shorter attention spans.

“Are we contributing to this?” Ms. Molina-Porter said. “What’s going to happen when they don’t have constant entertainment?”

Both younger and older teachers worried about technology’s impact on their students’ learning.

It’s not likely students have lost the ability to focus, responds cognitive scientist Daniel Willingham. But flashy technology with immediate rewards may have eroded students’ willingness to focus on mundane tasks.

Kids learn early that very little effort can bring a big payoff, he writes.

When a toddler is given a toy that puts on a dazzling display of light and sound when a button is pushed, we might be teaching him this lesson.

In contrast, the toddler who gets a set of blocks has to put a heck of a lot more effort (and sustained attention) into getting the toy to do something interesting–build a tower, for example, that she can send crashing down.

“It’s hard for me to believe that something as fundamental to cognition as the ability to pay attention can be moved around a whole lot,” Willingham writes. “It’s much easier for me to accept that one’s beliefs–beliefs about what is worthy of my attention, beliefs about how much effort I should dispense to tasks–can be moved around, because beliefs are a product of experience.”

The best bang-for-the-buck colleges

The University of California at San Diego tops Washington Monthly‘s list of the top colleges for social mobility (enrolling and graduating low-income students at an affordable price), research and service. Next in line are Texas A&M, Stanford, University of North Carolina and Berkeley.

Only one of U.S. News‘ top ten schools, Stanford, makes the Washington Monthly’s top ten. Yale fails even to crack the top 40. New York University, which has floated to national prominence on a sea of student debt, is 77th. NYU does particularly poorly on the new “bang for the buck” measure.

Thirteen of the top 20 Washington Monthly universities are public, while all the top-ranked U.S. News colleges are “private institutions that spend more, charge more, and cater almost exclusively to the rich and upper-upper middle class.”

Also in the Washington Monthly, Stephen Burd calls for Getting Rid of the College Loan Repo Man, who fails to distinguish between deadbeats and people who just can’t pay.

When to trust (or not) the experts

Dan Willingham’s new book, When Can You Trust the Experts? How to Tell Good Science from Bad in Education, is out. (Download chapter 1 here.)

Every new program claims to be “research-based,” writes Willingham. Teachers and administrators don’t have the time to evaluate everything and there are no credible summaries to help.

The first half of the book focuses on what cues we use to tell us “this is probably true,” and how they can be misleading.

. . . (It also describes) how we can know when science might help with a particular problem and when it can’t.

The second half of the book gives Willingham’s four-step shortcut for evaluating research claims: strip it (of verbiage), trace it, analyze it, and ask “should I do it?”


$1.1 million to test ‘galvanic’ bracelets

The Gates Foundation is spending $1.1 million to test “galvanic skin response” bracelets that measure students’ engagement in lessons, writes Valerie Strauss on Answer Sheet. Clemson and the National Center on Time and Learning will research the idea’s feasibility.

Strauss sees it as a “nutty” waste of money that could be spent on books, teachers and librarians.

Is it foolish? Let’s say research shows that students learn more in the X state than when their bracelets record Z’s. Teachers could analyze the high-X and high-Z portions of their lessons to figure out how to reach students more effectively. Of course, the idea could be a dud. Maybe too many students X up or Z out for reasons that have nothing to do with learning. But we don’t know that yet.