We know what works, but don’t do it
“Direct Instruction is the Rodney Dangerfield of education,” writes Fordham’s Robert Pondiscio. Despite 50 years of research showing its effectiveness, DI “gets no respect.”
A new meta-analysis in the Review of Educational Research finds “strong positive results” for DI “regardless of school, setting, grade, student poverty status, race, and ethnicity, and across subjects and grades” in studies from 1966 to 2016.
So why are so few schools using it?
Many teachers see it as limiting their “autonomy and creativity,” writes Pondiscio.
Yes, DI lessons are scripted, specifying “the exact wording and the examples the teacher is to present for each exercise in the program, which ensures that the program will communicate one and only one possible interpretation of the skill being taught,” according to the National Institute for Direct Instruction (NIFDI), an advocacy organization based in Oregon. This, as much as anything, probably explains how DI can be both highly effective and the perpetual wallflower at the curriculum dance hall.
Yet DI is not designed to be “teacher-proof,” writes Pondiscio. “Proper implementation, especially for struggling students, involves not only delivering the curriculum well but constantly monitoring students and responding to their confusion in a timely and effective manner.”
Direct Instruction began at the University of Illinois in the mid-1960s as a preschool program for children from deeply impoverished homes. Those who swear by it frequently invoke the results of Project Follow Through.
The huge federal study compared the outcomes of 20 programs for low-income children in kindergarten through third grade. Most had no effect on children’s basic, cognitive, or emotional skills. “DI was the only intervention that had significantly positive impacts on all of the outcome measures,” observe the authors of the meta-analysis.