That basic conclusion generally holds true from preschool (because, as early-childhood expert Alison Gopnik put it, “direct instruction really can limit young children’s learning”)1 straight through to college (where a review of 225 studies showed that active learning results in “strong increases in student performance” when compared to traditional lecture-based teaching).2 It holds true across diverse populations, including with low-income and minority students,3 and, at least in some circumstances, with low-achieving students.4
It holds true not only in STEM subjects, which account for a disproportionate share of the relevant research,5 but also in reading instruction, where, as one group of investigators reported, “The more a teacher was coded as telling children information, the less [they] grew in reading achievement.”6
It holds true when judged by how long students retain knowledge,7 and the effect is even clearer with more ambitious and important educational goals. The more emphasis one places on long-term outcomes, on deep understanding, on the ability to transfer ideas to new situations, or on fostering and maintaining students’ interest in learning, the more direct instruction (DI) comes up short.8
Finally, it appears to hold true not only because active learning and inquiry are beneficial (which they typically are)9 but also because explicit telling can be actively harmful10 — sometimes in ways that extend beyond the academic realm.11
I’ve cited several metaanalyses and other research reviews in the extensive endnotes to this essay precisely so that skeptics can’t claim that I’ve cherry-picked unrepresentative studies to make the case in favor of what is sometimes called progressive education. In its totality, the evidence is truly impressive.
Unfortunately, that hasn’t prevented traditionalists from defiantly doubling down.12 If you look carefully at their briefs for the superiority of DI, however, you’ll notice two things. First, any benefits they’re able to show are almost entirely short-term and/or superficial in nature. As one group of researchers put it, “Studies favoring direct instruction tend to be small-scale, use limited measures and time horizons, [and rely on] ‘skill acquisition’ or simple concepts as the learning goals…”13 (Actually, student-centered learning often produces better results even in studies with those limitations; when more meaningful outcomes are assessed, the case for DI collapses almost entirely.)
Here’s a striking illustration: DI’s defenders triumphantly cited a 2004 experiment in which science students who received “an extreme type of direct instruction in which the goals, the materials, the examples, the explanations, and the pace of instruction [were] all teacher controlled” did better on a test than their classmates who designed their own procedures. But three years later, another pair of researchers returned to the same question in the same discipline with students of the same age. This time, though, they measured the effects after six months instead of only a week and they used a more sophisticated assessment of learning. It turned out that any advantage produced by DI quickly evaporated. And on one of the outcome measures, exploration ultimately proved to be not only more impressive than DI but also more impressive than a combination of the two — further reason to believe that DI not only is less effective but can actually be counterproductive.14
The second problem with evidence said to favor DI reflects the way its proponents tend to structure what happens in the two teaching conditions they’re comparing. On the one hand, they’re apt to set up inquiry learning for failure by using a caricatured version of it, a kind of pure discovery rarely found in real-world classrooms, with teachers providing no guidance at all so that students are left to their own devices. On the other hand, the version of DI they test sometimes sneaks in a fair amount of active student involvement — to the point that the two conditions may just amount to “different forms of constructivist instruction.”15
Fairer comparisons convincingly support the case against direct instruction. To be clear, I don’t believe the evidence argues against all teacher talk and guidance (in favor of the pure discovery model that’s employed as a straw man to make traditionalism seem more appealing). But that doesn’t justify a “split the difference; just use both” conclusion: A mostly student-centered approach really does make more sense most of the time.16
*
Now put yourself in the place of one of those hard-liners who want teachers to remain the center of gravity in the classroom, disgorging information. How might you circle the wagons despite all the research that undercuts your position? Even more audaciously, how could you try to get away with saying DI is “evidence-based” or supported by the “science of learning” — a favorite rhetorical gambit of traditionalists?17
To the rescue comes an idea called cognitive load theory (CLT). This concept, primarily associated with an Australian educational psychologist named John Sweller, basically holds that trying to figure things out for yourself uses up so much working memory that too little is left to move whatever has been learned into long-term memory. It’s therefore more efficient for the teacher just to show students problems that have already been worked out correctly or provide them with “process sheets” that list step-by-step instructions for producing the right answer. (Imagine Jack Nicholson as the cognitive load theorist, hollering at students, “Inquiry? Your brain can’t handle inquiry!”)
CLT has been rattling around certain academic corridors and conservative websites for a while now, and some educators who have heard about it apparently assume that the idea is widely accepted by experts and that it offers a persuasive rationale for explicit instruction. Well, no and no. In fact, its persuasiveness is inversely related to how carefully you’ve looked into it.
1. Methodological flaws in its research base are so serious as to raise doubts about the concept itself.
Of course our brains can’t do everything at once; the fact that there are limits to how much information can be retained in short-term memory confers on CLT an intuitive plausibility. The problem is that there’s no objective way to measure cognitive capacity and test the theory. Most of the relevant experiments therefore just infer the degree of cognitive load from results on tests of knowledge following a lesson. If students don’t remember much, the load is simply assumed to have been high. In particular, “the critical level indicating overload is unknown,” so proponents have to talk about load only in relative terms. Another expert reports that “there are no standard, reliable, and valid measures for the main constructs of the theory….Without a measure of cognitive capacity, the predictions of CLT cannot be tested.”18
The idea’s advocates keep tweaking the theory to account for anything they observe, whether or not that result confirms their basic hypothesis. Sometimes higher load turns out to improve learning. No problem — they just invent a new category (“germane” as opposed to “extraneous” load) and *poof* the theory is rescued. CLT “is constructed in such a way that it is hard or even impossible to falsify.”19 No wonder, then, that “dissatisfaction about the explanatory and predictive value of CLT continues to grow among the scientific community”20 — even as advocates of direct instruction continue to ignore these disqualifying defects. As cognitive scientist Guy Claxton put it, “CLT is just a fad; it is, as someone said, like Brain Gym for Traditionalists. The sooner the fad passes the better.”21
2. “CLT offers no recognition of the learner as an autonomous agent.”22
For those who talk about cognitive load, learning happens in a vacuum rather than being an activity engaged in by actual human beings. Focusing just on the impact of memory capacity in solving an academic problem eclipses such questions as whether students have any desire to solve it. (Many years ago, a group of researchers tried to sort out the factors that helped children to remember what they’d been reading. They found that how interested the students were in the passage was thirty times more important than how “readable” the passage was.)23 Good luck finding any discussion of students’ motivation, emotions, beliefs, or agency — any acknowledgment that they “decide whether they do or do not engage and [the cognitive] resources they will invest”24 — in those airless admonitions to minimize cognitive load. Leaving out students’ perspective on learning (or the social and cultural contexts in which that learning takes place) undermines any attempt to explain what happens in real classrooms or to recommend a particular kind of instruction.
3. Even its account of discrete acts of learning is greatly oversimplified.
Suppose we were willing to pretend that the experience and distinctive features of individual learners didn’t matter. Even so, CLT flattens learning and problem-solving into a mechanical process of taking in information, holding it briefly in short-term memory, and then storing it permanently in long-term memory, with the last step described, remarkably, as “the ultimate justification for instruction.”25
The theory “is based on a vastly oversimplified and antiquated notion of ‘working memory’ that was current in psychology in the 1970s.”26 First, cognitive load probably isn’t a single phenomenon that varies only by degree; rather, different tasks may result in different types of load. Second, some learning apparently doesn’t require any extra working memory. In many cases, we learn continuously, from multiple sources, and without even being aware of it. (When we do learn something by rote, it may not remain in long-term memory, particularly if we don’t use it.) Because CLT’s account is, at best, only a partial truth and therefore misleading, it fails to justify the conclusion that learners can’t handle anything other than formal, explicit instruction.27
4. Its research base is limited to contrived tasks.
One reason for CLT’s oversimplification is that the studies cited to support it use “problems for which optimal pathways to solutions already exist and are well-established” — typically certain kinds of math exercises. These problems may be difficult if you haven’t encountered them before, but they’re almost always clear-cut, excluding “the possibility of multiple solution paths [let alone] multiple solutions.” When you’re taught the procedure, you come away with only a formula for solving similar problems.28 Thus, even if CLT did accurately describe how learning happens in these highly structured domains — something that, again, many experts dispute — it doesn’t apply to other academic fields or to the kind of learning required for real-life problems that lack a single solution specifiable in advance.
5. Reducing cognitive load isn’t always desirable.
CLT warns us off of having students construct meaning and discover solutions because that’s less efficient than just showing them the preferred way to get the answer. But what kind of teacher cares only about efficiency? “Conditions that maximize performance in the short term may not necessarily be the ones that maximize learning in the long term.”29 That’s because “learning can be impeded…when too much help is provided.”30 To put it the other way around, “Increasing the cognitive load under certain circumstances can improve learning.”31
Now combine this point with the preceding one. Cognitive load may be higher when we’d like students to engage in meaningful learning rather than just to produce the right answer efficiently. That’s true in general, but it’s particularly important when we want them to be proficient with problems that are open-ended and perhaps unstructured, those that may not have a single right answer. As yet another group of researchers observed, “Premature efficiency is the enemy of subsequent new knowledge construction.”32 And the process of grappling with such problems can be intellectually valuable in its own right. If the goal is for students to make meaning, to think critically and creatively and flexibly, then direct instruction is usually unwise and cognitive load shouldn’t be our chief concern.33
6. CLT assumes that learning is a solitary act.
In the best classrooms, students spend much of their time in pairs and small groups, learning with and from one another. But the case for direct instruction based on cognitive load “comes from studies on individual learning settings instead of group-based learning settings such as [problem-based learning], where different cognitive load conditions apply.”34
7. Even if CLT were persuasive, it wouldn’t necessitate direct instruction.
In real life, there are lots of ways to avoid cognitive overload, such as doing one segment of a task at a time or writing things down so everything doesn’t have to be recalled at once. CLT research excludes such strategies, “thus creating a situation that is artificially time-critical” in order to make its theory seem correct. These studies are contrived in other ways, too, such as giving learners no choice about what they’re doing, which naturally affects their motivation, and providing very little time to study the material on which they’ll be tested.35
Research methodology aside, the key takeaway here is that teachers play an active role in progressive classrooms, too. “Most proponents of [inquiry learning] are in favor of structured guidance,” one group of researchers notes, but that guidance looks nothing like direct instruction; it “affords choice, hands-on and minds-on experiences, and rich student collaboration.”36 One metaanalysis found that DI is inferior to even the sort of inquiry in which relatively little guidance is provided. More guidance can be even better — which is what that same review found37 — but providing such guidance is entirely consistent with student-centered, constructive, collaborative learning, contrary to the false dichotomy set up by DI proponents. Thus, the benefits of such learning need not be withheld from students even if we worry about their cognitive loads.
In short, progressive education isn’t just more engaging than what might be called regressive education; according to decades of research, it’s also more effective — particularly with regard to the kinds of learning that matter most. And that remains true even after taking our cognitive architecture into account.
1. The quotation is from Alison Gopnik, “Why Preschool Shouldn’t Be Like School,” Slate, March 16, 2011. Among the studies she cites is Elizabeth Bonawitz et al., “The Double-Edged Sword of Pedagogy: Instruction Limits Spontaneous Exploration and Discovery,” Cognition 120 (2011): 322-30. Her summary is consistent with my own review of multiple studies that contrast explicit, teacher-centered instruction of young children with child-centered or constructivist approaches: See this lengthy excerpt from Alfie Kohn, The Schools Our Children Deserve (Houghton Mifflin, 1999). Separately, a 2022 review of research conducted with children from toddlers to eight-year-olds found that guided play was more effective than direct instruction: Kayleigh Skene et al., “Can Guidance During Play Enhance Children’s Learning and Development in Educational Contexts?” Child Development 93 (2022): 1162-80.
2. Scott Freeman et al., “Active Learning Increases Student Performance in Science, Engineering, and Mathematics,” PNAS, May 12, 2014. The result was so clear, in fact, that the researchers remarked that if this had been a medical study, it would have been halted because of ethical concerns about continuing with the inferior approach — i.e., lecturing. Also see discussions of research on higher-education pedagogy in Maryellen Weimer, Learner-Centered Teaching, 2nd ed. (Jossey-Bass, 2013) and in my essay about the ineffectiveness of lecturing (and efforts to move beyond it): “Don’t Lecture Me!”, blog post, June 24, 2017.
3. “There is growing evidence from large-scale experimental and quasi-experimental studies demonstrating that inquiry-based instruction results in significant learning gains in comparison to traditional instruction and that disadvantaged students benefit most from inquiry-based instructional approaches” (Cindy E. Hmelo-Silver et al., “Scaffolding and Achievement in Problem-Based and Inquiry Learning,” Educational Psychologist 42 [2007], p. 104). Two studies in 2021, one with third graders and one with high schoolers, confirmed the benefits of project-based, student-centered approaches with diverse students, notably low-income kids of color. See, respectively, Joseph Krajcik et al., “Assessing the Effect of Project-Based Learning on Science Learning in Elementary Schools,” Technical Report, Michigan State University, January 11, 2021, which is summarized here; and Anna Rosefsky Saavedra et al., “Knowledge in Action Efficacy Study Over Two Years,” Center for Economic and Social Research, University of Southern California, February 22, 2021. From the latter: “The traditional ‘transmission’ model of instruction…may be suboptimal for supporting students’ ability to think and communicate in sophisticated ways, demonstrate creativity…and transfer their skills, knowledge, and attitudes to new contexts.”
4. There is some evidence that more proficient students are better able to take advantage of inquiry learning — which, in light of all the other data attesting to its benefits, is an argument for providing more scaffolding for struggling students, not for subjecting them to direct instruction instead. But one interesting study of college math education (which comprised more than one hundred sections of forty courses at four universities) found that, while all students in inquiry-oriented sections “succeeded at least as well as their peers in later courses,” it was the students with poorer academic records who benefited the most. Their performance in subsequent courses reflected “sizable and persistent” improvement “relative both to their own previous performance and to [traditionally taught] peers” (Marina Kogan and Sandra L. Laursen, “Assessing Long-Term Effects of Inquiry-Based Learning,” Innovative Higher Education 39 [2014], pp. 195, 194, 196).
5. In a comprehensive review of the evidence in STEM fields that was published in 2023, an international group of 13 researchers argued for some role for direct instruction, but in the end they emphasized, “Overall the literature persuasively shows the benefits of inquiry-based instruction over direct instruction for acquiring conceptual knowledge” (Ton de Jong et al., “Let’s Talk Evidence – The Case for Combining Inquiry-Based and Direct Instruction,” Educational Research Review 39 [2023], pp. 9-10). The huge Freeman et al. metaanalysis that I mentioned in note 2 demonstrates the superiority of the inquiry approach at the university level, but it’s just as true for younger students according to many, many studies. In elementary school, for example, see E. M. Granger et al., “The Efficacy of Student-Centered Instruction in Supporting Science Learning,” Science 338 (October 5, 2012): 105-108. Also, “students tended to score higher on the 4th grade and 8th grade NAEP science tests when they had experienced science instruction centered on projects in which they took a high degree of initiative” (Harold Wenglinsky, “Facts or Critical Thinking Skills? What NAEP Results Say,” Educational Leadership, September 2004, p. 33). And it was true both in middle school and high school science when students were evaluated on their conceptual understanding; no difference showed up on multiple-choice tests: Marcia C. Linn et al., “Teaching and Assessing Knowledge Integration in Science,” Science 313 (August 25, 2006): 1049-50. Incidentally, one early review of 57 studies of elementary science programs, which found benefits for an inquiry-based approach across the board — and particularly for disadvantaged students — offered an important caution: The advantages offered by such student-centered teaching “may be lost” if students are subsequently taught “in classrooms where more traditional methods prevail” (Ted Bredderman, “Effects of Activity-Based Elementary Science on Student Outcomes,” Review of Educational Research 53 [1983], p. 513).
6. Barbara M. Taylor et al., “Looking Inside Classrooms: Reflecting on the ‘How’ as Well as the ‘What’ in Effective Reading Instruction,” The Reading Teacher, November 2002, p. 278. Also see Karen Eppley and Curt Dudley-Marling, “Does Direct Instruction Work?” Journal of Curriculum and Pedagogy 16 (2019); and Randall J. Ryder et al., “Longitudinal Study of Direct Instruction Effects from First Through Third Grades,” Journal of Educational Research 99 (2006): 179-91. The latter two studies investigated a version of direct instruction for teaching reading that is known as Direct Instruction (with capital letters) or Reading Mastery.
7. One review of research relevant to this point explained the finding as follows: “Instructional strategies that actively involve students in learning may result in qualitatively different memories that are more resistant to forgetting than memories acquired through more traditional instructional methods” (George B. Semb and John A. Ellis, “Knowledge Taught in School: What Is Remembered,” Review of Educational Research 64 [1994], p. 279). Similarly, a study of math instruction in a “constructivist learning environment showed better retention of almost all the concepts than…in the traditional [lecture-based] class” (Serkan Narli, “Is Constructivist Learning Environment Really Effective on Learning and Long-Term Knowledge Retention in Mathematics?” Educational Research and Reviews 6 [2011]: 36-49). And Australian middle-school students in more progressive classrooms (featuring active learning and group discussions) remembered significantly more geography content than those in conventional classrooms (Andrew A. Mackenzie and Richard T. White, “Fieldwork in Geography and Long-Term Memory Structures,” American Educational Research Journal 19 [1982]: 623-32).
8. This is clear from many of the studies cited throughout this section. The phenomenon is explained lucidly by Daniel L. Schwartz et al., “Constructivism in an Age of Non-Constructivist Assessments,” in Sigmund Tobias and Thomas M. Duffy, eds., Constructivist Instruction: Success or Failure? (Routledge, 2009). They cite research showing that student-centered classrooms help students to construct meaning while also helping them to learn standard material “without an appreciable cost in overall instructional time.” Simply telling students the standard procedure “blocked student learning” — a result that comes into focus only if the assessment is rich enough to capture something more than short-term retention of facts. One legacy of direct instruction is that students “learned the solution procedure but they did not learn about the structure of situations for which that procedure might be useful” (pp. 55, 59).
9. Indeed, this brief summary of the research only scratches the surface, not just because many other studies have replicated the same basic finding but because separate strands of evidence independently attest to the benefits of each of the components of student-centered learning and active exploration. These include the multiple advantages of “autonomy support” (that is, giving people more say about what they’re doing); learning from, and in collaboration with, one’s peers; and supportive student-teacher relationships.
10. See the discussion of possible mechanisms for that counterproductive effect in Gopnik, op. cit., and Bonawitz et al., op. cit. Another sort of evidence comes from research demonstrating that the beneficial impact of teaching that focuses on helping students to understand mathematical principles (which they then have to figure out how to apply) is undermined when they are also taught step-by-step problem-solving procedures. See Michelle Perry, “Learning and Transfer: Instructional Conditions and Conceptual Change,” Cognitive Development 6 (1991): 449-68. Also see the second study cited in note 14, below, which produced a similar finding: A positive outcome often requires not only the presence of student-centered teaching but the absence of explicit instruction.
11. Several studies cited in my review of early-childhood research (see note 1) found that children whose preschool had used DI (compared with a child-centered or constructivist approach) fared more poorly as teenagers and then as adults on a range of psychological, social, and other measures. For an updated summary of the most ambitious of those investigations, see Lawrence J. Schweinhart, The High/Scope Perry Preschool Study Through Age 40 (Ypsilanti, MI: HighScope, n.d.). A more recent longitudinal study didn’t find a negative effect from teacher-directed instruction but confirmed that “child-initiated instruction in preschool is a robust predictor of adulthood well-being” (Jasmine R. Ernst and Arthur J. Reynolds, “Preschool Instructional Approaches and Age 35 Health and Well-Being,” Preventive Medicine Reports 23 [2021]).
12. For example, Richard E. Mayer, “Should There Be a Three-Strikes Rule against Pure Discovery Learning? The Case for Guided Methods of Instruction,” American Psychologist 59 (2004): 14-19; Paul A. Kirschner, John Sweller, and Richard E. Clark, “Why Minimal Guidance During Instruction Does Not Work,” Educational Psychologist 41 (2006): 75-86 (and virtually everything else written by those three authors); and Lin Zhang et al., “There Is an Evidence Crisis in Science Educational Policy,” Educational Psychology Review 34 (2022): 1157-76. (Strong rebuttals to the latter two articles were subsequently published in the same journals. See especially de Jong et al., op. cit.)
13. Schwartz et al., op. cit., p. 58. The same point is made by a pair of researchers who conducted a metaanalysis on just this question: Ard W. Lazonder and Ruth Harmsen, “Meta-Analysis of Inquiry-Based Learning: Effects of Guidance,” Review of Educational Research 86 (2016), especially pp. 684, 704, 706.
14. The original study: David Klahr and Milena Nigam, “The Equivalence of Learning Paths in Early Science Instruction,” Psychological Science 15 (2004): 661-67. The follow-up study: David Dean, Jr. and Deanna Kuhn, “Direct Instruction vs. Discovery: The Long View,” Science Education 91 (2007): 384-97. DI proponents continue to cite the first study but, as far as I can tell, have never even acknowledged the existence of the second. For other examples of how they have repeatedly ignored “a massive number of controlled studies that have shown the benefits of inquiry-based instruction in comparison with direct instruction” — or, on other occasions, misrepresented research that contradicts their position — see de Jong et al., op. cit., pp. 3-4.
15. Alyssa Friend Wise and Kevin O’Neill, “Beyond More Versus Less: A Reframing of the Debate on Instructional Guidance,” in Tobias and Duffy, eds., op. cit., p. 87. On this point, also see Michelene T. H. Chi, “Active-Constructive-Interactive,” Topics in Cognitive Science 1 (2009), pp. 92-3; and Schwartz et al., op. cit., p. 49.
16. While some researchers talk about looking for ways to combine inquiry and direct instruction (for example, de Jong et al., op. cit.), it’s important to keep in mind that just because the latter can be used in certain circumstances — notably, if a teacher’s goal is to transmit facts rather than to help students learn how to think — that doesn’t prove that it needs to be used or that it’s more effective than student-centered learning (when the latter is accompanied by adequate guidance). Moreover, while a recommendation to find a place for both may appeal to us as a reasonable compromise, it deflects attention from what remains a fundamental divergence in one’s point of departure: Is learning understood mostly as memorizing facts and practicing skills to produce right answers, or as constructing meaning and understanding ideas? Are students primarily seen as passive receptacles or active meaning makers?
Similarly, even though few schools exemplify the pure version at one pole or the other, the vast majority tend to privilege teacher talk over student talk, impose a curriculum that students have little opportunity to help design, and continue to rely on instruments of traditional pedagogy such as lectures, worksheets, quizzes, textbooks, and practice homework. (For evidence of the continued traditionalism of U.S. schools, if anyone really requires it, see my The Schools Our Children Deserve, op. cit., chap. 1; and Robert Pianta et al., “Opportunities to Learn in America’s Elementary Classrooms,” Science 315 [March 30, 2007]: 1795-96.)
Nor should we lose sight of how radically this residual traditionalism diverges from what evidence shows are usually more advantageous strategies. Elsewhere, I — and, of course, many other authors — have discussed how teachers can provide guidance, model problem-solving, elicit students’ questions about the world and create a curriculum with them (rather than just for them), helping students to acquire intellectual proficiencies and to think in increasingly sophisticated ways. Teachers can provide limited direct instruction when necessary, but the bottom line is that their role should not consist chiefly of dispensing information. High-quality teaching is usually more facilitative than directive and more implicit than explicit.
17. “Science” is also misleadingly invoked in support of a reductive phonics-centered method of teaching reading. For more on that issue, see these lengthy excerpts from Kohn, op. cit., and the following more recent rebuttals by experts to the so-called “science of reading” campaign: Robert J. Tierney and P. David Pearson, Fact-Checking the Science of Reading (Literacy Research Commons, 2024); David Reinking et al., “Legislating Phonics: Settled Science or Political Polemics?”, Teachers College Record 125 (2023): 104-31; Peter Johnston and Donna Scanlon, “An Examination of Dyslexia Research and Instruction with Policy Implications,” Literacy Research: Theory, Method, and Practice 70 (2021): 107-28; Jeffrey S. Bowers, “Reconsidering the Evidence That Systematic Phonics Is More Effective Than Alternative Methods of Reading Instruction,” Educational Psychology Review 32 (2020): 681-705; Stephen Krashen, “Beginning Reading,” Language Magazine, April 2019; Dominic Wyse and Alice Bradbury, “Reading Wars or Reading Reconciliation?”, Review of Education 10 (2022): e3314; and Catherine Compton-Lilly et al., “Stories Grounded in Decades of Research: What We Truly Know About the Teaching of Reading,” The Reading Teacher 77 (2023): 392-400. Also see a series of three essays by literacy expert Maren Aukerman on the media’s coverage of reading instruction, all titled “The Science of Reading and the Media” and subtitled, respectively, “Is Reporting Biased?”, “Does the Media Draw on High-Quality Research?”, and “How Do Current Reporting Patterns Cause Damage?” Literacy Research Association Critical Conversations, 2022.
18. The first quote is from Ton de Jong, “Cognitive Load Theory, Educational Research, and Instructional Design,” Instructional Science 38 (2010), p. 114. (That article offers a useful review of the technical literature about CLT more generally.) The second quote is from Roxana Moreno, “Cognitive Load Theory,” Instructional Science 38 (2010), pp. 136, 137. The fundamental conjecture of CLT therefore must rely on “research in related domains”; it has never been tested by comparing constructivist teaching and direct instruction to see if the latter actually reduces cognitive load (Sigmund Tobias, “An Eclectic Appraisal of the Success or Failure of Constructivist Instruction,” in Tobias and Duffy, eds., op. cit., p. 340).
19. de Jong, op. cit., p. 125. Also see Wolfgang Schnotz and Christian Kürschner, “A Reconsideration of Cognitive Load Theory,” Educational Psychology Review 19 (2007): 469-508; and Guy Claxton, “Cognitive Load Theory: Just Brain Gym for Traditionalists?”, blog post, August 15, 2022.
20. Moreno, op. cit., p. 139.
21. Claxton, op. cit.
22. Peter Ellerton, “On Critical Thinking and Content Knowledge,” Thinking Skills and Creativity 43 (2022), p. 10. Also see Moreno, op. cit.
23. Richard C. Anderson et al., “Interestingness of Children’s Reading Material,” in Richard E. Snow and Marshall J. Farr, eds., Aptitude, Learning, and Instruction, vol. 3: Conative and Affective Process Analyses (Erlbaum, 1987), p. 288. Many other studies have reached the same general conclusion about the disproportionate impact of interest. In one experiment, fourth graders’ comprehension turned out to be so much higher when the passages they were assigned dealt with topics that interested them — suddenly, the kids were testing well above their supposed reading level — that the researchers ended their report by asking why teachers and researchers tend to be so “concern[ed] with difficulty when interest is so obviously a factor in comprehension.” See Thomas H. Estes and Joseph L. Vaughan, Jr., “Reading Interest and Comprehension: Implications,” The Reading Teacher 27 (1973): 149-52; quotation appears on p. 152. Today, the same persistent inattention to motivation is one of many problems with what is imposed on classrooms in the name of the “science of reading.” See Seth A. Parsons and Joy Dangora Erickson, “Where Is Motivation in the Science of Reading?”, Phi Delta Kappan, February 2024: 32-36; and Nancy Bailey, “The Comprehension Problem with New Reading Programs,” blog post, June 24, 2024.
24. Schnotz and Kürschner, op. cit., p. 497.
25. Kirschner et al., op. cit., p. 77.
26. Claxton, op. cit. He continues: “The computer metaphor on which the original concept of WM was based is no longer widely accepted as an accurate or adequate depiction of human cognition. Brain-based theories, in which there are no separate memory ‘stores’ – no boxes in the head – underpin much current research, and they do not lead to or justify anything like John Sweller’s image of Cognitive Load.”
27. For more on CLT’s simplified view of cognition, see David Jonassen, “Reconciling a Human Cognitive Architecture,” in Tobias and Duffy, eds., op. cit. On the existence of diverse types of load, and the fact that some learning doesn’t require additional working memory, see Schnotz and Kürschner, op. cit., especially pp. 485 and 502; and Sue Gerrard, “Direct Instruction: The Evidence,” blog post, April 23, 2014, and “How Working Memory Works,” blog post, March 16, 2014. Gerrard explains why the model of working memory used by Sweller and his colleagues “appears to be oversimplified and doesn’t take into account the biological mechanisms involved in learning.” Their model is based on straightforward activities like solving algebra problems in which “new items coming into the buffer displace older items, so buffer capacity would be a limiting factor. But real-world problems tend to involve different buffers, so items in the buffers can be easily maintained while they are manipulated by the central executive….Discovery, problem-based, experiential, and inquiry-based teaching in classrooms tends to more closely resemble real world situations than the single-buffer problems” on which CLT is based.
28. First quotation: Ellerton, op. cit., p. 8. Second quotation: Henk G. Schmidt et al., “Problem-Based Learning Is Compatible with Human Cognitive Architecture,” Educational Psychologist 42 (2007), p. 95. If we want to help students to think flexibly and be prepared for future learning, these authors add, then problem-based learning is a better bet than direct instruction.
29. Manu Kapur and Nikol Rummel, “Productive Failure in Learning from Generation and Invention Activities,” Instructional Science 40 (2012), p. 645. Emphasis added to underscore the two separate distinctions being made here — between short term and long term, and between performance and learning. The latter distinction is critical to understanding the inadequacy of DI: Eliciting a correct answer may amount to what these authors call “unproductive success” rather than meaningful learning. Indeed, Kapur found in a pair of studies that students who had to wrestle with problems on their own before receiving instruction “significantly outperformed DI students on conceptual understanding and transfer without compromising procedural knowledge.” Did they expend greater mental effort? Yes. Did that hurt? No – if anything, it helped. See Manu Kapur, “Productive Failure in Learning Math,” Cognitive Science 38 (2014): 1008-22.
30. Schnotz and Kürschner, op. cit., p. 484. Thus, presenting “worked examples” to students can be “beneficial for task performance but not for learning. In other words: Making a task easier does not necessarily result in better learning” (ibid., p. 493).
31. Walter Kintsch, “Learning and Constructivism,” in Tobias and Duffy, eds., op. cit., p. 229.
32. Schwartz et al. op. cit., p. 46.
33. For example, see Rand J. Spiro and Michael DeSchryver, “Constructivism: When It’s the Wrong Idea and When It’s the Only Idea,” in Tobias and Duffy, eds., op. cit.
34. The quotation is from Schmidt et al., op. cit., p. 95. On the value — and different variants — of cooperative learning, see my “Learning Together,” which originally appeared as chapter 10 of Kohn, No Contest: The Case Against Competition, rev. ed. (Houghton Mifflin, 1992).
35. de Jong, op. cit., pp. 123, 124.
36. Hmelo-Silver et al., op. cit., p. 104. Wise and O’Neill, op. cit., make the same point, as do Kapur and Rummel, op. cit.
37. Erin Marie Furtak et al., “Experimental and Quasi-Experimental Studies of Inquiry-Based Science Teaching: A Meta-Analysis,” Review of Educational Research 82 (2012): 300-29. For another study (published after this review) that found students who received no guidance at all did better than those who received direct instruction or worked examples, see Kapur 2014, op. cit.