Musings about language and literacy and learning

How you interpret “the science of reading” depends on how you think of “science”: Part III

*The “science of reading” has become a loaded term — partly due to how “science” itself may be conceived. Since starting this series (yes, I know, I take a really long time to write posts), there’s been a fascinating trend of articles reacting to the term in various ways. These takes seem slated only to increase, given the wide attention a recent tidy overview in Time of the push for SOR has received, to cite just one example.*

In Part I, we examined a 2003 article by Keith Stanovich that proposed five different “styles” that can influence how science is conducted and perceived. In that article, we learned that in education there may be a tendency to lean towards “coherence” in narratives or the “uniqueness” presented by silver-bullet fads. These tendencies can and do subvert science-based reading practice.

In Part II, we began our analysis of yet another stellar 2003 piece, by Paula and Keith Stanovich, which lays out the importance of drawing on the cumulative base of scientific findings on reading, rather than on gurus, personal agendas, and politics, as the field of education so often does. We learned that while peer-reviewed research may not be a guarantee of quality, it is at the very least a minimum criterion that establishes such research as part of the accumulating “public” realm of scientific knowledge.

Today in Part III, we continue onward with the article from Part II, “Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular & Instructional Decisions,” as it is a lengthy one and there’s quite a bit more left to unpack.

For example, the importance of an empirical approach to reading practice . . .

Research-based Practice Relies on Systematic Empiricism

There’s a lot of talk in education-related policy about research-based or evidence-based practice, but what does that mean? According to the Stanoviches, research-based practice is grounded in systematic empiricism.

What is empiricism? Empiricism is knowledge derived from evidence, and it is the basis of the scientific method. All empirical theories must be tested against real-world observations, rather than drawn from philosophical musing or heartfelt intuition.

Basic, right? But the field of education is a realm driven as much by political, bureaucratic, and ideological whims and sophistry as by data from the students in front of us.

Empiricism thus starts with observation, but according to the Stanoviches, it’s more than that:

“Observation itself is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. . . Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying causal structure of events in the world. Observations are structured so that, depending upon the outcome of the observation, some theories of the causes of the outcome are supported and others rejected.”

It’s worth unpacking the term “causal” (easily mistaken for casual by casual readers) here, as it’s one of those academic terms used frequently by researchers and not so frequently by teachers.

When we observe events in the real world, we primarily see the interwoven effects of many underlying factors. In order to disentangle and identify the specific causes of effects, researchers design tests that isolate variables, allowing them to make inferences—causal inferences—about complex and interrelated phenomena.

When it is claimed that one form of instruction is better than another (e.g., phonics vs. whole-word instruction), this is a causal claim that can be tested systematically and empirically.
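
To make that concrete, here is a minimal sketch, with entirely simulated and made-up numbers (not real study data), of the logic behind such a test: randomly assign students to one of the two methods, then ask whether the observed difference in outcomes is larger than chance alone would plausibly produce.

```python
# A minimal sketch of a randomized experiment with simulated,
# entirely hypothetical data -- not a real study or real effect sizes.
import random
from scipy import stats

random.seed(42)

# Simulate post-test reading scores for 30 students per condition.
# Random assignment is what licenses a causal reading of the result:
# it balances the other variables (prior ability, home environment,
# teacher effects) across the two groups on average.
phonics_scores = [random.gauss(75, 10) for _ in range(30)]
whole_word_scores = [random.gauss(70, 10) for _ in range(30)]

# A two-sample t-test asks: how likely is a difference this large
# if instruction method actually had no effect at all?
t_stat, p_value = stats.ttest_ind(phonics_scores, whole_word_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```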

When testing such claims using systematic empiricism and looking at the evidence base, there must be space allowed for being wrong. According to the Stanoviches, this is called the “falsifiability criterion”:

A scientific theory must always be stated in such a way that the predictions derived from it can potentially be shown to be false.

A brief digression here on this concept of falsifiability: there are many within the “science of reading” camp—as with any other tribe in education—who can be overly strident in their claims about specific research-based practices. Examples include sound walls, decodables, and multisensory instruction. Despite the coherence each of these approaches offers with existing converging evidence, none has yet been empirically proven better than alternative approaches (I should add there is great debate about all of this, of course). This doesn’t mean they aren’t better — just that the peer-reviewed evidence is not quite there to state more unequivocally that they are.

We’ve examined related controversy on phonological awareness instruction here on this blog in the past.

While this can be frustrating to those of us who wish to live in a clearly defined, black-and-white world of proven approaches based on the “science of reading,” the deeper beauty of science is that, like nature, it is completely agnostic to anyone’s preferred outcomes. Scientific truth lies on the ever-shifting dune-face of peer-reviewed evidence. This requires intellectual humility and the willingness to shift one’s beliefs in the face of a slowly accumulating evidence base.

Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will lead them to abandon the practice. True scientific knowledge is held tentatively and is subject to change based on contrary evidence.

So what does this mean for teachers? It doesn’t mean we shouldn’t test out sound walls, use decodables, or go all in on multisensory instruction — it means that when we do, we should recognize them, more humbly and honestly, as tests of a hypothesis. Teachers are quite familiar with the phenomenon of what works one period with one group of students bombing completely with the next. What works depends on any number of variables. It’s certainly worth strategically trying different approaches until outcomes cumulatively demonstrate the intended effect. This is what science is all about.

Objectivity and Intellectual Honesty

Philosopher Jonathan Adler (1998) teaches us that science values another aspect of open-mindedness even more highly: “What truly marks an open-minded person is the willingness to follow where evidence leads. . . Scientific method is attunement to the world, not to ourselves.”

As the Stanoviches point out, scientists are also flawed human beings. Individually, they are no more objective than the rest of us. Yet the greater endeavor of science countervails this fallibility with a “process of checks and balances.” This process is inherently social: scientists engage in peer critique of one another’s assumptions, biases, and conclusions.

“Purveyors of pseudoscientific educational practices fail the test of objectivity and are often identifiable by their attempts to do an “end run” around the public mechanisms of science by avoiding established peer review mechanisms and the information-sharing mechanisms that make replication possible. Instead, they attempt to promulgate their findings directly to consumers, such as teachers.”

This is an important caution to bear in mind. We have an over-abundance of “direct to consumer” products and pitches in our field. Furthermore, even when something is research-based via clinical studies, there is still the issue of translation and implementation in the complex and complicated world of real schools and classrooms.

Either way, in classrooms and schools we also need to rely heavily on our own version of peer review: a social process of checks and balances. Through dialogue and shared analysis of student data in teams, and via peer classroom intervisitations and feedback, we can get our assumptions and hypotheses checked.

The Principle of Converging Evidence

Research itself is hardly pristine, either. Every individual study is imperfect in its own way (and each should clearly outline its limitations) — but taken collectively, studies can provide robust conclusions.

“Scientists do not evaluate data from a single experiment that has finally been designed in the perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer.”

This idea of converging evidence is critical, because research takes time to conduct, write up, and publish, and it’s easy to get caught up in the latest finding and lose sight of the wider body of evidence, most especially findings established by a previous generation. It’s furthermore difficult to connect the dots between different journals and bodies of knowledge. Like any human endeavor, science has geographical, institutional, and social networks and gaps that make cross-disciplinary connections harder to make over time. Some of the best peer-reviewed articles provide a comprehensive overview of the extant research, and meta-analyses can also be invaluable for sifting through the bricolage and separating the wheat from the chaff.
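
For a rough sense of how converging evidence gets quantified, here is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis. The effect sizes and standard errors below are invented purely for illustration; the point is that each imperfect study contributes in proportion to its precision, and the pooled estimate ends up tighter than any single study.

```python
# A minimal sketch of fixed-effect (inverse-variance) meta-analysis.
# The effect sizes and standard errors are made up, purely to
# illustrate how imperfect studies pool into one robust estimate.
import math

# (effect size d, standard error) for five hypothetical studies
studies = [(0.45, 0.20), (0.30, 0.15), (0.60, 0.25),
           (0.38, 0.10), (0.50, 0.18)]

# Weight each study by 1 / SE^2: larger, more precise studies count more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# The pooled standard error shrinks as evidence accumulates.
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```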

This makes me think, relatedly, about how important it is to consider a variety of sources of information about our students — including talking directly to them and their families. I used to coordinate and write Individualized Education Plans (IEPs) for students in my building, and it was only after examining a student’s social history, multiple years of academic performance, multiple test scores across domains, the student’s behavior and performance across classrooms, and writing and work samples, and after interviewing the student and speaking with all of their teachers, service providers, and family, that I began to feel like I was getting somewhere in understanding their strengths and needs. One data point will tell you very little about a student. But with multiple forms of data, both qualitative and quantitative, we can tell a story from the converging evidence.

More to Come

Can you believe that we still aren’t done exploring “Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular & Instructional Decisions” by Paula and Keith Stanovich?!

Nope. There’s that much good stuff in there to unpack.

Before we wrap up, let’s review what we’ve covered thus far since Part I:

- Part I: Stanovich’s five “styles” that influence how science is conducted and perceived, and education’s pull towards “coherence” and “uniqueness”
- Part II: why practice should draw on the cumulative, peer-reviewed base of scientific findings, rather than on gurus, personal agendas, and politics
- Part III (this post): systematic empiricism, falsifiability, objectivity and peer critique, and the principle of converging evidence

To be continued in Part IV of our series. Thanks for joining me.

#Stanovich #science #scienceofreading #SOR #research #empiricism
