Musings about language and literacy and learning

How you interpret “the science of reading” depends on how you think of “science”: Part IV

This is Part IV in a series digging into two articles from Keith Stanovich that provide useful ways for educators to understand the science in the science of reading.

In Part I, we examined a 2003 article that proposed five different “styles” that can influence how science is conducted and perceived.

Since Part II, we’ve been unpacking a long and stellar 2003 piece by Paula and Keith Stanovich, “Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular & Instructional Decisions.”

Today in Part IV, we continue deeper into the article to examine the oh-so-science-y aspects of experimental design.

The Logic of the Experimental Method

So here are two words that will cause an immediate and negative gut reaction in many involved in education: manipulation and control. Yet according to the Stanoviches, these seemingly sinister concepts are essential to experimental design:

“The heart of the experimental method lies in manipulation and control. In contrast to a correlational study, where the investigator simply observes whether the natural fluctuation in two variables displays a relationship, the investigator in a true experiment manipulates the variable thought to be the cause (the independent variable) and looks for an effect on the variable thought to be the effect (the dependent variable) while holding all other variables constant by control and randomization.”

It is interesting to think that there is a certain cold calculation to effective experimental design that may in fact present a barrier to executing more controlled studies in real schools. Randomized controlled trials (RCTs), for example, are incredibly difficult to do in schools because no one in their right mind wants to withhold a potentially more effective intervention or resource from the kids not currently receiving it, which makes a control group really hard to sustain.

There’s a lot in these concepts of manipulation and control that brings us back to causal inference and the importance of attempting to falsify our hypotheses, which we briefly discussed in Part III. Frankly, I find all these seemingly simple concepts hard to fully grasp. It’s made me feel slightly better to see real scientists posting deep explorations of the complexity of making accurate causal inferences on my Twitter timeline.

I think in education we do have some grasp and application of these ideas. For example, team inquiry, problem-solving, and data-based decision-making are all oriented around looking at student data, framing a problem of practice and a theory of change (or hypothesis), and then taking action steps to address it.

The Stanoviches point this out at the close of the paper, in fact:

Effective teachers engage in scientific thinking in their classrooms in a variety of ways: when they assess and evaluate student performance, develop Individual Education Plans (IEPs) for their students with disabilities, reflect on their practice, or engage in action research.

This assessment cycle looks even more like the scientific method when teachers (as part of a multidisciplinary team) are developing and implementing an IEP for a student with a disability. The team must assess and evaluate the student’s learning strengths and difficulties, develop hypotheses about the learning problems, select curriculum goals and objectives, base instruction on the hypotheses and the goals selected, teach, and evaluate the outcomes of that teaching. If the teaching is successful (goals and objectives are attained), the cycle continues with new goals. If the teaching has been unsuccessful (goals and objectives have not been achieved), the cycle begins again with new hypotheses.

Yet perhaps the area where we fall most short of the real scientific method is that we typically move confidently and brashly toward action without either strategically creating a group of students who do not receive the supports we think will be most effective, or considering in advance what evidence would most clearly confirm or falsify our theory of change. Instead, we often seem to default to the notion that if we’ve taken action, we’ve achieved success. Everyone in education likes a good celebration, especially those in more political positions, but “confronting the brutal facts” in normed or standardized data seems to be a rarer exercise, other than as media headlines when the latest NAEP results come out. (For a model of an educator making this shift toward confronting the brutal facts in literacy outcome data, please read this blog from The Right to Read Project.)

In the tech world, A/B testing is a well-known and common undertaking. Maybe we need to keep simple study designs like this in mind and start smaller in our change endeavors. Maybe it’s not always about one wholesale theory of change, but rather about having a few different theories that we can keep testing in simple ways, with different students, classes, or groups. Or maybe the flipside is better? I’ve explored elsewhere the idea that coherence within and beyond a school may actually matter more than “research-based” practices, so it’s an open question whether putting out a firm directive and asking everyone to get on board (more typical), versus opening up a space for teacher teams to try out different ways to meet a goal, is ultimately more effective in impacting collective outcomes for students.
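To make the A/B analogy concrete, here’s a minimal sketch in Python of what such a comparison could look like. Everything in it is hypothetical: the group sizes, the score distributions, and the scenario itself are invented for illustration, and a real study would of course use actual assessment data. The structure is the point, since the random assignment supplies the “manipulation and control” the Stanoviches describe.

```python
# A hypothetical A/B-style comparison: randomly assign students to one of
# two instructional approaches, then compare outcome scores. All names and
# numbers are invented for illustration.
import random
from statistics import mean

from scipy import stats

random.seed(42)  # reproducible illustration

students = [f"student_{i}" for i in range(60)]
random.shuffle(students)                         # randomization
group_a, group_b = students[:30], students[30:]  # two conditions (the manipulation)

# Pretend post-test scores; in reality these would come from an assessment.
scores_a = [random.gauss(70, 10) for _ in group_a]  # approach A
scores_b = [random.gauss(75, 10) for _ in group_b]  # approach B

# Welch's t-test: is the difference in means larger than chance alone predicts?
t_stat, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=False)
print(f"Mean A: {mean(scores_a):.1f}, Mean B: {mean(scores_b):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The statistics here aren’t the point; the design is. Because students land in the two approaches at random, a difference in outcomes can more plausibly be attributed to the instruction itself rather than to pre-existing differences between the groups.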

The Need for Both Correlational Methods and True Experiments

To this end, the Stanoviches make the great point that there’s a need for multiple approaches to research design, and that we must then look across the findings from these divergent approaches for converging evidence:

It is necessary to amalgamate results not only from experimental investigations, but also from correlational studies, nonequivalent control group studies, time series designs, and various other quasi-experimental and multivariate correlational designs; all of these have their strengths and weaknesses. For example, it is often (but not always) the case that experimental investigations are high in internal validity but limited in external validity, whereas correlational studies are often high in external validity but low in internal validity.

Convergence increases our confidence in the external and internal validity of our conclusions.

The Role of Case Studies and Qualitative Investigations

I found this section of the paper, in which the Stanoviches explain how case studies and qualitative studies operate in conjunction with quantitative research, especially fascinating.

In their explanation, case studies and qualitative investigations are most useful as explorations of a problem. Once a theory is more fully formed, however, stress-testing it fully enough to “rule out alternative explanations” requires the “comparative information” provided by quantitative experiments.

Where qualitative investigations are useful relates strongly to a distinction in philosophy of science between the context of discovery and the context of justification. Qualitative research, case studies, and clinical observations support a context of discovery where, as Levin and O’Donnell (2000) note in an educational context, such research must be regarded as “preliminary/exploratory, observational, hypothesis generating.”

In education, however, investigators sometimes claim to be pursuing Objective B (justification) but slide over into Objective A (discovery) without realizing they have made a crucial switch. They want to make comparative, or quantitative, statements, but have not carried out the proper types of investigation to justify them.

Case studies and qualitative description lack the comparative information necessary to prove that a particular theory or educational practice is superior, because they fail to test an alternative; they rule nothing out. Take the seminal work of Jean Piaget for example. His studies were critical in pointing developmental psychology in new and important directions, but many of his theoretical conclusions and causal explanations did not hold up in controlled experiments.

Woah.

My thinking is that while case studies can be a good way to get an investigation of a problem started, they can also serve as a great bookend to the journey. In other words, I suspect that once a theory has solid backing from converging evidence, drawing up case studies and gaining fuller context on its real-world application through qualitative description can be really beneficial.

So: a qualitative-quantitative-qualitative sandwich! I don’t know, I’m not a researcher. But people seem to like case studies, so they seem like a good communication and educative tool, in addition to being the beginning of an exploration.

Teachers and Researchers: Commonality in a “What Works” Epistemology

The Stanoviches close this tour-de-force exposition with some connections between the thinking necessary for science and how it can be leveraged by teachers:

Drawing upon personal experience is necessary and desirable in a veteran teacher, but it is not sufficient for making critical judgments about the effectiveness of an instructional strategy or curriculum.

Teachers need creativity, but they also need to demonstrate that they know what evidence is, and that they recognize that they practice in a profession based in behavioral science.

One of the pitfalls in teaching is that we often rely too heavily on personal observation, rather than systematic empiricism that can move us past our subjective assumptions and more accurately surface the underlying causes of what we see.

I suspect this is why running records are such a strongly held practice, even in the midst of shifts to universal screening instruments and systematic phonics programs.

Yet the Stanoviches end on a hopeful note, suggesting that a researcher-teacher partnership can be forged:

Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom.

This was a lot to unpack, and it sure took me some time to pick away at it, but I feel like I came away with a deeper understanding of research and how to consider it in the context of education.

The crazy thing is, I feel like there’s still a lot I only touched the surface of, or missed entirely. But I’ve got other things calling me to write about, and it’s time to move on. Thank you, Keith and Paula Stanovich, for sharing your wisdom in these gems. And I hope, dear reader, I’ve inspired you to pick up either article and read it (or re-read it, as the case may be) and dig into it on your own.

Over and out.

#research #Stanovich #science #scienceofreading #SOR #empiricism #reading