Background

The Covid-19 pandemic was "the largest disruption of education systems in human history, affecting 1.6 billion learners [including elementary and secondary school students] in more than 200 countries" (Pokhrel and Chhetri, 2021). When the pandemic reached the United States in March 2020, everything that could go online—including higher education—went online, both to fight back against the virus and to complete the school year. Millions of college students and instructors abruptly and involuntarily made the transition to online learning with little preparation.

Also in March 2020, The Chronicle of Higher Education published a commentary from the education historian Jonathan Zimmerman titled "Coronavirus and the Great Online-Learning Experiment." Online courses specifically and distance education generally have been part of the mix of higher education in the U.S. for decades, and by 2016, as Zimmerman noted, about a third of all college students were taking at least one online course as part of their degrees. Yet Zimmerman was still skeptical of the effectiveness of the format. "What were they learning?" Zimmerman asked. "It's hard to tell." While he acknowledged that numerous studies in the 1990s (and before and since) have "found little difference in academic achievement between people who took face-to-face, online, and hybrid courses," Zimmerman nonetheless argued that "this research was marred by the problem of self-selection: Students [and I would add instructors as well] who chose online courses were probably more comfortable in that format and tended to perform better in it."

Zimmerman suggested that researchers should take advantage of the "natural experiment" created by the Covid pandemic to study how students who had not previously taken classes online would perform. "It might be hard to get good data if the online instruction only lasts a few weeks," Zimmerman wrote, "But at institutions that have moved to online-only for the rest of the semester, we should be able to measure how much students learn in that medium compared to the face-to-face instruction they received earlier."

By "natural experiment," Zimmerman was referring to the observational research methodology in which a particular situation or event approximates the control conditions of a randomized experiment, where subjects are assigned to different groups to answer a particular question. As Julia Rosen (2021) discussed in Nature, "Researchers have long relied on natural experiments to probe subjects that would be difficult--or unethical--to investigate through conventional methods such as randomized controlled trials" (p. 150). These studies tend to involve examining very large data sets. For example, Rosen discusses an international project launched during Covid to study how pollution changes in different parts of the world affected preterm births, as well as the impact of many people skipping cardiovascular and cancer screenings during the height of the pandemic out of fear of contracting the virus.

In other words, if Zimmerman is correct about the problem of self-selection in previous studies about the effectiveness of online courses, then Covid would create the conditions for a unique natural experiment. After all, if the majority of faculty in the U.S. who taught online during the Covid pandemic would have preferred to teach face to face, then the self-selection bias essentially would have disappeared.

Now, to be fair, even Zimmerman acknowledged that comparing online courses to face-to-face courses under the difficult circumstances of the pandemic was problematic. He admitted that the "abrupt and rushed shift to a new format might not make these courses representative of online instruction as a whole," and it's also worth noting that Zimmerman's commentary was published at a time when the common wisdom was that the pandemic would be over in a few weeks, not a few years. Other critiques of Zimmerman's idea were far more direct. Thomas J. Tobin (2021) wrote his own commentary for The Chronicle with the succinct headline "Now is Not the Time to Assess Online Learning," and Kevin Gannon (2020), who is the Director of the Center for Excellence in Teaching and Learning at Grand View University and who blogs under the pseudonym "The Tattooed Professor," tweeted "This is like deciding to give people a swimming test during a flood. No. No no no."

There is also the question of how useful this comparison is anymore. The question being teased out here, which mode is "better," reminds me of one of the common questions that circulated in computers and composition research in the early 1990s, when I first entered the field: which is better, teaching writing with or without computers? It was, at the time, a serious question. The "World Wide Web" was still a new and novel space for techies; many students did not own computers (not to mention cell phones and other devices) and had to use campus computer labs and classrooms to draft and revise their writing assignments; some instructors were skeptical of features like spell-check in word processing software; and the only common technology in most college writing classrooms was an overhead projector.

Most of us who were enthusiastic about the role of computers in teaching argued that the comparison was not the point. Rather, the real question was how we could use computers to teach writing differently. The same is true of the research that has been done comparing online courses with face-to-face ones. Means, Bakia, and Murphy (2014) raise a similar concern throughout their book Learning Online, which analyzes and synthesizes hundreds of studies on the effectiveness of online teaching:

Rather than trying to compare the outcomes of the non-existent chimera of "online learning in general" to the equally under-specified "typical classroom," research would do well to focus their efforts on particular kinds of online learning experiences that capitalize on the things that technology is differently suited to provide. (24)

There are affordances of the technology of online pedagogy "that cannot be replicated in conventional classroom-based instruction" (24), and I would argue that the face-to-face classroom likewise offers different affordances that are difficult to replicate online.

Still, even with objections regarding the validity of the natural experiment conditions, Zimmerman does have a point. The Covid pandemic did create a unique exigency to study online teaching under conditions where participants were not able to self-select into those conditions.

But again, in March of 2020, Zimmerman's essay was as much a thought experiment as anything else, in part because the assumption at the time was that the pandemic would be over by the end of that school year. By the late spring of 2020, it became clear that the ongoing pandemic and its restrictions would continue well into the fall of 2020 and beyond. At the vast majority of universities in the U.S., faculty would spend at least some of the summer of 2020 retooling their courses for the online format and—regardless of their own preferences—would have to teach online. Like it or not, the "natural experiment" of teachers and students who had no choice (other than to sit the school year out entirely, of course) but to take their teaching and learning online was going to happen.

So, in the midst of the natural experiment opportunity created by Covid, and given what appeared to me to be the odd choice of so many faculty to opt for the unconventional online mode of synchronous instruction and learning, I embarked on this pilot study. My goal in my survey and follow-up interviews was to seek the beginnings of answers to the following questions: