Word Processing Research

Early research into word processing in the field often took the form of case studies or experiments exploring the differences between writing with print technologies and writing with word processors. Gail Hawisher’s (1986) review of studies from the early 1980s found that “[r]esearchers most often studied the attitudes of writers but also looked at the effect of word processing on errors, the number of words, the frequency and kinds of revision, and the quality of the writing” (p. 14). Hawisher noted that the results of these studies were difficult to compare given the variety of designs, research instruments, and research participants. In a follow-up article, however, Hawisher reviewed sixteen more studies published in the twelve months following her initial 1986 review and was willing to “tentatively draw some conclusions of what we might expect when students use word processing as a writing tool” (1988, p. 10).1 These studies showed that students using word processors instead of conventional print methods would likely produce “fewer errors in final drafts” and “longer texts,” have more “positive attitudes … toward writing with word processing,” and would improve their writing if they were basic writers (p. 10). Although such results suggested that students were “more highly motivated when they work[ed] with word processing” (p. 17), Hawisher was quick to argue

that the new classroom atmosphere I am describing is not directly attributable to the introduction of word processing. Rather it has more to do with the ways in which writers interact with one another when they are learning and writing with computers. The real challenge of working in this context, then, is to devise a pedagogy that capitalizes on both computers, and this cooperative atmosphere yet goes beyond what we have previously contrived. (p. 18)

Hawisher’s call for a new pedagogy marrying the affordances of computers and the “cooperative atmosphere” that could be developed in computer writing classrooms resonates with later research that cautioned scholars not to conceptualize computer technology as a “silver bullet” for improving student writing, but at the same time to pay close attention to the benefits it can bring (e.g., Selfe, 1999).

In contrast, Bernard Susser (1998) reviewed the research on word processing and concluded that computers and writing scholars were not, in fact, paying close enough attention to the benefits of word processing because they were not training students to be skillful users of the software. He concluded his review by arguing that

Writing teachers have not devoted much time to teaching the skills of word-processing packages, nor have they reached any agreement on what level of skills are adequate. Researchers have for the most part ignored the important variable of the level of their participants’ skills with word-processing packages; those few studies that have looked specifically at experienced users have invalidated their own conclusions, given that the data they present show that many experienced users, in fact, cannot be said to be using word-processing packages in a meaningful sense. (p. 355)

Using a word processor “in a meaningful sense,” for Susser, means employing a large number of its features. Susser reported the results of his own study, which surveyed students on their use of specific features of a word processing program during each class session over the course of a semester.2 For one class, “[t]he average use of functions clusters around 5 or 6 and the maximum is 8; this is disappointing considering the maximum possible is 30” (p. 357), and in the second class, despite more effort on Susser’s part to “make students more interested in using functions, and despite the individual attention I gave to their questions and problems [revealed in the surveys] […] the level of use was just as disappointing” (p. 358).

While it may seem curious that Susser equates “meaningful use” with using a somewhat arbitrary number of word processing functions, his argument for attending more closely to writers’ skill and experience with word processing software is a compelling one. In seeking to address the apparent contradiction between studies that reveal “that using computers motivates student writers powerfully” (p. 362) and studies that show “that using computers does not necessarily improve the quality of writing” (p. 362), Susser suggests that more “meaningful” use of word processing may lead to improved writing quality: “When students use word-processing packages in a meaningful way, it may turn out that writing quality will improve. This claim must be tested by redoing the computers and writing studies of the past with participants actually word processing” (p. 362). However, as Charles Moran (2003) argued in his review of twenty years of Computers and Composition research, “[t]his particular hope—that computers would somehow make a difference in student writing—has been one that ‘springs eternal’” (p. 349).

More recently, scholars within computers and writing have been more attentive to issues of instrumental skill with computer technology. Stuart Selber (2004) has called for writing teachers to embrace a critical “functional literacy” as part of a more complete and nuanced approach to teaching and conceptualizing computer literacy. As he argues,

teachers have not paid enough attention to the so-called advanced features of software programs (e.g., style sheets, master pages, version controls, macros), which are typically explained in the associated help resources. Such features are not hard to grasp but require a pedagogical commitment deeper than cut, copy, and paste. The payoff though, is a command over software features that manipulate text elements in ways that are significant and sometimes elegant. (p. 48)

Unlike Susser’s somewhat acontextual privileging of all features available in a software package, Selber advocates for teaching features that lead to desirable pedagogical ends. He gives the example of guiding students to use style sheets when working in groups, as this feature makes it easier to merge work from multiple authors and “[o]n a rhetorical level … requires students to understand how and why readers rely on the various structural elements of reports” (p. 49).

In her reconsideration of the term mechanics in rhetoric and composition, Jenny Edbauer Rice (2008) points to the ways that a rhetorical approach toward instrumental skill can benefit writers: “[m]ore than an instrumental knowledge of technology, rhetorical mechanics is the material practice of enactment. Embracing such productive skills is thus a move away from instrumentalism” (p. 373). Like much recent research dealing with computer technology within rhetoric and composition, Rice’s argument looks beyond “the print essay” (p. 373) and the word processors used to create it and toward the possibilities of composing across many media.

Our review of research within Writing Studies has not turned up any studies of “distraction-free” writing tools. This is not surprising, as they are still a new phenomenon and not widely used. However, there has been academic interest in these tools, as evidenced by references to them in social networks and blog posts. Like the writers whose posts we examine in later sections of this webtext, some academic writers have created screencasts, written reviews, and produced other kinds of accounts of their use of these tools.


  1. Students were by far the most often studied group in all the studies Hawisher reviewed in 1986 and 1988, with only five of the thirty studies involving professional writers.

  2. Features such as “block moves [cut and paste], undo, spelling checker, etc.” (p. 355).