Research and Methods

High School OWLs and Writers as Solicitors of Feedback: Two Nascent Fields

The issues these girls identify contribute to two evolving fields: high school OWLs and writers as solicitors of feedback. High school writing center research has been emerging for over twenty-five years; the scholarship addresses the strategies that incipient writing centers can implement to meet high school writers’ distinct needs.[4] While high school writing center research represents a small collection, even less research focuses on high school writing centers and technology; high school OWL research represents a meager strand within this scholarship. The available research concerning high school writing centers and technology establishes that writing centers have always incorporated technologies, from pen and paper to complex computer systems. Significantly, these scholars remind writing center directors to evaluate the digital technologies employed in their writing centers or on their OWLs. Work by Inman (2006); Childers (1995, 2003); and Childers, Jordan, and Upton (1998) argues that these technologies should be peripheral to writing center or OWL sessions: the digital tools should not dictate the curriculum but should enhance how high school writers learn to think, write, and revise.

These digital tools can benefit student writers while also offering valuable insights into how these writers operate within online tutoring sessions. In “Confessions of First-Time Virtual Collaborators: When College Tutors Mentor High School Students in Cyberspace,” Denny (2005) divulges complex, cross-institutional (university-to-high-school) pedagogical expectations regarding OWL responses. Denny notes, for instance, that high school students perceive the university OWL consultants as writing experts rather than as peer consultants. Furthermore, he observes, students expect corrective feedback rather than a dialogue between writer and consultant (p. 2). Denny’s article complements the work of Childers et al. (1998), suggesting that high school and university writers’ response expectations are complementary but distinct. These distinctions, combined with the limited high school OWL research, remind us to scrutinize the ways high school students engage with OWL feedback.[5]

In addition to the exiguous high school OWL research, most response research elides the writer’s role in the feedback process. As a field, composition and rhetoric has developed a rich corpus of research and pedagogical suggestions focused on the response process. As Formo and Stallings (2014) describe in their article, “Where’s the Writer in Response Research? Examining the Role of the Writer as Solicitor of Feedback in (Peer) Response,” response research coalesces into three categories: the value of the response process,[6] best practices in peer-response workshop design,[7] and the art (and technology) of giving effective feedback.[8] However, this research contains a curious gap: a mere modicum of inquiry instructs writers in how to ask for feedback. This absence is troubling. When we as teachers fail to include such pedagogy, we may unknowingly minimize the writer’s role in the response process.

Our Study
To emphasize the writer’s importance in response, we targeted the rich repository of student work offered by the CSUSM OWL as a fruitful site for our research. After the OWL had operated for two years, and with the high school writers’ and parents’ permission, we began to collect quantitative and qualitative data. In phase one, we randomly selected fifteen students, boys and girls, from three participating medium-sized, suburban high schools as potential interviewees. Of the fifteen selected students, we interviewed eleven. These students had used the OWL for at least one semester and had submitted a signed consent form. The random selection process, however, resulted in a limited number of interested and available students. Therefore, in phase two, we asked students to volunteer to be interviewed; in this phase, we interviewed forty-three students. We scheduled the interviews after school in the students’ English classrooms, talking to students whose OWL use ranged from a single session to repeated use during one school year or across multiple school years. Scheduling the meetings directly after school increased student participation while allowing enough time to meet with the students (the short lunch break did not give students time to both eat and answer questions). We chose to interview students in their English classrooms because we understood these spaces to be comfortable meeting places for the students.

We conducted structured interviews that lasted approximately thirty to sixty minutes. As the interviewer, Dawn asked the same questions of each interviewee to facilitate our analyses and comparisons. We asked students about their backgrounds, their general computer use, and their experiences on the OWL itself. Dawn asked the students to reflect on their initial perceptions of the OWL as well as their thoughts after extended use. (Appendix A lists the questions we asked the students as part of the structured interviews.) Informed by Ratcliffe’s work, Dawn also engaged in rhetorical listening during each interview, committed to being open to the girls’ insights about the OWL. As such, she followed the girls’ responses with authentic follow-up questions unique to their comments to ensure clarity and understanding and, when possible, answered the students’ questions about the OWL. We tape-recorded, videotaped, and transcribed the interviews verbatim to analyze the raw data.

Because girls were the primary OWL users, we launched this project using the comments of the twenty-nine girls we interviewed to analyze high school writers’ insights about response. We then studied the girls’ interviews, coding for patterns (and their generative variations) within and across each academic year.[9] The preliminary categories we used to organize the girls’ comments on OWL limitations reflected common concerns about the OWL: the OWL as less interactive than face-to-face consulting, OWL feedback as based too much on personal opinion, and OWL consultants’ lack of clarity or lack of subject knowledge. As the project continued, we modified the categories, eliminating and combining categories as we gathered more responses from our interviewees. Once we collected all of the data, we reviewed the transcripts for evidence and extracted several excerpts to place into the categories we had created (Student Interview Transcripts, 2003-2006).

[4] See Barnett (2006); Childers (1995, 2003, 2006); Farrell (1989); Nixon-John (1994); and Fels and Wells (2011).
[5] See Littleton (2006); Moussu (2012); and Tinker (2006) for additional discussions about university and high school writing center connections.
[6] See Bruffee (1973, 1984); Castner (2000); Elbow (1973); Gere (1987); Gillam (1990); Gillespie and Kail (2006); Harris (1992, 1995); Moss, Highberg, and Nicolas (2004); North (1984); Nystrand and Brandt (1989); Shadle (2000); Spear (1988); and Trimbur (1987).
[7] See Anson (1989, 1997, 2011); Bruffee (1973, 1984); Dixon (2007); Elbow (1973); Gere (1987); Glenn, Goldthwaite, and Robert (2003, pp. 63-68); Healy (1980); Liu and Hansen (2002); Macrorie (1968); Moffett (1987, 1992); Spear (1988, 1993); Spigelman and Grobman (2005); Thomas and Thomas (1989); and White (2007).
[8] See Anson (1997, 2011); Auten (2005); Brannon and Knoblauch (1982); Cho, Schunn, and Charney (2006); Coogan (2001); Elbow (1973); Formo and Robinson Neary (2009); Geiger and Rickard (1999); Gere and Abbott (1985); Harris and Pemberton (1995); Hewett (2000, 2006); Hobson (1998); Honeycutt (2001); Inman and Sewell (2000); Kim (2004); Krych-Appelbaum and Musial (2007); Mabrito (1991); McGarrell and Verbeem (2007); Monroe (1998); North (1982); Palmquist, LeCourt, and Kiefer (1999); Silver and Lee (2007); Simple, Jeff Sommers, Stephenson, and Warnock (2011); J. Sommers (1989, 2002); N. Sommers (1980, 1982); Spear (1988, 1993); Straub and Lunsford (1995); Strasma (2009); Treglia (2009); and White (2007).
[9] This systematic methodology allows for the discovery of theory through the analysis of the data. Grounded theory is a generative method of coding and grouping data through ongoing analysis. Rather than approaching the data with a hypothesis, researchers using this method allow concepts to emerge from the coding and analysis, which can result in rich theories. In addition to Glaser and Strauss’ work, see work by Corbin and Strauss (2014).
