Methods

Study Design

This study reflects on WPAs' March 2020 choices in two ways: first, through a survey asking about decisions, obstacles, and how writing programs were supporting instructors (Phase 1); then, through two interviews, one focused on reflective narratives and the other examining support materials crafted for writing programs by those doing WPA-like work (Phase 2). The main question of this project is "What support did writing programs offer to instructors in March 2020 when a global health crisis suddenly forced us to change our methods of teaching and learning?" The Phase 1 survey, which this article focuses on, approaches this question through three sub-questions:

  1. What areas of instruction (technology, delivery, synchronous engagement, other support to instructors) did that support focus on?
  2. Was that support tailored to the specific, often tenuous situations of instructors in these programs (contingent faculty, graduate TAs)?
  3. What barriers existed to delivering that support?

I first designed this study as part of an Empirical Methods course in my doctoral program, before developing it further with my faculty advisor and in consultation with my university's Institutional Review Board (approved as IRB-2021-779). To capture these reflections before they faded from memory, I began Phase 1 of this study, the survey, in October 2021.

Survey Design & Distribution

I designed my survey after further reading on empirical research, in particular Jarrett's (2021) Surveys That Work, which focuses on making surveys easy and quick to complete. Because Jarrett recommends using few open-ended questions, most questions on the Phase 1 survey were multiple-choice, with a single open-ended question at the end.

The survey was conducted in Qualtrics survey software and distributed through writing studies mailing lists (NextGEN and WPA-A), on Twitter (with the hashtag #WPAlife), and later through direct recruitment emails. After the survey's initial circulation in October 2021, the study was revised to

  1. include direct recruitment, and
  2. expand the inclusion protocol to those who did WPA work but did not hold a named WPA position.

I was encouraged to pursue direct recruitment to increase the survey's response rate; the decision to revise the inclusion protocol was made for the same reason, and because, as literature like McLeod (2007) reminds us, those who do WPA work do not always hold WPA titles (or their positions are more complex than a single title, as we see in the Results). The direct recruitment list was drawn from my faculty advisor's contacts and from publicly available information, such as CWPA conference programs and users of social media hashtags like #WPAlife. These revisions, along with an additional push on social media and the mailing lists, more than doubled my responses: between October 2021 and January 2022, I received a total of 55. It is difficult to estimate an accurate response rate for this survey: the WPA-Announcements Google group has 822 members, but information on current NextGEN membership is hard to find, and the expanded inclusion protocol (beyond named WPA titles) makes the numbers fuzzier still.
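
For a rough sense of scale only: treating the WPA-A membership as the sole denominator would give 55 / 822 ≈ 6.7%. That denominator is too large in one sense (not every member does WPA work or saw the survey) and too small in another (the survey also circulated through NextGEN, Twitter, and direct recruitment), so the figure is at best a loose anchor, not a true response rate.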

After the Survey

Conversation guides for the Phase 2 interviews were drafted for IRB approval, then altered based on survey responses. Recruitment for Phase 2 occurred at the end of the Phase 1 survey, where participants could mark interest in being contacted about scheduling interviews; Phase 2 itself began over Spring 2022. The interviews used the conversation guide, adapted to each participant's survey responses and the support materials being examined. Survey data were linked to specific participants only if they took part in Phase 2, and only as a means of prompting conversation; if Phase 1 participants declined to participate in Phase 2, their personal information was not collected.

In January 2022, I downloaded the Phase 1 survey data from Qualtrics into one large spreadsheet containing all questions and responses, then added sheets to focus on specific questions. A few multiple-choice questions left room for "other" responses in a textbox, which I coded on these additional sheets. Responses to the one open-ended question, "If you could time-travel to two weeks before the emergency shift to remote teaching and give yourself a quick bit of WPA-related advice, what would you say?", were also coded after data collection. However, while discussing the study's progress with my peer-mentoring/advising group, I identified some early themes in this question's responses to share with them, and I kept their feedback in mind during final coding. Coding therefore happened in two rounds: one in mid-November, ahead of updating my colleagues on the study, and one in mid-January. In the initial round, I flagged notable "other" responses and drafted preliminary codes for the open-ended question. In the second round, I defined further codes to, for example, untangle how OWI-specific preparation differs from technological familiarity, and how both differ from other tech-support infrastructure. Notably, this after-the-fact coding helped me rethink my initial questions: while I had been asking how WPAs conceptualized empathy in the early pandemic, many responses raised some form of self-care, both for WPAs themselves and for their instructors, giving me a more specific lens for thinking about WPAs' emotional labor.
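
The coding itself was done by hand in the spreadsheet, but the per-question bookkeeping described above can be sketched in code for readers who work programmatically. The sketch below is illustrative only, assuming a hypothetical Qualtrics CSV export and invented column names (ResponseId, Q5, Q5_other); it is not the study's actual pipeline.

    # Illustrative sketch only: the study organized these data by hand in
    # spreadsheet software; file and column names here are hypothetical.
    import pandas as pd

    # Load the full export: one large table of all questions and responses.
    responses = pd.read_csv("phase1_survey_export.csv")

    # Focus on a single multiple-choice question, assuming a column "Q5"
    # paired with a free-text column "Q5_other" for "other" answers.
    q5 = responses[["ResponseId", "Q5", "Q5_other"]]

    # Pull the free-text "other" answers into their own view for hand-coding,
    # mirroring the per-question sheets described above.
    q5_other = q5[q5["Q5_other"].notna()].copy()
    q5_other["code"] = ""  # blank column for codes assigned during analysis

    q5_other.to_csv("q5_other_for_coding.csv", index=False)

Each additional question would get the same treatment: a narrow view of the relevant columns, saved separately, so codes can be applied without touching the master export.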


Results

New Priorities in Strange Times