BOOK REVIEW BY JASON PALMER

AI and Writing, by Sidney Dobrin

(Broadview Press, 2023)

Click play to view the book review.

Jason Palmer is an instructor of English at Georgia Gwinnett College and a PhD student at Georgia State University.

The video above is based on a ChatGPT interaction in which ChatGPT was prompted to act as an interviewer.

*The format of this book review is meant to mimic the style and pace of these chats. Please adjust your playback speed and/or pause as necessary.

*The simple letter-by-letter appearance of text in the ChatGPT interface may be changing the way we read and process text, and even how we think about the entity producing it. Compare this style to a text message conversation, where readers see three pulsing dots before the entire message appears. Both are deliberate UX choices made by the tech companies.

Reflecting on Collaborating with ChatGPT

Academic writing is rarely an independent process. I urge my students to talk through the subjects they want to write about with friends and family, to help one another improve their drafts through peer revision activities, and to visit the writing center to work one-on-one with tutors. Such advice is common among writing instructors. We don't expect students to produce intelligent prose with zero assistance, in part because nearly any author's final product benefits from teamwork along the way.

Emerging scholars seeking to publish often learn the same lesson about collaboration. Academic publishing at the highest level is almost always collaborative, with colleagues, editors, and peer reviewers all contributing effort and ideas throughout the phases of the writing process. Extremely rare is the monograph, dissertation, or nonfiction book that lacks a robust acknowledgments section.

Despite academic writing's affinity for collaboration, these newly popular non-human collaborators are under intense scrutiny from researchers and educators around the world. This scrutiny is appropriate. Generative AI tools built on large language models (LLMs) can do at least some of what a human collaborator can do: sometimes better, sometimes worse, and sometimes dangerously worse. Therefore, all writers should carefully consider how and when it is appropriate to engage with these tools, if at all. Since early 2023, I have found myself deliberating over these questions non-stop as the tools evolve at a staggering pace.

For the moment, to understand these tools and evaluate them for myself, I deem the risks and downsides of thoughtful experimentation worth the potential benefits the technology can provide. The risks are both known and unknown. Known risks include the perpetuation of bias encoded in LLM training data, the incorporation of misinformation, the production of unoriginal text, and intellectual-property infringements involving both training data and user inputs. Harder-to-predict risks include the future effects of increased dependency on the technology and the potential fallout from advanced systems that could further escape human understanding and control. These are all valid concerns. Another common concern (or complaint) about computer-generated text is that it amounts to only lifeless or boilerplate writing; however, at least one award-winning author has already provided strong evidence to the contrary.

For me, there is only one convincing way to know what benefits generative AI might offer the writer: I need to see for myself. If these systems had longer track records, I might rely solely on experts with more experience than mine, but because the phenomenon is so recent, I have decided instead to develop my own expertise, informed by both current researchers and my own first-hand experience.

The book review on this webpage is one of many experiments I have conducted with language programs that communicate with me both coherently and fluently much of the time. In this case, the tool may have lightened some of the intellectual workload of prioritizing and organizing the content of my writing (perhaps I should say "our writing," since I included the machine's text outputs, unaltered, in the finished piece). By presenting me with follow-up questions based on my own responses, the generative AI in interview mode compelled me to think and elaborate. By being available any time, night or day, the generative AI provided a convenience that no human ever could. And by allowing me to prompt and re-prompt and re-prompt again, the generative AI afforded me a level of patience and immediate responsiveness that I doubt the best humans could match. These are just some of the benefits generative AI offered me for this project, but I would not go so far as to say it exceeded the competence of an above-average human interviewer.

When we seek assistance with our writing from peers and editors, sometimes the help we get is beneficial, sometimes it is not, and in some unfortunate cases, it can be truly counterproductive. It is always up to us as authors to differentiate and filter, to use or ignore. Assistance from generative AI is no different. In the case of my book review project, the machine didn't so much offer advice as guide an inquiry. This use proved somewhat helpful and also somewhat constraining. But nothing in the interaction strikes me as wildly different from what I would expect from a human interviewer given the same instructions.

It is fair to worry about the threats generative AI may pose to writers and scholars, but many concerns about writers not thinking and writing for themselves ignore both the collaborative nature of human writing and the computer's present ability to take on a portion of that collaborative role.