1.ii. "Software Studies" on the Rise
First and foremost, the Oulipo’s approach to constraint as an inventional method—and especially their use of mathematics as a basis for experimentation—can be valued as a way for the field of computers and writing to delve more deeply into the programmable dimensions of writing in new media. If the concepts, rules, and procedures of programming languages are viewed as a generative medium for exploration, scholars in computers and writing could contribute to the emergent field of “software studies” by identifying and experimenting with new forms of textuality and new processes of writing in new media.
In his book Software Takes Command, Lev Manovich argues that the study of software has been largely ignored in the humanities and social sciences. While scholars associated with these disciplines have established approaches to the study of cyberculture, new media, Internet studies, and other related fields, the impact of software has been of marginal interest: "the underlying engine which drives most of these subjects—software—has received little or no direct attention" (Manovich 4). Adding to the force of his claim is Matthew Fuller's description of software as a "blind spot [in] the wider, broadly cultural theorization and study of computational and networked digital media" (3).
Both of these scholars advocate the establishment of a field called “software studies,” and they offer compelling reasons for doing so. Manovich’s case rests on his analysis of global economic trends. He argues that commercial software development, which includes applications for everything from blogging and mapping to data analysis and writing, is at the forefront of the global economy. Companies like Google, Oracle, Yahoo!, Microsoft, and Amazon sit at the top of financial indexes and market rankings. These companies represent the vanguard of today’s post-industrial economy: “If electricity and the combustion engine made industrial society possible, software similarly enables global information society” (4). Based on this economic analysis, Manovich argues that failing to recognize the role of software renders our scholarly work incomplete:
if we want to understand contemporary techniques of control, communication, representation, simulation, analysis, decision-making, memory, vision, writing, and interaction, our analysis can’t be complete until we consider this software layer. (7)
Likewise, Fuller argues that software is an inescapable part of the lives of citizens in the “global north.” Software has become a “putatively mature part of societal formations,” and its youngest generations are growing up under its many influences. For this reason, scholars in the humanities and social sciences “need to gather and make palpable a range of associations and interpretations of software to be understood and experimented with” (3). Fuller advocates that scholars extend their interests in digital media and culture by learning key concepts and methods in modern programming. Based on the essays in his edited collection, Software Studies: A Lexicon, which is touted as the first book with the moniker “software studies” in its title, scholars associated with this emergent field will be able to work both theoretically and practically with key concepts and methods in modern programming languages. Scholars should know what variables, data types, functions, loops, classes, and objects are. They should know concepts such as encapsulation, elegance, and ethnocomputing, and the basic differences between procedural and object-oriented languages.
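To ground these terms, the following short sketch, written in TypeScript as my own illustration rather than an example drawn from Fuller's collection, shows a variable bound to a particular data type, a function, a loop, and a class whose private field demonstrates encapsulation:

// A variable bound to a value of a particular data type (here, a string).
const greeting: string = "Hello, software studies";

// A function: a named, reusable procedure that accepts input and returns output.
function shout(text: string): string {
  return text.toUpperCase() + "!";
}

// A loop: the same operation repeated over a sequence of values.
for (const word of greeting.split(" ")) {
  console.log(shout(word));
}

// A class bundles data and behavior into objects. The private field illustrates
// encapsulation: code outside the class cannot read or modify `count` directly.
class Counter {
  private count = 0;

  increment(): void {
    this.count += 1;
  }

  value(): number {
    return this.count;
  }
}

const counter = new Counter(); // an object, i.e., an instance of the class
counter.increment();
console.log(counter.value()); // prints 1

The function and loop in the first half of the sketch work in a procedural style, while the Counter class works in an object-oriented one, a contrast that maps onto the basic difference between procedural and object-oriented languages mentioned above.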
Inspired by Manovich’s and Fuller’s arguments, I can appreciate how they might read most studies of digital media as a viewer might experience the cinematic special effect known as the Hitchcock zoom or dolly zoom. Popularized by Alfred Hitchcock in his 1958 film, Vertigo, the effect is based on two opposing movements. The following quote describes its mechanics:
The camera starts at a distance from the subject with the lens completely zoomed in. This is the start point. From here, when the shot begins, the camera tracks into the subject and at the same time the lens is zoomed out relatively so that the subject in the foreground remains the same size. This causes the backdrop to increase in depth of field as the lens is zoomed out causing a sense of gaining distance between the subject and the backdrop. (Valluri, par 5; my emphasis)
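Put geometrically (this is my gloss, assuming an idealized pinhole model, rather than anything stated in Valluri's description), a lens with angle of view $\theta$ captures a slice of the scene approximately $w = 2\,d\,\tan(\theta/2)$ wide at a subject standing at distance $d$. The subject therefore keeps the same apparent size whenever $d\,\tan(\theta/2)$ is held constant: tracking in shrinks $d$, zooming out widens $\theta$, and the two movements cancel for the subject while the far more distant background appears to fall away.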
As the cinematographer tracks the camera in toward the scene, s/he zooms out the lens. The effect has been used in numerous films since Vertigo, including Jaws, Pulp Fiction, and The Lord of the Rings. My point in associating this effect with a research method that does not reconcile the machinic dimensions of new media (i.e., software) with the on-screen phenomena on which it usually focuses is this: as our methods of engaging with digital media grow increasingly sophisticated, they remain paradoxically “screen deep.” As we achieve greater levels of sophistication, drawing closer to the media that we study, we simultaneously marginalize the software underlying our studies. The closer we get to the screen, the farther away we push the machinic.
Manovich’s and Fuller’s concerns extend to the field of computers and writing, because few published books or articles affiliated with the field focus explicitly on key concepts in software programming or on specific methods in programming languages. Nor do the vast majority of books and articles include block quotes of executable code for discussion. While a handful of articles and essays over the past decade have focused on the importance of programming languages and executable code, these contributions represent a small, discontinuous set. In 1999, Ron Fortune and Jim Kalmbach guest-edited an issue of Computers and Composition about programming and writing. In my estimation, their issue represents the most compelling, in-depth treatment of topics related to programming in the field. In addition to their introduction, titled “From Codex to Code,” the issue included the following three articles that quote lines of code explicitly in the body of their texts: Cecilia Hartley’s "Writing (online) Spaces: Composing Webware in Perl," Alan Rea and Doug White’s "The Changing Nature of Code: Prose or Code in the Classroom," and Mark Haas and Clinton Gardner’s "MOO in Your Face: Researching, Designing, and Programming a User-Friendly Interface." These articles are important because they provide scholars and instructors in computers and writing with a solid basis for studying the rhetoricity of code as well as some of the connections between programming and writing. Unfortunately, the momentum that their issue represents was never sustained.
Since then, in 2003, N. Katherine Hayles published “Deeper into the Machine,” an article in which she challenges computers and writing scholars to incorporate the “machinic” dimensions of writing (i.e., code) into their work. In 2006, Computers and Composition published an article about Flash that included a section on ActionScript, and in 2008 an article in Technical Communication included a focus on code. Most recently, in 2009, two essays published in Computers and Composition Online include examples of executable code and some related commentary. The first is Anthony Ellerston’s essay, “New Media Rhetoric in the Attention Economy,” which includes a few snapshots of ActionScript programming in the Adobe Flash environment. The second is Brian Ballentine’s “Hacker Ethics and Firefox Extensions.” Under the header titled “Sample Hacks,” Ballentine includes excerpts of the JavaScript code used to generate the hacks about which he writes, and he justifies their inclusion as follows:
Although mastering Web Developer, Greasemonkey, or computer languages like JavaScript are not required to introduce students to white, black, and grey hat ethics, we should not shy away from engaging these new technologies. [Par. 5]
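To give a sense of the kind of script at issue, the following TypeScript sketch imagines a minimal browser hack in the spirit of the ones Ballentine describes; it is not his code, and the CSS class name and behavior are invented purely for illustration:

// A hypothetical, minimal browser "hack": hide any page element carrying an
// assumed CSS class name. This is an invented example, not Ballentine's code.
function hideSponsoredItems(): number {
  const items = document.querySelectorAll<HTMLElement>(".sponsored");
  items.forEach((item) => {
    item.style.display = "none"; // remove the element from view
  });
  return items.length; // report how many elements were hidden
}

console.log(hideSponsoredItems() + " elements hidden");

Even a script this small makes choices about what a page shows and to whom, which is part of what makes such code worth studying directly.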
Based on Manovich’s and Fuller’s calls for a turn to software, I argue that these scripts should be studied explicitly. They should not, as Ballentine implies, be supplied in a roundabout way that works around the current limits of writing with and about programming languages and code in computers and writing. The opportunity to develop connections between the machinic and screenic dimensions of writing and rhetoric in computational new media is one worth raising to a necessity.
From a technical standpoint, the central focus of the field is the study of application environments in which new forms of writing are identified and practiced. Beginning with early studies of word processors, the field has developed an impressive body of technical knowledge related to specific applications. Nevertheless, in my investigation of the subject, I have found few if any contributions that delve into the deeper programmable dimensions of those environments that provide access to a software language. Few essays and articles explore the “software layer” about which Manovich writes.
If Manovich’s and Fuller’s arguments are compelling, and the study of software provides a way for the field to extend its areas of expertise, then the Oulipo’s methods offer a productive approach. The Oulipo's use of mathematical concepts to generate new forms of textuality and inventional methods can be transposed to the exploration of programming concepts and procedures in writing and rhetoric. In particular, the algorithmic dimensions of computational new media can be explored.
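To anticipate how such a transposition might look, consider the Oulipo's N+7 procedure, in which each noun in a text is replaced by the noun seven entries after it in a chosen dictionary. The TypeScript sketch below is my own minimal illustration rather than an Oulipian or published implementation, and its short word list stands in for a full noun dictionary:

// A sketch of the Oulipian N+7 procedure: each noun in a text is replaced by
// the noun seven entries after it in a chosen dictionary. The short word list
// below stands in for a full noun dictionary and is included only for illustration.
const nouns: string[] = [
  "apple", "arch", "arm", "army", "art", "atlas", "atom",
  "axis", "badge", "ball", "band", "bank", "barge", "basin",
];

function nPlusSeven(text: string): string {
  return text
    .split(" ")
    .map((word) => {
      const index = nouns.indexOf(word.toLowerCase());
      if (index === -1) {
        return word; // words not found in the noun list are left untouched
      }
      // Step seven entries forward, wrapping around the end of the list.
      return nouns[(index + 7) % nouns.length];
    })
    .join(" ");
}

console.log(nPlusSeven("the arm of the atlas")); // "the ball of the barge"

Written this way, the constraint becomes an executable procedure, and varying the offset or the dictionary becomes an inventional move in its own right.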