Labor

In this section, we discuss why platform labor practices are worth our attention as technofeminist researchers. In particular, we discuss how the digital labor of users can be exploited by the corporate platforms that design and implement policies for how that labor will be valued. Yet we also want to draw attention to deeper means of production—that is, the labor performed by low-waged workers directly employed or contracted by platforms. For us, opening up discussions of labor in this way allows researchers to consider issues like content moderation, click work, hardware manufacturing, and commissioned crowdwork jobs (e.g., Uber, Amazon Mechanical Turk, Upwork). Although digital platforms are often understood as highly automated industries, our aim here is to showcase the often-invisible conditions of human labor embedded in the foundations of the platform economy. In doing so, we hope to show how technofeminism can shed light on the labor practices of corporate platforms.

Buzz phrases like gift economy, peer economy, sharing economy, and, now, gig economy have saturated understandings of how value is consumed and distributed on digital platforms. An undercurrent in all of these discussions has been the issue of digital labor. Since the emergence of Web 2.0, numerous scholars have discussed how digital economies affect the production, exchange, and circulation of writing and media content (e.g., DeVoss & Porter, 2006; Dush, 2015; Eyman, 2015; Porter, 2009). Scholars have also addressed the darker sides of “free labor” (Terranova, 2004) in a Web 2.0 context. Although individual writers might benefit from producing and circulating content online, such labor also comes at a price when considering issues of privacy, surveillance, and ownership (Beck, 2015; Beck et al., 2016; McKee, 2011; Reyman, 2013). The networked infrastructure that makes sharing and interacting with media possible—ease of participation online, folksonomic building of content, collaborative technology interfaces, and so on—propagates an ecosystem that, though valuable to individual users in certain ways, ultimately tips the scale toward the corporate entities that define the terms of service for exchange.

In such an economy, scholars have argued that digital labor exploitation has adversely affected women and people of color (e.g., Duffy, 2017; Nakamura, 2014). For instance, citing statistics showing that women are more likely than men to use Facebook, contribute more original content, and use image-sharing platforms like Instagram, Lisa Nakamura (2015) noted that “women perform much of the ‘free labor’ of social media” (p. 223). Nakamura also pointed out that vulnerable populations such as “children, poor women, migrants, and older women” are less likely to “quit” a platform like Facebook and are thus more likely to be subjected to its surveillance and data-brokering practices (p. 223).

Although the “free labor” of everyday participation can certainly be a focus for technofeminist work, in the remainder of our discussion we focus on a particular strand of (often-invisible) labor: what media scholar Sarah T. Roberts (2016) called commercial content moderation. Although some of the labor of content moderation happens via automated processes (algorithms filtering content) and user-generated feedback (the free labor of users who flag inappropriate content), we dwell here with another category: waged content moderation work, in which paid workers are asked to determine the appropriateness of content for the platform by which they are employed or contracted.

Commercial content moderators are dispersed around the world and engage in low-waged work that demands secrecy from the general public. Although these workers play a key role in determining the appropriateness of content on particular platforms, they are “relatively low status workers, who must review, day in and day out, digital content that may be pornographic, violent, disturbing, or disgusting” (Roberts, 2016, pp. 147–148). In a recent public forum hosted by Roberts at UCLA, two commercial content moderators discussed the psychological and demoralizing effects of this kind of work. Often working on commission, content moderators sometimes earn only a few cents for reviewing a single image or other piece of content. These workers are the human element behind moderation, but as Roberts pointed out, the labor of content moderation is designed to fade into the background and, in so doing, to evade as much social responsibility as possible. This practice can work to the advantage of commercial platforms, because such invisible labor allows platforms to appear as neutral hosting sites while gaining profits when racist, sexist, and violent content sells.

To illustrate the complexities and responsibilities of content moderation work, we turn to a case involving Facebook Live, content moderation, and the life and death of Philando Castile, yet another Black man killed at the hands of police. In our brief discussion, we cannot cover the intricacies of the continued dehumanization of Black bodies on social media. As such, we point readers to the work of Black public intellectuals and scholars who have drawn attention to how advocacy, activist, and other awareness efforts can easily be drowned out by the spectacle of Black death (see especially, Blay, 2016; Noble, 2018). Here, we describe how the violences of Castile’s livestreamed death were subject to the human labor of content moderation.

On July 6, 2016, Castile’s death was broadcast on Facebook Live, a video-streaming service that enables users to record and share live video. Castile’s girlfriend, Diamond Reynolds, took to Facebook Live after Jeronimo Yanez, a Minnesota police officer, shot and killed Castile during a routine traffic stop. The livestream spread rapidly and eventually made its way to mainstream news outlets. Although Reynolds’ video later proved to be decisive evidence in the decision to charge Yanez with manslaughter and reckless discharge of a firearm (he was later acquitted by a jury of seven men and five women—only two of whom were Black), the video, like all Facebook content, was subject to human intervention by way of commercial content moderators.

Image depicts a screenshot from Facebook indicating that the page can no longer be accessed. To depict this, Facebook includes a picture of its familiar branding—a human hand giving a thumbs-up. In this image, however, the thumb is wrapped in a bandage.

And, in fact, Reynolds’ livestream was removed from Facebook just one hour after she posted it. Although the video was eventually reinstated with a disclaimer from Facebook about its graphic nature, Facebook insisted that its temporary removal was a “technical glitch.” Two days after the livestream, however, Facebook issued a statement noting, “We understand the unique challenges of live video. We know it’s important to have a responsible approach. That’s why we make it easy for people to report live videos to us as they’re happening. We have a team on-call 24 hours a day, seven days a week, dedicated to responding to these reports immediately.” Facebook continued, “One of the most sensitive situations involves people sharing violent or graphic images of events taking place in the real world. In those situations, context and degree are everything.” Facebook routinely denies that it is a media company and instead insists that it is a more neutral technology company; however, we see that Facebook—and, more particularly, content moderators—make decisions about context and degree. Given the importance of the video, we may ask: What would have happened to the case against Yanez if this video had been permanently removed by Facebook? What other videos have Facebook content moderators removed from public view? How do content moderators discern context and degree?

There are no easy answers for this case or for other cases involving content moderation. Yet we find intersectional technofeminism important for offering analytical approaches to question and disrupt the status quo of content moderation work. From such a vantage point, it would be inadequate to place all responsibility on the workers who follow corporate guidelines to decide what is appropriate (or not) for their contracted platform. Such an approach would ignore the potentially precarious working conditions of content moderators. Instead, technofeminist researchers might begin by more thoroughly investigating the emotional labor of content moderation work. What does it feel like to respond to videos involving misogyny, brutality, and other forms of violence? What is the emotional weight of doing such work day in and day out?

Further, technofeminist research might take a more systemic approach to studying and intervening in the brand-forward work of content moderation, examining, for example, how “platforms make active decisions about what kinds of racist, sexist, and hateful imagery and content they will host and to what extent they will host it” (Roberts, 2016, p. 152). Still, understanding the precise labor practices behind these decisions is no easy task; as Roberts (2018) described, a recurring “logic of opacity” in platforms’ decision-making processes poses a direct problem for critical analysis and intervention. She noted, “Even when platforms do acknowledge their moderation practices and the human workforce that undertakes them, they still are loath to give details about who does the work, where in the world they do it, under what conditions[,] and whom the moderation activity is intended to benefit.” We need more research—and more public and community-building work—to break down the opacity at play in content moderation work. Technofeminism—attuned to questions of design, policy, and cultures—can be a lens to do this work.

When technofeminist researchers ask questions about labor conditions on digital platforms, they are asking questions about exploitation, inequity, and responsibility. They are interrogating the historical legacies and current realities of racism and sexism entwined in the precarious working conditions of emerging digital economies. They are situating platforms not as some inevitable technical configuration, but as rhetorical assemblages that obscure their profit-driven agendas. As we’ve shown, conditions of labor come in many forms in today’s platform economy, and we are thus in need of flexible and critical methodological practices.
