11 September 2023

Modding

‘From Community Governance to Customer Service and Back Again: Re-Examining Pre-Web Models of Online Governance to Address Platforms’ Crisis of Legitimacy’ by Ethan Zuckerman and Chand Rajendra-Nicolucci in (2023) Social Media + Society comments

As online platforms grow, they find themselves increasingly trying to balance two competing priorities: individual rights and public health. This has coincided with the professionalization of platforms’ trust and safety operations—what we call the “customer service” model of online governance. As professional trust and safety teams attempt to balance individual rights and public health, platforms face a crisis of legitimacy, with decisions in the name of individual rights or public health scrutinized and criticized as corrupt, arbitrary, and irresponsible by stakeholders of all stripes. We review early accounts of online governance to consider whether the customer service model has obscured a promising earlier model where members of the affected community were significant, if not always primary, decision-makers. This community governance approach has deep roots in the academic computing community and has re-emerged in spaces like Reddit and special purpose social networks and in novel platform initiatives such as the Oversight Board and Community Notes. We argue that community governance could address persistent challenges of online governance, particularly online platforms’ crisis of legitimacy. In addition, we think community governance may offer valuable training in democratic participation for users. 

Since the earliest days of computing, people have used information technology to converse with one another. Four years before the internet, Noel Morris and Tom Van Vleck wrote both an electronic mail system and a real-time chat system for MIT’s Compatible Time-Sharing System (CTSS), allowing users who logged onto the single shared computer to leave messages for one another or send messages to another user’s terminal (Van Vleck, 2012). Within three years of the introduction of the internet, email became the primary use of a network initially established to let computer scientists run programs on remote machines (Sterling, 1993). France’s Minitel service, designed to give users access to an electronic telephone directory and the ability to make travel reservations online, quickly became dominated by chat services, particularly erotic chat (Tempest, 1989). People want to talk to one another and will find ways to do so as soon as they are technically capable of connecting to one another. Unfortunately, as soon as people are able to talk to one another, they are also able to harm each other. Spam has undermined the utility of email and largely destroyed Usenet, the dominant community platform of the academic internet in the 1980s and early 1990s. Harassment and hate speech have become facts of life for users of many online systems, particularly for women, people of color, and LGBTQIA+ people. People often behave differently online than they would offline (Suler, 2004), and the impetus for humans to harass each other via digital tools is at least as strong as the impulse to connect.

The emergence of trust and safety as a professional discipline reflects the centrality of issues like content moderation, spam and fraud prevention, and efforts to combat child sexual abuse material (CSAM) to the operation of platforms that enable user-generated content and conversation. As Tarleton Gillespie (2018) notes in Custodians of the Internet, “Platforms are not platforms without moderation.” Recent efforts to recognize trust and safety as a profession, with the establishment of the Trust & Safety Professional Association in 2020 and the emergence of the Journal of Online Trust and Safety in 2021, are overdue, as the work of policing online spaces traces back at least to the 1980s, if not earlier.

One danger of losing the early history of online governance is a narrowing of possible futures, making it seem as if the contemporary model for governing online spaces, in which professionals decide what behavior is acceptable with little input from members of the community, is the way it has always been done. We refer to this model as the “customer service” model and contrast it with earlier models of online governance in which community members were significant, if not always primary, decision-makers about the online spaces they were a part of. This article examines three paradigms of online governance that preceded the contemporary customer service model and suggests that varying degrees of community governance may be a viable and socially beneficial option for many online spaces.

This article is far from an exhaustive history of early online governance or of the emergence of the customer service model, though both histories are needed. While there has been excellent work calling attention to the complexities of trust and safety (Gillespie, 2018; Gray & Suri, 2019), it has focused primarily on the “web 2.0” social media platforms that emerged in the mid-2000s; the shift toward the customer service model, however, began in the late 1980s and was cemented in place by the mid-1990s. This is also an opinionated and personal history, as one of the authors (Zuckerman) built the early content moderation department for Tripod.com, one of the web’s first user-generated content sites, from 1995 to 1999.