17 September 2023

Disinformation

'The efficacy of Facebook’s vaccine misinformation policies and architecture during the COVID-19 pandemic' by David A Broniatowski, Joseph R Simons, Jiayan Gu, Amelia M Jamison and Lorien C Abroms in (2023) 9(37) Science Advances comments 

 Online misinformation promotes distrust in science, undermines public health, and may drive civil unrest. During the coronavirus disease 2019 pandemic, Facebook—the world’s largest social media company—began to remove vaccine misinformation as a matter of policy. We evaluated the efficacy of these policies using a comparative interrupted time-series design. We found that Facebook removed some antivaccine content, but we did not observe decreases in overall engagement with antivaccine content. Provaccine content was also removed, and antivaccine content became more misinformative, more politically polarized, and more likely to be seen in users’ newsfeeds. We explain these findings as a consequence of Facebook’s system architecture, which provides substantial flexibility to motivated users who wish to disseminate misinformation through multiple channels. Facebook’s architecture may therefore afford antivaccine content producers several means to circumvent the intent of misinformation removal policies. 

Online misinformation undermines trust in scientific evidence (1) and medical recommendations (2). It has been linked to harmful offline behaviors including stalled public health efforts (3), civil unrest (4), and mass violence (5). The coronavirus disease 2019 (COVID-19) pandemic has spurred widespread concern that misinformation spread on social media may have lowered vaccine uptake rates (6, 7). Therefore, policymakers and public officials have put substantial pressure on social media platforms to curtail misinformation spread (8, 9). 

Years of “soft” remedies—such as warning labels by Twitter (10), YouTube (11), and Facebook (12) and attempts by these platforms to downrank objectionable content in search—have demonstrated some success (13); however, misinformation continues to spread widely online, leading many to question the efficacy of these interventions (14). Some have suggested that combining these soft remedies with “hard” remedies (15)—removing content and objectionable accounts (16–18)—could largely curtail misinformation spread (19). However, evidence for the short-term efficacy of hard remedies is mixed (20–24), and the long-term efficacy of these strategies has not been systematically examined. Hard remedies have also spurred accusations of censorship and threats of legal action (25, 26). There is therefore a critical need to understand whether this combination of remedies is effective—i.e., whether it reduces users’ exposure to misinformation—and if not, why not. 

Any evaluation of the efficacy of these remedies must be grounded in a scientific understanding of why misinformation spreads online. Prior work indicates that misinformation may spread widely on social media if it is framed in a manner that is more compelling than true information (27, 28). Users appear to prefer sharing true information when cued to think about accuracy (13); however, the social media environment may interfere with people’s ability to distinguish truth from falsehood (29, 30). Other studies have suggested that social media platforms’ algorithms facilitate the creation of “echo chambers” (31, 32), which increase exposure to content from like-minded individuals. Accordingly, prior interventions have focused on altering the social media environment to either reduce users’ exposure to misinformation or inform them when content is false. However, recent evidence suggests that people use online algorithms to actively seek out and engage with misinformation (33). Therefore, on the basis of prior theory (34, 35), we examine how a social media platform’s “system architecture”—its designed structure that shapes how information flows through the platform (34)—enables antivaccine content producers and users to flexibly (36) establish new paths to interdicted content, facilitating resistance to content removal efforts. 

We analyzed Facebook because it is the world’s largest social media platform. In December 2020, when COVID-19 vaccines first became available, Facebook had 2.80 billion monthly active users (37). As of April 2023, this number had grown to 2.99 billion monthly active users (25). We therefore conducted an evaluation of Facebook’s attempts to remove antivaccine misinformation from its public content throughout the COVID-19 pandemic. 

Of primary interest were the following three research questions: (i) Were Facebook’s policies associated with a substantial decrease in public antivaccine content and engagement with remaining antivaccine content? (ii) Did misinformation decrease when these policies were implemented? (iii) How might Facebook’s system architecture have enabled or undermined these policies?

...Our findings suggest that Facebook’s policies may have reduced the number of posts in antivaccine venues but did not induce a sustained reduction in engagement with antivaccine content. Misinformation proportions both on and off the platform appear to have increased. Furthermore, it appears that antivaccine page administrators especially focused on promoting content that outpaced updates to Facebook’s moderation policies: The largest increases appear to have been associated with topics falsely attributing severe vaccine adverse events and deaths to COVID-19 vaccines.
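
For readers unfamiliar with the comparative interrupted time-series design mentioned in the abstract, the sketch below illustrates the general technique on simulated data. It is not the authors' code: the weekly engagement counts, policy date, and variable names are hypothetical assumptions, chosen only to show how immediate (level) and sustained (slope) changes after a policy are estimated for a treated series relative to a comparison series.

```python
# Minimal comparative interrupted time-series (segmented regression) sketch.
# All data here are simulated; the "treated" and "comparison" groups stand in
# loosely for antivaccine and provaccine venues, purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = np.arange(104)        # two years of weekly observations (hypothetical)
policy_week = 52              # hypothetical policy-change point

def simulate(baseline, trend, level_shift, slope_shift):
    post = (weeks >= policy_week).astype(int)          # 1 after the policy
    time_since = np.where(post == 1, weeks - policy_week, 0)
    y = (baseline + trend * weeks
         + level_shift * post + slope_shift * time_since
         + rng.normal(0, 5, size=weeks.size))
    return pd.DataFrame({"week": weeks, "post": post,
                         "time_since": time_since, "engagement": y})

treated = simulate(100, 0.3, -15, 0.4).assign(group=1)   # drop, then rebound
control = simulate(80, 0.3, 0, 0.0).assign(group=0)      # no policy effect
df = pd.concat([treated, control], ignore_index=True)

# The coefficients on group:post and group:time_since estimate the policy's
# immediate and sustained effects in the treated group relative to the
# comparison group.
model = smf.ols(
    "engagement ~ week + post + time_since"
    " + group + group:week + group:post + group:time_since",
    data=df,
).fit()
print(model.summary())
```

In this framing, a policy that removes content but does not durably suppress engagement would show up as a negative level term that is offset by a positive post-policy slope term in the treated group, which is broadly the pattern the authors describe.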