09 September 2020

Disinformation and Platforms

Last week's Digital Disinformation and Vote Suppression report from the US Brennan Center for Justice comments that

Election officials, internet companies, the federal government, and the public must act to defend the 2020 elections against digital disinformation attacks designed to suppress the vote. U.S. elections face extreme pressure in 2020. The Covid-19 crisis has created new challenges for election officials and pushed them to make last-minute changes to the voting process, typically with resources that were already stretched thin. Pandemic-related voting changes have become an election issue themselves, with political actors sowing confusion for the benefit of their party. Bad actors have circulated lies to trick certain groups out of voting — and thanks to social media, these deceptive practices can instantly reach huge numbers of people. Experts warn that foreign powers have learned from Russia’s 2016 election interference efforts and will try to covertly influence the American electorate this year. 

State and local election officials play a crucial role in defending U.S. elections against these threats and in protecting American voters from disenfranchisement due to disinformation. Internet companies and members of the public can also take action against deceptive practices, voter intimidation, and other forms of digital vote suppression. In all cases, accurate information from trusted official sources provides the best antidote to disinformation about voting.

The report's Summary Recommendations are:

Election officials should:

  • Develop plans and procedures to publicize corrective information.
      ◦ Make written plans to push out correct information without repeating falsehoods.
      ◦ Establish channels of communication with the public and with key actors like community groups, candidates, and the media.

  • Publicize official sources of accurate information to build public trust.
      ◦ Disseminate information on well-publicized sources like websites, emails, advertising, and social media accounts that are active and verified by the platform.

  • Protect official sources from hacking and manipulation.
      ◦ Secure official websites and social media accounts from being used to trick voters by implementing cybersecurity best practices like tight access controls, multifactor authentication, and anti-phishing procedures.

  • Monitor for disinformation.
      ◦ Actively watch for falsehoods about elections, set up ways for the public to report instances of digital disinformation, work with internet companies, and participate in information-sharing networks.

  • Build relationships with communities and media.
      ◦ Perform early public outreach to communities, including in appropriate languages, to facilitate communication before an incident occurs.
      ◦ Build relationships with local and ethnic media.

Internet companies should:

  • Proactively provide information about how to vote. 

  • Maintain clear channels for reporting disinformation. 

  • Take down false information about voting but preserve the data. 

  • Protect official accounts and websites. 

  • Push corrective information to specific users affected by disinformation.

The federal government should:

  • Enact the Deceptive Practices and Voter Intimidation Prevention Act. 

  • Share intelligence about incidents of disinformation and help disseminate correct information.

'Content Not Available: Why The United Kingdom's Proposal For A “Package Of Platform Safety Measures” Will Harm Free Speech' by Mark Leiser and Edina Harbinja in (2020) Technology and Regulation comments 

This article critiques key proposals of the United Kingdom’s “Online Harms” White Paper; in particular, the proposal for new digital regulator and the imposition of a “duty of care” on platforms. While acknowledging that a duty of care, backed up by sanctions works well in some environments, we argue is not appropriate for policing the White Paper’s identified harms as it could result in the blocking of legal, subjectively harmful content. Furthermore, the proposed regulator lacks the necessary independence and could be subjected to political interference. We conclude that the imposition of a duty of care will result in an unacceptable chilling effect on free expression, resulting in a draconian regulatory environment for platforms, with users’ digital rights adversely affected.