'The Digital Services Act and the EU as the Global Regulator of the Internet' by Ioanna Tourkochoriti in (2023) 24(1) Chicago Journal of International Law discusses
the Digital Services Act (DSA), the new regulation enacted by the EU to combat hate speech and misinformation online, focusing on the major challenges its application will entail. However sophisticated the DSA may be, significant technological obstacles to detecting hate speech and misinformation online mean that further research is needed to implement it effectively. This Essay also discusses potential conflicts with U.S. law that may arise in the DSA’s application. The regulatory gap concerning platforms in the U.S. has meant that the platforms adapt to the most stringent standards of regulation existing elsewhere. In 2016, the EU agreed with Facebook, Microsoft, Twitter, and YouTube on a code of conduct countering hate speech online. Under this code, the platforms agreed to adopt rules, or Community Guidelines, and to practice content moderation in conformity with them. The DSA builds on this content moderation system by enhancing the internal complaint-handling systems the platforms maintain. In the meantime, some U.S. states, namely Texas and Florida, have enacted legislation prohibiting the platforms from engaging in viewpoint discrimination. Two federal courts of appeals that have examined the constitutionality of these statutes under the First Amendment are split in their rulings. This Essay discusses the implications for the platforms’ content moderation practices depending on which ruling is upheld. ...
Tourkochoriti argues
Extreme speech has become a major source of mass unrest throughout the world. Social media platforms magnify conflicts that lie latent within many societies, conflicts often further fueled by powerful political actors. Widespread misinformation during the COVID-19 pandemic, and perceptions that the platforms’ responses were inadequate, led the European Union (EU) to pass the 2022 Digital Services Act (DSA) to combat misinformation and extremist speech. The EU also strengthened its Code of Practice on Disinformation. Although these are important developments toward regulating hate speech online, the legislation will be difficult to implement. Major technological challenges in monitoring online hate speech necessitate further research. Furthermore, depending on legal developments in the United States (U.S.), the EU’s new legal regime might come into conflict with U.S. law, which would complicate the platforms’ content moderation processes.
The DSA responds to concerns about the shortcomings of the content moderation system currently applied by the major social media platforms. Although it offers a sophisticated regulatory model for combating hate speech and misinformation, further research is required in several areas related to detecting such content. The state of the relevant detection technologies raises several concerns, stemming from the limitations of the current artificial intelligence (AI) models developed to detect hate speech and misinformation. Research is also needed to determine the impact of exposure to hate speech online.
The U.S. offers extensive protection for freedom of speech. In many European states, by contrast, it is legitimate for the government to limit abuse of that freedom in order to protect citizens from the harm caused by hate speech; it is also legitimate to limit fake news. In the U.S., the sparse regulation of speech at the federal level has left a gap to be filled by states and civil society actors. Florida and Texas enacted legislation limiting online platforms’ discretion to refuse to host others’ speech. More commonly, contractual terms limit speech rights within private institutions in the U.S. Under pressure from the EU, the major U.S.-based social media companies (Facebook and Twitter) have created deontology committees to limit hate speech in the U.S. Questions have recently emerged among academics and political actors in the EU as to whether these platforms, as private actors, are limiting too much speech. The concern is that the platforms may be limiting even more speech than is acceptable in Europe, where government limits on hate speech are permitted.
Courts have the last word in Europe about whether social media users’ freedoms will be adequately protected. Citizens can bring claims before courts alleging violations of their constitutional rights by the platforms. The doctrine of the horizontal effect of constitutional rights, dominant in European states, enables them to do so. According to this doctrine, the Constitution applies not only to the vertical relationship between the state and its citizens, but also to the horizontal relationships between private parties within society. The constitutionally protected right to freedom of expression thus justifies government intervention to protect that right against civil society actors as well. In several EU member states, the DSA will supersede existing national legislation regulating hate speech and fake news online. France has enacted such legislation, the constitutionality of which was examined by the Constitutional Council; Germany has also enacted legislation that has generated significant case law in this area. The DSA will trump even U.S. free speech law insofar as the major companies are transnational and must therefore follow European rules as well as American law. However, depending on future court decisions, a conflict may emerge between U.S. law and the DSA. Should this conflict emerge, content moderation may become challenging for the platforms, as they would need to maintain different moderation standards in the U.S. and in the EU.
Social media companies are required to modify their operational practices to abide by the EU’s Code of Conduct Countering Illegal Hate Speech Online. Specifically, platforms must offer enhanced internal complaint-handling mechanisms, meet several procedural requirements when investigating complaints, and issue prior warnings before removing users.
The DSA applies to providers of intermediary services irrespective of their place of establishment or residence “in so far as they provide services in the Union, as evidenced by a substantial connection to the Union.” Social media companies modify their behavior to meet the most stringent legal regimes in order to be able to offer their services everywhere. Thus, by regulating online speech regionally, the EU is becoming a global regulator of the internet.
Part II of this Essay discusses the role platforms play in defining the public sphere today and the implications of that role for government regulation. Part III presents how the DSA complements existing codes of practice in countering illegal hate speech. Part IV investigates the challenges that regulating online extreme speech and misinformation poses for governments and platforms, challenges that relate to the state of the relevant detection technologies. Part V focuses on transnational enforcement of the Act and discusses possible areas of conflict with U.S. law. Further research is needed to develop guidelines for determining what counts as hateful, violent, dangerous, offensive, or defamatory expression, insofar as these forms of expression are subject to regulation under the DSA.