The latter states:
2. Illegal and unacceptable content and activity is widespread online, and UK users are concerned about what they see and experience on the internet. The prevalence of the most serious illegal content and activity, which threatens our national security or the physical safety of children, is unacceptable. Online platforms can be a tool for abuse and bullying, and they can be used to undermine our democratic values and debate. The impact of harmful content and activity can be particularly damaging for children, and there are growing concerns about the potential impact on their mental health and wellbeing.
3. Terrorist groups use the internet to spread propaganda designed to radicalise vulnerable people, and distribute material designed to aid or abet terrorist attacks. There are also examples of terrorists broadcasting attacks live on social media. Child sex offenders use the internet to view and share child sexual abuse material, groom children online, and even live stream the sexual abuse of children.
4. There is also a real danger that hostile actors use online disinformation to undermine our democratic values and principles. Social media platforms use algorithms which can lead to ‘echo chambers’ or ‘filter bubbles’, where a user is presented with only one type of content instead of seeing a range of voices and opinions. This can promote disinformation by ensuring that users do not see rebuttals or other sources that may disagree and can also mean that users perceive a story to be far more widely believed than it really is.
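To make the mechanism in the previous paragraph concrete, the sketch below is a purely hypothetical illustration in Python (not drawn from the White Paper or from any real platform's code) of how a naive "more of what you engaged with" ranking rule can stop surfacing rebuttals entirely, producing the filter bubble described above. All names and data are invented.

```python
# Hypothetical sketch: a naive engagement-based ranker that feeds users
# more of the stance they already engaged with. All data is invented.
from collections import Counter

POSTS = [
    {"id": 1, "topic": "claim_A", "stance": "support"},
    {"id": 2, "topic": "claim_A", "stance": "rebuttal"},
    {"id": 3, "topic": "claim_A", "stance": "support"},
    {"id": 4, "topic": "claim_A", "stance": "rebuttal"},
]

def recommend(history, posts, k=2):
    """Rank posts by how often the user has engaged with the same stance."""
    stance_counts = Counter(p["stance"] for p in history)
    ranked = sorted(posts, key=lambda p: stance_counts[p["stance"]], reverse=True)
    return ranked[:k]

# A user whose only prior engagement was with a 'support' post.
history = [{"id": 0, "topic": "claim_A", "stance": "support"}]

for _ in range(3):
    shown = recommend(history, POSTS)
    history.extend(shown)  # engagement feeds back into the next ranking

print(Counter(p["stance"] for p in history))
# Counter({'support': 7}): rebuttals are never shown, so the user also
# never sees how widely the claim is disputed.
```

Real recommendation systems are far more sophisticated, but the feedback loop (engagement shapes ranking, and ranking shapes the next round of engagement) is the dynamic this paragraph describes.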
5. Rival criminal gangs use social media to promote gang culture and incite violence. This, alongside the illegal sale of weapons to young people online, is a contributing factor to senseless violence, such as knife crime, on British streets.
6. Other online behaviours or content, even if they may not be illegal in all circumstances, can also cause serious harm. The internet can be used to harass, bully or intimidate, especially people in vulnerable groups or in public life. Young adults or children may be exposed to harmful content that relates, for example, to self-harm or suicide. These experiences can have a serious psychological and emotional impact. There are also emerging challenges around the deliberately addictive design of some digital services and excessive screen time.
Our response
7. This White Paper sets out a programme of action to tackle content or activity that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by undermining our shared rights, responsibilities and opportunities to foster integration.
8. There is currently a range of regulatory and voluntary initiatives aimed at addressing these problems, but these have not gone far or fast enough, or been consistent enough between different companies, to keep UK users safe online.
9. Many of our international partners are also developing new regulatory approaches to tackle online harms, but none has yet established a regulatory framework that tackles this range of online harms. The UK will be the first to do this, leading international efforts by setting a coherent, proportionate and effective approach that reflects our commitment to a free, open and secure internet.
10. As a world leader in emerging technologies and innovative regulation, the UK is well placed to seize these opportunities. We want technology itself to be part of the solution, and we propose measures to boost the tech-safety sector in the UK, as well as measures to help users manage their safety online.
11. The UK has established a reputation for global leadership in advancing shared efforts to improve online safety. Tackling harmful content and activity online is one part of the UK’s wider ambition to develop rules and norms for the internet, including protecting personal data, supporting competition in digital markets and promoting responsible digital design.
12. Our vision is for:
- A free, open and secure internet
- Freedom of expression online
- An online environment where companies take effective steps to keep their users safe, and where criminal, terrorist and hostile foreign state activity is not left to contaminate the online space
- Rules and norms for the internet that discourage harmful behaviour
- The UK as a thriving digital economy, with a prosperous ecosystem of companies driving innovation in online safety
- Citizens who understand the risks of online activity, challenge unacceptable behaviours and know how to access help if they experience harm online, with children receiving extra protection
- A global coalition of countries all taking coordinated steps to keep their citizens safe online
- Renewed public confidence and trust in online companies and services
Clarity for companies
13. Increasing public concern about online harms has prompted calls for further action from governments and tech companies. In particular, as the power and influence of large companies has grown, and privately run platforms have become akin to public spaces, some of these companies now acknowledge their responsibility to be guided by norms and rules developed by democratic societies.
14. The new regulatory framework this White Paper describes will set clear standards to help companies ensure the safety of their users while protecting freedom of expression, especially in the context of harmful content or activity that may not cross the criminal threshold but can be particularly damaging to children or other vulnerable users. It will promote a culture of continuous improvement among companies, and encourage them to develop and share new technological solutions rather than simply complying with minimum requirements.
15. It will also provide clarity for the wide range of businesses of all sizes that are in scope of the new regulatory framework but whose services present much lower risks of harm, helping them to understand and fulfil their obligations in a proportionate manner.
A new regulatory framework for online safety
16. The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.
17. Compliance with this duty of care will be overseen and enforced by an independent regulator.
18. All companies in scope of the regulatory framework will need to be able to show that they are fulfilling their duty of care. Relevant terms and conditions will be required to be sufficiently clear and accessible, including to children and other vulnerable users. The regulator will assess how effectively these terms are enforced as part of any regulatory action.
19. The regulator will have a suite of powers to take effective enforcement action against companies that have breached their statutory duty of care. This may include the powers to issue substantial fines and to impose liability on individual members of senior management.
20. Companies must fulfil the new legal duty. The regulator will set out how to do this in codes of practice. If companies want to fulfil this duty in a manner not set out in the codes, they will have to explain and justify to the regulator how their alternative approach will effectively deliver the same or greater level of impact.
21. Reflecting the threat to national security or the physical safety of children, the government will have the power to direct the regulator in relation to codes of practice on terrorist activity or child sexual exploitation and abuse (CSEA) online, and these codes must be signed off by the Home Secretary.
22. For codes of practice relating to illegal harms, including incitement of violence and the sale of illegal goods and services such as weapons, there will be a clear expectation that the regulator will work with law enforcement to ensure the codes adequately keep pace with the threat.
23. Developing a culture of transparency, trust and accountability will be a critical element of the new regulatory framework. The regulator will have the power to require annual transparency reports from companies in scope, outlining the prevalence of harmful content on their platforms and what countermeasures they are taking to address it. These reports will be published online by the regulator, so that users and parents can make informed decisions about internet use. The regulator will also have powers to require additional information, including about how algorithms select content for users, and to ensure that companies proactively report on both emerging and known harms.
24. The regulator will encourage and oversee the fulfilment of companies’ existing commitments to improve the ability of independent researchers to access their data, subject to appropriate safeguards.
25. As part of the new duty of care, we will expect companies, where appropriate, to have effective and easy-to-access user complaints functions, which will be overseen by the regulator. Companies will need to respond to users’ complaints within an appropriate timeframe and to take action consistent with the expectations set out in the regulatory framework.
26. We also recognise the importance of an independent review mechanism to ensure that users have confidence that their concerns are being treated fairly. We are consulting on options, including allowing designated bodies to make ‘super complaints’ to the regulator in order to defend the needs of users.
27. Ahead of the implementation of the new regulatory framework, we will continue to encourage companies to take early action to address online harms. To assist this process, this White Paper sets out high-level expectations of companies, including some specific expectations in relation to certain harms. We expect the regulator to reflect these in future codes of practice.
28. For the most serious online offending, such as CSEA and terrorism, we will expect companies to go much further and demonstrate the steps taken to combat the dissemination of associated content and illegal behaviours. We will publish interim codes of practice later this year, providing guidance on tackling terrorist activity and online CSEA.
The companies in scope of the regulatory framework
29. We propose that the regulatory framework should apply to companies that allow users to share or discover user-generated content or interact with each other online.
30. These services are offered by a very wide range of companies of all sizes, including social media platforms, file hosting sites, public discussion forums, messaging services and search engines.
31. The regulator will take a risk-based and proportionate approach across this broad range of business types. This will mean that the regulator’s initial focus will be on those companies that pose the biggest and clearest risk of harm to users, either because of the scale of the platforms or because of known issues with serious harms.
32. Every company within scope will need to fulfil its duty of care, particularly to counter illegal content and activity, comply with information requests from the regulator, and, where appropriate, establish and maintain a complaints and appeals function which meets the requirements to be set out by the regulator.
33. Reflecting the importance of privacy, any requirements to scan or monitor content for tightly defined categories of illegal content will not apply to private channels. We are consulting on definitions of private communications, and what measures should apply to these services.
An independent regulator for online safety
34. An independent regulator will implement, oversee and enforce the new regulatory framework. It will have sufficient resources and the right expertise and capability to perform its role effectively.
35. The regulator will take a risk-based approach, prioritising action to tackle activity or content where there is the greatest evidence or threat of harm, or where children or other vulnerable users are at risk. To support this, the regulator will work closely with UK Research and Innovation (UKRI) and other partners to improve the evidence base. The regulator will set out expectations for companies to do what is reasonably practicable to counter harmful activity or content, depending on the nature of the harm, the risk of the harm occurring on their services, and the resources and technology available to them.
36. The regulator will have a legal duty to pay due regard to innovation, and to protect users’ rights online, taking particular care not to infringe privacy or freedom of expression. We are clear that the regulator will not be responsible for policing truth and accuracy online.
37. The government is consulting on whether the regulator should be a new or existing body. The regulator will be funded by industry in the medium term, and the government is exploring options such as fees, charges or a levy to put it on a sustainable footing. This could fund the full range of the regulator’s activity, including producing codes of practice, enforcing the duty of care, preparing transparency reports, and any education and awareness activities undertaken by the regulator.
Enforcement of the regulatory framework
38. The regulator will have a range of enforcement powers, including the power to levy substantial fines, that will ensure that all companies in scope of the regulatory framework fulfil their duty of care.
39. We are consulting on which enforcement powers the regulator should have at its disposal, particularly to ensure a level playing field between companies that have a legal presence in the UK and those which operate entirely from overseas.
40. In particular, we are consulting on powers that would enable the regulator to disrupt the business activities of a non-compliant company, measures to impose liability on individual members of senior management, and measures to block non-compliant services.
41. The new regulatory framework will increase the responsibility of online services in a way that is compatible with the EU’s e-Commerce Directive, under which they are shielded from liability for illegal content unless they have knowledge of its existence and have failed to remove it from their services in good time.
Technology as part of the solution
42. Companies should invest in the development of safety technologies to reduce the burden on users to stay safe online.
43. In November 2018, the Home Secretary co-hosted a hackathon with five major tech companies to develop a new tool to tackle online grooming, which will be licensed for free to other companies, but more of these innovative and collaborative efforts are needed.
44. The government and the regulator will work with leading industry bodies and other regulators to support innovation and growth in this area and encourage the adoption of safety technologies.
45. The government will also work with industry and civil society to develop a safety by design framework, linking up with existing legal obligations around data protection by design and secure by design principles, to make it easier for start-ups and small businesses to embed safety during the development or update of products and services.
Empowering users
46. Users want to be empowered to keep themselves and their children safe online, but currently there is insufficient support in place and many feel vulnerable online.
47. While companies are supporting a range of positive initiatives, there is insufficient transparency about the level of investment and the effectiveness of different interventions. The regulator will have oversight of this investment.
48. The government will develop a new online media literacy strategy. This will be developed in broad consultation with stakeholders, including major digital, broadcast and news media organisations, the education sector, researchers and civil society. This strategy will ensure a coordinated and strategic approach to online media literacy education and awareness for children, young people and adults.