It has been a long time coming, but the UK government has published extensive plans to tackle the problem of online harm. The government is to establish a new statutory duty of care, which it hopes will make “companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services”.

Compliance will be overseen and enforced by an independent regulator, the government said in its Online Harms White Paper, which sets out the plans.

The proposals will apply to any company that allows users to share or discover user-generated content, or to interact with each other online. The companies affected include social media platforms, file hosting sites, public discussion forums, messaging services and search engines.

“The era of self-regulation for online companies is over,” Digital Secretary Jeremy Wright said. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action.”

Information Commissioner Elizabeth Denham said: “I think the white paper proposals reflect people’s growing mistrust of social media and online services. People want to use these services, they appreciate the value of them, but they’re increasingly questioning how much control they have of what they see, and how their information is used. That relationship needs repairing, and regulation can help that. If we get this right, we can protect people online while embracing the opportunities of digital innovation.”

The measures include:

  • A new statutory ‘duty of care’ to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services
  • Further stringent requirements on tech companies to ensure child abuse and terrorist content is not disseminated online
  • Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this
  • Making companies respond to users’ complaints, and act to address them quickly
  • Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods
  • A new “Safety by Design” framework to help companies incorporate online safety features in new apps and platforms from the start
  • A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism

The proposals will depend on the cooperation of the major technology platforms, since the solution requires them to deploy algorithms to help tackle online harm and to comply with the code of practice on transparency.

This part of the plan mirrors attempts by the European Commission (EC) to introduce a code of conduct for social media platforms in the run-up to the European elections taking place in May 2019. The EC launched its Action Plan Against Disinformation in December last year and is reporting monthly on the progress platforms such as Facebook, Google and Twitter are making in complying with the voluntary code.

The code expects social media platforms to combat disinformation in the run-up to the elections by publishing information on topics such as funding for political advertising, the takedown of misleading political information, and the use of bots to spread disinformation.

Consultation on the UK government’s plans closes on 1 July 2019.