The European Union has launched a regulatory regime for artificial intelligence that bans systems likely to pose an unacceptable risk to society.

The move creates the first-ever legal framework for AI, and the EU has also published separate rules on machinery that take account of the increasing ubiquity of AI in manufacturing.

“On Artificial Intelligence, trust is a must, not a nice to have,” said Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The EU has made trust a central plank of its framework as it aims to enshrine ethical standards in new AI technologies.

Unacceptable risks

AI that regulators assess as a threat to safety, livelihoods and rights faces an outright ban. “This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments,” the press release said.

The regulation notes that citizens are already protected from some other practices by existing rules on data and consumer protection, for example. But it explicitly prohibits law enforcement agencies and others from using controversial technologies such as real-time facial recognition in all but exceptional circumstances: “The use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply.”

High risks

The EU’s list of high-risk applications is long and may prove contentious. Using AI to score exam papers, to authenticate travel documents or in critical infrastructure is all considered high risk. Organisations will need to clear such applications with regulators before they appear on the market. They will also need to demonstrate:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system, to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • A high level of robustness, security and accuracy.

UK practices

The regulations will apply to all companies using AI in Europe. The UK, meanwhile, has been developing its own approach to regulation, which seems unlikely to result in countrywide rules covering all sectors.

Download the EU rules on AI and machinery.