The European Union has launched a regulatory regime for artificial intelligence that bans systems likely to pose an unacceptable risk to society.
“On Artificial Intelligence, trust is a must, not a nice to have,” said Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”
The EU has made trust a central plank of its framework as it aims to enshrine ethical standards into new AI technologies.
AI that regulators assess as a threat to safety, livelihoods and rights faces an outright ban. “This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments,” the press release said.
The regulation notes that citizens are already protected from some other practices by existing rules, such as data and consumer protection law. But it explicitly prohibits law enforcement agencies and others from using controversial technologies such as real-time facial recognition in all but exceptional circumstances: “The use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply.”
The EU’s list of high-risk applications is long and may prove contentious. Using AI to score exam papers, authenticate travel documents or operate critical infrastructure is all considered high risk. Organisations will need to clear such applications with regulators before they reach the market. They also need to demonstrate:
The regulations will apply to all companies using AI in Europe. The UK, meanwhile, has been developing its own approach to regulation, which seems unlikely to result in countrywide rules covering all sectors.