In a discussion paper on the topic, the UK government has called for better understanding of artificial intelligence (AI) to balance the technology's potential risks and opportunities.

There are a wide range of potential social risks associated with the development of AI, the report said. Those could include, for instance, the degradation of the information environment as new algorithms spam false data onto existing platforms, unpredictable labour market disruption and bias in corporate and governmental systems.

“Long-term consequences, particularly as frontier AI becomes more embedded in mainstream applications and more accessible to children and vulnerable people, are highly uncertain,” it said.

Dual use

As well as such social risks, there was also the risk of misuse. In some cases, the benefits of AI could just as easily be turned into risks. For example, while AI systems have accelerated developments in the life sciences, those same capabilities could be used to create new viruses, poisons or chemical weapons. 

“While the impact of current systems on biological and chemical security risks is still limited, anticipated near-future capabilities have the potential to increase dual-use science capabilities,” the report said. “Current AI systems in particular pose risks where current biological and chemical supply chains already feature vulnerabilities.”

Cyber defenders also successfully use AI to help protect organisations. But the same tools have been deployed by skilled hackers so that, for example, viruses alter over time to avoid detection. In addition, social engineering attacks now use AI to gather intelligence and impersonate the voices of real people. The report expressed concern that much of this work could be automated and scaled up at speed in future.

The report also discussed the possibility that AI would slip out of the control of humans as its capabilities increased.

Risk management

Managing such risks will not be easy. AI safety standards are relatively immature compared with the technology itself and develop glacially. In addition, there is little global co-ordination between regulatory bodies.

This lack of progress is compounded because developers prioritise speed of deployment over safety. “Competition in AI has raised concern about potential ‘race to the bottom’ scenarios, where actors compete to rapidly develop AI systems and under-invest in safety measures,” it said.

Prime Minister Rishi Sunak said that the UK would not rush to regulate AI but would invest to better understand and evaluate the safety of AI models in government.