Civil liberties groups and digital rights campaigners are seeking to raise awareness of the risks of using artificial intelligence in mass surveillance technologies ahead of a forthcoming European Union report on the issue.
“Biometric data are data about our bodies and behaviours, which can divulge sensitive information about who we are,” the campaigners say. “For example, our faces can be used for facial recognition to make a prediction or assessment about us – and so can our eyes, veins, voices, the way we walk or type on a keyboard, and much more.”
The EU executive plans to announce its legislative proposals on AI soon, which are expected to cover high-risk sectors such as healthcare, energy, transport and parts of the public sector.
The EU has already published its whitepaper proposing a risk-based approach to governing AI. The body hopes to develop a framework in which AI applications can be considered trustworthy. It also said that further work was needed to address the safety, liability, data protection and human rights risks arising from the use of novel technologies.
“The adoption of ethical and legal proposals relating to intellectual property, civil liability and ethics by MEPs in October last year indicates the direction of future legislation,” Luke Scanlon, head of fintech propositions at the law firm Pinsent Masons, has said. “The draft proposals focused on the issues to be addressed, including developments relating to the need for human-centric AI controls, transparency, safeguards against bias and discrimination, privacy and data protection, liability and intellectual property.”
In the UK, recommendations on how to manage bias in algorithmic decision-making have also gained traction. While stopping short of recommending the creation of a new regulator, the report urged existing regulators to reassess whether their compliance procedures are suited to such technologies. The House of Lords Liaison Committee has also recently called for sector-specific regulatory guidance on AI, backed up by an overarching AI code of conduct.
Companies need to stay on top of the risks posed by such technologies, as we wrote in an article in 2019.
Key questions for business include:
Are companies aware of the presence of algorithmic risks?