Campaigners raise awareness of AI risk

Civil liberties groups and digital rights campaigners are seeking to raise awareness of the risks of using artificial intelligence in mass surveillance technologies ahead of a forthcoming European Union report on the issue.

The group – a coalition of activist organisations – says its Reclaim Your Face initiative seeks to ban harmful uses of AI, such as biometric mass surveillance.

“Biometric data are data about our bodies and behaviours, which can divulge sensitive information about who we are,” it says. “For example, our faces can be used for facial recognition to make a prediction or assessment about us – and so can our eyes, veins, voices, the way we walk or type on a keyboard, and much more.”

Legislation coming

The EU executive plans to announce its legislative proposals on AI soon, which are expected to cover high-risk sectors such as healthcare, energy, transport and parts of the public sector.

The EU has already published its whitepaper proposing a risk-based approach to governing AI. The body hopes to develop a framework in which AI applications can be considered trustworthy. It also said that further work was needed to address the safety, liability and data and human rights risks around the use of novel technologies.

“The adoption of ethical and legal proposals relating to intellectual property, civil liability and ethics by MEPs in October last year indicates the direction of future legislation,” Luke Scanlon, head of fintech propositions at the law firm Pinsent Masons, has said. “The draft proposals focused on the issues to be addressed, including developments relating to the need for human-centric AI controls, transparency, safeguards against bias and discrimination, privacy and data protection, liability and intellectual property.”

UK approach

In the UK, recommendations on how to manage bias in algorithmic decision-making have also gained traction. While stopping short of recommending the creation of a new regulator, the report urged existing regulators to reassess whether their compliance procedures are adequate for such technologies. The House of Lords Liaison Committee has also recently called for sector-specific regulatory guidance on AI backed up by an overarching AI code of conduct.

Companies need to keep on top of the risks posed by such technologies, as we wrote in an article in 2019.

Key questions for business include:

  • Are companies aware of the presence of algorithmic risks?
  • How do companies develop policies and cultivate a corporate culture that ensures algorithmic risks are understood across their functions?
  • What does an effective algorithmic risk management framework look like?
  • What are the ethical considerations surrounding automated decision-making, including data collection and privacy concerns?
  • What talent do companies need in the era of algorithms?
  • How do we retain full control over the technologies that are impacting our lives and making the decisions for us?
