EC pilots AI ethics assessment

The European Commission (EC) has published ethical guidelines setting out how businesses and public organisations should use artificial intelligence (AI).

The document – Ethics guidelines for trustworthy AI – aims to help organisations create AI that users can trust. Bias in algorithms used by financial services firms and other entities has become a concern among leading experts in the field.

“In a context of rapid technological change, we believe it is essential that trust remains the bedrock of societies, communities, economies and sustainable development,” says the report. “We therefore identify trustworthy AI as our foundational ambition, since human beings and communities will only be able to have confidence in the technology’s development and its applications when a clear and comprehensive framework for achieving its trustworthiness is in place.”

Rather than setting out a range of abstract ethical principles, the guidance pilots a trustworthy AI assessment list. The report's authors – an independent high-level expert group on AI set up by the EC – urge organisations to provide feedback to help improve the list's practicality.

Trustworthy

To be trustworthy, AI must have three main components, each of which should be met throughout the application's entire lifecycle, the report suggests. AI should be lawful and comply with relevant rules and regulations; it should adhere to ethical principles and values; and “it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm,” it says.

In an ideal world, all three components would work in harmony and overlap in their operations. In practice, however, there can be tensions between them that need to be resolved by societal intervention.

The report’s trustworthy AI assessment list poses critical questions in seven key areas: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

Risk managers may be particularly interested in this last section, which poses key questions to management about the processes embedded in their applications and AI-driven systems.

Auditability

  • Did you establish mechanisms that facilitate the system’s auditability, such as ensuring traceability and logging of the AI system’s processes and outcomes?
  • Did you ensure, in applications affecting fundamental rights (including safety-critical applications), that the AI system can be audited independently?
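
The first of these questions turns on traceability – keeping a record of the system’s processes and outcomes. The guidelines do not prescribe any particular implementation, but as a rough illustration, the minimal Python sketch below wraps a model so that every decision is appended to an audit trail. The AuditedModel wrapper, the JSON-lines log format and the stand-in CreditScorer model are all hypothetical examples, not part of the EC guidance.

    import json
    import time
    import uuid

    class AuditedModel:
        """Hypothetical wrapper that logs every decision for later audit."""

        def __init__(self, model, log_path="audit_log.jsonl"):
            self.model = model        # any object exposing a predict() method
            self.log_path = log_path  # append-only JSON-lines audit trail

        def predict(self, features):
            outcome = self.model.predict(features)
            record = {
                "id": str(uuid.uuid4()),   # unique reference, e.g. for redress requests
                "timestamp": time.time(),  # when the decision was made
                "inputs": features,        # what the system was asked
                "outcome": outcome,        # what it decided
            }
            with open(self.log_path, "a") as f:
                f.write(json.dumps(record) + "\n")
            return outcome

    class CreditScorer:
        """Stand-in model, for demonstration only."""

        def predict(self, features):
            return "approve" if features["income"] > 30000 else "refer"

    scorer = AuditedModel(CreditScorer())
    print(scorer.predict({"income": 45000}))  # decision is returned and logged

Each record carries a unique identifier, so an independent auditor – or a user seeking redress – can trace an individual outcome back to the inputs that produced it.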

Minimising and reporting negative impact

  • Did you carry out a risk or impact assessment of the AI system, which takes into account different stakeholders that are (in)directly affected?
  • Did you provide training and education to help develop accountability practices?
      • Which workers or branches of the team are involved?
      • Does it go beyond the development phase?
      • Do these trainings also teach the potential legal framework applicable to the AI system?
  • Did you consider establishing an ‘ethical AI review board’ or a similar mechanism to discuss overall accountability and ethics practices, including potentially unclear grey areas?
  • Did you foresee any kind of external guidance or put in place auditing processes to oversee ethics and accountability, in addition to internal initiatives?
  • Did you establish processes for third parties (e.g. suppliers, consumers, distributors/vendors) or workers to report potential vulnerabilities, risks or biases in the AI system?

Documenting trade-offs

  • Did you establish a mechanism to identify relevant interests and values implicated by the AI system and potential trade-offs between them?
  • How do you decide on such trade-offs? Did you ensure that the trade-off decision was documented?

Ability to redress

  • Did you establish an adequate set of mechanisms that allows for redress in the event of any harm or adverse impact?
  • Did you put mechanisms in place to provide information to (end-)users/third parties about opportunities for redress?

The full document is available from the European Commission.
