The European Commission (EC) has published ethical guidelines setting out how businesses and public organisations should use artificial intelligence (AI).
The document – Ethics guidelines for trustworthy AI – aims to help organisations create AI that users can trust. Bias in algorithms used by financial services firms and other entities has become a concern among leading experts in the field.
“In a context of rapid technological change, we believe it is essential that trust remains the bedrock of societies, communities, economies and sustainable development,” says the report. “We therefore identify trustworthy AI as our foundational ambition, since human beings and communities will only be able to have confidence in the technology’s development and its applications when a clear and comprehensive framework for achieving its trustworthiness is in place.”
Rather than setting out a range of abstract ethical principles, the guidance is piloting a trustworthy AI assessment list. The report's authors – an independent high-level group on AI set up by the EC – urge organisations to provide feedback to help improve the list's practicality.
Trustworthy
To be trustworthy, AI must satisfy three main components throughout the application's entire lifecycle, the report suggests. It should be lawful, complying with relevant rules and regulations; it should adhere to ethical principles and values; and “it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm,” it says.
In an ideal world, all three components work in harmony and overlap in their operation. In practice, however, there can be misalignment between them that needs to be resolved through societal intervention.
The report’s trustworthy AI assessment list poses critical questions in seven key areas: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
Risk managers may be particularly interested in this last area, which poses key questions to management about the processes embedded in their applications and AI-driven systems, grouped under four headings:
Auditability
Minimising and reporting negative impact
Documenting trade-offs
Ability to redress
The full document is available from the European Commission.