With the proliferation of artificial intelligence (AI) systems in organisations, machine-made or machine-assisted decision-making is becoming prevalent. Recent guidance from the Information Commissioner's Office (ICO) is designed to help organisations explain to the users of such programs and services how decisions are made.
The guidance is needed, says the ICO, to clarify the rights of individuals to have decisions made about them explained by those using their personal data. “You need to be able to give an individual an explanation of a fully automated decision to enable their rights to obtain meaningful information, express their point of view and contest the decision,” the guidance says. “But even where an AI-assisted decision is not part of a solely automated process (because there is meaningful human involvement), if personal data is used, it is still subject to all the GDPR’s principles.”
Lack of clarity
The guidance trots through the different types of AI and how those systems can be involved in decision-making. The problem with AI is that it is sometimes unclear how, and by whom, a decision has been made – whereas with humans it is generally possible to identify who is responsible and accountable for decision-making.
“There should be no loss of accountability when a decision is made with the help of, or by, an AI system, rather than solely by a human,” the ICO says. “Where an individual would expect an explanation from a human, they should instead expect an explanation from those accountable for an AI system.”
The guidance is split into three parts, aimed at data protection officers (DPOs) and compliance teams, developers, and the senior management team with overall responsibility for AI systems. When the business is purchasing AI from a third party, “both the compliance teams, including DPOs and senior management, should expect assurances from the product manager that the system you are using provides the appropriate level of explanation to decision recipients,” says the guidance.
Direct interaction
These assurances should give a high-level understanding of the systems and the types of explanation they produce. In addition, there may be occasions when the DPO or compliance teams need to interact directly with those about whom decisions have been made – for example, where a complaint has been made. In such instances, they will need to provide a more detailed understanding of how a decision has been reached, and the team members involved will need to be trained on how to convey this information appropriately to affected individuals.
The ICO says it may conduct external audits on a business’ AI system to assess whether the organisation complies with data protection law. “Data protection includes the expectation that decisions made with AI are explained to individuals affected by those decisions,” it says. “During an audit you will need to produce all the documentation you’ve prepared and the testing you’ve undertaken to ensure that the AI system is able to provide the different types of explanation required.”