Artificial intelligence is going to be “a redefining event” and there needs to be a debate about its consequences for risk managers, regulation and society at large, according to speakers at a recent IRM/Imperial College Business School event attended by over 150 people.

Artificial intelligence is a “cognitive technology” that extends what used to be considered human processes – such as thinking, learning and predicting – and embeds them in networked machines. And since the technology is often freely available today from major developers such as Google, IBM and Microsoft, start-up companies are just as able to disrupt industries as large, cash-rich organisations.

“Data is the source of competitive advantage when it is harnessed to the power of cognitive technologies,” said one presenter – the meeting was held under the Chatham House Rule to enable free debate. But most businesses still do not have their data structured in a way that enables the effective use of machine-learning algorithms to develop processes that can help predict future behaviours. Applying natural language processing algorithms to the data will be essential.
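To illustrate the kind of structuring step this implies – turning free text into something a machine-learning algorithm can consume – the sketch below builds simple bag-of-words vectors from text records. It is a minimal, hypothetical example, not a description of any tool presented at the event:

```python
from collections import Counter
import re

def tokenize(text):
    # Lowercase and keep only alphabetic tokens
    return re.findall(r"[a-z]+", text.lower())

def bag_of_words(records):
    # Build a shared vocabulary across all records, then count
    # each word's occurrences per record to form feature vectors
    vocab = sorted({tok for r in records for tok in tokenize(r)})
    vectors = []
    for r in records:
        counts = Counter(tokenize(r))
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

records = [
    "Customer reported late payment",
    "Late delivery reported by supplier",
]
vocab, vectors = bag_of_words(records)
```

Real pipelines would go further (stemming, TF-IDF weighting, embeddings), but the underlying move is the same: unstructured text becomes numeric features that a predictive model can learn from.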

Once the data is collated in a “big data lake”, organisations can look at prior histories to examine customer behaviour, and gain surveillance insights through monitoring employee and supplier activities by looking at voice data, electronic communications and chat rooms. Behavioural tracking can be a huge positive for businesses that want to understand their customer base better.
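As a toy illustration of such monitoring – a hypothetical sketch, not any system discussed at the event – a first pass over electronic communications might simply flag messages containing watch-list terms for human review:

```python
def flag_messages(messages, keywords):
    # Return (message, matched keywords) pairs for every message
    # containing a watch-list term, matched case-insensitively
    flagged = []
    for msg in messages:
        lowered = msg.lower()
        hits = [k for k in keywords if k.lower() in lowered]
        if hits:
            flagged.append((msg, hits))
    return flagged

messages = [
    "Let's keep this off the record",
    "Quarterly figures attached",
]
flagged = flag_messages(messages, ["off the record", "delete this"])
```

Production surveillance systems use far richer signals (tone, anomaly detection, network analysis), but the principle – machines triaging communications so humans review only the flagged subset – is the same.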

In risk management, for example, artificial intelligence can be used to map policies, procedures and controls against regulators’ requirements and regulatory changes, improving organisations’ compliance.
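One simple way to sketch that mapping – a hypothetical illustration, assuming nothing about the tools actually in use – is to score the word overlap between each internal control description and the regulatory requirements, for instance with Jaccard similarity:

```python
def jaccard(a, b):
    # Similarity of two texts as token sets: |intersection| / |union|
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def best_match(control, requirements):
    # Pick the regulatory requirement most similar to a control description
    return max(requirements, key=lambda r: jaccard(control, r))

requirements = [
    "firms must verify customer identity before onboarding",
    "firms must report suspicious transactions promptly",
]
match = best_match("identity verification checks at customer onboarding",
                   requirements)
```

A compliance team could then review only the low-scoring controls – those with no close regulatory counterpart – rather than re-reading every policy after each regulatory change.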

But there were warnings too: “If organisations see risk managers as the policeman, they will not get the best data,” said one speaker. “The system has to be set up to bring the right risk data to the right risk managers when and where they need it. And, if people know that risk management are watching, they will come clean quicker.”

And there were questions:

  • What risks will we have to manage when artificial intelligence is democratised?
  • What are companies really using artificial intelligence for?
  • Will risk managers bring their own human biases to artificial intelligence?
  • How does such a significant technology change the risk environment itself?
  • Should artificial intelligence risks be regulated?

The panel members conceded that they did not know the true future capability of artificial intelligence technologies – which is why developers had opened many of their algorithms to business.

The panel and participants emphasised the need to learn how humans and machines could work together effectively, transparently and ethically in future.

For more about the event, read this.