In 2021, artificial intelligence (AI) augmentation will create $2.9 trillion of business value and 6.2 billion hours of worker productivity globally, according to Gartner, the research and advisory firm.
Gartner says augmented intelligence, which it defines as “a human-centred partnership model of people and AI working together to enhance cognitive performance”, will dominate the AI field over the next few years. Instead of machines taking over, such programs will help workers learn, make decisions and experience new things.
“Augmented intelligence is all about people taking advantage of AI,” said Svetlana Sicular, research vice president at Gartner. “As AI technology evolves, the combined human and AI capabilities that augmented intelligence allows will deliver the greatest benefits to enterprises.”
Not so smooth
Some of this technology is already here. AI helps translators quickly produce a rough draft from a foreign language. One translator told ER: “We’ll soon be just editors.”
But there are hurdles to get over if these predictions are to materialise. Lloyd’s of London, the insurance market, identified four key risks in its recent report Taking control: artificial intelligence and insurance.
It identified uncertainty among stakeholders (including regulators) as a potential drag on rapid uptake. From an ethical point of view, it is now well accepted that AI systems embody the prejudices, biases and social injustices that they absorb from their human programmers. “The consequence is that the AI depends on the information on which it is trained, and it might act against human interests,” says the report.
Much AI also relies on machine-learning routines whose workings cannot be followed by humans. This lack of transparency has implications for how far the decisions AI programs reach can be trusted.
Lloyd’s also noted that liability issues could prove difficult to resolve. “Whilst much can be done to encourage and motivate the engineers behind the systems to consider the ramifications of their decisions in designing neural networks and selecting data sets for learning, there needs to be robust legal frameworks to define liability,” the report said. “Generally speaking, the manufacturer of a product is liable for defects that cause damages to users. However, in the case of AI (especially strong AI) decisions are not a consequence of the design, but of the interpretation of reality by a machine.”
In its AI business value forecast, Gartner noted that customer experience was likely to be the primary source of AI-derived business value. Sicular said that the scale of personalisation AI could offer would be a strong driver of uptake among customers. “The goal is to be more efficient with automation, while complementing it with a human touch and common sense to manage the risks of decision automation,” she said.
“The excitement about AI tools, services and algorithms misses a crucial point: The goal of AI should be to empower humans to be better, smarter and happier, not to create a ‘machine world’ for its own sake,” said Sicular. “Augmented intelligence is a design approach to winning with AI, and it assists machines and people alike to perform at their best.”
Given the challenges flagged up by Lloyd’s, Sicular’s optimism may prove overdone. But then convenience has been a major factor in the success of the technology industry – and in its ability to circumvent a wide range of privacy issues and social barriers.