Several elections may have been influenced by so-called fake news disseminated via social media over the past 18 months. They include elections in the UK, the United States, France, Germany, India, Kenya and the Netherlands. Fake news was also probably at play in referenda in the UK and Spain.

For example, stories have been circulating that Hillary Clinton gave 20% of the United States’ uranium to Russia in exchange for $145 million in donations to the Clinton Foundation. The story first surfaced in a book, then online, and again during the 2016 presidential campaign. The online fact-checker Snopes gives chapter and verse on the story.

The claim is false, even if the underlying story is not entirely black and white.

But it is hard to know how seriously to take such stories without the kind of extensive research that Snopes founder David Mikkelson carries out – you can read a history of the venture here. It is harder still to calculate their effect on voters, because their overall impact is ambient rather than direct.

Spotting fake news – when it is not of the “Elvis sighting on moon” variety – can be relatively easy, but it becomes time-consuming when the fakery is more subtle – see here for six tips. The process has much in common with the journalistic practice of checking sources and background information.

There have been worrying developments recently in the related field of fake product reviews. This should be of concern to every risk manager whose business has customer-facing websites that allow reviewers to have their say.

Researchers at the University of Chicago have created an AI program that can write restaurant reviews convincing enough to fool spam filters. When they tested the program on the review site Yelp, the generated comments bypassed the filters, according to a report in Scientific American.

“Human test subjects asked to evaluate authentic and automated appraisals were unable to distinguish between the two,” it said. “When asked to rate whether a particular review was ‘useful,’ the human respondents replied in the affirmative to AI-generated versions nearly as often as real ones.”
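To get a feel for how software can produce plausible-looking review text at all, here is a toy Markov-chain generator: it learns which words tend to follow which in a corpus of reviews, then strings likely successors together. This is a deliberately simplified sketch of the general idea of statistical text generation – the Chicago researchers’ actual system was far more sophisticated – and the tiny corpus and function names below are invented for illustration.

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each word n-gram to the words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12, seed=0):
    """Walk the chain from a random starting n-gram, picking likely successors."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical mini-corpus of restaurant reviews (stand-in for real training data)
reviews = (
    "the food was great and the service was friendly "
    "the food was tasty and the staff was friendly "
    "the service was slow but the food was great"
)
chain = build_chain(reviews, order=2)
print(generate(chain))
```

Even this crude approach produces locally fluent word sequences; with a large real corpus and a neural model in place of the lookup table, the output becomes hard for filters – and people – to distinguish from genuine reviews.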

While the Chicago team say that AI is not yet advanced enough to create fake news outright, it is likely advanced enough to flag real news as fake in comment and review sections. This has, in fact, been happening to journalists on Twitter: bots latch on to certain keywords and mass-report posts, effectively getting the journalist’s account suspended for suspicious activity.
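The keyword-triggered reporting described above needs very little intelligence. A speculative sketch of the core logic such a bot might use – the watch list and function name here are entirely hypothetical, not taken from any observed bot:

```python
# Hypothetical watch list a malicious bot operator might configure
TRIGGER_WORDS = {"election", "fraud", "leak"}

def should_report(post: str) -> bool:
    """Crude bot logic: flag any post containing a watched keyword."""
    return any(word in post.lower().split() for word in TRIGGER_WORDS)
```

A swarm of accounts running logic this simple can file enough coordinated reports to trip a platform’s automated suspension thresholds, which is the asymmetry that makes the tactic effective.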

While businesses have long been concerned about the effect of bad social media publicity on their reputations, this trend could take things to a whole new level – a battle between defensive and offensive AI. Good luck!