Another pandemic and disrupted energy supplies ranked as two of the most significant risks to the UK in the government’s 2023 National Risk Register. The document declassifies some risks for the first time.
The report said that there is a 5-25 per cent likelihood of a fresh “catastrophic” pandemic over the next five years; previously, it had put the risk at only 1-5 per cent. In addition, an attack on the UK’s infrastructure with a significant impact also fell into the 5-25 per cent likelihood bracket. Causes could include a Russian disruption of energy supplies in Europe, or terror attacks.
Cut off
“While the UK relies less on Russian energy than many other European countries, it is still exposed to disruption in European energy markets,” the document said. “A severe gas shortage in mainland Europe for a significant period could also negatively impact continental European gas-fired electricity generation capacity, which could affect the UK’s security of energy supply in winter, impacting household electricity consumers,” it added.
The Business Continuity Institute said of the high rating given to another pandemic: “This shows that it is still a preoccupation of those government departments contributing to the NRR, following the long-lasting impacts of the COVID-19 pandemic. When assessing future risks there also needs to be a sense of clear-sightedness to eliminate any chance of a recency bias and make sure we are assessing the full spectrum of threats and risks.”
More risks
Other “significant” risks in the 5-25 per cent band included severe space weather, low temperatures and snow, an emerging infectious disease, and a nuclear miscalculation not involving the UK.
AI featured in the risk register for the first time. AI risk was described as having implications “spanning chronic and acute”, which could include an increase in harmful misinformation or a reduction in economic competitiveness.
“The UK Government has committed to hosting the first global summit on AI Safety which will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor risks from AI,” the report said.