
AI climbs the WEF risk ladder as trust erodes

Artificial intelligence shows the steepest rise of any risk in the World Economic Forum’s Global Risks Report 2026, climbing from near the bottom of the short-term outlook to the top tier over a decade.

Its trajectory places AI in the same risk bracket as misinformation and cyber insecurity. It was no coincidence that the report was released as the WEF convened its annual Davos meeting, where heads of government, corporate leaders and policymakers assess threats to global stability.

“Technological developments and new innovations are driving opportunities, with vast potential benefits from health and education to agriculture and infrastructure, but also leading to new risks across domains, from labour markets to information integrity to autonomous weapons systems,” the report warned.

Davos thrives on the language of opportunity, so it was a little startling to see benefit tied to danger.

The Global Risks Report 2026 draws on a survey of more than 1,300 experts from government, business, academia and civil society, who assess the severity of 33 global risks across short- and long-term horizons. The rankings reflect perspectives shaped by direct exposure to political, economic and technological systems.

“Misinformation and disinformation” ranked #2 and “Cyber insecurity” #6 on the two-year outlook.

These risks already exert pressure on elections, markets and institutions, largely because both operate through scale. A fabricated video seldom shifts a national outcome by itself, but a sustained flow of synthetic content weakens verification and turns public debate into parallel realities. A single breach seldom destabilises an economy, but a persistent pattern of intrusion drives up cost and chips away at confidence in digital systems that lie at the centre of business operations.

The longer-term ranking sharpens the focus on AI.

“Adverse outcomes of AI” is the risk with the largest rise in ranking over time, moving from #30 on the two-year outlook to #5 on the 10-year outlook.

Davos delegates often treat AI as a productivity lever. The report poses a different question: what follows when automated judgement becomes routine across hiring, lending, healthcare, welfare and policing, and the incentives around those systems reward speed and scale above careful scrutiny?

In aggregate, these systems amplify bias and concentrate decision-making power inside technical stacks that most stakeholders cannot inspect.

The report links AI to labour markets and social cohesion. For example, labour impact often appears as changed job design, compressed wages for routine work, and a widening premium for those who can build, tune or govern automated systems. It also shows up in procurement and supply chains. Firms that control data and platforms gain leverage, while smaller players become dependent users of tools they cannot audit or adapt.

In this context, misinformation acts as an accelerant. Generative AI lowers the cost of producing persuasive false content and raises the speed at which it travels. The effect extends beyond election manipulation. It reaches consumer markets through fake endorsements and fraudulent customer support. It reaches finance through market-moving rumours and synthetic executive voices. And, ultimately, it strains trust across society.

Cyber insecurity forms the enabling condition that turns every digital dependency into a potential failure point. A hospital’s reliance on connected systems or a municipality’s payment platform becomes a choke point. Cloud migration increases efficiency, but also expands the attack surface. Security teams face adversaries who benefit from automation and a criminal marketplace that treats ransomware kits and stolen credentials as ordinary products.

For African economies, these dynamics carry a specific edge. Digital infrastructure expands access to banking, schooling and healthcare at scale, often faster than legacy systems ever did. That speed also creates exposure where skills, budgets and regulation remain uneven.

A deepfake campaign aimed at a local election can run through the same global platforms used elsewhere. A ransomware group can reach a small business in Johannesburg or Nairobi with the same playbooks used against multinationals. AI tools can lift productivity for entrepreneurs and small firms, and those same tools can support impersonation and intimidation at low cost.

The Global Risks Report 2026 provides a scorecard for Davos. Misinformation, cyber insecurity and adverse outcomes of AI appear together because they interact continuously. Synthetic content erodes shared facts; security gaps create entry points for disruption; and automated systems amplify both. The rankings capture these interactions in a way that provides global leaders with a current status and a future context for the decisions they make today. No one can say they were not warned.

Arthur Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of “The Hitchhiker’s Guide to AI – The African Edge”.
