AI-driven cyberattacks are accelerating. Can humans keep pace?

By Egwu Favour Emaojo

Key reasons to read this article

  • Autonomous cyberwar is no longer fiction; attacks are already being conducted with minimal human input.
  • In some campaigns, AI reportedly completed 80–90% of the work, leaving human operators in a largely supervisory role.
  • Hospitals, airports, and power grids are in the firing line, not just corporate databases.
  • A new global “cyber inequity” is emerging, with poorer nations falling dangerously behind.
  • The balance of power between machines and humans is shifting and the window to respond is closing.

In the time it takes to read this sentence, AI-enabled tools could potentially map a financial institution’s defensive architecture and identify an exploitable entry point. Today, criminals are using AI to automate the stages of an attack, from writing convincing phishing emails to identifying network vulnerabilities and mimicking trusted users.

As a result, attacks are becoming not only faster but also more complex. What once took days can now unfold in minutes. This acceleration is widening what some analysts describe as growing “cyber inequity” between nations. While wealthier countries can afford sophisticated AI-driven firewalls, many developing nations remain defenseless against large-scale, automated AI threats.

AI has transformed cybercrime from human-paced hacking into machine-speed warfare.

According to Oliver Tavakoli, CTO of Vectra AI, this landscape is likely to become 10 times more volatile over the next 18 to 24 months, although predictions vary depending on how both the attackers and the defenders deploy AI systems.

Cyberattacks: from days to minutes

CEOs and security chiefs already see rapid change underway. According to a recent World Economic Forum survey:

  • 94% of executives stated AI will be the single biggest force reshaping cybersecurity in 2026.
  • 87% flagged AI-related vulnerabilities as the fastest-growing risk.
  • The percentage of organizations securing AI tools jumped from 37% (2025) to 64% (2026).
  • About 16% of breaches in 2025 involved AI-powered attacks, such as AI-generated phishing or deepfake impersonation.
  • Global cybercrime losses are estimated to have hit US$10.5 trillion in 2025 and are expected to reach US$15.6 trillion by 2029.
  • Unit 42 found that the mean time to exfiltrate data plunged from nine days in 2021 to two days in 2023, and to roughly 30 minutes by 2025.

What once took days to breach can now be executed autonomously in minutes.

Taken together, these figures indicate a reduction in breach timeframes, with some attackers able to extract data in under an hour. At the same time, defensive AI systems are also operating at machine speed, meaning the technology’s advantage is not exclusively offensive.

Case study: increasing automation in cyber campaigns

In late 2025, a cyber-espionage campaign came to light in which an AI tool reportedly orchestrated most of the attack. Anthropic’s investigation found that attackers used its Claude model to autonomously select targets, conduct exploitation, and extract data with minimal human intervention; the company estimated that 80–90% of the campaign was carried out by AI.

The operation probed roughly 30 global targets across tech, finance, energy, and government, launching thousands of requests, often several per second, at a pace no human team could match. Anthropic’s analysts called it the first documented case of a large-scale cyberattack executed without substantial human intervention. In other words, AI agents ran the intrusions largely on their own, leaving human defenders scrambling to keep up.

In a development context, this speed gap represents a significant threat to public safety. When cyberattacks become autonomous, the target is no longer just data; it is the functional continuity of essential services such as hospitals, power grids, and airports.

Hospitals are particularly vulnerable. Industry surveys indicate that 92% of U.S. healthcare organizations suffered at least one cyberattack in the past year. An AI agent can breach a medical network and encrypt patient records or life-support monitoring systems in minutes. Because these attacks unfold at machine speed, human IT teams often cannot detect the intrusion until the service has already been disrupted.

When cyberattacks become autonomous, the real target is not just data, but the continuity of hospitals, airports and power grids.

Attackers have long exploited human gullibility; AI supercharges that advantage. In 2024, the FBI warned that criminals were using AI to scale up phishing and fraud, scraping victims’ social media profiles and leaked databases to generate thousands of highly personalized messages in seconds. FBI Special Agent in Charge Robert Tripp warned that this can lead to devastating financial losses, reputational damage, and the compromise of sensitive data. In one incident, a business email compromise scheme involving AI-generated messaging cost a European retailer €15.5 million.

A new digital divide

The expansion of AI capabilities may also deepen a global cybersecurity divide. Recent research shows a clear correlation between a country’s wealth and its cyber resilience. UNDP economist Philip Schellekens warns that AI is heralding a new era of rising inequality between countries. Put bluntly, countries or organizations that can invest in AI security systems may gain resilience advantages, while others risk prolonged exposure.

Spending disparities illustrate this divide. A World Bank study noted that per capita spending on cybersecurity is about US$1 in India or Mexico, versus US$30 in the U.S. and Canada. The U.S. alone spends about 16 times more on cybersecurity than all of Latin America and the Caribbean combined.

Whoever combines intelligent machines with skilled human oversight will determine the future of global security.

In practice, this may mean a small nation’s hospital network or power utility relies on basic firewalls and understaffed IT teams working with outdated tools, while wealthier countries deploy advanced AI monitoring that patrols their networks 24/7. The inequality is dangerous: leaving much of the world poorly defended could ultimately undermine global stability and security.

Humanity appears to have unleashed a powerful new weapon before everyone is ready for it. AI has become a new kind of cyber-weapon, and whoever combines the sharpest machines with the savviest human oversight will gain the upper hand. For ordinary people, from patients in hospitals to travelers in airports, the stakes could not be higher.

Unless governments and institutions invest aggressively in defenses, train more cyber experts worldwide, and deepen international cooperation, this gap will only widen.