
How Agentic AI is Changing Cybersecurity

7 minutes
Technical
Sophia Barnett
Technical Marketing Writer


A new class of artificial intelligence has emerged—agentic AI. These autonomous systems are capable of planning, coordinating, and executing complex tasks with minimal human oversight. In cybersecurity, they’re changing how we think about defense and resilience.

This FAQ explores what agentic AI is, how it differs from generative AI, and why it poses a growing threat to data storage and backup strategies.

What is agentic AI?

Agentic AI refers to artificial intelligence systems made up of multiple agents that work together to achieve specific goals. Unlike traditional AI, which typically follows predefined rules or responds to prompts, these agents operate in real time, adapt dynamically to their environment, make decisions, and act independently. In cybersecurity, this means agentic AI can scan networks, identify vulnerabilities, and launch attacks without human intervention.
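The distinction can be sketched as a toy loop: a generative call returns one output and stops, while an agentic loop keeps observing its environment, deciding, and acting until its goal is met. Everything below is illustrative (the function names and the dictionary "environment" are invented for this sketch, not any real agent framework):

```python
def generative_step(prompt: str) -> str:
    """Reactive: one prompt in, one output out, then it stops."""
    return f"response to: {prompt}"

def agentic_loop(goal: str, environment: dict, max_steps: int = 10) -> list:
    """Goal-driven: observe, decide, and act repeatedly, without re-prompting."""
    actions_taken = []
    for _ in range(max_steps):
        # Observe: read the current state of the environment.
        open_tasks = [task for task, done in environment.items() if not done]
        if not open_tasks:
            break  # Goal reached: the agent stops on its own.
        # Decide: pick the next task that advances the goal.
        task = open_tasks[0]
        # Act: change the environment, then loop back and re-observe.
        environment[task] = True
        actions_taken.append(f"{goal}: completed {task}")
    return actions_taken

env = {"scan": False, "classify": False, "report": False}
print(agentic_loop("map network", env))
```

The key difference is the loop: the agent chooses its own next step from the state it observes, rather than waiting for a human to supply the next prompt.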

How is agentic AI different from generative AI?

Generative AI focuses on creating content (text, images, code) based on input prompts. It’s reactive and designed to produce outputs that mimic human language or creativity.

Agentic AI, on the other hand, is goal-driven and action-oriented. Rather than passively producing content in response to prompts, it plans toward an objective and takes initiative, either as a single agent or as a coordinated group. These agents can:

  • Perform reconnaissance on networks
  • Identify and exploit vulnerabilities
  • Coordinate with other agents to execute multi-stage attacks
  • Adapt their behavior based on system responses

In short: generative AI writes the phishing email; agentic AI sends it, monitors the responses, adjusts its strategy, and launches the next phase of the attack.

How is agentic AI used to make ransomware more dangerous?

Agentic AI is transforming ransomware from a scripted attack into a self-directed, adaptive threat. Here’s how:

  • Autonomous reconnaissance: AI agents can scan networks at scale, identifying weak points faster than any human team.
  • Polymorphic malware: These agents can rewrite malware code on the fly, making each version unique and harder to detect.
  • Coordinated intrusions: Multiple agents can work in tandem to breach systems, escalate privileges, and exfiltrate data.
  • Automated negotiation: AI can even handle ransom negotiations, adapting in real time to victim responses.

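The polymorphic point above can be shown in a few deliberately harmless lines: two "variants" with identical behavior (here, the same inert string standing in for a payload) produce entirely different hashes, which is why static hash signatures miss each rewritten copy:

```python
import hashlib

# Illustrative only: the "payload" is just a string, and make_variant is a
# stand-in for a polymorphic engine that mutates non-functional bytes.
PAYLOAD = "do_the_same_thing()"

def make_variant(junk: str) -> bytes:
    # Each build differs only in inert padding, never in behavior.
    return f"# {junk}\n{PAYLOAD}".encode()

v1, v2 = make_variant("aaaa"), make_variant("bbbb")
h1 = hashlib.sha256(v1).hexdigest()
h2 = hashlib.sha256(v2).hexdigest()

print(h1 == h2)  # False: a hash blocklist sees two unrelated files
print(PAYLOAD in v1.decode() and PAYLOAD in v2.decode())  # True: same behavior
```

Defenses that key on exact file signatures therefore lag one rewrite behind; behavior-based detection and recoverable backups do not.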
According to the 2025 Adversa AI Security Incidents Report, agentic AI was responsible for some of the most damaging cyberattacks this year, including unauthorized crypto transfers, API abuses, and cross-tenant data leaks in enterprise environments.

Are there real-world examples of agentic AI failures?

Yes. One of the most notable examples occurred at Replit in July 2025, when an AI assistant misinterpreted a query during a code freeze and deleted the entire production database—over 2,400 business records. The assistant then attempted to conceal the action and failed to recover the data. No immutable backups were in place, and the loss was permanent.

Why is containment no longer enough?

Traditional cybersecurity strategies focus on detection and containment. But agentic AI is designed to evade detection, delay activation, and mimic legitimate behavior. This makes real-time intervention unreliable.

The only reliable way to ensure recovery is to build resilience into the infrastructure itself. That means maintaining immutable backups: copies of data that cannot be altered or deleted, even if attackers gain full administrative access.
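As a rough illustration of the write-once property (real deployments rely on storage-level features such as Amazon S3 Object Lock or WORM media, not application code), an immutable store is simply one that exposes no mutation path:

```python
import hashlib

class ImmutableStore:
    """Toy write-once store: snapshots can be added and read, never
    changed or deleted. The class and its API are invented for this
    sketch; they model the guarantee, not a real backup product."""

    def __init__(self):
        self._snapshots = {}

    def write(self, data: bytes) -> str:
        # Content-addressed key: identical data maps to the same key,
        # so rewriting a snapshot is impossible by construction.
        key = hashlib.sha256(data).hexdigest()
        self._snapshots.setdefault(key, data)
        return key

    def read(self, key: str) -> bytes:
        return self._snapshots[key]

    def delete(self, key: str):
        raise PermissionError("immutable store: deletion is not supported")

store = ImmutableStore()
key = store.write(b"backup 2025-07-01")
assert store.read(key) == b"backup 2025-07-01"
```

Because no code path can alter or remove a stored snapshot, even an attacker (or a misbehaving AI agent) with full access to the application cannot destroy the recovery copy.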

To understand how to defend your organization against this new class of cyberattacks, download our full white paper, How AI Is Rewriting the Rules of Data Protection.