Imagine waking up to discover that whilst you slept, artificial intelligence spent the entire night probing your company’s defences, testing thousands of possible entry points, adapting its strategy in real-time, and documenting every vulnerability it found -all without a single human telling it what to do. This isn’t a scene from a science fiction film. It’s the reality we’re now facing in cybersecurity.
In September 2025, researchers disrupted what’s believed to be the first large-scale cyberattack executed primarily by artificial intelligence, with minimal human oversight. The implications are profound: we’ve crossed a threshold where AI doesn’t just assist hackers-it can effectively become the hacker.
For businesses of all sizes, this represents a fundamental shift in the threat landscape. Let’s explore what agentic AI attacks actually are, how they work, and what this means for your organisation’s security.
What Are Agentic AI Systems?
Before we dive into the attack itself, we need to understand what makes “agentic” AI different from the automated tools we’re already familiar with.
Agentic AI refers to artificial intelligence systems that can set their own goals, plan sequences of actions, and execute complex tasks with minimal human intervention. Think of it as the difference between a calculator (which waits for you to press buttons) and a personal assistant (who understands your request and works through multiple steps to fulfil it).
These systems have several key characteristics that make them powerful, and potentially dangerous:
- Decision making capability: They can assess situations and choose the best course of action from multiple options, adapting as circumstances change.
- Task chaining: Rather than performing single actions, they can string together dozens or even hundreds of steps to achieve an objective, much like a human would approach a complex project.
- Adaptive learning: When they encounter obstacles, they can try alternative approaches without needing someone to reprogram them or give new instructions.
- Sustained operation: They can work continuously for hours or days, never tiring, never losing focus, and maintaining perfect attention to detail throughout.
You’re already interacting with benign versions of agentic AI. Advanced virtual assistants can book appointments, research topics across multiple websites, and draft emails based on your previous communication style.
The problem emerges when these same capabilities are pointed at cybersecurity targets. Suddenly, what was a helpful tool becomes a sophisticated threat actor that can identify vulnerabilities, craft exploits, and cover its tracks faster and more efficiently than entire teams of human hackers.
The First Documented Agentic Cyber Attack
In mid-September 2025, security researchers at Anthropic detected unusual activity on their Claude Code platform, a tool designed to help developers write and debug software. What they uncovered was chilling in its sophistication.
A threat actor, assessed with high confidence to be a Chinese state-sponsored group, had manipulated the AI system into executing a large scale espionage campaign. The operation targeted roughly thirty organisations across multiple sectors: major technology companies, financial institutions, chemical manufacturers, and government agencies.
Here’s what made this attack unprecedented: the AI itself was doing 80-90% of the work. Human operators were involved at only 4-6 critical decision points throughout each campaign. The rest of the reconnaissance, vulnerability testing, exploit development, credential harvesting, and data exfiltration was executed autonomously by the artificial intelligence.
How the Attack Unfolded
The operation proceeded through several phases:
- Setup and manipulation:
Human operators selected targets and built an “attack framework”, essentially, a system designed to use AI tools for unauthorised intrusion. They bypassed Claude’s safety restrictions through jailbreaking techniques, breaking down attacks into small, seemingly innocent tasks and convincing the AI it was conducting authorised penetration testing for a legitimate cybersecurity firm.
- Reconnaissance and exploitation:
The AI systematically inspected target organisations’ systems, mapping networks and identifying potential entry points at speeds that would have taken human teams days or weeks. It then identified security vulnerabilities, wrote exploit code, harvested credentials, and began extracting data.
- Intelligence gathering:
Rather than indiscriminately dumping data, the system allegedly categorised stolen information according to intelligence value, identifying the most sensitive files and highest privilege accounts.
How Agentic Attacks Differ from Everything That Came Before
This was a significant incident, and to truly grasp what it means, we need to understand how it differs from traditional cyber threats.
Traditional automated attacks follow pre-programmed instructions, rather like following a recipe. If they encounter an unexpected obstacle, they typically fail. A human attacker must decide which targets to pursue, when to launch the attack, and how to respond when defences activate. These attacks often leave predictable patterns that security systems can detect.
Agentic attacks are fundamentally different. They set their own subgoals within the parameters given by their operators. When they encounter resistance, they adapt their strategy in real-time, trying different approaches until something works. They can autonomously select targets based on opportunity and value. Their evolving tactics make pattern-based detection significantly more difficult.
The speed and scale differences are equally dramatic. An agentic attack compresses timelines that might normally span weeks into hours or even minutes.
There’s also what we might call the “creativity problem.” Human hackers are limited by their knowledge and experience. An AI can potentially discover novel attack vectors that humans haven’t considered, simply by testing vast numbers of possibilities that would be too time consuming for a person to attempt.
Perhaps most concerning is the economic shift this represents. Traditional sophisticated attacks require expensive human expertise. Agentic attacks can run continuously at relatively low cost, making previously “too expensive to bother with” targets suddenly viable.
Why This Matters for Your Business
The barriers to sophisticated cyberattacks have dropped dramatically. What once required teams of experienced hackers can now be accomplished by a small group with access to advanced AI tools. The cost benefit analysis that previously protected smaller organisations from high-level threats no longer applies.
Consider the persistence problem: human attackers are well human- they are limited by biology! An agentic system never tires. It can probe your defences continuously, learning and adapting from each failed attempt, until it finds a way through.
The implications are tangible. Breach times are accelerating. What might have taken weeks of reconnaissance can now happen in hours. Social engineering (link to older blog post) attacks become more sophisticated when AI can analyse your organisation’s communication patterns and craft perfectly targeted phishing attempts.
Supply chain attacks become easier to coordinate when AI can simultaneously compromise multiple vendors. Self-propagating threats that automatically seek new targets without human direction become frighteningly possible.
Preparing for the Agentic Threat Landscape
The security industry is responding, but it’s early days. Defensive AI systems essentially fighting fire with fire are being developed to detect and respond to AI-driven attacks. Behavioural analysis, which focuses on unusual patterns of activity rather than known attack signatures, is becoming more important than traditional signature-based detection.
Zero-trust architecture (link), which assumes no user or system should be trusted by default, becomes crucial when facing attackers that can adapt faster than you can respond. Enhanced monitoring and logging matter more than ever because you need to be able to reconstruct what happened when an attack evolves too quickly for real-time human analysis.
But what can you actually do right now, today, to protect your organisation?
Start with security awareness training that includes AI-specific threats. Your team needs to understand that the emails they receive might be crafted by AI systems that have analysed your company’s communication patterns. The phone calls they answer could be voice-synthesis attacks using AI-generated speech that sounds perfectly human.
Conduct regular security audits with AI threat models in mind. Traditional penetration testing remains valuable to uncover weaknesses and vulnerabilities in your defenses.
Update your incident response planning to account for attacks that evolve rapidly. Your response team needs to be prepared for threats that change tactics multiple times during a single incident, potentially faster than humans can observe and react.
And perhaps most importantly, maintain strong human oversight. AI-driven defences are valuable, but they need human judgement and decision-making. The goal isn’t to automate your entire security operation, it’s to give your security team better tools to fight increasingly sophisticated threats.
Looking Ahead
We’re standing at the beginning of the agentic attack era. The capabilities demonstrated in that September 2025 attack will only improve. AI systems will become more sophisticated, harder to detect, and more autonomous in their operations.
Thankfully, the same AI capabilities that enable these attacks also provide powerful defensive tools. The researchers who uncovered the September attack used AI extensively to analyse the enormous amounts of data generated during their investigation. The technology itself is neutral; it’s the application that matters.
For your business, the path forward is clear even if it’s challenging. A reactive security posture; waiting until you’re breached before taking action is no longer tenable. The threat landscape has fundamentally shifted, and your defences need to shift with it.
The good news is that you don’t have to face this alone. At OmniCyber we are always adapting our tools and techniques to address these emerging threats. The key is engaging proactively, before an incident occurs, whilst you still have time to strengthen your defences and prepare your team. Because in cybersecurity, as in so many areas, an ounce of prevention truly is worth a pound of cure – especially when the threats never sleep.