Why Shadow AI Is A Bigger Threat Than Claude Mythos
Anthropic’s powerful frontier AI model, Claude Mythos, has raised serious concerns about the risks presented by AI-driven vulnerability discovery and exploitation. However, there’s reason to believe that shadow AI poses a bigger threat to enterprises in the short term.
While advancements in frontier models like Mythos threaten to accelerate vulnerability discovery, shadow AI leaves hidden entry points in enterprise environments that threat actors can exploit.
According to a survey conducted by WalkMe in 2025, 78% of employees admit to using AI tools that were not approved by their employer. At the same time, the number of ungoverned tools has skyrocketed, with Reco finding that organizations average 269 shadow AI tools per 1,000 employees.
Each of these tools can leak information to unauthorized third parties. Rapid AI adoption has come at the cost of security and defenders now need to adapt to a fast-moving threat landscape in which AI tools can be targeted with prompt injection attacks and other exploits.
Getting To Grips With Shadow AI
Poor governance has been a problem since the start of the AI race. Shortly after the release of ChatGPT, Samsung had a security incident in which an engineer pasted source code into ChatGPT. The incident resulted in a company-wide ban on employees using the chatbot.
Now, as agentic AI picks up steam and machine identities continue to multiply across the enterprise, it’s becoming harder to mitigate risk. For instance, IBM’s 2025 Cost of a Data Breach report found that 63% of organizations lacked AI governance policies to manage AI or prevent the proliferation of shadow AI. This widespread lack of governance is increasing the risk of exploitation by cybercriminals.
Harman Kaur, CTO of Tanium, an autonomous IT company, told me in a video interview that the siloed vulnerability management, endpoint management and patching defenders have been doing is “no longer really an option.”
“We really, really don’t have time to react to things. Security can’t be just focused on security. And more importantly, if they’re going to be leveraging AI, they need data from the other teams to actually give it the full context of what’s happening,” Kaur said.
Kaur says that the biggest risk today is people, as individuals build their own solutions and then connect them to different business tools that CISOs might not have visibility into. Not understanding where an organization’s data is going can be a significant risk.
How Much Risk Does Shadow AI Present?
One of the biggest challenges in addressing shadow AI lies in defining the scope of risk. It’s not just the tools that employees use that security teams need to track, but every piece of infrastructure that has access to protected data. Each company has a different risk profile, depending on the tools and infrastructure in use. “Shadow AI is bigger than employees using an unsanctioned chatbot at work. It includes unmanaged AI infrastructure deployed outside traditional governance controls, AI features abstracted behind application interfaces, and cases where users do not even realize they are interacting with an LLM,” Gabriel Bernadett-Shapiro, a distinguished AI research scientist at enterprise cybersecurity platform SentinelOne, told me via email.
“When AI becomes another abstraction within the software stack, organizations lose the ability to understand what data is being processed, what systems the model can access, what actions it can take, and who is accountable when something goes wrong,” Bernadett-Shapiro said.
Bernadett-Shapiro notes that the risk becomes more acute when AI moves from generating text to performing actions. Chatbots that answer a question based on company documentation are relatively low risk; a model connected to tools, files, APIs, code repositories, ticketing systems, browsers or internal knowledge bases can potentially shut down a business.
To address the issue, Bernadett-Shapiro suggests that companies should treat shadow AI as an enterprise visibility and exposure management problem. Teams need to know where AI is being used, including applications containing embedded AI, teams deploying models and which systems can access sensitive data or perform actions.
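One starting point for the visibility problem is mining data security teams already have. The sketch below is a hypothetical illustration, not a vendor tool: it scans simplified proxy-log lines for traffic to a small, assumed list of AI API domains to surface who is calling which services. The log format and domain list are assumptions for the example.

```python
# Hypothetical sketch: flag outbound requests to known AI API domains
# from proxy logs, as a first step toward a shadow AI inventory.
# The domain list and "<user> <host> <path>" log format are
# illustrative assumptions, not a real log schema.

AI_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs where a user reached an AI API domain."""
    hits = []
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        user, host = parts[0], parts[1]
        if host in AI_API_DOMAINS:
            hits.append((user, host))
    return hits

logs = [
    "alice api.anthropic.com /v1/messages",
    "bob internal.example.com /wiki",
    "carol api.openai.com /v1/chat/completions",
]
```

In practice this only catches direct API traffic; AI features embedded behind application interfaces, which Bernadett-Shapiro highlights, would need application-level discovery as well.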
The nature of identity in the enterprise is changing. While machine identities aren’t new, a new wave of AI agents entering the workplace has the potential to increase complexity.
"Shadow AI isn’t just unsanctioned tools. It’s unsanctioned identities. Every AI agent an employee spins up becomes an active actor inside your environment, authenticating, pulling data, and executing across systems,” Roy Katmor of identity security platform, Orchid Security, told me via email. Katmor says addressing shadow AI requires a shift in how teams operate. Firstly, they should move from access visibility to behavior observability by focusing on what identities actually do inside applications, not just how they log in. Secondly, he says to treat AI agents as first-class identities by discovering them, assigning ownership and enforcing least privilege early. Thirdly, organizations can look to remove embedded credentials and reduce excessive permissions that create shortcuts in the environment. He also says companies should look to build auditability before deployment so identity activity is visible and explainable in real time, rather than reconstructed after an incident.
Securing Shadow AI Agents
Of course, shadow AI is just the tip of the iceberg. As AI agents perform more tasks autonomously in the workplace, defenders are going to need to maintain visibility. “Shadow AI is a big risk,” Artyom Poghosyan, CEO and cofounder of cloud privileged access management provider Britive, told me in a video interview. He said there have been multiple examples of how an agent deployed on a developer laptop was able to utilize the access and credentials of the user to enter production databases and delete data.
For Poghosyan, the first step is really to understand what’s out there, where agents are deployed and operating. Agents also introduce more complex risks. “I think one big risk is how agents can creatively figure out access laterally,” he said, for example by using a credential to move across multiple systems.
He says what’s most important here isn’t just discovering agents, but also being able to put guardrails and secure mechanisms in place around the agent. Implementing privileged access management is one way to provide visibility over agents while controlling capabilities.
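One common privileged access management pattern for this is replacing standing credentials with short-lived, scope-bound tokens, which limits how far an agent can move laterally if it is compromised. Below is a minimal sketch of that idea under assumed names (`JITCredentialBroker` and the scope strings are hypothetical, not any vendor's API).

```python
# Hypothetical sketch of just-in-time credential issuance for agents.
# Tokens are bound to a single scope and expire after a short TTL,
# so a leaked token cannot be reused laterally or indefinitely.
import time
import secrets

class JITCredentialBroker:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> (scope, expiry timestamp)

    def issue(self, agent_name, scope):
        # A real broker would also write an audit record here:
        # which agent got which scope, and for how long.
        token = secrets.token_hex(16)
        self._tokens[token] = (scope, time.time() + self.ttl)
        return token

    def check(self, token, scope):
        entry = self._tokens.get(token)
        if entry is None:
            return False
        granted_scope, expiry = entry
        # Deny on scope mismatch or expiry: no token works everywhere forever.
        return granted_scope == scope and time.time() < expiry
```

The key design choice is that access is granted per task rather than per identity: an agent that needs to read a production database gets a token that does only that, and only briefly.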