Over the last year, cybercriminals have been pushing AI to see how far it will go to help them break into company networks. On Monday, Google issued a warning: the tech has helped criminals successfully develop a powerful hacking tool known as a zero-day exploit for the first time.

Zero-day exploits are small programs that target previously unknown and unpatched vulnerabilities to install malware and access data on a target computer or network. That makes them a rare and potent commodity among hackers. Google security researchers found evidence hackers had developed such an exploit to target an unnamed open-source, web-based IT admin tool. A “mass vulnerability exploitation operation” was in the works, the tech giant said. Google was able to stop the attack by alerting the tool’s vendor.


Google said there were a number of signs that artificial intelligence helped write the malicious code (though it couldn’t tell which AI system was used). The code was structured in a way that was “highly characteristic” of AI, the report said, including a “textbook” use of the Python language and “detailed help menus” not typically seen in human-written programming. It also contained what appeared to be an AI hallucination, referencing a vulnerability that didn’t exist.

Google said it had also discovered hackers, including those working for Chinese and North Korean intelligence, using its Gemini AI chatbot to help research potential cyberattack targets. In one case, a Chinese-linked cybercrime group dubbed UNC2814 tricked Gemini by asking it to act like a network security expert; the chatbot then agreed to look for vulnerabilities in TP-Link routers (which have been banned in the U.S. for security reasons).

John Hultquist, chief analyst of the Google Threat Intelligence Group, said North Korea was “a very early adopter of AI,” moving from phishing to developing cyberattacks on company and government networks. “It’s interesting because this is an area where they have typically not focused, preferring to do social engineering. It may indicate that they are using AI to evolve,” Hultquist added.

Google’s discovery of an AI zero-day exploit is the latest in a growing number of instances where hackers used AI as a kind of cybercriminal copilot or to carry out an attack in its entirety.

In May, Dragos Security, which protects critical infrastructure from cyberattacks, said hackers used Anthropic’s Claude to try to target municipal water and drainage utility systems in Monterrey, Mexico, earlier this year.

Eyal Sela, who first documented those attacks, said Google’s discovery of an AI-written zero-day showed how early adopters were benefiting from the wealth of new automated coding technology. Most worrying: even low-skilled hackers can use AI to carry out attacks using techniques they don’t understand, Sela told Forbes.

“There are some things that used to require months and years of experience that can be done almost instantaneously,” said Sela, director of threat intelligence at Gambit Security. “This is not an exaggeration.”