The AI Arms Race: State-Sponsored Threats and the Misuse of Gemini AI
In a concerning development, Google has revealed that state-sponsored threat actors from China, Iran, Russia, and North Korea misused its Gemini AI throughout 2025. Despite Google's safeguards against malicious use, these actors found ways to turn the technology to their operations.
Google's Threat Intelligence Group (GTIG) has documented the activity in a report detailing how Gemini is being used at multiple stages of attack campaigns. Notably, the report finds that these threat actors are moving beyond simple productivity gains and are leveraging AI for operational attack support.
While Google hasn't disclosed the technical details of its monitoring, the report makes clear the company has substantial visibility into these actors. Gemini's security measures trigger safety responses when users request help with plainly malicious activities; the threat actors, however, bypassed these protections with social-engineering pretexts.
For instance, a China-linked actor posed as a participant in a capture-the-flag (CTF) competition to trick Gemini into providing exploitation guidance, then reused the CTF pretext to obtain advice on phishing, exploitation, and webshell development. Similarly, an Iranian group, MUDDYCOAST, posed as university students to bypass safety guardrails and obtain assistance in developing custom malware.
That effort backfired: while requesting coding assistance from Gemini, MUDDYCOAST inadvertently exposed its command-and-control (C2) infrastructure, enabling broader disruption of the campaign.
Another suspected Chinese threat actor showed a keen interest in attack surfaces they appeared unfamiliar with, such as cloud infrastructure, vSphere, and Kubernetes. They even demonstrated access to compromised AWS tokens and used Gemini to research how to exploit temporary session credentials. Meanwhile, the Chinese group APT41 used Gemini to help develop C++ and Golang code for a C2 framework.
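For context on that last technique: temporary session credentials are short-lived AWS keys issued by the Security Token Service (STS), and a stolen set grants the same access as long-lived keys until it expires. Here is a minimal defensive sketch, assuming boto3 is installed and AWS credentials are already configured in the environment, showing how to tell whether the credentials a host is using are temporary:

```python
import boto3

# Temporary session credentials carry a session token and resolve to an
# assumed-role ARN; long-lived IAM user keys have neither trait.
session = boto3.Session()
creds = session.get_credentials().get_frozen_credentials()
identity = session.client("sts").get_caller_identity()

# For temporary credentials the ARN looks like
# arn:aws:sts::<account>:assumed-role/<role>/<session>.
print("Caller ARN:           ", identity["Arn"])
print("Session token present:", bool(creds.token))
```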
Iranian group APT42 took advantage of Gemini's text generation and editing capabilities to craft sophisticated phishing campaigns, often impersonating individuals from prominent think tanks. North Korean groups, on the other hand, researched cryptocurrency concepts and generated multilingual phishing lures, showcasing how AI can overcome language barriers in targeted attacks.
Google's response to these threats involves disabling accounts after detection rather than real-time blocking. This approach creates a window of opportunity for actors to extract value before disruption. Additionally, Google has identified experimental malware that suggests how threats may evolve, including tools that query language models during execution to generate malicious code dynamically.
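To make that pattern concrete, here is a minimal, benign sketch of a runtime model query using the google-generativeai Python SDK; the API key and model name are illustrative placeholders, not details from the GTIG report. The defining trait is that the program's output arrives from a model API at execution time rather than being fixed in the binary:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# Nothing in this program determines the response text; it is generated
# by the hosted model at execution time.
response = model.generate_content("Write a one-line 'hello' in Python.")
print(response.text)
```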
One real-world example is PROMPTFLUX, which queries Google's Gemini API to continuously rewrite its own source code in an attempt to evade detection through self-modification. Another, PROMPTSTEAL, attributed to the Russian-backed group APT28, queries the Qwen2.5-Coder-32B-Instruct model via the Hugging Face API to generate Windows commands for stealing system information and documents.
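Because both samples depend on reaching a hosted-model API, outbound traffic to those endpoints is one cheap first-pass signal defenders can hunt for. A rough stdlib-only sketch follows; the hostname list and log path are illustrative assumptions, not indicators published in the GTIG report:

```python
# Scan a DNS or proxy log for hosted-model API hostnames, an unusual
# dependency for most endpoint software.
LLM_API_HOSTS = (
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api-inference.huggingface.co",       # Hugging Face hosted inference
)

def flag_llm_traffic(log_path: str) -> list[str]:
    """Return log lines that mention a known hosted-model API hostname."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(host in line for host in LLM_API_HOSTS):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for entry in flag_llm_traffic("dns.log"):  # hypothetical log path
        print(entry)
```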
The report also mentions PROMPTLOCK, a proof-of-concept created by academics at New York University's engineering school that caused a stir in the security industry. Built to test the feasibility of AI-powered ransomware, it was uploaded to Google's VirusTotal malware-scanning service to gauge how detectable it would be.
So, the question remains: How can we ensure that AI technologies like Gemini are not misused by state-sponsored threat actors? Share your thoughts and opinions in the comments below!