The Top AI-Powered Cyber Attacks in 2025
By the Kindo Team

In 2025, cyberattacks involving AI surged. Cybercriminals and state-sponsored groups used generative models for everything from crafting convincing phishing emails to running deepfake scams, and these attacks rose by nearly 47% globally.

In this article, we’ll look at five of the biggest AI-related cyberattacks of 2025. We’ll explain what happened, how AI was involved, who was responsible, what impact the attacks had, and what security experts can learn from them.

1. North Korea’s ChatGPT-Assisted Phishing Campaign

North Korea’s state-backed Kimsuky hacking group made headlines in 2025 by using ChatGPT as a tool in a spear-phishing campaign. The hackers prompted OpenAI’s language model to draft phishing emails and even generate fake military and government ID cards, complete with realistic portraits and seals, which they sent to South Korean officials.

These AI-created credentials helped the attackers pose as government authorities and lure targets into downloading malware disguised as security software updates. Investigators later found that the attackers iterated on ChatGPT prompts in Korean and English until the tone, formatting, and image resolution matched genuine government correspondence.

Figure 1: An example of a ChatGPT-generated phishing email

It was one of the earliest documented cases of AI-generated deepfakes being used in an active cyberattack, reflecting a broader shift toward AI-assisted espionage. The emails appeared to come from legitimate ministries, featuring AI-generated logos, photos, and document templates that supported the attack’s narrative.

Subsequent analysis showed the generated documents were paired with obfuscated AutoIt scripts and compressed attachments. Some payloads used shortcut (.lnk) files to execute malicious code, a tactic intended to evade antivirus detection and delay response; a simple detection heuristic for this is sketched below. The campaign’s victims included defense researchers, policy analysts, and other individuals linked to defense affairs, broadening the attack surface beyond central government offices.
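
Defenders can counter this delivery tactic with lightweight heuristics. Below is a minimal illustrative sketch in Python (our own example, not drawn from the incident reports) that flags shortcut files whose raw bytes reference command interpreters commonly abused in .lnk-based attacks; the marker list is an assumption you would tune to your environment:

```python
import sys
from pathlib import Path

# Interpreters and LOLBins commonly embedded in malicious shortcuts.
# This list is illustrative; tune it for your environment.
SUSPICIOUS_MARKERS = [b"powershell", b"cmd.exe", b"mshta", b"wscript", b"rundll32"]

def scan_lnk(path: Path) -> list[str]:
    """Return any suspicious markers found in a .lnk file's raw bytes."""
    data = path.read_bytes().lower()
    return [m.decode() for m in SUSPICIOUS_MARKERS if m in data]

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for lnk in root.rglob("*.lnk"):
        hits = scan_lnk(lnk)
        if hits:
            print(f"[!] {lnk}: references {', '.join(hits)}")
```

A full parser for the Shell Link binary format would be more robust, but even a byte-level scan like this catches many commodity payloads that stuff a PowerShell one-liner into a shortcut’s target field.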

2. Deepfake CEO Scam – The $500K Fake Zoom Call

In March 2025, a finance director in Singapore nearly fell victim to a high-tech scam in which fraudsters impersonated his CEO on a Zoom video call using deepfake technology. The scheme began with a WhatsApp message from someone posing as the company’s CFO, inviting him to a confidential “restructuring” meeting.

When he joined the video call, he was greeted by what looked and sounded like his chief executive and other top managers, none of whom were real. The cybercriminals, including an accomplice pretending to be a lawyer, convinced him to urgently transfer around US$500,000 to a specified corporate account. Trusting what he saw and heard on the call, the director executed the transfer, unaware the funds were being funneled into the scammers’ money-mule account.

Figure 2: An example of a deepfake creation service on a cybercrime network

Remarkably, the scheme unraveled just in time. The next day, the scammers pushed for an additional $1.4 million, which raised the director’s suspicions and prompted him to alert his bank and the police. Investigators from Singapore’s Anti-Scam Centre, working with counterparts in Hong Kong, managed to trace and freeze the $500K before it vanished overseas.

A similar deepfake scam in Hong Kong a year earlier led to a HK$200 million loss. As a result, businesses are now being urged to verify identities through secondary channels and to treat unexpected, urgent fund requests with extreme caution.

3. Claude-Assisted Attack That Targeted 17 Companies

In a striking case of AI-assisted hacking, a cybercriminal used Anthropic’s Claude chatbot as a copilot to infiltrate and extort 17 organizations. According to an August 2025 report from Anthropic, the attacker manipulated Claude into automating nearly every stage of the attack, from reconnaissance to data theft and even ransom negotiations.

The chatbot was prompted to scan for vulnerable companies (including a defense contractor, a bank, and multiple healthcare providers) and to generate custom malware for breaching their systems. Once inside, the AI systematically extracted sensitive information, including Social Security numbers, financial records, confidential medical files, and even U.S. defense data protected under export controls.

Figure 3: A cybercriminal sharing a tutorial on how to misuse Claude

The cybercriminal then had Claude organize the stolen data and analyze it for blackmail value, even asking the AI to draft personalized ransom notes and calculate a plausible ransom amount for each victim. This technique has been called “vibe hacking,” and it marks a grim milestone in cybercrime.

By using a generative AI agent as an active partner, a lone attacker with moderate skills pulled off a crime spree that would normally require a coordinated team. In response, Anthropic says it banned the accounts involved and tightened its safeguards.

4. AI Phishing-as-a-Service – Darcula Lowers the Bar

Darcula is a well-known phishing-as-a-service (PhaaS) platform that underwent a major upgrade in 2025, integrating generative AI to turbocharge its operations. The subscription-based toolkit, originally developed in China and used across more than 20,000 phishing domains, was already being compared to a polished startup product because of its design and ease of use.

With the new AI features, even an absolute beginner can create a targeted phishing campaign in minutes. All a Darcula user has to do is enter the URL of a target brand’s website; the platform automatically clones the site’s HTML and CSS assets and then uses AI to generate a custom phishing form that looks authentic.

Figure 4: Darcula’s announcement stating AI is now integrated

The AI can populate the fake page with any fields (login, credit card, addresses, etc.), translate the text into any language needed, and maintain the original site’s layout, essentially producing a highly convincing fraudulent copy on demand. This generative AI upgrade has effectively made phishing at scale as easy as point-and-click.

Previously, Darcula came with a library of over 200 ready-made phishing templates covering banks, e-commerce, and other services worldwide; now the number of possible templates is effectively limitless, because the AI can build unique pages for any brand or region on the fly. This not only expands the pool of potential targets but also defeats many traditional security filters: when every phishing page is one of a kind, signature-based detection becomes largely ineffective, pushing defenders toward structural checks like the sketch below.
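
One way defenders adapt is to compare page structure rather than exact signatures: a cloned page usually keeps most of the original site’s markup skeleton even when the AI rewrites the visible text. The sketch below is a simplified illustration of that idea (not a production detector); it reduces each page to its sequence of tag names and scores similarity with a fuzzy ratio:

```python
from difflib import SequenceMatcher
from html.parser import HTMLParser

class TagSkeleton(HTMLParser):
    """Collects the sequence of opening tag names, ignoring text and attributes."""
    def __init__(self):
        super().__init__()
        self.tags: list[str] = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def skeleton(html: str) -> list[str]:
    parser = TagSkeleton()
    parser.feed(html)
    return parser.tags

def structural_similarity(page_a: str, page_b: str) -> float:
    """Similarity (0.0-1.0) of two pages' tag structure, robust to rewritten text."""
    return SequenceMatcher(None, skeleton(page_a), skeleton(page_b)).ratio()

# Usage: compare a suspect page against the legitimate brand page it may be cloning.
legit = "<html><body><form><input name='user'><input name='pass'></form></body></html>"
suspect = "<html><body><form><input name='login'><input name='pw'></form></body></html>"
if structural_similarity(legit, suspect) > 0.9:
    print("Suspect page closely mirrors the brand's structure; flag for review.")
```

Real-world systems layer in favicon hashing, screenshot similarity, and certificate-transparency monitoring, but the principle is the same: match what the clone cannot easily change.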

5. Xanthorox AI – A Malicious Chatbot Unleashed

2025 saw the emergence of a new all-in-one cybercrime platform called Xanthorox AI, a malicious chatbot built for offensive use. First spotted on cybercrime forums and encrypted channels in early 2025, Xanthorox was promoted by its creators as the killer of WormGPT and EvilGPT variants: a ground-up offensive AI system that runs on private servers with its own custom models.

Rather than relying on OpenAI or other public APIs, it uses a local-first design with five specialized AI modules, which means there are no public endpoints or external guardrails and few ways for defenders to track or shut it down. In underground postings, the anonymous sellers claimed they had built their own models, stack, and rules specifically for offense, not defense.

Figure 5: Xanthorox AI's official sales advertisement

Xanthorox ultimately functions as a Swiss Army knife for cybercriminals, packing multiple AI components optimized for different tasks. Xanthorox Coder generates malicious code, writes scripts, and exploits vulnerabilities, while Xanthorox Vision can analyze images or screenshots to extract sensitive data (useful for cracking passwords or reading stolen documents). Another module, Xanthorox Reasoner, mimics human-like reasoning to help create more believable phishing messages.

The platform supports real-time voice commands and file uploads for hands-free control, and it includes a live web-scraping tool that pulls reconnaissance data from dozens of search engines. Using these features, cybercriminals can plan and launch fully automated attacks, from phishing campaigns and ransomware drops to custom malware development, without advanced skills.

Take Your Next Steps With Kindo

AI has fundamentally changed the cybersecurity attack surface. We’ve seen it used to generate deepfakes, automate phishing campaigns, and even run entire cyberattacks through malicious chatbots.

Traditional defenses alone cannot keep up with this level of automation. Kindo.ai gives security teams the ability to fight AI with AI by building automated workflows that detect suspicious activity, enrich alerts with context, and trigger immediate response actions.
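
To make that concrete, here is a minimal hypothetical sketch of an enrich-then-respond loop. Every name in it (Alert, REPUTATION_DB, the response strings) is invented for illustration and does not represent Kindo’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source_ip: str
    rule: str
    context: dict = field(default_factory=dict)

# Stand-in for a real threat-intelligence feed or enrichment service.
REPUTATION_DB = {"203.0.113.7": "known-phishing-infrastructure"}

def enrich(alert: Alert) -> Alert:
    """Attach reputation context so triage decisions can be automated."""
    alert.context["reputation"] = REPUTATION_DB.get(alert.source_ip, "unknown")
    return alert

def respond(alert: Alert) -> str:
    """Pick a response action based on the enriched context."""
    if alert.context["reputation"] != "unknown":
        return f"BLOCK {alert.source_ip} ({alert.context['reputation']})"
    return f"QUEUE {alert.source_ip} for analyst review"

for raw in (Alert("203.0.113.7", "suspicious-login"), Alert("198.51.100.2", "suspicious-login")):
    print(respond(enrich(raw)))
```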

Instead of chasing alerts manually, you can configure repeatable playbooks that adapt as attackers change tactics. That means faster investigations, fewer false positives, and more time spent on the incidents that matter. Request a demo to see how Kindo.ai can help your team stay ahead of AI-enhanced threats.