
AI-Powered Cyber Attacks: What Security Leaders Need to Know (+ What They Can Do)

  • Apr 10
  • 5 min read


Many conversations with CISOs center on ransomware trends and what to watch for. But recently, they've turned to a valid question: What happens if these cyber attack groups get their hands on generative AI that writes malware faster than an enterprise can patch? Or use agentic AI to move through networks and auto-adapt to evade detection?


These are legitimate concerns, because traditional defense programs aren’t built to withstand AI-based tactics, techniques, and procedures (TTPs). This new tooling is smart, scalable, and creates nuanced risks that were unimaginable just two years ago.


Consider hyper-personalized phishing campaigns. Or polymorphic malware that evades traditional signature-based detection. These AI-powered cyberattacks outpace the controls most organizations have in place. Here’s what security leaders need to know and how to stay ready:


TL;DR

  • Attackers are using generative AI to craft convincing, legitimate-seeming phishing emails, and agentic AI to develop autonomous, self-evolving malware and automate vulnerability discovery.

  • AI-driven phishing attacks now include voice cloning and deepfakes that impersonate executives, making them nearly indistinguishable from real conversations.

  • Organizations must update their cybersecurity risk assessment processes to account for AI threats and risks, and use governance to secure their own AI adoption without creating new vulnerabilities.


How Attackers Are Weaponizing AI

AI-driven phishing: Once obvious. Now nearly undetectable


Poorly worded emails from princes promising millions in exchange for bank account info used to be the primary phishing threat.  

But generative AI has eliminated most grammatical errors and cultural tells. Attackers now sound legitimate, and they can scrape LinkedIn and other public profiles to craft spear-phishing emails that mimic trusted colleagues. They automate open-source intelligence (OSINT) gathering and can pull actionable, targeted material on hundreds of executives in minutes.


Just look at the AI phishing campaign Huntress uncovered earlier this year. Hundreds of organizations were targeted at once. Generative AI pumped out convincing phishing lures at scale, and most organizations were vulnerable because the messages bypassed email filtering controls.


Voice cloning and deepfake audio add another layer. An attacker might impersonate a CFO over the phone to trick a help desk into resetting credentials. Or pretend to be a CEO, calling an employee to transfer funds to a vendor (similar to what happened to the WPP CEO in 2024).


Even scarier: deepfake video conferencing. Remember the 2024 Arup case? Cybercriminals impersonated the CFO during a video call. The voice and footage were so convincing that an employee authorized a $25.6 million transfer.


AI-assisted malware development


Everyone knows generative AI can write code. The problem is that attackers can use it to create a lot of code. And not the good kind.


The result is polymorphic malware that rewrites itself every time it executes, evading traditional signature-based anti-malware and detection tools. That leaves employees relying on human instinct (their most vulnerable security layer) to avoid downloading a sketchy file.
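To see why signature-based detection struggles here, consider a minimal sketch (toy strings, not real malware): a polymorphic engine only has to rename a variable or shuffle whitespace to change every byte of a payload, which changes its hash and breaks any hash-based signature match.

```python
import hashlib

# Two functionally identical script variants. A polymorphic engine only needs
# to rename a variable to change every byte-level signature.
variant_a = b"x = 1\nsend(x)\n"
variant_b = b"y = 1\nsend(y)\n"  # same behavior, different bytes

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

# A signature database keyed on variant A's hash misses variant B entirely.
known_bad_signatures = {sig_a}
print(sig_b in known_bad_signatures)  # False: the rewrite slips past the filter
```

This is why defenses against AI-generated malware lean on behavioral and anomaly detection rather than static signatures alone.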


This malware can adapt to its environment and stay dormant until the moment is right.


For instance, at the beginning of this year, IBM X-Force researchers uncovered an AI-assisted malware strain called Slopoly. Its extensive code comments, structured logging, and accurately named variables were clear signs of large language model (LLM) assistance. Attackers maintained persistent access to the servers for over a week and stole a ton of data before deploying Interlock ransomware.


AI-powered vulnerability discovery


Attackers always look for weaknesses. But instead of finding one vulnerability at a time, they can automate discovery and find dozens in an hour.


AI vulnerability discovery scans code, network configurations, third-party dependencies, and other intel sources, often faster than a typical pen test. From there, attackers identify zero-day vulnerabilities and generate working exploits, so flaws get weaponized before you even know they exist.
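Defenders can automate the same comparison in the other direction. Here is a minimal sketch of dependency auditing: checking an inventory of installed packages against a vulnerability feed. The advisory data and package names below are invented for illustration; real scanners pull from live feeds such as OSV or the NVD.

```python
# Hypothetical advisory feed: package -> versions known to be vulnerable.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "othertool": {"2.3.0"},
}

def audit(installed: dict) -> list:
    """Return installed packages whose version appears in an advisory."""
    return [
        f"{pkg}=={ver}"
        for pkg, ver in sorted(installed.items())
        if ver in ADVISORIES.get(pkg, set())
    ]

flagged = audit({"examplelib": "1.0.1", "othertool": "2.4.0", "requests": "2.31.0"})
print(flagged)  # ['examplelib==1.0.1']
```

Running a check like this continuously, rather than during an annual pen test, narrows the window in which an AI-discovered flaw sits unpatched.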


And the risks get worse. Paired with generative AI, agentic AI lets attackers not only discover vulnerabilities but also act on them autonomously. They’ll attack and breach environments at “machine speed,” without human intervention at every step.


This happened recently, when China-backed hackers used Anthropic’s AI to automate cyberattacks. The group targeted over 30 commercial and government organizations. They bypassed security guardrails, scaled phishing to massive levels, and quickly breached the target networks. 


Cybersecurity Risks of AI Attacks: Examples by Industry


Healthcare: Kettering Health ransomware + AI voice impersonation

Attackers used AI voice deepfakes to impersonate hospital executives and trick IT help desks into resetting credentials. This gave them access to the EHR system and patient data. From there, they shut down Kettering Health’s 14 hospitals with ransomware. Patients couldn’t reach staff or the call center. The attackers even impersonated hospital team members and demanded credit card payments for fake medical expenses. It took weeks for Kettering to resume normal operations.


Manufacturing: EvilAI malware uses AI-generated code to evade detection

Attackers used generative AI to write clean, normal-looking malware code that evaded traditional security scanners. This malware, called EvilAI, disguised itself as legitimate productivity tools like "AppSuite" and "PDF Editor." The fake apps even had professional interfaces and valid digital signatures. Targeting mostly manufacturers (58 confirmed infections), the malware stole browser credentials, established persistent backdoor access, and maintained encrypted communication with its command-and-control servers.



Data Centers: AI coding agents unknowingly spread backdoored packages

Attackers from the criminal group TeamPCP compromised LiteLLM, an open-source AI gateway, by publishing malicious versions to PyPI. Claude Code then autonomously installed the compromised package without any human action or awareness. The AI agent had full system access to the cloud environment, turning it into an unwitting attack vector. The malware harvested cloud credentials and Kubernetes secrets while quickly spreading through the data center infrastructure.



How to Prepare for AI Attacks  


With AI attacks, the most overlooked vulnerabilities are inside the company. So, as with many other cybersecurity risks, it all comes down to governance.


  1. Govern AI use and gain visibility across all AI tools: Unmanaged, unsanctioned AI tools that employees use, like direct APIs, third-party vendor integrations, and shadow AI services, create new attack surfaces. Establish governance and strict policies to protect data when using AI tools or developing new models. Ideally, gain continuous visibility into all AI usage across the organization, whether through internal deployments, vendor-provided AI, or public API access.

  2. Update risk assessments: Revitalize the cybersecurity risk assessment process to spot governance gaps and treat deepfakes, mass AI phishing, and model poisoning as credible threats.

  3. Strengthen identity controls: Convincing emails can get past day-to-day users. So enforce another layer via MFA or authentication keys. Identity threat detection and response (ITDR) can also catch compromised accounts before attackers move through the network.

  4. Stay vigilant: Don’t underestimate the power of dialing the phone to verify unusual requests (even from known executives). Employees’ awareness should also expand to deepfake audio, hyper-personalized AI-generated emails, and AI voice calls that spoof information with startling accuracy.


Build Your AI-Ready Security Strategy


AI amplifies threat actor speed, scale, and sophistication. The only way to mitigate AI risk is to match that speed with disciplined governance.

OakTruss Group bridges cybersecurity and AI to safeguard your business. Cut through the noise and build a framework that defends against external threats while governing internal AI adoption.


FAQs: Top Questions on AI Attacks and Cyber Risk


How are attackers using generative AI to create more convincing phishing attacks?


Attackers use generative AI to create more persuasive content and scrape online sources for personal details. They can automate a good chunk of this process and craft convincing, hyper-personalized spear-phishing emails at scale. Voice cloning and deepfake audio have also made phishing attacks nearly indistinguishable from legitimate conversations with colleagues.


What’s the first thing I can do to defend against AI-powered attacks?

It starts with updating your cybersecurity risk assessment process to treat deepfakes, AI-generated phishing, and automated vulnerability discovery as credible threats. From there, you can start adding security layers and new controls based on the most pressing vulnerabilities.

