The rapid evolution of artificial intelligence (AI) has given rise to highly convincing deepfakes, enabling cybercriminals to orchestrate sophisticated scams with alarming precision. Experts warn that individuals and businesses alike must remain more vigilant than ever to counter these emerging threats.
A recent high-profile “romance scam” in France, where a woman lost €830,000 ($840,000), and fraudulent donation drives in Los Angeles highlight how cybercriminals are exploiting AI to manipulate victims. Arnaud Lemaire of cybersecurity firm F5 stressed that "absolutely everyone" is now a potential target.
AI-Enhanced Scams on the Rise
One of the most common forms of cyberattack, phishing, involves tricking users into clicking malicious links, installing harmful software, or divulging sensitive information. According to Verizon’s 2024 Data Breach Investigations Report, phishing and social engineering accounted for over 20% of nearly 10,000 reported data breaches worldwide.
AI-powered tools now make phishing attempts even more deceptive. Large language models (LLMs) allow attackers to craft flawless messages, eliminating the linguistic errors that once served as red flags. Moreover, AI-driven data analysis enables cybercriminals to tailor scams with highly personalized details.
Steve Grobman, Chief Technology Officer at McAfee, pointed out that cybercriminals are leveraging breached data to automate and refine attacks that previously required human effort. "What once needed an army of scammers can now be done with AI," he said.
Deepfakes: A Growing Threat
Cybercriminals are no longer just relying on text-based scams. AI-generated deepfake videos have reached a level where most users cannot distinguish them from reality.
In a shocking case last year, fraudsters used deepfake technology to impersonate a company’s chief financial officer and other executives in a videoconference, successfully stealing $26 million from a multinational firm in Hong Kong.
“The latest generation of deepfake video is nearly impossible for the average person to detect,” Grobman warned. “People need to apply the same skepticism to video content as they do to edited images.”
Staying Safe in the AI Era
To combat deepfake scams, experts recommend simple verification tactics, such as:
- Cross-checking video content with trusted sources.
- Asking video callers to move their cameras or interact in real time, as AI struggles to replicate dynamic movements.
- Using personal verification methods when handling sensitive requests, such as financial transactions.
Lemaire jokingly compared the need for verification to having a “safe word” in personal situations: "If your CEO suddenly asks for a $25 million transfer, verify it in a way only they would know."
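The "safe word" idea can be formalized as an out-of-band challenge–response check built on a secret shared in person beforehand. A minimal Python sketch, assuming a hypothetical pre-shared secret and using the standard library's `hmac` and `secrets` modules, shows how one party could read a random challenge aloud and the other could prove knowledge of the secret without revealing it:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Generate a random one-time challenge to read aloud on the call."""
    return secrets.token_hex(8)

def expected_response(shared_secret: bytes, challenge: str) -> str:
    """Both parties compute HMAC(secret, challenge); an impersonator
    without the secret cannot produce a matching value."""
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]  # short code, easy to read over a call

def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    """Constant-time comparison of the caller's spoken response."""
    return hmac.compare_digest(expected_response(shared_secret, challenge), response)

# Hypothetical secret agreed face to face, never sent over the channel
secret = b"pre-shared-out-of-band"
challenge = make_challenge()
response = expected_response(secret, challenge)

assert verify(secret, challenge, response)        # legitimate caller
assert not verify(secret, challenge, "zzzzzzzz")  # impersonator guessing
```

In practice, a verbal code phrase or a callback to a known phone number achieves the same goal with less machinery; the point is that verification must travel over a channel the attacker does not control.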
A Changing Cyber Landscape
The rise of AI-driven cybercrime has created an entire underground economy, complete with tools for hire. Ransomware groups like LockBit encrypt victims’ data and threaten to leak it, while deepfake services allow scammers to impersonate individuals for as little as $5.
Despite these threats, cybersecurity experts remain optimistic. “AI is a tool that can be used for both attack and defense,” said Martin Kraemer of KnowBe4. However, he emphasized that human awareness remains the strongest line of defense.
As Grobman put it, adapting to AI threats is like transitioning from horseback riding to automobiles: "People need to rethink online safety just as they had to rethink road safety. Staying alert is no longer optional—it’s essential."