“Mom, it’s me! I’ve been in an accident and need money right away!”
The voice on the phone sounds exactly like your child, but it’s actually an artificial intelligence clone created from a three-second clip of their voice on Facebook. Welcome to the frightening new world of AI-powered fraud. Generative artificial intelligence (GenAI) has handed scammers a powerful new toolkit that makes yesterday’s email scams look amateur by comparison.
The sophisticated fraud techniques emerging today are virtually undetectable to the untrained eye or ear. And the financial impact is staggering. Since 2020, phishing and scam activity has increased by 94%, with millions of new scam pages appearing monthly. Even more alarming, experts estimate losses from AI-powered scams will reach $40 billion in the U.S. by 2027.
Generative AI refers to artificial intelligence systems that create new content — text, images, audio or video — based on data they’ve been trained on. Unlike traditional AI, which analyzes existing information, generative AI produces entirely new, convincing content. The most concerning part? These powerful tools are increasingly accessible to fraudsters, who use them to create sophisticated scams that are harder than ever to detect.
BEST ANTIVIRUS FOR MAC, PC, IPHONES AND ANDROIDS — CYBERGUY PICKS
Today’s scammers use generative AI to “supercharge” their existing techniques while enabling entirely new types of fraud, according to Dave Schroeder, UW–Madison national security research strategist. Here are the four most dangerous ways they’re using this technology.
With just three seconds of audio, easily obtained from social media, voicemails or videos, fraudsters can create a convincing replica of your voice using AI. “Imagine a situation where a ‘family member’ calls from what appears to be their phone number and says they have been kidnapped,” explains Schroeder. “Victims of these scams have said they were sure it was their family member’s voice.”
These AI-generated voice clones can be used to manipulate loved ones, coworkers or even financial institutions into transferring money or sharing sensitive information, making it increasingly difficult to distinguish between genuine and fraudulent calls.
Today’s AI tools can generate convincing fake identification documents, complete with AI-generated photos. Criminals use these to verify identity when fraudulently opening accounts or taking over existing ones. These fake IDs are becoming increasingly sophisticated, often including realistic holograms and barcodes that can bypass traditional security checks and even fool automated verification systems.
Many financial institutions use selfies for customer verification. However, fraudsters can take images from social media to create deepfakes that bypass these security measures. These AI-generated deepfakes are not limited to still images; they can also produce realistic videos that can fool liveness detection checks during facial recognition processes, posing a significant threat to biometric authentication systems.
Similarly, GenAI now crafts flawlessly written, highly personalized phishing emails, analyzing your online presence to create messages specifically tailored to your interests and personal details. These AI-enhanced phishing attempts can also incorporate sophisticated chatbots and polished grammar, making them significantly more convincing and harder to detect than traditional phishing scams.
HOW TO PROTECT YOUR DATA FROM IRS SCAMMERS THIS TAX SEASON
While everyone is at risk from these sophisticated AI scams, certain factors can make you a more attractive target to fraudsters. Those with substantial retirement savings or investments naturally represent more valuable targets — the more assets you have, the more attention you’ll attract from criminals looking for bigger payoffs. Many older adults are particularly vulnerable as they didn’t grow up with today’s technology and may be less familiar with AI’s capabilities. This knowledge gap makes it harder to recognize when AI is being used maliciously. Compounding this risk is an extensive digital footprint: if you’re active on social media or have a significant online presence, you’re inadvertently providing fraudsters with the raw materials they need to create convincing deepfakes and highly personalized scams designed specifically to exploit your trust.
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
FBI WARNS OF DANGEROUS NEW ‘SMISHING’ SCAM TARGETING YOUR PHONE
Protection against AI-powered threats requires a multi-layered approach that goes well beyond just digital measures. Awareness is your first line of defense — understanding how these scams work helps you spot red flags before you become a victim. This awareness should be paired with both digital safeguards and “analog” verification systems that exist entirely offline. Here are some key steps to protect yourself:
1. Invest in personal data removal services: Generative AI fundamentally needs your personal data to craft convincing scams, which is why limiting your online footprint has become paramount in today’s fraud landscape. The less information about you that’s publicly available, the fewer raw materials scammers have to work with. Going completely off-grid is unrealistic for most of us today — much like never leaving your home. But you can reduce your online footprint substantially with a personal data removal service like Incogni, making yourself significantly less exposed to AI-powered scams.
By removing your personal data from data broker companies, you not only protect yourself from GenAI-powered fraud but also gain numerous other privacy benefits, such as reduced risks of receiving spam and falling victim to identity theft, as well as helping to prevent stalking and harassment. As AI technology advances, GenAI scams will only become more sophisticated. While no service can promise to remove all your data from the internet, a removal service is a great option if you want to automate and continuously monitor the process of removing your information from hundreds of sites over the long term. Check out my top picks for data removal services here.
GET FOX BUSINESS ON THE GO BY CLICKING HERE
2. Establish your own verification protocols: Consider agreeing on a “safe word” that only family members know. If you receive an unexpected call from a relative in distress, ask for this word before taking action.
3. Choose strong, unique passwords for each account: Create complex passwords using a combination of uppercase and lowercase letters, numbers, and special characters. Avoid easily guessable information like birthdays or common words. Consider using a password manager, which can generate and store strong, unique passwords for all your accounts, reducing the risk of password reuse and making it easier to maintain good password hygiene. Get more details about my best expert-reviewed Password Managers of 2025 here.
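To see what "strong and unique" means in practice, here is a minimal sketch of how a password generator can work. It is a hypothetical illustration (not taken from any particular password manager) using Python's `secrets` module, which draws from a cryptographically secure random source; the `make_password` name and the symbol set are assumptions for the example.

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password containing upper, lower, digit, and symbol.

    Illustrative only -- real password managers use the same basic idea:
    a cryptographically secure random choice over a large alphabet.
    """
    symbols = "!@#$%^&*"
    alphabet = string.ascii_letters + string.digits + symbols
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class appears at least once
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in symbols for c in pw)):
            return pw

print(make_password())  # e.g. a 16-character mixed-class password
```

The key point: randomness comes from `secrets`, not `random`, and every password is generated fresh rather than derived from memorable words.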
4. Enable two-factor authentication (2FA) on all accounts: 2FA adds an extra layer of security by requiring a second form of verification, such as a code sent to your phone, in addition to your password.
5. Receive MFA codes via an authenticator app on your phone rather than email when possible: Using an authenticator app like Microsoft Authenticator or Google Authenticator is more secure than receiving codes via email. Authenticator apps generate time-based one-time passcodes (TOTPs) that are not transmitted over email or SMS, reducing the risk of interception by hackers. Additionally, authenticator apps often support biometric authentication and push notifications, making the verification process both secure and convenient.
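The time-based one-time passcodes mentioned above are an open standard, and the math behind an authenticator app fits in a few lines. This is a simplified sketch of the HOTP/TOTP algorithms (RFC 4226 and RFC 6238, SHA-1 variant); the function names are mine, and real apps add details like clock-drift tolerance.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current 30s window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period, digits)
```

Because the code is computed locally from a shared secret and the current time, nothing usable is ever sent over email or SMS for a hacker to intercept.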
6. Use strong antivirus software: Modern cybersecurity threats are evolving rapidly, with AI being used to create more convincing phishing attacks, deepfake scams, and malware. Strong antivirus software installed on all your devices can identify and block suspicious activity before it reaches you, safeguarding you from malicious links that install malware and potentially access your private information. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.
7. Trust your intuition and verify: If something feels “off,” like you notice unusual phrasing or strange background noises, trust your instincts. Don’t let fraudsters create a false sense of urgency. If you receive a communication claiming to be from a financial institution, call that institution directly using the official number from its website.
8. Monitor your accounts: Review account statements regularly for suspicious transactions. Don’t hesitate to request a credit freeze if you suspect your data has been compromised.
SUBSCRIBE TO KURT’S YOUTUBE CHANNEL FOR QUICK VIDEO TIPS ON HOW TO WORK ALL OF YOUR TECH DEVICES
So, is this all a bit scary? Absolutely. But the good news is, you’re now armed with the knowledge to fight back. Stay alert, take those protective steps I mentioned seriously, and remember that a little healthy skepticism goes a long way in this new age of AI fraud. Let’s make it much harder for these AI-powered scams to succeed.
Do you think tech companies are doing enough to protect us against AI-powered scams and fraud? Let us know by writing us at Cyberguy.com/Contact
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter
Ask Kurt a question or let us know what stories you’d like us to cover.
Copyright 2025 CyberGuy.com. All rights reserved.