
The Rise of AI-Enhanced Scams 


How To Protect Your Sensitive Information

New online technologies empower users with ever more convenient ways to share information, but that convenience comes at a cost: cyber attacks that can expose your personal and financial information if you're not vigilant.

The Increasing Use of Artificial Intelligence (AI) in Online Scams

With the rapidly growing use of AI technology, it’s only natural that fraudsters would find a way to use this powerful tool for ill-gotten gain. 

What is Artificial Intelligence?

Let’s take a step back and make sure we all understand what Artificial Intelligence, or AI, is. AI is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.

As a field of computer science, artificial intelligence encompasses (and is often mentioned together with) machine learning and deep learning. These disciplines involve the development of AI algorithms, modeled after the decision-making processes of the human brain, that can "learn" from available data and make increasingly accurate classifications or predictions over time.
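
For readers curious what "learning from data" can look like in practice, here is a minimal, purely illustrative Python sketch. The example messages, labels, and simple word-count scoring rule are invented for demonstration; real fraud-detection and AI systems are far more sophisticated.

# Purely illustrative: a toy example of "learning" from labeled data.
# The messages, labels, and scoring rule are made up for demonstration.
from collections import Counter

# Tiny labeled training set: 1 = scam-like, 0 = legitimate
training_data = [
    ("urgent verify your account now or it will be closed", 1),
    ("you won a prize send a gift card to claim it", 1),
    ("click this link immediately to reset your password", 1),
    ("your statement is ready to view in online banking", 0),
    ("thanks for the meeting notes see you tomorrow", 0),
    ("your package was delivered to your front door", 0),
]

scam_words, legit_words = Counter(), Counter()
for text, label in training_data:
    (scam_words if label == 1 else legit_words).update(text.split())

def scam_score(message: str) -> float:
    """Score a message by comparing its words to those seen in each class."""
    words = message.lower().split()
    scam_hits = sum(scam_words[w] for w in words)
    legit_hits = sum(legit_words[w] for w in words)
    total = scam_hits + legit_hits
    return scam_hits / total if total else 0.5  # 0.5 = no evidence either way

print(scam_score("urgent click the link to claim your prize"))   # above 0.5: leans scam-like
print(scam_score("your statement is ready in online banking"))   # below 0.5: leans legitimate

The more labeled examples such a system sees, the better its scores tend to become, which is the basic idea behind "learning" from data.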

The Rise of AI in Scams

The rapid rise in cybercriminals leveraging deepfake technology to create convincing fraudulent content represents a major evolution from traditional scamming methods such as phishing attacks and other familiar online scams. The unintended consequence of generative AI tools is that they empower fraudsters to easily create fake audio or video, making their deceitful campaigns very difficult for potential victims to detect.

Generative AI accelerates DIY fraud

Experian predicts fraudsters will use generative AI to accelerate “do-it-yourself” fraud with a wide range of deepfake content, such as emails, voice and video, as well as code creation to set up scam websites and perpetrate online attacks. Fraudsters may also use generative AI to socially engineer “proof of life” schemes. Using stolen identities, fraudsters will leverage generative AI to create fake identities on social media, then interact online through these new profiles that look like a real consumer. This could dramatically increase the number of fraud attacks. To safeguard customers, companies will likely have to utilize multilayered fraud prevention solutions that “fight AI with AI.”

Types of AI-Powered Scams

  • Voice Cloning Scams: In AI voice scams, malicious actors scrape audio data from a target's social media account, then run it through a text-to-speech app that can generate new content in the style of the original audio. These apps can be accessed online for free and have legitimate, non-nefarious uses. The scammer creates a voicemail or voice note depicting the target in distress and in desperate need of money, then sends it to the target's family members, hoping they'll be unable to distinguish between the voice of their loved one and an AI-generated version.
  • Deepfake Video and Video Call Scams: In one real-world case, thieves used deepfake technology, audiovisual content created with generative artificial intelligence (GenAI) that mimics the voice and likeness of real people, to set up a video call between a duped employee and imitations of the company's chief financial officer and several other corporate executives.
  • AI-Generated Images and Deepfake Scams: In an AI image scam, fraudsters use AI-generated images to create fake profiles, impersonate individuals, or deceive people into believing false information. These images can be used in various online scams, including romance scams, identity theft, or fake social media profiles.
  • AI-Generated Websites: Scammers set up fraudulent websites and social media Pages that try to sell products that do not exist or get users to divulge personal details; some post AI-generated images on stolen Pages, and those images can appear in the Facebook Feed of users who do not even follow the Pages.
  • AI-Enhanced Phishing Emails: Phishing scams have been around for years: scammers send out emails or text messages masquerading as a legitimate company, such as Microsoft, in an attempt to get you to click on a link that leads to a malicious website. From there, a threat actor can inject malware into your device or steal personal information such as a password. Historically, one of the easiest ways to spot these messages has been spelling and grammar errors that a company as prestigious as Microsoft would simply not make in an official email to its customers. Generative AI now lets scammers produce polished, error-free messages, removing that telltale sign.
  • AI-Generated Listings: Scammers can also use AI to create images and descriptions for fake listings as part of an online marketplace scam. For example, they might list an in-demand item for sale and then ask you to pay a deposit to hold the item. Or, the listing could direct you to a different website that they use to steal your payment information. Scammers could also list apartments and homes as part of a rental property scam.

How to Protect Yourself From AI Scams

It's critical to keep your personal and financial information safe from scammers who intend to use it to harm you financially and in other ways. The following are just some of the ways you can protect yourself.

  1. Be cautious and keep your guard up whenever someone contacts you from an unfamiliar source, such as an unknown email address, phone number, or social media profile.
  2. Scammers want you to react quickly. If you feel any pressure at all, pause before you take action. 
  3. Reach out to the person or organization through a more trusted channel, such as the company's official website or its publicly listed phone number.
  4. Call a trusted family member or friend, explain the situation, and ask their opinion on whether it seems like a scam.
  5. Get into the practice of not immediately clicking on links in emails, texts and social media comments or messages.
  6. Avoid using irreversible payment methods. If you use cash, crypto or a gift card, for example, you might not be able to get your money back. When you can, use a payment method, such as a bank transfer, that may be reversible.
  7. Create a secret password or phrase. Agree on a secret password or phrase with your family members and friends that you can use to verify each other's identities. Use something that a scammer won't be able to figure out using people search sites or reading social media posts.
  8. Use unique passwords for all your online accounts and enable multifactor authentication (MFA). This can help keep identity thieves and scammers who break into one of your accounts from logging in to other accounts.
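
In practice, a password manager can generate and store unique passwords for you, but as a simple illustration of what "unique, random passwords" means, here is a minimal Python sketch using the standard library's secrets module. The 16-character length and the character set are arbitrary choices for the example.

# A minimal sketch of generating strong, unique passwords with Python's
# standard library; the length and character set are illustrative choices.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password for each account keeps one breach from exposing the rest.
for account in ("email", "banking", "social media"):
    print(account, "->", generate_password())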

What to Do if You're a Victim of an AI Scam

If you've fallen for a scam, the next steps could depend on the type of scam, but here are a few things you may want to do.

  • Try to get your money back. If you sent a payment using an app, credit card or bank account, contact the company to see if you can stop or reverse the transaction.
  • Report the scam to the FTC. File a report with the Federal Trade Commission (FTC) at ReportFraud.ftc.gov. Reporting scams can help the FTC track trends, warn others about scams and charge scammers with crimes.
  • Report the identity theft to the FTC. If the AI scam was after your identity rather than your money, you can report the identity theft on IdentityTheft.gov. The FTC will create a personalized recovery plan for you based on what happened.
  • Protect your credit. You have the right to add fraud alerts to your credit reports, which alert companies that review the reports that you've been a victim of identity theft and that they should take extra steps to verify your identity. If you add a fraud alert with one credit bureau, that bureau will automatically notify the others. You also have the right to freeze your credit reports for free, which limits access to your credit reports and may keep someone from fraudulently opening a new credit account.
  • Resecure your accounts. Even if you recently created unique passwords for your accounts, you may want to update your passwords again.

It's certainly a shame that alongside wonderful new technologies that can dramatically improve our experiences and our lives, we have to be increasingly vigilant against those who would use those same tools to steal our personal and financial information.

AI-generated scams are a growing problem that will not go away. Stay vigilant, and if a situation doesn't feel right, step away and verify through trusted channels.

At Central Bank, your protection is important to us. Review the questions we don't ask, which other legitimate sources should not ask either.

The information provided in these articles is intended for informational purposes only. It is not to be construed as the opinion of Central Bancompany, Inc., and/or its subsidiaries and does not imply endorsement or support of any of the mentioned information, products, services, or providers. All information presented is without any representation, guaranty, or warranty regarding the accuracy, relevance, or completeness of the information.