Online Fraud Prevention Tips for AI Scams Going After Your Finances


Imagine the person you love most.

If that person called you on the phone, claiming that they’ve been taken hostage and a ransom has to be paid, would you second guess whether the call was genuine?

Since the beginning of 2023, there has been a surge of AI voice scams. Scammers mimic the voice of a distressed family member, and most victims can't tell the difference between their loved one's real voice and the AI-generated one because, most of the time, the clone is spot on.

This is only one type of online scam that leverages AI to exploit people for financial gain. And the truth is, there will be more in the future.

What are strong online fraud prevention methods that can protect you against common AI-based internet scams that target your wallet?

Fighting AI Online Fraud With Bots

Online fraud is automated and inexpensive. As a result, institutions are up against a large number of scams, such as:

  • Digital skimming
  • Credential cracking
  • Authorized Push Payment (APP) scams

With digital skimming, a customer’s credit card data is stolen while they make an online purchase.

Credential cracking uses AI to guess someone’s credentials to gain illicit access to their account and steal sensitive information. Scammers typically use it to obtain banking information.

An APP scam is a bank transfer scam: the victim wires money from their own bank account to a scammer's account. But why would anyone do that?

This type of internet fraud often starts with phishing. A scammer calls the victim on their phone, urging them to send money to a specific bank account. Or they might impersonate a bank or a boss via email.

Banks are less likely to reimburse individuals who suffer an APP scam because the victims authorized the transfer to the criminal themselves.

How to Automate Scam Detection?

APP and digital skimming are mostly directed toward financial institutions and e-commerce companies.

Because of the large volume of new AI-based threats, businesses are turning to automated cybersecurity solutions designed to detect and mitigate threats of this type automatically.

Bots give security teams greater visibility into their attack surface and block malicious traffic that could compromise accounts or users' private data.
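As a minimal sketch of how such automated detection often works (this is an illustrative rule-based example, not any specific vendor's product), a bot can track failed login attempts per IP address in a sliding time window and block sources that exceed a threshold, which is one common defense against credential cracking:

```python
from collections import defaultdict, deque
import time

# Illustrative thresholds; real systems tune these per deployment
WINDOW_SECONDS = 60   # look-back window for counting failures
MAX_ATTEMPTS = 10     # failed logins allowed inside the window

class RateLimiter:
    """Blocks an IP after too many failed logins in a sliding window."""

    def __init__(self):
        self.attempts = defaultdict(deque)  # ip -> timestamps of failures
        self.blocked = set()

    def record_failure(self, ip, now=None):
        now = time.time() if now is None else now
        q = self.attempts[ip]
        q.append(now)
        # Discard timestamps that fell out of the look-back window
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) > MAX_ATTEMPTS:
            self.blocked.add(ip)

    def is_blocked(self, ip):
        return ip in self.blocked

limiter = RateLimiter()
# Simulate a credential-cracking bot firing 12 failures in 12 seconds
for i in range(12):
    limiter.record_failure("203.0.113.7", now=1000 + i)
```

Production systems layer many more signals on top (device fingerprints, geolocation, behavioral anomalies), but the core idea is the same: let software watch volumes no human team could.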

Recognizing AI-Based Voice or Video Scams

Voice scams are possible with a sample of only a couple of seconds of a person's voice. Scammers can even spoof caller ID, so the number on your screen may display your loved one's name.

Now, scammers have taken these voice scams to the next level and impersonate people on video calls as well.

Similar to phone call scams, live videos that mimic another person (known as deep fakes) are used to call that person's loved ones and convince them that they're in danger and need money.

How to Spot Deep Fake Videos or AI Voice Calls?

Deep Fake videos (of a mimicked individual) can be indistinguishable from genuine videos of that person. The more videos taken from different angles a scammer has, the more convincing the video will be.

The good news is that unless a person is a well-known celebrity, it's not likely a scammer will have enough material to create a seamless deep fake video.

Therefore, the video you see won’t be perfect, and you can detect AI at work by:

  • Asking the person to wave their hand in front of their face and watching for glitching
  • Looking for any odd shadows or blurry patches

So what about phone call scams that use a clone of the "victim's" voice?

Knowing that such technology exists and that scammers use it for monetary gain is a major step in preventing successful video and phone call scams.

Pay attention to requests that raise a red flag, such as urgent demands for money transfers or things the person wouldn't normally say. Agreeing on a safe word with your loved ones is another way to avoid falling for these scams.

Also, always call the person back on their known phone number to confirm that it was really them on the video or phone call.

Protecting Users Against AI Social Media Scams

Threat actors use deep fake videos and AI voice cloning to scam victims out of their money on social media, or send scam messages generated by ChatGPT to users' private inboxes.

In the context of social media, scammers can use AI to create:

  • Fake profiles featuring AI-generated images of people
  • Fraudulent advertisements using deep fakes of celebrities or experts
  • Phishing messages posted as comments or sent as private messages

For example, an AI-powered social media scam might involve Facebook ads created with deep-fake video technology, or computer-generated images of people posted on fake profiles.

How to Fight AI Scams on Social Media?

Ideally, social media platforms would be more responsible and thorough in detecting fake ads and listings posted on their platforms.

In reality, they have a difficult time distinguishing genuine user profiles from those created using AI.

As a result, users are the ones who have to learn how to recognize illegitimate messages and ads.

AI-based or not, social media scams rely on social engineering, meaning that phishing awareness training can help users avoid them.

Some phishing messages are obvious, such as spam comments on Facebook and Instagram. Admins block specific words and emojis to reduce the number of bot-generated comments.

Others might contain malware-infected links or request personal information.
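The word-and-emoji blocking that admins rely on can be sketched as a simple comment filter. The blocked phrases and emojis below are illustrative placeholders; a real admin would tune the list to their community's actual spam patterns:

```python
# Illustrative blocklists; real admins tune these to observed spam
BLOCKED_PHRASES = {"free money", "crypto giveaway", "dm me to invest"}
BLOCKED_EMOJIS = {"\U0001F4B0", "\U0001F911"}  # money bag, money-mouth face

def is_spam_comment(text: str) -> bool:
    """Return True if a comment contains a blocked phrase or emoji."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return True
    return any(emoji in text for emoji in BLOCKED_EMOJIS)
```

Simple keyword filters like this catch only the crudest bot output, which is why they are paired with the user-side awareness training described above.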

Online Fraud Prevention Is Getting More Challenging

The Internet is riddled with online fraud attempts.

Scams are coming to email inboxes, social media, and websites. Now, criminals use AI to automate them, impersonate the voice of family members, and create deep fakes for video calls.

To make the detection of AI scams even more challenging, bad actors pair them with proven social engineering (phishing) tactics to exploit people for financial gain.

The best ways to detect and fight online scams are by automating online fraud prevention with bots, applying phishing awareness training, and raising awareness for emerging AI scams.

The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of The World Financial Review.