By Dylan Croll
Thanks to artificial intelligence, financial scams today are far more sophisticated, and far more believable, than the poorly written emails your grandmother once received from a Nigerian prince promising a windfall.
Experts report that scammers are using artificial intelligence to place convincing FaceTime and phone calls and send convincing emails to unsuspecting victims, posing as prospective lovers, close friends, and even IRS agents.
As a result, experts warn fraud could reach unprecedented levels as it becomes almost undetectable, and they urge Americans to stay vigilant to avoid being swindled.
“This is crime 3.0,” said Haywood Talcove, CEO of LexisNexis Risk Solutions, a data analytics company that offers identity fraud protection, among other services. “It is possible to blow through most of the tools we have set up to protect our financial institutions and our government institutions with the use of this technology — whether it is artificial intelligence, generative AI, or deep-fake technology.”
An AI problem for everyone
Online scams have risen in recent years. According to data the Federal Trade Commission released in February, consumers lost roughly $8.8 billion to fraud in 2022, a 19% increase from the previous year.
Since most online scams go unreported, these numbers don’t reflect the true extent of the problem, according to Kathy Stokes, AARP’s director of fraud prevention.
Stokes said, “We don’t know how big it is, but we know it’s so much bigger than anything we’ve ever seen.”
According to recent FTC data, young people are now more likely to fall victim to fraud than older people.
It’s a problem for everyone, not just older adults, and artificial intelligence has been part of fraud for a long time, Stokes said. “But now the generative AI adds a whole new level of sophistication to it.”
“It’s just when an older adult is targeted, they tend to lose more because the criminals are after the assets they have saved for retirement – an insurance policy if they are widowed, and home equity.”
‘You don’t look like a 25-year-old fraudster’
Criminals are using artificial intelligence in several kinds of scams to deceive their victims more effectively, according to experts.
Adam Brewer, a tax lawyer, told Yahoo Finance that with ChatGPT, criminals can now write more persuasive letters requesting money.
“It’s just more polished now. Essentially, they use computers to write scripts or letters,” Brewer said. “This is going to be a lot harder for the average person to detect.”
In a romance scam, fraudsters pose as prospective lovers and trick victims into sending them money. Using deepfake technology, fraudsters can alter their image and even their voice. This type of fraud is particularly common among lonely older men.
“You don’t look like a 25-year-old fraudster, you look like a 40-year-old attractive female, and everything fits the image, fits the voice,” Talcove said. “That is when you’re looking at one of the more devastating impacts of artificial intelligence.”
Talcove also highlighted ransom fraud, in which people receive a middle-of-the-night call that appears to come from a loved one or close friend asking for money.
“You’re lying in bed at night. Your phone rings and it sounds like your kid, and they’ve been arrested in the Bahamas. You need to send them $5,000 right away,” Talcove said.
‘If you’ve heard of it’
Experts recommend several ways to guard against this kind of fraud.
To prevent ransom fraud, Talcove said, adult children should set up a family password that scammers wouldn’t know, so their parents can verify a caller’s identity before sending money to strangers.
Potential victims should also watch out for messages designed to create a “heightened emotional state,” such as the promise of a big windfall or a new romantic relationship.
“Those things put us into a place in our brain, the amygdala, where it’s hard to come out of it and access logical thinking. Criminals have known this for a long time,” Stokes said. “The tools they’re able to use have become so much better, so we need to pay attention to that. A red flag is an emotional reaction to an incoming communication. That’s the flag. That’s where you should disengage.”
Stokes recommended conducting a reverse image search on social media to verify someone’s identity.
If the person’s photos appear under other people’s names, she said, it is obvious they are trying to deceive and defraud you. But she acknowledged that a reverse image search is less effective “when I’ve created an actual human and I can make hundreds of other humans that don’t exist, but appear real.”
People should be extremely suspicious of supposed government requests for immediate action, Brewer said. Government agencies like the IRS move slowly and are unlikely to contact you first by phone, email, or text.
Even sending letters takes the IRS a long time, Brewer said. “So you can be sure that it’s most likely a scam if someone calls, texts, or emails you, telling you that you must act in the next few minutes, days, or hours, because that’s not the timeframe the IRS operates on.”
In the end, Brewer said, the best defense against fraud is awareness.
“That’s what really separates people from falling victim. If you’ve heard about it, your mind will put it in the fraud category. Now you’re instantly skeptical,” he said. “In contrast, if you hadn’t heard of it, your mind might run and you could send money to someone or do something you’ll regret later.”