
A hacker for hire, Chris Silvers recalls clearly one of the few times he was stopped in his tracks while conducting a penetration test for a small retail company. He’d been hired to try to get into the communications room in the back of the store by impersonating someone from the corporate IT department. Outfitted with his toolbelt and an air of authority, he approached the cash register and informed the cashier that he was there from IT to troubleshoot some problems with the network.
But Silvers made it just halfway through his pitch when the cashier held up a hand and stopped him until she could get her supervisor on the phone. Though it may be a decidedly low-tech example, Silvers says that the decision to get another person involved halted the (albeit phony) break-in attempt.
“A social engineer defines their target and then they go after that target. If you change their target in the middle of their campaign or their process, you have an opportunity to throw them off their game,” says Silvers, the founder and principal consultant of security consulting firm CG Silvers Consulting.
That kind of mindset is especially important to highlight for bank executives, as the financial services industry increasingly comes under attack from fraudsters using deepfake media, made possible by generative AI.
The Treasury Department’s Financial Crimes Enforcement Network (FinCEN) warned of a rise in threats involving deepfake media in an alert issued in November 2024. FinCEN writes that “criminals have used GenAI to create falsified documents, photographs, and videos” to circumvent customer identification, verification and customer due diligence controls. Those can include falsified driver’s licenses or passports, synthetic identities or highly realistic audio or video. “FinCEN analysis of [Bank Secrecy Act] data also shows that malicious actors have successfully opened accounts using fraudulent identities suspected to have been produced with GenAI and used those accounts to receive and launder the proceeds of other fraud schemes.”
Ben LeClaire, a principal with Plante Moran who specializes in cybersecurity, says: “This is not science fiction anymore. This is a real-world threat.” LeClaire says he’s noticed a significant uptick in deepfake fraud attempts beginning in 2023 and escalating in 2024. Enabled by the cheap and easy availability of generative AI, even small-scale individual fraudsters can convincingly impersonate customers and executives and ultimately gain access to sensitive information or funds.
LeClaire notes that smaller community banks that pride themselves on knowing each customer personally may be especially vulnerable to deepfake attempts; fraudsters can steal a photo online, or even a voice message, to fake a highly realistic call to customer service. Larger banks serving customers across multiple states, however, carry their own risks.
“If somebody’s located in Indianapolis at a call center and they’re serving calls and [interactive teller machines] all day from multiple different states, there really is no way for them to inherently know who they’re going to be talking to,” he says.
AI detection software can run during phone calls and video calls and search for anomalies the human eye might not pick up on, like pixelation or abnormal breathing or blinking patterns, LeClaire says. He also recommends that banks perform regular employee training, at least annually, to keep all of their bankers up to date on the threat landscape. And he recommends that bankers join information-sharing groups, such as the Financial Services Information Sharing and Analysis Center, where they can learn about the kinds of threats that their peers are encountering.
FinCEN’s recent alert names nine red flags to look out for, including identification documents with inconsistent photos or device data that’s inconsistent with a customer’s profile. While Silvers agrees with some of these, he believes it’s more important to cultivate a curious mindset, one willing to pause and involve another person when something doesn’t feel quite right.
Still, deepfake attempts will almost certainly target customers more frequently than banks themselves. On this front, banks can issue periodic reminders to customers not to give out their information over the phone, and they can urge customers to adopt multi-factor authentication. They might also consider hosting lunch-and-learn events or webinars where customers can learn more about deepfake fraud attempts, LeClaire says.
While a bank may not be strictly financially responsible when a customer loses money or has their identity compromised by falling for a deepfake, such incidents still pose a reputational risk.
“There’s not a silver bullet,” Silvers says. “As consumers or business people or financial institutions, we can’t sit back and think, ‘All I have to do is this, and I’ll be okay.’ That false sense of security is the most dangerous state of mind you can have, because you’re not aware and you’re not being a little suspicious when you get that unsolicited phone call.”