Financial regulators are focused on the implications and risks of artificial intelligence for effective cybersecurity at financial institutions, according to recent reports and speeches.
“Like all technologies, artificial intelligence can be used as a tool or as a weapon,” said Acting Comptroller of the Currency Michael Hsu during a June conference on AI and financial stability, hosted by the Financial Stability Oversight Council. “A lot depends on who is wielding it and for what purpose.”
The speed and increasingly sophisticated capabilities of AI mean that financial institutions should not wait to strengthen their cybersecurity defenses against malicious, offensive use. But smaller financial institutions may hesitate to implement AI technologies without guidance or encouragement from regulators: 43% of respondents to Bank Director’s 2024 Risk Survey said they were waiting for regulatory clarity before applying AI to “significant functions” within their banks.
Cybersecurity concerns increased over the past year for 86% of respondents to Bank Director’s 2024 Risk Survey. They are right to be concerned: AI lowers the barriers to entry for attackers; increases the sophistication, believability and automation of their attacks; and allows them to launch attacks faster, according to a March Treasury Department report on AI-specific cybersecurity threats at financial institutions. The Office of the Comptroller of the Currency featured AI as an emerging risk in its fall 2023 semiannual risk review, and the Federal Deposit Insurance Corp. flagged generative AI as an operational risk for banks in its 2024 risk review. Generative AI is a technology that uses unstructured, human-created information to create new content such as text, images, audio and video based on user prompts. Large language models like ChatGPT are a subset of generative AI.
In his speech, Hsu said the “frequency and scale of ransomware attacks are likely to increase,” thanks to AI-generated malware. He highlighted the importance of banks’ investing in their operational resiliency capabilities to prepare for such attacks.
“I’m actually at a point in my cybersecurity career where I’m scared,” says Chris Silvers, the founder and principal consultant of security consulting firm CG Silvers Consulting. “And it takes a lot to scare me in cybersecurity.”
Silvers recently conducted a penetration test for a multinational client using deepfake, or manipulated, audio. He uploaded a public, three-minute video of the client’s CEO to a site that analyzed the audio and generated voice recordings of whatever Silvers typed: a short conversation of six questions asking for information. The site was affordable and easy to use, and it allowed Silvers to edit the recordings so they would sound more natural.
To conduct the test, he spoofed the CEO’s number, called the test target — a division president — and played the recordings that simulated the CEO’s side of the conversation. Because Silvers had prerecorded only certain responses, there was a moment during the call when he panicked at the employee’s reply and wondered whether the CEO’s silence would seem odd, but it didn’t. By the end of the test, Silvers had duped the employee into disclosing sensitive information, and the employee did not report the call as suspicious.
Silvers worries that existing defenses are insufficient against the potential proliferation and sophistication of such targeted, convincing deepfakes. Financial institutions need to think about how to authenticate communication between individuals, and how to verify requests between a customer and the institution. His deepfake experience leads him to urge executives to implement “risk-based” practices that require verification from a third person for requests a bad actor may target, such as password resets, systems access and disclosure of sensitive information.
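That control reduces to a simple rule that can sit in front of any high-risk workflow. The minimal Python sketch below holds the request categories Silvers names until a distinct second person confirms them out of band; every name, category and function in it is hypothetical, not drawn from Silvers’ firm or any bank’s systems.

```python
from dataclasses import dataclass
from typing import Optional

# Request categories a deepfake caller is likely to target.
HIGH_RISK_ACTIONS = {"password_reset", "systems_access", "sensitive_disclosure"}

@dataclass
class Request:
    action: str              # category of the request
    requester: str           # identity claimed by the caller or sender
    approver: Optional[str]  # second person who confirmed out of band, if any

def is_authorized(req: Request) -> bool:
    """Allow routine requests; hold high-risk ones until a distinct
    second person has confirmed them through a separate channel."""
    if req.action not in HIGH_RISK_ACTIONS:
        return True
    return req.approver is not None and req.approver != req.requester

# A spoofed "CEO" call asking for a password reset fails without a
# second approver, no matter how convincing the voice sounds.
print(is_authorized(Request("password_reset", "ceo", None)))             # False
print(is_authorized(Request("password_reset", "ceo", "security_desk")))  # True
```

The point of the design is that authorization never rests on how authentic a voice or a caller ID appears; it rests on a confirmation the attacker cannot fake through the same channel.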
“I am convinced this is a game changer,” he says. “We need to change the way we think about risk management, because AI is prevalent and impactful, especially generative AI.”
The Treasury report highlighted that certain AI-enhanced cyber risks can be managed with existing information technology systems and processes. Additionally, many banks have used AI in their cybersecurity programs for years, including to detect anomalies and analyze user behavior.
“AI tools can help detect malicious activity that manifests without a specific, known signature,” the Treasury wrote, adding that these capabilities are “critical in the face of more sophisticated, dynamic cyberthreats.”
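For a concrete sense of what signature-less detection means, here is a minimal Python sketch using scikit-learn’s IsolationForest, an unsupervised model that learns a baseline of normal activity and scores deviations from it. The session features, thresholds and synthetic data are assumptions for illustration, not a description of the Treasury’s or any bank’s tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline of ordinary sessions: [login hour, MB transferred,
# failed login attempts], clustered around business-hours behavior.
normal = np.column_stack([
    rng.normal(13, 2.5, 1000),  # logins around early afternoon
    rng.normal(40, 10, 1000),   # modest data transfer
    rng.poisson(0.2, 1000),     # rare failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A session that matches no known malware signature but deviates from
# the baseline: 3 a.m. login, large transfer, repeated failures.
suspect = np.array([[3.0, 400.0, 6.0]])
print(model.predict(suspect))            # -1 means "anomaly"
print(model.decision_function(suspect))  # more negative = more anomalous
```

The odd session is flagged because it is unlike the learned baseline, not because it matches a catalogued attack, which is what makes this approach useful against novel, AI-generated threats.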
Financial institutions should also consider how they can leverage defensive AI “to significantly improve the quality and cost efficiencies of their cybersecurity and anti-fraud management functions,” the Treasury wrote. Eighty-one percent of respondents to Bank Director’s Risk Survey said their bank would be open to applying AI technology to cyberattack detection and prevention.
Financial institutions should revisit their cybersecurity policies and practices on an “ongoing basis” because the threat landscape can change daily, says Jason Chorlins, principal of risk advisory services at accounting and advisory services firm Kaufman Rossin. Additionally, they should revisit the efficacy of their cybersecurity training for staff in anticipation of more effective phishing attacks.
Executives will want to understand how the operational model of different AI applications impacts their institution’s policies, procedures and processes from an information security, cybersecurity, data management and third-party risk management perspective, he says.
Going forward, executives at smaller financial institutions will need to figure out how to make sustainable cybersecurity investments with limited budgets and staff, says Kim Phan, a partner at Troutman Pepper who focuses on privacy, data security and regulatory compliance. Indeed, the median cybersecurity budget respondents reported in Bank Director’s Risk Survey was $150,000 for fiscal year 2024, including personnel and technology. Free, high-quality publications like the March Treasury report and materials put out by the National Institute of Standards and Technology, or NIST, could be especially useful for smaller community institutions, she says.
“My message for those smaller entities is that there are a lot of free resources and tools available to them if they are willing to take the time, make the investment and adapt them as appropriate for their business model,” Phan says. “They don’t have to assume that there’s nothing they can do.”