Sometime soon, banks and credit unions are going to begin interacting with digital agents, not just humans. Agentic AI systems, which can act autonomously on behalf of humans, are rapidly moving from futuristic concept to practical reality. Soon, a digital agent could call a bank or credit union to dispute a charge, open an account, or request a wire transfer, all without its human counterpart ever lifting a finger. But this new convenience raises an urgent question: how do financial institutions know who, or what, they’re really interacting with?
“One of the key risks associated with agentic AI is identity,” says Suzanne Sando, lead analyst of fraud management for Javelin Strategy & Research. “Because these agents are intended to act relatively independently on behalf of a customer, it’s imperative that banks be able to determine who is actually behind the interaction. Is this a real human or is it a bot? And then the next determination that needs to be made is, is this bot malicious or is it legitimate and intended to act on behalf of someone else?”
As agentic commerce emerges, the line between customer and machine grows blurrier. And this means the next frontier of fraud and risk management is changing drastically. Banks and credit unions are well-versed in performing know-your-customer protocols; soon they will have to become proficient in KYA — know your agent.
Financial institutions should treat AI agents as digital representatives under existing KYC and anti-money laundering frameworks, according to Chris Nichols, director of capital markets for SouthState Bank, a correspondent bank and subsidiary of $66 billion SouthState Bank Corp. based in Winter Haven, Florida.
“In many ways, AI agents have more in common with humans than with IT resources,” Nichols wrote in a blog, predicting that 2026 is the year when banks will begin interacting with agents. “AI agents will grow, evolve, learn, adapt and age out just like humans. Like humans, each agent comes with a risk profile that needs to be tracked and managed accordingly. Require agents to be tied to an account holder or institution with validated identity documentation. All bank agents get certified and embedded with the certification number once it is moved into production. A bank not only manages a repository of certified agents, but it also maintains a directory so that other banks, employees and customers can check.”
Tying Digital Identities to Real Ones
To manage this emerging risk, banks and credit unions will also need to implement technical controls that anchor agentic interactions to verified human identities. That could mean extending today’s digital identity frameworks, such as device fingerprinting, behavioral biometrics or cryptographic signatures, to include AI agents. In practice, each approved agent might carry a verifiable credential issued by the customer’s bank or identity provider, ensuring that every transaction or request can be cryptographically linked back to a legitimate owner.
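To make the idea concrete, here is a minimal sketch, written in Python with the widely used `cryptography` library, of what that cryptographic link could look like: the agent holds a signing key, the bank holds the matching public key bound to a verified customer, and every incoming request is checked against it. The flow and names are illustrative assumptions, not any particular vendor’s implementation.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# At enrollment, the customer's agent generates a key pair and the bank stores
# the public key alongside the verified (KYC'd) customer profile.
agent_key = Ed25519PrivateKey.generate()
registered_public_key = agent_key.public_key()

# Later, the agent signs every request it makes on the customer's behalf.
request = json.dumps({"action": "dispute_charge", "amount": "42.17"}).encode()
signature = agent_key.sign(request)


def is_from_registered_agent(key: Ed25519PublicKey, payload: bytes, sig: bytes) -> bool:
    """Return True only if the payload was signed by the registered agent key."""
    try:
        key.verify(sig, payload)
        return True
    except InvalidSignature:
        return False


print(is_from_registered_agent(registered_public_key, request, signature))  # True
```

In practice, the public key would likely be registered during onboarding and revoked when the agent is retired or the customer relationship ends.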
Because banks and credit unions have until now operated under the assumption that there is a human on the other end of any digital interaction, the industry needs to adopt new and innovative methods for authenticating identities and purchase intent, Sando says.
“This includes incorporating behavioral biometrics to detect specific behaviors and characteristics of the user, determining whether they are in line with the consumer’s typical behaviors, bot detection to assist in determining the difference between a human and a non-human, device intelligence to further ensure the legitimacy of an interaction, and performing these checks in a real-time environment,” Sando adds. “And as already mentioned, the next key distinction for banks to make is whether the bot is malicious or not, which is certainly a more subtle distinction.”
Nichols wrote that banks should also use AI-based “agent fingerprinting” to confirm the same model/instance is interacting over time, as well as track decision patterns and transaction habits to detect anomalies.
Financial institutions can also require the human or corporate owner of the agent to complete standard KYC and then attach those credentials to the agent.
“This means the agent needs to store a verified link between the agent’s cryptographic ID and the owner’s customer profile,” Nichols wrote.
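One illustrative way to represent that link is a registry keyed by a fingerprint of the agent’s public key, with each entry pointing back to the KYC’d owner and the bank’s certification record, along the lines Nichols describes. The sketch below is a simplified assumption of how such a registry could be structured, not a description of SouthState’s system.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    # Hypothetical record binding an agent's cryptographic ID to a customer
    # profile that has already passed standard KYC checks.
    agent_key_fingerprint: str  # e.g., SHA-256 hash of the agent's public key
    owner_customer_id: str      # internal ID of the verified account holder
    certification_id: str       # bank-issued certification, per Nichols' model
    status: str = "active"      # active, suspended or retired


@dataclass
class AgentRegistry:
    records: dict[str, AgentRecord] = field(default_factory=dict)

    def register(self, public_key_bytes: bytes, owner_id: str, cert_id: str) -> str:
        """Bind a new agent key to a verified owner and return its fingerprint."""
        fingerprint = hashlib.sha256(public_key_bytes).hexdigest()
        self.records[fingerprint] = AgentRecord(fingerprint, owner_id, cert_id)
        return fingerprint

    def owner_of(self, fingerprint: str) -> str | None:
        """Look up which verified customer an active agent acts for, if any."""
        record = self.records.get(fingerprint)
        if record is None or record.status != "active":
            return None
        return record.owner_customer_id
```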
Banks can also create a digital token for each agent that ties it to a specific account owner or cardholder, says Aaron McPherson, founder of AFM Consulting and longtime industry analyst.
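Such a token might, for example, take the form of a signed claim set naming the agent, the account owner it acts for, the actions it is permitted to take, and an expiry. The sketch below uses a simple HMAC-signed payload purely for illustration; a real deployment would more likely rely on an established standard such as signed JWTs and a hardware-protected key.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"bank-held-secret"  # illustrative only; a real key would live in an HSM


def issue_agent_token(agent_id: str, account_owner: str, scopes: list[str], ttl: int) -> str:
    claims = {
        "agent_id": agent_id,
        "account_owner": account_owner,        # the cardholder or account owner it acts for
        "scopes": scopes,                      # e.g., ["read_balance", "dispute_charge"]
        "expires_at": int(time.time()) + ttl,  # grants should be short-lived
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{tag}"


def verify_agent_token(token: str) -> dict | None:
    payload, _, tag = token.rpartition(".")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # token was forged or tampered with
    claims = json.loads(base64.urlsafe_b64decode(payload.encode()))
    return claims if claims["expires_at"] > time.time() else None


token = issue_agent_token("agent-123", "CUST-001", ["dispute_charge"], ttl=900)
print(verify_agent_token(token))
```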
Furthermore, once that is done, banks and credit unions can create user agreements stating that the customer is responsible for any actions taken by their agent. As with victims of Zelle scams, customers whose agents go rogue are likely to contact their bank for restitution. “Some banks will [agree to that] and some banks won’t,” McPherson says.
Ultimately, banks and all businesses that employ agentic AI need to think about the consequences of “putting out technology for consumers who may not be sophisticated enough to know all the risks,” he says.
What If an AI Agent Goes Rogue?
In addition to scams, financial institutions must consider what happens when those same tools are turned against them. The technology that enables AI assistants to manage a customer’s finances can also empower fraudsters to create convincing impostors. Agents will be able to look, sound and behave like the real thing.
“Banks should certainly be aware of the risks that customers will face with agents,” Sando says. “Criminals love to capitalize on new technology and new payment channels, and agentic commerce will be no exception.”
Looking ahead, Sando thinks that financial institutions — and indeed all businesses that use AI agents — can anticipate a rise in impersonation scams, with fraudsters using generative AI to create more sophisticated and convincing imitations of legitimate agents. Some fraudsters will take advantage of low-code platforms to build agentic apps, luring customers with enticing introductory deals, she says.
AI agents are typically programmed with guardrails that prevent them from making unauthorized purchases or transfers, notes McPherson. He adds that a clear audit trail needs to be captured to determine whether agents are complying with each user’s intent.
“This kind of transparency is essential for post-incident analysis: If something goes wrong, banks need to be able to trace whether the issue stemmed from a fraudster manipulating the interaction, a technical failure, or even a merchant misinterpreting the agent’s intent,” McPherson says.
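What that audit trail could capture is sketched below: who the agent is, whom it represents, the intent it claims to be fulfilling, the action it actually took, and the outcome. The field names are illustrative assumptions rather than an industry standard.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class AgentAuditEvent:
    # Illustrative audit-trail entry for reconstructing what an agent did and why.
    timestamp: str          # when the action occurred (UTC, ISO 8601)
    agent_fingerprint: str  # cryptographic ID of the acting agent
    owner_customer_id: str  # the human or business the agent represents
    stated_intent: str      # the instruction the agent claims to be carrying out
    action: str             # what the agent actually requested
    parameters: dict        # amounts, payees or merchants involved
    outcome: str            # approved, declined or flagged_for_review


event = AgentAuditEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    agent_fingerprint="a3f9c1d0",
    owner_customer_id="CUST-001",
    stated_intent="Dispute a duplicate charge on the customer's card",
    action="dispute_charge",
    parameters={"amount": "42.17", "merchant": "Example Coffee Co."},
    outcome="approved",
)

# In practice each event would be appended to a tamper-evident log for review.
print(json.dumps(asdict(event), indent=2))
```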
Banks will also need to invest in technology that detects gen AI-powered fraud such as voice cloning and realistic-looking avatars, he adds. This will likely involve a combination of biometric authentication and behavioral analytics to assess risk, similar to the anomaly detection used to spot unusual agentic AI behavior.
As fraud tactics evolve, detection systems and fraud and security vendors must continuously adapt to recognize new patterns of deception, McPherson says.
These threats may even drive some unexpected trends.
“Ironically, this may lead to a return to the branch for many folks to do their banking,” McPherson says.