It started with Vincent Maglione, the chief information security officer at New York-based Grasshopper Bancorp, receiving some questions from fellow employees about using artificial intelligence.
This was back in January, more than a year after OpenAI’s ChatGPT made its public debut in November 2022 and captured the public’s imagination. His experience using different generative AI tools in his personal time, coupled with these questions, convinced him of two things: The $836 million bank could benefit greatly from the technology, but it first needed to think about how to use it appropriately and prudently.
He decided to draft an AI use policy and procedures manual without being asked and mentioned it to his superiors. The timing was perfect: almost immediately after, he says, executives talked about AI’s application and potential in an all-hands meeting and asked to see the policy for revisions and approval. Grasshopper’s AI use document has been in effect since February, he says. The policy covers AI tools and technology, including generative AI applications where users input prompts to create new content such as text, images, audio and video. Large language models like ChatGPT are a subset of generative AI.
“I think every bank — every corporation — should start utilizing AI,” Maglione says. “Putting guidelines in place as early as you can enables your employee base and gives them permission that they can use [the technology] and learn together.”
He says all financial institutions could benefit from an AI acceptable use policy. But according to Bank Director’s 2024 Technology Survey, sponsored by Jack Henry & Associates, only 33% of respondents said their bank had developed a policy. Sixty-one percent said their institution had not. This comes as executives are increasingly curious about AI tools and exploring use cases within their institutions, but bank regulators have yet to roll out formal guidance or regulations.
An AI policy should define what AI is and articulate the institution’s vision, objectives and values for implementing the technology, says Lee Wetherington, senior director of corporate strategy at Jack Henry. It should also speak to implementation elements like infrastructure, engineering, security, regulatory compliance, data management, model risk management and communications. He says a policy should include AI use cases, including authorized applications and tools in use, forbidden applications and the approval process for new uses.
The lack of policy means “many boards may not have achieved clarity and consensus on what they mean by AI,” Wetherington says, adding that if they haven’t been “called out by regulators” on the absence of the policy, they may not feel much urgency to create these policies and procedures.
The bank’s board and senior management should support the policy creation process, given AI’s risks, says Kim Phan, a privacy, data security and regulatory compliance attorney and a partner at Troutman Pepper. A financial institution will also want to bring in its legal, compliance and audit teams for their input and expertise; these teams will be involved in implementing and overseeing the policy. She adds that the policy development process should include key stakeholders, like the business lines that would deploy the AI tool in question.
Maglione at Grasshopper modeled his draft on existing guidance and commentary from regulators, along with the European Union’s AI Act, which was one of the only pieces of AI legislation at the time. Phan points out that Colorado has passed the only comprehensive AI law in the United States, which financial institutions may want to study, given other states may adopt similar laws. She also recommends they look at the AI governance framework published by the National Institute of Standards and Technology, or NIST. The U.S. Department of the Treasury is among the government agencies that must begin addressing AI and recently released its AI plan.
Amid a lack of clear standards for the financial industry, Grasshopper’s policy reflects its values, standards and customer focus; Maglione didn’t want it to read as if AI wrote it. One important decision was prohibiting employees from using generative tools on customer data, a decision affirmed by others in the information security industry who said it was the safest and easiest way to protect client information.
“All the efficiency gains on the back end are great, but if one client’s account number gets out, to me, that’s a failure. The senior leadership understands that as well,” Maglione says. To that end, the policy defines both customer data and corporate data; employees may use AI tools only on the latter.
He also expects that the bank will need to update the policy or procedures to reflect changes in AI’s capabilities and applications. But the rapid pace of change in the environment isn’t daunting to Maglione; it only underlines the need he sees for banks to have realistic policies that permit and encourage employees to use AI in permissible ways.
To get the bank more comfortable with AI tools, he installed security tools that fence in the bank’s customer data, and he is training cohorts of employees to use AI tools, including generative AI. Employees are experimenting with these tools in their business lines and workflows, with Maglione tracking their learnings and efficiency gains.
Phan says it’s OK for banks to explore AI use cases without an AI acceptable use policy, but they should create one alongside those use cases before any live deployment of AI. Of respondents in Bank Director’s 2024 Technology Survey who had discussed allocating budget or resources to AI, 84% said their bank is exploring fraud detection or prevention use cases, 67% customer service, 67% back-office efficiencies and 61% targeted marketing. Financial institutions should also think about how their current vendors could enable AI technology within the tools and platforms the bank uses, which means updating vendor management and third-party risk management policies to include acceptable use considerations.
Phan adds that it’s also OK for banks to prohibit using certain technologies, based on their risk assessment and relative sophistication. She points out that many firms ban work communications on personal devices and that many institutions used to avoid social media. But institutions should be learning and thinking about the technologies’ capabilities and potential use and misuse all the same.
“At some point, banks are going to hit a critical moment where AI is going to be so ubiquitous and consumers are going to have such a high expectation that banks are utilizing it that they won’t be able to avoid it anymore,” she says. “But I don’t think we’re there yet.”