
Artificial intelligence holds great potential for financial institutions, but executives must manage its risks carefully.
The prospect of such powerful technology has left many executives wondering what other institutions are doing and how they can benefit from early adopters’ best practices.
To that end, Bank Director’s Banking & Fintech Editor Kiah Haslett recently spoke with Vincent Maglione, chief information security officer at Grasshopper Bank, and Chris Nichols, director of capital markets for SouthState Bank, to discuss their institutions’ journeys in exploring and using artificial intelligence.
The two institutions are very different. Grasshopper Bank, which is the bank unit of New York-based Grasshopper Bancorp, began operations in 2019 as a digital-only bank and had $868 million in assets at the end of 2024. On the other end of the spectrum, SouthState Bank is the regional bank unit of Winter Haven, Florida-based SouthState Corp., which has $65 billion in assets and 343 branch locations following the close of a recent deal.
But executives at both institutions are exploring how they can safely deploy AI — including generative AI, which produces content based on user prompts — across different bank applications. Maglione and Nichols shared how they addressed issues related to governance, data management, security and employee training. The transcript below has been edited for brevity, clarity and flow. To watch the full conversation, access the webinar recording here.
BD: How did your bank start its AI journey?
Maglione: The AI journey we’re on today started early in 2024. Our CTO came to the IT team and told us to start thinking about AI and how we could use it in the future. I wrote up a policy and a procedure for AI acceptable use because I thought we should put a framework in place before we start empowering ourselves with AI. He said, “This is very nice but a little heavy-handed; let’s talk about it next month.”
The next day, we had an all-hands meeting for the whole bank, and every other word out of our CEO’s mouth was AI. My CTO came back to me and said, “Vincent, let’s work on this policy. Let’s get this going as quickly as we can.” After that, we started a pilot program to teach employees to use generative AI tools.
Nichols: SouthState started a couple of years ago by standing up an AI Governance Committee and a working group. From there, we looked at applications — like ChatGPT — and then identified use cases. Both of those are important to track. So now we have an inventory list and we conduct a materiality test — basically scoring on the risk side — and are trying to figure out, “Does it fit our use cases and our priorities?”
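The inventory-and-scoring process Nichols describes can be sketched in code. This is purely an illustration with hypothetical names and weights; SouthState’s actual scoring criteria are not disclosed in the conversation, so the factors and formula below are assumptions.

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """Hypothetical entry in an AI use-case inventory."""
    name: str
    application: str       # tool or vendor behind the use case
    data_sensitivity: int  # 1 (public info) .. 5 (customer data)
    decision_impact: int   # 1 (drafting aid) .. 5 (credit/fraud decisions)
    fits_priorities: bool  # does it fit the bank's stated priorities?

    def materiality_score(self) -> int:
        # Illustrative scoring: multiply data exposure by decision impact.
        # A real governance committee would calibrate its own weights.
        return self.data_sensitivity * self.decision_impact


inventory = [
    AIUseCase("Meeting summarization", "Copilot", 2, 1, True),
    AIUseCase("Fraud triage", "internal model", 5, 5, True),
]

# Rank the inventory so higher-materiality use cases get heavier review.
for uc in sorted(inventory, key=lambda u: u.materiality_score(), reverse=True):
    print(f"{uc.name}: {uc.materiality_score()}")
```

The point of a structure like this is simply that every application and use case is tracked in one place and ranked consistently, so review effort follows risk.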
BD: What concerns did you or others have about using AI, and how did you address those?
Maglione: One of my main concerns is the lack of regulatory direction on how to govern this. We’re taking pieces from other regulations like the EU’s General Data Protection Regulation, the Gramm-Leach-Bliley Act and the EU’s AI Act. The National Institute of Standards and Technology has an AI framework that has been evolving.
For us, being a little more on the heavy-handed side is the way to go right now: making sure our employees understand the risks that come with using AI and how to use it properly for their purposes.
Nichols: Security and privacy are at the top of our list. What happens to our data? What happens to our prompts? Where does that go? Where does it sit? … Beyond that, we’re concerned about accuracy and toxicity [when a model outputs harmful content].
BD: How are bank employees using AI?
Maglione: [Once Grasshopper] put our policies and procedures in place, I pulled together a small group of about 20 people with more technical backgrounds who were likely to use AI, and gave them generative AI training. We’re not pushing it on anybody; they didn’t have to use it going forward. We had weekly check-ins with them to see how they were using it and whether it was feasible for the bank.
We got great feedback from that initial pilot group; they saved about 80 hours a month. We created another small group, and efficiency increased exponentially. We recently rolled it out to the entire bank, and we’ve been getting great feedback from everybody. We did it this way to make sure we empowered our employees to work more efficiently while also making sure they’re using AI safely and ethically.
Nichols: One is our own internal large language model, Tate, which we created within the Microsoft environment and ringfenced to our standards. We use Tate [as a knowledge base to answer employee questions] and keep adding policies and procedures, all that information. Searches that used to take the average employee about 10 to 12 minutes now return answers in about 10 seconds. It cites sources, and users can thumbs-up or thumbs-down answers. … It records the number of uses as we roll it out, and the ROI is pretty clear.
Two is [Microsoft’s] Copilot; that’s important for daily productivity within the Microsoft 365 environment. Whether it’s setting meetings for 20 people at a time, transcribing notes or summarizing emails, interacting with Microsoft 365 is becoming more and more of a skill [set] our bankers need, whether it’s creating PowerPoints, what have you. Getting into fraud, getting into customer service — those are higher-risk use cases. While we and some of our vendors already use traditional AI for fraud, we will be using genAI to unlock some of that power for fraud and cybersecurity, but those are higher-risk areas we’re just now learning about.
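The knowledge-base pattern Nichols describes for Tate — answering employee questions from internal policies, citing the source, and logging thumbs-up/thumbs-down feedback — can be sketched at a very high level. The retrieval here is naive keyword matching with made-up document names; SouthState’s actual system runs inside a ringfenced Microsoft environment with a real language model, so everything below is a simplified assumption.

```python
from dataclasses import dataclass


@dataclass
class PolicyDoc:
    """A single entry in a hypothetical internal policy library."""
    title: str
    text: str


# Hypothetical in-memory document store standing in for the bank's
# ringfenced policy and procedure library.
docs = [
    PolicyDoc("Wire Transfer Procedure",
              "Wires over 10,000 dollars require dual approval."),
    PolicyDoc("AI Acceptable Use Policy",
              "Never paste customer data into public AI tools."),
]

feedback_log = []  # (question, cited source, thumbs_up) tuples


def answer(question: str) -> tuple[str, str]:
    """Return the best-matching doc's text plus its title as the citation.

    Naive retrieval: pick the doc sharing the most words with the question.
    A production assistant would use embeddings and an LLM instead.
    """
    q_words = set(question.lower().split())
    best = max(docs, key=lambda d: len(q_words & set(d.text.lower().split())))
    return best.text, best.title


def record_feedback(question: str, source: str, thumbs_up: bool) -> None:
    """Log user feedback, mirroring the thumbs-up/down mechanism described."""
    feedback_log.append((question, source, thumbs_up))


text, source = answer("what approval do wires require")
record_feedback("what approval do wires require", source, True)
```

The two details worth noting are the citation returned with every answer, which lets employees verify the source, and the feedback log, which gives the bank usage and quality metrics for the ROI case Nichols mentions.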