
At a time when banks and credit unions are investing more heavily in technology such as artificial intelligence, chatbots and next-gen digital platforms, the nagging question for many is whether the upgrades are successful.
Ask five bankers how they define success for a new core integration or AI tool, and you might get five very different answers. Some banks benchmark themselves against peer groups through consortiums, while others prefer vendor scorecards that track key criteria, with scores aggregated into a weighted total and reviewed periodically.
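A vendor scorecard of the kind described above can reduce to a weighted average. The criteria, weights and scores below are illustrative assumptions, not any institution's actual rubric:

```python
# Illustrative weighted vendor scorecard.
# Criteria, weights and scores are hypothetical assumptions for this sketch.
criteria = {
    # criterion: (weight, score on a 1-5 scale)
    "uptime":                 (0.30, 4.5),
    "support_responsiveness": (0.25, 3.8),
    "delivery_on_time":       (0.20, 4.0),
    "cost_vs_contract":       (0.15, 3.5),
    "security_posture":       (0.10, 4.2),
}

# Sanity check: weights should cover the whole rubric.
total_weight = sum(w for w, _ in criteria.values())
assert abs(total_weight - 1.0) < 1e-9, "weights should sum to 1"

# Aggregate each criterion's score by its weight into one number
# that can be tracked from review period to review period.
weighted_total = sum(w * s for w, s in criteria.values())
print(f"Weighted vendor score: {weighted_total:.2f} / 5")
```

Reviewing the weighted total periodically, as the article describes, then amounts to recording this number each quarter and watching its trend.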
Some track key performance indicators tied to speed or savings. For others, success means customer adoption, system stability or efficiency. Still others rely simply on whether a project “went live.”
“Many banks still default to ‘go-live’ as the primary success metric, but that misses the broader picture,” says Chris Miller, a senior director in Cornerstone Advisors’ delivery channels practice. “Top institutions are beginning to develop internal benchmarks by tracking KPIs over time and using [other tools] for relative context.”
In Bank Director’s 2024 Technology Survey, 68% of respondents said they didn’t measure return on investment for technology projects. What should financial institutions be tracking? Interviews with several industry leaders point to a mosaic of meaningful metrics and a call for greater clarity and discipline.
Different Tools Call for Different Metrics
In the realm of digital transformation, institutions are beginning to monitor adoption rates across online and mobile platforms as a share of total users, as well as the pace at which routine client interactions shift toward digital channels. Net promoter scores, a metric that measures loyalty and satisfaction by asking customers if they would recommend a company to someone else, are increasingly being tracked with a specific lens on digital user experience, helping institutions assess customer sentiment toward mobile apps and online banking tools.
At Michigan State University Federal Credit Union, these metrics are further refined by member segment. The institution tracks adoption rates for fintech partners based on demographic fit. Silvur, a retirement planning tool, is measured among members age 55 and older, while Debbie, a budgeting platform, targets the 18-to-35 age group.
“We measure adoption of services in the traditional sense like you would any standard banking product,” says Benjamin Maxim, the chief technology officer for the $8.3 billion East Lansing, Michigan-based credit union.
Software implementation metrics tend to center on project delivery: whether a system goes live on time, within budget and within scope. An implementation’s success is also tied to whether internal teams use the tool effectively, whether the platform remains stable, and whether the number of post-launch change requests signals sound scoping or a need for rework.
AI deployments often demand more specialized measurement. At MSUFCU, success is tracked by the percentage of loans that are auto-decisioned using a machine learning model developed with Experian. The credit union is currently at 60% and has plans to push toward 80% with an updated model. Chatbot performance is similarly monitored, with 97% accuracy in responses and 79% of conversations fully handled by the chatbot without human intervention.
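The AI figures above reduce to simple rates. A minimal sketch, with made-up volumes chosen only so the resulting percentages mirror those MSUFCU reports:

```python
# Hypothetical volumes; only the resulting rates mirror the figures in the text.
loans_received = 1_000
loans_auto_decisioned = 600       # decisioned by the model with no human review

chat_conversations = 10_000
chat_resolved_by_bot = 7_900      # closed with no handoff to a human agent

# Share of loans the model decides end to end.
auto_decision_rate = loans_auto_decisioned / loans_received

# "Containment": conversations the chatbot fully handles on its own.
containment_rate = chat_resolved_by_bot / chat_conversations

print(f"Auto-decision rate:  {auto_decision_rate:.0%}")
print(f"Chatbot containment: {containment_rate:.0%}")
```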
Oconee Financial Corp. in Watkinsville, Georgia, is piloting AI to augment, not replace, human workers. In one use case, an AI tool extracts data from the documents and financial records a borrower provides as part of a loan application, saving staff more than an hour per file. That time savings adds up, especially when experienced staff can shift their attention from data gathering to analysis.
“We continue to work with a KPI to prove the validity of time saved with AI,” says John Davis, the $646 million Oconee State Bank’s chief innovation technology officer.
The Problem with Industry Benchmarks
Despite growing sophistication in KPI tracking, standardized benchmarks remain elusive. “No standard set of KPIs has been universally adopted,” Cornerstone’s Miller says. “That’s a problem — and an opportunity.”
Most community banks and credit unions lack the tools or internal expertise to collect performance metrics consistently, says Stephen Curry, founder and chairman of Endurance Advisory Partners. Instead, they often rely on vendor-generated reports or manual scorecards, which may not capture the full story.
Financial institutions above $10 billion in assets generally have an advantage because they are more likely to use advanced KPIs and key risk indicators to inform decisions, segment clients and track vendor performance across categories like uptime and support responsiveness.
“The underlying information is underutilized,” Curry says, noting that deeper insights are often buried in business intelligence systems or in third-party dashboards that aren’t fully mined. Where benchmarks exist, they can be illuminating, helping management teams better understand things such as transaction volumes and customer acquisition costs, Curry says.
From Scorecards to Strategy: Better Metrics for the Road Ahead
Rather than defaulting to broad cost-to-income ratios or system uptime, Curry and other industry experts argue for a more intentional approach to performance measurement. A foundational metric that Endurance uses with clients is technology infrastructure spend as a percentage of total assets, which typically benchmarks around 1%.
Another metric Curry favors is technology adoption depth, or the percentage of a system’s features that are actively used by staff and customers. For Curry, it’s one thing to have a robust customer relationship management or core platform; it’s another to make full use of its capabilities.
Institutions could also monitor customer friction throughout the digital journey. This might include abandonment rates during online account opening or mobile enrollment — data points that help identify pain points in user experience. Cross-sell effectiveness can be measured by the number of products sold per digital customer, a useful way to gauge whether digital tools are deepening relationships, not just facilitating transactions.
Vendor performance can be quantified with metrics like uptime, resolution time, and support responsiveness. For banks with more than $5 billion in assets, time to value, or how long it takes a tech investment to deliver measurable outcomes, is becoming an increasingly relevant metric, especially for AI projects, where results should ideally materialize within six to nine months, Curry says.
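Several of the metrics discussed in this section are simple ratios. The sketch below computes three of them; every dollar amount and count is a hypothetical figure invented for illustration:

```python
# All figures are hypothetical, chosen only to illustrate the calculations.
total_assets = 2_000_000_000          # a $2 billion community bank
tech_infrastructure_spend = 19_500_000

features_available = 120              # features offered by the CRM/core platform
features_actively_used = 78           # features staff and customers actually use

apps_started = 5_000                  # online account-opening funnel
apps_completed = 3_600

# Endurance's foundational metric: tech spend as a share of total assets
# (the article cites a benchmark of roughly 1%).
spend_pct_of_assets = tech_infrastructure_spend / total_assets

# Adoption depth: how much of a platform's capability is actually exercised.
adoption_depth = features_actively_used / features_available

# Customer friction: share of applicants who start but never finish.
abandonment_rate = 1 - apps_completed / apps_started

print(f"Tech spend / assets: {spend_pct_of_assets:.2%}  (benchmark ~1%)")
print(f"Adoption depth:      {adoption_depth:.0%}")
print(f"Abandonment rate:    {abandonment_rate:.0%}")
```

Tracked over time, the same three ratios give a management team a compact dashboard for the spend, depth and friction questions the experts raise above.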
“To maximize tech investments, community banks should prioritize accessible, high-impact KPIs, evolving their approach as they scale,” Curry says.
Another approach involves looking at spending as a percentage of operating revenue to monitor long-term success, says Mike Rempel, a senior director at Cornerstone. “Technology isn’t the sole driver of revenue, but if a financial institution is going to make these investments, it should be monitoring the resulting financial improvements,” Rempel adds. If the expected revenue lift doesn’t appear, he says the issue may lie with the technology, or with other factors like ineffective sales teams or liquidity constraints.
Perhaps the challenge isn’t simply which metrics to track, but how to apply them in ways that drive strategic decisions. Maxim says MSUFCU evaluates digital products not just by adoption, but by whether the benefits justify support costs. Sometimes, he adds, a product may underperform financially but still merit investment if it improves the member experience and boosts lifetime customer value, or the total net profit a business can expect to earn from a customer over the duration of their relationship.
In a fast-changing environment, measurement strategies are becoming just as important as the technologies themselves, and many institutions are rethinking how they define, track and ultimately prove the return on their technology investments.