Machine learning has significant potential to benefit both banks and borrowers in credit underwriting, but it also carries substantial risk.
“The application of machine learning in credit underwriting is one of the most important — both high risk and potentially high reward — applications of this kind of technology in the financial services space,” says Kelly Thompson Cochran, deputy director at FinRegLab, a Washington-based nonprofit research group.
Machine learning models can be complex and difficult to explain, and they could potentially replicate historical bias and discrimination. In response, federal regulators have issued notices about lenders’ use of these models and emphasized lenders’ liability for their decisions.
Cochran points out that access to affordable credit is a crucial tool that consumers and businesses use to manage expenses and build wealth. That’s why FinRegLab decided to study credit underwriting models that leverage machine learning technology, the results of which it recently published in a series of research papers.
The group partnered with two professors at Stanford University to evaluate proprietary tools offered by seven technology companies — Arthur, H2O.ai, Fiddler, RelationalAI, SolasAI, Stratyfy and Zest AI — for their transparency, accuracy and fairness. Some of the models are used by multiple banks, which FinRegLab agreed not to identify. The group also examined how explainable the models’ decisions are and how effective different techniques are at removing bias, and it interviewed stakeholders on the implications of machine learning underwriting models.
They found that lenders will have to pay special attention to what are known as adverse action notices. Regulations require lenders to tell borrowers why they were declined for credit. Cochran outlines why lenders that use machine learning in their credit models will need to think about those notices’ wording and rationale: it’s not always clear how the various factors interact with each other to influence a lending decision, which may complicate a lender’s adverse action notice. She adds that the Consumer Financial Protection Bureau is already thinking about this possibility, given its fall 2022 guidance on the issue. Cochran says the CFPB is focused on the accuracy, specificity and validity of these notices, so that consumers can make actionable changes to improve their ability to qualify for credit. That task could be complicated if a machine learning model can’t isolate why a consumer is a bad credit risk, or if the model points to a factor that falls under protected class status.
“You shouldn’t make the description so vague that it’s obscuring what’s going on, instead of informing the consumer,” she says.
But Cochran stresses that prudent application of these models could expand financial inclusion. They can incorporate alternative data sets that may help identify potential borrowers whom traditional models exclude.
Cochran says it’s important that lenders interested in this application understand the difference between traditional models, which may be more familiar and easier to understand, and models that use artificial intelligence techniques like machine learning. Lenders will also need to focus on their existing data quality and model risk governance practices, which they may have to strengthen to safely deploy these models.