FinXTech: The Intersection of Financial Institutions and Technology Leaders

Tracking the Patchwork of State AI Laws

August 28, 2025

By Polo Rocha


If the advances in artificial intelligence seem dizzying, state lawmakers have been almost as quick to draft laws governing its use.

The patchwork of state laws can be a challenge for banks and credit unions, since talks to create a federal AI framework are still early and may not ultimately rein in state action. Instead, financial institutions must follow states’ quickly evolving demands, as lawmakers guard against privacy risks and AI-based discrimination.

State lawmakers introduced over 1,000 AI-related bills this year, though far fewer passed, according to the National Conference of State Legislatures. Lawyers say next year could be even busier, as out-of-session lawmakers return to statehouses and bring with them a slew of AI measures for the industry to track.

“It’s smart to go ahead and be in front of all of these,” says Robert Maddox, who represents banks at the law firm Bradley. He adds that banks may need to budget more for AI compliance as more states tackle the issue.

The past few sessions offer some clues on where states will go. Action is taking place in both red and blue states, with California, Colorado, Nebraska, Utah and Texas all passing AI measures. There are some exemptions available for banks and credit unions, but lawyers note they’ll need to carefully examine whether and how they apply.

Banks need to be “extremely proactive,” Maddox says. They need to review not just whether their own operations comply with the evolving state frameworks, but also whether the vendors they work with do. That includes reviewing vendor contracts to ensure vendors routinely disclose their uses of AI and whether those uses are compliant, he says.

“It is going to be a wonderful technology to assist financial institutions,” Maddox says of AI, but it’s also “going to be quite a lift” for compliance and risk teams to track the quickly shifting legal picture. 

For example, the debate over state and federal powers is “evolving really rapidly,” says Jules Carter, a lawyer at Moore & Van Allen, potentially setting up a clash later on if the federal government decides to rein in state actions.

That is particularly relevant for federally chartered banks, which are overseen by the Office of the Comptroller of the Currency. The agency has historically argued that some state laws are pre-empted for national banks under the country’s dual banking system — and a similar AI-flavored fight is possible down the line, Carter says.

To help banks and credit unions keep up, FinXTech has compiled a few major themes that state laws cover.

Privacy and Opt-Out Rights
By far the most popular policy approach is protecting consumers’ privacy rights, including their ability to opt out of their data being used in AI tools.

States that have passed related measures include Colorado, Indiana, Connecticut, Kentucky, New Hampshire, Washington, Utah, New Jersey, Montana, California, Minnesota, Florida, Tennessee, Rhode Island, Oregon, Virginia, Nebraska, Delaware, Texas and Maryland, according to a PwC tracker.

One typical example is Connecticut, which this year passed a law that lets consumers opt out of having their personal data processed for “profiling” in automated decisions that have major impacts. That includes access to loans, housing, insurance, jobs, health care services and educational opportunities.

The sheer number of state laws has drawn concern from tech companies, which argue the state-by-state patchwork will stifle innovation. An effort in Congress to impose a 10-year moratorium on state laws failed this summer. The White House, however, has since laid out an AI action plan that recognizes state rights to pass “prudent laws that are not unduly restrictive to innovation.”

AI Discrimination
Colorado has taken the lead on targeting AI discrimination, potentially giving other states an example to follow. 

The state’s expansive AI protections law requires that companies take “reasonable care to protect consumers” from algorithmic discrimination in high-risk areas such as lending or housing. That includes having an effective risk management program in place, plus periodic impact assessments outlining “steps that have been taken to mitigate the risks” of discrimination.

It’s unclear whether other states will follow Colorado’s approach, and its lawmakers have discussed changes to the landmark law. The current law exempts banks from the AI regime if their regulators have a similarly strict one in place, a carve-out that lawyers say is vague and could still sweep in some institutions.

Texas this year passed an AI law that prohibits companies from using AI systems “with the intent to unlawfully discriminate” against protected classes, though lawyers note that the requirement to prove intent could make it less onerous. Federally insured institutions are exempt if they comply “with all federal and state banking laws and regulations.”

As more states take action on AI-based discrimination, banks seem to be “much better positioned to respond” than other industries, says Carter, the Moore & Van Allen lawyer. Preventing discrimination in models is far from a new concept for bank compliance teams, she notes. 

Banks have long had to thoroughly understand risks tied to any models they use, agrees Jeremy Mandell, co-chair of the financial services group at the law firm Morrison Foerster. That’s the case “whether the model is based in an Excel spreadsheet or it’s generative AI,” he says. “The same principles apply.”

Transparency
Understanding complex models could help banks as states look to boost AI transparency. California, for example, will soon require AI developers to publicly share a “high-level summary of the datasets used” to train their generative AI systems.

The requirement doesn’t just cover the tech giants that develop the models, but also companies that make a “substantial modification” to them for their own purposes.

Other state transparency initiatives focus more on customer service. Utah, for example, passed a law that requires companies to “clearly and conspicuously” disclose, if asked, that customers are talking to AI assistants. Maine lawmakers have passed similar requirements.

Employment
States are also limiting usage of AI for employment purposes. Colorado’s broad AI law, for example, covers employment opportunities as a high-risk area that warrants extra precautions against discrimination.

Illinois also approved a law last year making AI-based discrimination in employment a civil rights violation and requiring employers to disclose to staff that they’re using AI for hiring and firing decisions. State officials would investigate any potential violations.

California, meanwhile, is aiming to pass a broader “No Robo Bosses Act” that forbids companies from making hiring or firing decisions without a human involved. Gov. Gavin Newsom vetoed a broad AI bill last year, and a similar fate could be in store for the bill, lawyers note.

But regardless of the outcome, lawyers and advisors don’t expect any slowing of AI-related proposals, in California or elsewhere. That’s particularly the case as talks to create a more unified federal framework are in their early stages and may not come to fruition.

For banks doing business in multiple states, the safest approach is to be conservative, argues Vikas Agarwal, chief technology and innovation officer at PwC. His advice for banks: Design your uses of AI “for the strictest state and work backwards.”

Polo Rocha is a contributing writer for FinXTech.