The world is abuzz with artificial intelligence (AI) and advanced AI chatbots. Unfortunately, regulation rarely keeps up with the breakneck pace of change in the technology world. This leaves many financial institutions wondering how to take advantage of the seemingly endless possibilities of AI without ending up in regulatory hot water.
Even if your financial institution is not ready to dive headfirst into this enticing world, your employees may be curious about what the hype is all about and exploring it on their own. Some of your third parties may be exploring these possibilities as well. Here are three things every financial institution should be doing today to address the future impact of AI:
1. Updating the financial institution’s acceptable use policy to address AI.
Given the rising popularity and curiosity associated with AI, it is critical to provide employees with guidance about your financial institution’s stance on the use of AI. Some areas your financial institution should address in your Acceptable Use Policy and IT program include:
• Restricting the use of customer data within AI programs without the prior knowledge and approval of the information security department.
• If public AI sites such as ChatGPT are accessible to employees, defining what activity is permissible on these sites. Examples may include drafting communications or policies, brainstorming ideas, or analyzing publicly available information.
• Requiring that all AI outputs be reviewed to verify that the information is accurate.
2. Understanding how AI intersects with current regulation.
Although financial regulatory agencies have yet to publish any regulation specific to AI, the United States government has been pushing for guidance. The first installment came from the National Institute of Standards and Technology (NIST) in the form of the Artificial Intelligence Risk Management Framework, published in January 2023. In less than 35 pages, it offers a resource to help organizations that design, develop, deploy, or use AI systems manage the associated risks. The framework gives readers a method for evaluating and assessing AI risks through four common risk management functions: govern, map, measure, and manage. In addition, the appendix provides a useful summary of how AI risks differ from those of traditional IT systems.
Another resource that addresses AI is the Office of the Comptroller of the Currency’s Model Risk Management Handbook. On page 4, the handbook describes how users should evaluate systems using AI/machine learning and notes that these systems may be considered models. It goes on to state that “even if your financial institution determines that a system using AI is not a model, risk management should be commensurate with the level of risk of the function that the AI supports.”
3. Gaining an understanding of how third parties use and manage AI risks.
Given the dramatic potential for change presented by AI, many technology providers are carefully deliberating how to incorporate AI into their existing systems or use it to launch new products. It is important that bank boards and management teams understand what their third parties are doing and how those providers are developing appropriate risk management programs.
Longstanding financial institution model providers, in areas such as the Bank Secrecy Act or asset and liability management, have a deep understanding of model risk management guidance and often supply certification reports that give your financial institution comfort over their models’ inner workings. However, many technology vendors incorporating AI into their products may not be as well versed in financial institution regulation and may not view their products as models, leaving them with nothing to show you. This is why it is critical to talk to third parties early, understand their future plans, and begin the discussion about risk management and assurance.
Like so many other developments in this rapidly changing world, AI presents new and unique risks, but there is no reason financial institutions cannot set up effective risk management programs to properly identify and monitor them. The key is keeping an eye on the horizon and taking incremental steps to continue evolving and maturing your financial institution’s risk management program.