
Why Explainable AI is the Future of Lending

Key Points
  • Explainable AI is an adaptation of conventional AI that prioritizes transparency, trust, and interpretability in order to provide detailed explanations for decisions.

  • Automated decisioning allows relationship managers to focus on customer-facing activities while also increasing loan volumes and cycle speeds.

The introduction of Apple’s transformative iPhone in 2007 was a moment that changed the future. Like that now-iconic device, the launch of generative AI tools like ChatGPT and Google Bard has captured the public’s imagination, sparking an explosion of applications and experiments across nearly every industry, from business and commerce to art and government.

For many of these use cases, the “how” behind the AI model isn’t particularly relevant. If you ask ChatGPT to compose a wedding invitation in the form of a Shakespearean sonnet, you don’t necessarily need to understand how AI can instantly conjure the right rhyme scheme.

A wedding invitation is one thing; a financial institution’s credit lending decision is quite another. When it comes to such high-stakes decisions, understanding the inner workings of the model is vitally important, even required. For those use cases, explainable AI is the solution.

What is Explainable AI?

As AI continues to rapidly evolve, a key issue has emerged. Many AI models are like “black boxes,” complex and difficult to understand, which can make it hard for people to trust the model and its decisions.

Explainable AI, on the other hand, is a specialized adaptation of conventional artificial intelligence that is designed specifically to provide detailed, understandable explanations for its decisions, actions and outputs. It prioritizes transparency, interpretability and trust above all else. As a highly regulated industry, financial services serves as a well-structured, high-stakes arena for the measured implementation and rigorous evaluation of explainable AI.

In fact, federal regulators have already gone on public record demanding that financial institutions and other regulated entities tread carefully with how they use AI models in their lending operations, with fair lending compliance as the bedrock standard for its ethical usage.

The Consumer Financial Protection Bureau (CFPB), for example, issued a circular in May 2022 advising against the use of complex, so-called “black box” algorithms in making lending decisions. Specifically, the CFPB said that when the technology used to make credit decisions is too complex, opaque or new to explain adverse credit decisions, companies cannot claim that same complexity or opaqueness as a defense against violations of the Equal Credit Opportunity Act.

In April 2023, four federal agencies—the CFPB, the Department of Justice, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission—released a joint pledge “to uphold America’s commitment to the core principles of fairness, equality and justice as emerging automated systems, including those sometimes marketed as ‘artificial intelligence’ or ‘AI,’ have become increasingly common in our daily lives–impacting civil rights, fair competition, consumer protection and equal opportunity.”

Within this regulatory context, explainable AI is an important advancement, offering greater transparency into a model’s inner workings and giving key stakeholders like loan officers, credit analysts, credit committee members, customers, board members and regulators more confidence in the model’s outputs. It also offers financial institutions a host of downstream benefits, including better risk management, greater operational efficiency and an improved customer experience.

Using Explainable AI in Lending

Let’s explore a few ways that explainable AI can help financial institutions optimize their lending operations.

Within commercial and small business lending, lenders can apply explainable AI models to evaluate risk. Rich Data Co. (RDC), an nCino partner, has developed an advanced AI platform that leverages machine learning and alternative data to assess borrower behavior and creditworthiness from a much wider range of inputs than traditional credit sources provide. The platform synthesizes this vast, diverse data into a tailored, multidimensional risk profile for each customer, objectively and without human bias. Most importantly, RDC’s explainable AI methodology also ensures regulatory compliance and ethical practices.

The “explainability” comes in through a dashboard view that RDC refers to as “self-describing decisions.” Within this view, known in the industry as a Shapley feature importance graph, users can see exactly which factors contributed positively or negatively to the overall risk score. Explainable AI can be used in this way to provide robust, self-describing risk grades at any point in the loan lifecycle: at origination, during the annual review or at credit renewal. It is more quantifiable and less subjective than traditional methods of determining credit risk, and it is conducive to a “continuous monitoring” risk management approach in which re-ratings can be scheduled on a recurring basis, such as monthly or quarterly.
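To make the mechanics concrete, here is a minimal sketch of how Shapley-based, per-decision explanations can be generated with the open-source shap library. The model, data and feature names are hypothetical illustrations for demonstration, not RDC’s actual platform:

```python
# A hypothetical sketch of Shapley-based, per-decision risk explanations
# using the open-source shap library. Feature names and data are invented.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative borrower features; a real platform draws on far richer data.
features = ["debt_service_coverage", "utilization_rate",
            "months_since_delinquency", "cash_flow_volatility"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)), columns=features)
y = (X["debt_service_coverage"] - X["utilization_rate"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
borrower = X.iloc[[0]]
contributions = explainer.shap_values(borrower)[0]

# Print each feature's signed contribution to this borrower's score,
# largest effects first: the raw material of a feature importance graph.
for name, value in sorted(zip(features, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name:>26}: {value:+.3f}")
```

Each signed value shows how much a given factor pushed this borrower’s score up or down, which is exactly the kind of per-decision breakdown a “self-describing” dashboard visualizes.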

Not only does explainable AI offer greater accuracy and risk reduction, but the ability to schedule these data pulls and analyses also means that a significant percentage of annual reviews can be automated, freeing relationship managers to focus on higher-value, customer-facing activities.

Explainable AI isn’t just a risk management tool—it can also be deployed to support consumer lending credit decisioning. For example, Zest AI, another nCino partner, utilizes an explainable AI model that digs deeper into the applicant’s credit file, delivering a decision that is far more robust than one based solely on a FICO score. This is particularly valuable for “thin file” applicants and results in up to 25% higher approval rates than traditional credit approval methods. Institutions using Zest AI have also seen increased auto decisioning rates, enabling higher loan volumes and faster loan cycles.

Explainable AI also empowers lenders to monitor and address the “model drift” that can occur in AI and machine learning models over time. Because these models continually learn and adjust as new data enters the system, users need to regularly verify the model’s accuracy and its adherence to the original objectives, as well as the institution’s risk appetite. Such verification is impossible with “black box” AI models; with explainable AI, the underlying data and decision factors remain visible and auditable.
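As one illustration of what drift monitoring can look like in practice, the sketch below computes the Population Stability Index (PSI), a metric commonly used in credit risk to compare a model’s score distribution at validation against recent production scores. The simulated data and thresholds are assumptions for demonstration, not any specific vendor’s methodology:

```python
# A hypothetical sketch of drift monitoring via the Population Stability
# Index (PSI); the score distributions below are simulated.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score sample and a recent production sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # cover out-of-range scores
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e = np.clip(e, 1e-6, None)                 # guard against log(0)
    a = np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, 10_000)   # scores captured at model validation
recent = rng.beta(2.4, 5, 10_000)   # scores from the latest quarter

print(f"PSI = {population_stability_index(baseline, recent):.3f}")
```

A common rule of thumb treats a PSI below 0.1 as stable, 0.1 to 0.25 as worth monitoring, and above 0.25 as a signal to investigate or re-validate the model.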

nCino is Leading the Way

Applying AI to lending processes is nothing new for nCino. In fact, nCino has been at the forefront of AI and machine learning since our launch of nCino IQ (nIQ) in 2019. Today, nIQ’s growing set of capabilities includes Commercial Pricing & Profitability and Automated Spreading, features designed to enhance efficiency and accuracy and provide deeper data insights within the commercial and small business loan cycles.

Explainable AI represents the next step in nCino’s journey of using cognitive technologies to deliver data-driven insights, intelligent automation and industry benchmarks.

AI isn’t a moment; it’s the future.

The industry winners will be those organizations that successfully harness this powerful technology to deliver faster decisions, scale strategically and generate industry-leading productivity and ROI. To do this, we must put the “black box” days of AI behind us and embrace explainable AI as a way to achieve better compliance and more trusted relationships with all key stakeholders.