Fair Lending, Artificial Intelligence and Machine Learning

By Martin B. Ellis, Esq.

Evaluating consumers and making fair credit decisions requires creditors to analyze more data than ever in order to truly know their customers and to base decisions on all relevant factors. Fair lending issues include not only illegal disparate treatment, which occurs when a lender bases its lending decision on one or more of the prohibited discriminatory factors covered by the fair lending laws, but also disparate impact.

The disparate impact test for fair lending violations has always had three parts: (i) whether a policy or practice has a disparate impact on a protected group; (ii) whether that disparate impact is business-justified; and (iii) whether there is a less discriminatory way of achieving the same business objective.

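As a concrete illustration of the first prong, compliance teams often begin with a simple disparity metric such as the adverse impact ratio: the protected group's approval rate divided by the control group's approval rate, with ratios below roughly 0.8 (the "four-fifths" rule of thumb borrowed from employment law) commonly treated as a flag for further review. The sketch below is purely illustrative; the approval counts are made up, and a real analysis would rely on the institution's own data and legally vetted metrics.

```python
# Illustrative first-prong screen: compute an adverse impact ratio (AIR)
# from approval counts. All counts below are made up for illustration.
def adverse_impact_ratio(approved_protected, total_protected,
                         approved_control, total_control):
    """Protected-group approval rate divided by control-group approval rate."""
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

air = adverse_impact_ratio(approved_protected=310, total_protected=500,
                           approved_control=420, total_control=500)
print(f"AIR = {air:.2f}")          # 0.74 with these made-up counts
if air < 0.8:                      # rough "four-fifths" screening threshold
    print("flag: potential disparate impact; examine prongs two and three")
```
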
Although reliance on uniformly applied benchmark credit decisions may have been accepted practice, it can become problematic if it results in a disparate impact on a protected class. The traditional response to disparate impact has focused on the second part of the test: identifying a business justification for the disparate lending outcomes. That was largely because banks, regulators, and lawyers lacked a feasible way to implement the third part of the test by robustly searching for less discriminatory alternative (LDA) underwriting models. Enter artificial intelligence (AI) and machine learning (ML).

The AI and ML Solution

Responsible lenders have tried to search for LDAs using older methods. A common one, called "drop one," involves recomputing the disparate impact caused by a model multiple times after dropping variables out of the model one at a time, but this approach seldom uncovers true LDAs.

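To make the "drop one" procedure concrete, here is a minimal, hypothetical sketch in Python. The synthetic data, feature names, and metrics are all assumptions for illustration, not a depiction of any lender's or vendor's actual methodology; a real search would use the institution's own data, candidate models, and legally vetted disparity measures.

```python
# Illustrative "drop one" search: retrain the model with each feature removed,
# then compare approval-rate disparity and predictive power to the baseline.
# All data, feature names, and outcomes here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
features = ["income", "debt_ratio", "credit_history_len", "zip_density"]
X = rng.normal(size=(n, len(features)))
group = rng.integers(0, 2, size=n)      # 1 = protected class (synthetic label)
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)  # 1 = repaid

def adverse_impact_ratio(approved, group):
    """Protected-group approval rate divided by control-group approval rate."""
    return approved[group == 1].mean() / approved[group == 0].mean()

def evaluate(cols):
    """Fit on the chosen columns, return (disparity, predictive power)."""
    model = LogisticRegression().fit(X[:, cols], y)
    approved = model.predict(X[:, cols])        # 1 = approved in this toy setup
    auc = roc_auc_score(y, model.predict_proba(X[:, cols])[:, 1])
    return adverse_impact_ratio(approved, group), auc

all_cols = list(range(len(features)))
base_air, base_auc = evaluate(all_cols)
print(f"baseline model:        AIR={base_air:.3f}  AUC={base_auc:.3f}")

# Drop each variable in turn and see whether disparity improves
# without giving up too much predictive power.
for i, name in enumerate(features):
    air, auc = evaluate([j for j in all_cols if j != i])
    print(f"drop {name:<19} AIR={air:.3f}  AUC={auc:.3f}")
```

Because it removes only one variable at a time, this search explores a tiny slice of the possible alternative models, which is one reason it rarely surfaces a genuine LDA.
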
Over the past few years, however, various AI/ML-powered technologies have emerged that allow lenders to search robustly for LDAs. Both regulators and civil rights leaders are aware of these technologies, and responsible lending institutions, including the likes of Freddie Mac, are increasingly starting to use them.

From a purely legal perspective, continuing to use a benchmark model when LDAs exist potentially puts lenders in violation of part three of the disparate impact test. At the same time, the AI/ML models now available must be explainable with mathematical certainty. The bottom line from this analysis of the current and future state of consumer finance is that responsible, transparent AI/ML technologies must produce fair, accurate, and compliant lending activity that can be explained in understandable terms.

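One way to see what "explainable with mathematical certainty" can mean in practice: for a simple linear scoring model such as logistic regression, each applicant's score decomposes exactly into per-feature contributions, and the features pushing hardest toward denial can inform the principal reasons disclosed in an adverse action notice. The sketch below is a hypothetical illustration with synthetic data and made-up feature names, not a description of any particular vendor's model or explainability method.

```python
# Illustrative exact decomposition of a logistic regression credit decision
# into per-feature contributions to the log-odds. Feature names and data are
# synthetic placeholders used only for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["income", "debt_ratio", "credit_history_len", "recent_inquiries"]
X = rng.normal(size=(2_000, len(features)))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 3] + rng.normal(size=2_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]                                 # one synthetic applicant
contributions = model.coef_[0] * applicant       # exact log-odds contributions
log_odds = model.intercept_[0] + contributions.sum()
print(f"approval log-odds: {log_odds:.3f}")

# Features ranked by how strongly they pushed this decision toward denial;
# the most negative contributors are candidate adverse action reasons.
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:<20} contribution {c:+.3f}")
```
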
Finally, ensuring this technology is, in fact, responsible and transparent requires three due diligence items that should be nonnegotiable with a creditor’s vendor:

  1. Demand Transparency. Every model should come with both risk and fair lending documentation that explains exactly how it works, what data was used, and how fair lending issues have been addressed. If the model doesn’t have that, don’t use it.
  2. Demand Trustworthy Data. Data used to train and run your model must come from reliable sources. Alternative data might be great, but if it hasn't been vetted for compliance, it can present risks.
  3. Demand Fairness. Demand models that have been tested to ensure they are as fair as possible to achieve their business objectives and that they have been subject to proper LDA testing. If not, they might violate fair lending laws and harm not only your customers but the bank itself.

Questions? Comments? Please reach out to Marty Ellis at 410.825.5223 or mellis@shumakerwilliams.com.

The information contained herein is provided for general informational purposes only and may not reflect the current law in your jurisdiction. No information contained in this blog should be construed as legal advice from Shumaker Williams P.C. or the individual author, nor is it intended to be a substitute for legal counsel on any subject matter. This blog is current as of the date of original publication. 

 

By Shumaker Williams

February 22, 2023