
AI bias exists. What can you do about it?

Complex AI algorithms today are black boxes. While they can generally work well, their inner workings are unknown and unexplainable. This presents a potential customer service risk to banks, as last year's gender bias controversy involving the Apple Card and Goldman Sachs showed. What measures can banks take to avoid these situations and ensure their AI is trustworthy and understandable?

ZingPR worked with the CEO and co-founder of Fiddler Labs, AI expert Krishna Gade, on an opinion piece that explained how bias can creep into a bank’s AI algorithms and why banks need to take steps today to make their AI more explainable. The result: a successful placement in a prominent banking and financial services publication.

 

Explainable AI: taking the algorithm out of the black box

by Krishna Gade, CEO of Fiddler Labs

A 2020 report from the World Economic Forum and the University of Cambridge found that nearly two-thirds of financial services leaders expect to broadly adopt AI within the next two years, compared with just 16 percent today.

As technical teams at banks of all sizes evaluate how and where to apply AI in their IT infrastructure, they must consider this: What are the risks associated with ceding control of sensitive and important decision-making to an AI application?

While AI algorithms can handle enormous volumes of information, the inner workings are typically unknown and unexplainable. The result is essentially a “black box” that sometimes produces bias in its pursuit of accuracy. In sensitive applications, such as banking, this trade-off is undesirable – any increase in prediction error could make it more difficult for certain segments of the population to get access to credit and mortgages.

A foundational fix is needed to deal with the risk of bias. “Explainable AI,” which can offer both accuracy and transparency, can help bankers address these concerns.

Take the recent example of the Apple Card, managed by Goldman Sachs. What started as a tweet thread contending gender bias (including from Apple co-founder Steve Wozniak and his spouse) quickly became a brand-damaging spectacle. A number of women reported receiving significantly lower credit limits than their male spouses even though all of their other input factors were the same, or in some cases stronger. Apple ended up with a black eye, and a regulator opened an investigation into Goldman Sachs and its algorithm-prediction practices.

How could this issue have been avoided, or at least handled better? Explainable AI provides that foundational fix I mentioned earlier. It combines both accuracy and transparency in a way that reduces the risks of deploying AI solutions in the banking industry.

To better understand how this aligns with classic development practices, let’s look at the high-level lifecycle of an Explainable AI application:

  1. Examine the data for any potential bias (a code sketch of this check follows the list).

  2. Check individual model predictions for a deeper understanding of model behavior.

  3. Validate every iteration of the model for bias or suspicious performance issues.

  4. Keep track of all models being deployed to production.

  5. Monitor models in production for both performance and fairness.
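
To make step 1 concrete, here is a minimal sketch in Python of the kind of pre-training check a team might run. The column names ("gender", "approved") and the four-fifths threshold are illustrative assumptions, not any particular bank's schema or policy:

    import pandas as pd

    def approval_rate_by_group(df, group_col, label_col):
        """Approval rate in the training labels for each value of a protected attribute."""
        return df.groupby(group_col)[label_col].mean()

    def disparate_impact_ratio(rates):
        """Ratio of the lowest group's rate to the highest; 1.0 means parity."""
        return rates.min() / rates.max()

    # Toy data; a real check would run on the full training set.
    data = pd.DataFrame({
        "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
        "approved": [0, 1, 0, 1, 1, 1, 0, 1],
    })
    rates = approval_rate_by_group(data, "gender", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)                 # F: 0.25, M: 1.00 in this toy example
    if ratio < 0.8:              # the common "four-fifths" rule of thumb
        print(f"Possible label bias (ratio {ratio:.2f}) -- review before training.")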

The data used to train AI is critical – if the data is flawed to begin with, that flaw carries through to everything the AI algorithm does going forward. Therefore, we need a way to check for bias and other issues in both data and models through all stages of the AI lifecycle. We also need human oversight throughout the training process because, without it, you may end up building a black-box AI application.

In the Apple Card example, this issue may have been avoided if humans had visibility into every stage of the AI lifecycle. They could have seen examples in the validation stage of how the model was behaving when a certain input factor was isolated and compared with the global dataset. They also could have had the ability to override an algorithm’s prediction if they felt it was unfair or incorrect.
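
As a rough illustration of that kind of validation-stage check, the sketch below isolates a single input factor and measures how predictions change when only that factor is flipped. The model object, the "gender" feature name and the scikit-learn-style predict() call are assumptions made for the example, not details of the actual Apple Card system:

    import pandas as pd

    def counterfactual_gap(model, applicants, feature, value_a, value_b):
        """Prediction change when only `feature` is switched from value_a to value_b."""
        as_a = applicants.copy()
        as_b = applicants.copy()
        as_a[feature] = value_a
        as_b[feature] = value_b
        return pd.Series(model.predict(as_a) - model.predict(as_b),
                         index=applicants.index)

    # Hypothetical usage during validation, with a fitted regressor
    # `credit_limit_model` and a validation DataFrame `validation_set`:
    # gaps = counterfactual_gap(credit_limit_model, validation_set, "gender", "F", "M")
    # print(gaps.describe())   # a large systematic gap is a signal for human review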

As you plan for and embark on your AI projects, here are some key guidelines for infusing visibility and insight into the final product.

First, make building Explainable AI a priority. This means thinking from the outset about the ethics in your AI, who’s involved in developing your applications, and how to tackle bias.

Then, infuse explainability across your AI development process. That means having the visibility and transparency into how results are produced so you can correct course as needed.
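
One way to build that per-decision visibility is a simple feature-attribution report for each prediction. The sketch below assumes a fitted scikit-learn-style model; the "replace one feature with its training average" approach is a deliberately crude illustration, not a full Shapley-value method or any vendor's product:

    import pandas as pd

    def simple_attributions(model, row, training_data):
        """Rough estimate of how much each feature pushes this single prediction."""
        full_pred = model.predict(row)[0]
        scores = {}
        for col in row.columns:
            perturbed = row.copy()
            perturbed[col] = training_data[col].mean()   # neutralize one feature at a time
            scores[col] = full_pred - model.predict(perturbed)[0]
        return pd.Series(scores).sort_values(key=abs, ascending=False)

    # Hypothetical usage, with a one-row DataFrame `applicant`:
    # print(simple_attributions(credit_limit_model, applicant, training_data))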

Next, ensure that your AI models are continuously monitored once they are deployed. Models can encounter very different data in production, so continuous monitoring is essential to catch model decay and to keep outliers and data drift from skewing their decision-making.
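
As one concrete example of that monitoring, a team might track the population stability index (PSI) of each input feature, comparing live traffic against the training baseline. The bucket count and the 0.2 alert level below are common rules of thumb, not fixed requirements:

    import numpy as np

    def population_stability_index(baseline, live, buckets=10):
        """PSI for one numeric feature; larger values indicate stronger drift."""
        edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))
        live = np.clip(live, edges[0], edges[-1])   # keep out-of-range values in the edge bins
        base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        live_pct = np.histogram(live, bins=edges)[0] / len(live)
        base_pct = np.clip(base_pct, 1e-6, None)    # avoid log(0)
        live_pct = np.clip(live_pct, 1e-6, None)
        return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

    # Hypothetical usage, comparing this week's applicant incomes with training data:
    # psi = population_stability_index(train_incomes, live_incomes)
    # if psi > 0.2:
    #     print("Income distribution has drifted; review or retrain the model.")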

Lastly, implement an AI governance process. This means developing a framework where you can track and manage your models, validate your models for fairness and bias on a regular basis, ensure humans are in the loop to approve or override sensitive decisions, and continuously monitor and improve your models. With these approaches, you can build AI with trust, visibility and insights.

 
