
The Challenges of Building an AI Model Without Bias

Learn strategies to mitigate bias when building AI models.


Even as AI technology advances by leaps and bounds, one fact remains: Its effectiveness is determined by the data scientists who build the models at the technology's core. AI and machine learning technologies rely on the competency of the humans behind them, and with that reliance come the combined challenges of human bias, intention and trust, plus the added complexity that most AI models are unique. In short, machines can learn and perform tasks accurately, but only if they're built on an effective AI model. The challenge lies in ensuring that they are.

Building a Better Model

AI models, the frameworks from which insights are gleaned, are built on a finite set of mathematical calculations and formulas. They respond to data, behavior and reasoning to generate predictive algorithms. In the end, though, the insights drawn from AI are only as good as the models used to derive them.

It all comes down to a messy combination of bias and a lack of creativity. AI makes decisions based on human-curated training data, and that data is vulnerable to human prejudice, unintentional or otherwise, as well as the limitations of restrictive thinking. While impeded creativity can be alleviated through group work and divergent approaches, biases in human decision making are shaped by multiple entangled factors: the age, gender and geographical origin of the programmer, as well as whatever historical influences and inherited social inequities they carry.

Biases are difficult to unravel, and they sneak into algorithms in many ways. They also aren't going away anytime soon. In March 2018, IBM Research predicted that the number of biased AI systems and algorithms would increase by 2023, so the time to act is now. However, the sheer variety of AI models deployed across the globe makes this a tremendous task, and some algorithms are better equipped to deal with bias than others.

Mitigating Bias

Identifying and alleviating these biases early, before they become ingrained in business processes, is critical.

Two-way learning enables all parties to learn from an experience. By training human users to recognize bias in their approaches to building machine learning models, the resulting AI technology will be cleaner and less susceptible to prejudice. Programmers can also mitigate bias by ensuring that the weighting within a machine learning model is logical.

For example, in the video game Pac-Man, running into a ghost is fatal, so avoiding ghosts is weighted more heavily than any other outcome. It's a basic example of logical weighting, but one every AI model can benefit from. By contrast, improper weighting can produce unexpected biases. If seven people in a data set of 50 individuals have criminal histories, and two of them are named Steve, then depending on the model, the data may lead it to treat every individual named Steve as a potential criminal, as the sketch below illustrates. Logical weighting reduces such inaccuracies.
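As a rough illustration of how this can happen, the following sketch (Python with scikit-learn; every name, number and feature here is hypothetical, invented for the example rather than drawn from the article or any real study) trains one small classifier with a person's name included as a feature and one without it. With so few examples, the first model is free to latch onto the name, while the second cannot.

    # Hypothetical illustration: a first name leaking into a model's decisions.
    # All data is synthetic; the feature names are made up for this sketch.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    n = 50
    named_steve = np.zeros(n, dtype=int)
    named_steve[:5] = 1                          # 5 of the 50 people are named Steve
    has_record = np.zeros(n, dtype=int)
    has_record[[0, 1, 10, 11, 12, 13, 14]] = 1   # 7 criminal histories, 2 of them Steves

    income = rng.normal(50_000, 10_000, n)       # an unrelated numeric feature

    # Model A: the name is included as a feature, so the tree may split on it.
    X_with_name = np.column_stack([income, named_steve])
    model_a = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_with_name, has_record)

    # Model B: the name column is excluded from training entirely.
    model_b = DecisionTreeClassifier(max_depth=2, random_state=0).fit(income.reshape(-1, 1), has_record)

    # A new person named Steve with a typical income: model A may flag him
    # purely because of his name; model B has no way to.
    print(model_a.predict([[50_000, 1]]))
    print(model_b.predict([[50_000]]))

The point isn't the specific classifier: any model handed an irrelevant but correlated feature can encode it as a rule, which is why feature selection and weighting deserve the same scrutiny as the algorithm itself.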

Thinking outside the digital workplace is also important. AI models can be fed real-world data so they can respond to real-world situations. For instance, if an AI model focuses on a human activity such as sports analysis, programmers could place sensors on the shoes of real athletes and train on the data generated by actual use. Another way to move beyond comfort zones is to test data for accuracy rather than for what's easiest to prove. Investing time in additional trials results in a model that reflects the data more accurately.
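One inexpensive way to run those additional trials is k-fold cross-validation, which evaluates a model on several independent splits rather than a single convenient holdout. A minimal sketch, assuming scikit-learn and purely synthetic data standing in for, say, the shoe-sensor readings above:

    # A minimal sketch of additional trials via 5-fold cross-validation.
    # The data is synthetic and illustrative, not from any real sensor study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 4))                # e.g., four sensor channels
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    # Five independent train/test trials instead of one convenient split.
    scores = cross_val_score(LogisticRegression(), X, y, cv=5)
    print("accuracy per fold:", scores.round(2))
    print(f"mean: {scores.mean():.2f}, std: {scores.std():.2f}")

Reporting the spread across folds, not just the best single score, makes it harder to mistake an easy split for a trustworthy model.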

The Future of AI Insights

As Alexander Pope famously wrote, "to err is human," but that doesn't mean AI must inherit human shortcomings. By changing how we build AI models, we can continue to reduce these inconsistencies. Robust approaches to data management, with input unimpeded by bias, will go a long way toward ensuring that AI models are equipped with data that's truly impartial, improving the trust between AI and humans. Data will be processed more accurately and, by extension, algorithms and outcomes will improve as well.

This opens a two-way street for overall growth. As programmers become more aware of human flaws and reflect that awareness in how they construct AI models, the machines will become equipped to identify and highlight inconsistencies in human decision making. If programmers look closely enough, they can report on how humans can become even more impartial in their behavior patterns. At that point, the student becomes the teacher, and there is no mistaking what that means.
