Why Are We Still Talking About Bias in AI?

AI can be remarkably quick and insightful, but it is not without flaws. Like any learner, the AI you rely on in business should always be improving, and it may make some “bad” decisions along the way. If those mistakes go unrecognized and unmitigated, the outcome can be massive public embarrassment. Beyond embarrassment, it can do serious harm to the individuals, or classes of individuals, affected by the outcomes of the AI’s “bad” decisions.

An inadvertent bias in AI can trace back to the original programming (it’s devilishly hard to cover every eventuality) or to a lack of experience or information. While an algorithm isn’t intrinsically biased, its capacity to make “good” decisions that produce fair outcomes for human beings is only as good as the data we feed it. Is that data complete? Does it represent a diverse population? Is it influenced by any human biases, unconscious or otherwise? These are critical questions that companies operating in the AI realm must ask.

Given AI’s prominent position in business operations for more than a decade, though, why is the evaluation of its ethics still even a discussion point?

Well, because AI, like any software, is imperfect. Its improvement is an iterative journey, and the humans who are still in charge of how AI is used and implemented are, of course, imperfect too.

While many businesses are taking the necessary step of hiring in-house AI ethics teams, relegating ethics discussions to a set group of employees may not be enough to earn the trust of today’s consumers, 40% of whom don’t trust companies to use their data ethically. Appointing an accountable team doesn’t guarantee outcomes that are right and fair for everyone; it likely isn’t possible, for example, to assemble a team as diverse as the population you’re aiming to reach.

Rather than being delegated to one specific team, AI ethics should be part of everything your organization does. A foundation of data ethics can help keep businesses accountable and encourage employees to recognize inequities and act on them in real time.

Put in simpler terms, we cannot make an algorithm accountable for its decisions, but as organizations and humans, we have the opportunity to make ourselves accountable for eliminating inequitable AI. This is an opportunity that we must grasp.

What’s at stake?

Often, when AI produces an unfair outcome, the first question is: What’s the business or legal impact? Many decision-makers today have this question ingrained in their minds, and it is critical to consider all intended and unintended business consequences of the decisions we make.

Integrating data ethics into the fabric of your company culture means rethinking this question. You should not only ask how the technology you deploy influences your business; you should also consider how it affects data subjects and how those data subjects may become victims of bias, because they are the real people, partners, and customers you serve. Producing the fairest outcomes for them should be the highest priority: the goal should be eliminating consequences for the individual rather than merely reducing negative consequences for your business. If you do this successfully, it will benefit you in the long term.

A recent example of AI bias is Twitter’s automatic cropping feature, which estimated what a person might want to see first in a picture and cropped the image accordingly. After user complaints led to an internal investigation, Twitter found that the algorithm was favoring white individuals over Black individuals. Twitter has since discontinued the automatic cropping feature, eliminating the biased algorithm. The company then took it a step further by hosting a contest to uncover other ways the algorithm might be biased, and found that it was also ageist, ableist, and Islamophobic.

A defensive corporate culture might have viewed this discovery as harmful to the company’s AI credibility and led the company to sweep the issue under the carpet. However, Twitter should be commended for being open and transparent about the algorithm’s faults and for taking steps both to stop its use and to educate others in the industry about the limits of AI.

This type of ethical leadership is exactly what we need to see within the industry because whatever the repercussions, we have an obligation to identify, diagnose, and mitigate bias. This sort of transparency fosters consumer trust and ultimately benefits your business and the wider industry ecosystem. 

How does AI become biased? 

Out of the box, your AI system is not intrinsically biased, but it can exhibit bias for a variety of reasons. The most common problem is the data used to train the model: if the original data is not broad, diverse, or accurate enough, the model can produce biased or unfair outcomes, and its results will only hold up for a narrow range of scenarios.
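To make that concrete, here is a minimal sketch of the kind of representation check a team might run on training data before a model is ever built. The column names and data are hypothetical, and a real fairness review would go much further than simple group counts:

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Summarize how well each group is represented in a training set."""
    counts = df[group_col].value_counts(dropna=False)
    return pd.DataFrame({
        "count": counts,
        "share": (counts / len(df)).round(3),  # fraction of all rows
    }).sort_values("share")

# Hypothetical training data with a self-reported demographic column.
train = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "18-34", "55+", "18-34"],
    "converted": [1, 0, 1, 1, 0, 1],
})

# Groups near the top of the report are underrepresented and deserve scrutiny.
print(representation_report(train, "age_group"))
```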

It’s important to have trained staff with a broad range of experience who can spot bias in the results. Most AI systems go into production when they’re “good enough,” but in a company with a strong data ethics foundation, the definition of “good enough” should be informed by thinking beyond the usual parameters.

Not only should you have dedicated team members whose job it is to review and refine your AI algorithms, but people throughout your organization should be aware that AI can produce biased results so they can stay vigilant. For example, a marketer may use an AI-based system to refine communications to consumers. They are likely not trained in the ins and outs of the technology powering the platform, but if data ethics have been ingrained in them, they will be able to review the results the system produces and ask smart questions to ensure their validity. 

Companies should not blindly follow their AI output or automatically treat it as the correct answer or the optimal result. You (and your system) have to actively look for bias: if you’re not aware of it, you can’t hope to correct it. A little skepticism is healthy. We know that AI has the potential to produce biased outcomes, so we should take responsibility for preventing them; we cannot abdicate that responsibility to an algorithm.
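One practical form of that skepticism is to routinely compare outcomes across groups. The sketch below uses hypothetical audit data to compute per-group approval rates and a simple disparate-impact-style ratio; a real review would draw on richer fairness metrics plus domain and legal expertise:

```python
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {group: positives[group] / totals[group] for group in totals}

# Hypothetical audit log of (group, model_decision) pairs.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

rates = positive_rate_by_group(log)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
# A ratio well below 1.0 (for instance, under the commonly cited 0.8 rule of thumb)
# is a signal to investigate, not proof of bias on its own.
```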

What to do next

No AI algorithm is perfect. AI can help us accomplish great things, but only when the appropriate checks are in place. While not necessarily traditional, data ethics principles can be one of the many ways your business helps mitigate bias in AI. These principles should outline what your company believes and how it expects all employees to act. By holding each person in your business accountable — from the CEO who makes business-level decisions down to the intern responsible for posting on social media — you’ll create a culture where ethics are a part of all that you do, including the AI models that you utilize.

Hopefully, all companies will strive to have appropriate controls in place to avoid, identify, diagnose, and mitigate AI bias — before it can do any harm.

John Story is General Counsel and Chief Data Ethics Officer at Acoustic.
