Audits attempt to clean up AI bias


SOURCE: AXIOS.COM
OCT 16, 2021

Illustration: Annelise Capossela/Axios

AI algorithms employed in everything from hiring to lending to criminal justice have a persistent and often invisible problem with bias.

The big picture: One solution could be audits that aim to determine whether an algorithm is working as intended, whether it's disproportionately affecting different groups of people and, if there are problems, how they can be fixed.

  • But the field of algorithmic auditing is still in its infancy, and until the AI field is governed by meaningful regulations, it will be challenging to carry out audits worthy of the name.

How it works: Algorithmic audits — usually conducted by outside companies — involve examining an algorithm's code and the data used to train it, and assessing its potential impact on populations through interviews with stakeholders and those who might be affected by it. (A simplified example of one such statistical check follows the list below.)

  • But unlike financial audits — which have clearly defined objectives codified in established law — the field of algorithmic auditing "is a bit disjointed," says Liz O'Sullivan, CEO of the AI auditing company Parity.
  • "There are a lot of different companies that are trying to do different kinds of approaches to validating whether a model is performance compliant or discriminatory," she adds.

Between the lines: Financial audits exist in part to open up the black box of a company's internal operations to outside investors, and ensure that a company remains in compliance with financial laws and regulations.

  • In the case of algorithmic audits, however, the actual workings of AI can be a black box to the company itself: unless explainability is built into the foundation of an algorithmic model, even its creators can lose track of how it reaches its decisions.
  • That can make it difficult for auditors to determine how and where an algorithm might be going wrong, let alone "translate that mathematical representation into words that a general person can understand," says Henry Jia, data science lead at the software company Excella.
  • Untangling that process often involves going back to the original data, as the sketch after this list illustrates. "A majority of the biases that become part of the model is inherited from problems in the data," adds Jia.
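
As a hedged illustration of the kind of data check Jia describes, the sketch below computes per-group representation and positive-label rates in a made-up training set; skew in either is a common source of the inherited bias he points to. All names and numbers here are hypothetical.

```python
import pandas as pd

def data_bias_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Per-group row counts and positive-label rates in a training set."""
    return df.groupby(group_col).agg(
        n=(label_col, "size"),
        positive_rate=(label_col, "mean"),
    )

# Hypothetical training data: group B is underrepresented and rarely
# labeled positive, so a model trained on it inherits both skews.
train = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "label": [1] * 60 + [0] * 30 + [1] * 2 + [0] * 8,
})

print(data_bias_report(train, "group", "label"))
#        n  positive_rate
# group
# A     90       0.666667
# B     10       0.200000
```

A report like this fixes nothing by itself, but it points auditors to where a model's training data diverges from the population it will be used on.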

Details: Algorithmic audits can help companies screen their AI products for flaws that may not be apparent at first glance.

  • Utah-based HireVue, which uses AI to help screen millions of job applicants as part of its recruitment services, contracted with outside auditors to evaluate its hiring process for early-career applicants.
  • The audit turned up issues with how the model can handle different accents in processing video interviews, notes Lindsey Zuloaga, HireVue's chief data scientist.

Yes, but: HireVue was criticized by some outside observers who accused the company of using the results of the audit as marketing material rather than an opportunity for fundamental change. (HireVue says it was transparent with the audit by making the full report public.)

  • Part of the problem, notes O'Sullivan, is that outside of specific areas like lending and hiring that have well-established anti-discriminatory regulations, many of the fields now touched by algorithms are essentially "like the wild, wild West."
  • Audits can help, but more fundamental change will require new laws around AI and updates to regulations put on the books decades before anyone was working with algorithms.

"We have this incredible new vocabulary of fairness, new ways of thinking about it, that are simply not in the law."

— Liz O'Sullivan, Parity
