When AI breaks bad


SOURCE: AXIOS.COM
SEP 18, 2021

Illustration: Annelise Capossela/Axios

A new report on the state of artificial intelligence warns that the technology has reached a turning point and that its negative effects can no longer be ignored.

The big picture: For all the sci-fi worries about ultra-intelligent machines or wide-scale job loss from automation — both of which would require artificial intelligence that is far more capable than what has been developed so far — the larger concern may be about what happens if AI doesn't work as intended.

Background: The AI100 project — which was launched by Eric Horvitz, who served as Microsoft's first chief scientific officer, and is hosted by the Stanford Institute for Human-Centered AI (HAI) — is meant to provide a longitudinal study of a technology that seems to be advancing by the day.

  • The new update published on Thursday — the second in a planned century of work — gathers input from a committee of experts to examine the state of AI between 2016 and 2021.
  • "It's effectively the IPCC for the AI community," says Toby Walsh, an AI expert at the University of New South Wales and a member of the project's standing committee.

What's happening: The panel found AI has made remarkable progress over the past five years, especially in natural language processing (NLP) — the ability of AI to analyze and generate human language.

  • The experts concluded that "to date, the economic significance of AI has been comparatively small," but the technology has advanced to the point where it is having a "real-world impact on people, institutions, and culture."

The catch: That means AI has reached a point where its downsides in the real world are becoming increasingly difficult to miss — and increasingly difficult to stop.

  • "All you have to do is open the newspaper, and you can see the real risks and threats to democratic principles, mental health and more," says Walsh.

Between the lines: The most immediate concern about AI, then, is what will happen if it becomes cemented in daily life before its kinks are fully worked out.

  • Companies have already begun employing OpenAI's massive GPT-3 NLP model to analyze customer data and produce content, but big text-generating systems have had persistent problems with encoded bias. A new paper released this week found that the biggest models frequently regurgitate falsehoods and misinformation.
  • Walsh points to Australia, which this week announced it will begin experimenting with allowing police in two of its largest states to use facial recognition technology to check if people in COVID-19 quarantine are remaining at home.
  • "It's already been implemented without any debate, even though we know that facial recognition carries serious risks of bias, especially for people of color," he says.

Context: Australia's move is an example of what the panel calls "techno-solutionism," the temptation to reach for AI to solve tricky social problems like the pandemic rather than treating it as what it should be: one tool among many.

  • An algorithm used to determine who gets a bank loan or insurance might have what the panel calls "an aura of neutrality and impartiality" because it appears to be the product of a machine rather than a human being, but the decisions it makes "may be the result of biased historical decisions or even blatant discrimination."
  • "The racism, sexism, ageism in our society is going to be part of the AI systems we create," says Walsh.
  • Unless we realize that fact, AI could inadvertently launder existing social ills, hiding human biases inside the black box of an algorithm.