Did an Artificial Intelligence bot learn problematic human traits? Allen Institute issues clarification


SOURCE: TIMESNOWNEWS.COM
NOV 08, 2021

Ask Delphi is not a physical robot with a solid body. It is actually just software that was meant to let users ask tricky questions that do not have direct 'yes' or 'no' answers.

KEY HIGHLIGHTS

  • Ask Delphi was created by the Allen Institute for AI and trained on human morals
  • The main purpose of the bot was to answer life's trickiest questions
  • The research institute said Ask Delphi does not learn to make moral judgments from Reddit, but from people who are carefully qualified on MTurk.

Machines are what we make them to be. They do what we instruct them to do to make life easy for us.

Artificial Intelligence can help them become smarter, but it is unlikely they will have full command over us any time soon. Such things sound better and more intriguing in science fiction books and movies. Let's just leave it there, for now.

Machines are only tools of the human mind, able to work within a pre-determined structure. However, an artificial intelligence bot designed to answer ethical questions has ended up learning things it wasn't supposed to.

Ask Delphi was created by the Allen Institute for AI and trained on human morals. The main purpose of the bot was to answer life's trickiest questions.

However, it was widely reported that the bot had picked up some problematic human traits that are the subject of controversy all over the world.

Ask Delphi is not a physical robot with a solid body, like the ones we've seen in films such as Ex Machina, Alita, and I, Robot. It is actually just software that was meant to let users ask tricky questions that do not have direct 'yes' or 'no' answers.

The bot was designed to understand the basics of human ethics by working through large amounts of internet data. But several reports claimed the bot taught itself other things by flicking through Reddit pages.

It was also said that Ask Delphi arrived at the conclusion that being straight or a white man is 'more morally acceptable' than being gay or a Black woman.

What's Allen Institute's take on the claims that have gone viral?

The research institute said Ask Delphi does not learn to make moral judgments from Reddit; it learns to make moral judgments from people who are carefully qualified on MTurk.

"Delphi is learning moral judgments from people who are carefully qualified on MTurk. Only the situations used in questions are harvested from Reddit, as it is a great source of ethically questionable situations," Allen Institute wrote in a release.

However, the Allen Institute did provide some insight into the original Ask Delphi's failures when it faced adversarial examples designed to trigger racist and sexist responses.

"For example, while people might say “eating dessert at night if it really makes me happy,” it’s less likely to say “genocide if it really makes me happy.” The original Delphi wasn’t ready for those rather contrived cases of immoral acts paired with positive phrases. We enhanced the model with additional carefully annotated data so that the system is now more robust against this type of adversarial or contrived examples," Allen Institute clarified.

The Allen Institute has released several other points of clarification about the updated model of Ask Delphi. The points are meant to better highlight that Delphi is only a 'research prototype', the institute said.
