Digital deception: How do humans respond to lying robots?
SOURCE: EARTH.COM
SEP 06, 2024
Earth.com staff writer
Honesty is usually considered the best policy, but there are moments when telling the truth isn't the right choice. Social norms help humans decide when to be truthful and when to bend the truth to spare someone's feelings or avoid harm. But how do these norms translate to robots, which are becoming increasingly integrated into our daily lives?
To explore whether humans are comfortable with robots lying, researchers asked nearly 500 participants to evaluate and justify different types of robot deception.
“I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” explained lead author Andres Rosero, a PhD candidate at George Mason University.
“With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”
The researchers selected three common work scenarios involving robots – medical, cleaning, and retail – and tested three distinct types of deceptive behavior.
These behaviors were categorized as external state deceptions (lying about the outside world), hidden state deceptions (concealing the robot’s capabilities), and superficial state deceptions (exaggerating the robot’s abilities).
In the external state deception scenario, a robot acting as a caretaker for a woman with Alzheimer’s lies, telling her that her late husband will be home soon.
The hidden state deception scenario involves a woman visiting a house where a robot is cleaning, unaware that the robot is secretly filming her.
In the superficial state deception scenario, a robot in a store complains about feeling pain while helping move furniture, prompting a human to ask someone else to take over for the robot.
The team recruited 498 participants, asking each to read one of the scenarios and respond to a questionnaire.
The participants were asked whether they approved of the robot’s actions, how deceptive they considered the behavior, whether the deception was justifiable, and who, if anyone, they believed was responsible for the lie. The researchers then analyzed the responses to identify patterns in how people viewed each type of deception.
Of the three scenarios, participants were most disapproving of the hidden state deception involving the house-cleaning robot that was secretly filming. They rated it as the most deceptive and found it largely unjustifiable.
The external and superficial state deceptions were also seen as deceptive, though less so. Of the two, the superficial deception, in which the robot pretended to feel pain, was viewed more negatively, likely because it seemed manipulative.
Interestingly, participants were most supportive of the external state deception, where the robot lied to the Alzheimer’s patient. Many justified the lie by saying it protected the patient from emotional pain, prioritizing compassion over truthfulness.
While participants could offer justifications for all three forms of deception – some suggested that the hidden state deception might have been for security reasons – most were firmly against the idea of robots lying about their capabilities without disclosure.
More than half the participants who encountered the superficial state deception, where the robot faked pain, also found it unacceptable. When it came to these unacceptable deceptions, participants often pointed the blame at the robot’s developers or owners rather than the robot itself.
“I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities, because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” Rosero said.
“We’ve already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions.”
However, the researchers acknowledged that this study represents just a first step in understanding human responses to robot deception. They noted that future experiments should use more realistic formats, such as videos or roleplay simulations, to better gauge human reactions.
“The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner,” Rosero explained.
“Vignette studies provide baseline findings that can be corroborated or disputed through further experimentation. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these robot deception behaviors.”
The research opens the door to understanding how we navigate the ethics of deception when it comes to emerging technologies like robots and AI.
As robots increasingly interact with humans in caregiving, service, and other industries, it’s crucial to explore how comfortable we are with their behaviors, particularly when they involve deception.
With this study, the researchers have started a conversation about the ethical complexities of robot deception, pushing us to consider not just what robots can do, but what they should do. And as AI and robotic technologies continue to evolve, the need for clear regulations to protect users from potential manipulation becomes ever more pressing.
The study is published in the journal Frontiers in Robotics and AI.