Experiment on Human-Robot Deception
We performed an experiment to investigate how a robot's deception of a person influences that person's perception of the robot's anthropomorphism, likability, and intelligence.
4 months (part-time)
Jan. 2015 – Apr. 2015
Project Manager, Researcher
A research project for a human-robot interaction course
Priya Ganadas, Amalya Henderson, Elizabeth Ji
There are many scenarios in which robot deception could be useful:
- A robot could use deception to make malicious individuals believe they have succeeded in kidnapping or hacking it, while it remains loyal to its owner.
- A military robot could hide from an enemy by planting fake tracks and leading them astray.
- A robot could trick a human into thinking it is not a robot but another animal, such as a squirrel, to maintain a less conspicuous presence in everyday life.
- A robot might build a better relationship with a human by seeming more human-like and likable, encouraging empathy.
How will humans react when robots deceive them?
The answer to this question can help designers anticipate the outcomes of using robots in the deception scenarios above and improve how well humans and robots interact.
- make the robot appear more humanlike
- increase the human's empathy for the robot
- make the robot seem smarter
- make the human more critical of the robot
- Will the observer’s ratings of the robot’s performance decrease overall after they realize that the robot can deceive (the observer becomes more critical of the robot)?
- Will the robot’s deception of its human owner increase the observer’s rating of how anthropomorphic, intelligent, or likable the robot is?
We tested 16 different characteristics including:
- elegance of robot movement
Control: The robot does not deceive the researcher.
Experimental: The robot deceives the researcher.
Participant’s Godspeed ratings of the robot’s anthropomorphism, likability, and intelligence
Participant's Ruse Task Questionnaire ratings of the robot's verbal performance, used to measure how critical they were of the robot
Experiment and Robot Design
Deception Scenario Design
Ruse Task Design
Robot Operation: Wizard of Oz
Analysis and Results
Twenty-nine sessions, five rejected
We conducted 29 sessions with a convenience sample of Carnegie Mellon University students; I led 10 of them.
Because of technical issues and prior knowledge of robotics that participants had not initially disclosed, 5 participants' results were excluded from the analysis. The final count was 12 participants in the control condition and 12 in the experimental condition.
Two-Way ANOVA on the Ruse Task Questionnaire Results
We found no evidence that the presence of robot deception increases the observer's criticality of the robot.
The two-way ANOVA did not yield significant results (the p-value of 0.55 exceeded the 0.05 significance threshold), so we found no significant difference between the control and experimental groups in how they rated the robot's emotive capabilities before and after the researcher left the room.
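To illustrate the kind of analysis described above, here is a minimal sketch of a balanced two-way ANOVA in Python. The factors and all ratings are invented placeholders, not our study's data; the factor structure (condition × rating timing) mirrors the comparison described in the text.

```python
# Sketch of a balanced two-way ANOVA. Factor A: condition (control vs.
# experimental); Factor B: timing (before vs. after the researcher leaves).
# All ratings below are hypothetical, NOT the actual study data.
import numpy as np
from scipy import stats

cells = {
    ("control", "before"):      [4, 5, 3, 4, 5, 4],
    ("control", "after"):       [4, 4, 3, 5, 4, 4],
    ("experimental", "before"): [3, 4, 4, 5, 3, 4],
    ("experimental", "after"):  [4, 3, 4, 4, 5, 3],
}

def two_way_anova(cells):
    """Classic balanced two-way ANOVA; returns {effect: (F, p)}."""
    conditions = sorted({k[0] for k in cells})
    timings = sorted({k[1] for k in cells})
    n = len(next(iter(cells.values())))         # per-cell sample size
    a, b = len(conditions), len(timings)
    data = np.array([[cells[(c, t)] for t in timings] for c in conditions])
    grand = data.mean()
    # Sums of squares for main effects, interaction, and error
    ss_a = n * b * ((data.mean(axis=(1, 2)) - grand) ** 2).sum()
    ss_b = n * a * ((data.mean(axis=(0, 2)) - grand) ** 2).sum()
    cell_means = data.mean(axis=2)
    ss_ab = n * ((cell_means - grand) ** 2).sum() - ss_a - ss_b
    ss_err = ((data - cell_means[:, :, None]) ** 2).sum()
    df_a, df_b = a - 1, b - 1
    df_ab, df_err = df_a * df_b, a * b * (n - 1)
    ms_err = ss_err / df_err
    results = {}
    for name, ss, df in [("condition", ss_a, df_a),
                         ("timing", ss_b, df_b),
                         ("interaction", ss_ab, df_ab)]:
        f = (ss / df) / ms_err
        results[name] = (f, stats.f.sf(f, df, df_err))  # upper-tail p
    return results

for effect, (f, p) in two_way_anova(cells).items():
    print(f"{effect:12s} F = {f:5.2f}  p = {p:.3f}")
```

A p-value above 0.05 for the condition effect, as in our results, would mean the ratings give no evidence that deception changed criticality.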
One-Way ANOVA on the Godspeed Questionnaire Results
The robot’s deception of its human owner significantly increases the observer’s rating of how human-like, lifelike, and intelligent the robot is.
After performing a one-way ANOVA on each characteristic, three characteristics showed statistically significant differences at the 0.05 significance level: human-like (p = 0.02), intelligent (p = 0.05), and lifelike (p = 0.03). For the remaining characteristics (K < 2), the analysis of variance did not yield a significant F-ratio.
There are potential issues with script deviations and researcher expressions.
We recruited people we knew, but no experimenter conducted the experiment with someone they knew.
Possible Confounding Variables
- Significant findings may have been based on sympathy for the robot rather than the deceptive behavior.
- Poor ratings could be caused by the quality of the robot’s technological implementation.