‘Three strikes’: Study finds people won’t forgive robot co-workers for repeated mistakes
Ann Arbor, MI — People can lose trust in their robot co-workers after only a few mistakes, results of a recent study show.
Researchers from the University of Michigan asked 240 people to individually work on a task alongside a robot co-worker, or “cobot.” Each robot was programmed to make mistakes at certain points during the collaboration. The robot would then try to repair the resulting broken trust with the participant by using strategies such as explanations, apologies, denials and promises of greater trustworthiness.
After three mistakes, however, the participants lost trust in the robot – and none of its attempts to regain that trust worked.
“Our study’s results indicate that after three violations and repairs, trust cannot be fully restored, thus supporting the adage ‘three strikes and you’re out,’” study co-author Lionel Robert, a professor in the U-M School of Information, said in a press release. “In doing so, it presents a possible limit that may exist regarding when trust can be fully restored.”
Connor Esterwood, study co-author and researcher at the U-M School of Information, adds that the study has at least two major takeaways: Robots need to have mastered a task before any attempt is made to repair trust with human co-workers, and robots need better trust-repair strategies.
The study was published online in the journal Computers in Human Behavior.