
When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games

Han, The Anh; Perrett, Cedric; Powers, Simon T.

Authors

The Anh Han

Cedric Perrett

Simon T. Powers

Abstract

The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user. Consequently, users take the risk that such agents act in ways opposed to the users' preferences or goals. It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. Here we formalise this by using the methods of evolutionary game theory to study the viability of trust-based strategies in repeated games. These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating. Unlike classic reciprocal strategies, once mutual cooperation has been observed for a threshold number of rounds they stop checking their co-player's behaviour every round, and instead only check it with some probability. By doing so, they reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative. We demonstrate that these trust-based strategies can outcompete strategies that are always conditional, such as Tit-for-Tat, when the opportunity cost is non-negligible. We argue that this cost is likely to be greater when the interaction is between people and intelligent agents, because of the reduced transparency of the agent. Consequently, we expect people to use trust-based strategies more frequently in interactions with intelligent agents. Our results provide new, important insights into the design of mechanisms for facilitating interactions between humans and intelligent agents, where trust is an essential factor.
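The strategy outlined in the abstract can be illustrated with a minimal sketch: a player in a repeated prisoner's dilemma who pays an observation cost to verify the co-player's action every round until a threshold number of consecutive cooperations has been seen, and thereafter only checks with some probability. The parameter names (`trust_threshold`, `check_prob`, `obs_cost`) and payoff values are illustrative assumptions, not taken from the paper itself.

```python
import random

COOPERATE, DEFECT = "C", "D"

def trust_based_payoff(opponent_moves, trust_threshold, check_prob,
                       obs_cost, rng, R=3, S=0, T=5, P=1):
    """Play a repeated prisoner's dilemma against a fixed move sequence.

    The player observes (at cost obs_cost) every round until it has seen
    trust_threshold consecutive cooperations; after that it only verifies
    with probability check_prob, otherwise it assumes cooperation.
    """
    payoff = 0.0
    believed_coop = True      # current belief about the co-player's action
    consecutive_coop = 0      # observed consecutive cooperations
    for move in opponent_moves:
        my_move = COOPERATE if believed_coop else DEFECT
        # Standard prisoner's dilemma payoff matrix (T > R > P > S).
        if my_move == COOPERATE:
            payoff += R if move == COOPERATE else S
        else:
            payoff += T if move == COOPERATE else P
        # Pay the observation cost only when actually checking.
        trusting = consecutive_coop >= trust_threshold
        if not trusting or rng.random() < check_prob:
            payoff -= obs_cost
            believed_coop = (move == COOPERATE)
            consecutive_coop = consecutive_coop + 1 if believed_coop else 0
    return payoff

rng = random.Random(0)
always_coop = [COOPERATE] * 50
# With check_prob=1.0 the strategy checks every round, like an
# always-conditional reciprocator; lowering check_prob saves observation
# costs against a consistently cooperative partner.
tft_like = trust_based_payoff(always_coop, 3, 1.0, 0.5, rng)
trusting = trust_based_payoff(always_coop, 3, 0.1, 0.5, rng)
print(tft_like, trusting)
```

Against an always-cooperating partner, the trusting variant earns strictly more because it skips most verification costs once trust is established; the paper's evolutionary analysis asks when such strategies can invade and persist in a population.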

Citation

Han, T. A., Perrett, C., & Powers, S. T. (2021). When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games. Cognitive Systems Research, 68, 111-124. https://doi.org/10.1016/j.cogsys.2021.02.003

Journal Article Type Article
Acceptance Date Feb 6, 2021
Online Publication Date Apr 8, 2021
Publication Date 2021-08
Deposit Date Apr 17, 2021
Publicly Available Date Apr 9, 2022
Print ISSN 1389-0417
Publisher Elsevier
Peer Reviewed Peer Reviewed
Volume 68
Pages 111-124
DOI https://doi.org/10.1016/j.cogsys.2021.02.003
Keywords Trust; evolutionary game theory; intelligent agents; cooperation; prisoner's dilemma; repeated games
Public URL http://researchrepository.napier.ac.uk/Output/2762563

Files

When To (or Not To) Trust Intelligent Machines: Insights From An Evolutionary Game Theory Analysis Of Trust In Repeated Games (accepted version) (860 Kb)
PDF

Licence
http://creativecommons.org/licenses/by-nc/4.0/

Copyright Statement
Accepted version licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.
