What is trust? We may begin by observing that trust is a ubiquitous phenomenon. We put our bodies in the hands of pilots, drivers, and doctors; expect the food we purchase to be safe; happily deposit our children at their day-care centres; and ask complete strangers to help us navigate foreign cities (Baier, 1986; Pettit, 1995). In the extreme instance, soldiers even trust their enemies not to fire at them when they lay down their arms and raise a white flag. With AI, however, we are dealing with something new, and we need to ask whether trust should be extended to these systems.
The theoretical problem of trust concerns whether an AI-relevant theory of trust is feasible and articulable at all. By contrast, the practical problem of trust concerns how we ought to design and engineer our AI systems to address any trust-relevant worries and concerns (Chen, 2021).
The first AI prototype
The Golem (“unshaped form” in Hebrew) is considered one of the earliest AI prototypes. Originally a Jewish myth, it describes an anthropoid figure made of clay brought to life through incantations and rituals. One method to animate the Golem involved inscribing the Hebrew word “emet” (truth) on its forehead. To deactivate it, the first letter had to be erased, changing “emet” to “met” (death).
Image: TCD / Prod.DB / Alamy Stock Photo (A scene from film, Der Golem, 1920)
Trust as a Quaternary Relation
(degree = 4) (Chen, 2021)
In the paradigmatic trust relation, a trustor (A) places trust in a trustee (B) to perform an action (Ø) relative to some goal (G) that makes acting (Ø-ing) desirable (Castelfranchi & Falcone, 1998; Chen, 2021). For instance, I might trust a plumber (Bob) to fix a leaky pipe. In this scenario, I am the trustor and Bob is the trustee. I trust that Bob has the competence to perform the action (Ø) of fixing the leaky pipe. My goal (G) can be characterised as a satisfactory resolution of the plumbing issue.
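To make the four-place structure of this relation explicit, here is a minimal sketch in Python. The class and field names (Trust, trustor, trustee, action, goal) are illustrative labels chosen for this sketch, not notation from the cited literature.

```python
from dataclasses import dataclass

@dataclass
class Trust:
    """A paradigmatic trust relation of degree 4:
    A trusts B to perform Ø relative to goal G."""
    trustor: str  # A: the party who places trust
    trustee: str  # B: the party who is trusted
    action: str   # Ø: what B is trusted to do
    goal: str     # G: the goal that makes Ø-ing desirable

# The plumber example from the text, expressed in this structure.
leaky_pipe = Trust(
    trustor="I",
    trustee="Bob the plumber",
    action="fix the leaky pipe",
    goal="a satisfactory resolution of the plumbing issue",
)
print(leaky_pipe)
```

Treating trust as a four-place relation, rather than a bare two-place "A trusts B", keeps it explicit that trust is always trust to do something for the sake of some end.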
Epistemology is a branch of philosophy that is concerned with the origin, nature, scope, and limits of knowledge, as well as justification, belief, and related notions. For epistemology, the central question of trust is: when is A’s trust in B justified?
Reliance is a prerequisite of trust. It involves the expectation that someone will deliver as expected on something, with at most a negligible risk of an adverse outcome. Risk-assessment theorists do not distinguish between trust and reliance: for them, A trusts B to a degree (m) if A is willing to risk relying on B, where the degree (m) meets or exceeds a probability threshold (n) (Gambetta, 1998). In other words, A trusts B because A has assigned a sufficiently high subjective probability (m) to B Ø-ing.
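On this risk-assessment picture, the account can be stated almost mechanically: trust just is a subjective probability that clears a threshold. A minimal sketch, with the values of m and n supplied purely for illustration rather than taken from the source:

```python
def trusts(subjective_probability: float, threshold: float) -> bool:
    """Risk-assessment account: A trusts B to Ø iff A's subjective
    probability m that B will Ø meets or exceeds the threshold n."""
    return subjective_probability >= threshold

# Illustrative values only: A thinks it 90% likely that B will deliver,
# and treats 0.8 as the confidence level required to run the risk.
m, n = 0.9, 0.8
print(trusts(m, n))  # True: on this account, A trusts B
```

The next paragraphs argue that this cannot be the whole story, since it leaves no room for the difference between disappointment and betrayal.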
For most philosophers of trust, however, the matter goes beyond mere reliance. My smartwatch, on which I rely to track my vital signs, might fail to display my heart rate accurately as a result of a technological malfunction. While I might be disappointed or upset, I will not feel betrayed by my smartwatch. By contrast, if I trust Bob to fix a leaky pipe and he winds up causing further damage to the plumbing instead, I might feel betrayed and demand an apology or even compensation from him.
Disappointment is our reaction to misplaced reliance; betrayal is our reaction to misplaced trust. Trust therefore appears to involve reliance plus some extra factor, one that concerns the grounds on which A relies on B to be willing to perform the desired action (Ø).
Caution: science in progress
With the proliferation of self-driving laboratories, where AI replaces human participants in devising and conducting experiments, social scientists Lisa Messeri and Molly Crockett warn of potential epistemic risks and "scientific monocultures." They caution that placing unwavering trust in AI's objectivity, productivity, and understanding of complex concepts in the scientific research process could result in "producing more but understanding less".
Photo: Marilyn Sargent / Berkeley Lab
Theories of trust characterise this extra factor in different ways. It might involve B having the right motivation (motives-based theory), B possessing the requisite goodwill (will-based theory), A taking a certain stance toward B such that A would feel betrayed if B failed to live up to certain normative expectations (participant-stance theory), or B having a standing commitment to do what A has trusted her to do (commitment theory).
Philosophers of trust are typically looking beyond reliance in search of a complete theory of trust. In addition, at least some accounts of this extra factor motivate a challenge from philosophers who are sceptical about the intentionality of AI systems (Chen, 2021). For instance, the will-based theory requires that B possess (or at least be perceived as possessing) the relevant goodwill. Since only things that have wills can possess goodwill, it follows that we can only trust agents that have wills. More generally, given that many open questions remain about whether AI systems possess volition, autonomy, rationality, intentionality, and consciousness, we should suspend judgment about whether an artificial agent can qualify as a trustee (B) (Johnson, 2006; Himma, 2009). On this view, AI systems lack the relevant intentionality: they can neither take into account any interests of their own, nor act in favour of the interests of the trustor (A) out of care or goodwill toward A, nor experience the possibility of a conflict between A's interests and their own. The sceptic's response to the theoretical problem of trust is therefore that an AI-relevant theory of trust cannot be articulated, because AI lacks intentionality.
The trust paradox
A 2023 study in PLOS One reveals that people often express higher support for using AI-enabled technologies than their actual trust in them. This attitude is most evident in domains like police surveillance, followed by drones, cars, general surgery, and social media content moderation. Factors such as perceptions of AI's effectiveness and “the fear of missing out” drive this paradoxical behaviour.
Photo: Aly Song / REUTERS
In the search for a complete theory of trust, distrust, rather surprisingly, often gets neglected. Since it is possible for A to neither trust nor distrust B, the attitude of distrust is not simply the absence of trust. In addition, trust and distrust are mutually exclusive: A cannot simultaneously trust and distrust B to perform the same action (Ø). Trust and distrust are best thought of as contrary rather than contradictory attitudes in a threefold partition of attitudes, the third attitude being one of neither trust nor distrust (Jones, 1996; Hardin, 1999).
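The logical point can be put as a three-valued rather than a two-valued classification. A minimal sketch, with names chosen for illustration, showing why the absence of trust is broader than distrust:

```python
from enum import Enum

class Attitude(Enum):
    TRUST = "trust"
    DISTRUST = "distrust"
    NEITHER = "neither trust nor distrust"

def is_absence_of_trust(attitude: Attitude) -> bool:
    """Both DISTRUST and NEITHER count as not trusting,
    so distrust is not simply the absence of trust."""
    return attitude is not Attitude.TRUST

print(is_absence_of_trust(Attitude.NEITHER))   # True
print(is_absence_of_trust(Attitude.DISTRUST))  # True
```

Because an agent holds exactly one of these three attitudes toward a given trustee and action, trust and distrust exclude each other without exhausting the possibilities.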
Philosophers of trust often overlook the social, economic, and political climate in which the attitudes of trust, distrust, and neither-trust-nor-distrust emerge. Krishnamurthy (2015) offers a notable exception, arguing that distrust ought to be the default attitude in a climate of oppression, tyranny, and injustice. Indeed, the precise nature of the climate from which AI systems emerge may well provide far stronger grounds for scepticism than worries about the absence or presence of the relevant intentionality in these systems.
The concept of trust has dominated efforts to develop principles and guidelines for the use of AI. These have come from the private sector, national AI strategies, and academic proposals for AI governance regimes, and trust is likewise at the centre of calls from civil society for ethical AI. This discursive trend has been diagnosed as the commodification of trust in the public discourse on AI and society (Krüger & Wilson, 2023).
The commodification of trust is driven by a need for a trusting population of service users to harvest data at scale.
An ideological critique begins with the observation that a wide swathe of AI systems emerge from a climate marked by oppression, tyranny, and injustice. This climate is best described in terms of surveillance capitalism, an ad-based business model in which personal data is collected and commodified by technology corporations for profit (Zuboff, 2019). For instance, Google's collection of behavioural surplus includes e-mails, texts, photos, songs, messages, videos, locations, communication patterns, attitudes, preferences, interests, faces, emotions, illnesses, social networks, purchases, and so on. This behavioural surplus is fed into machine intelligence (AI and other related technological disciplines), transformed into prediction models, and sold to advertisers and data brokers. A famous example of the excesses of surveillance capitalism is the Pokémon Go game. This Google-incubated augmented-reality game uses mobile devices with GPS to locate, capture, train, and battle virtual Pokémon, which appear as if they are in the player's real-world location. Innocent Pokémon Go players are herded to eat, drink, and spend in restaurants, bars, fast-food joints, and shops that pay to be featured.
In AI BICS trust
As a 2023 report by KPMG Australia and the University of Queensland reveals, people in India, China, South Africa, and Brazil — the BICS countries — show less scepticism towards AI. While Finland reports the lowest willingness to trust AI (16%), the BICS countries rank highest (56–75%). They also feel the most optimistic, excited, and relaxed about AI, perceive the most benefits from it, and report the highest levels of AI adoption and use at work.
Photo: Christy Jacob / Unsplash
Given the reach of surveillance capitalism and the sheer extent to which it relies on AI systems to help services anticipate what users will do now, soon, and later, there are healthy grounds for developing strategies of resistance and adopting distrust as a default position. Villani et al. (2018) have suggested that an attitude of distrust by the general public toward AI could stymie innovation, but it remains possible for the general public to rely on AI systems without having to trust them. Given the social, economic, and political climate of oppression, tyranny, and injustice under which AI systems are implemented, it makes sense to err on the side of caution with distrust rather than blind trust. Besides the theoretical and practical problems of trust, there is an ideological problem of trust that both philosophers of trust and the general public ought to recognise in the AI-relevant context: what is the social, economic, or political climate under which AI systems are being designed, and is trust being commodified as a part of these circumstances? ∞
A manuscript-length version of this article is currently under review.
REFERENCES
- Baier, Annette. “Trust and Antitrust.” Ethics, vol 96, no 2, 1986, pp 231–260.
- Castelfranchi, Cristiano, and Rino Falcone. “Principles of Trust for MAS: Cognitive Anatomy, Social Importance, & Quantification.” Proceedings of the International Conference on Multi-Agent Systems (Cat. No. 98EX160), 1998, pp 72–79.
Chen, Melvin. “Trust & Trust-Engineering in Artificial Intelligence Research: Theory & Praxis.” Philosophy & Technology, vol 34, no 4, 2021, pp 1429–1447.
- Gambetta, Diego. “Can We Trust Trust?” Trust: Making and Breaking Cooperative Relations, edited by Diego Gambetta, Blackwell, 1998, pp 213–237.
- Hardin, Russell. “Do We Want Trust in Government?” Democracy and Trust, edited by Mark E Warren, Cambridge University Press, 1999, pp 22–41.
- Himma, Kenneth E. “Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to Be a Moral Agent?” Ethics & Information Technology, vol 11, no 1, 2009, pp 19–29.
- Johnson, Deborah G. “Computer Systems: Moral Entities but Not Moral Agents.” Ethics & Information Technology, vol 8, no 4, 2006, pp 195–204.
- Jones, Karen. “Trust as an Affective Attitude.” Ethics, vol 107, no 1, 1996, pp 4–25.
- Krishnamurthy, Meena. “(White) Tyranny and the Democratic Value of Distrust.” The Monist, vol 98, no 4, 2015, pp 391–406.
- Krüger, Steffen, and Christopher Wilson. “The Problem with Trust: On the Discursive Commodification of Trust in AI.” AI & Society, vol 38, 2023, pp 1753–1761.
- Pettit, Philip. “The Cunning of Trust.” Philosophy & Public Affairs, vol 24, no 3, 1995, pp 202–225.
- Villani, Cédric, et al. “For a Meaningful Artificial Intelligence: Towards a French and European Strategy.” Conseil national du numérique, 2018.
- Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs, 2019.
DR MELVIN CHEN
Dr Melvin Chen is a Senior Lecturer in Philosophy and Faculty Member of the University Scholars Programme at Nanyang Technological University (NTU). His research interests include the philosophy of technology, epistemology, aesthetics, philosophy and literature, the medical humanities, ethics, and metaethics. He holds a BA in Literature from the National University of Singapore (NUS), an MPhil in Ibsen Studies from the University of Oslo, and a PhD in Philosophy from Cardiff University. His first two books — one on aesthetics from a Southeast Asian perspective and another on chess — will be released soon.