The artificial intelligence (AI) revolution is upon us, with the promise of advances such as driverless cars, smart buildings, automated health diagnostics, and improved security monitoring. Many current efforts aim to measure system trustworthiness through characteristics such as Accuracy, Reliability, and Explainability. While these characteristics are necessary, determining that an AI system is trustworthy because it meets its system requirements will not ensure widespread adoption of AI. It is the user, the human affected by the AI, who ultimately places their trust in the system. Trust in automated systems has long been a topic of psychological study. However, AI systems pose unique challenges for user trust. AI systems operate using patterns in massive amounts of data. No longer are we asking automation to do human tasks; we are asking it to do tasks that we cannot. Moreover, AI has the ability to learn and alter its own programming in ways we do not easily understand. The AI user has to trust the AI because of its complexity and unpredictability, changing the dynamic between user and system into a relationship. Alongside research toward building trustworthy systems, understanding user trust in AI will be necessary to achieve the benefits and minimize the risks of this new technology.
Stanton, B. and Jensen, T., Trust and Artificial Intelligence, NIST Interagency/Internal Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=931087 (Accessed September 27, 2021)