People's Trust in AI Systems to Make Moral Decisions is Still Some Way Off

Psychologists warn that AI's perceived lack of human experience and genuine understanding may limit its acceptance for making higher-stakes moral decisions.

Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are being designed to assist humans in making moral decisions grounded in established ethical theories, principles, or guidelines. While prototypes are under development, AMAs are not yet in use, and their promise of consistent, bias-free recommendations and rational moral advice remains unrealised. As machines powered by artificial intelligence grow in technological capacity and move into the moral domain, it is critical that we understand how people think about such artificial moral advisors.

Research led by the University of Kent's School of Psychology explored how people would perceive these advisors and whether they would trust their judgement compared with that of human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions on moral dilemmas.

Published in the journal Cognition, the research shows that people have a significant aversion to AMAs (versus humans) giving moral advice, even when the advice given is identical. This aversion was especially pronounced when advisors - human and AI alike - gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g. adhering to moral rules rather than maximising outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors - human or AI - who align with principles that prioritise individuals over abstract outcomes.

Even when participants agreed with the AMA’s decision, they still anticipated disagreeing with AI in the future, indicating inherent scepticism.

Dr Jim Everett led the research at Kent, alongside Dr Simon Myers at the University of Warwick.

Dr Everett said: "Trust in moral AI isn't just about accuracy or consistency - it's about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs: how to design systems that people truly trust. As technology advances, we might see AMAs become more integrated into decision-making processes, from healthcare to legal systems, so there is a major need to understand how to bridge the gap between AI capabilities and human trust."

Myers S, Everett JAC. People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028
