People's Trust in AI Systems to Make Moral Decisions is Still Some Way Off

Psychologists warn that AI's perceived lack of human experience and genuine understanding may limit its acceptance for making higher-stakes moral decisions.

Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are beginning to be designed to assist humans in making moral decisions based on established ethical theories, principles, or guidelines. While prototypes are in development, AMAs are not yet in use, though they are intended to offer consistent, bias-free recommendations and rational moral advice. As machines powered by artificial intelligence grow in their technological capacities and move into the moral domain, it is critical that we understand how people think about such artificial moral advisors.

Research led by the University of Kent's School of Psychology explored how people perceive these advisors and whether they trust their judgement, compared with human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people do not yet fully trust it to make ethical decisions on moral dilemmas.

Published in the journal Cognition, the research shows that people have a significant aversion to AMAs (versus humans) giving moral advice, even when the advice given is identical. This aversion was particularly strong when advisors - human and AI alike - gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g. adhering to moral rules rather than maximising outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors - human or AI - who align with principles that prioritise individuals over abstract outcomes.

Even when participants agreed with the AMA’s decision, they still anticipated disagreeing with AI in the future, indicating inherent scepticism.

Dr Jim Everett led the research at Kent, alongside Dr Simon Myers at the University of Warwick.

Dr Everett said: "Trust in moral AI isn't just about accuracy or consistency - it's about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs and the design of systems that people truly trust. As technology advances, we might see AMAs become more integrated into decision-making processes, from healthcare to legal systems; there is therefore a major need to understand how to bridge the gap between AI capabilities and human trust."

Myers S, Everett JAC. People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028
