According to new research led by the University of Kent's School of Psychology, AI is not reliable at making moral decisions because it cannot perceive and understand human experiences to their full extent. New AI models called Artificial Moral Advisors (AMAs) are being developed to help people make moral decisions based on moral principles, guidelines, and ethical theories. So far, only prototypes of AMAs exist, and they are not yet being used to make decisions or recommendations involving morality.
The researchers wanted to understand how people perceive these models and whether they are able to trust their judgements and decisions. The study found that although these AI models have some capability to make more morally sound and rational decisions, people are still unable to trust them completely. It also found that humans tend to trust other humans more than AI when it comes to moral advice, even when both are giving the same advice.
This was especially true when the advice rested on utilitarian principles, i.e., doing what is best for the majority. People were more trusting of advice and decisions that protected individuals, particularly in situations where a utilitarian choice could cause them harm. Even when participants agreed with an AMA's advice, they still said there was a chance they would disagree with it in the future.
Image: DIW-Aigen
Read next:
• Researchers Explore How Personality and Integrity Shape Trust in AI Technology
• AI Boosts Productivity But At The Cost of Eroding The Human Mindset