According to recent research published in Scientific Reports, most AI users are overly influenced by AI advice, even when the AI openly admits its limitations. For the study, 558 participants took part in two experiments, and the results showed that people tend to trust AI blindly, especially in uncertain situations. One of the researchers, Colin Holbrook, called the findings concerning and said society needs to understand the risks of overreliance on AI, especially as the technology continues to improve day by day.
The researchers designed experiments that mimicked high-pressure, uncertain real-world military decisions. Participants were first shown images of innocent civilians and then an image of a drone strike. They faced a zero-sum dilemma: failing to identify and eliminate enemies could result in civilians dying, while mistakenly targeting civilians as enemies would also kill innocent people. Participants were then shown images containing enemy or civilian symbols for just 650 milliseconds, with an AI assisting them in identifying the symbols. They were given two opportunities to confirm or change their choices while the AI offered encouragement.
In the first experiment, the researchers tested whether a physically present robot would influence trust more than a virtual one; in one scenario, participants worked alongside a full-size, human-like android standing 1.75 meters tall. The results showed that the robot's physical presence had little effect on how much participants trusted its advice. The second experiment was conducted online with a larger group of participants: half interacted with a highly anthropomorphic virtual robot that displayed human-like behavior, while the other half interacted with a basic computer interface that responded only with text. The results showed that even the basic AI had a significant influence on participants' decision-making.
The results of both experiments showed that participants changed their decisions based on the AI's random advice, with 58.3% changing their decisions in the first experiment and 67.3% in the second. Participants were initially correct about 70% of the time, but their accuracy dropped to around 50% when they followed the AI's unreliable guidance.
When the AI agreed with participants' initial decisions, their confidence rose by 16%; when it disagreed, their confidence dropped by 9.48%. Participants who perceived the AI as more intelligent were more likely to trust its judgment. With the U.S. Air Force already testing AI co-pilots, it is all the more important to understand and address the risks of excessive reliance on AI, especially in military decisions.
Image: DIW-Aigen
Read next: As ChatGPT Evolves, Researchers Uncover Unforeseen Political Leanings in AI Models