Recent research from Aalto University has revealed that merely believing an AI is assisting them can improve people's performance, even when no AI exists. This is a placebo effect: performance improves simply because people expect it to when they think AI is helping them.
In the study, participants performed a letter-recognition task twice: once on their own and once with the supposed help of an AI system.
They were divided into two groups; one group was told the AI system was reliable and would help improve their performance, while the other group was informed it was unreliable and would likely hinder their performance.
Surprisingly, both groups performed better when they believed an AI was involved, even though, in reality, there was no AI aiding them.
The task required participants to match letters appearing on a screen at varying speeds. All participants, regardless of group, completed it faster and more attentively when they thought an AI was assisting them.
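For illustration, here is a minimal sketch of a speeded letter-matching trial loop of the kind described. The letter set, trial count, and scoring below are assumptions for the sake of the example, not the study's actual protocol.

```python
import random
import string
import time

# Hypothetical sketch of a speeded letter-matching task: the letter set,
# trial count, and scoring are illustrative assumptions, not the Aalto
# study's actual protocol.

def run_trials(n_trials=10, letters=string.ascii_uppercase[:6]):
    """Present pairs of letters; the participant types 'y' if they match."""
    results = []
    for _ in range(n_trials):
        a, b = random.choice(letters), random.choice(letters)
        start = time.monotonic()
        answer = input(f"Do '{a}' and '{b}' match? (y/n): ").strip().lower()
        reaction_time = time.monotonic() - start
        correct = (answer == "y") == (a == b)
        results.append((reaction_time, correct))
    return results

if __name__ == "__main__":
    data = run_trials()
    mean_rt = sum(rt for rt, _ in data) / len(data)
    accuracy = sum(c for _, c in data) / len(data)
    print(f"Mean reaction time: {mean_rt:.2f}s, accuracy: {accuracy:.0%}")
```

In a sham-AI condition, the same loop would simply be presented to participants as "AI-assisted" while behaving identically, which is what makes the design a placebo test.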
This study not only highlights the expectations people have of AI but also indicates how strong these beliefs are. Even when participants were told the AI might worsen their performance, they still expected some form of enhancement, showing the difficulty in reducing people's trust in AI capabilities.
A follow-up experiment conducted online replicated the findings of the initial study. Participants were also asked about their expectations when performing tasks with AI, and most held a positive view of AI's potential to assist them, regardless of their general skepticism about technology.
These findings raise challenges for how AI systems are evaluated. The placebo effect could skew studies testing the effectiveness of AI technologies: measured gains may reflect how much people believe in the technology rather than the technology itself.
The research implies a need for placebo-controlled studies in the field of human-computer interaction to ensure that evaluations of AI systems are not biased by users' expectations. This is crucial for accurately assessing the impact of AI systems on task performance.
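As a rough sketch of what such a placebo-controlled evaluation could look like, the snippet below compares each participant's completion time working solo against working with a sham "AI assistant" that provides no real help. The data here are simulated and the effect size is an assumption; a reliable paired difference would indicate an expectation effect rather than a technology effect.

```python
import numpy as np
from scipy import stats

# Simulated placebo-controlled comparison: each participant is measured
# solo and again under a sham "AI assistant" label (no real assistance).
# All numbers below are made up for illustration, not from the study.
rng = np.random.default_rng(0)
n = 30
solo = rng.normal(loc=5.0, scale=0.8, size=n)            # seconds per trial, solo
sham_ai = solo - rng.normal(loc=0.3, scale=0.4, size=n)  # slightly faster under sham AI

# A paired t-test asks whether the within-participant speedup under the
# sham-AI label exceeds chance; a significant result would point to a
# placebo (expectation) effect, since the "AI" did nothing.
t_stat, p_value = stats.ttest_rel(solo, sham_ai)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```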
Image: DIW-Aigen