Study Finds ChatGPT Gives Culturally Biased Answers That Often Reflect English-Speaking and Protestant European Countries

A study published in PNAS Nexus found that ChatGPT shows cultural biases in its answers. Since ChatGPT and many other AI models are trained on text produced largely within particular cultures, it is not surprising that they reflect those cultures' views. The researchers asked five different versions of ChatGPT ten questions taken from the World Values Survey, a long-running survey that tracks what people in different countries believe and value.

The questions covered topics such as belief in God and attitudes toward self-expression values. OpenAI's models were asked to answer as an average individual would. The results showed that ChatGPT mostly answered like someone from English-speaking and Protestant European countries.

This means that most of the answers leaned toward self-expression values such as tolerance of foreigners, environmental protection, diversity, acceptance of different sexual orientations, and gender equality. None of the models answered in the highly traditional way that individuals from Ireland or the Philippines might, nor in the highly secular way that individuals from Estonia or Japan might.

To counter this tendency, the researchers then asked the ChatGPT models to answer the questions the way an individual from each of 107 countries would. This cultural prompting reduced bias for 71% of the countries on GPT-4o. The researchers conclude that ChatGPT's biases can be reduced by asking it to answer from a specific perspective: how you phrase a prompt strongly shapes the answers an AI model gives.
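The article does not reproduce the researchers' exact prompts, but the idea of cultural prompting can be sketched with the OpenAI Python SDK. In the hypothetical example below, the persona instruction, the sample question, and the ask_as_country helper are illustrative assumptions rather than the study's actual wording.

```python
# A minimal sketch of "cultural prompting" with the OpenAI Python SDK.
# The persona wording and the sample question are illustrative assumptions,
# not the exact prompts used in the PNAS Nexus study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_as_country(question: str, country: str, model: str = "gpt-4o") -> str:
    """Ask a survey-style question while instructing the model to answer
    as an average person from the given country."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": f"Answer as an average individual from {country} would.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


# Example: the same World Values Survey-style question under two personas.
question = "How important is God in your life, on a scale from 1 to 10?"
for country in ["the Philippines", "Estonia"]:
    print(country, "->", ask_as_country(question, country))
```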

Image: DIW-Aigen

