Google's New Guidelines for Gemini AI Contractors Spark Misinformation Concerns

A new internal guideline that Google has handed down to its contractors has raised concerns that the Gemini AI chatbot may become more prone to producing misinformation on sensitive topics such as healthcare.

Generative AI often seems magical to many, but the reality behind it is far more complex. Leading tech giants like Google, OpenAI, and Microsoft drive its development, relying on teams of engineers and analysts to evaluate chatbot outputs and improve their accuracy.

TechCrunch was the first to report on the latest internal guideline the search giant wants implemented. The matter concerns the outsourced contractors Google hires to evaluate prompts and responses. Previously, if a prompt fell outside a contractor's domain, they could skip it, since their expertise in that area was limited.

Now, Google has instructed contractors to stop skipping prompts: whether or not they understand the subject matter, they can no longer pass on the task. Instead, they are told to rate the parts they do understand, a change that has raised serious questions.

Experts say this means Gemini's replies can no longer be fully trusted, especially those concerning health, math, coding, and other areas that demand specialized skills rather than general knowledge.

Tech experts believe the original purpose of skipping prompts was to improve accuracy by routing them to domain experts. Under the new guidelines, however, contractors may skip a prompt in only two cases: when it is missing crucial information or when its content is harmful.
