Why Is Google Using Claude To Improve Its Gemini? New Findings Raise Questions

New findings reveal that contractors tasked with improving Google’s Gemini AI are comparing its replies with those produced by Anthropic’s Claude.

While Google has stayed silent on the matter, many cannot help but ask why this is happening and whether the Android maker has permission from Anthropic to do it. Using another company’s AI chatbot for your own testing purposes is not only unusual but may breach that company’s terms of service unless consent is obtained.

Google declined to comment when approached by TechCrunch, which is notable at a time when tech firms are racing to build bigger and better AI models. Most of those models are compared against their closest rivals to see which reigns supreme.

The usual way to make such comparisons is to run models through industry benchmarks rather than have contractors manually evaluate a rival’s replies. The latter, in case you’re wondering, takes far more time and effort.

The contractors tasked with improving Gemini rate the accuracy of its outputs, scoring each reply against several criteria, including truthfulness and verbosity. They get up to 30 minutes per prompt to decide which reply is better, Gemini’s or Claude’s, based on the exchanges they review.

Contractors recently began noticing references to Claude appearing across Google’s internal platforms, where Gemini’s outputs are compared against other, unnamed AI models. According to the findings, at least one of those outputs stated outright that it was Claude, created by Anthropic.

In another internal chat, contractors remarked that Claude’s replies seem to emphasize safety more than Gemini’s. Its settings are noticeably strict: it refuses to answer prompts it deems unsafe. In certain cases where Claude declined to respond at all, Gemini’s reply to the same prompt was flagged as a major safety violation for including material such as explicit content.

As for Anthropic’s own terms of service, they bar customers from using Claude to build a competing product or to train rival AI models without Anthropic’s approval. So yes, even though Google is a major investor in Anthropic, it would still need consent.

Image: DIW-Aigen
