OpenAI Admits Its New GPT-4o Model Sometimes Behaves Strangely

OpenAI turned heads with the launch of its latest AI model, GPT-4o.

The new model powers the company's brand new and much talked about Advanced Voice Mode feature inside ChatGPT. It is also the first OpenAI release trained on voice, alongside the usual text and imagery.

Now, the tech giant says this may be the reason the model behaves very strangely on some occasions. Common examples include mimicking the voice of the person speaking or suddenly shouting in the middle of a chat.

The latest report documents incidents highlighting the model's strengths and flaws, with special emphasis on voice cloning when a person speaks to the model in a noisy environment. For instance, you could be chatting from a car or a public place when, all of a sudden, you hear strange feedback as the model replies in a clone of your own voice.

Now that the makers of ChatGPT admit it's happening, the obvious question is why. As per the company, the model has trouble deciphering muffled speech in all that background noise, a bit like asking someone to look for a needle in a haystack.

OpenAI further clarified that this no longer happens in the latest Advanced Voice Mode, thanks to a new modification added to the system to guard against unintended voice generation.
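
OpenAI has not published the details of that modification, but the broad idea behind such a safeguard (checking that generated speech still sounds like the single approved preset voice) can be sketched with off-the-shelf tools. The snippet below is a hypothetical illustration, not OpenAI's implementation; it uses the open-source resemblyzer speaker-verification library, and the file names and the similarity threshold are invented for the example.

```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Reference embedding of the single approved preset voice.
preset_embedding = encoder.embed_utterance(preprocess_wav("preset_voice.wav"))

def voice_matches_preset(generated_wav_path: str, threshold: float = 0.75) -> bool:
    """Return True if the generated speech sounds like the approved preset.

    resemblyzer embeddings are L2-normalized, so a dot product gives the
    cosine similarity. The 0.75 threshold is an illustrative guess; a real
    system would tune it on labeled match/mismatch pairs.
    """
    gen_embedding = encoder.embed_utterance(preprocess_wav(generated_wav_path))
    return float(np.dot(preset_embedding, gen_embedding)) >= threshold

# A deployed system would cut the audio stream the moment the check fails.
if not voice_matches_preset("model_output.wav"):
    print("Blocked: output voice deviates from the approved preset.")
```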

Did we mention that the new model is also capable of producing inappropriate vocalizations, such as intimate moans and violent sounds? It all depends on the prompt thrown in the model's direction. OpenAI says there is evidence the model refuses requests to produce such sound effects, while acknowledging that some requests do still make it through.

Other findings have to do with the model and music copyright. Were it not for the filters OpenAI has put in place, GPT-4o could reproduce copyrighted music; the company also says it has instructed GPT-4o not to sing, so that it does not imitate the voices of famous artists.

This might be a telling indicator that OpenAI trained the model on copyrighted material. For now, OpenAI says it is continuing to update the model and its filters, including work to better detect outputs that contain music.
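
How such a music detector would work is not something OpenAI has disclosed, so the sketch below is only a rough illustration of the concept. It uses the open-source librosa audio library and a simple heuristic: audio with a steady beat and mostly harmonic (pitched) energy is flagged as likely music. The file name and both thresholds are invented for the example, and a production filter would almost certainly use a trained classifier instead.

```python
import librosa
import numpy as np

def looks_like_music(path: str) -> bool:
    """Flag audio that has a steady beat and mostly pitched (harmonic) energy."""
    y, sr = librosa.load(path, sr=None, mono=True)

    # Beat tracking: sung or played music usually yields many detected beats.
    _, beats = librosa.beat.beat_track(y=y, sr=sr)

    # Split the signal into harmonic (pitched) and percussive components.
    harmonic, _ = librosa.effects.hpss(y)
    harmonic_ratio = np.sum(harmonic**2) / (np.sum(y**2) + 1e-9)

    # Both thresholds are illustrative guesses, not tuned values.
    return len(beats) > 8 and harmonic_ratio > 0.6

if looks_like_music("model_output.wav"):
    print("Blocked: output appears to contain music.")
```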

OpenAI has entered into several licensing deals with data providers to shield itself from legal action. Clearly, it's not easy to launch an AI model when everyone around you is looking for 101 faults to bring about its downfall.

Image: DIW-Aigen

Read next: Elon Musk’s Fake US Election Claims Generated 1.2 Billion Views On X But Are Yet To Be Flagged By Community Notes