OpenAI Launches New GPT-4o Mini That Surpasses Other AI Models In Affordability And Speed

Tech giant OpenAI is turning heads with the launch of yet another revolutionary AI model called GPT-4o mini.

The company says the model is the most cost-efficient product in its lineup, priced more than 60% lower than GPT-3.5 Turbo. Moreover, it has managed to score 82% on the MMLU benchmark.

The model is also outperforming GPT-4 on chat preferences, as measured on the LMSYS leaderboard. Furthermore, it surpasses both Google's Gemini 1.5 Flash and Anthropic's Claude 3 Haiku in several departments, including reasoning and textual intelligence.

Both textual and multimodal reasoning are strong, with the model scoring 82% on MMLU. Math reasoning is also impressive, hitting 87% on the MGSM benchmark, compared to Google's Gemini Flash at 75.5% and Claude 3 Haiku at 71.7%.

In terms of coding, it hit the 87.2% mark on the HumanEval benchmark, another strong result. Additionally, experts are highlighting the model's function calling feature, which gives developers the chance to build apps that fetch data from or take actions with external systems, as sketched below.
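To give a flavor of what that looks like in practice, here is a minimal sketch using the OpenAI Python SDK's chat completions interface; the get_weather tool, its parameters, and the prompt are hypothetical examples chosen for illustration, not anything shipped by OpenAI.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool definition: a weather lookup the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Fetch the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model decides to call the tool, the arguments arrive as JSON
# for the developer's app to execute against the external system.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```

The model itself never contacts the weather service; it only proposes the call, and the app runs it and feeds the result back in a follow-up message.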

It also delivers improved long-context performance compared to its small predecessor, GPT-3.5 Turbo.

Alongside the launch, OpenAI released a statement explaining that it has always envisioned a future in which models are seamlessly integrated into every app and website. The mini model, it says, paves the way for more developers to build and scale AI apps that run efficiently while remaining affordable to use.

This, the company argues, will make AI more accessible, more reliable, and more deeply embedded in everyday experiences, and OpenAI says it is proud to be leading the way on that front.

The latest model's noteworthy benchmarks certainly set the stage for a stronger push to make AI capabilities accessible to the masses.

While the firm's spring update touted GPT-4o as a standout product, many raised questions about its pricing, which was deemed to be among the costlier offerings in today's AI market.

The mini model seems to address that problem by cutting costs for developers, priced at USD 0.60 per million output tokens. That amounts to a decrease of more than 60% from GPT-3.5 Turbo, which it is replacing.
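For a sense of scale, the back-of-the-envelope calculation below uses the quoted USD 0.60 per million output tokens; the input-token rate is an assumed illustrative figure, not a number taken from the article.

```python
# Rough per-request cost estimate at the quoted output-token rate.
OUTPUT_PRICE_PER_MILLION = 0.60   # USD per million output tokens (quoted above)
INPUT_PRICE_PER_MILLION = 0.15    # assumed illustrative figure, not from the article

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_MILLION
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_MILLION
    )

# Example: a request with 2,000 prompt tokens and 500 generated tokens
# works out to roughly six hundredths of a cent.
print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000600
```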

The pricing is also in line with what Gemini 1.5 Flash and Claude 3 Haiku offer. Moreover, developers will be able to access the latest model through OpenAI's APIs very soon, while consumers can reach it via ChatGPT on the web and in the mobile apps.

Enterprise users are expected to gain access to the model by the end of next week, a sign that the company is pushing releases out quickly so more people can get their hands on it sooner.

