OpenAI’s o3-Mini Debuts with Enhanced Reasoning, Challenging DeepSeek R1

OpenAI is pushing back against the rising success of DeepSeek's latest R1 model by debuting a new o3-mini model. The question is whether it will be good enough to blunt the Chinese startup's momentum.

The launch comes a few days after anticipation had been building among AI users on social media. o3-mini is the second of the company's reasoning models: OpenAI says it takes more time to evaluate a problem and reflect on its chain of thought before answering a user's prompt.

The result, the company says, is a model that answers hard questions in math, engineering, science, and more at a level comparable to someone with a PhD. The model is now available through ChatGPT, including on the free tier, as well as through the company's API.

The model is cheaper, faster, and performs better than comparable models that came before it, including its own sibling o1-mini. While comparisons to DeepSeek R1 are inevitable, this was a planned release announced well before R1 launched, so it can't really be read as a reactive strategy. To be exact, it was announced in December 2024, when Sam Altman also said on his social media profile that it would launch on ChatGPT and the OpenAI API at the same time.

Unlike DeepSeek-R1, the mini model will not be released as an open model: its weights can't be downloaded for offline use, nor can it be customized to the same extent. That could limit its appeal for some applications.

Alongside o3-mini, the company did not share any further details about the larger o3 model it previewed in December. At the time, it said the larger model would first undergo several weeks of testing with third parties.

In terms of features and performance, o3-mini is close to o1. It is designed for reasoning in coding, science, and math, and while its reasoning performance is similar to OpenAI o1, it does offer clear advantages.

Responses are 24% faster than o1-mini, with average reply time down to 10.3 seconds. Accuracy is also better: external testers preferred the mini model's replies 56% of the time.

It also makes 39% fewer errors on complex queries, and performs better on coding and other STEM tasks. In addition, reasoning effort can be set to low, medium, or high, giving users a way to balance speed against reliability, as in the sketch below.
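As an illustration, here is a minimal sketch of how the effort level might be selected through the API, assuming the OpenAI Python SDK and that the `reasoning_effort` parameter accepts the "low", "medium", and "high" levels described above:

```python
# Minimal sketch: calling o3-mini with an explicit reasoning-effort level.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # trade speed for reliability; "low" or "medium" respond faster
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
)

print(response.choices[0].message.content)
```

Higher effort lets the model spend more time on its chain of thought before replying, while lower effort returns answers more quickly at some cost to reliability.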

The model also posts strong benchmarks, outpacing o1 in plenty of cases according to the o3-mini system card the company shared online. The context window is 200K tokens with a maximum of 100K tokens per output, matching the full o1 model and edging out DeepSeek R1's context window. However, it still falls short of Google's Gemini 2.0 Flash Thinking model, whose context window reaches 1M tokens.

Strong as its reasoning is, the model falls behind in vision capabilities, so any user who wants to upload pictures may be better off with o1 for now.

Read next: Global AI Safety Report Warns of Cyber Threats, Manipulation, and Weaponization Risks