Meta’s Llama 3.1 405B May Outperform OpenAI’s GPT-4o As Leaked Data Suggests Major Milestone For AI Community

Leaked findings suggest a major milestone for the AI community: Meta’s Llama 3.1 405B could outperform OpenAI’s market-leading GPT-4o in several domains. The news follows the release of a series of breakthrough benchmark results for large language models.

In April of this year, tech giant Meta rolled out Llama 3 as its open-source LLM. Its first two variants, Llama 3 8B and Llama 3 70B, set new benchmarks for models of their size in terms of performance. But in just three months, a host of other LLMs came into the picture and were said to outperform them in many respects.

Meta has highlighted that its biggest Llama 3 model, with a whopping 405B parameters, is still in the experimental phase. But now we’re getting more news on this front: data leaked on Reddit has unveiled initial benchmarks covering Meta’s upcoming models and their ability to outperform OpenAI’s famous GPT-4o.

If accurate, it’s a huge development and the first time we’re seeing open-source models beat out closed-source models in the AI space.


When tech giant Meta paved the way for the launch of its Llama 3 model, it promised to facilitate more growth and development for the entire open AI community so that models could be rolled out more responsibly. The company strongly believes that greater openness gives rise to safer and better products, and that in the end the market benefits from greater competition.

So it’s not only the tech giant that benefits but the entire tech world. The leaked benchmarks revealed so far show Meta’s Llama 3.1 doing much better than GPT-4o on a host of tests such as GSM8K, MMLU humanities, and Winograd, and these are just some of the many listed so far.

There is talk of it falling behind on HumanEval and in other areas, but even so the news is major. Many of the results are expected to improve with time, since the figures are for the base models only and instruction tuning has yet to be applied.

We know that OpenAI is not going to like the sound of that. It sits at the top of the LLM game and is also paving the way for the upcoming GPT-5, whose promised reasoning capabilities would challenge any Llama 3.1 leadership across the large language model space.

Still, seeing Meta’s Llama 3.1 perform so well against GPT-4o shows that its power and potential cannot be underestimated, positioning it as a serious contender for cutting-edge AI tech and a new wave of innovation in an already competitive industry.
