Meta Refers to Its Latest Llama 4 AI Model as Less Politically Biased Than Past Versions

Tech giant Meta is celebrating the launch of its newest AI model, Llama 4, as the least "woke" version compared with its predecessors.

The company says the new model isn't as politically biased as past versions. Meta took pride in explaining the time and effort it spent getting the model to this level, largely by allowing it to answer queries on politically divisive topics.

Meta says the new Llama 4 is comparable in its lack of political lean to market rivals like Elon Musk's Grok, which its parent firm xAI markets as non-woke.

The stated goal is to remove bias from the model so Llama can address what's being asked without passing judgment on either side, and without steering users toward any particular viewpoint.

A major concern raised by experts in the past is how much a model's makers can shape it to think or answer a certain way. Those who control an AI model can therefore control the information users receive, moving the dial in whichever direction they want.

This is nothing new. Internet platforms have long used algorithms to determine what content gets surfaced and what doesn't, which is one reason Meta keeps drawing negative attention from conservatives who feel the company suppresses right-wing viewpoints.

The latest blog post by Facebook's parent firm goes into detail about the changes made to Llama 4 and how they're designed to make the model come across as less liberal. All top-of-the-line LLMs have had issues with bias, likely a reflection of the many kinds of training data found online. While Meta didn't disclose which data was used for training, the company and other model makers are known to have relied on sources like pirated books and scraped web pages without obtaining consent first.

One issue with striving for balance is that it can produce false equivalence and lend credibility to arguments made in bad faith, even as many feel a model should remain balanced and give opposing viewpoints equal weight.


Image: DIW-Aigen
