OpenAI Creates Internal Scale To Assess Progress Of Its AI Models

Tech giant OpenAI has just created an internal scale designed to assess and track its large language models’ progress toward AI with human-level intelligence, better known as artificial general intelligence (AGI).

A company spokesperson told Bloomberg that the scale has five levels. Level 1, the most basic, covers conversational AI such as the widely used ChatGPT. OpenAI believes it is close to Level 2, where a model can solve basic problems as well as a person with a PhD. At Level 3, an AI model could take actions on a user’s behalf. Level 4 covers AI that can come up with new innovations, while Level 5, the final step toward AGI, describes AI capable of doing the work of an entire organization.

In the past, OpenAI has defined AGI as a highly autonomous system that surpasses humans at most economically valuable work.

The company’s unusual structure is built around its mission to achieve AGI, though tech experts note that the charter’s definition of AGI is hard to pin down.

The new scale gives the company a way to measure its own progress, compare it against that of its arch-rivals in the industry, and judge how close anyone is to attaining AGI.

It won’t be an easy road. Experts predict we are still quite far from the computing power needed to hit the AGI target, and that getting there would take many more billions of dollars and probably five years or more.

The new scale is still a work in progress and nowhere near a public release, but the company says it was shared internally the day after OpenAI announced a collaboration with the Los Alamos National Laboratory.

The goal of that partnership is to see how advanced AI models such as GPT-4o could assist with scientific research.

Meanwhile, a program manager explained that the main goal right now is to test the capabilities of OpenAI’s current offerings. The company’s handling of safety has been under scrutiny ever since it dissolved its own team of safety experts.

That decision came after the team’s own head left the organization, alongside another key AI researcher who resigned and said in a post that safety at the company had taken a backseat to shiny products.

For now, no further details have been provided on how models are assigned to these levels. However, leaders did demonstrate a research project using the GPT-4 model at a recent all-hands meeting, where human-like reasoning skills were on display.

The scale is designed to pin down what progress means for the firm, rather than leaving others to assume or misinterpret it. Different experts, after all, are saying very different things about how capable OpenAI’s models really are.

Some feel the models still have plenty of shortcomings, while OpenAI’s own CEO has argued that the technology can push back the frontier of ignorance and that these models can ultimately become more intelligent than humans.

