Things appear to be going downhill for the makers of ChatGPT after the company was left with little choice but to disband its Superalignment Team.
The news came shortly after the team’s co-leaders resigned from the organization, saying they could no longer work at a firm where safety had taken a backseat.
For those who might not be aware, the team was in charge of developing ways to steer and control superintelligent models that could one day exceed human intelligence.
Their abrupt departure left the company little choice but to make the change, which is once again casting doubt on how safe AI really is without the team once tasked with protecting humanity.
Taking to his account on X, one of the co-leaders opened up about how leadership continued to sideline safety in favor of launching ‘shiny products’.
In a follow-up thread, he described how many safety processes were ignored and how the emphasis fell on profits rather than user safety, adding that it was no longer possible for members of the team to keep working with that mindset.
Another post noted that there had been plenty of disagreements on that front, including over where OpenAI leadership’s priorities were heading, and that a breaking point was bound to come sooner or later.
Those who were part of the team had felt that OpenAI was the right place to carry on with this research. But as the co-leaders put it, the need now is to step back and work out what must be done to ensure AI systems are properly controlled and do not end up smarter than humans.
A year ago, the tech giant pledged to dedicate 20% of its computing resources to such efforts, including the superalignment of superintelligent systems. The move was meant to ensure the responsible use of AI and keep systems from outperforming humans in every respect.
But as time went by, it became harder and harder to reach those objectives. There was plenty of appreciation at the start, but it did not take long for that to dwindle. Now, it is sad to see some of the top names in the AI world leave despite their great contributions.
As one can imagine, the news is bound to be bad for the makers of ChatGPT, who just unveiled their conversational GPT-4o technology while top investors set the stage for a major conference next week.
So the question is what comes next. The team designed to ensure OpenAI does not steer in the wrong direction and end up threatening humanity with systems that surpass the human mind is no longer there.
Speaking to Bloomberg, OpenAI has previously said it remained focused on research efforts that would help the organization reach its safety goals.
But within a short period, internal tensions between the leadership and the heads of the safety team sent things downhill. The prospect of AI models this dangerous can only serve as a warning to the world of what lies ahead and what we are dealing with.
The company’s internal troubles never seem to end. There were issues over the CEO not being open with his colleagues on several fronts, which gave rise to growing distrust on the board. Let’s not forget the crisis that escalated to the point where Altman was briefly ousted.
Thankfully, he was brought back despite all the drama and shakeup taking center stage. And just when it seemed the company was headed for more success than ever, with strong results for the first quarter of this year, this news proves that the future can never be predicted.
So far, OpenAI has not responded on the matter, but we’ll keep you updated. A company representative did say the organization plans to address it in the next few days through a public post.
Image: DIW