OpenAI Lays Down New Framework To Address Safety Concerns For Its Most Advanced Tools

Tech giant OpenAI has just laid down a new framework to address safety risks posed by its most advanced AI offerings.

Under the plan, outlined on the company’s website yesterday, the board also has the power to reverse safety decisions.

OpenAI said it will only deploy its latest technology if it is deemed safe in specific high-risk areas, such as cybersecurity and nuclear threats.

The firm is also creating an advisory group tasked with reviewing safety reports and sending them to the company’s top executives and board members. While the executives will make the decisions, the board can overturn them when deemed fit.

ChatGPT launched a year ago, and in the time since, the potential dangers of AI have remained a leading concern for both researchers and the general public.

Generative AI continues to captivate users with its writing abilities, but it has also raised serious safety concerns, including the potential to spread disinformation, make major errors, and manipulate people.

In April of this year, a group of leading industry experts signed an open letter calling for a six-month pause on developing systems more powerful than GPT-4, citing the major risks such technology could pose to society.

Meanwhile, a Reuters poll conducted in May found that nearly two-thirds of Americans were worried about AI’s possible negative effects, and 61% believed it could threaten the human race.

It therefore makes sense why OpenAI has gone to this extent, giving its board the power to reverse decisions that threaten user safety.

