OpenAI and Anthropic Agree To Share AI Models With US AI Safety Institute

The US AI Safety Institute was established last year to improve the regulation of AI models. Now, both OpenAI and Anthropic have signed an agreement with the institute aimed at ensuring the safety of their projects. The arrangement will give the companies feedback on their models both before design decisions are finalized and after launch.

OpenAI CEO Sam Altman dropped hints about the deal this past month, and now a Google spokesperson has also confirmed that discussions are in the works.

For now, no other company in the AI space is said to be included, though more information is expected to be shared as it becomes available. Interestingly, the news arrived just as Google rolled out updates to its chatbots and Gemini image generators.

The director of the US AI Safety Institute shared a statement on the matter, saying that safety is the main factor fueling tech innovation. That, she argued, is why such agreements are necessary, and the institute looks forward to greater collaboration in the future, as this is only the start.

She also described the deal as a serious milestone in shaping the future of AI.

The institute is designed to curate and publish the latest guidelines on this front while setting up tests and practices to identify the potential dangers linked to these systems. As we are all well aware, AI can do a lot of good, but the opposite is also true.

This is certainly the first deal of its kind. The agency hopes to gain broader access so it can review more models before they are launched to the public. Such agreements and collaborations minimize risk and allow the capabilities of AI models to be better evaluated, enhancing security. We are also hearing that the institute has major plans to partner with its counterpart agency in the UK.

The news comes as regulators around the globe try to introduce more AI guardrails while struggling to keep pace with rising concerns. California recently introduced a new AI safety bill, though it targets only large models, those costing upwards of $100 million to train.

The bill is also somewhat controversial, as it would force many AI firms to build in kill switches that shut down models if they go out of control.
