A former OpenAI researcher is making alarming claims about the future of AI and the risk it poses to the human race.
During a recent appearance on the Bankless podcast, Paul Christiano said he takes the rapid progress of generative AI very seriously, and so should the world, given the many risks attached.
There might be a 10 to 20% chance of an AI takeover in which most humans end up dead, he explained.
Christiano, who runs the non-profit Alignment Research Center and previously worked on the language model alignment team at OpenAI, was asked on the podcast about the likelihood of a full-blown, Eliezer Yudkowsky-style doom scenario.
The researcher was blunt that this trending technology could spell disaster for the world, and he noted that the warning itself is nothing new: it has been circulating for nearly two decades.
Christiano said his views differ from many others' mainly on the speed at which the technology could develop, not on whether a rapid transformation is coming.
He imagined something close to a one-year transition from the AI systems we see today to systems that are a very big deal, with accelerating change feeding further acceleration. On that view, many AI problems could arrive in quick succession, soon after people begin building on top of AI technology.
Overall, you might be getting close to a 50/50 probability of doom shortly after AI systems reach human level, the researcher claimed.
But this is not a lone warning. Christiano joins a large number of voices unnerved by recent developments in the AI field. Just recently, a leading group of AI experts signed an open letter calling for a temporary pause on the training of powerful AI models and systems.
They proposed a six-month slowdown in AI development, though there is still no word on whether the industry will adopt it.