Ever since AI began its dizzying rise in late 2022, many tech leaders have been sounding the alarm. They claim that AI carries a number of massive risks that need to be factored in before the industry continues to grow at such a rapid pace. Even Sam Altman, CEO of ChatGPT developer OpenAI, has echoed some of these concerns, but recent statements by the co-founder of Google Brain suggest that tech leaders may be exaggerating these fears for an ulterior motive.
According to Andrew Ng, an AI expert who played a role in starting Google Brain, industry insiders are stoking fears about AI in order to corner the market. They are hoping to spark increased regulation of the industry at a time when it is still finding its footing, and in doing so they might eliminate the competition and leave the field wide open for themselves to dominate.
It bears mentioning that Ng is an adjunct professor at Stanford, and that he helped Sam Altman himself get started with learning about AI early in his career. In his view, the idea that humanity could go extinct due to unregulated AI is inherently flawed. This makes him one of the few prominent voices not raising concerns about the dangers of AI. He is certainly going against the grain here, especially after so many tech CEOs and AI experts signed a statement comparing the risk of AI to that of nuclear war and other apocalyptic events.
Ng is worried that an irrational uptick in regulation could stifle innovation while the industry is still at a nascent stage of development. While he acknowledged that some regulation is par for the course, he urged policymakers to carefully consider restrictions instead of enacting blanket bans on research that could curtail growth.
As the former head of Google Brain, a research team focused on deep learning and its applications in AI, Ng certainly has a unique perspective on the industry. Google Brain was merged with DeepMind just this year. Given that Ng no longer has skin in the game following that merger, he might be a rare voice of reason. Alternatively, he might be downplaying very real risks that need to be considered, especially given that AI has countless applications yet to fully come to light. Tech leaders may need to respond to these accusations, and their replies might provide an impetus either for regulation or for a freeing up of innovation.
Photo: Stanford Online / YT