This past weekend, Google CEO Sundar Pichai sat down for an exclusive interview with CBS’ 60 Minutes, and as expected, Google’s AI efforts were the focus of the conversation.
During the discussion, Pichai spoke at length about how strongly he feels AI needs adequate regulation, since things could easily spiral out of control.
But now a new report by Bloomberg is making strong allegations against the tech giant, claiming it rushed to release its array of AI products, including the Bard chatbot, without putting ethical guardrails in place, despite knowing how damaging the effects could be.
Google appears to have hurried the launches after feeling a clear competitive threat from Microsoft and OpenAI. And if that does not sound dramatic enough, the report alleges the company deliberately sidelined its own AI ethics team when it launched Bard.
The report says the team that leads the ethics division at Google is now disempowered and demoralized, according to both former employees and those still working inside the company.
These staffers are responsible for assessing the safety and ethical implications of new launches, but Google reportedly told them to step back and stay out of the way, which included not trying to block any of the generative AI tools in development.
The article also recounts what happened before Bard was released to the public, when Google asked its own workers to experiment with the product and much of the feedback that came back was critical.
One worker called Bard nothing less than a pathological liar, according to screengrabs of an internal discussion, while another described it as cringe-worthy. Another employee complained that when they asked the chatbot for advice on landing a plane, it gave answers that would have led to a crash, and yet another said its scuba diving tips were not just dangerous but could cause death.
Therefore, it would not be wrong to say that the tech giant launched Bard as little more than an experiment, and having to warn users up front that mistakes were inevitable hardly reflects well on Google.