OpenAI has just added the former head of the National Security Agency, Paul Miki Nakasone, to its board of directors.
However, that news is not going down well with whistleblower Edward Snowden, who called the appointment not only unacceptable but a revealing insight into the intentions of the AI giant.
Snowden is best known for leaking NSA surveillance secrets just over a decade ago. He now lives in Russia, and in his view this appointment is a major change that could have serious consequences.
In a tweet that took many by surprise, he argued that OpenAI can never be trusted after hiring a retired American army general. Nakasone will also sit on the firm's safety and security committee, a move that has raised eyebrows among critics.
The company, for its part, says Nakasone brings world-class cybersecurity experience, and it hopes his expertise will strengthen OpenAI's security posture and guard it against a host of hacking groups.
Snowden alleged that OpenAI may have dangerous motives in placing such a figure on a pivotal safety and security committee. He went further, accusing the company of a calculated betrayal of the rights of every person on the planet, and framed the move as a stark warning.
While Snowden's tweet did not elaborate further, OpenAI has already attracted plenty of criticism over how it collects data for training its AI models. ChatGPT processes chats from millions of people each day, and that scale of data could, critics fear, enable serious surveillance.
But what do security experts have to say on this front?
Cryptographer Matthew Green took the time to weigh in on the matter. In his view, the single biggest application of AI in the commercial world is likely to be mass surveillance, so bringing a former NSA chief onto the board follows a certain logic.
While OpenAI has yet to respond to the criticism in detail, the company has framed the hire simply as an important step tied to safe AI development.
For a while now, the makers of ChatGPT have wrestled with exactly this front: finding ways to better safeguard its AI systems and ensure that user data remains protected and secure at all times.
OpenAI has said time and again that its generative AI systems have the potential to deliver serious benefits, such as helping defend critical public institutions like banks, hospitals, and schools against attacks.
Like other tech providers, the firm can be compelled to disclose user data to comply with requests from law enforcement agencies. That data includes user names, contact details, content, and IP addresses. At the same time, critics contend the company does not offer data controls that would fully block user data from being used to train its AI models.
This has been a matter of debate for months, and critics continue to argue that OpenAI needs to do better here. After all, an institution that promises safety and security for all should not train models on data gathered without consent.
Image: DIW-Aigen