Samsung is grappling with major security concerns after a recent data leak incident at the company.
The South Korean tech giant has resorted to banning employees from using the world's most popular AI-based tools, meaning workers will no longer be allowed to access ChatGPT, Google's Bard, or even Microsoft's Bing.
The electronics powerhouse broke the news to its employees on Monday through a notice circulated at one of its biggest divisions. The new policy, first highlighted by Bloomberg, stems from serious concerns about the data these AI platforms use: it is stored on a range of external servers and could end up being exposed to others, as Bloomberg noted in its recent report.
In its message to staff, Samsung acknowledged that interest in AI platforms like ChatGPT continues to grow both inside and outside the company, driven by their efficiency and usefulness. Even so, given the major security risks attached to generative AI, Samsung says it is simply not willing to set those threats aside.
Generative AI made much of the public aware of the technology's benefits after ChatGPT launched last November. Since then, a new breed of AI-powered chatbot has emerged that can write software, hold conversations, and even compose poetry.
Today, tech giant Microsoft uses the GPT-4 technology that underpins ChatGPT to boost Bing's search results, offer writing suggestions, and even design presentations.
The new policy arrives at a time when Samsung is voicing growing concern about its security posture and has concluded that the only way to keep the situation under control is to limit the risks that generative AI brings with it.
The South Korean tech giant is hardly alone in worrying about the pace at which AI is advancing. In March, a group of tech executives and industry experts signed an open letter calling on AI labs to temporarily pause the training of powerful AI systems, citing the risks the technology poses to society.
Samsung's sudden rules follow an incident in which its own engineers unintentionally leaked internal source code by uploading it to ChatGPT, as mentioned in a recently published memo. The company's headquarters is now working to make its environment secure again.