Parents and activists have been raising concerns about the harms and dangers of AI tools for months. That pressure may be one of the leading reasons why tech giant OpenAI has created a new team dedicated to preventing harm from the misuse of its AI tools.
The matter has drawn sustained attention, with many arguing that children face the greatest risk of being harmed by this latest wave of technology.
The news surfaced after the firm posted a new job listing on its careers page. The listing indicates that the Child Safety Team has existed for some months, and the company suggests the work has already borne fruit.
The team is looking to hire child safety specialists who will be responsible for applying OpenAI's policies to AI-generated content. They will also work on a range of review processes for sensitive material, much of which involves minors.
Tech vendors of all sizes are devoting resources to this area to ensure compliance with laws such as the U.S. Children's Online Privacy Protection Act (COPPA), which governs what children can see and do online and what they are not authorized to access.
It is hardly a surprise that the company is hiring child safety experts, especially given that it expects a sizable underage user base in the near future.
The team's formation comes just weeks after the tech giant announced a partnership with Common Sense Media aimed at developing kid-friendly AI guidelines.
Concerns have also been raised about the company's current policies regarding minors and how such users are handled on the platform.
Research has repeatedly shown that minors are increasingly turning to AI tools, not only for entertainment but also for homework, sometimes letting chatbots do the work for them, and even for help with personal issues.
In one poll, 29% of children said ChatGPT helps them deal with issues like anxiety and mental health problems, 22% said it helps with matters involving their friends, and 16% said it helps with family problems they are struggling with.
Experts view this as a growing risk and are not turning a blind eye to it.
In the past, plenty of educational activists and organizations banned the tool over concerns about misinformation and serious plagiarism. Some have since reversed those bans, increasingly convinced that generative AI is here to benefit society and assist students rather than cause great harm.
Surveys on this front have also found children reporting the use of generative AI for harmful purposes, such as creating fake information or images that end up upsetting others.
This is why calls for oversight of children's use of the technology keep growing. It may also be a leading reason why UNESCO has urged governments to better regulate the use of AI in schools, including age restrictions and appropriate safeguards.
Photo: Digital Information World - AIgen/HumanEdited
Read next: Microsoft Unveils Ambitious Plans To Train 2M Indians In AI After Being Hailed As World’s Most Valuable Firm