If you are a Facebook user worried that your data might be misused by the app or used to train its AI models, this new update from Meta is for you.
The tech giant is trying to reassure users about their information by adding a new form to its Resource Center. The form gives users greater control over their own data, including how, where, and for what purpose their information may be used.
The company updated the app's Help Resource section on its webpage this week with a new form titled 'Generative AI Data Subject Rights'. It is designed to let users submit requests about any of their data used to train AI models.
With this tool, users can opt out if they don't want their data made available for training. This comes as more and more firms develop advanced chatbots that turn simple text prompts into sophisticated text and images.
Meta says it wants to give people the ability to delete, edit, or access any personal information of theirs that appears in third-party sources, since the company uses such information to train its large language models.
The form defines such data as information about users that is publicly available on the web or obtained through licensed sources. This accounts for the billions of pieces of data used in training, from which models learn patterns in order to predict and generate similar content.
Meanwhile, another page explains that Meta sources data for generative AI by collecting publicly available information online and licensing data from a range of other providers.
For instance, blog posts can contain personal information such as your name, address, and phone number.
This particular form does not cover activity on Meta's own properties and apps, such as comments on Facebook or images on Instagram, which means the firm may still use that data for training.
A Meta spokesperson added that the firm's latest model, Llama 2, one of the biggest LLMs, was not trained on user data, and that the company has yet to roll out any consumer-facing generative AI features.
In case you were not already aware, Microsoft, OpenAI, and even Alphabet gather huge amounts of users' personal data to train their AI models. Experts say such information is very valuable to these firms, so it's important for companies to be as transparent as possible about how that information is processed in order to avoid legal complications later on.
Update (19th September 2023): A staff member from the DigitalInformationWorld team filled out the Resource Center form to see how much of their data Meta/Facebook might have used for AI training. It turns out Meta is not sharing any evidence of whether it has used any user data: most people who submit the data request form instead receive a counter-question asking whether their data is really being used by Meta's generative AI models. Here's how Facebook's Privacy Operations team responded to one such request:
"Thank you for contacting us.
Based on the information provided, we were unable to process your request. To help us process your request, please provide examples or screenshots that show evidence of your personal information (for example, your name, address or phone number) in responses from Meta’s generative AI models. Once you provide this evidence, we would be happy to investigate further.
If you have any questions about how Meta uses information from our products and services, please see our Privacy Policy: https://www.facebook.com/privacy/policy
To learn more about generative AI, and our privacy work in this new space, you can review the information we have in Privacy Center: https://www.facebook.com/privacy/genai."