Meta is finally breaking its silence on whether images taken with its much-talked-about Ray-Ban smart glasses will be used to train AI.
After staying quiet at first, the company now has plenty to say, including a confirmation that any photo shared with Meta AI can be used to train its models. The same goes for video content.
The clarification came after media outlet TechCrunch pressed Facebook's parent company for answers. Meta confirmed that wherever multimodal AI is available, photos and videos shared with Meta AI may be fed into the company's AI systems, with the goal of improving those systems, as laid out in its privacy policy.
An earlier statement had suggested that content captured on Ray-Ban smart glasses would be exempt unless the user actively submitted it to the AI. Now, the moment Meta AI is asked to analyze an image, that image falls under a different set of policies.
In other words, the company is using its newest consumer AI products to collect a huge pool of data that could feed more capable future model generations. The only way for users to opt out is to avoid the AI features altogether.
The practice is stirring plenty of debate, since many owners of the new Ray-Ban smart glasses may not understand when they are, or are not, sharing personal data with Meta's systems, including personal information, addresses, and social circles.
Meta's defense is that the user interface keeps people well informed of all this. If that is true, the obvious question is why the company was so quiet at the start: either executives did not want to share the details, or they did not know the answer themselves.
Experts say none of this is surprising. Meta already trains its Llama models on the data Americans post to public profiles. What is changing is the company's definition of "publicly available information."
It now extends to anything people see through their smart glasses and ask the AI model to analyze.
The news also lands as Meta keeps shipping AI features designed to make invoking the assistant feel more natural, which means users will find it ever easier to hand their data over to Meta without a second thought.
Let's not forget that Meta launched a live video analysis feature for its smart glasses at last week's Connect conference, one that streams a continuous series of images through its multimodal AI models. A marketing video showed the feature scanning a closet, with the AI analyzing the contents so the wearer could pick an outfit.
The part Meta does not advertise is that those pictures are also being sent in for AI model training. A company spokesperson recently pointed to the privacy policy, which states in plain terms that interactions with Meta AI can be used for training purposes, pictures included, though at the time the company offered no further clarity.
Meta's Terms of Service, which were also shared, state that sharing images with Meta AI grants the company the right to analyze them, facial features included.
Remember, the tech giant just paid $1.4 billion to the state of Texas to settle a lawsuit accusing it of using facial recognition technology without consent. The case centered on Facebook's age-old Tag Suggestions feature, launched in 2011; in 2021, Facebook made it strictly opt-in and deleted the billions of pieces of biometric data it had collected.
Meta and its rivals across the industry are now pushing smart glasses as the next great computing platform, products that put cameras on people's faces and are built around AI. The privacy concerns that have always surrounded them are only going to grow.
Image: DIW-Aigen