Facebook is working to address misleading photos and videos on its platform, the company announced on Thursday.
“Deepfakes” are a kind of doctored media in which increasingly advanced technology is used to manipulate a video or an image so convincingly that false content appears real. Used vindictively, this could severely harm one person’s life, or many.
Until now, Facebook’s fact checkers have focused on assessing the accuracy of text-based articles. The company is now extending this approach to images and videos, along with any text they contain. The expansion covers all 27 of Facebook’s fact-checking partners across 17 countries, and new partners are being brought on board regularly, partly for this reason.
A machine learning model built by the company uses “engagement signals”, including reports from users across the network, to flag content whose authenticity may be in question.
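Facebook has not published details of this model. As a minimal sketch of how engagement signals such as user reports might feed a classifier that routes suspicious posts to human reviewers, consider the following; the feature names, example values, and the choice of logistic regression are all assumptions for illustration, not the company’s actual system:

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical engagement signals per post:
# [user_reports, comments_expressing_doubt, share_velocity, avg_sharer_account_age]
# These feature definitions are illustrative assumptions.
X_train = np.array([
    [42, 15, 900, 0.2],   # heavily reported and widely shared
    [0,  1,  30,  5.0],   # little suspicion
    [25, 8,  400, 0.5],
    [1,  0,  10,  7.0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = later rated false by fact checkers

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new post; a high probability would queue it for human fact checkers.
new_post = np.array([[30, 12, 700, 0.3]])
print(model.predict_proba(new_post)[0, 1])
```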
Optical character recognition (OCR) is also used to extract text from images, which is then compared against headlines from fact checkers’ articles.
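A rough sketch of that OCR-and-compare step is shown below, using the open-source pytesseract library and a simple string-similarity ratio; the threshold and matching method are assumptions for illustration and are not how Facebook’s internal pipeline is documented to work:

```python
from difflib import SequenceMatcher

from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed


def extract_text(image_path: str) -> str:
    """Pull raw text out of an image with OCR."""
    return pytesseract.image_to_string(Image.open(image_path))


def matches_debunked_headline(image_path: str,
                              debunked_headlines: list[str],
                              threshold: float = 0.8) -> bool:
    """Return True if the image text closely resembles a headline already rated false."""
    text = extract_text(image_path).lower().strip()
    return any(
        SequenceMatcher(None, text, headline.lower()).ratio() >= threshold
        for headline in debunked_headlines
    )


# Usage: compare a screenshot against headlines fact checkers have debunked.
# print(matches_debunked_headline("screenshot.png",
#                                 ["Celebrity endorses miracle cure"]))
```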
False content is separated into three categories:
- Fabricated or manipulated
- Out of context
- Text or audio claim