Facebook is planning to open-source the two algorithms it uses to detect child sexual exploitation, terrorist propaganda, and other forms of graphic violence. The algorithms, known as PDQ (for photos) and TMK+PDQF (for videos), are used by the tech giant to store and compare fingerprints of files against known examples of harmful content.
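Facebook's announcement does not include code, but the core idea behind this kind of hash-based matching is simple: each file is reduced to a fixed-length fingerprint, and two files are considered a likely match when their fingerprints differ in only a small number of bits. The sketch below is purely illustrative; the hex strings and threshold are made up for the example and are not real PDQ output:

```python
def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count how many bits differ between two equal-length hex-encoded hashes."""
    assert len(hash_a) == len(hash_b)
    xor = int(hash_a, 16) ^ int(hash_b, 16)
    return bin(xor).count("1")

# Illustrative 256-bit (64 hex character) fingerprints -- not real PDQ hashes.
known_bad = "f" * 64
candidate = "f" * 63 + "e"  # differs from known_bad by exactly one bit

# An example cutoff; real deployments tune this to balance false positives.
MATCH_THRESHOLD = 31

distance = hamming_distance(known_bad, candidate)
if distance <= MATCH_THRESHOLD:
    print("likely match: flag for review")
```

Because the fingerprints are designed to change only slightly when an image or video is resized or re-encoded, this distance check can catch near-duplicates that an exact file hash would miss.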
Facebook says both have been released on GitHub.
In its official blog post, Facebook says it has open-sourced the algorithms in the hope that other tech companies, organizations, and app developers will use them to identify harmful content on their own platforms and remove it in a timely manner.
The company adds that organizations that already have safeguarding protocols in place can adopt these technologies to strengthen their existing operations.
Since the Christchurch shooting, several tech companies have been pressured to remove harmful content from their platforms. Australia even threatened to punish tech executives with fines and jail time if they failed to remove videos of the attack as a matter of urgency.
Facebook, along with several other tech giants, signed a pledge called the "Christchurch Call" that committed them to devoting more resources to preventing harmful content from appearing on their platforms.
Facebook also admitted that child exploitation videos are on the rise. In just one year, the company saw a 541% increase in the number of videos reported by the tech industry to the CyberTipline.
Facebook's move should help identify illicit videos and prevent them from spreading across the web. Google and Microsoft have previously made similar technologies available to the public.
Facebook is also collaborating with the University of Maryland, Cornell University, the Massachusetts Institute of Technology, and the University of California, Berkeley, which will help the company find ways to stop users from altering banned media and reposting it.
Facebook also held its fourth annual child safety hackathon in Menlo Park today.
Photo: AP
Read next: How Effective Is Flagging False Reports On Facebook?