Pinterest has recently shed light on its efforts to combat online spam and block accounts created with malicious intent.
The Pinterest community is an interesting one. Perhaps owing to the platform's hobbyist niche in social media, it could fairly be called one of the more wholesome places on the internet: users come together to display their arts and crafts, share videos, and generally have a good time. Compared with the likes of Facebook, Instagram, and Twitter, which aren't inherently toxic but can invite that kind of discourse, Pinterest is a haven where people mostly discuss the things that bring them joy. It is also a very active platform, with monthly active users growing quarter after quarter, and the developers behind it seem intent on preserving that sense of tranquility.
Spam accounts, bots, and scammers are everywhere online, and sadly Pinterest is no exception. The problem is that on a platform like Pinterest, where older, less internet-savvy users often congregate, people can easily fall victim to phishing attacks, pyramid schemes, and similar scams. Stricter moderation is needed going forward, and in that spirit Pinterest is introducing a set of new measures to counter such behavior and make its platform a safer space.
A post on Pinterest's Engineering Blog spells out the details. Because Pinterest is an image-based platform, spam doesn't present itself the way it would on, say, Facebook: spammers send Pins whose unrelated images mask links to malicious external websites. In response, Pinterest has deployed machine-learning models that detect users and bots spamming others in the community. Flagged accounts immediately have the reach of their messages limited, and once a linked site is examined and confirmed to be malicious, the offending accounts are blocked. The models are also trained to recognize spam at the level of the website domain rather than individual links, which makes detection more efficient and more accurate.
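To make the domain-level idea concrete, here is a minimal sketch in Python of how grouping links by their host lets one verdict cover every URL variant under a spammy site. The function names, the blocklist, and the example URLs are all hypothetical illustrations, not Pinterest's actual implementation.

```python
from urllib.parse import urlparse

# Placeholder blocklist for illustration only.
KNOWN_SPAM_DOMAINS = {"bad-prize-site.example", "crypto-doubler.example"}


def registered_domain(url: str) -> str:
    """Reduce a full URL to its host so every path/query variant shares a verdict."""
    host = urlparse(url).netloc.lower()
    # Strip a leading "www." so www.example.com and example.com score together.
    return host[4:] if host.startswith("www.") else host


def is_spam_link(url: str) -> bool:
    """Domain-level lookup: blocking one domain covers every URL under it."""
    return registered_domain(url) in KNOWN_SPAM_DOMAINS


if __name__ == "__main__":
    samples = [
        "https://www.bad-prize-site.example/win?id=123",
        "https://bad-prize-site.example/claim/456",
        "https://craft-tutorials.example/knitting",
    ]
    for url in samples:
        print(url, "->", "spam" if is_spam_link(url) else "ok")
```

The first two URLs resolve to the same domain and are caught together, which is the efficiency gain the blog post alludes to when it contrasts domains with individual links.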
To guard against false positives, accounts whose activity only weakly resembles spam are handled differently: they are reviewed and contacted directly rather than blocked outright. Users are also notified about investigations and account blocks, so they have a chance to appeal and recover their accounts in the event of a false positive or other unrelated issue.
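A tiered policy like that can be pictured as a simple mapping from a model's spam score to an action, where only high-confidence cases lead to blocking and borderline cases get reach limits plus a notification. The thresholds and action names below are assumptions for the sake of the sketch, not values Pinterest has published.

```python
from dataclasses import dataclass

# Assumed thresholds for illustration: near-certain spam is blocked,
# merely suspicious activity is throttled and the user is notified.
BLOCK_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.5


@dataclass
class Decision:
    action: str
    notify_user: bool


def decide(spam_score: float) -> Decision:
    """Map a model's spam score to an enforcement action, erring toward review over blocking."""
    if spam_score >= BLOCK_THRESHOLD:
        return Decision(action="block_account", notify_user=True)
    if spam_score >= REVIEW_THRESHOLD:
        return Decision(action="limit_message_reach", notify_user=True)
    return Decision(action="no_action", notify_user=False)


if __name__ == "__main__":
    for score in (0.95, 0.7, 0.2):
        print(score, decide(score))
```

Notifying the user at every enforcement tier is what gives a wrongly flagged account a path to appeal, which is the point the blog post emphasizes.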
Photo: Thomas Trutschel via Getty Images