In the UK and Europe, Instagram has been rolling out new technology that prevents the display of self-harm and suicide posts on its app.
The tool works by identifying content that contains any form of suicide or self-harm material. Once a post is recognized, it is either hidden from view or removed from the app entirely.
Adam Mosseri, the head of Instagram, described the new tool, which relies on artificial intelligence, in a blog post.
Posts identified as harmful by Instagram's algorithm are passed to human moderators, who decide what action to take next. A moderator can point the user towards support organizations or notify emergency services about the harmful content. However, Instagram told the UK's Press Association news agency that, in the UK and Europe, the tool does not yet involve human referral because of data-privacy concerns related to the General Data Protection Regulation (GDPR). The company says that adding human referral there is the next step.
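To make the workflow concrete, here is a minimal Python sketch of the routing logic as described above. Everything in it — the function names, score thresholds, and region handling — is an illustrative assumption; Instagram has not published its actual implementation.

```python
from dataclasses import dataclass

# Assumed thresholds for illustration only; the real system's
# scoring and actions are not public.
HIDE_THRESHOLD = 0.6      # hide the post from feeds and search
REMOVE_THRESHOLD = 0.9    # remove the post from the app entirely

# Per the article, automated flags in these regions are NOT routed
# to human moderators, because of GDPR data-privacy concerns.
NO_HUMAN_REFERRAL_REGIONS = {"UK", "EU"}

@dataclass
class Post:
    post_id: str
    text: str
    region: str  # uploader's region, e.g. "UK", "EU", "US"

def classify_self_harm(post: Post) -> float:
    """Stand-in for the AI classifier the article describes.

    A real system would run a trained model over text and images;
    this keyword check is purely illustrative."""
    keywords = ("self-harm", "suicide")
    return 1.0 if any(k in post.text.lower() for k in keywords) else 0.0

def queue_for_human_review(post: Post) -> None:
    """Hypothetical hand-off to moderators, who could point the user
    to support organizations or notify emergency services."""
    print(f"queued {post.post_id} for human moderator review")

def moderate(post: Post, score: float) -> str:
    """Route a scored post through the workflow the article outlines."""
    if score >= REMOVE_THRESHOLD:
        action = "remove"
    elif score >= HIDE_THRESHOLD:
        action = "hide"
    else:
        return "no_action"

    # Human referral happens only where data-privacy rules allow it.
    if post.region not in NO_HUMAN_REFERRAL_REGIONS:
        queue_for_human_review(post)
    return action

if __name__ == "__main__":
    post = Post("p1", "a post mentioning self-harm", "US")
    print(moderate(post, classify_self_harm(post)))  # -> queued, "remove"
```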
Tara Hopkins, Instagram's public policy director, said: "Only if a post is reported to us by a member of the community can we use a combination of technology and human review."
Facebook and Instagram have recently come under pressure over their limited control of harmful and suicidal content, which is being shared at a growing rate every day. Since the suicide of 14-year-old schoolgirl Molly Russell, concern about how such content affects young people has intensified.
Social media platforms including Instagram, Facebook, Pinterest, YouTube, Google, and Twitter have agreed to follow guidelines released by the mental health charity Samaritans to maintain standards on this issue.
Lydia Grace, Samaritans' program manager for online harms, said: "We have seen positive results in recent months, but there is still much more to be done to combat self-harm content."
"We need a regulatory framework to ensure that these online platforms take measures to limit access to harmful content, and to ensure that supportive content is there to help young people."
"The goal of the Online Excellence Programme is to build a hub where people can learn about suicide prevention. As part of it, recommendations are being developed for technology platforms to create a healthier online environment by reducing access to harmful suicide and self-harm content and maximizing opportunities for support."
At the same time, Instagram says it also wants to be a platform where people can admit that they have thought about harming themselves.
Ms. Hopkins said: "Experts can help us destigmatize issues around suicide. We are heading towards the right balance, providing such a platform while making sure that people stay away from harmful suicidal content."