After an extensive study of YouTube recommendations, two researchers have concluded that the video-sharing service actually battles political radicalization rather than promoting it.
The study was conducted by Anna Zaitsev, a UC Berkeley postdoctoral researcher, and Mark Ledwich, an independent data scientist. The researchers claim that although radical videos exist on YouTube, the platform’s recommendation algorithm does not currently promote such content.
The researchers’ conclusion is quite interesting, considering that numerous articles published earlier this year by The New York Times centered on radicalization on YouTube. One article that went viral was about 26-year-old Caleb Cain, who claimed to have fallen down the “alt-right rabbit hole” a couple of years ago.
However, the latest research indicates that YouTube has revamped its recommendation algorithm. It is no secret that almost every social media platform, including YouTube, has been grappling with content moderation for a couple of years now.
YouTube acknowledged in a recent blog post that borderline offensive content will always surface on the platform, but over the last few years the company has been working to keep borderline content, as well as harmful misinformation, from spreading.
Last week, Mark Ledwich published a story on Medium in which he argued that the narrative promoted by The New York Times, that YouTube recommends radicalizing content to its users, does not hold up. According to Ledwich, the results of his research suggest that YouTube’s recommendation algorithm actually steers viewers away from content that could be termed questionable or radicalizing.
This is where things get interesting, however. According to technology and online radicalization experts, the study has several shortcomings. For starters, the data set of YouTube recommendations examined in the research was collected from the perspective of a user who was not logged in to YouTube. Because of this, the recommendations could not be based on watch history, which, critics argue, largely defeats the purpose of the study.
According to Zeynep Tufekci, an associate professor at the University of North Carolina School of Information and Library Science, experts would be able to conduct more fruitful research on algorithms if companies gave them access to private data. She added that proper research without the company’s participation is doable, but it would be expensive and difficult.
Anna Zaitsev, co-author of the study, also recently posted an essay on Medium in which she agreed that the study had its limitations. However, she said that an experimental assessment of the algorithm was fruitful research in its own right and that it opens the door to more qualitative research on radicalization.
According to reporter Kevin Roose, YouTube explicitly announced numerous changes to its algorithm after The New York Times’ articles on the topic were published. He also noted that those reports reflected the personal experience of online radicalization rather than a collective assessment of the algorithm.
Photo: Shutterstock