Twitter recently announced that it will conduct further reviews of its algorithms in order to remove bias and ensure “responsible machine learning” across the platform.
This statement from the social media giant comes amid heated debate across the internet over the use of algorithms and automated systems to moderate online conversation. Many people believe that algorithms, owing to their AI-based nature, can unintentionally display unfair bias, reinforce racial or gender disparity, and even fail to filter out extremist discourse. Another issue is that, because these systems rely on buzzwords that commonly crop up in controversial posts, well-meaning content that argues against bias can fall victim to collateral damage and be restricted by the algorithms.
Twitter, in light of these legitimate concerns, has chosen to disclose measures aimed both at improving the AI as a whole and at increasing transparency towards the general user base. The platform’s Ethics and Transparency team recently published a blog post detailing both the immense responsibility that designing conversation-regulating algorithms carries and the constant vigilance those algorithms require. Not only will Twitter take steps to review the results of its algorithms, with the active aim of establishing equity on the platform, it will also look into designing newer algorithms built on a more explainable form of machine learning. The latter will help the general user base better understand what specific algorithms are meant to accomplish, and how they go about doing so.
Online debate and discussion seem to grow more heated by the day, and racial tension in particular appears to be at an all-time high. In times like these, if a platform is willing to let its user base engage in fair debate, it must also act as a moderator that doesn’t actively alienate any side of the conversation. While a thinking, feeling person can be trusted to make such judgment calls, AI must be actively taught, over and over again, if it is to learn from a system that promotes equity rather than racial or gender-based superiority.
There’s no timeline yet for when these measures will be enacted, but this author, for one, hopes it will be soon.
Read next: The Global Impact Report Published by Twitter Sheds Light On The Positive Power Of Micro-Blogging Social Network