AI has a wide range of use cases, many of which have stoked controversy. Recent advances in the field have led some to propose using AI to predict criminal behavior so that police can intervene before a crime ever takes place.
Human rights groups, however, are warning that such practices are dangerous. For one thing, they could push already oppressed minorities, such as Black Americans and the Roma community in Europe, even further to the margins.
AI can scan people's faces and check whether there are any outstanding warrants against them, but these systems have been shown to disproportionately target people of color. Research has shown that Black people are far more likely to be misidentified as criminals than white people.
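To make that failure mode concrete, below is a minimal sketch of how face matching against a warrant watchlist often works, assuming a common embedding-plus-threshold design. Every name, vector, and threshold here is a hypothetical stand-in; real systems derive embeddings from a trained model, but the core logic is the same: a "match" is declared whenever similarity to a watchlist entry crosses a fixed cutoff, and if the underlying model is less accurate for some demographic groups, that cutoff produces more false matches for them.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist: names mapped to 128-dim face embeddings.
# Real systems would compute these vectors with a trained face model;
# random vectors stand in for them here purely for illustration.
rng = np.random.default_rng(0)
watchlist = {name: rng.normal(size=128) for name in ["suspect_a", "suspect_b"]}

def match_against_watchlist(probe: np.ndarray, threshold: float = 0.8):
    """Return the best watchlist match if it clears the threshold, else None.

    The threshold is the critical knob: if the embedding model is less
    reliable for some groups, a cutoff tuned on one group will trigger
    more false matches (misidentifications) for others.
    """
    best_name, best_score = None, -1.0
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

# A probe face captured by a street camera (random noise in this demo).
probe_face = rng.normal(size=128)
print(match_against_watchlist(probe_face))
```

With random vectors the similarity stays well below the cutoff, so this demo prints None; the point is that a fixed numeric threshold, not any human judgment, decides who gets flagged as a wanted person.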
Fair Trials, a human rights organization, has joined several other watchdogs in urging the EU to take swift action. The European Parliament is set to vote on whether machine learning surveillance will be permitted, and the watchdogs are pushing for a no vote in the hope of stopping this authoritarian practice from becoming commonplace.
People of color are already disproportionately targeted by the police. In the UK, Black people were seven times more likely to be stopped and searched at random than white people.
Between April 2021 and March 2022, 52.6 out of every 1,000 Black people were randomly searched by the police, compared with just 7.5 out of every 1,000 white people, roughly the sevenfold gap cited above. All in all, AI surveillance has the potential to do more harm than good, and it needs to be kept at bay.
H/T: Cybernews