As AI becomes an ever more prominent part of daily life, it also has the potential to cause enormous harm. For all the benefits it provides, so-called AI incidents, which refer to malfunctions as well as intentional misuse by threat actors, have been dangerous enough to put the world on the brink of nuclear war.
This occurred back in 1983, when a rudimentary form of AI used by the Soviet Union mistakenly detected an incoming nuclear attack from the USA. Not all incidents are quite this cataclysmic, although they can still cause serious harm. A recent example is Tessa, the chatbot offered by the National Eating Disorders Association, or NEDA for short.
This chatbot offered dangerous advice to the people it was conversing with, which may have worsened their eating disorder risks. Tesla's AI-based self-driving systems have also proven hazardous, such as when they led to a car colliding with an unwary pedestrian.
It is important to note that these incidents have increased by a massive 690% year over year. About a quarter of them, or 24.5% to be precise, come from just three companies: Tesla, Meta, and OpenAI. This data comes from Surfshark, and it reveals a worrying trend in AI.
There has also been a 261% increase in the number of chatbots online. As these numbers continue to grow, so too will the incidents that put the world in harm's way. Implementing AI can create seismic shifts, but more work needs to be put into ensuring that these shifts occur in the right direction. Failing to do so could lead to highly unpredictable risks.
H/T: Surfshark blog
Read next: The Role of Tech Corporations in Carbon Emissions