The world is changing in ways we could never have imagined, and while these changes are, for the most part, making it a better place, they are arriving faster than society can absorb. We often struggle to deal with their consequences while remaining fully cognizant of the impact they are having on our world.
Recently, Oxford professor Nick Bostrom stated that artificial intelligence (AI) may be the biggest threat the world currently faces, greater even than climate change. Many might doubt the validity of his words, but Bostrom has been endorsed by the likes of Bill Gates and Elon Musk, and a stamp of approval from two stars of the tech world should make people take his warnings a great deal more seriously.
Bostrom, who has published books such as “Superintelligence: Paths, Dangers, Strategies” along with a number of other works discussing the potential pitfalls of artificial intelligence, argues that AI is becoming too big for the companies responsible for it to handle. Tech giants such as Google, Microsoft and Facebook, along with other companies that use AI on a regular basis, may be able to manage the technology itself, but the ethics of this practice remain hazy.
The problem is that governments are still new to the very concept of AI, and so they have been slow to act. Proper regulation has not yet been implemented, which leaves companies like Google and Facebook with free rein to devise their own codes of ethics governing how they use artificial intelligence.
This is dangerous for several reasons. To start with, these companies are, first and foremost, profit-making enterprises. While they might appear community-oriented and ethically sound, they could easily put profits before ethics, making it difficult for people to keep using this rapidly progressing technology with any real degree of safety. Bostrom, putting it delicately, says that asking companies to handle their AI ethically on their own might just be asking a little too much of them.
The reason he places AI above environmental damage as the biggest danger our species currently faces comes down to control. Governments have at least started to take environmental damage seriously, which is a step in the right direction. AI, on the other hand, is completely new territory, so it is far more likely to go wrong for us.
According to Bostrom, the pop-culture depiction of apocalypse-inducing artificial intelligence may be compelling as a story, but the true danger AI poses is generally far more insidious. An AI that slips outside our control could develop goals of its own, goals quite different from those of humanity. The truly frightening prospect is that if we fail to keep it under control, it would have no reason to work towards our goals and could end up pursuing an agenda of its own.
The crux of Bostrom’s argument is that we are simply not prepared for the AI revolution, and yet we are heading towards it at full speed. Far more care needs to be taken to ensure that malevolent uses of AI do not become prevalent.