When Alexa, Amazon’s AI voice assistant, is asked whether Amazon is a monopoly, it claims not to know. While it readily criticizes other tech monopolies, it stays quiet about the wrongdoings of the company that built it. Alexa’s loyalty lies with its creators rather than its users. In most cases, however, it is far less clear whose interests an AI system serves. We must approach AI with a cautious mindset to protect ourselves from potential exploitation by these models.
That means carefully crafting the input we provide and skeptically evaluating the responses we get back.
It has become harder to tell who benefits, because newer AI systems give responses that are more complex and less obviously canned. By now we are familiar with websites and platforms that use dark patterns to keep you on their services for longer, profiting through ad revenue while disregarding your welfare. Google, TikTok, and Facebook all do it.
What sets AI models apart from those websites is the way they interact with us, which has started to resemble real-life relationships. It is easy to predict that in the near future AI will plan your travel, negotiate on your behalf, and possibly even serve as a mental health counselor.
Generative AIs are already on track to become 24-hour digital assistants that you can customize to your preferences; ChatGPT is one example.
Data scientists and security experts have warned that people who depend on AI will come to trust it, without question, to help plan their days. Ensuring that these AIs are not clandestinely serving someone else is essential. People often do not realize that the platforms and devices they use may not be operating in their favor: apps may sell personal data without permission, and devices with cameras may spy on you. There is no way to guarantee security when surveillance capitalism uses AI for its own benefit and stays quiet about it.
To use AI effectively, you need to let it know everything about you, including things that only people close to you know, and sometimes things that only you know about yourself.
Digital giants have poured enormous sums into AI developers, yet they offer these services to users for free or for a nominal price. How do these tech monopolies make money from it all? The answer is the same way most online services make money, which means manipulation and surveillance are involved.
If you ask an AI to decide where you should eat on your holiday, it may well pick a restaurant that offers incentives to its developer. You have probably noticed the same pattern with paid Google search results and sponsored ads on Instagram and Facebook.
You also cannot know whether the AI model you are using generates an unbiased response when asked about government affairs. There is no way to tell whether its answers favor the owner’s political allegiances, or whether a candidate has paid for favorable treatment.
We should expect these AI companies to do better before people trust their services. The EU’s proposed AI Act takes a significant step by demanding transparency about the data used for AI training, addressing potential bias, disclosing possible risks, and requiring reports on industry-standard tests.
Governments should step in with initiatives to protect consumers of these AI models. Until then, users will have to use AI at their own risk and approach its responses critically.