OpenAI recently launched a feature called 'Memory' for ChatGPT, which lets the chatbot retain details from a user's past conversations and carry them into future ones. OpenAI says this will help ChatGPT understand users better and give more accurate responses. However, the feature has raised privacy concerns.
OpenAI's data handling has caused problems before. Italy's data protection authority temporarily banned ChatGPT after finding that the company collected personal data without users' consent, in violation of privacy law. Separately, Samsung employees leaked confidential company information by pasting it into ChatGPT.
OpenAI's policy allows it to review user conversations to improve its models. Davi Ottenheimer, a privacy expert, argues that OpenAI's approach to user data is flawed. In his view, calling the feature 'Memory' is misleading because it suggests something different from what it really is: long-term data storage. Unlike human memory, which fades, or computer memory, which can be reliably erased, retaining data this way can create unexpected privacy problems. And although OpenAI says users can delete their chats, Ottenheimer remains doubtful.
Ottenheimer suggests OpenAI built its AI on the wrong foundation and will now find it hard to change course. He notes that models trained on accumulated data can develop unwanted behaviors over time, such as bias. A better design, he argues, would forget everything after each interaction, protecting privacy by default.
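To make this forget-by-default idea concrete, here is a minimal Python sketch of an ephemeral chat session that keeps context only in process memory and discards it when the interaction ends. All names here are hypothetical; this illustrates the design Ottenheimer describes, not OpenAI's actual implementation.

```python
# Illustrative sketch of a "forget by default" chat session: conversation
# context lives only in process memory and is explicitly discarded when
# the interaction ends. Names are hypothetical; the model call is stubbed.

class EphemeralSession:
    """Holds conversation context for one session; nothing is persisted."""

    def __init__(self):
        self._turns = []  # transient, in-memory context only

    def ask(self, prompt: str) -> str:
        self._turns.append({"role": "user", "content": prompt})
        # A real system would call a language model here; this stub echoes.
        reply = f"(model reply to: {prompt!r})"
        self._turns.append({"role": "assistant", "content": reply})
        return reply

    def close(self) -> None:
        # Drop all context; no chat history survives the session.
        self._turns.clear()


session = EphemeralSession()
print(session.ask("Draft an email for me."))
session.close()  # after this, nothing about the exchange remains
```

The key property is that nothing is ever written to durable storage, so there is no stored history to leak, subpoena, or repurpose later.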
He also stresses the importance of people controlling their own data. Today, using an AI service means accepting the provider's system on its terms, with no real alternative. Ottenheimer believes data should be owned by the person it describes and shared only with their consent.
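As a rough sketch of what consent-gated sharing could look like, assuming a simple opt-in flag per purpose (all names here are illustrative, not any real API):

```python
# Hypothetical consent gate: a record can be used for a given purpose only
# if its owner has explicitly opted in, reflecting the ownership model
# Ottenheimer argues for. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class UserRecord:
    owner: str
    payload: dict
    consented_uses: set = field(default_factory=set)  # e.g. {"support"}

def share_for(record: UserRecord, purpose: str) -> dict:
    """Release data only for purposes the owner has agreed to."""
    if purpose not in record.consented_uses:
        raise PermissionError(f"{record.owner} has not consented to {purpose!r}")
    return record.payload

record = UserRecord(owner="alice", payload={"chat": "..."},
                    consented_uses={"support"})
print(share_for(record, "support"))   # allowed: alice opted in to support
# share_for(record, "training")      # raises PermissionError: no consent
```

Under this model, the default is denial: a purpose the owner never agreed to, such as model training, simply cannot access the data.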
He also criticizes OpenAI for not living up to the 'open' in its name: users cannot easily find out what happens to their data, which opens the door to problems such as misuse of chat histories.
Regulation can rein in AI companies through fines and reputational damage when rules are broken. Ottenheimer argues the industry should pay more attention to integrity breaches, which harm both the company and the individual, rather than focusing only on privacy breaches.
Ultimately, he believes stricter regulation of integrity breaches is needed and hopes it will arrive soon, pushing companies to handle data more carefully and protect user privacy.
Image: DIW-Aigen
Read next: AI-Generated News Sites Are Rolling Out Fake Reports Faster Than Ever