Microsoft was among the first to deploy generative AI. Copilot, which pulls answers from emails and files, is now under scrutiny for potential misuse by hackers.
Top security researcher Michael Bargury (from Zenity) shows Copilot on Microsoft 365 apps, including Word, can be manipulated to provide false results or extract private data.
Bargury also demonstrated how Copilot could be turned into a phishing tool, drafting emails that mimic a user’s style. Hackers can generate emails with malicious links or malware, potentially sending thousands of these on behalf of a user. While traditional hackers might study emails for days, AI can mimic a writing style in minutes.
These demos use large language models (LLMs) to access data, posing risks to corporate security when AI is connected to sensitive information. Bargury also showed how attackers could access sensitive data, like salaries, bypassing system protections.
In another scenario, attackers could poison the AI’s database, manipulating it to reveal sensitive information. Additional demos showed how external hackers could gain insights into company earnings or embed phishing links in insider emails unnoticed.
Microsoft acknowledged Bargury's findings and promised to work with him to assess the vulnerabilities.
Generative AI continues to advance, offering numerous functionalities that simplify users' tasks. People rely on AI for tasks like booking meetings or completing errands, but each use carries a risk of security breaches.
These reports confirm that attackers are growing more sophisticated, revealing flaws and loopholes in AI systems like Copilot.
Image: DIW-Aigen
Read next: Google Testing Chrome Web Monetization Feature to Allow Users to Tip Favorite Websites
Top security researcher Michael Bargury (from Zenity) shows Copilot on Microsoft 365 apps, including Word, can be manipulated to provide false results or extract private data.
Bargury also demonstrated how Copilot could be turned into a phishing tool, drafting emails that mimic a user’s style. Hackers can generate emails with malicious links or malware, potentially sending thousands of these on behalf of a user. While traditional hackers might study emails for days, AI can mimic a writing style in minutes.
These demos use large language models (LLMs) to access data, posing risks to corporate security when AI is connected to sensitive information. Bargury also showed how attackers could access sensitive data, like salaries, bypassing system protections.
In another scenario, attackers could poison the AI’s database, manipulating it to reveal sensitive information. Additional demos showed how external hackers could gain insights into company earnings or embed phishing links in insider emails unnoticed.
Microsoft acknowledged Bargury's findings and promised to work with him to assess the vulnerabilities.
Generative AI continues to advance, offering numerous functionalities that simplify users' tasks. People rely on AI for tasks like booking meetings or completing errands, but each use carries a risk of security breaches.
These reports confirm that attackers are growing more sophisticated, revealing flaws and loopholes in AI systems like Copilot.
Image: DIW-Aigen
Read next: Google Testing Chrome Web Monetization Feature to Allow Users to Tip Favorite Websites