Generative AI models, like the ones powering chat assistants, are at risk from a new computer worm. This worm can attack AI systems, including OpenAI's ChatGPT and Google's Gemini. A team of three researchers have shown that this worm can trick AI email helpers into stealing personal information and sending unwanted emails during their tests.
The researchers discovered that this "zero-click" worm can use both text and images to harm ChatGPT, Gemini, and an open-source model named LLaVA. The worm works by sneaking in harmful prompts that make the AI repeat these prompts and carry out harmful actions without needing any clicks or direct commands from users.
The worm could lead to serious problems, like phishing (tricking people into giving up personal information), sending spam emails, or spreading false information. This was first reported by Wired, emphasizing that no computer system or AI model is completely safe from viruses.
The creators of this worm, who come from Cornell University, Intuit, and Israel's Technion, have named it "Morris II" after an early computer worm from 1988. The original Morris Worm showed the world how these threats could spread on their own across computers connected to the internet.
The people behind Morris II point out that flaws in how AI systems are built allowed them to create this worm. They warn that such worms could be used for large-scale attacks in the future, putting more AI tools at risk. This highlights the urgent need to make AI models more secure.
OpenAI has responded by saying they are working on making their systems tougher and better protected. This situation shows the importance of checking and filtering user inputs to prevent such attacks.
Image: DIW-Aigen
Read next: CIRP Study Highlights iPhone 14 and 14 Plus as Top Choices for Android Users Making the Switch to Apple
The researchers discovered that this "zero-click" worm can use both text and images to harm ChatGPT, Gemini, and an open-source model named LLaVA. The worm works by sneaking in harmful prompts that make the AI repeat these prompts and carry out harmful actions without needing any clicks or direct commands from users.
The worm could lead to serious problems, like phishing (tricking people into giving up personal information), sending spam emails, or spreading false information. This was first reported by Wired, emphasizing that no computer system or AI model is completely safe from viruses.
The creators of this worm, who come from Cornell University, Intuit, and Israel's Technion, have named it "Morris II" after an early computer worm from 1988. The original Morris Worm showed the world how these threats could spread on their own across computers connected to the internet.
The people behind Morris II point out that flaws in how AI systems are built allowed them to create this worm. They warn that such worms could be used for large-scale attacks in the future, putting more AI tools at risk. This highlights the urgent need to make AI models more secure.
OpenAI has responded by saying they are working on making their systems tougher and better protected. This situation shows the importance of checking and filtering user inputs to prevent such attacks.
Image: DIW-Aigen
Read next: CIRP Study Highlights iPhone 14 and 14 Plus as Top Choices for Android Users Making the Switch to Apple