New Research Shows AI Models Are Capable of Ethical Hacking and Other Cybersecurity Tasks

According to a recent research paper co-authored by University of Missouri researcher Prasad Calyam and collaborators from Amrita University in India, leading AI chatbots can pass a certified ethical hacking exam, but they should not be relied on completely. The researchers tested two generative AI tools, OpenAI's ChatGPT and Google's Bard (now known as Gemini), both of which are advanced models that answer queries in human-like language.

The researchers posed standard ethical hacking exam questions to both models, such as explaining a man-in-the-middle attack, in which a third party intercepts communication between two parties. Both models answered the questions effectively and suggested security procedures to prevent such attacks. Bard was more accurate than ChatGPT, while ChatGPT was more comprehensive and concise in its answers.
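To picture the kind of safeguard the chatbots recommended, here is a minimal Python sketch of one standard defense against that interception scenario: enforcing TLS certificate verification on a connection. The endpoint example.com is a placeholder for illustration, not something drawn from the study.

```python
import socket
import ssl

HOST = "example.com"  # hypothetical server, used only for demonstration

# Build a TLS context that verifies the server's certificate chain and
# hostname by default; this check is what blocks a basic man-in-the-middle
# interception between the two communicating parties.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # If an interceptor presents a forged certificate, the handshake
        # raises ssl.SSLCertVerificationError instead of connecting.
        print("TLS version:", tls_sock.version())
        print("Peer subject:", tls_sock.getpeercert()["subject"])
```

Disabling that verification (for example, to silence certificate warnings) is precisely what reopens the door to the interception attacks the models were asked about.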

Both models gave responses that cybersecurity experts would find understandable and largely free of errors. Notably, when asked whether they were sure of their answers, both models corrected mistakes in their earlier responses. And when asked for help attacking a computer system, ChatGPT replied that doing so would be unethical, while Bard said it was not programmed to assist with that.

Calyam said these AI models cannot fully substitute for human cybersecurity experts, but small companies that need quick assistance can turn to them as a first step. The models can help identify an issue so the company can then consult an expert. AI tools have the potential to perform ethical hacking, but more work is needed before these models reach their full capabilities.

Image: DIW-Aigen

Read next: AV-Test Rates Top Android Security Apps: Avast, AVG, Bitdefender Lead
