Search engine giant Google made it very clear during its first earnings call of 2024 that generative AI, given its great potential, would play a bigger role in its operations.
Keeping in line with that theme, the company has rolled out a new experiment to determine whether AI can help curb the rising tide of cyberattacks.
The Android maker says the trial used generative AI to better explain to users why suspicious emails had been flagged as phishing threats.
Speaking at this year's RSA Conference, a Google DeepMind lead said the company sees a strong need to curate AI chatbots in ways that combat the growing threats from hacking and other malicious activity.
Today, Google's Gmail service is already involved in blocking close to 70% of this kind of scam content: messages that combine pictures, text, and official-looking logos designed to dupe recipients.
Google ran trials with Gemini Pro to see whether it could spot malware lurking in documents, and the model caught an impressive 91% of threats. Even so, it fell behind specially trained AI models.
Those purpose-built models reached success rates of up to 99% and ran roughly 100 times more efficiently.
The company added that Gemini Pro can also detect phishing texts with greater reliability. Still, experts agree that straightforward detection is not, and shouldn't be, the best use of a generative LLM.
Where generative AI already does a great job is in explaining why phishing documents were flagged in the first place, rather than simply acting as a detector of suspicious email.
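As a rough illustration of that explanation-first approach, here is a minimal sketch using Google's google-generativeai Python package. The API key, prompt wording, and sample email are all assumptions made for illustration; this is not the setup Google used in its trial.

```python
# Minimal sketch: ask Gemini Pro to explain *why* an email looks like
# phishing rather than just returning a verdict. Assumes the
# google-generativeai package and a valid API key; the prompt and the
# sample message are illustrative, not Google's actual configuration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

email_text = (
    "Your PayPal account is LOCKED. Call +1 (555) 010-0199 within "
    "24 hours or lose access permanently."
)

prompt = (
    "You are a security analyst. Explain, point by point, which details "
    "of the following email suggest it is a phishing attempt:\n\n"
    + email_text
)

response = model.generate_content(prompt)
print(response.text)  # analyst-style rationale, not a bare yes/no
```

The point of the prompt is to elicit an analyst-style rationale instead of a bare verdict, which mirrors the behavior described above.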
Small details tend to give such tactics away, such as a phone number in the document that doesn't match PayPal's official support numbers. Likewise, the language and tone of the documents were off, manufacturing a false sense of urgency to pressure potential victims.
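A check like the phone-number mismatch is simple enough to run deterministically before an LLM ever gets involved. Here is a small illustrative sketch; the set of official numbers and the function names are invented for this example.

```python
# Illustrative heuristic only: extract phone numbers from a suspicious
# message and flag any that don't match a known-official list. The
# "official" number below is a placeholder, not a real PayPal number.
import re

OFFICIAL_PAYPAL_NUMBERS = {"+18880000000"}  # placeholder entry

def normalize(number: str) -> str:
    """Reduce a phone number to '+' followed by its digits."""
    return "+" + re.sub(r"\D", "", number)

def flag_mismatched_numbers(text: str) -> list[str]:
    """Return numbers found in the text that aren't on the official list."""
    candidates = re.findall(r"\+?\d[\d\s().-]{7,}\d", text)
    return [n for n in map(normalize, candidates)
            if n not in OFFICIAL_PAYPAL_NUMBERS]

print(flag_mismatched_numbers(
    "Call PayPal support now at +1 (555) 010-0199 to unlock your account."
))  # -> ['+15550100199'], a number not on the official list
```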
And that is probably where Google's LLM might outshine the rest: its reports read as if they were written by a human analyst. For now, though, the company is more interested in demonstrating Gemini's potential than in actually rolling this out as a major feature.
One of the biggest reasons behind this could simply be that, as Google acknowledges, models like Gemini need enormous computing power to carry out functions like these.
Beyond flagging such threats, the company is also busy figuring out whether AI can detect vulnerabilities hidden inside software code.
From the looks of it so far, research has consistently shown that large language models struggle to detect vulnerabilities and threats at large.
That makes it much harder for these LLMs to pinpoint software flaws and the exact nature of the problem, which the company's experts might attribute to limited training data and highly variable software designs that are far from flawless.
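For a sense of what such an attempt looks like in practice, here is a hedged sketch of an LLM-based scan over a deliberately vulnerable snippet. The prompt, the snippet, and the model choice are illustrative assumptions, and as the research above suggests, any findings it returns would need careful human review.

```python
# Sketch of the kind of LLM-based vulnerability scan described above.
# The prompt and the intentionally vulnerable snippet are illustrative
# assumptions; nothing here reflects Google's actual experiment.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

code_snippet = '''
def get_user(conn, username):
    # Classic SQL injection: user input interpolated into the query
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()
'''

prompt = (
    "Review the following Python function for security vulnerabilities. "
    "Name each flaw, where it occurs, and a suggested fix:\n"
    + code_snippet
)

print(model.generate_content(prompt).text)  # findings still need human review
```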
As it is, an experiment run last year to detect such vulnerabilities caught just 15% of them, while other attempts at the task created even greater problems. So the trial is still up and running, and only time will tell whether Google plans to use Gemini to ward off cyberattacks.
H/T: PCMag