Tech giant Microsoft is pressing ahead with its push to add AI technology across all of its products.
The company took another step in that direction today with the launch of a new security tool called Security Copilot. Microsoft says the goal is to give defenders more threat intelligence to combat the growing number of cybersecurity attacks.
It is pitching the new tool as a reliable way to correlate information about such attacks, and it is framing the handling of security incidents as one of its top priorities right now.
Experts were quick to point out that plenty of tools already do this. So what is the point of adding another one?
Microsoft argues that Security Copilot is designed to integrate with the security portfolio already built into its software. The difference this time is the addition of generative AI models, including the recently launched GPT-4.
Advancing in this area, the company says, means helping both people and technology keep pace with the speed and scale at which AI is reshaping the world. Security Copilot is meant to help create a future where every defender is empowered by the tools and technology needed to make the world a more secure place.
For now, the project is very new and Microsoft is staying fairly quiet about exactly how GPT-4 will work within the tool. What it is saying is that the model has been heavily trained and customized with security-specific skills to help tackle major cyber threats.
Furthermore, Microsoft is making it clear from the start that it has not used any customer data for training purposes. That matters, since this has been a major source of criticism from experts regarding previous language-based AI models.
The customized tool is designed to catch what others might miss. It can also answer a wide range of queries, recommend a course of action, and produce summaries of security incidents and related processes along the way.
Like most text-generating models, it is prone to producing inaccurate output, and it is not yet clear how well that has been kept in check in practice. For its part, Microsoft is hedging by acknowledging that Security Copilot does not always get things right.