Software giant Microsoft is making it clear to the world that it is a serious contender in the AI market.
The company is reportedly training a new AI model large enough to compete with the LLMs of Alphabet and OpenAI.
The effort, dubbed MAI-1, is said to be overseen by a leading name in tech: Mustafa Suleyman, co-founder of Google's DeepMind and former CEO of AI firm Inflection.
The report from The Information offered few further details, including the model's exact purpose, which will likely depend on how well it performs.
According to the report, Microsoft could preview the model as soon as this year's Build developer conference, so we have our fingers crossed.
For now, though, the software giant prefers to stay silent and is not commenting on the matter.
MAI-1 would be far larger than any model Microsoft has trained in the past, and that bigger size means higher costs as well.
In April, Microsoft unveiled a small-scale AI model called Phi-3-mini, designed to attract a wider client base looking for more cost-effective options.
The firm has invested billions in OpenAI and rolled out ChatGPT-like technology across its offerings, including its productivity software, which says a lot about the two companies' intertwined relationship and has given Microsoft an early lead in the competitive AI race.
Remember, the software giant has also reportedly set aside a large cluster of servers equipped with Nvidia GPUs, along with large amounts of data, to improve the model.
MAI-1 will reportedly feature around 500 billion parameters, while OpenAI's top model is said to have close to one trillion and Phi-3-mini has just 3.8 billion. So as you can tell, a lot of interesting endeavors are in the works.
Image: DIW-Aigen
Read next: Google Rolls Out New AI Experiment That Alerts Users About Possible Cyberattacks