Large language models are arguably the most prominent technology of the moment, and Google's DeepMind division has been working to take them to the next level. Its researchers have collaborated with experts at the University of Southern California to propose a new prompting framework designed to enhance the reasoning abilities of LLMs.
The framework prioritizes self-discovery: the model composes task-specific logic and reasoning to work out a solution to the problem it has been posed. Notably, the research reported clear gains in the performance of GPT-4 and other LLMs.
The way this works is that the LLM crafts its own reasoning structure for a given task, drawing on a variety of reasoning modules such as step-by-step thinking and critical thinking. If the proposed framework pans out, it would be enormously useful, since it could reduce the computing power required for inference by a factor of 10 to 40.
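For readers who want a more concrete picture, the framework appears to be the one described in DeepMind and USC's "Self-Discover" paper, which works in roughly three stages: the model selects the reasoning modules relevant to a task, adapts them to that task, and implements them as a step-by-step reasoning structure it then follows to produce an answer. The Python sketch below is a minimal illustration of that loop under those assumptions; call_llm and the short seed-module list are hypothetical placeholders rather than a real API or the paper's full module set.

```python
# A minimal sketch of a Self-Discover-style prompting loop.
# `call_llm` is a hypothetical stand-in for whatever chat-completion
# API you use; it is not a real library call.

REASONING_MODULES = [
    "Let's think step by step.",
    "Critical thinking: analyze the problem from different perspectives.",
    "How can I break this problem into smaller, more manageable parts?",
    "How could I devise an experiment to help solve the problem?",
]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def self_discover(task: str) -> str:
    # Stage 1 (SELECT): ask the model which seed modules are useful.
    selected = call_llm(
        "Select the reasoning modules that are crucial for solving "
        f"the task below.\nTask: {task}\nModules:\n"
        + "\n".join(REASONING_MODULES)
    )
    # Stage 2 (ADAPT): rephrase the selected modules so they are
    # specific to this task.
    adapted = call_llm(
        "Rephrase these reasoning modules so they are specific to the "
        f"task.\nTask: {task}\nSelected modules:\n{selected}"
    )
    # Stage 3 (IMPLEMENT): compose the adapted modules into a
    # step-by-step reasoning structure.
    structure = call_llm(
        "Turn these adapted modules into a step-by-step reasoning "
        f"structure.\nTask: {task}\nAdapted modules:\n{adapted}"
    )
    # Finally, solve the task by filling in the discovered structure.
    return call_llm(
        "Follow this reasoning structure to solve the task, filling in "
        f"each step before giving a final answer.\n"
        f"Structure:\n{structure}\nTask: {task}"
    )
```

Because the reasoning structure is discovered once per task rather than rebuilt for every instance, and the final answer then takes a single pass, this is presumably where the claimed 10 to 40 times reduction in inference compute comes from, relative to sampling-heavy approaches like self-consistency.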
One of the biggest issues facing LLMs right now is the sheer amount of energy their computations demand. Any research that lightens that load could be a game changer, and this proposed framework is no exception.
It will be interesting to see where things go from here, since Google has mostly been playing catch-up with the likes of OpenAI and Microsoft after ChatGPT's enormous head start last year. A framework like this could give the tech juggernaut the edge it needs, and perhaps chart a path forward for the entire industry that newer startups could follow to throw their hats into the ring.
Image: Digital Information World - AIgen
Read next: The Essential Guide to Implementing Generative AI in Small Businesses