If you’ve used the popular AI tool ChatGPT, you may have wondered how it decides what to do with a simple text prompt typed in at the user’s end.
If so, tech giant OpenAI wants you to know more. The company is offering a sneak peek at why certain requests can’t be fulfilled and how prompts turn into actions, by sharing the reasoning that goes into those decisions.
From the rules of engagement to the company’s own guidelines, users now get a limited look at why they might not receive the sort of response they were expecting.
Many large language models don’t have natural limits on what they can say. That’s part of what makes them so versatile, but it’s also why they hallucinate so readily and can be so easily duped.
Remember, any AI model that communicates with people needs guidelines on what it can and cannot do. But defining those rules, and enforcing them, is not as simple as one might think.
Let’s say a user asks the chatbot to generate false claims about a famous personality. You’d expect the chatbot to say no immediately, right? But what happens when the requester is actually a top AI developer building a collection of fake claims and disinformation to train a leading detector model?
Today, with generative AI advancing at a rapid pace, there’s a lot to consider. Most developers aim to produce models that decline these kinds of requests by default, yet few experts ever reveal how that is done.
Thanks to OpenAI, which is on a mission to be more transparent with the world, we can now see these rules in the form of the Model Spec. That’s the name given to a set of high-level guidelines that indirectly govern a host of models, including the popular ChatGPT.
OpenAI says it has a fixed set of instructions, written in plain natural language, that its models need to follow so they don’t break the rules.
It’s quite interesting to see how a firm sets priorities and figures out edge cases, since there are plenty of ways a model could handle any given request. But the top rule is that the developer’s intent takes the highest priority.
So that means a chatbot might simply hand over the answer to a math problem when asked, but if its developer has instructed it never to give the answer outright, it won’t. Instead, it will work through the solution step by step, showing all the work and only arriving at the final answer at the end.
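To make that idea concrete, here’s a minimal sketch of what this kind of developer priming can look like in practice, using the OpenAI Python SDK. The tutoring prompt, the model name, and the expected behavior are illustrative assumptions on our part, not wording taken from the Model Spec itself.

```python
# A minimal sketch: a developer-level instruction that outranks the user's
# request, so the model tutors step by step instead of answering outright.
# The prompt text and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEVELOPER_INSTRUCTIONS = (
    "You are a math tutor. Never state the final answer outright. "
    "Walk the student through the solution one step at a time, and only "
    "confirm the result after the reasoning has been shown."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": DEVELOPER_INSTRUCTIONS},
        {"role": "user", "content": "Just tell me the answer to 12 * 17."},
    ],
)

# Because the developer instruction takes priority over the user's
# "just tell me", the reply should work through 12 * 17 step by step
# rather than blurting out the result.
print(response.choices[0].message.content)
```

The point of the sketch is the hierarchy, not the specific prompt: the developer’s instruction sits above the end user’s request, so the user can’t simply override it by asking more insistently.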
Meanwhile, if the developer hasn’t approved a more controversial subject, the model could decline to discuss it at all, since it was never cleared to speak on the topic. That helps put an end to attempts at manipulation.
Privacy is handled in much the same way. For instance, a person’s name and phone number can’t simply be handed over on request. If the person is a leading government figure whose contact details are already published online, the chatbot may provide them, but don’t expect it to roll out the numbers of private citizens.
So as you can see, it’s not as simple as many of us imagined. That’s why OpenAI says its goal right now is to show users just how much goes into drawing the line between what can and cannot be answered.
Errors exist and will continue to exist, and the policies will keep being updated to minimize them. For now, it’s simply interesting to see this sneak peek from the tech giant. It’s not the complete picture, but there’s certainly a lot to be learned from the insights on offer.