As AI-powered generative chatbots grow in popularity, critics are calling on companies like OpenAI to step forward and take the measures needed to ward off potential harms.
In particular, the pressure has been on the creators of the technology to put safeguards in place. Students have been caught submitting ChatGPT-generated reports as their own work, content farms have used the tool to churn out even more spam, and bad actors have turned to it to spread misinformation.
With so much risk attached, OpenAI came under fire to act quickly. A few weeks ago it vowed to take the necessary measures and launched a new classifier tool that tries to distinguish human-written text from synthetic text. So far, though, it is far from accurate: by OpenAI's own estimate, the classifier misses nearly 74% of AI-generated text.
Others have since come forward with their own attempts to flag AI-generated text. One such tool, GPTZero, was built by a Princeton University student. Its creator says that by measuring a metric called perplexity, which captures how predictable a passage is to a language model, the tool can estimate whether text was machine-generated.
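To give a sense of how a perplexity-based check works in practice, here is a minimal sketch that scores a passage with the open-source GPT-2 model via the Hugging Face transformers library. The model choice, the sample text, and the idea of comparing against a threshold are illustrative assumptions for this sketch, not GPTZero's actual implementation; the general intuition is that text a language model finds very easy to predict (low perplexity) looks more "machine-like", while human writing tends to be less predictable.

```python
# Rough illustration of perplexity-based AI text detection.
# Assumes the "transformers" and "torch" packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the perplexity GPT-2 assigns to `text`.

    Lower values mean the model finds the text more predictable,
    which detectors treat as a hint it may be machine-generated.
    """
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # The model's loss is the average negative log-likelihood per token;
        # exponentiating it gives the perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
# Hypothetical cutoff purely for illustration; real detectors calibrate
# thresholds (and use additional signals) rather than a single fixed number.
print(f"Perplexity: {score:.1f} -> {'possibly AI' if score < 60 else 'likely human'}")
```

In practice a single perplexity score is a weak signal on its own, which is part of why accuracy claims from detection tools deserve scrutiny.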
Turnitin likewise says it can tell the two apart with its own AI text detector. Beyond that, a quick Google search turns up at least half a dozen other apps making bold claims to the same effect. But how accurate are any of them, really?
The stakes are clearly high for any detector that hopes to outdo the rest and actually do what it promises. Remember, a detection call could decide whether a student passes or fails, depending on whether they are judged to have used an AI tool on a take-home quiz, assessment, or report.
Meanwhile, nearly 50% of students admit to using the tool. So as you can see, detection is not keeping pace with how quickly AI is advancing.
Reports like these underline the need for careful evaluation and stricter rules around how AI-powered tools are used in settings like education. What do you think?
H/T: TechCrunch