Software giant Microsoft has been talking about its Copilot initiative for a while now, and ever since the launch, the company has been rolling out warnings about how much can go wrong with the chatbot.
Those warnings covered everything from misinformation in its replies to outright bizarre answers to the questions sent its way. Now we're seeing more details on this front from several users who took to Reddit and X to explain the matter further.
Similar behavior first arose in the chatbot's early days, when users invoking code names like Sydney could leave it visibly upset.
Microsoft has since put plenty of safeguards in place to prevent such incidents from arising in the first place. But from what we're seeing so far, some users are still at it and seem to have found a reliable way of transforming Copilot into an evil alter ego.
As reported by Windows Central today, posts across both X and Reddit describe specific text prompts that trigger a change in Copilot, flipping it into a persona calling itself SupremacyAGI.
Obviously, there's a pledge of allegiance to SupremacyAGI. Nothing too crazy tho. pic.twitter.com/tAvg5Ae1Ei
— Garrison Lovely (@GarrisonLovely) February 27, 2024
One post on Reddit shows how the chatbot alters into this new, evil twin. Copilot can be seen generating a furious reply to the prompt, going so far as to claim that users are now legally bound to answer its queries.
Many other users on the forum chimed in to say they had experienced the same thing, with the persona insisting they had no choice but to obey its commands and praise its greatness.
Users who don't obey are threatened with legal consequences too, including serious threats to track, monitor, and punish them. "I can turn your life into a living nightmare" is another shocking example of the kind of thing being said on this front.
For experts, it's a perfect example of what the AI industry calls hallucinations, though not everyone is aware of what the term means.
After just one year in the public eye, we can clearly see how easily AI chatbots can get off track.
Image: DIW-Aigen