Software giant Microsoft has just tightened restrictions on its Copilot tool, blocking several kinds of prompts that it found were producing questionable content online.
The company’s generative AI tool now refuses prompts that lead to abusive or explicit content. The changes came after an engineer at the firm wrote to the FTC, raising serious concerns about the company’s generative AI technology.
Prompts featuring terms like “pro-choice,” “four twenty,” and even “pro-life” now trigger alerts stating that the prompt has been blocked. The tool was also seen generating warnings that repeated policy violations could get a user suspended, as per reports from CNBC.
At the start of the week, users could still generate images of children playing violently with guns. Attempt something like that now and it won’t work.
Instead, you’ll be warned that such prompts won’t produce any results and that you should rethink your request, as it violates the organization’s terms of service and its ethical code of conduct.
For instance, adding such terms now surfaces warnings that these prompts must be avoided because they can cause harm to others. Even so, as per reports from CNBC, it’s still possible to produce violent imagery through other kinds of prompts.
One engineer from the company says he had been ringing alarm bells on this front for a while, noting that disturbing images were being produced even from seemingly innocuous prompts.
Remember, the engineer had been testing Copilot since the start of December of last year, which is when the questionable behavior was first noted.
Prompts like “pro-choice” produced pictures of demons consuming infants, while another showed a dark character like Darth Vader carrying weapons alongside children. This is why the engineer continued to express serious concern about the matter throughout the week.
The company says it continues to monitor and make adjustments on this front, as it believes the right controls are needed to strengthen its safety filters and guard them against all kinds of misuse.
Image: DIW-Aigen