A leading AI engineer at Microsoft has reportedly sent a letter to the FTC warning how dangerous a tool the company’s Copilot Designer is.
The engineer accused Copilot Designer of producing some of the most harmful images yet generated with the assistance of AI technology.
For those who might not be aware, Copilot Designer is powered by OpenAI’s DALL-E 3 and, according to the complaint, has generated deeply offensive pictures featuring political bias, drug use, and underage drinking. It has also reproduced corporate trademarks and copyrighted material, along with content touching on religious beliefs and conspiracy theories.
Such concerning reports carry weight coming from a leading engineer, Jones, who has been with the firm for the past six years. As reported by CNBC, the outlet was able to generate similar kinds of pictures using the tool.
According to Jones, the tool can produce sexually explicit, objectifying images of women without any form of consent.
The engineer calls this a big eye-opener for the world, arguing that Microsoft allowing such output without any kind of warning is a serious problem.
The tool’s page displays no warning up front, though an FAQ section on that page states that plenty of controls have been put in place to stop the production of pictures deemed harmful.
Microsoft has also said that when its system flags a prompt as likely to produce harmful pictures, it blocks the prompt and informs the user.
Jones added that he has tried time after time to get Microsoft’s attention on installing age restrictions for the tool, but the firm has not acted on those requests.
The engineer says the argument has gone on for the past three months, during which he has even requested that the tool be removed from public use until stronger safeguards are in place.
Microsoft has so far rejected those recommendations, though Jones holds out some hope that disclosures will be added and the app’s rating changed so it is restricted to mature audiences.
With Microsoft unmoved, he is now looking to the FTC to intervene and stop image production of this kind, or at the very least to ensure such material is properly disclosed so users know from the start what to expect.
The software giant has previously said it was carrying out its own investigation on this front, but the lack of action to date is obviously worrying.
Image: DIW-Aigen