Incidents of AI technology being used to produce misleading content have been on the rise for months.
One recent incident that drew worldwide attention involved music sensation Taylor Swift, whose sexually explicit deepfake images went viral and quickly flooded the internet, causing widespread concern.
This put software giant Microsoft in the line of fire, as the culprit reportedly used Microsoft Designer to generate the images. The company stated that it found no proof the images originated from its tools, but it nevertheless made changes to its systems to prevent such abuse from happening in the first place.
Those changes were aimed at strengthening text filtering and addressing misuse of its services. Now, Microsoft President Brad Smith has elaborated further on the issue in a post published on the company's official blog.
Smith did not mention the singer, the incident, or any use of Microsoft Designer to produce the pictures. He did, however, make clear that the company views the matter as a serious concern, noting that abuse by threat actors is surging as AI tools become more easily accessible.
Smith outlined several areas the software giant plans to focus on to combat abusive AI-generated content. The first is ensuring the safe use of its tools, which means blocking specific text prompts, running ongoing tests, and immediately banning users who abuse the tools. Next, the company wants to make AI-generated content identifiable.
To that end, the company is said to be adding watermarking and fingerprinting technology so that AI-generated content can be identified going forward.
The software giant also hopes to collaborate with others across the tech industry to combat AI deepfakes. More ambitiously, Smith wants Microsoft to work alongside leading governments to craft new laws that would ban such content outright.
Microsoft also intends to remove abusive content from its own platforms, such as LinkedIn and Xbox.
In addition, Smith wants to work with industry peers and law enforcement officials to find broader solutions. Other plans on the cards include raising public awareness of AI programs and systems, and providing educational tools that help people distinguish genuine content from fake.
Photo: Digital Information World - AIgen
Read next: FTC Warns Tech Giants Against Silently Changing Privacy Policies