Science keeps advancing, and the digital world has tried its best to make the most of each new breakthrough. Today, however, leading tech giant Microsoft announced its decision to bid farewell to parts of its AI-powered facial recognition technology.
For many, the news comes as a relief, given the technology's controversial design and the heavy criticism it has attracted.
The tool in question promised to go as far as recognizing emotions, which is why the firm is slowly but surely removing public access to it. For those who might not be aware, one capability went so far as to infer emotions from images and videos.
Tech analysts and experts have argued for years that using these tools for emotion detection is deeply flawed. For starters, emotional expression is not uniform across global populations.
Secondly, many consider it unsound to equate outward emotional displays with what a person actually feels on the inside.
One professor of psychology argues that companies are misleading the world with technology like this: it may be able to detect a scowl, but that is not the same as detecting rage.
Microsoft's new decision appears to be part of a broader effort to retire tools that conflict with its ethics policies. The firm is working to determine where its tools are being used and how far they go in analyzing human behavior.
In practical terms, the tech giant will soon place significant limits on its facial recognition features. Some are being eliminated entirely, while others will require customers to explain how they plan to use a specific facial identification tool before access is granted.
Meanwhile, not all tools will be treated so stringently. Some will be left as they are because they carry relatively little risk.
Beyond retiring emotion detection from facial recognition, Microsoft has also unveiled plans to retire the Azure Face tool's ability to identify specific attributes such as age, gender, hair, makeup, and whether a person is smiling.
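For context, attribute analysis of this kind was exposed through the Azure Face service's detect endpoint via a `returnFaceAttributes` query parameter. The sketch below only builds the request parameters locally to illustrate which attributes are being retired; the endpoint URL and key are placeholders, nothing is sent over the network, and this is an illustrative assumption about the request shape, not Microsoft's own code.

```python
# Illustrative sketch: how attribute analysis was requested from the Azure
# Face detect REST endpoint before these attributes were retired.
# The endpoint and key below are placeholders; no request is actually made.

RETIRED_ATTRIBUTES = ["age", "gender", "smile", "hair", "makeup", "emotion"]

def build_detect_request(resource_endpoint: str, api_key: str):
    """Build (url, params, headers) for a Face detect call; nothing is sent."""
    url = f"{resource_endpoint}/face/v1.0/detect"
    params = {
        # Comma-separated list of attributes the caller wants analyzed.
        "returnFaceAttributes": ",".join(RETIRED_ATTRIBUTES),
    }
    headers = {"Ocp-Apim-Subscription-Key": api_key}
    return url, params, headers

url, params, headers = build_detect_request(
    "https://example-resource.cognitiveservices.azure.com",  # placeholder
    "<placeholder-key>",
)
print(params["returnFaceAttributes"])
```

After the retirement, requests asking for these attributes are rejected for new customers, which is why existing integrations need to drop them before access is revoked.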
Remember, there is no agreed standard here: without consistent criteria for detecting emotions, comparisons between systems quickly become unreliable. And that is before considering the many privacy concerns attached to the technology.
The retirement of these features takes effect as early as today, meaning no new customers will get access. Those who already use the features will be given until June 30, after which their access will be revoked.
It is important to note that while the firm is removing public access to these features, it will still use them in at least one of its own products: Seeing AI, a tool designed to help the visually impaired.
The company also plans to introduce similar limitations on its Neural Voice feature, a tool that lets users create AI voices from recordings of real people. If you are wondering why, the technology has plenty of legitimate uses, but it also carries serious risks.
Bad actors could exploit the tool to impersonate people for illegal activities, the company explained, and it would rather bid farewell before the problem gets out of control.
Photo: Coolcaesar / Wiki
Read next: AVTEST conducted an analysis on the best security tools for Windows and here is how the results look