As AI systems advance at a rapid rate, they may increasingly be asked to make moral judgements on our behalf. Yet many remain concerned about whether AI can weigh in on issues that require complex moral frameworks. A study conducted in China and recently published in Behavioral Sciences offers some insight into the matter.
That said, it is important to note that people tend to base their opinions on context. The perceived morality of a decision rendered by an AI can vary greatly with the situation in which it was made, and this needs to be factored in whenever these decisions are discussed.
In one of the tests, participants were presented with the standard trolley dilemma: should the agent in charge pull the switch so that fewer people die, or do nothing at all? When told that an AI would be making the decision, participants judged that acting to ensure fewer deaths was not moral.
However, participants did not feel the same way when the decision-maker was a human rather than an AI. This suggests that they believe AI does not possess enough agency to make such consequential moral judgements.
The primary finding worth examining here is that people apply a different set of moral standards to humans than to AI. That difference is intriguing because it could determine how people react to AI making decisions on their behalf. The future of AI may depend on whether people are willing to hand it the reins for essential decisions.