When you hear the term AI chatbot, health is probably the last thing that comes to mind.
But new research suggests AI could play a role in fostering healthier relationships online. The findings come from a study in which the authors simulated 500 chatbots and analyzed the interactions between them.
The researchers examined in detail how such chatbots behave on a simulated social media platform, using experimental models to facilitate and stimulate different kinds of interactions.
The chatbots' behavior became noticeably less toxic after exposure to cross-party engagement. While far more work is needed to confirm these claims, the results hint at how faithfully such simulations can reflect human behavior.
The study also points to practical applications: interventions that could curb the negative and polarizing activity many users associate with social media. Notably, the authors suggested small changes to how social networks surface content, changes that could soften partisan divides and encourage more constructive conversations across the board.
The study was carried out by a large team including Petter Törnberg, a professor at the University of Amsterdam. The researchers programmed 500 chatbots, each with a distinct profile built from survey data.
The bots were designed to read news and posts circulating through the X ecosystem. In these simulations, many of the chatbots sought out common ground and displayed behavior that was notably less toxic.
AI can now mimic individuals with striking realism, and research like this shows how it might help bring people together, provided its ethical pitfalls are kept in check.
Each chatbot was assigned a political party affiliation, along with other attributes such as gender, race, religion, and income. The goal was to build a degree of diversity into the simulated population.
On a given day, the bots read real news stories, produced comments on the trending topics in focus, and engaged with each other's content. The researchers tracked their behavior across simulations running for six hours.
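The setup described above can be pictured as a simple agent-based simulation: agents carry a party affiliation and demographic traits, post about headlines, and engage with one another. The sketch below is purely illustrative, assuming hypothetical names and mechanics; it is not the study's actual code.

```python
import random

# Illustrative agent-based sketch: agents with a party affiliation and
# demographic traits post and engage, and we measure how often an
# engagement crosses party lines. All names here are hypothetical.

PARTIES = ["A", "B"]
TRAITS = {"gender": ["f", "m"], "religion": ["x", "y", "z"]}

def make_agents(n, seed=0):
    """Create n agents, each with a random party and trait values."""
    rng = random.Random(seed)
    return [
        {"party": rng.choice(PARTIES),
         **{t: rng.choice(vals) for t, vals in TRAITS.items()}}
        for _ in range(n)
    ]

def run_simulation(agents, headlines, steps):
    """Each step, one agent posts about a headline and another reads it.
    Return the fraction of engagements that crossed party lines."""
    rng = random.Random(1)
    cross_party = 0
    for _ in range(steps):
        author, reader = rng.sample(agents, 2)  # two distinct agents
        _post = (author["party"], rng.choice(headlines))
        if reader["party"] != author["party"]:
            cross_party += 1
    return cross_party / steps

agents = make_agents(500)
rate = run_simulation(agents, ["headline 1", "headline 2"], steps=1000)
```

In the actual study the agents were LLM-driven and the researchers measured toxicity rather than a simple cross-party count, but the overall loop of profiled agents reading, posting, and engaging over a timed run follows this shape.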
Bridging models, in particular, produced more positive exchanges on contentious subjects such as LGBTQ issues.
These results suggest that social media could be designed to drive engagement without fueling abuse between groups. Still, more studies are needed to validate how effective these models are at encouraging healthier human behavior online.
The major ethical concerns on this front cannot be ignored, however. Training such bots can involve large amounts of private information, potentially drawing on real users' histories and confidential data, which raises a host of ethical problems.
This is why experts are calling for clearer guidelines on how training data may be used, so that more studies of this kind can proceed responsibly.
AI chatbots now behave more like humans than ever before. That can reduce the toxicity seen across social media apps, but only if the systems are built to reflect the best side of humanity.
Image: Digital Information World - AIgen
Read next: How Does Authorship Work in the Age of AI? This Study Reveals the Answers