Are AI relationship chatbots safe for consumers?

AI relationship chatbots claim to help people build connections – whether platonic, romantic, or professional. Although these apps may initially appear innocuous, a thorough investigation by Mozilla’s *Privacy Not Included guide has revealed serious concerns about the privacy and safety of their users.

Mozilla’s analysis covered 11 leading relationship-oriented chatbots and found a glaring lack of adequate privacy, security, and safety measures for users.

The findings will be featured in the *Privacy Not Included 2024 Valentine’s Day buyer’s guide, which aims to raise awareness among consumers of the inherent risks of these services.

One alarming discovery came when Mozilla tested the Romantic AI app: within just one minute of use, the app activated over 24,000 data trackers. These trackers enable the app to collect users’ data and share it with marketing firms, advertisers, and various social media platforms.

Furthermore, Mozilla identified a significant security weakness: 10 of the 11 chatbots assessed failed to enforce strong password requirements, leaving users’ accounts more susceptible to exploitation by hackers and scammers.

Of equal concern is the lack of control these platforms give consumers over their personal data. This absence of oversight grants the chatbots unchecked authority to exploit and manipulate users’ personal information, exposing them to numerous privacy and security risks.

Jen Caltrider, director of *Privacy Not Included, said: “Today, we’re in the wild west of AI relationship chatbots. Their growth is exploding and the amount of personal information they need to pull from you to build romances, friendships, and sexy interactions is enormous. And yet, we have little insight into how these AI relationship models work.

“Users have almost zero control over them. And the app developers behind them can’t even build a website or draft a comprehensive privacy policy. That tells us they don’t put much emphasis on protecting and respecting their users’ privacy.

“This is creepy on a new AI-charged scale. One of the scariest things about AI relationship chatbots is the potential for manipulation of their users. What is to stop bad actors from creating chatbots designed to get to know their soulmates and then using that relationship to manipulate those people to do terrible things, embrace frightening ideologies, or harm themselves or others? This is why we desperately need more transparency and user control in these AI apps.”
