Are AI relationship chatbots safe for consumers?

AI relationship chatbots claim to help people build connections – whether platonic, romantic, or professional. Although these apps may initially appear innocuous, a thorough investigation by Mozilla’s *Privacy Not Included guide has revealed concerns about users’ privacy and safety on these platforms.

Mozilla’s analysis covered 11 leading relationship-oriented chatbots and found a glaring lack of adequate privacy, security, and safety measures for users.

The findings will be featured in the *Privacy Not Included 2024 Valentine’s Day buyer’s guide, which aims to raise consumers’ awareness of the inherent risks of these services.

One alarming discovery came when Mozilla tested the Romantic AI app for just one minute and uncovered over 24,000 data trackers. These trackers enable the app to collect users’ data and share it with marketing firms, advertisers, and various social media platforms.

Furthermore, Mozilla identified a significant security loophole: 10 out of the 11 chatbots assessed failed to enforce strong password requirements, rendering users’ accounts more susceptible to exploitation by hackers or scammers.

Of equal concern is how little control these platforms give consumers over their personal data. This absence of oversight grants chatbots unchecked authority to exploit and manipulate users’ personal information, exposing them to numerous privacy and security risks.

Jen Caltrider, director of *Privacy Not Included, said: “Today, we’re in the wild west of AI relationship chatbots. Their growth is exploding and the amount of personal information they need to pull from you to build romances, friendships, and sexy interactions is enormous. And yet, we have little insight into how these AI relationship models work.

“Users have almost zero control over them. And the app developers behind them can’t even build a website or draft a comprehensive privacy policy. That tells us they don’t put much emphasis on protecting and respecting their users’ privacy.

“This is creepy on a new AI-charged scale. One of the scariest things about AI relationship chatbots is the potential for manipulation of their users. What is to stop bad actors from creating chatbots designed to get to know their soulmates and then using that relationship to manipulate those people to do terrible things, embrace frightening ideologies, or harm themselves or others? This is why we desperately need more transparency and user control in these AI apps.”

Liz Daunton
