Twitter has been struggling for years to deal with abuse and harassment on its platform. After close scrutiny from regulators, the company now plans to expand its “Safety Mode” feature.
Safety Mode, launched last year, lets Twitter users temporarily block accounts that send them harmful or abusive tweets.
Twitter introduced the feature hoping it would curb online abuse and trolling. It is designed to flag accounts that use hateful remarks, or that send a large number of uninvited comments, and block them for seven days.
Safety Mode, which was initially tested on a small group of users, takes the burden of dealing with unwanted tweets off users: once enabled, it works automatically.
The feature can be turned on in Settings and assesses both the content of a tweet and its author. Accounts that the user follows or frequently interacts with aren’t auto-blocked.
Will the rollout of Safety Mode protect users?
Companies have been struggling in recent years to properly protect users from online bullying, harassment, and harmful content – and many have faced intense scrutiny. Twitter hopes that the expansion of this feature will give users more protection.
Like all social media platforms, Twitter relies on a combination of automated and human moderation but has been criticized for not acting swiftly enough on hate speech.
At the moment, half of the users in the UK, US, Canada, Australia, New Zealand, and Ireland have access to Safety Mode, and those users can now try the new feature – Proactive Safety Mode.
Proactive Safety Mode goes a step further: it identifies potentially harmful replies and then prompts the user to consider enabling the mode.
After the initial trial, Twitter said users gave feedback that they wanted help identifying potentially harmful accounts, and this inspired the additional safety features. The Safety Mode expansion will also include further improvements.