Twitter's new labels limit visibility of "hateful" tweets
Several media organizations have recently criticized Twitter's content moderation practices, prompting the Elon Musk-led platform to introduce new labels. The goal is to better identify tweets that could be considered "hateful."
Twitter has announced that it will now add visible labels to tweets that violate its policies, making it clear to users when a tweet's visibility has been restricted. This marks a new approach to policy enforcement by the company, which previously relied on a binary "leave up versus take down" method of content moderation. By implementing visibility filtering, Twitter can take a more nuanced approach.
According to a tweet from the company, the labels will only be applied to individual tweets and will not impact a user's account as a whole. Twitter also shared a few examples of what the labels will look like in images accompanying the tweet.
These actions will be taken at a tweet level only and will not affect a user’s account. Restricting the reach of Tweets helps reduce binary “leave up versus take down” content moderation decisions and supports our freedom of speech vs freedom of reach approach.
— Twitter Safety (@TwitterSafety) April 17, 2023
There will be two types of labels
Twitter will use two types of labels, called Author and Viewer labels, which will inform both the tweet's author and other users on the platform which policy the tweet may have violated. For now, these labels will only be applied to tweets that potentially violate Twitter's Hateful Conduct policy, but the company plans to expand their use to other policy areas in the near future.
Tweets that receive these labels will be made less visible on the platform, and no ads will be displayed next to the labeled content. Twitter says the change is intended to make enforcement actions more proportional and transparent for all users.
If Twitter labels a tweet for a policy violation, the author will be able to submit feedback if they believe their tweet was incorrectly limited. However, submitting feedback does not guarantee a response from Twitter or a restoration of the tweet's reach. Twitter says it is working to address these concerns.
A recent report from the Institute for Strategic Dialogue (ISD) found that anti-Semitic tweets doubled between June 2022 and February 2023. The report also noted that while removals of such content have increased, they have not kept pace with the surge.
In a recent interview with the BBC, Elon Musk was asked about the prevalence of hate speech and misinformation on Twitter. He responded, "I don't," and asked for specific examples of hateful content.
Twitter's introduction of new labels to identify potentially "hateful" tweets is a clear response to criticism of the platform's content moderation policies. As these labels expand to other policy areas in the coming months, it will be interesting to see how effective they are in making Twitter's enforcement actions more transparent and proportional.