Even though Twitter’s terms of service explicitly ban posts glorifying self-harm and media depicting “visible wounds,” independent researchers report that Twitter too often appears to look the other way on self-harm content. Researchers from the Network Contagion Research Institute (NCRI) estimate there are “certainly” thousands, and possibly “hundreds of thousands,” of users regularly violating these terms without any enforcement by Twitter. The result of Twitter’s alleged inaction: since October, posts using self-harm hashtags have seen “prolific growth.”
According to reports, Twitter was publicly alerted to issues with self-harm content moderation as early as last October. That’s when 5Rights, a UK charity dedicated to children’s digital rights, reported to a UK regulator that there was a major problem with Twitter’s algorithmic recommendation system. 5Rights’ research found that Twitter’s algorithm “was steering accounts with child-aged avatars searching the words ‘self-harm’ to Twitter users who were sharing photographs and videos of cutting themselves.”
In October, Twitter told the Financial Times that “It is against the Twitter rules to promote, glorify, or encourage suicide and self-harm. Our number-one priority is the safety of the people who use our service. If tweets are in violation of our rules on suicide and self-harm and glorification of violence, we take decisive and appropriate enforcement action.”