Freedom of speech has traditionally been a matter of governments and human rights. But more and more companies are providing platforms where anyone can contribute some form of speech, typically text. And those companies are finding that they face many of the same issues governments do: how to balance users' freedom to express themselves against the risk that they'll post problematic content.
“Problematic” has various definitions. In some cases, the content is genuinely dangerous, like incitement to violence or false medical advice. In others, companies may simply not want to be associated with expressions of racism, sexism, or other forms of prejudice. But can companies do anything if people use their services to broadcast content the companies don’t approve of?
A new study answers that question with a clear “yes.” Researchers examined Reddit’s fight against hate speech, which led the site to ban a variety of subreddits in 2015. The analysis suggests that regular users of the banned subreddits toned down their language as they moved to other areas of the site, while a number of users who wanted to keep sharing offensive opinions simply left for other services, making them someone else’s problem.