Interview with Susan Benesch (Part 2)

Question: Do you think a permanent suspension of any influential person’s social media account is the ultimate solution for tackling online hate speech?

Answer: I’ll reply as if you’d asked about dangerous speech, since hate speech is a vague and contested term, and some hate speech isn’t dangerous. The only ultimate solution for tackling online dangerous speech, or offline dangerous speech for that matter, is to convince people not to be interested in it. If people lose interest, the speech loses its power.  

Here it’s important to realize that when we refer to “freedom of speech” or expression, what we really mean isn’t the freedom merely to speak, say, in the shower or in the woods. It’s the opportunity to get someone else to hear or read you. I use the term “freedom of reach.”

It’s hard to persuade people to lose interest in what an influential person has to say, even when that person is spreading harmful lies, but it’s not impossible, especially if other influential people work at it. We need much more of that, as I argued in a recent op-ed. Meanwhile, an interim solution is to limit the freedom of reach of influential people by suspending their accounts. There are many other possibilities, such as downranking their content, which would limit its reach without taking down their accounts.
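To make the downranking option concrete, here is a minimal sketch in Python, assuming a hypothetical feed ranker with a numeric score per post; the field names and penalty factor are illustrative inventions, not any platform’s real system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float        # relevance/engagement score from the ranker
    flagged_dangerous: bool  # set by human review or a classifier

# Illustrative assumption: flagged posts keep 10% of their ranking score.
DOWNRANK_FACTOR = 0.1

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by score, applying a reach penalty to flagged content.

    The post and the account both stay up; only their reach shrinks.
    """
    def effective_score(p: Post) -> float:
        return p.base_score * (DOWNRANK_FACTOR if p.flagged_dangerous else 1.0)

    return sorted(posts, key=effective_score, reverse=True)
```

The design point is that the penalty is applied at ranking time rather than by deletion, which is what separates limiting freedom of reach from suspending an account.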

Question: What kind of measures can social media platforms take in order to tackle the menace of hate speech in today’s volatile world?

Answer: The most obvious and most-discussed response is to attempt to detect hate speech and take it down. To do this at scale, you’ve got to detect hate speech automatically, with software, and that’s very difficult since, as I mentioned above, hate speech is hard to define consistently. Even content that is clearly hateful is often expressed in idiosyncratic, subtle ways (like mocking the way another group of people talk), and it’s highly context-dependent. For example, it can be difficult to distinguish someone expressing racism from someone calling out someone else’s racism. Also, platforms operate in dozens of languages. All this makes me worry that detecting and taking down hate speech automatically would lead to overbroad censorship, so takedown decisions should be made, or at least reviewed, by people, and there should be some form of oversight of the platforms’ enforcement of their own rules, at scale. (This is not at all what Facebook’s new Oversight Board is doing: it reviews only a few dozen cases a year, while Facebook implements millions of decisions every week.)
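To see why that distinction defeats naive automation, here is a toy sketch in Python, a hypothetical keyword matcher with an invented blocklist term, not any platform’s actual system. It flags counter-speech just as readily as the attack it quotes, because both contain the same words:

```python
# Toy illustration only: a keyword matcher cannot tell an attack from
# counter-speech quoting that attack, because both share the same tokens.
# "vermin" stands in here for a dehumanizing term; the blocklist is invented.

BLOCKLIST = {"vermin"}

def naive_flag(text: str) -> bool:
    """Flag text if it contains any blocklisted token."""
    tokens = {t.strip(".,!?\"'").lower() for t in text.split()}
    return not tokens.isdisjoint(BLOCKLIST)

attack = "Those people are vermin."
counter_speech = 'He called them "vermin" and that is racist.'

print(naive_flag(attack))          # True: the attack is flagged
print(naive_flag(counter_speech))  # True: the callout is flagged too (overbroad)
```

Statistical classifiers are subtler than blocklists, but they inherit a version of the same problem: surface features alone rarely reveal who is attacking whom, which is why human review and oversight matter.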

Platforms can take many other measures: detecting and removing bots that produce hate speech; banning accounts that persistently spread hate speech; requiring users to verify their identities; attempting to reform users who post hate speech (with a variety of behavioral interventions); providing users with blocking and filtering tools so they don’t see hate speech or other objectionable content; limiting the reach of hate speech that the platform chooses not to take down entirely; prioritizing hate speech that seems to bring about specific kinds of harm that the platforms (and especially relevant groups of their users!) decide to prevent; making it easier for users to understand platform rules and to report hate speech; and many more.
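Of those measures, the blocking and filtering tools are the simplest to sketch. Below is a minimal, hypothetical user-side filter in Python, with invented names: each user keeps mute lists, and the client hides matching posts for that user alone rather than removing them platform-wide.

```python
from dataclasses import dataclass, field

@dataclass
class FilterPrefs:
    """Per-user preferences; nothing here affects what other users see."""
    muted_accounts: set[str] = field(default_factory=set)
    muted_terms: set[str] = field(default_factory=set)

def visible(author: str, text: str, prefs: FilterPrefs) -> bool:
    """Return True if this post should be shown to this particular user."""
    if author in prefs.muted_accounts:
        return False
    lowered = text.lower()
    return not any(term in lowered for term in prefs.muted_terms)
```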

Question: The Internet has also turned into an unsafe place for women. What kind of steps should social media companies take to address the issue?

Answer: Yes, the Internet is unsafe for many women in different ways, and responses must be tailored to each of those. For this large topic I’ll first point you to a brilliant book filled with important ideas: Danielle Citron’s Hate Crimes in Cyberspace. Danielle describes a variety of attacks on women and argues for better laws to protect them, including civil rights laws, since, as she argues persuasively, online attacks often violate civil rights.

It’s useful to distinguish between attacks on women as individuals (like nonconsensual publication of intimate images by their former partners) and attacks on women as members of groups (such as women journalists, who often face relentless harassment and threats because they dare, as women, to do certain kinds of work).

Regarding attacks on women as individuals, almost every U.S. state now has laws against cyberbullying and cyberstalking. Platforms should work with government to enforce such laws, and to crack down on perpetrators where laws or law enforcement are absent. This is bound to be inadequate, but it’s better than nothing. Regarding attacks on women as a group, platforms should treat gender as a protected category when it’s the basis for attacks, and pay special attention to groups of women who are frequently attacked, like political candidates, journalists, women of color, and the overlapping categories among those.

There are also civil society efforts that can’t take the place of law enforcement, but that can help make some progress toward preventing harassment by raising awareness. An interesting one is More Than Mean, a video about harassment of women journalists who write about sports. In the video, women sportswriters sit and listen while men who volunteered for the project read aloud harassing messages that the women have already received online. The male volunteers have not seen the messages before, and they become increasingly uncomfortable at their profanity and viciousness, while the women nod knowingly.
