Speaker Spotlight: Dr Stefanie Ullmann

In her Festival talk, Combatting harmful content online: the potential of Counterspeech, Dr Stefanie Ullmann of the ‘Giving Voice to Digital Democracies’ project at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), University of Cambridge, explores how ‘Counterspeech’ could offer a better way of dealing with harmful content online.

Dr Ullmann’s talk is online 4:00pm-5:00pm on Monday 4 April.


Cambridge Festival: What constitutes harmful or dangerous content online? 

Stefanie Ullmann: Overall, harmful or dangerous content can be defined as any kind of content – be it text, images or videos – that has the potential to incite violence or aggression against another person or group of people.

CF: Is there a difference between what is illegal and what is morally and ethically unacceptable? 

SU: While we may personally find some content morally or ethically unacceptable, it often remains within legal bounds. This also depends on national context: in Germany, for instance, Holocaust denial is illegal and is prosecuted when discovered online. Finally, we should bear in mind that social media companies have their own guidelines and codes of conduct, according to which they may delete content.

CF: What are the current approaches to tackling these issues? And what are the concerns about using these types of approaches? 

SU: Currently, most social media platforms take a reactive approach to harmful content. Once a post is reported, it is assessed against the company’s guidelines and may be deleted; in extreme cases, the user may be banned. This approach is problematic because it doesn’t protect users from the initial harm of encountering the content. Additionally, the majority of content moderation is still done by humans, which places enormous psychological pressure, and often real harm, on the moderators.

CF: You suggest Counterspeech could be an effective way to deal with the issue of harmful or dangerous speech online. Can you explain what that is? 

SU: Counterspeech is based on the idea that more, and better, speech is a more effective response to hateful or dangerous speech than censorship. Counterspeech can take many forms, but studies have shown that an empathic, non-hostile approach is the most successful. Pointing out factual inaccuracies and the potential harm of a post is more likely to raise awareness and change people’s viewpoints than simply condemning and deleting it. Counterspeech has proved particularly useful in preventing online bystanders from becoming perpetrators themselves.

CF: Can anyone use this form of controlling online harms? And what are the risks and benefits of using Counterspeech? 

SU: Anyone can use Counterspeech when they observe harmful behaviour online; the Dangerous Speech Project provides very useful resources on how to do so. However, while rewarding, counterspeaking can also be laborious and psychologically challenging. Professional counterspeakers are often experts in conflict mitigation and may possess coping skills that the individual user does not. Finally, it is important to remember that counterspeakers may themselves become targets of hateful attacks, and that protecting one’s own physical and mental health should be the priority.

CF: How can automation help?

SU: Precisely because counterspeaking can be a challenging act, we suggest that a (semi-)automated approach could be useful. Users may observe harmful behaviour online and want to respond, but be unsure which approach would be best. A Counterspeech prompter could offer suggestions based on the type of content witnessed (e.g. racism requires a different type of Counterspeech than homophobia). In this way, users may feel more confident in their use of Counterspeech.
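The prompter idea described above can be illustrated with a minimal sketch. This is not the project's actual system: the category names, templates, and the `suggest_counterspeech` function are all hypothetical, and it assumes an upstream classifier has already labelled the harmful content by type.

```python
# Hypothetical sketch of a Counterspeech "prompter": given a category of
# harmful content (assumed to come from an upstream classifier), it
# suggests empathic, non-hostile response strategies. All names and
# templates here are illustrative assumptions, not the project's system.

SUGGESTIONS = {
    "racism": [
        "Point out that the post generalises unfairly about a whole group.",
        "Offer a factual correction, citing a credible source.",
    ],
    "homophobia": [
        "Appeal to empathy by noting the real people harmed by the remark.",
        "Ask a calm question that invites the poster to reconsider.",
    ],
}

# Fallback advice when the content type is unrecognised.
DEFAULT = ["Respond calmly, avoid hostility, and flag factual errors."]

def suggest_counterspeech(category: str) -> list[str]:
    """Return non-hostile response strategies for a content category."""
    return SUGGESTIONS.get(category.lower(), DEFAULT)
```

A semi-automated tool along these lines would present the suggestions to the user, who chooses and adapts a response, keeping a human in the loop rather than posting automatically.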

CF: There has been quite a lot in the news about the Government’s Online Harms Bill. Do you think it goes far enough in protecting vulnerable people online? And does it put enough onus on big tech and social media companies?

SU: The Online Harms Bill is an important and much-needed step towards better protecting users online, especially those who are particularly vulnerable. In this context, I do appreciate the special emphasis on protecting children. However, I am concerned that the same focus is not placed on other vulnerable groups, such as ethnic minorities and people living with disabilities, including individuals struggling with their mental health and even suicidal thoughts. I also hope for a more detailed outline of the concrete consequences tech companies will face if they fail to comply with the regulations. More pressure needs to be put on social media companies to reduce the damage done not only to their users but also to the content moderators they employ through sub-contractors.