Trying to Think Through Content Warnings
Content warnings have become a common feature on the internet. This is my attempt to think them through.
I usually write about topics that I'm more or less knowledgeable about, but this time I will make an exception. I don't have much previous knowledge or particularly strong opinions about content warnings or trigger warnings, but I'll use this post to try to make sense of them.[1] What prompted this text was an exchange I recently had on Mastodon, where a user had a strong stance on the issue. They might not be wrong, but I'm not sure they are right either – I'll get back to it a bit later.
The original context for trigger warnings is outside – or rather before – the internet. Psychologists studying veterans of World War I and II recognized something that was later called post-traumatic stress disorder. Certain stimuli would work as triggers for re-experiencing the trauma because they resembled the original trauma. In psychological literature these stimuli had multiple names, "trauma trigger" among them.
Another similar concept that predates the internet is the idea of providing content warnings. Many types of content warnings have become established, usually relating to particular media. There are graphic content warnings, rating systems and warnings about explicit lyrics that are meant to warn the potential consumer about the content – and to prevent children from consuming media unsuitable for them.
Trauma triggers and content warnings have slightly different purposes, but both exist on the premise that there are people vulnerable to certain stimuli. There seems to be one difference between them: managing trauma triggers is seen mostly as an individual, psychological problem, with counseling and medication being the two main treatments. In comparison, content warnings are applied when some stimulus is seen as inherently harmful, usually to children. Built into this is the belief that children (but perhaps also other groups of people) are inherently fragile and in need of protection.
These two uses seem to have merged when publications moved online. At first, warnings about content were more general and used varying language, but around 2009 publications and online spaces, especially in feminist circles, apparently adopted the term "trigger warning" for this purpose. For example, the feminist blog Feministing adopted trigger warnings based on reader requests. As Lori Adelman, the site's executive director, puts it:
Feministing respects such requests [from comments] by including trigger warnings on posts when we cover issues related to sexual assault, rape, or other violence, or publish posts with graphic content related to these themes.
Another feminist blog, Shakesville, moved from trigger warnings to "content notes" "in 2011 or 2012 to acknowledge that even if somebody isn't triggered, there still might be something that they don't want to read". Currently, they seem to list content warnings either at the beginning of an article or at the beginning of a paragraph, for things like nativism, disablist language, police brutality, racism, death, harassment, threats, misogyny, and child abuse. Shakesville's rationale for using content warnings seems to have shifted from protecting vulnerable people to warning all readers about topics they might not be willing to engage with.
Some social media services also enforce limitations on specific content types. For example, Twitter has a setting for "sensitive content", which could be read as a content warning system, but in practice it only seems to apply to "adult content", meaning porn. This use is much closer to the earlier use of content warnings in relation to media, with warnings about nudity and explicit language. The use of the term "adult content" is probably a good hint as to who this is supposed to protect (children).
Looking back at the examples, it seems clear that there is a rather strong consensus that at least some people need to be protected from some content – protecting children from porn and violence doesn't seem like a very controversial stance. I would say we can pretty safely rule out the stance that content warnings should not exist in any context.
The opposite stance would be that (almost) everything should be behind a content warning. That also happens to be the stance I encountered in the online discussion I mentioned at the beginning. We can't know who engages with the media we produce (in the broad sense, think tweets and toots), we can't know what content bothers them, and so it makes sense to put everything behind a content warning.
This seems sensible, but to me it ignores crucial differences. Both Feministing and Shakesville use trigger/content warnings for specific types of things, but not all things. Shakesville warns about police brutality and death, but not about swearing, which it does liberally and which to me could be the kind of thing "that [some people] don't want to read" (to quote the Shakesville writer McEwan).
This is not a criticism of Shakesville – on the contrary. I think McEwan knows her audience and what they might be bothered by, so she tailors her content warnings to that audience. Her readers might be bothered by "nativism", while others might have trouble recognizing that concept at all. But crucially, while her primary readership is probably active US feminists, her posts can still be read by anybody. She can't know what bothers all possible people visiting her site, so trying to accommodate all those potential visitors is at least very difficult, if not impossible. Context is key to interpretation, so it makes sense to me that things like content warnings should be context-sensitive. They also first became common in a very specific context, feminist blogs, which makes sense to me: these blogs write about topics that might bother people (rape, misogyny) but that still need to be discussed.
Taking context into account becomes more difficult on online services like Twitter or Mastodon, where context is much more ephemeral. Tweets can be read by anyone, so should we be responsible for taking into account absolutely everyone? Perhaps. Another possibility is to view these platforms more like conversations. I'm responsible for what I say to the people I'm talking with, and less responsible for how people overhearing me interpret the conversation. That doesn't mean I'm completely free of responsibility: considering what we say in public and how it might affect others is certainly an ethical responsibility.
Managing context is exceptionally hard on Twitter, where a well-timed retweet can turn any personal musing into a public statement, stripping it of all original context. Context on Twitter is, at best, leaky.
Mastodon might be slightly better. It has instances, which are communities of varying sizes sharing the same platform. The primary audience on Mastodon is the other people on the same instance, and different instances might have very different expectations about what is acceptable. Some instances are more explicit about this than others, with elaborate community guidelines about acceptable content. I see this as a good thing, since it lets people choosing an instance figure out what kind of community to expect.
Mastodon also includes the possibility of posting to followers only, which means that only people following you see the post. This could be seen as one way of limiting your audience, because people following you have presumably chosen to do so and can also choose not to do so in the future. It shifts at least part of the responsibility to the receiver, because they are more in control of what they see.
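For the technically curious, the content warning and the audience restriction are just two fields on the same post. The sketch below is a minimal, illustrative example using Mastodon's REST API (POST /api/v1/statuses); the instance URL and access token are placeholders, and the "private" visibility value corresponds to the followers-only option mentioned above (other values include "public", "unlisted", and "direct").

    # Minimal sketch: post a Mastodon status with a content warning.
    # Assumes the standard Mastodon REST API; the instance URL and
    # token below are placeholders, not real credentials.
    import requests

    INSTANCE = "https://mastodon.example"   # hypothetical instance
    TOKEN = "YOUR_ACCESS_TOKEN"             # a token with write scope

    response = requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "status": "The longer discussion goes here...",
            # spoiler_text is what readers see before expanding the post
            "spoiler_text": "CW: difficult topic",
            # "private" means followers-only
            "visibility": "private",
        },
    )
    response.raise_for_status()
    print(response.json()["url"])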
Where does this leave us with content warnings? Ultimately, I think content warnings are useful, especially in the contexts where they were originally conceived. Giving traumatized people a warning before they encounter traumatizing material gives them a chance to decide whether to engage with that content. I'm slightly more sceptical of the stance that it would be helpful to add content warnings to absolutely everything.[2] There are cases where the potential for harm is so low that a warning doesn't seem very useful. This leaves a lot of middle ground where it is a judgement call whether a content warning is appropriate – Shakesville chose to add warnings for disablist language, but not for swearing.
There is an important factor to take into account when evaluating whether my personal views are a good guide to the usefulness of content warnings: privilege. Many things that don't bother me might be harmful to others in less privileged or simply different positions, and it can be difficult to see that without first-hand experience of those positions. There are some ways to make that judgement easier, like seeing what other people use content warnings for. This can sometimes be misleading, since the communities that inhabit online spaces have very different views of what is acceptable. It seems like a learning process, hopefully one that becomes easier with experience.
Did I miss a perspective that I should take into account? I wouldn't mind being more informed about this topic, so feel free to comment if you think I missed something.
Edit: I received some excellent comments on this post, both about misleading phrasing and about the content in general. I have updated the post to fix at least some of those issues.
[1] I remember seeing a blog where the writer listed the epistemic certainty for all the posts they wrote. My epistemic certainty for this post would be rather low. Unfortunately, I can't remember what the blog was called, so I can't link to it.
[2] There is another argument that adding content warnings to social media like Mastodon would make it more usable, because you can more quickly scroll through posts and choose which ones to read.