The mainstreamification of extreme content

From antisemitic memes to drunk women filmed without consent – what happens when the platforms stop moderating

Editor’s note: Back by popular demand, this week’s newsletter is guest-written by Sarah, a data analyst who studied digital culture and online research in the Netherlands.

This is Part 3 of a 4-part series on digital identities, exploring how algorithms are quietly shaping who we are, what we see, and how we present ourselves. You can read last week’s essay here.

In preparation for this week’s newsletter, I asked some friends: What’s the craziest thing you’ve seen on your feed recently?

One friend said her Explore tab had started serving her videos of drunk women stumbling on the street, filmed without their consent. 🤢

Another’s feed was recently flooded with antisemitic memes. ✡️

Both of them shrugged it off. They found the targeted content disturbing, so they deliberately stopped clicking on it, knowing eventually it would stop showing up.

But a third friend (Jodie) had a different reaction. 

Over the past few weeks, Jodie has been arguing with her mom, whose views on trans issues have become increasingly hard to ignore. Jodie’s mom spends hours on X, and Jodie suspects the platform has been steadily feeding her mom ever more extreme political content. She’s deeply unsettled by how far she feels her mom has drifted.

Now, all that echoes in Jodie’s head is “grok it”. 😵‍💫

The “Black Jewish gay disabled dwarf with bad eyes” meme, which went viral recently | Source: X

Mapping the rabbit hole

By now you’ve probably watched Adolescence, the new Netflix show about a teenage boy who gets radicalized online. It taps into a growing fear: that young people are being exposed to more toxic content online, and no one really knows how to stop it.

It hit such a nerve that Keir Starmer wants it shown in every UK school.

Not everyone who scrolls ends up radicalized, but in the post-content-moderation era, pretty much anyone online regularly encounters extreme content. When we polled our audience, every respondent said they’re served extreme content at least sometimes, and roughly 1 in 3 said they see it “all the time”.

Polling our audience

Algorithms are very good at figuring out what feeds our anxieties and insecurities – and then nudging us deeper into that emotional state. Over time, the extreme becomes familiar. Then it becomes normal.

Because each feed is highly personalized and private, it’s hard to see what’s actually going on. So researchers have started creating throwaway accounts to track where the algorithm takes them. Here’s what they uncovered:

  • Feeling low? Watching a few self-help videos can quickly open the floodgates to depression and suicide-related content. 😢 

  • Feeling self-conscious? Interacting with ‘male confidence’ or ‘alpha’ posts often leads to a stream of misogynistic content and incel propaganda. 😡

  • Feeling skeptical? Engaging with conspiracy-adjacent content can rapidly spiral into extremist material, according to a study analyzing how Reddit users progressed through conspiracy forums. 👽

Percentage of misogynistic content shown over time to fake accounts | Source: ASCL
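For the data-curious, here’s a deliberately over-simplified Python sketch of that feedback loop. Everything in it – the content categories, the dwell times, the update rule – is invented for illustration, not taken from any real platform. The only point it makes is that a recommender rewarding watch time alone will, post by post, hand a bigger share of the feed to whatever holds attention longest.

```python
import random

# Toy model of an engagement-driven feed. All categories, dwell times,
# and weights below are made up purely for illustration.
DWELL_TIME = {"neutral": 3, "self-help": 8, "extreme": 15}  # seconds watched

def simulate_feed(steps=200, seed=42):
    rng = random.Random(seed)
    # Start with a feed that is overwhelmingly neutral.
    weights = {"neutral": 0.80, "self-help": 0.15, "extreme": 0.05}
    shown = []
    for _ in range(steps):
        cats = list(weights)
        post = rng.choices(cats, weights=[weights[c] for c in cats])[0]
        shown.append(post)
        # The only signal is watch time: whatever held attention longest
        # gets a slightly bigger share of the next batch of recommendations.
        weights[post] += DWELL_TIME[post] * 0.01
        total = sum(weights.values())
        weights = {c: w / total for c, w in weights.items()}
    return shown, weights

if __name__ == "__main__":
    shown, final = simulate_feed()
    print("Final feed mix:", {c: round(w, 2) for c, w in final.items()})
    print("Extreme posts in the last 20:", shown[-20:].count("extreme"))
```

Run it a few times with different seeds and you’ll usually see the “extreme” share climb – not because anything in the code prefers it, but because the longest-watched category compounds fastest.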

Extreme content has always existed… but now it’s mainstream

This kind of content has always been online. But it used to be buried in fringe forums or obscure links. If you wanted it, you had to look for it. Now, it shows up uninvited.

Source: X

One key reason is that content moderation is slipping. Platforms used to have large teams working behind the scenes to combat the constant onslaught of violent, pornographic, and illegal content. But recently, many have scaled back those efforts.

In January, Meta made a big announcement about how it’s pivoting to ‘more speech and fewer mistakes’, while YouTube quietly removed ‘gender identity’ from its hate speech policies, raising concerns about protections for trans and non-binary users. By making these changes, the platforms are shifting the norms of what’s acceptable to see on our feeds. 🫣

Ben Whitelaw, founder and editor of Everything in Moderation and co-host of the Ctrl-Alt-Speech podcast (and friend of Doomscrollers), shared his perspective:

Memes are the new propaganda machine

Extreme content is increasingly disguised as memes or parodies – ironic, half-joking, just edgy enough to share. The algorithm notes the post’s strong engagement and starts pushing it out further. Soon enough, it’s on your feed. 

That’s what makes this type of content so powerful: the same post that radicalizes us can also make us laugh. And if it makes us laugh, we’re more likely to share it, even when it promotes misogyny, hate, or conspiracy. 📣

All of this might sound a bit dystopian – because it is. Our algorithms don’t have morals. They have one goal: keep us scrolling.

And in a world where young people are increasingly forming their identities online, what we scroll through at 2am can quietly shift how we see the world… and ourselves. 👀

Polling our audience

What’s next?

Next time, we’ll wrap up this 4-part series by asking a bigger question: Where is this going?

If you didn’t see Doomscrollers last week, it’s because we’re testing a bi-weekly format to give ourselves more space to experiment. If you’d rather it stay weekly, feel free to drop me a message. Otherwise, see you back in your inbox on April 29.

So what?

💡 For strategists & researchers
  • TLDR… With moderation teams shrinking, platforms are exposing users to extreme content more often and more openly.

  • Ask yourself… If platforms won’t self-regulate, what new tools or models can protect users from harm?

  • Check this out… Everything in Moderation by the brilliant Ben Whitelaw.

💭 For self-reflective readers
  • TLDR… You might think you’re immune to radicalization, but your feed often subtly shapes your worldview.

  • Ask yourself… How has your feed changed what you believe, fear, or find funny?

  • Check this out… Adolescence, the Netflix show.

– Liat & Sarah

Doomscroll of the day
