Stanford researchers unveil Mastodon CSAM crisis

Onur Demirkol
Jul 25, 2023

According to recent research from Stanford's Internet Observatory, Mastodon, the decentralized network seen as a viable alternative to Twitter, is rife with child sexual abuse material (CSAM). Researchers scanned 325,000 posts on the network over just two days and found 112 instances of known CSAM, with the first match surfacing after only five minutes of searching.

According to The Verge, the Internet Observatory examined the 25 most popular Mastodon instances as part of its investigation. Researchers used PhotoDNA, a tool that matches images against a database of previously identified CSAM, alongside Google's SafeSearch API to flag explicit photos.
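The core idea behind hash-based detection tools like PhotoDNA is matching an image's fingerprint against a list of fingerprints of previously identified material. PhotoDNA's actual algorithm is proprietary and uses a perceptual hash that tolerates resizing and re-encoding; the minimal Python sketch below substitutes an exact cryptographic digest purely to illustrate the match-against-a-known-list workflow. All names and hash values here are hypothetical, not part of any real detection database.

```python
import hashlib

# Hypothetical set of digests of previously identified images
# (illustrative placeholder values only).
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
}

def image_digest(data: bytes) -> str:
    """Return a hex digest for the raw image bytes.

    Real systems use a perceptual hash so that near-duplicates still
    match; MD5 is used here only to keep the sketch self-contained.
    """
    return hashlib.md5(data).hexdigest()

def is_known_match(data: bytes) -> bool:
    """Check whether the image's digest appears in the known-hash set."""
    return image_digest(data) in KNOWN_HASHES

print(is_known_match(b"hello"))  # matches the placeholder entry: True
print(is_known_match(b"other"))  # no match: False
```

The design point is that the platform never needs to store or inspect the original prohibited images, only their fingerprints, which is what makes this approach practical for moderation at scale.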

The team's search yielded 554 pieces of content matching hashtags or keywords frequently used by online groups that trade child sexual abuse material, all of which Google SafeSearch classified as explicit with the "highest confidence."

CSAM-related hashtags were used 713 times

The top 20 CSAM-related hashtags appeared 713 times on posts with media across the Fediverse, and 1,217 text-only posts referred to "off-site CSAM trading or grooming of minors." The report describes the open posting of CSAM as "disturbingly prevalent."

The report cited one prolonged server outage as an incident brought on by CSAM posted to Mastodon. In a post about the outage, the server's sole administrator said they had been alerted to content containing CSAM, but noted that moderation is done in their spare time and can take up to a few days to complete; this is not a massive operation like Meta, with a global team of contractors, but a single person.

Even though the administrator said they had taken action against the offending content, the domain's host suspended it anyway, rendering the server unreachable until the administrator could get in touch with someone to have its listing restored. The administrator reports that after the problem was fixed, the registrar added the domain to a "false positive" list to prevent further takedowns. The researchers note, however, that "what caused the action was not a false positive."




  1. John G. said on July 25, 2023 at 9:24 pm

    Bad people everywhere all day long. There are not enough jails to lock them up!

    1. Karl said on July 26, 2023 at 3:09 am

      There are, but having enough jails won't matter if you don't know how to catch them, because laws prevent law enforcement from doing it the most effective way, as in several of the Western countries with big criminal gang problems. But elsewhere, results are showing quickly.

      In a recent documentary, a mother said that although her innocent son was arrested due to this controversial way of tackling the huge problem, both she and her son supported this way of solving the extremely widespread criminality; that's how fed up and tired they were of living in a lawless land where criminals ruled. In European countries that have a similar problem, with criminals shooting each other on the streets on a more or less daily basis, this would not be possible as it would breach "human rights" law. Turns out living in a country infested by big criminal gangs is part of our "human rights"; stability and peacefulness are now out of fashion in the West.

  2. TelV said on July 25, 2023 at 5:32 pm
  3. Anonymous said on July 25, 2023 at 4:34 pm

    I’d rather use Twitter than some half-baked hobby project run by unpaid volunteers. Twitter doesn’t have this problem.

    1. bruh said on July 25, 2023 at 5:42 pm

      Every platform has this problem to some extent; it's silly to claim otherwise. But I don’t see how it matters to normal people: there are terrible, depraved people living on the same planet as me, so do I need to go to Mars now? So what if there are depraved people using the same app/website as me? That doesn’t mean anything to me.

      Personally I lean towards using platforms/services/networks with less moderation and restrictions rather than more. There will always be negative consequences with giving people freedom.

      Those which are trying to track down and arrest offenders can do what they are already doing, posing as potential offenders, infiltrating networks, etc, so in that regard there is hardly a difference.

      This is just being used to shut down competitors, any platform that has been outside of the mainstream has had the worst accusations possible thrown at it. The biggest most popular platforms also have the same dang problems, no amount of moderators in India and AI interpretation will ever stamp out all of it, simply not feasible when you’re talking about millions or hundreds of millions of users.

      Heck, I’ve seen some depraved, unbelievable and probably highly illegal materials in some “communities” I have been in, but you either just avoid/ignore/leave. Maybe it’s unpopular but especially online I prefer to have “dangerous freedom” than “sanitised tyranny”

      1. penny said on July 26, 2023 at 12:38 pm

        i mean thats fine but at what point do you start getting uncomfortable. i dont care either way i dont use any of these sites and i agree with you in a sense. but if my apt complex was known as the place child r-pist hung out in then im looking to gtfo

      2. bruh said on July 27, 2023 at 12:29 am

        Of course, freedom of association should always mean that you can choose who you have to put up with. Unlike a place you physically live in, online it’s much easier to a) either try to get inappropriate things/people removed or b) set up or join something different to avoid it.

        If your apartment complex (social media space) was a “well known place” for illegal activities, there would be nothing stopping you from getting police departments involved, or even international agencies, or just charities that would know who to contact.

        I am not saying that I would put up with it either, but I have been to “places” online where occasionally things that are in very bad taste, or illegal would be posted. These were not places where it was common practice or where such things were appreciated by anyone, so it was always dealt with one way or another, it seems that reasonable people are able to “police” themselves in a sense.

        I guess I am more arguing for social media with less “oversight” from overlords/authorities which will severely hamper your ability to express yourself (as that’s what i care about).

        If there is organised crime going on, I think that can always be handled, I am not against it being dealt with – but the big thing is, on the super-popular platforms where “there are strong rules, protections, and automated scanning” this stuff still goes on because people just get smarter about evading the systems and staying under the radar. Super tech savvy criminals who have had to adapt and learn to stay afloat are probably a bigger problem and are much harder to catch. So at the end of the day, we can have platforms with less censorship and moderation, and it won’t end up being any worse (than what we have now).

        Sorry for the long message but I hope that makes sense. I am a little sleep deprived so maybe it wasn’t very logical ¬_¬

      3. Karl said on July 25, 2023 at 6:29 pm

        Hm yeah. I don’t feel like making a long reply, so I will just say +1 and that I agree with most of your comment :).
