The fediverse hasn't yet encountered an untrustworthy actor with Meta's scale and resources. The closest to date has been Gab, and maybe the approach of mass defederation that worked very effectively with Gab would have also worked with Threads. Instances that federate with Meta aren't using that tool this time, and instances that reject Meta are, so it's potentially an interesting natural experiment.
Of course that's not the only possible tool, and by itself it certainly isn't enough. Even putting Meta's maliciousness aside, their arrival here means things are going to grow by orders of magnitude, so tools that have been effective to date almost certainly won't be sufficient in the post-Threads fediverse(s). As Evan points out, "Big Fedi" advocates assume that automated moderation technology will solve content moderation problems. I don't think it's likely to work particularly well on fedi (it doesn't work particularly well anywhere else, and the algorithms are anti-LGBTQIA2S+ as well as racist).
A different approach, which seems more promising to me, is to start with what works well today on well-moderated fedi instances and look at what it will take to keep it working in this new environment. So I think a lot of instances that want to be relatively safe and friendly to LGBTQIA2S+ people are likely to move in the directions @smallpatatas@mstdn.patatas.ca describes, whether or not they federate with Threads.
- Well-moderated instances today rely on instance-blocking of known bad actors. Even if the hate speech and harassment coming directly from Threads can be managed by existing tools, with orders of magnitude more instances than today's 20,000 (and new ones popping up all the time) it's hard for me to see how today's blocklist-based approaches will keep working. Consent-based federation has its own challenges, but "Everybody (including nazis and terfs) can federate and send messages to anybody on the instance until they're told they can't" is always going to be higher-risk than "Everybody (including nazis and terfs) has to get permission to federate, and to message, tag, or reply to people who aren't following them."
- Cluster-level visibility is an extension of local-only posts: visible to (some) people you don't have a follow relationship with, but not public. Any information that's published as an unprotected web page is available to everybody (including nazis and terfs and Meta and Google), and there is a lot of stuff that I would rather not share with everybody (including ....). Of course, I also want to be able to have discussions with people on my instance, and more broadly with people who aren't on my instance -- that's the potential of federation. Today "public" and "unlisted" are the only options for cross-instance discussions, and a lot of people don't even have access to local-only discussions, so most stuff is completely public. But that's an artifact of today's fedi functionality, and I think a lot of people would prefer an environment where most stuff *isn't* completely public.
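Both directions above boil down to replacing default-open behavior with default-closed behavior. A minimal sketch of that distinction, with all instance names, function names, and the visibility model being hypothetical illustrations (not any real ActivityPub server's API):

```python
# Hypothetical sketch of the two "default-closed" mechanisms described above:
# consent-based federation and non-public visibility scopes. All names here
# are illustrative assumptions, not real software behavior.
from enum import Enum

# --- Federation policy ---
BLOCKLIST = {"known-bad.example"}                  # known bad actors
ALLOWLIST = {"home.example", "friendly.example"}   # explicitly approved peers

def open_federation_allows(instance: str) -> bool:
    """Default-open: any instance can deliver until it's blocked."""
    return instance not in BLOCKLIST

def consent_based_allows(instance: str) -> bool:
    """Default-closed: only pre-approved instances can deliver."""
    return instance in ALLOWLIST

# --- Visibility scopes ---
class Visibility(Enum):
    PUBLIC = "public"        # fetchable by anyone, including crawlers
    UNLISTED = "unlisted"    # public URL, just not shown in timelines
    LOCAL_ONLY = "local"     # visible only on the author's instance
    CLUSTER = "cluster"      # visible across a trusted set of instances

TRUSTED_CLUSTER = {"home.example", "friendly.example"}

def visible_to(scope: Visibility, author_instance: str,
               viewer_instance: str) -> bool:
    if scope in (Visibility.PUBLIC, Visibility.UNLISTED):
        return True
    if scope is Visibility.LOCAL_ONLY:
        return viewer_instance == author_instance
    if scope is Visibility.CLUSTER:
        return viewer_instance in TRUSTED_CLUSTER
    return False

# A brand-new, unknown instance slips past a blocklist but not an allowlist;
# a cluster-scoped post reaches trusted instances without being public.
assert open_federation_allows("brand-new.example")
assert not consent_based_allows("brand-new.example")
assert visible_to(Visibility.CLUSTER, "home.example", "friendly.example")
assert not visible_to(Visibility.CLUSTER, "home.example", "random.example")
```

The key asymmetry: a blocklist can only name instances somebody has already identified as bad, while an allowlist (and a cluster scope) fails safe for the unknown ones.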
@skobkin@lor.sh @FinchHaven@sfba.social @tokyo_0@mas.to
Jon (jdp23@blahaj.zone)'s status on Sunday, 07-Jan-2024 15:48:54 JST