@vikxin @owashe @BeAware I still don't fully understand how they intend to handle moderation at scale. The framework they've built is called "composable moderation" (described here: https://bsky.social/about/blog/03-12-2024-stackable-moderation), and the idea is that there are multiple layers: the baseline is enforced by Bluesky's in-house moderation team, but you can also subscribe to third-party labeler services that filter content in your feed based on whether it's tagged with specific labels. They even open-sourced their admin tool, Ozone, ostensibly to allow others to run moderation services too... but I can't wrap my brain around how that would work in practice, or how permissions work to even allow moderation actions to happen in the first place.

I think Ozone works in concert with a labeler service, so if you run a labeler you can also take moderation actions? But it's clear as mud to me how third-party moderation coexists with Bluesky (the company)'s moderation services on the network. Like, if a mod for a labeler says a post is bad, does it just get hidden for the users who subscribe to that labeler, or does it disappear from the network entirely? (I'm guessing it only gets hidden for subscribers?) And if I'm wrong and they actually intend to moderate everything that goes through Bluesky (the company)'s relay and AppViews in-house... they're going to have to grow a sizeable admin team to deal with the scale.
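My current mental model, for what it's worth, looks something like the sketch below. All the names and types here are made up by me for illustration; this is NOT the actual atproto API, just how I imagine label-based filtering working on the client/AppView side: a labeler publishes labels against a post, the post itself stays on the network, and only users who have subscribed to that labeler have the label applied to their view.

```typescript
// Rough sketch of my mental model of "composable moderation".
// Everything here is hypothetical naming, not real atproto/Ozone code.

// A label is roughly (labeler, target, value), published by a labeler service.
interface Label {
  labelerDid: string; // who issued the label, e.g. "did:plc:somelabeler"
  targetUri: string;  // the post (or account) being labeled
  value: string;      // e.g. "spam", "graphic-media"
}

interface Post {
  uri: string;
  text: string;
}

// Each user opts into zero or more labelers and decides per label value
// whether to hide, warn, or ignore.
type LabelAction = "hide" | "warn" | "ignore";

interface UserModerationPrefs {
  subscribedLabelers: Set<string>;   // labeler DIDs the user opted into
  actions: Map<string, LabelAction>; // label value -> what to do with it
}

// Key point (as I understand it): a third-party label only affects users who
// subscribe to that labeler. The post itself is untouched on the network.
function resolveAction(
  post: Post,
  labels: Label[],
  prefs: UserModerationPrefs
): LabelAction {
  let action: LabelAction = "ignore";
  for (const label of labels) {
    if (label.targetUri !== post.uri) continue;
    if (!prefs.subscribedLabelers.has(label.labelerDid)) continue; // not subscribed -> no effect
    const configured = prefs.actions.get(label.value) ?? "ignore";
    if (configured === "hide") return "hide"; // hide wins over warn
    if (configured === "warn") action = "warn";
  }
  return action;
}

// Example: the same post is hidden for Alice (subscribed) but untouched for Bob.
const post: Post = { uri: "at://did:plc:someone/app.bsky.feed.post/abc", text: "hello" };
const labels: Label[] = [
  { labelerDid: "did:plc:somelabeler", targetUri: post.uri, value: "spam" },
];
const alice: UserModerationPrefs = {
  subscribedLabelers: new Set(["did:plc:somelabeler"]),
  actions: new Map([["spam", "hide"]]),
};
const bob: UserModerationPrefs = {
  subscribedLabelers: new Set(),
  actions: new Map([["spam", "hide"]]),
};
console.log(resolveAction(post, labels, alice)); // "hide"
console.log(resolveAction(post, labels, bob));   // "ignore"
```

If that's right, then Bluesky's in-house team is effectively just a labeler everyone gets by default (plus whatever hard takedowns they do at the infrastructure level), and third-party labelers only ever change what their own subscribers see. But that's a guess, not something I've confirmed.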
If you manage to figure it out, can you explain it to me please? :dragn_woozy: