Thrilled to see @Mastodon's updated code of conduct, fostering respect and safety for all.
The new guidelines, including bans on deadnaming and misgendering, clear rules on explicit content, and mandated credit for creators, are very welcome changes.
The introduction of AI usage disclosure rules is a forward-thinking decision, and although monitoring tools are evolving, this rule is pivotal in shaping the conversation around responsible content creation.
@tess @tchambers I think the issue is that while there is broad acceptance of a general "no transphobia" rule across Mastodon service providers, GLAAD specifically advocates for explicit prohibitions of these specific targeted actions. Very few of the largest, most active Mastodon services have adopted those explicit rules (4 of the largest 20).
Our pledge has garnered 43 signatures, representing about 6% of all Fediverse accounts.
Reminder that Middlesex University London is seeking volunteer moderators to participate in research about the impact of upsetting material, and how moderators cope with the work.
The survey is mostly written for salaried moderation teams, but Middlesex has asked volunteers to skip any irrelevant questions; they really want to hear from us!
The study may lead to tools you can use to help with moderator stress.
💙 Thank you to the server #moderation teams below for signing the pledge to prohibit misgendering, deadnaming, and promotion of so-called "conversion therapy":
Please see our latest blog post "Targeted Misgendering and Deadnaming in the Fediverse"
After conversations with GLAAD, we are providing sample language to combat these specific harms, and a pledge to gather and demonstrate support for the community.
Middlesex University London is seeking moderators to participate in research regarding the impact of moderating upsetting material, and how moderators cope with that work.
The survey description is written for organisations that employ moderators, but Middlesex has specifically asked to hear from Fediverse moderators as well, to ensure they have a broad view of the impact.
The IFTAS Moderator Advisory Council plays a pivotal role in assessing needs, shaping project proposals, and guiding IFTAS activities.
If you have moderator experience and can offer two hours a week to contribute valuable feedback, we welcome your interest! A monthly stipend is available.
@evan @luis_in_brief @sgf Spam is on the list thanks to the needs assessment, but it's not on the front burner. Here are our current activities (not great on mobile, sorry):
Happy to talk with folks if we can support any activity in this space right now; otherwise we'll be looking in early '24 to see if there are APIs or services we can connect to or acquire to help remediate the inevitable spam.
@thisismissem @evan @luis_in_brief @sgf I'm thinking we can be helpful mitigating spam account creation; I'm not sure how helpful a third party can be with spam content classification in as timely a fashion as would be needed to combat spam messaging. Still, happy to discuss and support any and all activity that moves the needle.
A reminder that we're still collecting applicants for any moderator, administrator, or community manager who would like access to Tall Poppy safety services; we need to reach 30 accounts before we can move forward.
Services include:
1. A workshop on personal digital safety
2. Guidance to reduce doxing threats, and account protection
3. Live incident responders with compassionate, trauma-aware support
If you'd like to be in the first 30, please fill out:
Right now a lot of folks have an AP account and a bot that monitors their blog's RSS feed, pushing titles out as AP messages. This leads to multiple accounts representing a popular site's RSS feed.
Instead of following the author or fakebbc@social.com, I'd rather follow the blog actor directly and get the content.
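The bridge pattern described above — a bot polls a blog's RSS feed and re-posts new entry titles as ActivityPub messages — can be sketched in miniature. This is an illustrative toy, not any particular bot's implementation; the sample feed and the de-duplication scheme are assumptions.

```python
# Minimal sketch of the RSS-to-ActivityPub bridge pattern: poll a feed,
# find entries not yet posted, and hand their titles to whatever posts
# them as ActivityPub messages. Feed content here is a made-up example.
import xml.etree.ElementTree as ET

def extract_new_titles(rss_xml: str, seen: set) -> list:
    """Parse an RSS document and return titles of entries not yet posted.

    `seen` holds GUIDs of already-posted entries and is updated in place,
    mimicking the state a real bridge bot would keep between polls.
    """
    root = ET.fromstring(rss_xml)
    titles = []
    for item in root.iter("item"):
        title = item.findtext("title", default="").strip()
        guid = item.findtext("guid", default=title)
        if guid and guid not in seen:
            seen.add(guid)
            titles.append(title)
    return titles

# Example: a two-item feed where one entry was already posted.
sample = """<rss version="2.0"><channel>
  <item><title>Post A</title><guid>a</guid></item>
  <item><title>Post B</title><guid>b</guid></item>
</channel></rss>"""
seen = {"a"}
print(extract_new_titles(sample, seen))  # ['Post B']
```

A first-class blog actor would make this relay step unnecessary: the blog itself publishes ActivityPub objects, and followers subscribe to it directly.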
IFTAS is launching a pilot program to offer Tall Poppy digital safety services to Fediverse #moderators at no cost.
This service helps reduce #moderator harassment, and offers live incident response and resources. It's not a perfect solution, but it can be a very helpful resource.
A new paper from the Stanford Internet Observatory finds troubling amounts of CSAM on Fediverse servers.
IFTAS is working on providing a hash-and-match service and reporting support for ActivityPub servers, and will be convening platform and app developers to integrate CSAM scanning options into the default installation.
Providers are encouraged to consider blocking servers known to be a source of illegal content.
As more apps and clients are released, the need for shared resources becomes more important.
IFTAS is exploring the creation of a community-driven list of restricted or flaggable usernames for use by any client or platform developer. Before going to wider public comment, we are seeking expert and experienced input on a minimum necessary list as a very basic Step 1 in reducing inappropriate username creation.
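To illustrate how a client or platform might consume such a shared list, here is a hypothetical sketch. The entries and normalisation rules below are placeholders of my own, not IFTAS's actual list or policy.

```python
# Hypothetical sketch: screen a requested username against a shared
# restricted-usernames list at signup time. The list contents and the
# lowercase normalisation rule are illustrative assumptions only.
RESTRICTED = {"admin", "moderator", "root", "support", "security"}

def is_username_allowed(username: str) -> bool:
    # Normalise before matching so "Admin" and "admin" are treated alike.
    return username.strip().lower() not in RESTRICTED

print(is_username_allowed("Admin"))    # False
print(is_username_allowed("alice42"))  # True
```

A real implementation would likely also handle lookalike characters and fetch the list from a community-maintained source rather than hard-coding it.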
Great #TrustCon23 session on the EU's DSA. IFTAS is working to support servers with compliance. Full guidance is coming, but for now it's important to note that ActivityPub services must register a point of contact, comply with requirements to remove illegal content, publish clear and readable terms & conditions, and appoint a legal representative within the EU. IFTAS will be working to provide a "registered agent" service to help servers meet these requirements. We are also working on support for Articles 14 and 15.
Executive Director, IFTAS

IFTAS is a charity dedicated to supporting the thousands of admins, moderators and community managers nurturing a safe, open web - and building #BetterSocialMedia for all. Check out what we've done, and what we're doing next: https://connect.iftas.org/library/iftas-documentation/iftas-activity-board/
- Announcements: @iftas
- Threat Alerts and Advisories: @sw_isac

#Moderation #FediMods #MastoMods #TrustAndSafety #Fedi22