Notices by #opPedoHunt (oppedohunt@fsebugoutzone.org)
-
@stelo Br
-
@p Just friendly advice, maybe you should update nginx/1.25.3 because it is outdated, but also because of vulnerabilities
-
@p https://check-host.net/check-report/260efb59k98c The errors appear in all countries, which means it is not only me
-
@p I have some advice: you need to fix this https://fsebugoutzone.org/main/ostatus/
-
@p That's fair
-
@p I agree with that
But loli is illegal in the USA
-
@p Honestly, I generally don't like illegal and weird content because I wish the internet were a safe place, since that is in everyone's mutual interest.
I just want to help make moderation better, but it looks like some people definitely like that content...
-
@p Content moderation and user safety
Decentralized social networking platforms introduce new challenges and difficulties for user trust and safety.
By nature of the Fediverse, operators of an instance are solely responsible for moderation of its content. As there is no form of centralized governance or moderation across the Fediverse, it is impossible for an instance to be "removed" from the Fediverse; it can only be defederated per an instance operator's choice, which makes that instance's content inaccessible from the operator's instance.
Individual instances are responsible for defining their own content policies, which may then be enforced by its staff. Moderation of a Fediverse instance differs significantly from that of traditional social media platforms, as moderators are responsible not only for content posted by users of that instance ("local users"), but also for content posted by users of other instances ("remote users").
With toxic or abusive content being common in the Fediverse, and with the available moderation tools and the legal or financial impetus to moderate content lacking in comparison to those of centralized social media platforms, the Fediverse exhibits shortcomings in child safety.
A 2023 study by the Stanford Internet Observatory's Cyber Policy Center found that, out of approximately 325,000 Fediverse posts analyzed over a two-day period, 112 were detected as instances of known child sexual abuse material (CSAM); 554 were detected as containing sexually explicit media alongside keywords associated with child sexual exploitation; 713 contained media alongside the top twenty CSAM-related hashtags on the Fediverse; and 1,217 contained text relating to distribution of CSAM or child grooming. The study additionally noted that, during a test run of its analysis pipeline, detection of its first instance of known CSAM occurred within approximately five minutes of runtime. All detected instances of CSAM were reported to the National Center for Missing and Exploited Children (NCMEC) for triage.
-
@p Did this happen recently? Because there are only old logs on @modfaggotry without new updates.
-
@p That's hypocritical, because people are constantly making cases about FSE being full of shit like some of your friends, but you dgaf.
Then why would someone make a case when you allow shit on your unmoderated platform?
-
@p Surprise pedo defender
IMG_20250512_142803.jpg
Statistics
- User ID: 345174
- Member since: 12 May 2025
- Notices: 11
- Daily average: 5