@sarvo @natalie @Drand this "misinformation" is just people telling lies on the internet for fun. studying random internet users' shitposts for natsec grant money is why MIT is a fucking embarrassment and a shell of its former self.
@lanodan @natalie @jeffcliff @ashten @Drand having no code just means it's pure and thus mathematically proven. more grant money that way so they can make more academic vaporware!
@ashten @natalie @lanodan @Raccoon no, moderation is the main theme here. someone is building an ML dataset for selling "policing fedi" to military contractors on MIT's network, and they're using a buggy spider, which is why anyone noticed. there's a divergence in moderation philosophies here that boils down to: do it by hand with human moderators, or pay someone in Silicon Valley to make a robot do it for you.
in my extended experience human moderation is the only kind of fair moderation that works at scale. AI won't solve this; it'll only make the problem of network-wide angst worse.
@ashten @natalie @lanodan @Raccoon automated moderation also ends up being a friction point that generates more problems than it solves. when perverse economic incentives are involved, it actually does a worse job than even the worst human moderators.
it's IMO a solution in search of a problem: moderation requires the human element, and removing that turns it into network filtration, which serves a totally different set of end users who pay into it.
@lanodan @natalie @ashten @Raccoon right, it's a mix of normative and non-normative control messages and angry posts. if only people read up on Usenet before repeating it.