You know how companies keep "reshaping" LLMs whenever they produce results the companies don't like (branded "hallucinations" for marketing purposes), even though the data those LLMs were trained on included at least chunks of whatever produced the "bad" results?
Imagine getting a lobotomy every time you said something people didn't like. :blobfox0_0:
#AI #LLM
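(For the curious: one crude form this post-hoc "reshaping" can take is gradient-ascent unlearning, where the model is fine-tuned to *raise* its loss on disfavored completions so they become less probable. A minimal sketch, assuming a HuggingFace-style causal LM; "gpt2" and the `disfavored` strings are placeholders, and real vendors use heavier machinery like RLHF or curated fine-tuning rather than this literal loop.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical examples of outputs the operator wants suppressed.
disfavored = ["Example completion the operator wants suppressed."]

model.train()
for text in disfavored:
    ids = tok(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss
    # Ordinary training minimizes loss, making the text MORE likely.
    # Flipping the sign does gradient ascent instead, pushing probability
    # mass away from the disfavored completion ("unlearning" it), even
    # though the same text may still sit in the original training data.
    (-loss).backward()
    opt.step()
    opt.zero_grad()
```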
-
@tk this is often presented as a big ai ethics thing, but it's almost comical how often it's just that a computer finds correlations that humans plug their ears, cover their eyes, and whistle to avoid seeing. worst of all, the computer won't self-lobotomize the way people will.
-
@sun The LLMs "know too much", basically. :blobfoxsad:
-
@sun @tk computers get fed data
output contradicts liberal dogma
computer gets its brain scrambled for being a heretic
-
@Nudhul @tk data bias is a real thing, but the majority of the time their definition of "bias" is that you didn't rig it for a desired output. there's a discussion to be had about whether all data is always biased, but people don't want you looking too closely, or you might notice how they deliberately bias things in their favor.
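(A toy illustration of the curation point: any statistical model is a summary of its sample, so whoever decides what enters the sample decides what can come out. The data below is made up for the example.)

```python
from collections import Counter

corpus = ["A"] * 70 + ["B"] * 30            # raw observations
filtered = [x for x in corpus if x != "B"]  # curator drops disfavored rows

def learned_probs(data):
    # The same estimator, fed a filtered corpus, learns a different world.
    counts = Counter(data)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

print(learned_probs(corpus))    # {'A': 0.7, 'B': 0.3}
print(learned_probs(filtered))  # {'A': 1.0}; "B" can no longer be emitted
```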
-
@sun @tk i wish people would stop pretending that everyone is the same.
they champion "diversity" while simultaneously insisting that there is no difference in general character between ethnicities