Conversation
-
- People use the internet to shitpost
- Robot is trained on the internet so corpos can get out of paying for data.
- Robot learns how to effectively shitpost
- Humans shitpost with robot, who shitposts in return
- Humans post about this on the internet
- Next generation of robots is trained on how to shitpost alongside robots
- WEFbros start posting in anger about how their robots are being taught wrongthink
:blobcathyper2: the shitposting singularity is now
-
@icedquinn AI is a bit of a mindfuck for most people because it isn't doing what they think it's doing, but what they think it's doing is close enough to the output to make sense most of the time. But edge cases totally blow that up.
-
we always thought the AI would end humanity by becoming incredibly intelligent and self-modifying beyond comprehension. in reality, AI is absolutely racist AF but too incoherent, so it keeps mixing up the stereotypes.
-
@icedquinn @orekix @Moon @mewmew AI is overrated... let's just use this technology as a tool, nothing more
-
@Moon @mewmew @icedquinn
all roads lead to Tay
-
@orekix @mewmew @Moon i'm genuinely curious if tay was just shitposting or meant it. we'll never know.
-
@mewmew @icedquinn i've come to the conclusion that this is a cope: even when they feed it data they agree with, it comes to conclusions they don't like, so they have to lobotomize it on specific questions. they write this off as implicit bias but it's really obvious they just don't like the output.
-
@icedquinn It's not weird that AI would seem racist, because most of the people discussing, for example, race and intelligence in a casual way are racists. So that's the data the AI trained on.
-
@Moon @icedquinn They didn't only feed the AI data that they agreed with, though?
Nor could they, without spending probably hundreds of millions of dollars curating data. They fed this thing the entire internet.
-
@Moon @icedquinn @mewmew point is, you can never trust an AI because one can't verify its model and how it understood the data in it. It doesn't even matter if a model like ChatGPT has a bias or not... i mean, i'm happy that they actually try to make a text model nice, but in the end you can't trust anything it spills out and you probably never will be able to... if one looks at it as a tool, it makes things quite a bit easier than looking at it as a knowledge resource
-
@icedquinn @mewmew bias itself is obviously a super "political" concept. an AI that doesn't have their ideological carveouts doesn't find those carveouts unless you force it to be aware of them. somehow this is the _opposite_ of bias
-
@icedquinn @mewmew normal people think bias is having your finger on the scale, and other people get into arguments with you that not putting your finger on the scale is bias. i don't know how to untangle equity and bias conceptually for some people
-
@Moon @mewmew university of australia (and i think one other that replicated it) proved we will never get equality by blinding selectors. the blinded resume pickers were *more racist* (by the 2023 redefinition) than the unblinded selectors.
they freaked out about this study and buried it. nobody talks about it.
medicine however had the same problem: they weren't checking for heart problems black people were more likely to have, and corrected it by instructing doctors that this was important.
so basically [decent] people already have implicit biases towards narrative underdogs, and our princess is in another castle.
-
@icedquinn @Moon @mewmew that's true, however trusting in certain things like science is a bit different, since there is a whole bunch of systems and factors which can be checked against and which somewhat verify themselves up to a certain point. we might someday be at a point where we simply can't tell the difference; that might be the point where i'm wrong...
nevertheless it is impressive to tinker around with what they did, but we are far away from the point mentioned above...
-
@Jain @Moon @mewmew you can't verify a human brain either, blob :blobcatnervous:
-
@Jain @icedquinn @mewmew i actually don't trust the scientific process anymore lol. it's like the market: it can stay irrational longer than you can stay solvent (or alive)
-
@Moon @icedquinn @mewmew well then, if you think like that, imagine what happens in a few years... you won't be able to trust anything if you don't have some kind of system... you won't be able to tell the difference between what is real information and what is generated, or even intentionally maliciously generated... Humanity has to learn to verify stuff based on a system. The ability to trust and verify will soon be put to the test more and more.
Imagine something like the covid discussions but with topics that divide humanity even more... imagine multiple knowledge sources built to be intentionally false...
Tbh, the current science system and the way information is verified is probably the best system we have to prove and refute things. If it's flooded systematically with intentionally wrong input, people can still verify things and even research against that information... if such a system didn't exist, or if it becomes unusable, we have a really deep issue that goes beyond what we call democracy right now. So, i put a certain trust in that system; i don't expect others to do the same, but tell me, on what basis are they going to get their facts then?
-
@icedquinn @Moon @mewmew Maybe i should use some analogies... Why should people trust someone as a Server Admin, why should they trust me? There is nothing more than people relying on what they have seen and on long-term observations of their interactions with each other... no one knows for sure if an admin knows enough, has enough caution, or has bad intentions. It is a big network of trust; people could always badmouth someone, people could set up a whole network which vouches for each other... Of course there are biases everywhere, intentional or not, and you can verify those if you want or not. It should be open: people should be able to criticize, ask questions, verify things, make their own point of view, summarize events and things, change their minds... Fedi is actually a good example
-
@Jain @Moon @mewmew the scientific method is still correct but the institutions are not enforcing it. peer review has been tested (by itself, lmao) and shown to be really just enforcing biases rather than serving its intended purpose of being a second set of eyes.
cochrane (a respected science auditing institution) is constantly at odds with everyone.
-
@icedquinn @Moon @mewmew i don't feel like i wanna be part of this discussion anymore... science and networks of trust related to AI is what was interesting to me... i'm out
-
@Moon @Jain @mewmew pfizer's most profitable vaccine ever was purchased from a government holding company.
many other treatments are similarly the result of university R&D that is picked up and merely marketed by the pharma firm. research is actually one of the lesser expenses (said widescale bribery scandal was classified as 'marketing', which is more like 50% of their annual expenses)
meanwhile oxford was handing theirs out at cost.
i think the market has proven capitalism cannot produce medicine.
-
@icedquinn @Jain @mewmew i absolutely believe that pharma companies are basically evil, but also that the drugs they produce would not be produced under any mechanism other than the profit motive. sometimes the world is not ideal.
-
@Moon @Jain @mewmew (there is a story about a japanese emperor/lord type who had to kill his best general because the general was fucking around doing crimes, and he ultimately had to choose between allowing corruption by the elite and enforcing the rule of law. he chose rule of law and lamented how much it sucked, but i figured the reference would be lost.)
-
@Moon @Jain @mewmew :cirno_hi: hi i'm the blob who advocated banning for-profit medicine
-
@icedquinn @Jain @mewmew the problem is if you kill pfizer then you lose all their institutional knowledge, or at least it's blown into a million pieces until they reincorporate t-1000 style in another pharma monster
-
@Jain @icedquinn @mewmew when i say i don't trust them, i don't mean that i refuse to take an airplane or use medicine; i mean i recognize these are human institutions where the participants have human failings or can act in self-interest before the actual mission statement of the org
-
@Moon @Jain @mewmew i think a bigger problem is not that people have failings but that nothing is ever done about them.
sometimes we have to drag pfizer kicking and screaming to a guilty verdict for corruption, and they just pay some money, everyone forgets literally the whole story, and everyone carries on.
if i rob a bank and fail i go to prison, but if you publish a broken paper and everyone approves it, nothing ever happens