@whitequark Capitalism more or less collapsing because the owners of capital get stuck on some incredibly counterproductive ideas and use their disproportionate media reach to evangelize those ideas... yeah, that is a bloody amusing way of preserving humanity-as-a-species, if grim and bleak in equal measure.
@whitequark Agreed, though I think that's a commonality with Pascal's Wager, rather than a distinction from it. That thought experiment also makes zero sense if you're not already somewhat primed in classical Christian apologetics.
It's a point well-taken, though, that the Basilisk is a symptom and not the originating belief system.
@whitequark Agreed there. I don't think there's any sense in which this strain can meaningfully create *stuff* or sustain its own needs, so it will have to self-limit even just from a basic "who's going to grow the food" kind of analysis.
Similarly, totally agreed that that doesn't mean any particular individual or even community will survive that self-limiting asymptote.
@whitequark Yeah, it's tricky to exactly characterize. Roko's Basilisk is pretty much Pascal's Wager, so I don't think the comparison to a god is wrong, exactly, but it's also by necessity somewhat imprecise.
There's definitely a belief system there, and an absolutely bizarre one at that, but it's a bit outside the normal taxonomies of belief that tend to get used in casual conversation.
I get it's extremely problematic, but it's hard for me to look at the particular form that AI hype is taking and *not* make an analogy to drinking the Kool-Aid. There's a genuine cult-like form that this is taking, that's quite distinct from cryptoscams and the metaverse hype.
I don't think most web3 cranks looked at Bitcoin and thought "ah, yes, I've found the technogod," but that's exactly what's encoded in Roko's Basilisk and other kinds of AI apologetics.
It kind of gets to the core of what's human to understand that the mystery box that convincingly mimics human speech *isn't* actually human or even intelligent. I guess it shouldn't surprise me that there's some very weird shit to be found in those philosophical backwaters.
Anyway, it genuinely scares me that a cult-like movement has captured and enclosed large swaths of human communication, already hurting a hell of a lot of people in the process. Whatever happens from here, I can't escape the feeling that it's going to get far weirder still.
...and the latest reMarkable release notes include the phrase "AI-powered."
It's at least not LLM trash, but it puts a lot of hype around an OCR-like feature. It's also gated on having a Slack account for some reason, so easy to avoid. But still, it's the principle of the thing, you know?
[edit: I was wrong. Somehow it *is* LLM trash. I underestimated tech companies' willingness to shove LLMs into problems where they do not belong.]
Just say "machine learning." It's more accurate, less cult-y, and doesn't make your audience think you shipped something that pilfers their creative labor.
"In addition, OpenAI can't use the data for training purposes..."
Better idea: don't give the eugenicist fascist AI cult my handwritten notes at all, even if they pinky swear to be good.
At least all this bullshit is pretty easy to turn off for now, and reMarkable is based in a GDPR regime (specifically, Norway). But dear gods, I'm pissed.
@whitequark It may have been slashdotted, but it was a Lemmy comment reading:
"I use ChatGPT to answer the questions for my annual mandatory idiotic work safety training. Just copy/paste the questions and choices in, boom, get the right answers, don’t even have to read the shit. I’d pay $0.01 for that."
@whitequark Sure, but just thinking of what normal development processes are (or used to be) there, it's very surprising that an easily detectable regression in a core component was introduced without tripping review or CI.
Even if this bug doesn't trace back to AI, it speaks badly to what controls are still functioning to prevent similar regressions from being introduced by LLMs.
@aud @SnoopJ Just this. It's so annoying that LLM con artists have successfully sold the idea that LLMs are "universal learners." There's no reason to think that scraping all of Reddit will lead to some new medical discovery — hell, that's not even close to how *humans* learn, through mistakes and mentorship, through experiment, through careful selection of references and so forth. That hasn't stopped the con artists from describing LLMs as though they learned in human-ish ways.
I will admit to not fully understanding why someone would cosplay a character that's a thin allegory for a fascist SS officer, but especially at Pride it just leaves me confused.
I mean, you do you... just saying I'm over here scratching my head.
Sometimes I write intimate eschatologies or words about technology and math. Sometimes I make things by burning them with light or squeezing them through a small, hot tube. Sometimes I push water with a stick while sitting in a tiny boat.