@weirdwriter Their assumption that algorithmic serendipity is somehow inherently untainted by algorithmic bias would be charming if it weren't so insidiously dangerous.
@powersoffour @futurebird @cyberlyra Thinking this could violate data privacy laws in the EU & in a number of US states. So, not surprised (& in fact conditionally pleased) that the capability is no longer there.
@inthehands I'm having some trouble with this. My experience, along with counsel I've been given over the years, is that one MUST attend to the clearly stated job requirements. Unless someone's flagged your application to ensure it passes, those requirements must be addressed if one's to make it through the initial screen. & in an annoying number of cases, the requirements literally CAN'T be ignored, because the screen is automated.
@goatsarah Goodbye, Earl. (Srsly though here in the US this would more typically end with Earl murdering Wanda & probably Maryanne, too, if she lived in easy driving distance.) https://youtu.be/bqnrXRuebWg
@ironchamber @anildash @mathowie What I read in school (early 90s) focused mostly on menu systems for applications; hypertext usability wasn't a big thing yet. But now that I think of it, there've been Nielsen Norman Group articles over the years. They're AI shills now, but for a long time there was solid stuff in there, even if one had to adjust some for Jakob's massive ego.
@ironchamber @anildash @mathowie There was actually a fair body of ergonomics research on this, but the literature I'm familiar with is pre-web. I'd have to look up specifics (I still have that ergonomics topics reader from school), but my recollection is that they worked OK for a small set of users with very narrow needs, but were a major detriment to power users. IIRC there was speculation that it would also tend to narrow the options that were used, through a self-reinforcing feedback loop.
@ironchamber @anildash @mathowie OTTOMH, the way sharing works in Android, I never know where I'm going to find the target for my share, because Android keeps re-sorting based on how it conceptualizes 'recency.' I've also tried out adaptive UIs in several desktop apps over the years; the bad-penny pattern there is pruning elements from the menu structures if they're not used within some period of time. (Word did this at one point, IIRC, but I've used others.)
@anildash @mathowie man do I ever hate, hate, HATE adaptive UIs. I have used many & have yet to see one that didn't make things harder for me on the average, vs making things easier on one task maybe 30% of the time. The added cognitive load of trying to figure out where the feature I wanted has gone just leaves me irritated until I sigh & accept it as unchangeable. But I never like it.
"system that think like your brain in parallel" no my dude, you haven't, you've done absolutely nothing to demonstrate that transformers think in any meaningful way "like my brain."
@LinuxAndYarn @thomasfuchs arguably a glib way to put it; however, it does get at a problem with this type of treatment. What I've read is that psychedelics as treatment for depression, anxiety, PTSD, et al., are currently believed to work by increasing neuroplasticity.
What do you then *do* with that?
If you just take the drug & live your normal life, you should expect any beneficial results to be impermanent. Or if you continue to do bad shit, expect new, undesirable patterns to be reinforced.
@cstross @pettter Sure, but I think we're mostly shitposting on this. (FWIW, though, I think there's an argument for 'bullshit' vs 'lying.' It's more dismissive: I'd map 'bullshit' to contempt, & 'lying' to anger. Contempt is often more damaging than anger.)
@pettter I'd argue for #bullshit in the Frankfurtian sense of not caring about truth, just about outcomes - though an LLM doesn't actually care about *anything* because, yeah, it's not capable of caring. @cstross
@Gargron @anildash I'd argue it's usually not, since I mostly see it used ironically ("17 civilians were discovered to have been rendered unalive") or as a fine distinction ("there's something unalive about that painting"). I.e., not bowdlerization - it has a different purpose than that.
What's going on is that Anthropic "prompt engineers" have redefined self-awareness to mean 'has contextual information.' That the system is using language then allows them to delude themselves into universalizing their definition.
Saw a similar problem in AI research in the 80s: researchers might define a "frame" holding contextual info, & when their program produced solutions that referenced the frame, construe that as a form of self-awareness. #AIHype #Claude
Put another way: We should not lose sight of the fact that LLMs are doing some really interesting things. But the facts that they're being built by cultist #AITrueBelievers, that they do this thing using natural language, & that they're simultaneously making lots of money* all contribute to the delusion of something being there that isn't.

*which is the primary signifier of God's Grace in Calvinist Capitalism
What's fascinating to me is Alex Albert losing sight of something genuinely cool & interesting: the model integrated needle testing concepts so quickly that it produced responses that could be construed as recognizing the test environment.
Illusion of "meta-cognition" isn't that surprising if one remembers the system is created & trained by #AI#TrueBelievers who spend all day every day communicating in language that presumes #AGI is imminent - if not, as assumed here, immanent. #AIHype#Claude
Put another way: Alex is basically telling Claude 3 ("Opus") that he's running a test on it, & is excited when Claude (a system for analyzing & producing human-plausible representations of similar text) "recognizes" a needle-testing prompt and produces text that's plausibly consistent with needle-testing.
@nemeciii There's just no way that voice control as a primary method of control holds up to ordinary day-to-day use across the population. If you want to suggest that disabled folks could get knobs, then think about transitory disabilities: illness, fatigue, injury. Think about circumstances: passengers, loud music, software failures. So, no. Hard pass. @msh @thomasfuchs
@nemeciii OK, I think I didn't understand your claim before, but now that I do: No. Absolutely not. Ambient noise exists. People lose hearing. People aren't always able to speak. People carry on conversations in cars. Disability exists. You can say machine learning systems will handle a lot of these use cases, & maybe they can - sometimes. But what should our tolerance for failure be in a one- or two-ton vehicle moving at high speeds in varied conditions? @msh @thomasfuchs