I love any effort to claim AI as part of cognitive science, yet I remain skeptical (having not read the paper) that "AI" was ever an attempt to understand the brain.
Understanding "making decisions", yes
I don't think "artificial intelligence" ever really sought to understand how human brains work.
The older research programs wanted, at least, to make *correct* decisions — we got knowledge graphs, compilers, expert systems from them.
The current craze has abandoned commitment to correctness in favor of the imitation of existing communication artifacts
…disturbingly similar to the VC investment cycle
Upon greeting the gang
"Foolish mortals!"
If the reward is infinitely scalable, the expected return can always be scaled up so long as the prior is > 0 — even a tiny bit greater
Which is why the FP overflow can also be an underflow; you can still reach these absurd decisions
like "my ten bucks is better spent in helping an AI researcher get an extra latte than in buying lunch for the starving mother in front of me, because I could be increasing the chances of Robot Rapture (utility 10^59 lifetimes of bliss) by 10^-35"
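The arithmetic failure is easy to demonstrate; a minimal Python sketch (the specific utilities and probabilities are invented for illustration):

```python
# Utilities big enough to overflow IEEE-754 doubles become infinity,
# and from there the "expected value" calculus stops being arithmetic.
import math

bliss_utility = 1e400      # any literal past ~1.8e308 overflows to inf
tiny_probability = 1e-35

expected_value = bliss_utility * tiny_probability
print(expected_value)      # inf: the tiny prior no longer matters

# Comparing two such "infinite" stakes yields NaN, not a decision:
difference = expected_value - bliss_utility * 1e-60
print(math.isnan(difference))  # True
```

Once the stakes saturate to infinity, every probability greater than zero produces the same "infinite" expected return, so the comparison that is supposed to drive the decision degenerates entirely.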
I read Bostrom's book SUPERINTELLIGENCE back when I worked at Google —
Blaise A— insisted that it go on the "AI" SF book club reading list
(as an aside: I started the book club; we were mostly reading, uh, cautionary stories)
SUPERINTELLIGENCE is garbage, & I said so: throw around big numbers until your moral calculus has a floating-point overflow & you can be convinced of obviously crazy things
(phrasing wasn't so clean at the time)
... nine months later I was no longer working for Google
But Bostrom is a garbage philosopher, too.
Pascal's Wager is nonsense based on the same kind of floating-point-overflow, inconsistent utilitarianism
Even within utilitarianism (with which I have problems!) there've been literal decades of philosophical work on discounting future utility to avoid "so you're saying there's a chance" goofy NaN * NaN computations
Like "how many human consciousnesses could we execute in total bliss at once if we converted the galactic core to computronium"
LLMs exhibit "potemkin understanding"!
Hope the methodology here is better than the last LLM-hater arxiv paper that came through
Must read it more carefully...
https://mathstodon.xyz/@gregeganSF/114758840374128081
issuing a blessing out to the world to the people who made it possible to renew my passport online without having to mail the existing one in
that's probably the 18F folks, bless them
i know our gradual descent into scary places is frightening enough and I am grateful for the ability to hold onto the physical document.
(and I hope that 🏳️⚧️ friends can get their papers with the right markers on them soon, dangit)
@grimalkina if I understand the question right, yes i think so
Jony and Sam are both suddenly real quiet; wonder how their honeymoon is going
Oh
https://pivot-to-ai.com/2025/06/23/iyo-vs-io-openai-and-jony-ive-get-sued/
*Nelson laugh*
h/t @davidgerard
LLMs lower the immediate cost of "fucking around"
and push "finding out" beyond management's forecasting horizon
which is Yet Another reason that everyone with actual expertise should Avoid At All Costs
Tired: eating your seed corn
Wired: burning the library to drive off the chill on a brisk autumn day
Y'all this post I made blowing off steam is blowing up my phone and I'm not sure how to feel about that
*I* thought I was saying something obvious but it has apparently touched a nerve for a lot of us
oof too real, honestly.
yes, hard agree
automation has the potential to take us to "fully-automated luxury communism" but it also has the possibility of accelerating the machine that is already chewing some of us up
100%
all of this.
zero-shot or few-shot classifiers are pretty great, although they should be adopted with caution because of embedded bias; serious users should take care to validate that the tool does what they expect on known examples.
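That "validate on known examples" step can be as simple as an accuracy check against a small labeled set. A hedged sketch, where the keyword matcher is a stand-in for whatever zero-shot model you'd actually call:

```python
# Sanity-check any classifier against examples whose labels you already
# know, before trusting it on anything that matters.

def validate(classify, labeled_examples):
    """Return (accuracy, failures); classify maps text -> label."""
    failures = [(text, expected, classify(text))
                for text, expected in labeled_examples
                if classify(text) != expected]
    accuracy = 1 - len(failures) / len(labeled_examples)
    return accuracy, failures

# Stand-in classifier: a real check would call your zero-shot model here.
def toy_classify(text):
    return "complaint" if "broken" in text.lower() else "praise"

known = [
    ("This thing is broken again", "complaint"),
    ("Works great, love it", "praise"),
    ("Broken out of the box", "complaint"),
]

accuracy, failures = validate(toy_classify, known)
print(f"accuracy: {accuracy:.0%}, failures: {failures}")
```

Keeping the failures list (not just the score) matters: the specific examples a classifier gets wrong are where embedded bias tends to show up first.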
this kind of work is specifically _not_ LLMs, and as a language nerd I'm super annoyed that these two very different uses are conflated here
generated language from an LLM is _not_ problem-solving. it is an interesting language artifact, but does not involve thought or insight into anything but the weather system behind the chatbot
Seeing usually-smart folks get rope-a-doped into arguing for LLM utility as if there were a zero-sum trolley-problem slider between "useful" and "ethical" and we're just arguing over the best setting
But the real problem is even dumber
-is its mere use a climate disaster? Yes
-is its data provenance founded on theft? Also yes
-will it be used to ruin ordinary workers' lives? Yup
-will it ruin countless organizations who think they're buying their way to cheap labor? That too
"Supply side democracy" is pretty great
Attention @skinnylatte
the fish-themed pun battles have reached the international stage.
(I hate that I have to wonder if this is generated, though)