"AI is totally going to replace developers, guys..."
RT: https://hell.twtr.plus/objects/bfeb3a76-34ad-400e-b8a9-70690e45a9a1
iced depresso (icedquinn@blob.cat)'s status on Wednesday, 22-May-2024 14:31:23 JST
@gentoobro @hazlin well, transformers have awful performance numbers. mamba networks are subquadratic. they just have the problem that if they don't admit a fact into their state window, they can't remember it at any later point, since the whole thing with state space models is they have to evaluate what to carry forward and what to drop.
but none of this addresses the artificial hippocampus element, which is what's needed to actually make random ML shit into an AI :blobcatdunno:
i had some theories.
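A minimal sketch of the state-space recurrence being described, assuming a plain linear SSM; the matrices A, B, and C below are illustrative placeholders, not any actual Mamba parameterization:

```python
import numpy as np

# Toy linear state-space step: the fixed-size state h is the only memory,
# so anything not admitted into h is unrecoverable later.
def ssm_step(h, x, A, B, C):
    h = A @ h + B @ x  # A decides what to carry forward, B what to admit
    y = C @ h          # readout from the compressed state
    return h, y

rng = np.random.default_rng(0)
d_state, d_in = 4, 2
A = 0.9 * np.eye(d_state)             # decays old state; dropped facts are gone
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(1, d_state))

h = np.zeros(d_state)
for t in range(8):                    # cost grows linearly with sequence length,
    x = rng.normal(size=d_in)         # unlike attention's quadratic blow-up
    h, y = ssm_step(h, x, A, B, C)
```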
gentoobro (gentoobro@gleasonator.com)'s status on Wednesday, 22-May-2024 14:31:24 JST
@hazlin The problem is that GPT systems lack the ability to reason in the sense that humans do. From a logic perspective they're essentially a giant truth table. You can store a certain degree of logic in one, but no more than that. Even if you trained an absurdly expensive GPT with an attention window that could fit millions of lines of code, it could still only store logical relationships up to the depth of its internal layers. Humans, on the other hand, can trace logical threads as deep as they have the patience for.
But all of that is fantasy. The cost of a GPT system scales roughly with the square of the attention window times the depth of the internal layers. This is why OpenAI keeps shrinking the window size on the free version.
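Putting rough numbers on that scaling claim (assuming the usual quadratic self-attention cost per layer; the window sizes, width, and depth below are made up for illustration, not any actual OpenAI configuration):

```python
# Back-of-envelope attention compute, assuming ~O(window^2 * d_model) per layer.
def attn_flops(window, d_model, layers):
    return layers * window ** 2 * d_model

small = attn_flops(window=4_096,   d_model=8_192, layers=96)
large = attn_flops(window=128_000, d_model=8_192, layers=96)
print(f"{large / small:.0f}x the attention compute")  # ~977x for ~31x the window
```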
hazlin no plap pirate (hazlin@shortstacksran.ch)'s status on Wednesday, 22-May-2024 14:31:25 JST
@gentoobro I honestly expected AI to bottleneck before now. But it just keeps improving.
Though I don't think throwing more GitHub projects at it will make it better.
Perhaps there is some kind of Turing-complete set of programs it can be given to learn from.
Create a procedural enumeration that has correct associations between different descriptions and different solutions, as in the sketch below.
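A toy version of what such an enumeration might look like; the scheme here (tiny arithmetic programs with machine-checkable answers) is just one hypothetical instance of the idea:

```python
import itertools

# Procedurally enumerate (description, program, verified answer) triples,
# rather than scraping more repositories; correctness holds by construction.
OPS = ["+", "-", "*"]

def enumerate_triples(limit=4):
    for a, op, b in itertools.product(range(limit), OPS, range(limit)):
        prog = f"{a} {op} {b}"
        yield f"compute {prog}", prog, eval(prog)  # toy eval; answers are exact

triples = list(enumerate_triples())  # 48 description/solution pairs for limit=4
```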
iced depresso (icedquinn@blob.cat)'s status on Wednesday, 22-May-2024 14:49:33 JST
@gentoobro @hazlin the simulacrums are passing turing tests, they just don't have the ability to actually learn anything. and they never will with the current doctrine.
i think numenta was very close. they had some small models learning in real time (i think still a thousand iterations of learning, but they're using hebbian stuff.) i had some thoughts about the way they did that: divide the model into banks of "tapes", somehow accumulate the error across those, then pull up whichever tape is worst and subject only that to ongoing training for a bit. which is kinda close to what the hippocampus actually does.
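A speculative sketch of that banks-of-tapes idea, since the post only gestures at it; nothing below is Numenta's actual algorithm, and the update rule is just a Hebbian-flavored outer product:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tapes, d = 4, 8
tapes = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_tapes)]
err_acc = np.zeros(n_tapes)  # running error accumulated per tape

def step(x, target, eta=0.01):
    # accumulate error across all tapes, then train only the worst one
    for i, W in enumerate(tapes):
        err_acc[i] += float(np.linalg.norm(W @ x - target))
    worst = int(np.argmax(err_acc))            # "pull up whichever tape is worst"
    tapes[worst] += eta * np.outer(target, x)  # Hebbian-style pre/post outer product
    err_acc[worst] *= 0.5                      # let it recover before being picked again
    return worst

worst = step(rng.normal(size=d), target=rng.normal(size=d))
```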
gentoobro (gentoobro@gleasonator.com)'s status on Wednesday, 22-May-2024 14:49:34 JST
@icedquinn @hazlin And let's not forget that "human cognition is computable" is an assumption with little evidence for or against.
gentoobro (gentoobro@gleasonator.com)'s status on Wednesday, 22-May-2024 15:29:39 JST
@icedquinn @hazlin A Turing test is just some sci-fi nonsense about replicating human speech patterns well enough to fool a normie (something that is easy; politicians do it constantly and they're not even a form of slime mold). It has no technological significance.
I don't think it will ever be feasible to replicate human cognition in a digital computer as we know them. Maybe in a 5 kg monolithic chunk of custom silicon. Probably not. Consider distance and signal propagation: nothing the size of a warehouse can have the same latency as something the size of a melon, just based on the speed of electricity. And a melon isn't even the right target; only a fraction of the brain handles cognition, while most of it processes sensory input and runs biological housekeeping.
Parallelism is built into the hardware of your brain; in computers it's largely handled by sequential processors multiplying arrays upon arrays of numbers. Sure, you can add more cores, but not to the same effect as adding more dendrites. Your brain can effectively multiply enormous matrices in O(1).
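The latency point is easy to put rough numbers on; assuming signal propagation around 2×10⁸ m/s (roughly two-thirds of c in typical interconnect) and made-up sizes:

```python
# One-way signal traversal time at ~2e8 m/s; sizes are illustrative.
v = 2e8  # m/s, roughly the propagation speed in copper or on-chip wiring
for label, meters in [("melon-sized brain", 0.15), ("warehouse-scale cluster", 100.0)]:
    print(f"{label}: ~{meters / v * 1e9:.1f} ns")
# melon-sized brain: ~0.8 ns; warehouse-scale cluster: ~500 ns
```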
iced depresso (icedquinn@blob.cat)'s status on Wednesday, 22-May-2024 15:29:39 JST
@gentoobro @hazlin the turing test was supposed to be a philosophical device: people kept asking how you'd know when something was smart, and people only recognize something as smart by talking to it.
basically, nobody actually has, or ever had, an objective measure of something being alive.