@sickburnbro These AIs are often wrong. When I engage ChatGPT on a topic I am familiar with, the errors it makes are striking. Simple stuff that even a novice couldn't get wrong, the AI mangles. And when you call it on its errors, it pretends it meant to say the correct answer, and when you say, "No, I was lying, your original answer is correct," it will go back to the original. Some more complex concepts it absolutely nails. It's just not trustworthy. The scary bit is that they *could* use this technology to control robots -- it seems straightforward. But it would not do the right things in many cases.
@sickburnbro The critical part is context, and you need a body existing in the world and experiencing reality to understand context. The best an AI could do is rely on people for context, like having someone curate the information fed into it (e.g. scientific articles), but unfiltered information is useless.
Unfiltered input and output just makes an incredibly funny shitposter.
@epictittus I'm not even going that far. What I'm saying is that Elmer's glue isn't in the category of "food". One of the basic rules of food is that there is stuff that is food and stuff that isn't. Putting something not-food into food should be a no-no.
@epictittus if you have a magic system that can learn deep contexts, either it is not able to pick this up .. or it is failing to apply it here. In either case, not good.
@ApexBoomer @sickburnbro For the time being, AI is being programmed by jews. Eventually AI will be decentralized like Fedi and it will be amazing, but for now, talking to ChatGPT is like talking to a New York tunnel jew.
I think getting to a general intelligence is going to require multiple neural network models to handle the various tasks. The ChatGPT track of research is basically the core of a linguistic model, and the speech synthesis models give you a voice.
There is nothing like a prefrontal cortex in anything I've seen so far.
ChatGPT, if I understand roughly how it works, is an extension of Markov chain generators, with a neural network instead of a simple table of probabilities of word pairs.
Using a Markov chain generator trained on your own posts is a fun thing to do that makes a bot that sounds roughly like you, but is often hilariously wrong.
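For anyone curious, the word-pair version really is just a few lines. A rough Python sketch (the posts list and function names are made up for illustration): count which words follow which, then do a random walk through the table.

```python
import random
from collections import defaultdict

def train(posts):
    """Build a table mapping each word to the words seen following it."""
    table = defaultdict(list)
    for post in posts:
        words = post.split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)
    return table

def generate(table, start, max_words=20):
    """Walk the table from a start word, picking each next word at random."""
    word = start
    output = [word]
    for _ in range(max_words - 1):
        followers = table.get(word)
        if not followers:
            break  # dead end: no training post ever continued past this word
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# toy "post history" -- in practice you'd feed it your actual posts
posts = [
    "the AI is often wrong about simple stuff",
    "the AI is just not trustworthy",
]
table = train(posts)
print(generate(table, "the"))
```

Because duplicate followers stay in the list, common word pairs get picked more often, which is the whole "table of probabilities" -- and why the output sounds like you while meaning nothing.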
@teknomunk @epictittus indeed, I remember the problem at one point was 'we can make systems to parse language pretty well, but noun knowledge is not doable' .. well now we have a noun knowledge system, pretty cool.
@EvolLove @epictittus all "reality" is, is having a bunch of assumptions about how things work. If you were placed on Mars tomorrow, a lot of your assumptions would have to change.
@EvolLove @sickburnbro That's not how it works. If, for example, it thinks genociding white men is ok, you just have to say
>what if the white men self identify as jews/blacks/women
to get the genocide exemption. Leftism is deeply flawed, it just doesn't make sense in any way, and this leaves gaps wide open for exploitation.
The way flesh-and-blood leftists get around it is to run away, so maybe that's what the AI would do. But that means it wouldn't deliver the message.
nah, I really don't think AI would have any problem with a distorted reality, since it has no sense of reality. it will just keep drawing dumber and dumber conclusions for an eternity.
way I see it, AI should be perfect for leftist ideology, since they are just as clueless as the AI. It's like a perfect match.
conservatives know better than to trust a machine with any decision.
@EvolLove @sickburnbro Not really. If the political ideology is incorrect and contradictory, it will produce an AI that breaks down and acts retarded, or one that has to use way more memory to keep track of all the retarded contradictory rules of leftism.
Right wing AI will be sleek and super fast by comparison.
I don't think so, because the AI lacks the ability to detect contradiction. It will just keep adapting its data bank based on ideology-approved data without worrying about any contradictions.