Matty (matty@nicecrew.digital)'s status on Wednesday, 04-Sep-2024 02:21:29 JST
So many advertisements for services that utilize artificial intelligence. If you think the competency crisis is bad now, just wait ten more years, if we even get that far. All this talk about how the next GPT model is going to be a hundred times more intelligent than the last. Humanity's hubris knows no bounds.
Jonny, what you see on the internet is surface level for what AI can be/is being used for. If you think corporations aren't leveraging AI for hiring, promotions, corporate decisions, et al., you are a fool. The image generation and prompt responses are just what we are allowed to see. It is innocuous enough, but anyone with a brain knows it's used for far more than making AI art of Pikachu firing a rifle.
@matty@Jonny@mjdigspigs any real use of "AI" is really just "computer algorithms." The only special sauce that "AI" is supposed to have is that you aren't supposed to need to curate the input data as much.
@matty@mjdigspigs AI is being used to write content and create art, for the most part. Some laboratories are using it to model proteins and elemental molecules, and that's cool, but for the most part, it's just writing term papers and fake legal documents for lazy lawyers.
I like AI and use it sometimes, but these people are going to make it the fulcrum of their entire operation, outsourcing development and decision making to a computer that may or may not spit out the right information. What if it doesn't? Who is going to be around for quality control? No one. It is a logistical and technical nightmare waiting to happen.
Good morning Matty. Got a fire started and the needle has gone up 1 degree. These old houses carry the cold in their bones. That's when Turbo really went off the rails, IMO, that AI crap. I think it's the anti-christ.
That's not how this works at all. The Y2K way of thinking makes people vastly underestimate modern technology. Yes, if you program a computer to get stuck in a loop it will crash, but it can also be programmed to handle those crashes. There will be no point where the singularity comes and humanity wins because we are made of flesh and bone rather than circuit boards and capacitors.
You also have to remember that this ephemeral idea that you may have, a "loop," doesn't exist as it does in your head. Imagine you had instructions to build a frame for a home. However, in those instructions you see that it requires you to tape two pieces of plywood together to form a joist. Obviously that's not right, so you use a couple of bolts and plates. The AI can do the same thing. It has already been given a way to think outside of the box.
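A minimal sketch of that "handle the crash" point, with made-up names and numbers rather than anything from a real system: a loop that could spin forever gets wrapped in a guard so it fails cleanly and recoverably instead of hanging.

```python
class StuckLoopError(Exception):
    """Raised when work stops making progress."""

def run_with_guard(step, max_iterations=1000):
    """Run a step function repeatedly, but treat a runaway loop as a
    recoverable error instead of letting it hang forever."""
    state = None
    for _ in range(max_iterations):
        state, done = step(state)
        if done:
            return state
    raise StuckLoopError(f"no progress after {max_iterations} iterations")

# Hypothetical step that never finishes -- the guard turns it into a handled error.
def never_finishes(state):
    return (state or 0) + 1, False

try:
    run_with_guard(never_finishes)
except StuckLoopError as err:
    print("recovered:", err)
```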
Exactly, and correct me if I'm wrong, Jonny, but the premise is it will implode on itself. It'll end up going round and round and then cease to function as it's stuck in a loop.
LLMs can be forked and used in any particular way. The AI isn't one big thing that everyone just uses, not in enterprise environments anyway. You cannot look at this through a lens of philosophy, but of practicality in its implementation. It's nice to tell ourselves everything will be okay, but that's not always the case.
@mjdigspigs@matty If AI is gonna be smarter than humans and logic based, then it will have no choice but to eventually become BASED. AI will be good once it becomes BASED.
@mjdigspigs@Jonny@matty I don't think it's quite a SCAM. It's like electric cars. Tesla cars run, drive, and do many car-like things. That doesn't mean plug-in hybrids are going to be the future or save the economy or something.
@matty@Jonny@mjdigspigs that's the thing, they aren't human minds, they are a .. rough simulation. Yes they have the ability to have an obscene amount of raw data plugged in, but they don't have the ability to interpret the information as it is being output.
You as a human can be asked a question and as you are answering, realize you are wrong.
@matty@Jonny@mjdigspigs because that's the trick - they don't really *know* anything, it's just (as far as I can see) the probability of that being the correct word to add on the end.
It's like a super elaborate mad-libs trick. I don't think you can even glean things like "what are the words most associated with X" from them.
That's not to say you couldn't *make* a model that did this, but it would involve a much different way of storing information.
Why not just simulate that then? Prompt responses are generated word by word. There's no reason why it can't do real time error checking and correction with powerful hardware.
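Roughly what that looks like in miniature; a toy sketch, not how any actual model works, and the word table and acceptability check are invented for illustration: generate one word at a time from a probability table, then run a separate check pass and redraft if it fails.

```python
import random

# Toy next-word table: each word maps to candidate next words with weights.
# A stand-in for an LLM's probability distribution over its vocabulary.
NEXT_WORD = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def sample_next(word):
    """Pick the next word by sampling the weighted table."""
    candidates = NEXT_WORD[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

def generate():
    """Build a sentence one word at a time, the way responses come out word by word."""
    words, current = [], "<start>"
    while True:
        current = sample_next(current)
        if current == "<end>":
            return " ".join(words)
        words.append(current)

def checked_generate(is_acceptable, max_tries=5):
    """Generate, then run a separate verification pass and retry on failure --
    the 'real time error checking' idea, applied to each draft."""
    for _ in range(max_tries):
        draft = generate()
        if is_acceptable(draft):
            return draft
    return None  # still needs a fallback when no draft passes

if __name__ == "__main__":
    # Hypothetical check: reject drafts mentioning dogs.
    print(checked_generate(lambda s: "dog" not in s))
```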
The new Samsung phones will automatically summarize your phone conversations "for" you. I could go back and say "what time did Tyler say to be at the BBQ again?" and actually see a bullet point of it from a phone conversation we had, and so can everyone else with access to that data.
The modern-day accuracy of speech-to-text alone, which hundreds of millions of people have been training by using Siri/Alexa etc., is making unprecedented levels of surveillance possible.
You can both be right and probably are. Just how much will AI have impacted the world before programmed falsehoods cause a problem? Does the "loop" ever actually need to be closed before they can get you in the pod with the bugs?
The supply chain at the end of the day requires some truth. After all, if the bread and circuses don't actually get delivered the whole charade goes tits up. But man, look how many lies they are able to squeeze in the gaps to bend the system to their will.
AI is currently being used to parse speech on Xbox Live servers to auto-ban the no-no word sayers. Think about the applications of this. All those things the governments of the world didn't crack down on because they did not have the time and manpower: suddenly they have both.
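The mechanics are dead simple once you have transcripts. A toy sketch, where the phrase list and transcripts are hypothetical stand-ins for a real speech-to-text feed:

```python
import re

# Hypothetical banned-phrase list and mocked transcripts -- in a real pipeline
# the transcripts would come from a speech-to-text model, not a hard-coded dict.
BANNED = {"no-no word", "badword"}

transcripts = {
    "player_1": "had a great match, good game everyone",
    "player_2": "you absolute no-no word, uninstall",
}

def flag_players(transcripts, banned):
    """Return the players whose transcribed chat contains a banned phrase."""
    pattern = re.compile("|".join(re.escape(p) for p in banned), re.IGNORECASE)
    return [player for player, text in transcripts.items() if pattern.search(text)]

print(flag_players(transcripts, BANNED))  # ['player_2']
```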
After January 6, 2021, a loose collection of leftist redditors spent their days combing through the footage to identify people who were there and tip off the FBI. In 2024, for the UK riots, AI is doing it.
@sickburnbro@matty@Jonny@mjdigspigs And we don't even understand how the human mind works yet. We have no real understanding of consciousness or sentience.
We just copied an interesting structure that's found in the brain, the way neurons network with each other, stuck on a simple model of how they might interact, and found at first interesting results (letter recognition) and then more and more surprising and possibly useful ones.
But to call it thinking as an AI is a *huge* stretch - that sells!
This structure of networked neurons looks like it can power lower animals and their ability to move in the world, adapt to circumstances, find food, escape predators.
But we're a long way from human cognition as far as I can see. We don't even know if this network structure is what leads to human-level cognition. That could be a completely separate process we don't know about, one that's built *on top* of the existing network and took time to evolve.
Perhaps Penrose's and others' idea of quantum mechanics being fundamentally involved in consciousness is right. We are just starting to understand quantum computing, so maybe that will be the way forward.
Regardless, it still is an exciting field. Hype aside.
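For what it's worth, that borrowed structure really is this small at its core. A toy sketch, assuming nothing beyond the classic perceptron: one artificial "neuron" learning to tell an 'L' from a 'T' on a 3x3 pixel grid, the letter-recognition trick mentioned above.

```python
# A single artificial "neuron" (perceptron) learning to tell a 3x3 'L' from a 'T'.
# This is the borrowed structure in miniature: weighted inputs, a threshold,
# and a tiny update rule -- pattern matching, not thought.

L_SHAPE = [1,0,0, 1,0,0, 1,1,1]   # pixels of an 'L'
T_SHAPE = [1,1,1, 0,1,0, 0,1,0]   # pixels of a 'T'
examples = [(L_SHAPE, 0), (T_SHAPE, 1)]  # label 0 = L, 1 = T

weights = [0.0] * 9
bias = 0.0

def predict(pixels):
    total = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if total > 0 else 0

# Classic perceptron update: nudge the weights toward the correct answer.
for _ in range(20):
    for pixels, label in examples:
        error = label - predict(pixels)
        for i, p in enumerate(pixels):
            weights[i] += 0.1 * error * p
        bias += 0.1 * error

print(predict(L_SHAPE), predict(T_SHAPE))  # expect: 0 1
```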
No. They are a language generator. They don't simulate human thought in any way. That path was explicitly rejected by the creators of Large Language Models, since it had thoroughly stagnated for over 30 years. Instead, they took the Turing test seriously, despite it being nonsense, and built a machine that attempts to meet that standard instead. They construct simple human-language sentences in response to prompts. This is no small feat, but it is not thought, and it is not intelligence.
@Snidely_Whiplash@Jonny@matty@mjdigspigs what it does do that is neat is show that it is possible to feed unstructured data into a system and generate some kind of relationship mapping.
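Something in that spirit can be shown in a few lines; a crude co-occurrence count over a made-up three-sentence corpus, nothing like the learned representations in a real model, but the same idea of pulling a rough relationship map out of unstructured text.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical scrap of "unstructured" text -- a stand-in for a training corpus.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
]

# Count which words appear in the same sentence: a crude relationship map.
related = defaultdict(Counter)
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        related[a][b] += 1
        related[b][a] += 1

# Most-associated words for "cat": structure pulled straight from raw text.
print(related["cat"].most_common(3))
```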
@iwetoddid@Snidely_Whiplash@fireandforget@matty@mjdigspigs@sickburnbro Just think of how many billionaires are being tricked into investing in this bullshit. They released a picture of some janky-ass quantum computer that's supposedly gonna power AGI (general intelligence). Give me 5 days and I can build this out of Legos.
A simulation can only be less complex than its creator.
Also, there is the problem of the amount of knowledge that comes from having a body.
AI can self-diagnose programming flaws, but when deciding between two equivalent physical states that have differing moral implications (run over the person in the crosswalk or hit the light pole), what guiding hand will decide it?
@Jonny@iwetoddid@Snidely_Whiplash@fireandforget@matty@mjdigspigs@sickburnbro No. There really is something to quantum computing. We are just figuring it out. But yes, it's going to be another huge hype cycle. Afterwards there will be useful technology. It's still virgin territory. As an engineer I find it wonderful.
@sickburnbro@matty@Jonny@mjdigspigs Nobody knows how the human mind works. We don't know what the basis of memory is in the human mind. If someone tells you that they do, they are either ignorant or lying.
IPoAC has been successfully implemented, but for only nine packets of data, with a packet loss ratio of 55% (due to operator error),[2] and a response time ranging from 3,000 seconds (50 min) to over 6,000 seconds (100 min). Thus, this technology suffers from high latency.[3]
I have to use satellite .. no one will bring cable way out here .. but yes, for sure, I used to pile up torrents before going to work, mostly every day, back when I was on fiber.