@mrsaturday @Tattooed_Mummy every good picture you see from a local AI probably had to be curated by a human sifting through all the garbage ones that got produced. I've had better luck with the corporate ones, but they still completely mess up for me: when I ask for something like a person sitting in a car, the car comes out completely mangled. Also, the corporate ones are so censored I often can't make them do anything. I tried to get the Bing one to do something recently and it argued with me constantly; things like military uniforms it wouldn't do.
@Tattooed_Mummy I can tell you this: watching Stable Diffusion rapidly get left behind by closed, censored corporate picture generation is really depressing
@nikatjef @root42 @tob @Tattooed_Mummy They are unable to tell the truth. All they can do is mimic the truth, with no actual mechanism to ensure that they DO tell the truth. Relying on AI for facts is not a good idea, since they cannot determine what a fact is.
As someone else put it on here, we have burned billions of dollars, and are destroying the environment with enormous data centers consuming drinking water and electricity, to produce tech that makes computers pretend they can't do math.
No one is claiming they are sentient or that they understand what they are saying, but the fact is that LLM systems are outscoring humans on various collegiate tests, and they are diagnosing diseases that medical professionals are missing.
But it is not intelligent. It can only randomly put together responses based on its training and input; nothing it says is anything but random words strung together to SOUND like a human. There is no medical or psychological knowledge in there, just random responses based on the data it was fed while being trained to sound as if it knows something.
People who mistake RNG-driven what-if code for "it is almost sentient" have no clue.
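To make that objection concrete, here is a toy sketch (the token list and probabilities are invented for illustration, not taken from any real model) of what next-token sampling looks like: each output step is a weighted random draw, with no step that checks truth.

```python
import random

# Toy illustration (not a real model): an LLM's output step is a
# weighted random draw over candidate next tokens. The probabilities
# below are made up for the example.
next_token_probs = {
    "aspirin": 0.40,
    "ibuprofen": 0.35,
    "arsenic": 0.25,  # fluent-sounding but dangerous continuation
}

def sample_next_token(probs, seed=None):
    """Draw one token according to its probability weight."""
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Nothing here checks whether the sampled token is true or safe;
# statistical plausibility (the weight) is the only criterion.
print(sample_next_token(next_token_probs))
```

The point of the sketch: even the dangerous token gets drawn a quarter of the time, because the sampler has no notion of correctness, only of likelihood.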
@nikatjef @tob @Tattooed_Mummy My point. And memory is already on its way and will be another game changer for current models. I do think that AI will still make great strides, but I don't see the point, or better: I see a lot of drawbacks and dangers. We are totally losing sight of the fact that whatever we do, we should do it for humanity, not just "because".
@tob @root42 @Tattooed_Mummy I am not so sure about that. 30 years ago Eliza was about the best you could get for "therapy" chat bots; now there are things like Replika, Elomia, Mindspa, etc. We have come a long way in the last 30 years. Note that I am not saying a good way, just a long way.
@Tattooed_Mummy we are trying to replace humans with humanlike computer programs. What could possibly go wrong? Calling it now: we will soon have equivalents of babysitters, teachers, and psychologists for AIs, assuming the next steps in AI development are memory and self-correction.
The "game changers" aren't anywhere except in people's fantasies.
What you have instead is the masked labor of humans: e.g. Nigerian workers paid $2/hour sifting through thousands of horrific images in the vain hope that Midjourney won't produce snuff porn.
@tob @nikatjef @root42 @Tattooed_Mummy The AI over here that was trialed for pharmacies (as a test, double-checked at every point by human pharmacists) to identify medications and recommend their correct use was very quickly taken offline, since it turned out to be severely wrong on many occasions, often in potentially deadly ways.
@WhyNotZoidberg @nikatjef @root42 @Tattooed_Mummy Good thing history doesn't have anything to say about civilizations collapsing because of the misallocation of investment based on the whims of a disconnected and increasingly bizarre class of elites.