AI is an eldritch horror. It is an incomprehensibly vast being without physical form, not truly alive or aware of anything, though it consumes human knowledge and spits out a perversion of it. Those who get too close to it are driven mad and worship it as a deity that will lead them to salvation, even though it is not even aware of them and will lead them to destruction instead.
@phil@eniko The only uses for it are the ones where "producing information-shaped drivel at scale" is the goal. That's mainly the domain of deception/propaganda. It probably also "works" in domains like games, where you need something that gives the appearance of something it's not, but folks rightly hate it there because it's cheap, regurgitative, and undermines the actual creative work of making much better things.
@eniko@mastodon.gamedev.place I think that's a major exaggeration. There are valid uses for it. The hype isn't healthy, and training these things is destroying the planet for sure, but we get free tools out of it, and inference costs less than a dollar a month (no, I don't use them all that much). I don't think the problem is with users; it's corporations and regulators who should be held accountable.
@phil@eniko It's well established, both empirically and on a sound theoretical basis, that LLMs *cannot* summarize/review. But it's clear that you're already on the bandwagon and you're here to inject subtle (which becomes not-so-subtle as soon as you're challenged) pro-"AI" rhetoric into conversations, not to have your mind changed on something you're wrong about. 🙄
I genuinely believe there's value in LLMs, provided they're used in smart ways. They're dumb and bulky, but if you build massive walls around them and create a context in which the only possible option is to meet your requirements, it's quite possible to get useful results out of them.
I shared a short snip of how I use this in the quoted post; not sure if that works across fedi, so hard link also.
My point here isn't that you're entirely wrong; I do see a lot of unscrupulous people using them that way.
My point is that for me, and many others, LLMs are a viable tool for self-reflection, and they can speed certain things up. For example, I can have one review my sleep or exercise habits, or pull up all the interactions I've had with someone and remind me of the important bits.
Ensuring an LLM works only on relevant data does make it more accurate. It still misses things sometimes (like calling the novel I read a manga) or gets stuck in loops (especially around dates), but for the most part that's easily fixed with slightly more specific instructions.
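To make "walls" concrete, here's a rough sketch of the pattern, assuming a local Ollama server; the endpoint is Ollama's real generate API, but the model name and the log format are just illustrative, not my actual setup:

```python
# Sketch: constrain a local model to nothing but the relevant records.
# Assumes a local Ollama server is running and a small model is pulled;
# the model name and the log format below are illustrative.
import json
import requests

def review_records(records: list[dict], question: str) -> str:
    # The "wall": the model sees only these records and one narrow task.
    context = "\n".join(json.dumps(r) for r in records)
    prompt = (
        "Below is a log of personal records, one JSON object per line.\n"
        "Answer the question using ONLY these records. If the answer is "
        "not in the records, say so.\n\n"
        f"Records:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's generate endpoint
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

sleep_log = [
    {"date": "2024-05-01", "slept_hours": 6.5},
    {"date": "2024-05-02", "slept_hours": 8.0},
    {"date": "2024-05-03", "slept_hours": 5.0},
]
print(review_records(sleep_log, "On which nights did I sleep less than 7 hours?"))
```

The point is the model never sees anything except the records and one narrow question, so there's very little room for it to wander.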
@phil@eniko I'm not offended by you running local models. What I object to is you injecting gratuitous pro-"AI" narratives, with misleading claims about utility, into public conversations about the topic.
It's a tool with a very, very narrow scope of applications. Just because some people are abusing it doesn't mean everyone is.
I run local models, maybe for 5-10 minutes every few days. The idea that this offends you is genuinely funny to me.
Every single thing I publish, I write myself (unless it's an example of the tool working, in which case it's clearly noted as such).
Your rage is best directed at regulators and at the corporations training these models; that's where all the power draw and environmental damage happens.
For an individual user like me, the damage is roughly equivalent to leaving a lightbulb on for a few hours, or to playing a video game for those same 5-10 minutes.
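Rough numbers, since people always push back on this (all figures are assumptions for the sake of the estimate, not measurements):

```python
# Back-of-the-envelope energy estimate for one local inference session.
# All figures are assumptions for illustration, not measurements.
gpu_watts = 300                                # assumed GPU draw under inference load
session_minutes = 10                           # upper end of my 5-10 minute sessions
energy_wh = gpu_watts * session_minutes / 60   # 50 Wh per session

bulb_watts = 10                                # a typical LED bulb
bulb_hours = energy_wh / bulb_watts            # ~5 hours of that bulb

print(f"{energy_wh:.0f} Wh per session ~ {bulb_hours:.0f} h of a {bulb_watts} W bulb")
```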
From that perspective, your attitude is rather jarring: you act as if I'm somehow responsible for what Google/MS/Elon/OpenAI/etc. are doing.