I don't directly use these "services", but I think about them from time to time because they're going to impact me whether I want to use them or not. I was thinking about "indirect prompt injection" and other means of controlling the input to these things, and I just realized the whole concept as implemented is basically "garbage in" with a dash of "trust me bro" marketing.
We've set these things up so that we don't control all the direct inputs. We don't control or curate the training data. We don't control or inspect the implementation. Yet we're expected to hand over decision-making and the power to take action?
Random generators, prompts, templates, code completion
These aren't new tools that LLMs made up. These aren't, like, science fiction! They were normal parts of our lives before LLMs. But now we're expected to replace all of those tools with LLMs and to believe they'll do it better, despite them being demonstrably worse and wildly expensive.
@raven Your example is harmless enough (assuming other problematic aspects could be mitigated). I mean, artists have been using various techniques to prompt themselves for quite some time. Decks, random words, games, etc.
It's just... That's not how these are being marketed. That's not the problem they claim they are solving. I hope I'm wrong, but I worry we cannot disentangle the harmless (perhaps even helpful!) aspects from the harmful ones in this case.
@cstanhope But I see the appeal! I really wish there weren't so many ethical concerns to deal with, because it is just amazing within its limitations. I'd never trust it to write code I couldn't knock out myself, or to accurately present facts. But working on _fiction_, where there are no real stakes?
I use random idea generators to challenge me to think in different directions, and an LLM is like the pinnacle of idea generators, all my tools rolled into one.
@cstanhope Now in my case, because we were brainstorming fictional elements, it really starts falling apart because of the "context window". It can only remember so much of the past conversation and incorporate that, so it kept _forgetting_... but it never acted as if it forgot. "Oh, yeah, I remember that. Here it is again..." and it got details wrong (4 legs, 2 arms becomes 2 legs, 4 arms), and it made up new "facts".
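For anyone curious why that happens, here's a rough sketch. This is purely illustrative, not any vendor's actual implementation, and the token budget is a made-up number: once the conversation outgrows the window, the oldest turns simply get dropped before the model ever sees them again, so it can't "remember" them no matter how confidently it answers.

```python
# Illustrative only: a hypothetical chat history being trimmed to a fixed
# token budget. Real systems differ in the details, but the effect is the
# same: older turns silently fall out of what the model can "remember".
MAX_TOKENS = 4096  # made-up window size

def fit_to_window(turns, count_tokens):
    """Keep only the most recent turns that fit within MAX_TOKENS."""
    kept, total = [], 0
    for turn in reversed(turns):       # walk backwards from the newest turn
        cost = count_tokens(turn)
        if total + cost > MAX_TOKENS:
            break                      # everything older than this is dropped
        kept.append(turn)
        total += cost
    return list(reversed(kept))        # back to chronological order

# e.g. fit_to_window(history, lambda turn: len(turn.split()))
```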
@cstanhope I think the most dangerous part is how seductive it is... it really feels like I'm having a conversation with a real person, who is very helpful about elaborating on my ideas, and it's difficult to not feel like it _understands_ what we're talking about. And that's the real danger, thinking that it is doing anything but stringing together very advanced "most likely next word" responses... but it really doesn't feel like that's what it's doing.
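And if "most likely next word" sounds abstract, here's a toy sketch of the idea. This is a deliberately silly lookup table, nothing like a real neural model, and all the words and probabilities are invented: at each step the program just samples the next word from a probability distribution, with no model of meaning anywhere in it.

```python
import random

# Toy "language model": made-up next-word probabilities keyed by the last word.
# Purely schematic; real LLMs learn these distributions with neural networks.
NEXT_WORD = {
    "the":   [("ship", 0.5), ("alien", 0.3), ("crew", 0.2)],
    "ship":  [("landed", 0.6), ("vanished", 0.4)],
    "alien": [("smiled", 0.7), ("vanished", 0.3)],
}

def generate(start, max_steps=5):
    """Sample one word at a time from the 'most likely next word' table."""
    words = [start]
    for _ in range(max_steps):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the alien smiled"
```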
@cstanhope I don't use them per se, but I have experimented with them to understand first-hand the problems with them, and it's hard to believe people trust them.
I give it some writing prompts, asking it to brainstorm some science fiction setting details with me... and while it's very cool at first, it can't accurately _remember_ what we've been discussing. It straight up gaslights me about what it said earlier, while being extremely apologetic.
@cstanhope Exactly so... it's one thing to create a fictional setting, it's another to come up with legal precedents for a court case, and have the LLM produce _fictional_ cases that look legit. Or to write code. I worry about someone deciding to use it to diagnose illness.