@weirdwriter That is brilliant and scary omg. I read an article recently (maybe through you?) saying that LLMs are inherently insecure because input and commands can't be separated, and evidently there's no way to stop those prompt injection attacks that were in the news.