I also dive into the many ways poorly integrated LLM-based chatbots have already proven to be huge security liabilities.
There is so much incompetence. Leaving users' prompts (say, ones containing sexual fantasies) exposed on the Internet, or indexable by search engines…
Or take Microsoft 365. Not only did Copilot ignore file access controls; not only was the setting to disable AI agents in M365 ineffective; but you could simply ask Copilot not to include your actions in the audit log, and it would comply!
5/🧵