Why so much prompt injection in AI? 1. We don't follow the security engineering design principle "economy of mechanism," and 2, input to LLMs mixes control and data with impunity. We know better. #MLsec #infosec #security
https://www.darkreading.com/vulnerabilities-threats/llms-on-rails-design-engineering-challenges