I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other secrets into fairly new and questionable agentic AI tools or platforms. So many companies are like, "oh, we're taking a wait-and-see approach to adopting AI." Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication, or limited auth with no 2FA.
BrianKrebs (briankrebs@infosec.exchange)'s status on Thursday, 26-Feb-2026 05:45:47 JST
BrianKrebs
- Rich Felker repeated this.
BrianKrebs (briankrebs@infosec.exchange)'s status on Thursday, 26-Feb-2026 05:45:48 JST
BrianKrebs
Agentic AI-based services are the new Shadow IT. Change my mind.
Steve's Place and Rich Felker repeated this.
Mike Sheward (secureowl@infosec.exchange)'s status on Thursday, 26-Feb-2026 06:26:08 JST
Mike Sheward
@briankrebs In several pen tests I've done over the last 18 months, one of the most interesting trends has been the sudden increase in the number of people I've found who have thrown API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.
A few weeks ago I found a GitHub repo where a developer had trained a model on a dump of their own corporate emails, and all of those emails were just sitting in public on GitHub, full of things like vendor SFTP creds. It's a free-for-all.
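The kind of sweep described above can be approximated with a simple pattern scan over a checked-out repo. This is a minimal illustrative sketch, not how any particular pen tester works: real tools such as gitleaks or trufflehog ship far larger rule sets and also walk git history. The patterns below are a small assumed sample (AWS access key IDs, GitHub personal access tokens, and generic `api_key = "..."` assignments).

```python
import re
from pathlib import Path

# Illustrative patterns only -- production secret scanners use
# hundreds of rules plus entropy checks and git-history traversal.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Generic api_key assignment": re.compile(
        r"(?i)\bapi[_-]?key\b\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_tree(root: str):
    """Walk a directory tree and report (path, line number, rule name)
    for every line that matches one of the secret patterns."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings
```

Pointing `scan_tree()` at a clone of a public repo is enough to surface the accidental key commits described in the thread, which is also why automated scrapers find leaked credentials within minutes of a push.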