Your Machine Is the Blast Radius
LiteLLM got compromised on PyPI. The malicious payload harvested SSH keys, cloud tokens, database passwords, crypto wallets, and shell history, then tried to move laterally through Kubernetes clusters and install a persistent backdoor. The only reason anyone noticed was that someone's Cursor IDE pulled it in as a transitive dependency through an MCP plugin, and the machine ran out of RAM from a fork bomb. Pure luck!
I stopped installing random things on my machine about eight years ago. It seemed extreme at the time. No dev tools, no package managers, no language runtimes running locally. My computer is simply a window to other worlds, with a minimal attack surface. Is this setup bulletproof? No, and I'm not claiming it is. But the risk of supply chain attacks like this one drops dramatically when your local machine has almost nothing on it worth stealing.

What do I use instead? Cloud environments! I started with AWS Cloud9, moved to GitHub Codespaces, and now I run hardened Firecracker VMs with tunnels. The key property is always the same: disposability. If an environment gets compromised, I throw it away and spin up a clean one. It exists to serve a purpose, and when that purpose is done - or when trust is broken - it gets destroyed.

This setup won't protect you from everything. A compromised cloud environment can still leak whatever secrets were loaded into it. But the blast radius is contained. It doesn't touch my personal files, my browser sessions, or my password manager. A cloud environment is recreatable. A local machine is not. That distinction matters more every year.
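The disposability property can be sketched in a few lines. This is a toy illustration, not my actual Firecracker setup: the function name and the workspace mechanism (a throwaway temp directory) are hypothetical stand-ins for a real VM lifecycle. The point is the shape of the contract: the environment is created for one task and destroyed unconditionally afterward, whether the task succeeded, failed, or was compromised.

```python
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path


def run_in_disposable_workspace(command: list[str]) -> str:
    """Run a command inside a throwaway directory that is always destroyed.

    Sketch of the disposability contract: the environment exists only for
    the duration of the task, then gets wiped no matter what happened
    inside it. Nothing persists for an attacker to come back to.
    """
    workspace = Path(tempfile.mkdtemp(prefix="ephemeral-"))
    try:
        result = subprocess.run(
            command,
            cwd=workspace,
            capture_output=True,
            text=True,
            timeout=300,  # even a hung (or malicious) task has a hard deadline
        )
        return result.stdout
    finally:
        # Destroy the environment unconditionally - success, failure,
        # or compromise, the workspace does not outlive the task.
        shutil.rmtree(workspace, ignore_errors=True)


if __name__ == "__main__":
    print(run_in_disposable_workspace([sys.executable, "-c", "print('done')"]))
```

A real implementation would tear down a VM or container instead of deleting a directory, but the control flow is the same: teardown lives in the `finally`, so no code path can skip it.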
The rise of local AI coding agents, like Claude Code, Gemini Code, Codex, and more recently OpenClaw, represents a massive expansion of the local attack surface. These agents run on your machine with broad filesystem access, network access, and often the ability to execute arbitrary commands. They pull in dependencies and run untrusted code. If anything in that chain is compromised, the agent becomes the delivery mechanism and your machine becomes the target. That is exactly what happened with LiteLLM. Nobody chose to install it. It just showed up as a transitive dependency. One MCP plugin pulled it in, and suddenly the machine was exfiltrating credentials.
This is why I believe local agents are just too much risk. At least for me. They should run in isolated, ephemeral compute environments - environments where, even if compromised, they get destroyed after five minutes of inactivity. No persistent state, no accumulated credentials, no lateral movement opportunities. And these agents should be given only the tools they need: no more, no less, exactly as much as the job requires. Every additional capability is additional attack surface. Every extra permission is a potential exfiltration path. The LiteLLM compromise won't be the last. AI tooling is the new target because AI tooling is where the valuable credentials live. The answer isn't to stop using these tools. It's to stop running them where the blast radius includes everything you care about.
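The idle-destruction policy above can also be sketched. This is a minimal model, not any particular platform's implementation: the class name, the `touch`/`reap_if_idle` methods, and the flag standing in for real VM teardown are all illustrative.

```python
import time

IDLE_TIMEOUT_SECONDS = 300.0  # destroy after five minutes of inactivity


class EphemeralEnvironment:
    """Toy model of an agent sandbox with an idle-destruction policy."""

    def __init__(self, timeout: float = IDLE_TIMEOUT_SECONDS) -> None:
        self.timeout = timeout
        self.last_activity = time.monotonic()
        self.destroyed = False

    def touch(self) -> None:
        """Record agent activity, resetting the idle clock."""
        self.last_activity = time.monotonic()

    def reap_if_idle(self) -> bool:
        """Destroy the environment if it has been idle past the timeout.

        In a real system this would tear down the VM or container;
        here we just flip a flag. A reaper loop would call this
        periodically for every live environment.
        """
        if not self.destroyed and time.monotonic() - self.last_activity > self.timeout:
            self.destroyed = True
        return self.destroyed
```

The design point is that destruction is the default: an environment stays alive only while something is actively touching it, so a compromised-but-abandoned sandbox has nothing to persist in and a short window to exist at all.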
A practical note: this is one of the reasons ChatBotKit runs agent workloads in isolated, ephemeral compute. The environment is disposable by design. If it gets compromised, it gets destroyed. That's not a feature. It's the intended architecture.