If you use macOS, it has a great sandboxing system built in (albeit undocumented). Anthropic are starting to experiment with using it in Claude Code to eliminate permission prompts. Claude can choose to run commands inside the sandbox, in which case they execute immediately without prompting.
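To make that concrete, here's a rough sketch of the mechanism using the (deprecated but still shipping) sandbox-exec tool with an inline SBPL profile, wrapped in Swift. The profile, the workspace path, and the command are all made up for illustration; it just shows the usual deny-writes-outside-the-project, deny-network shape, not what Claude Code actually ships:

    import Foundation

    // Rough illustration only: launch a command under a Seatbelt profile via
    // /usr/bin/sandbox-exec (deprecated, but still present on current macOS).
    // The profile is a guess at the general shape: allow everything, then
    // deny network and deny writes outside an assumed workspace directory.
    let workspace = "/Users/me/project"  // hypothetical project directory

    let profile = """
    (version 1)
    (allow default)
    (deny network*)
    (deny file-write*)
    (allow file-write* (subpath "\(workspace)"))
    (allow file-write* (subpath "/private/tmp"))
    """

    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/usr/bin/sandbox-exec")
    task.arguments = ["-p", profile, "/bin/sh", "-c", "make test"]

    do {
        try task.run()
        task.waitUntilExit()
        print("exit status:", task.terminationStatus)
    } catch {
        print("failed to launch:", error)
    }

The idea is that a write outside the workspace or any network call just fails with "Operation not permitted" rather than triggering a permission prompt.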
I've thought about making one of these for other coding agents. It's not quite as trivial as it looks, but I know how to do it, including on Windows, although it seems quite a few coding agents just pretend Windows doesn't exist, unfortunately.
The lack of documentation for that system is so frustrating! Security features are the one thing where great documentation should be table stakes; otherwise we're left wildly guessing at how to keep our systems secure!
It's an internal system that exposes implementation details all over the place, so I understand why they do it that way. You have to know a staggering amount about the architecture of macOS to use it correctly. That isn't a reasonable expectation to have of developers, which is why the formal sandbox API is exposed as a set of permissions you request, while the low-level SBPL is reserved for exceptions, sandboxing OS internals, and various other special cases.
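For contrast, the supported path is just a handful of entitlement keys in your app's .entitlements plist rather than a hand-written profile. The keys below are real App Sandbox entitlements; the particular selection is only an example of what "a set of permissions you request" looks like:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- Opt the app into the App Sandbox -->
        <key>com.apple.security.app-sandbox</key>
        <true/>
        <!-- Request outbound network access -->
        <key>com.apple.security.network.client</key>
        <true/>
        <!-- Read/write access to files the user explicitly picks -->
        <key>com.apple.security.files.user-selected.read-write</key>
        <true/>
    </dict>
    </plist>

Xcode manages these for you when you enable the App Sandbox capability, which is exactly the point: you declare what you need and never touch SBPL.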
Is AI a special case? Maybe! I have some ideas about how to do AI sandboxing in a way that works more with the grain of macOS, though god knows when I'll find the time for it!