LLMs are roughly like employees on their first day of work, if they didn't care about being fired and faced no penalties for anything they did. Some percentage of humans in that position would pull the nearest fire alarm just for fun, or worse.
This seems like the obvious outcome, considering all the hype. The more capable the model, the more damage it can do with whatever access it's given. And there's literally zero way to eliminate that risk. So who's going to tell your gung-ho CEO that the fancy features he wants can't be built without a giant security hole?
They weren't kidding about the dangers of hooking MCP servers up to internal databases. You see posts on Reddit all the time from people who connected an LLM to a production server and lost everything.
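One partial mitigation (not a fix, since prompt injection can still leak data) is to never hand the model a writable connection in the first place. A minimal sketch in Python, assuming a hypothetical `run_query` tool handler that an MCP-style server might expose; the database file and table are made up for the demo:

```python
import sqlite3

# Demo setup only: create a small database to stand in for "production".
with sqlite3.connect("reporting.db") as setup:
    setup.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)")
    setup.execute("INSERT INTO users VALUES (1, 'alice')")

# Open the same database read-only at the driver level (SQLite URI mode),
# so even a prompt-injected "DROP TABLE users" cannot modify anything.
conn = sqlite3.connect("file:reporting.db?mode=ro", uri=True)

def run_query(sql: str) -> list[tuple]:
    """Hypothetical tool handler exposed to the model."""
    # Belt and braces: reject non-SELECT statements up front,
    # in addition to relying on the read-only connection.
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    return conn.execute(sql).fetchall()

print(run_query("SELECT name FROM users"))  # works
# run_query("DROP TABLE users")  # raises ValueError; and even if the string
# check were bypassed, SQLite itself would refuse to write a read-only database
```

The point of enforcing it at the connection level, not just with the string check, is that you don't have to win an argument with a language model: the driver simply has no write capability to misuse.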