As agents move from intent to execution, the security model for Agentic AI shifts from interpreting intent to traditional systems hardening. Once an LLM can invoke tools and assume identities, the capabilities we grant an agent become the primary attack surface. This post builds on my first piece, [Loss of Intent as a Failure Mode in OWASP's Agentic AI Risks](/blog/loss-of-intent-as-a-failure-mode-in-owasp-agentic-ai-risks-2026). Here, I focus on the second bucket in the [OWASP Top 10 for Agentic Applications 2026](https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/): agents with too much power.
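To make "capabilities as attack surface" concrete, here is a minimal sketch of the idea, not taken from OWASP or any particular agent framework; all names are hypothetical. The runtime only dispatches tool calls that appear in an explicit grant list, so the agent's reachable capabilities are exactly the keys of that list:

```python
# Minimal, illustrative sketch of capability scoping for an agent runtime.
# No specific framework or API is assumed; all names are made up.

class CapabilityError(Exception):
    """Raised when the model requests a tool it was never granted."""

def make_dispatcher(granted_tools):
    """Return a dispatcher that executes only explicitly granted tools.

    `granted_tools` maps tool names to callables. Anything the model
    asks for outside this dict is rejected, never silently looked up.
    """
    def dispatch(tool_name, **kwargs):
        tool = granted_tools.get(tool_name)
        if tool is None:
            raise CapabilityError(f"tool {tool_name!r} not granted to this agent")
        return tool(**kwargs)
    return dispatch

# The attack surface is the keys of this dict: a read-only agent gets
# read-only tools and nothing else.
dispatch = make_dispatcher({
    "read_file": lambda path: f"<contents of {path}>",
})
```

The point of the sketch is that narrowing the grant list shrinks the attack surface directly, independent of how the model interprets its instructions.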