Blog

All Posts

  • Published on
Last week, *The Register* reached out to the major AI application vendors—Microsoft, SAP, Oracle, Salesforce, ServiceNow, and Workday—and asked a simple question: How much liability do you accept when your AI agents make bad decisions? Microsoft and SAP declined to comment. Oracle, Salesforce, ServiceNow, and Workday didn't respond. That silence is your answer. For every CISO, CRO, or head of legal deploying AI today, that silence has a direct consequence: You are the insurer of last resort for your vendor's model.
  • Published on
    On March 18, Meta's internal AI agent exposed sensitive user and company data to engineers who shouldn't have seen it. The exposure lasted two hours. Meta classified it as Sev-1. Here's the part that should concern every security architect: the agent was fully authenticated. It had valid credentials. It passed every identity check. And it still caused a data breach. This is the post-authentication gap.
  • Published on
Last year, researchers disclosed EchoLeak (CVE-2025-32711), a zero-click indirect prompt injection in Microsoft 365 Copilot. A poisoned email forced the AI assistant to silently exfiltrate sensitive business data to an external URL. The user never saw it, never clicked a link, and never authorized the transfer, but the data left anyway. Most leaders I talk to think they are "covered" because their LLM provider is SOC 2 compliant or has a signed DPA. However, in the eyes of the law, the liability remains with the deployer.
  • Published on
NIST's comment window on AI agent identity and authorization closes April 2. If you are deploying AI agents and haven't read the framework, this post is for you. Not because the comment window matters to your engineering roadmap, but because NIST just put formal language around a structural gap that most organizations are already sitting in.