LLM

  • Published on
    When applications interact with LLMs or MCP servers, every request and response is a potential attack surface. One way to add protection is to put a proxy at the edge, where you can inspect traffic and enforce security rules. Just as firewalls and WAFs shield web apps from SQL injection or XSS, a proxy can serve as an "AI firewall" to defend against risks like those in the OWASP Top 10 for LLMs. In this article, I walk through how to build such a firewall using Nginx, OpenResty, and Lua (a minimal sketch of the idea appears after this list).
  • Published on
    If you are writing conventional web interfaces, it may be worth pausing to rethink your strategy. Instead of hand-coding a static UI for every workflow, what if we could generate UI on demand, directly from a user's prompt? In this post, I explore the idea of intent-driven user interfaces that leverage AI to determine user intent and generate dynamic UIs on the fly (a rough sketch of the flow appears after this list).
  • Published on
    If you are exposing AI-enabled capabilities in your product and supporting external integrations, there is a good chance you will implement an MCP (Model Context Protocol) server to handle tool calls from LLMs. When you do, you will need to manage authentication, input validation, multi-tenant isolation, and more. Instead of starting from scratch, I have put together a starter kit that gives you all of this out of the box: JWT-based tenant authentication, input validation, per-function metadata, a cloud-native, container-ready setup with Docker, and the standard endpoints defined by the MCP spec (a sketch of the tenant-auth idea appears after this list).
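
To make the AI-firewall idea from the first post concrete, here is a minimal sketch of an OpenResty location that inspects request bodies for literal prompt-injection markers before proxying to an LLM upstream. The route, patterns, and upstream host are illustrative placeholders, not the article's actual ruleset.

```nginx
# Minimal sketch of the AI-firewall idea: inspect the request body in
# access_by_lua_block and reject matches before the request reaches the
# LLM upstream. The patterns and upstream host are placeholders.
location /v1/chat/completions {
    access_by_lua_block {
        ngx.req.read_body()
        local body = ngx.req.get_body_data() or ""
        -- Literal markers only; a real firewall would use a richer ruleset.
        local blocked = { "ignore previous instructions", "reveal your system prompt" }
        for _, marker in ipairs(blocked) do
            -- find() with plain=true matches the marker as a literal substring
            if body:lower():find(marker, 1, true) then
                ngx.log(ngx.WARN, "AI firewall: blocked suspicious request")
                return ngx.exit(ngx.HTTP_FORBIDDEN)
            end
        end
    }
    proxy_pass https://llm-upstream.internal;  # placeholder upstream
}
```

Running the check in the access phase means a blocked request never touches the model at all, which is the same property a WAF gives a web app.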
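For the intent-driven UI post, the flow can be sketched as: send the user's prompt to a model, ask for a machine-readable UI spec, validate it, and hand it to a renderer. In the Lua-flavored sketch below, `llm_complete` and the spec schema are hypothetical stand-ins, not anything the post prescribes.

```lua
-- Hypothetical sketch of the intent-to-UI flow. llm_complete stands in
-- for whatever LLM client is actually used, and the spec schema is
-- invented purely for illustration.
local cjson = require "cjson.safe"

local SYSTEM_PROMPT = [[
Given the user's request, respond ONLY with JSON:
{"components": [{"type": "form" | "table" | "chart", "title": "..."}]}
]]

local function ui_for_intent(llm_complete, user_prompt)
  local raw = llm_complete(SYSTEM_PROMPT, user_prompt)
  local spec = cjson.decode(raw)  -- cjson.safe returns nil on bad JSON
  if not spec or type(spec.components) ~= "table" then
    return nil, "model did not return a valid UI spec"
  end
  return spec  -- a renderer then maps each component type to a widget
end
```

The key design choice is that the model emits a constrained spec rather than raw markup, so the application keeps control over what actually gets rendered.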
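And for the MCP starter kit's JWT-based tenant authentication, here is an illustrative sketch of the check using the lua-resty-jwt library. The kit's actual implementation language is not stated in this summary, and the `tenant_id` claim name is an assumption made for the example.

```lua
-- Illustrative only: JWT-based tenant authentication as the starter kit
-- describes it, sketched with lua-resty-jwt. The tenant_id claim name
-- is an assumption, not taken from the kit.
local jwt = require "resty.jwt"

local function authenticate_tenant(token, secret)
  local result = jwt:verify(secret, token)
  if not result.verified then
    return nil, "invalid token: " .. (result.reason or "unknown")
  end
  local tenant_id = result.payload and result.payload.tenant_id
  if not tenant_id then
    return nil, "token carries no tenant claim"
  end
  return tenant_id  -- scope every subsequent tool call to this tenant
end
```

Deriving the tenant from a verified claim, rather than from anything in the request body, is what makes per-tenant isolation of tool calls enforceable.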