Intent-Driven User Interfaces

Authors
  • Parminder Singh

If you are building conventional web interfaces, it is worth pausing to rethink your strategy. Instead of hand-coding static UI for every workflow, what if we could generate UI on demand, directly from a user's prompt? As of June 2025, ChatGPT has approximately 400 million users, and these interactions with ChatGPT and other LLMs are no longer just about asking questions. More and more applications are integrating with LLMs via MCP (Model Context Protocol) or other mechanisms that let users perform actions inside those applications (set up an appointment in your calendar, review your GitHub repo, and so on). Users now expect a chat assistant in every piece of software they use, because they have grown accustomed to "chatting" to get work done.
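For instance, an MCP integration typically exposes an application action as a "tool" that the model can call on the user's behalf. Below is a minimal sketch using the MCP TypeScript SDK; the calendar tool name, its parameters, and the booking logic are made up for illustration, and the exact SDK surface may vary slightly between versions.

```ts
// Minimal MCP server exposing one hypothetical calendar action as a tool.
// Tool name, parameters, and the booking logic are illustrative placeholders.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "calendar-app", version: "0.1.0" });

server.tool(
  "create_appointment",
  // Parameter schema the LLM fills in from the user's request.
  { title: z.string(), startTime: z.string().describe("ISO 8601 datetime") },
  async ({ title, startTime }) => {
    // Call your application's real API here.
    return {
      content: [{ type: "text", text: `Booked "${title}" at ${startTime}` }],
    };
  }
);

// Expose the server over stdio so an LLM client can connect to it.
await server.connect(new StdioServerTransport());
```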

Taking this a step further, these integrations will soon be able to present UIs on the fly based on what the user is requesting. For example, suppose your application lets users submit a review. You have three options:

  1. Build a native screen in your application
  2. Let the chat interface ask questions interactively
  3. Build the UI on the fly based on the user request

Option 1 offers the best user experience but is costly to build and maintain. Option 2 works for simple functionality but makes for a clunky experience. Option 3 gives you the best of both worlds.

In terms of flow, here is how we can implement this:

  1. Use a model to understand the user's intent from their prompt.
  2. Use a model to map this intent to the API, or set of APIs, in your specification.
  3. Leverage an existing UI-generation tool like React JSON Schema Forms to create a dynamic form from the API definition (see the sketch after this list).
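To make the flow concrete, here is a hedged sketch in TypeScript. The names classifyIntent, apiRegistry, submit_review, and /api/reviews are illustrative placeholders, not the reference implementation; only the React JSON Schema Forms pieces (@rjsf/core, @rjsf/validator-ajv8, @rjsf/utils) are the real library.

```tsx
// Sketch of the prompt -> intent -> API definition -> generated form flow.
import Form from "@rjsf/core";
import validator from "@rjsf/validator-ajv8";
import type { RJSFSchema } from "@rjsf/utils";

// 1. A registry of API definitions keyed by intent.
//    In practice this could be derived from an OpenAPI specification.
const apiRegistry: Record<string, { endpoint: string; schema: RJSFSchema }> = {
  submit_review: {
    endpoint: "/api/reviews",
    schema: {
      title: "Submit a review",
      type: "object",
      required: ["rating", "comment"],
      properties: {
        rating: { type: "integer", title: "Rating", minimum: 1, maximum: 5 },
        comment: { type: "string", title: "Comment" },
      },
    },
  },
};

// 2. Placeholder for the model call that maps a free-form prompt to one of
//    the known intent keys (or null if nothing matches).
declare function classifyIntent(
  prompt: string,
  intents: string[]
): Promise<string | null>;

// 3. Resolve the prompt to an API definition.
export async function resolveFormSpec(prompt: string) {
  const intent = await classifyIntent(prompt, Object.keys(apiRegistry));
  return intent ? apiRegistry[intent] : null;
}

// 4. Render the matched definition as a dynamic form with RJSF.
export function DynamicForm({
  spec,
}: {
  spec: { endpoint: string; schema: RJSFSchema };
}) {
  return (
    <Form
      schema={spec.schema}
      validator={validator}
      onSubmit={({ formData }) =>
        fetch(spec.endpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(formData),
        })
      }
    />
  );
}
```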

Of course, there will always be cases where complexity or business logic demands a custom UI, but for simpler, straightforward tasks this approach can cut your effort significantly (UX design, UI development, security and functionality testing, and so on). While auto-generated forms offer fast delivery and cost savings, they need constraints for branding, accessibility, and validation consistency. This is where form schemas, theme layers, and reusable intent-to-UI mappings become crucial, as sketched below.
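One way to apply those constraints, again as a hedged sketch: pair every generated schema with a shared uiSchema (widget choices, placeholders, submit-button text) and render through a themed RJSF package such as @rjsf/mui so branding stays consistent. The field names below assume the hypothetical review form from the earlier sketch.

```tsx
// Shared presentation constraints applied on top of every generated schema.
import Form from "@rjsf/mui"; // themed RJSF form (Material UI) for consistent branding
import validator from "@rjsf/validator-ajv8";
import type { RJSFSchema, UiSchema } from "@rjsf/utils";

// Field names (comment, rating) match the hypothetical review schema above.
const sharedUiSchema: UiSchema = {
  "ui:submitButtonOptions": { submitText: "Send" },
  comment: { "ui:widget": "textarea", "ui:placeholder": "Tell us more" },
};

export function ThemedDynamicForm({ schema }: { schema: RJSFSchema }) {
  return (
    <Form
      schema={schema}
      uiSchema={sharedUiSchema}
      validator={validator}
      // Consistent cross-field validation, regardless of which form was generated.
      customValidate={(formData, errors) => {
        if (formData?.rating === 1 && !formData?.comment) {
          errors.comment?.addError("Please tell us what went wrong.");
        }
        return errors;
      }}
    />
  );
}
```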

My prediction is that, like MCP, this will become a standard for UI integration in LLMs. Let's see.

I wrote a reference implementation to experiment with the idea. You can view the code here.

A video demo is shown below. In it you will see two forms generated from the prompt, using the API definitions provided in the code.

To be clear, dynamic form generation isn't new as such; what makes it interesting is pairing it with a model that determines the intent and picks the right UI.

Maintaining front-end code for dozens of features is expensive — in time, talent, and testing. With on-the-fly UI generation, teams can shift focus from UI plumbing to designing reliable APIs and intent taxonomies. This also future-proofs systems for agentic workflows.

Let me know your thoughts.