AI Can Write UI. It Still Can't Predict Layout.
By Addy · March 30, 2026
AI can already write frontend code.
Tools like v0 can generate React components, Tailwind layouts, and even full applications from a prompt. That part is no longer surprising. Prompt engineering has become good enough that a non-designer can describe a dashboard, a settings page, or a landing hero and get something plausibly shippable back in seconds.
But there is still a missing layer in AI-generated UI, and it is the one that matters most once the demo ends.
Layout.
Not colors. Not component structure. Not whether the code compiles.
Whether the interface actually fits together when a browser lays it out.
That is the gap tools like Pretext are starting to close.
The Problem AI UI Tools Still Cannot Solve Cleanly
When an AI system generates UI code, it can reason about hierarchy and styling. It knows what a card is. It knows how Flexbox and Grid are supposed to behave. It has seen enough React and Tailwind examples to recreate the broad shape of a modern interface.
What it cannot reliably know in advance is the part browsers decide at render time.
Will the button label wrap? Will the title take one line or three? Will the cards stay even height? Will the sidebar item overflow in German, Arabic, or Japanese? Will a mobile breakpoint collapse gracefully once real text shows up?
These are not edge cases. They are the actual work of frontend.
The browser resolves them through layout and reflow, using font metrics, container constraints, writing direction, line-breaking rules, and the geometry of everything around the element. web.dev's performance guidance is direct on this point: layout is where the browser computes size and position, and forcing it repeatedly is expensive.
Which means an AI tool generating UI is often doing something much less intelligent than it appears.
It is guessing.
Why Layout Breaks AI-Generated Interfaces
This is the difference between code generation and interface understanding.
A model can output a visually persuasive component tree without being able to predict how that tree behaves under actual browser layout. That is why AI-generated UIs so often fail in the same familiar ways:
- Cards become uneven because one title wraps and another does not.
- Buttons jump after hydration because text measures differently than expected.
- Ellipsis and line clamps behave inconsistently across widths.
- A layout that works beautifully at 1440px falls apart at 390px.
None of this means the model is bad at code. It means layout is not a symbolic problem. It is a measurement problem.
And measurement, on the web, has traditionally meant touching the DOM.
That is where the cost comes from. Reading values like offsetHeight or getBoundingClientRect() can trigger layout work. Do that enough times inside iterative UI generation loops and the system stops reasoning about interface constraints and starts poking the browser until something acceptable happens.
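The cost of interleaving DOM writes and geometry reads can be illustrated with a toy model of the browser's layout invalidation. `FakeElement` and its counters are illustrative stand-ins, not a real DOM API; the point is the read-after-write pattern, which forces a layout pass each time:

```typescript
// Toy model of forced synchronous layout: reading a geometry
// property while styles are dirty forces a layout pass right now.
class FakeElement {
  layoutPasses = 0;       // how many times "layout" ran
  private dirty = false;  // styles changed since last layout
  private height = 100;

  setStyleHeight(px: number): void {
    this.height = px;
    this.dirty = true;    // a style write invalidates layout
  }

  get offsetHeight(): number {
    if (this.dirty) {     // a read while dirty forces layout
      this.layoutPasses++;
      this.dirty = false;
    }
    return this.height;
  }
}

// Thrashing: write, then read, five times → five forced layouts.
const thrashed = new FakeElement();
for (let i = 0; i < 5; i++) {
  thrashed.setStyleHeight(100 + i);
  void thrashed.offsetHeight;
}
console.log(thrashed.layoutPasses); // 5

// Batched: all writes first, one read → one forced layout.
const batched = new FakeElement();
for (let i = 0; i < 5; i++) batched.setStyleHeight(100 + i);
void batched.offsetHeight;
console.log(batched.layoutPasses); // 1
```

An iterative generation loop that measures after every tweak behaves like the first case: each probe pays the full cost again.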
That is not design intelligence. It is trial and error with good branding.
What Pretext Actually Changes
Pretext is a pure JavaScript and TypeScript library for multiline text measurement and layout, created by Cheng Lou. Its README describes the core idea plainly: measure and lay out text without relying on DOM measurements that trigger reflow.
That sounds narrow. It is not.
Pretext takes one of the hardest pieces of UI layout, text measurement, and turns it into something deterministic enough to compute outside the browser's normal layout loop. The library exposes a prepare() step for one-time segmentation and measurement work, then a layout() step that computes height and line count through arithmetic over cached widths. The repo explicitly positions this as useful for virtualization, fancy userland layouts, browser-free verification that labels do not wrap, and preventing layout shift when new text loads.
That is the real unlock.
It does not make AI better at CSS. It makes part of layout measurable.
Why This Matters for AI Systems
Once text layout becomes computable, the UI generation pipeline changes.
Today, most AI-generated UI works like this:
- Generate code.
- Render it.
- Observe what broke.
- Regenerate or patch.
In that loop, layout is a side effect the system only discovers after the browser renders.
With a system like Pretext in the loop, the pipeline starts to look different:
- Generate structure.
- Compute text layout ahead of time.
- Check constraints.
- Emit code that already knows its likely geometry.
Now the system can ask better questions.
Will this label wrap at 320px? How tall does this card need to be if the title takes two lines and the body takes three? Can this list be virtualized without height guesses? Will this update cause layout shift when content streams in?
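Two of those questions can be sketched as plain functions over assumed metrics. Every constant and name here is illustrative, not any library's API; the point is that the questions become checkable before rendering:

```typescript
// Layout questions as pure functions over assumed metrics.
// All widths are illustrative stand-ins for real font measurement.
const AVG_CHAR = 8;      // assumed average glyph width, px
const LINE_HEIGHT = 20;  // px
const CARD_PADDING = 32; // px, top + bottom combined

// "Will this label wrap at this width?"
function willWrap(label: string, containerWidth: number): boolean {
  return label.length * AVG_CHAR > containerWidth;
}

// "How tall does this card need to be?"
function cardHeight(titleLines: number, bodyLines: number): number {
  return CARD_PADDING + (titleLines + bodyLines) * LINE_HEIGHT;
}

console.log(willWrap("Settings", 320));                       // short label fits
console.log(willWrap("Benachrichtigungseinstellungen", 160)); // long German label wraps
console.log(cardHeight(2, 3)); // 32 + 5 * 20 = 132
```

A generation pipeline can run checks like these against every locale's strings before emitting code, instead of finding the German overflow in QA.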
That is not full browser layout. But it is a meaningful step away from blind regeneration and toward testable interface reasoning.
The Missing Layer Between AI Codegen and Real Frontend
The frontend market has spent the last two years acting as if code generation were the hard part.
It was not.
Code generation is the easy part because the training data is everywhere. Good interfaces are harder because the browser is doing real geometric work under the hood, and that work is not visible from static code alone.
This is why the gap between "AI can generate UI" and "AI can ship UI" has remained larger than the demos suggest. A generated interface can look correct in a screenshot and still be structurally fragile the moment real content, real localization, or real device widths enter the picture.
Pretext points at a more interesting future: UI systems that can reason about layout constraints before rendering. Not perfectly, not for every element on the page, but enough to make text-heavy interface generation more predictable.
That matters because text is where layout most often breaks. Not the decorative chrome. The labels, titles, descriptions, tooltips, nav items, and buttons that turn product design into geometry.
Why This Gets More Important, Not Less
As AI tools become more agentic, they will move from generating static mockups to modifying real products inside existing codebases. At that point, layout predictability stops being a nice-to-have and becomes infrastructure.
A system making autonomous UI changes cannot rely entirely on screenshots and browser feedback loops. That is too slow, too expensive, and too brittle. It needs cheaper primitives it can use before rendering. It needs parts of layout to become data.
This is what makes Pretext more than a clever utility.
It suggests a category.
Not "AI design tools." Not "better code generation." A lower-level interface layer where layout becomes computable enough for AI agents and other automated systems to validate before the browser gets involved.
That is a much bigger deal than generating another nice-looking card component.
The Honest Limitation
Pretext does not solve all layout. It does not replace the browser. And it does not turn frontend into pure math overnight.
But it does solve a real and painful slice of the problem: multiline text measurement and layout, across languages, without DOM reads in the hot path.
That alone is enough to matter.
Because AI-generated UI does not need infinite creativity right now. It needs fewer stupid bugs.
Pretext does not make AI smarter.
It makes UI measurable.
And once layout becomes measurable, it becomes something AI systems can reason about instead of merely discovering after the fact.
That is the missing layer.
Sources:
- Pretext GitHub repository - Cheng Lou
- Avoid large, complex layouts and layout thrashing - web.dev
- v0 documentation - Vercel
- Announcing v0: Generative UI - Vercel
Previously on TheQuery: Google's TurboQuant Cuts AI Memory 6x - another example of a missing low-level systems layer suddenly becoming an AI bottleneck.