entropytown @2025 / Twitter (X)

Analysis

Why AI App Builders Are Quietly Moving Server-Side

lovable.dev, bolt.new and the end of the purely ‘vibe’ coding era.

September 30, 2025


AI coding tools were supposed to stay thin. A prompt box, a pretty file tree, and a call to someone else’s model. But the two products people actually use in this category in late 2025 — lovable.dev and bolt.new — are adding opinionated, first-party backends. That sounds backwards in a year when Anthropic ships Claude Code to the web and OpenAI pushes GPT-5 Codex straight to developers, skipping the middlemen. It makes sense once you look at platform risk, traffic volatility, and how expensive it is to impress users who never come back.

A new kind of upstream pressure

The launch of Claude Code on the web made the shift obvious: Anthropic now offers a repo-aware, multi-step coding agent that runs on its own infra and can operate directly on GitHub projects (Anthropic, TechCrunch). OpenAI, for its part, has been positioning GPT-5 Codex as an agentic, long-running coder that can create, patch and test projects in cloud sandboxes without needing a third-party builder in front of it (OpenAI, TechCrunch). If the upstream vendor gives developers a good-enough IDE-like surface, a startup whose only feature is “nice UI over someone else’s API” becomes disposable.

The August episode in which Anthropic cut off OpenAI’s internal access to Claude, reportedly over evaluation use and competitive boundaries, made that platform risk visible to founders and buyers (Wired). The rational response is to own a translation/orchestration layer that can swap models, and that layer almost always lives server-side.
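A minimal sketch of what such a translation/orchestration layer looks like. The provider classes below are stubs I’ve invented for illustration: a real version would wrap vendor SDKs, and nothing here calls a live API. The point is the shape, not the details: a neutral request format, per-vendor adapters, and fallback so one revoked key does not take the product down.

```python
# Sketch of a server-side model broker (illustrative names, no real vendor SDKs).
from dataclasses import dataclass


@dataclass
class CompletionRequest:
    """Vendor-neutral request; adapters translate it to each provider's wire format."""
    system: str
    prompt: str
    max_tokens: int = 1024


class ProviderError(Exception):
    pass


class RevokedProvider:
    """Simulates a vendor that has cut off access (cf. the August episode)."""
    name = "vendor-a"

    def complete(self, req: CompletionRequest) -> str:
        raise ProviderError("403: access revoked")


class StubProvider:
    """Stands in for any working vendor adapter."""
    name = "vendor-b"

    def complete(self, req: CompletionRequest) -> str:
        return f"[{self.name}] {req.prompt[:40]}"


class ModelBroker:
    """Tries providers in priority order; a single vendor failure is absorbed."""

    def __init__(self, providers):
        self.providers = providers

    def complete(self, req: CompletionRequest):
        last_err = None
        for p in self.providers:
            try:
                return p.name, p.complete(req)
            except ProviderError as err:
                last_err = err  # record and fall through to the next vendor
        raise last_err


broker = ModelBroker([RevokedProvider(), StubProvider()])
used, text = broker.complete(CompletionRequest(system="You write apps.", prompt="Add auth"))
```

The broker, not the UI, is the durable asset: swapping in a new vendor is one adapter class, invisible to users.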

Why lovable.dev wants to own the boring parts

Lovable’s recent material sells more than “AI writes your app.” It sells “AI writes your app and we host the thing it wrote,” including auth, database and realtime features inside a managed environment (how-to, demo video). That looks like overreach until you map the incentives.

If Lovable promises that a 20-minute AI session yields a working app, it cannot let users point the agent at arbitrary, half-maintained backends. A small, controlled set of backend primitives keeps success rates high, keeps LLM repair costs low, and gives Lovable unified logs the agent can read. It also creates something to charge for that is not just “number of LLM tokens.” And once Lovable tries to sell to teams with security or compliance needs, having its own backend plane is basically mandatory.

Bolt.new’s headless turn

Bolt projects run in browser-based WebContainers from StackBlitz, so at first glance it is the opposite of server-heavy (WebContainers, StackBlitz blog). But StackBlitz has been offering self-hosted and enterprise variants for years (self-hosted), and in 2025 it open-sourced a DIY version that lets teams pick their LLM provider and run the agent locally or in their own cloud (bolt.diy). That is effectively a headless mode: the UI becomes optional, the orchestration isn’t.

The reason is the same as lovable’s but viewed from a different angle. Bolt needs a stable, server-side broker that can talk to OpenAI one day, Anthropic the next, maybe a local Ollama model the third, and still produce consistent file diffs and patches. It also needs a place to host previews and make the “share this app” loop work. And it needs something enterprises can drop into CI. All of that is backend work, even if the code runs in the browser.
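The “consistent file diffs” part of that broker can be made concrete. Different models return edits in different shapes; the server’s job is to reduce them all to one canonical patch format before anything touches the project. The two input shapes below are hypothetical, chosen only to show the normalization step, not actual vendor output.

```python
# Sketch of patch normalization: reduce varied model outputs (two invented
# shapes, A and B) to one canonical list of {path, action, content} edits.

def normalize_patch(raw: dict) -> list[dict]:
    if "files" in raw:
        # Shape A: {"files": {"src/app.ts": "new contents", ...}}
        return [
            {"path": path, "action": "write", "content": contents}
            for path, contents in sorted(raw["files"].items())
        ]
    if "edits" in raw:
        # Shape B: {"edits": [{"file": ..., "op": "write"|"delete", "content": ...}]}
        return [
            {"path": e["file"], "action": e["op"], "content": e.get("content", "")}
            for e in raw["edits"]
        ]
    raise ValueError("unrecognized model output")


shape_a = {"files": {"src/app.ts": "export {}"}}
shape_b = {"edits": [{"file": "src/app.ts", "op": "write", "content": "export {}"}]}
canonical = normalize_patch(shape_a)
```

Everything downstream — preview hosting, CI integration, the share loop — consumes only the canonical format, which is why the orchestration stays server-side even when the code itself runs in the browser.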

The vibe-coding plateau

Traffic data shows the plateau. Business Insider, using Barclays data, said in late September that visits to Vercel’s v0 had fallen 64 percent since May 2025, with lovable down around 40 percent and bolt.new about 27 percent over the same period, grouping them as “vibe tools” whose early spike didn’t turn into repeat usage (Business Insider). Months earlier, a Reddit thread in r/vercel surfaced Similarweb numbers showing v0 dropping from roughly 1.3 million weekly visits to about 400,000 right after usage-based pricing arrived, a fall of over 70 percent, with commenters basically saying: fun to try, not fun to meter (Reddit).

That is the core weakness of the current generation. The products are tuned for a three-minute wow moment, but the user’s real work is three weeks. The platform spends the money on multi-call generation, schema inference and self-repair right at the top of the funnel; the user often exports, screenshots or abandons before the platform can earn any of it back. On top of that, generated code often lives in a playground outside Git and CI, so teams do not treat it as canonical. And after the Anthropic–OpenAI incident, engineering leaders are more likely to ask whether they can run the same agent headless on their own infra. That is exactly what Bolt is trying to meet with its DIY route, and what Lovable is trying to de-risk by owning the execution environment.

What a backend-centric future actually buys

A thin UI over someone else’s model cannot promise durability. A UI connected to its own backend can. Once the platform owns the project state (file tree, schema, auth model), it can let the agent re-enter and repair the app days later; it can expose logs and traces to users; it can offer push-button deployment to its own edge or to a partner. That is how you get past “demo and go.”
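What “owning the project state” buys can be shown with a small sketch. The field names below are illustrative, not any platform’s actual schema; the idea is that a backend-owning platform can hand the agent a complete picture of the app days later, including the runtime errors it produced since the last session.

```python
# Sketch of durable project state a re-entering agent could be handed.
# Field names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class ProjectState:
    files: dict                                # path -> contents: the canonical file tree
    schema: dict                               # table -> columns, as the platform provisioned it
    auth_model: str                            # e.g. "email+magic-link"
    logs: list = field(default_factory=list)   # runtime errors the agent can read


def build_repair_context(state: ProjectState) -> str:
    """What a re-entering agent sees: the app as it exists now, plus recent errors."""
    recent = state.logs[-5:]
    return (
        f"{len(state.files)} files, schema tables: {sorted(state.schema)}, "
        f"auth: {state.auth_model}, recent errors: {recent}"
    )


state = ProjectState(
    files={"index.html": "<!doctype html>"},
    schema={"users": ["id", "email"]},
    auth_model="email+magic-link",
    logs=["TypeError: session is undefined"],
)
context = build_repair_context(state)
```

A thin UI cannot build this context, because the files, schema and logs live on infrastructure it does not control; that is the whole argument for going server-side.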

To make that stick, though, platforms will have to loosen control in the right places: let people export to their own Postgres or Supabase; let them keep using GitHub as the source of truth; let them point at the model they want. Pricing will have to recognise that day-one usage is spiky and then flattens, so org-level pooled plans will work better than hard per-call meters, which is precisely what seemed to hurt v0.
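The pricing point can be made concrete with toy numbers (entirely illustrative): the same spiky-then-flat usage pattern that a hard per-call meter punishes sails through an org-level pool, which is one reading of what happened to v0.

```python
# Toy comparison of a hard per-call cap vs. an org-level pooled quota.
# Call costs and limits are made-up numbers, chosen only to show the shape.

def per_call_meter(calls: list[int], per_call_cap: int) -> int:
    """Hard cap on each call: returns how many calls get rejected."""
    return sum(1 for c in calls if c > per_call_cap)


def pooled_plan(calls: list[int], monthly_pool: int) -> int:
    """Org-level pool: a spike is fine as long as the month stays under budget."""
    spent, rejected = 0, 0
    for c in calls:
        if spent + c > monthly_pool:
            rejected += 1
        else:
            spent += c
    return rejected


# Day one is expensive (multi-call generation, schema inference, self-repair),
# then usage flattens — the pattern described above.
calls = [900, 850, 120, 60, 40]
meter_rejections = per_call_meter(calls, per_call_cap=500)
pool_rejections = pooled_plan(calls, monthly_pool=2000)
```

The per-call meter rejects exactly the early sessions where the platform has to prove itself, while the pool absorbs them against the quieter weeks that follow.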

The contradiction remains. Tool makers want to show that AI can build and run a whole app end-to-end. Most users want AI to accelerate the app they already run somewhere else. Building server-side is an attempt to bridge that: control enough of the runtime to guarantee the AI’s promise, but surface enough headless hooks that teams do not feel trapped. In 2025, that is the only way an AI coding frontend stops being just another screenshot.