Between July 2025 and early 2026, the Condamine Apps experiment produced more than 50 functional web applications. Not prototypes in the loose sense of the word: rough clickable wireframes meant to simulate a product. Actual deployed apps: React or Next.js front ends, Tailwind CSS, pushed to Vercel, accessible via a real URL. The kind of thing that used to take a small team several weeks now takes one person a few days.

The actual time compression

The speed gain is real, but it needs to be understood precisely. AI tools like Claude Code, Bolt, and Cursor don't compress the thinking phase. They compress the distance between a decision and a working artifact. Once you know what you want to build and why, the scaffolding, the component structure, and the deployment pipeline get assembled in hours rather than days. The mechanical constraint is gone. What used to be a bottleneck (writing the boilerplate, wiring the state, configuring the build) is now nearly instant.

That shift has a downstream effect on how ideas get validated. Static mockups served a purpose when building the real thing was expensive. They let you test a direction before committing resources. But a mockup always involves some degree of translation loss between intent and execution. A working prototype running in a browser, with real interactions and real data, produces a different quality of feedback. Users respond differently. Stakeholders respond differently. The gap between the artifact and the eventual product narrows considerably.

What AI doesn't replace

Building fifty applications in eight months surfaces a pattern quickly. The quality of what gets built is bounded by the quality of the thinking that precedes the first prompt. Claude Code can scaffold a complete React application with routing, components, and API calls in under an hour. What it cannot do is decide whether that application solves a real problem, serves the right user, or makes the right tradeoffs between simplicity and functionality.

Design decisions remain with the designer. Not because the tools are incapable of generating design choices, but because generated design choices without a human frame of reference tend to be statistically average. They draw from patterns in training data. The interesting, specific, well-reasoned decisions come from understanding the context deeply: who uses this, in what situation, with what prior knowledge, toward what goal. That understanding doesn't emerge from a prompt. It comes from the work that precedes it.

The 50-app experiment made this concrete in an unexpected way. Early on, apps that were built quickly without enough upfront thinking accumulated friction fast. Ambiguous requirements produced ambiguous code. Refactoring through AI tools is possible but costs more time than getting the structure right from the start. The lesson: AI accelerates production, so it also accelerates the consequences of under-specified decisions.

The stack and why it matters

React and Next.js with Tailwind CSS, deployed on Vercel, was a deliberate choice and not simply the path of least resistance. This stack has broad AI model familiarity, meaning Claude Code produces more accurate, idiomatic code for it than for more niche frameworks. Vercel's deployment pipeline removes enough friction that shipping a working app becomes a trivial step rather than a project milestone. That matters for the experiment's core objective: reduce the cost of a working idea to the point where it becomes comparable to the cost of a sketch.
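As a concrete sketch of how little friction that pipeline involves (the command names and flags are standard, but the app name is illustrative and the exact options used in the experiment are an assumption):

```shell
# Scaffold a Next.js app with TypeScript, Tailwind CSS, and ESLint preconfigured
# ("demo-app" is a placeholder name)
npx create-next-app@latest demo-app --typescript --tailwind --eslint

cd demo-app

# First deploy: the Vercel CLI links the project and returns a live preview URL
npx vercel

# Subsequent production deploys
npx vercel --prod
```

Two commands separate a blank directory from a working URL, which is the point: deployment stops being a milestone and becomes a step.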

The goal for 2026 is 100 apps. That number is less important than what it represents: a sustained, high-volume practice of building and shipping. Volume generates pattern recognition that no amount of reading or theorizing produces. Each app teaches something specific about what works, what doesn't, where the tools are genuinely useful, and where they create a false sense of progress.

What a real validation loop looks like now

The shift from mockup-first to working-prototype-first changes the validation rhythm. A static mockup invites feedback on how something looks. A working app in a browser invites feedback on how something works. These are different conversations. The first is about visual choices. The second is about behavior, logic, and fit with the user's actual context.

For product teams and entrepreneurs, this matters more than it might seem. The number of assumptions that survive a polished mockup and collapse on first contact with a working prototype is striking. Interactions that seemed obvious in a static frame become confusing in motion. Edge cases that weren't visible in a three-screen user flow become immediately apparent when a real user starts tapping unpredictably. Building faster makes it possible to reach that moment of contact earlier, with less sunk cost, and with more room to adjust.