SQOOL Extend entered the world as a single slide in a board presentation. The pitch was straightforward: give high school students in Île-de-France access to virtual machines directly from their SQOOL tablet, enabling use cases that local hardware could not support. The technology was credible. The product, however, did not exist. No specifications, no identified users, no competitive benchmark, no one who had done this at the scale of 465 schools and 300,000 devices.

My starting point was a question that sounds obvious but is often skipped: what would have to be true for this to work? Not "what features should it have," but what assumptions, if wrong, would make the product useless regardless of execution quality. That reframing changed how I structured the first weeks.

Five interviews before touching a wireframe

I ran five stakeholder interviews before producing a single screen. The stakeholders were chosen not for seniority but for knowledge: two IT administrators responsible for deploying infrastructure in schools, one academic director who managed the relationship between the regional government and school principals, one teacher who had used virtual machines in a previous professional context, and one student in terminale (the final year of French high school) who was part of a pilot technology program. These were not user interviews in the classic UX sense. They were assumption-extraction sessions. I went in with a list of things I believed to be true about the product and tested each one against their experience.

The output was not a persona or a journey map. It was a map of conflicting assumptions. The IT administrators assumed that virtual machine provisioning would be centralized and invisible to teachers. The academic director assumed that teachers would configure their own environments. These two assumptions could not both be true, and neither had been surfaced in the board meeting where the product was approved.

Risk matrix as the real planning document

I mapped every assumption onto a two-axis matrix: probability of being wrong versus cost of being wrong late. This is not a novel technique, but using it as the primary planning document rather than a background analysis tool changes the output significantly. The high-risk quadrant (likely wrong, expensive to discover late) became the sequencing logic for the roadmap.
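To make the mechanics concrete, here is a minimal sketch of that sorting logic, assuming a simple 1-to-5 scoring scale. The assumption names and scores below are illustrative placeholders, not the actual values from the project.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    p_wrong: int       # probability of being wrong, scored 1 (unlikely) to 5 (likely)
    cost_if_late: int  # cost of discovering it late, scored 1 (cheap) to 5 (expensive)

    def quadrant(self, threshold: int = 3) -> str:
        likely = self.p_wrong >= threshold
        costly = self.cost_if_late >= threshold
        if likely and costly:
            return "resolve first"   # high-risk quadrant: sets the sequencing for phase one
        if likely:
            return "test cheaply"
        if costly:
            return "monitor"
        return "accept"

# Illustrative entries only; the real scores came out of the stakeholder interviews.
assumptions = [
    Assumption("School networks handle classroom-scale concurrent VM sessions", 4, 5),
    Assumption("SQOOL tablet OS runs the client without modification", 3, 5),
    Assumption("Teachers accept provisioning sessions in advance", 4, 4),
    Assumption("Academic directors want per-school usage reporting", 2, 2),
]

# Sort so the riskiest assumptions come first: this ordering is the roadmap's sequencing logic.
for a in sorted(assumptions, key=lambda a: a.p_wrong * a.cost_if_late, reverse=True):
    print(f"{a.quadrant():>13}  {a.name}")
```

Sorted this way, the list reads as a sequencing order rather than a backlog, which is the whole point of treating the matrix as the planning document rather than a background analysis.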

The first phase of the MVP was not about building the product. It was about resolving the three assumptions in the high-risk quadrant: whether the network infrastructure in schools could support concurrent virtual machine sessions at classroom scale, whether the SQOOL tablet OS could run the client software without modification, and whether teachers would accept a workflow that required them to provision sessions in advance. Each of these was a technical or behavioral unknown that, if wrong, would invalidate months of downstream work.

Phase two focused on the teacher experience: the configuration interface, session management, and the failure modes that a classroom environment makes unavoidable (a student's session crashing during an exam, a network dropout mid-session). Phase three was ecosystem deployment: the tooling for IT administrators to manage fleets of virtual machines across hundreds of schools, the reporting for academic directors, and the integration with the broader SQOOL platform.

The academic calendar as an immovable constraint

One constraint shaped everything: the academic calendar. A product that missed the September school year start would sit unused for ten months. That deadline was not negotiable, and it compressed the entire planning horizon to four months from kickoff to first pilot in five schools.

That constraint was actually useful. It forced ruthless prioritization: phase one had to be small enough to ship in four months, and the five pilot schools had to be chosen for diversity of infrastructure conditions rather than for receptiveness. The point of the pilot was to break the product in realistic conditions, not to validate it in favorable ones.

The first version shipped on schedule. The network infrastructure assumption turned out to be partially wrong: concurrent sessions at full classroom scale caused latency that was unacceptable in practice. That finding, discovered in week two of the pilot, led to a session scheduling feature that had not been in any of the original specifications. It was the right fix, and it came from observing actual behavior rather than from the initial design.
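The shape of that fix can be sketched simply. This is a hypothetical illustration, not the actual implementation: it assumes sessions are booked per school in fixed time slots and that each school's concurrency cap reflects what its network handled acceptably during the pilot.

```python
from collections import defaultdict

# Hypothetical per-school concurrency caps, set from latency observed during the pilot.
MAX_CONCURRENT = {"lycee-a": 15, "lycee-b": 25}

# bookings[(school, slot)] -> number of VM sessions already reserved for that time slot
bookings: defaultdict[tuple[str, str], int] = defaultdict(int)

def reserve_sessions(school: str, slot: str, seats: int) -> bool:
    """Reserve VM sessions for a class, refusing any request that would push the
    slot past the school's tested concurrency cap."""
    cap = MAX_CONCURRENT.get(school, 0)
    if bookings[(school, slot)] + seats > cap:
        return False  # the teacher is asked to pick another slot or split the class
    bookings[(school, slot)] += seats
    return True

# A teacher provisioning a 30-student class in advance at a school capped at 15 sessions:
print(reserve_sessions("lycee-a", "monday-10h", 30))  # False: over the cap
print(reserve_sessions("lycee-a", "monday-10h", 15))  # True: fits within the cap
```

The data structure matters less than the behavior: the system refuses to overcommit a school's network up front, instead of letting a full class discover the latency problem live.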

What the roadmap was actually for

Looking back, the three-phase roadmap served a purpose beyond planning: it created a shared language for managing uncertainty with the executive team. Instead of promising a complete product in four months, I was promising resolution of specific unknowns in sequence. That framing gave the board something to evaluate at each gate that was not "is the product ready?" but "did we learn what we needed to learn?"

The difference matters because product development at the zero-to-one stage is primarily a learning process. A Gantt chart makes it look like an execution process. The risk matrix framing keeps the learning explicit and makes it easier to change direction when evidence points elsewhere, without that change looking like a failure.