Most product design work rests on a set of assumptions that are so fundamental they rarely get examined: users chose to use your product, they can stop using it if it frustrates them, and their engagement or departure tells you something meaningful about the quality of your design. When you design for a captive user population, none of these assumptions hold.

SQOOL was deployed across 465 high schools in Île-de-France as part of a regional government initiative. 300,000 devices. Students received tablets they had not requested, running software they had not chosen, in a context (school) where they had limited ability to opt out of either. The engagement data we collected told us almost nothing useful. High usage of a feature could mean that students found it valuable or that their teacher required it. Low usage could mean that the feature was useless or that it simply was not part of the curriculum workflow that week.

The failure of conventional feedback loops

Standard UX feedback mechanisms break down in captive user contexts. NPS surveys produce results that reflect the general attitude toward the institution rather than the product. "How likely are you to recommend SQOOL to a friend?" is a question a student who resents having a school-issued tablet will answer based on that resentment, not based on their experience with any particular feature. Voluntary in-app feedback skews toward the most frustrated and the most enthusiastic, which are both non-representative populations.

Analytics were similarly unreliable. We could see that students spent significant time in certain applications. We could not tell from the data whether that time was productive, frustrating, or simply obligatory. A student who takes twenty minutes to complete a task because the interface is poorly designed and a student who takes twenty minutes because they are genuinely engaged with the content produce identical behavioral signals in the analytics.
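To make that measurement problem concrete: if time-on-task is computed from event timestamps, as most dashboards compute it, an engaged twenty-minute session and a frustrated one are literally the same number. A minimal sketch of the collapse, with hypothetical event names and a hypothetical timeOnTask function rather than SQOOL's actual schema:

```typescript
// Hypothetical analytics events: timestamped interactions within one session.
interface AppEvent {
  timestamp: number; // seconds since session start
  action: string;
}

// Time-on-task, as most dashboards compute it: last event minus first.
function timeOnTask(events: AppEvent[]): number {
  if (events.length < 2) return 0;
  return events[events.length - 1].timestamp - events[0].timestamp;
}

// A student genuinely working through the exercise for twenty minutes...
const engagedSession: AppEvent[] = [
  { timestamp: 0, action: "open_exercise" },
  { timestamp: 400, action: "answer_question" },
  { timestamp: 900, action: "answer_question" },
  { timestamp: 1200, action: "submit" },
];

// ...and a student lost in the interface for the same twenty minutes.
const frustratedSession: AppEvent[] = [
  { timestamp: 0, action: "open_exercise" },
  { timestamp: 300, action: "open_menu" },
  { timestamp: 700, action: "open_menu" },
  { timestamp: 1200, action: "submit" },
];

// Both print 1200: the metric cannot distinguish engagement from struggle.
console.log(timeOnTask(engagedSession), timeOnTask(frustratedSession));
```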

Classroom observation as the primary research method

The research approach I settled on was classroom observation, paired with structured conversations with teachers. Not usability testing in a controlled environment. Actual class sessions, with students working on their assigned tasks and me sitting at the back of the room with a notebook.

What classroom observation reveals that no other method captures: the workarounds. Students who cannot accomplish a task the intended way will find another way. They share screens, they dictate to a classmate, they take a photo of the content with a personal phone. Each workaround is a design failure made visible. The student who photographs the screen because the copy-paste function is buried three levels deep in a menu is telling you something about information architecture that a usability test would only reveal if you specifically tested that task.

Teachers were a different kind of informant. They observed usage patterns across a class of thirty students, five days a week, for months. They knew which applications reliably lost a class's attention and which held it. They had accumulated operational knowledge about failure modes that no analytics dashboard could replicate. The challenge was structuring conversations so that teachers shared operational specifics rather than general impressions. "Students struggle with this" is less useful than "when students try to submit an assignment and the connection drops, they don't know whether the submission went through, so they submit again and then have duplicate submissions that fail."
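That particular failure mode has a standard engineering remedy: make submission idempotent, so that retrying after a dropped connection is harmless. A sketch of the idea, where the function, the endpoint, and the use of the Idempotency-Key header convention are illustrative assumptions, not SQOOL's actual API:

```typescript
// Hypothetical client: one ID is generated per submission, reused across
// retries, so the server can recognize a resend and deduplicate it.
async function submitWithRetry(
  endpoint: string,
  payload: unknown,
  maxAttempts = 3
): Promise<Response> {
  const submissionId = crypto.randomUUID(); // generated once, before any attempt
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fetch(endpoint, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // The server treats a repeated key as the same submission,
          // not a new one, so a retry can never create a duplicate.
          "Idempotency-Key": submissionId,
        },
        body: JSON.stringify(payload),
      });
    } catch (err) {
      lastError = err; // network dropped: safe to try again with the same key
    }
  }
  throw lastError;
}
```

The design point is that the student's uncertainty ("did it go through?") stops being dangerous: submitting again produces the same submission, not a duplicate that fails.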

First-try reliability as the primary design constraint

In consumer product design, onboarding is a first-class concern. You invest in the initial experience because the user's decision to continue using the product or abandon it happens in the first few sessions. In a classroom context, there is no onboarding moment in the consumer sense. The bell rings, the teacher gives an instruction, and students are expected to execute immediately. If the interface requires discovery, the lesson time disappears into confusion. If an error message appears and students do not know how to resolve it, the teacher has to stop the class.

This constraint changes the design vocabulary significantly. Progressive disclosure, which works well in consumer contexts because it reduces initial cognitive load, becomes a liability when first-try reliability is the primary constraint. Every important action needs to be reachable without prior knowledge of the interface structure. Error states need to be self-resolving or to explain clearly what the student should do, because there is no support channel in a classroom.
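One concrete reading of "self-resolving": retry quietly before surfacing anything, and when an error must be shown, make it a single plain-language instruction a student can act on alone. A hedged sketch with illustrative names, not the actual SQOOL implementation:

```typescript
// Hypothetical wrapper for a classroom action: retry quietly first; only if
// that fails, surface one plain-language instruction the student can act on.
type ActionResult =
  | { ok: true }
  | { ok: false; studentMessage: string };

async function runClassroomAction(
  action: () => Promise<void>,
  retries = 2
): Promise<ActionResult> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      await action();
      return { ok: true }; // self-resolved: the student never sees an error
    } catch {
      // Short invisible pause before retrying; no error code, no dialog.
      await new Promise((resolve) => setTimeout(resolve, 500));
    }
  }
  // Last resort: no jargon, no error number, one concrete next step.
  return {
    ok: false,
    studentMessage:
      "Your work is saved on this tablet. Tell your teacher the connection dropped; nothing is lost.",
  };
}
```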

Designing with dignity when choice is removed

The ethical dimension of designing for captive users is real and worth being explicit about. Students who use SQOOL did not choose to. They are in a coercive context (school) using a product mandated by an institution (the regional government). The design has an obligation to respect that position: not to exploit the captive relationship to harvest data, not to over-notify, and not to create engagement patterns that serve institutional metrics rather than student needs.

In practice, this meant resisting requests to add usage tracking granular enough to amount to surveillance, declining to implement notification patterns designed to maximize open rates, and consistently framing the design brief around what made the student's work easier rather than what made the administrator's reporting cleaner. These were not always easy conversations with stakeholders facing institutional accountability pressures of their own. But they were the right ones to have, and grounding them in the specific constraint (captive users, no opt-out) made them more tractable than abstract arguments about ethics.
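As an illustration of what tracking that stops short of surveillance can look like structurally: aggregate per-student events into class-level counts before anything is persisted, so granular per-student timelines never exist. A sketch under that assumption; the types and the function are hypothetical, not SQOOL's actual pipeline:

```typescript
// Hypothetical aggregation step: per-student events are reduced to class-level
// counts before anything is persisted, so per-student timelines never exist.
interface UsageEvent {
  studentId: string; // held in memory only, never written to storage
  feature: string;
}

interface ClassReport {
  feature: string;
  studentsWhoUsedIt: number; // a count, not a list of identities
}

function aggregateForReporting(events: UsageEvent[]): ClassReport[] {
  const usersPerFeature = new Map<string, Set<string>>();
  for (const event of events) {
    const users = usersPerFeature.get(event.feature) ?? new Set<string>();
    users.add(event.studentId);
    usersPerFeature.set(event.feature, users);
  }
  // Only the counts survive this function; the student IDs are discarded.
  return Array.from(usersPerFeature, ([feature, ids]) => ({
    feature,
    studentsWhoUsedIt: ids.size,
  }));
}
```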