Design workshops in institutional contexts tend to follow a predictable script: Post-it notes, dot voting, "How Might We" questions written in marker on kraft paper. The facilitator arrives with a toolkit. Participants sense they are being processed through a method. The output is a wall covered in ideas that no one owns.
At France VAE, the national platform for prior learning recognition built within beta.gouv.fr, the field actors we needed to work with were called AAPs: Architectes Accompagnateurs de Parcours. These are professionals who sit with candidates for months, help them articulate what they know, and guide them through an administrative process that is genuinely complex. They had deep expertise. They had zero patience for theater.
Stripping the method down to what it actually does
I made a deliberate choice before entering the room: no design vocabulary. Not because these participants were unsophisticated, but because the vocabulary would have put me at the center. The goal was to put the work at the center.
The two-day workshops were built around plain questions: What does a difficult case look like for you? Where do you lose time that you should not be losing? What does a candidate need to understand that most of them don't? Sticky notes still existed because physical organization helps when you have twenty people in a room. But the structure underneath was about surfacing operational knowledge, not generating creative output for its own sake.
Dot voting was out entirely: compressing collective judgment into colored stickers is consensus theater, and it produces a false signal. Instead, I asked for explicit disagreement: "If this were wrong, what would that look like?" That question consistently produced more useful information than any affinity mapping exercise.
What changes when participants feel respected
The practical effect was visible within the first few hours. AAPs who had spent their careers navigating slow-moving institutions were accustomed to workshops where their input was collected, thanked, and filed. When they realized the conversation was genuinely about their work rather than an input-gathering formality, the quality of what they shared changed.
One participant described a recurring failure mode in candidate onboarding that none of the existing platform documentation had captured. It came out not because I asked a creative question, but because a peer in the room said "that happens to me too" and the conversation opened. That kind of knowledge does not surface in a dot-voting exercise.
The distinction that mattered: structured facilitation is not the same as facilitation theater. A well-structured conversation does the same cognitive work as a formal design sprint, without the overhead that signals to participants that they are subjects of a process rather than contributors to a decision.
What this requires from the facilitator
Running workshops this way is harder than running them with a full method toolkit. When you strip the scaffolding, you are responsible for holding the thread yourself. You need to know what you are trying to learn before you enter the room, and you need to be willing to adjust in real time when the conversation produces something more valuable than what you planned for.
The discipline is in the preparation. Every session had a concrete question it was designed to answer. Not "let's understand the user journey" but "we need to know where AAPs lose confidence in the platform so we can decide whether the problem is information architecture or onboarding sequence." That level of specificity in the brief allows for flexible facilitation in the room without losing direction.
The result at France VAE was design decisions grounded in operational reality rather than workshop artifacts. The participants built something they recognized as useful. That is the bar.