When I joined UNOWHY in 2017, the design team was small and largely disconnected from the rest of the organization. Decisions about product direction were made in meetings where design was not present. Engineers implemented features based on written specifications that had never been validated with users. Product managers wrote briefs without a shared vocabulary for framing user problems. None of this was unusual for a mid-sized EdTech company at that stage. But it meant that design influence was limited to the execution phase, and that limitation had a direct cost to product quality.
Six years later, the situation was substantially different. Design was present in strategic discussions. User research informed decisions made by teams that had never participated in a user test. Decision logs maintained by designers were consulted by engineers and product managers as a matter of habit. The shift did not happen because of a design thinking program or a declared cultural initiative. It happened because of a series of small, concrete actions that accumulated into a different set of shared assumptions.
Why culture cannot be installed
The most common mistake I see in design culture efforts is treating culture as something you declare or install. You run a workshop on design thinking, you put sticky notes on a wall, you introduce a framework, and you expect that the organization has now internalized a new way of working. It does not work that way. Culture is not a set of beliefs that people hold. It is a set of assumptions that are so well established they no longer require articulation. Those assumptions come from repeated experience, not from declarations.
This means that building a design culture requires finding the concrete, repeated actions that will create the experiences from which new assumptions emerge. Not rituals (rituals are performative), but genuine problem-solving activities that non-designers already care about, approached through a design lens.
Starting with problems that non-designers already care about
The first cross-functional workshop I ran at UNOWHY was not framed as a design workshop. It was framed as an investigation into a business problem: why do teachers abandon SQOOL Connect after the first week of use? The question came from customer success, who were dealing with the consequences. It was a question that engineering, product, and management all had a stake in answering.
By starting there, I did not have to convince anyone that user-centered thinking was valuable in the abstract. The value was implicit in the question. The workshop used design methods (journey mapping, problem framing, synthesis) but was organized around a problem that the participants already found important. The methods were incidental. The business question was the entry point.
This principle governed how I introduced almost every design practice over the following six years. I never started with "here is a design method we should try." I started with "here is a problem we are all struggling with," and then introduced the method as the most useful way to address it.
Making user research visible
User research has a visibility problem in most organizations. The designer goes and talks to users, comes back with insights, and writes a report. The report is read by a small number of people. The insights are filtered and summarized. By the time they reach engineering or management, they have lost most of their texture.
The most effective thing I did to change this was invite product managers and engineers to observe user tests live. Not in a formal, scheduled way, but opportunistically: when a test was happening, I sent a message saying "I am testing with three teachers this afternoon, you can watch if you want." Some people came out of curiosity. Some came because they had a specific question they wanted answered. After the first few sessions, people started requesting to observe rather than waiting to be invited.
The reason this works is that observed experience is qualitatively different from reported experience. A product manager who reads "users found the navigation confusing" will nod and move on. A product manager who watches a teacher spend four minutes trying to find a feature they use daily will feel that confusion in a way that shapes their decisions. In 2021, a thirty-minute session with three teachers revealed that a flow we had planned to build would not work for the way teachers actually organized their work. That observation prevented three weeks of development on the wrong solution. The PM who watched that session became one of the strongest advocates for user research I worked with at UNOWHY.
Decision logs as a mechanism for legibility
One of the persistent problems in product organizations is that decisions are made verbally and then forgotten. The same question resurfaces weeks later, the same debate happens, the same conclusion is reached, and nobody notices that the organization has been here before. This is expensive, and it erodes trust between disciplines because each team is working from a different version of history.
I introduced a lightweight decision log practice: after any significant design decision, I wrote a short entry documenting what was decided, what alternatives had been considered, what user evidence informed the choice, and what conditions would cause us to revisit it. The entries were short. Three to five sentences, usually. The point was not to create a comprehensive record but to create a retrievable one.
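The four-part structure described above can be sketched as a small data type. This is a minimal illustration, not the actual tool used at UNOWHY: the field names, the rendering format, and the example content are all invented for the sake of showing the shape of an entry.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionEntry:
    """One decision log entry. Field names are illustrative, not canonical."""
    decided: str             # what was decided
    alternatives: list[str]  # options that were considered and set aside
    evidence: str            # user evidence that informed the choice
    revisit_if: str          # conditions that would cause us to revisit it
    logged_on: date = field(default_factory=date.today)

    def render(self) -> str:
        """Format the entry as a short, retrievable plain-text record."""
        return (
            f"[{self.logged_on}] Decided: {self.decided}\n"
            f"Alternatives considered: {'; '.join(self.alternatives)}\n"
            f"Evidence: {self.evidence}\n"
            f"Revisit if: {self.revisit_if}"
        )

# Hypothetical example entry, three to five sentences' worth of content.
entry = DecisionEntry(
    decided="Keep class-level navigation as the default view",
    alternatives=["Subject-level default", "Configurable default"],
    evidence="All three teachers in the usability test navigated by class first",
    revisit_if="Secondary-school usage grows beyond pilot scale",
    logged_on=date(2021, 3, 15),
)
print(entry.render())
```

The point of keeping the structure this small is retrievability: an engineer skimming past entries needs the reasoning and the revisit conditions, not a comprehensive record.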
The secondary effect was the one I had not fully anticipated. The decision logs made design thinking legible to other disciplines. When an engineer could read the reasoning behind a UX decision and see that it was grounded in specific user observations rather than aesthetic preference, the nature of the conversation changed. Disagreements became more productive because they were about substance rather than authority. "I see you based this on the teacher observation from March, but I think the student case is different" is a better conversation than "I don't think this is the right approach."
The structural test
There is a useful test for whether a design culture has become structural: do design questions surface in conversations where no designer is present? In a team where design culture is still dependent on individual advocates, the answer is no. Decisions get made without design input, and a designer learns about them after the fact and has to push back.
By 2023, six years into this work at UNOWHY, I observed the structural test passing regularly. Engineers raised usability questions in sprint planning. Customer success flagged user experience issues before they escalated. Product managers framed feature requests in terms of user problems rather than feature specifications. These conversations happened without a designer in the room, which meant the assumptions had become shared enough to operate independently.
Six departments had become meaningfully involved in the design process over that period: technology, product, customer success, design, management, and end users directly. That breadth was not the result of a design evangelism campaign. It was the result of repeatedly solving problems together in ways that made the design approach useful and legible to people who had not been trained in it.
What does not work
Top-down declarations do not work. A leadership announcement that "we are a design-driven company" has no operational content. It does not change what happens in meetings, how decisions are made, or what is valued in the day-to-day work. It may signal an aspiration, but aspirations without corresponding practices remain aspirations.
Mandatory training programs do not work either, or at least not in the way their proponents believe. Training can introduce a vocabulary and a set of methods. What it cannot do is create the repeated experience of using those methods to solve real problems, which is the only thing that builds genuine fluency. Training followed by no change in actual working practices produces people who can describe design thinking but do not practice it. The investment is largely wasted.
What works is finding the problems that matter to the people you are trying to reach, solving them together using design methods, making the process visible, and doing it repeatedly over a long enough period that new assumptions have time to form. This requires patience, and it requires being comfortable with influence that is indirect and slow. But it produces something that declarations and training programs cannot: a set of shared assumptions that operate even when no designer is in the room.
