Most people who arrive at AI workshops with a tool already in hand have formed a belief about it. Either it will replace their job, or it will do everything better than them, or it is not relevant to their work at all. The first conversation is usually about dismantling that belief, not because it is wrong in some abstract sense, but because it blocks anything useful from happening.

The misconception that makes training harder

The most persistent misconception in AI workshops for non-designers is that generative AI replaces skills. It doesn't. It amplifies what is already there. A product manager who understands user needs writes prompts that produce usable results. A product manager who doesn't produces outputs that look polished but miss the point entirely. The tool doesn't close that gap. It makes it more visible.

This is particularly clear with image generation and copywriting tools, but it applies equally to Claude Code, Figma plugins with AI, and any AI-assisted workflow. The quality ceiling of what you can produce with these tools is bounded by the quality of your underlying judgment. The prompt is a compression of your thinking. If the thinking is shallow, the prompt is thin, and the output reflects that immediately.

Why starting with the tool is the wrong approach

The standard format for a software workshop (here is the interface, here are the features, now try it) works reasonably well for tools with a fixed syntax. You learn the commands, you learn the shortcuts, you practice. Generative AI doesn't work that way. The interface is often a text box. The "commands" are natural language. There is no fixed syntax to learn.

What actually varies between people who produce useful outputs and those who don't is their ability to frame a problem. A useful prompt starts with a well-defined situation: who is involved, what the goal is, what constraints exist, what failure looks like. That is not a prompt-writing skill. It is a thinking skill. And it transfers directly from whatever domain expertise the person already has.
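To make that framing concrete, here is one way those four elements could be captured as a reusable structure that assembles into a prompt. This is a minimal sketch; the field names and the example content are illustrative, not material from any specific workshop.

```python
from dataclasses import dataclass

@dataclass
class ProblemFrame:
    """The four elements of a well-framed problem: who, goal, constraints, failure."""
    who: str          # who is involved
    goal: str         # what the goal is
    constraints: str  # what constraints exist
    failure: str      # what failure looks like

    def to_prompt(self, task: str) -> str:
        """Assemble the frame into a prompt preamble followed by the task."""
        return (
            f"Context: {self.who}\n"
            f"Goal: {self.goal}\n"
            f"Constraints: {self.constraints}\n"
            f"Failure looks like: {self.failure}\n\n"
            f"Task: {task}"
        )

frame = ProblemFrame(
    who="A product manager preparing a feature brief for an engineering team",
    goal="Decide whether to ship the feature behind a flag this quarter",
    constraints="Two-week window, no new infrastructure, no impact on existing users",
    failure="A brief that reads well but leaves the rollout decision ambiguous",
)
print(frame.to_prompt("Draft three framing questions I should answer first."))
```

The point of the structure is not the code itself: forcing each field to be filled in exposes thin thinking before the model ever sees the prompt.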

This is why workshops that start with actual problems rather than the tool itself tend to produce better results. When an entrepreneur brings a real decision they are trying to make, and AI is introduced as a way to accelerate the thinking around that decision, something clicks that doesn't click in a generic demo. The tool stops being abstract. Its limitations become visible at the same time as its utility, which is the most honest introduction to it.

What the training actually looks like

The format that has worked best is structured around problem framing rather than feature exposure. Participants bring a real problem from their work. The first hour is about articulating that problem precisely: what the actual question is, what is already known, and what a useful answer looks like. The second hour introduces AI as an accelerant for specific parts of that process, not as a general replacement for thinking.

The outcomes vary significantly by participant. People with strong domain expertise and weak writing skills see the largest gains. The tool helps them externalize what they know in a form that can be worked with. People with weak domain expertise see smaller gains, but they gain a clearer picture of where the gaps in their own thinking are. Both are useful results.

What doesn't transfer from workshops to practice is the expectation that every task will be this productive. AI tools have a narrow band of tasks where they genuinely accelerate work and a wide range of tasks where they produce something that looks like progress without actually being progress. Learning to tell the difference is the most important skill, and it only comes through volume. The workshop is a starting point, not an endpoint.

The posture shift that matters most

The most durable outcome of good AI training for non-designers and non-engineers is a posture shift. The question stops being "can the tool do this" and becomes "how precisely can I frame what I need." That reframe changes the relationship with the tool entirely. Instead of being a passive recipient of whatever the model generates, the person becomes an active director: iterating on framing, evaluating outputs against a clear standard, deciding when the result is good enough and when it needs another pass.

That posture is transferable across tools. It doesn't depend on Claude Code or any specific interface. It is fundamentally a thinking habit, and the people who develop it early will have a structural advantage as the tooling continues to change around them.