Design Thinking × AI

From User Research to AI Research: A Leader's Guide to Asking Better Questions

The skills that make user research powerful — structured inquiry, assumption surfacing, pattern recognition — are exactly what AI-enabled leadership requires.

The most valuable thing user research teaches you isn't how to conduct an interview or analyze survey data. It's how to ask questions that reveal what you don't know.

This turns out to be exactly the skill that AI-enabled leadership requires — and exactly the skill that most executive AI training programs skip entirely.

What User Research Actually Trains

A trained researcher approaches every project with deliberate skepticism about their own assumptions. Before designing a research instrument, they inventory what they believe to be true about the user, the problem, and the context — and then they design questions specifically intended to test whether those beliefs hold.

The output of good research is often less certainty, not more. You go in believing you understand the user's problem, and you come out knowing that the problem is more complex, more contextual, and more personal than you assumed. That disconfirmation — uncomfortable though it is — is what makes the eventual design actually useful.

This discipline — structured inquiry designed to surface what you don't know — is precisely what's missing from most AI interactions.

How Most Leaders Use AI (And Why It Limits Them)

Most leaders use AI the way early internet users used search engines: they enter what they want and evaluate what comes back. They're searching, not researching.

AI systems are trained to be helpful, which means they're trained to give plausible, coherent answers to whatever question they receive. If the question contains assumptions, the AI will generally work within those assumptions rather than challenging them. If the question is ambiguous, the AI will fill the ambiguity with the most common interpretation rather than asking for clarification.

The result is that leaders who use AI primarily as a search engine get answers that confirm their framing rather than testing it. They get information, not insight. They get what they expected, not what they needed to know.

The Research Mindset Applied to AI

Applying a research mindset to AI use changes the interaction fundamentally.

Start with an assumption inventory. Before engaging with AI on a significant question, spend five minutes listing what you believe to be true about the situation. Then design your AI interaction specifically to test those beliefs. "Given these assumptions, what's the strongest evidence against them?" is a radically different prompt than "What should I do about this problem?"

Ask for the disconfirming case. Researchers are trained to actively seek evidence that their hypothesis is wrong. With AI, this means explicitly asking: "What's the strongest case against the recommendation you just made?" or "What would need to be true for this analysis to be wrong?" AI systems can generate this analysis — but they won't volunteer it unless you ask.

Triangulate across framings. Just as user researchers conduct multiple interviews and look for patterns and contradictions, effective AI-enabled leaders ask the same question from multiple angles. Frame the same problem three different ways and compare the AI's responses. Where they converge, you have a robust finding. Where they diverge, you've found an assumption worth examining.

Treat the AI's output as a data point, not a conclusion. User research teaches you to hold findings loosely — the sample size is small, the context is specific, the finding needs corroboration. AI outputs deserve the same epistemic humility. A single AI response is one data point. It becomes more valuable when it's compared against other sources, challenged with follow-up questions, and evaluated against your contextual knowledge.

The Leader Who Does This Well

The leaders who develop sophisticated AI inquiry skills are often not the most technically advanced users. They're the ones who have developed strong research intuition — from product backgrounds, consulting experience, or deep customer engagement — and are applying it deliberately to their AI interactions.

This is a significant advantage for leaders with design, product, or research backgrounds. The metacognitive skills — knowing what you don't know, designing inquiry to surface it, evaluating findings with appropriate skepticism — transfer directly.

The gap is not technical. It's methodological. And methodology is something that experience teaches in ways no AI training program can shortcut.

Work with CoCreate on executive AI leadership

Workshops, advisory, and facilitation for leadership teams — built on the same methods we use with design orgs at enterprise scale.