Case #029 · June 15, 2026 · 6 min read

How to Run a Startup Customer Interview That Actually Tells You Something

Most startup customer interviews produce one result: confirmation that the founder’s idea is good. That result is wrong more often than it is right. The interview design is why — and it is almost always fixable before the conversation starts.

TL;DR

  • 01. The most common interview mistake is asking people what they want. People do not know what they want. They know what frustrated them last Tuesday.
  • 02. Past behaviour is evidence. Future behaviour is speculation. Design every question around the former.
  • 03. The person you are interviewing has a social incentive to be encouraging. The right questions remove that pressure by making the conversation about them, not about your idea.
  • 04. An interview that ends with "I would definitely use this" has not validated anything. An interview that surfaces specific, unprompted descriptions of a painful problem has.

The verdict

“A customer interview is an archaeological dig, not a pitch meeting. You are looking for evidence that already exists in their behaviour — not permission to build what you already planned.”

Why most customer interviews produce bad data

The structural problem with most founder-led customer interviews is that the founder already has a conclusion and is running the interview to confirm it. This is not a character flaw — it is what happens when you are excited about an idea and trying to get to yes.

The result is a conversation that looks like research but functions like a pitch. The founder describes the problem as they understand it, asks whether the interviewee recognises it, and counts the yeses as validation. The interviewee, who has no skin in the game and a social incentive to be supportive, provides yeses. Everyone leaves satisfied. The data is useless.

Rob Fitzpatrick documented this dynamic in The Mom Test — the observation that even your mother will inadvertently lie to you if you ask the wrong questions. The fix is not to find more honest interviewees. It is to design questions that make honesty the easiest path.

The interviewee is not lying when they say your idea sounds great. They are answering the social question you implicitly asked: “Do you like me and want to support what I am doing?” Design questions that do not ask that.

The rule: past behaviour, not future speculation

Every useful question in a customer interview is about something that already happened. What did you do the last time this came up? How much time did that take? What did you try first? Why did that not work?

Future-facing questions — “Would you use this?”, “How much would you pay?”, “Would this solve your problem?” — produce hypothetical answers that correlate poorly with actual behaviour. The gap between what people say they will do and what they actually do is well-documented in behavioural economics and is especially wide for products that do not yet exist.

Past-behaviour questions produce different data. They surface what people actually did, what it cost them, and how much friction they were willing to tolerate to solve the problem. That is the data that predicts whether they will pay for a better solution.

If someone has never taken any action to solve the problem — not Googled it, not tried a workaround, not complained to a colleague — the problem is not painful enough to validate. That is useful information. The future-facing question would have missed it entirely.


The five questions worth asking

These are not a script. They are a framework. Adapt the language to the context, but keep the intent of each question intact.

  • “Tell me about the last time you ran into [the problem].” Gets the conversation into specific, recallable territory rather than generalisations. If they struggle to name a specific instance, the problem is probably not frequent enough to build a business around.
  • “What did you do to deal with it?” Reveals whether they took any action. No action means no urgency. It also reveals the current solution — which is your real competition, not the product on Crunchbase.
  • “How much did that cost you — in time or money?” Quantifies the pain. If they cannot put a number on it, the pain may be real but not acute enough to drive purchasing behaviour.
  • “What would have to be true for the ideal solution to be worth paying for?” Surfaces the buying criteria without asking them to evaluate your product. The answer tells you what they are actually optimising for — which is often not what founders assume.
  • “Who else deals with this — and are they solving it differently?” Expands your understanding of the market without you having to ask directly. Interviewees often volunteer the best leads and the most revealing competitive intelligence in response to this question.
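For teams who log interviews in a structured way, the framework above maps naturally onto a simple note schema. The sketch below is illustrative only — the field names and the pain heuristic are assumptions, not anything prescribed by the article — but it shows how the five questions, plus the "unsolicited details" signal discussed later, can be captured so that each interview leaves behind evidence rather than impressions.

```python
from dataclasses import dataclass, field

@dataclass
class InterviewNote:
    """One interview, logged against the five questions.

    Field names are hypothetical, chosen to mirror the framework:
    each maps to one question from the list above.
    """
    last_incident: str                # "Tell me about the last time..."
    action_taken: str                 # "What did you do to deal with it?"
    cost_estimate: str                # "How much did that cost you?"
    buying_criteria: str              # "What would have to be true...?"
    others_affected: str              # "Who else deals with this?"
    unsolicited_details: list[str] = field(default_factory=list)

    def shows_real_pain(self) -> bool:
        # Heuristic from the article: real pain is evidenced by a
        # specific past incident, an action actually taken, and a
        # quantified cost. A blank in any of these is a missing signal.
        return all([self.last_incident, self.action_taken, self.cost_estimate])

note = InterviewNote(
    last_incident="Quarterly export broke again last Tuesday",
    action_taken="Built a spreadsheet workaround, asked a colleague",
    cost_estimate="About four hours last month",
    buying_criteria="Has to work with the CRM we already use",
    others_affected="Whole ops team; one person pays for a tool",
    unsolicited_details=["'The worst part is re-keying it every week'"],
)
print(note.shows_real_pain())  # True: incident, action, and cost all present
```

The point of a structure like this is not automation — it is that a blank `action_taken` or `cost_estimate` field is visible at a glance, which is exactly the "no action means no urgency" signal the questions are designed to surface.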

What you are actually listening for

The signal in a customer interview is rarely in the direct answer to your question. It is in the texture of how they talk about the problem.

Listen for specificity. A person who describes a painful problem in precise detail — the name of the tool they tried, the workaround they built, the number of hours it took, the conversation they had with their manager about it — is giving you evidence of real pain. A person who speaks in generalities (“yeah, it can be frustrating sometimes”) is not.

Listen for emotion. Frustration, resignation, embarrassment — these are signals that the problem has a real cost. Polite acknowledgement is not.

Listen for unsolicited information. The details they volunteer without prompting are the ones that matter most to them. When someone mid-answer suddenly says “and the worst part is...” — that is the thing you needed to hear.

This connects to the broader question of what genuine startup idea validation looks like. Interviews are one input, not the whole picture. Use them to find the language of the problem — then test whether that problem produces the behaviour you need with harder evidence.

The most common mistakes founders make mid-interview

Even founders who know the right questions make execution errors that compromise the data.

Pitching instead of listening. The moment the interviewee expresses a problem, the founder starts describing how their product solves it. This ends the research. Once you have described your solution, every subsequent answer is shaped by the interviewee’s reaction to it — not by their genuine experience of the problem.

Accepting vague answers. “It is frustrating” is not data. “It cost me about four hours last month and I still had to do it manually anyway” is data. Follow up every vague answer with “can you give me a specific example?”

Discounting negative signals. If an interviewee says they solved the problem well enough with the existing solution, that is important information — not a bad interview. Founders who interpret “I am not sure I need a better solution” as a failure to articulate the problem correctly are not running interviews. They are running auditions.

Ten interviews that confirm the problem is real and painful are useful. Ten interviews that confirm the founder wants it to be real are not research — they are theatre.

Interviews surface problems. This finds the fatal flaw.

Your startup idea has a fatal flaw. Four AI examiners find it.

Market gaps, competitor threats, unit economics, timing risks — four specialist agents with live data and an adversarial mandate. Not optimised to make you feel good. Verdict in 60 seconds.

Find my idea's fatal flaw →