Product Discovery: 12 Techniques That Actually Work


Practical methods to understand what to build before building it. User interviews, prototyping, experiments, and more—with real examples of each.

discovery · research · techniques · 14 min read

Why Discovery Matters More Than Delivery

Most product failures aren't execution failures—they're discovery failures. Teams build the thing right but build the wrong thing. Discovery is the work of understanding what to build before you invest in building it.

Marty Cagan, often called the godfather of modern product management, argues that the best product teams spend about half their time on discovery. That sounds like a lot until you consider the alternative: spending months building something nobody wants.


1. User Interviews

Talking to users is the foundation of discovery. But most user interviews are done badly—leading questions, confirmation bias, taking feature requests at face value. Good interviews explore the user's world, not your product.

The Mom Test Framework

Ask about their life, not your idea:

  • ✅ "What do you currently do when X happens?"
  • ❌ "Would you use a product that does Y?"

Everyone says they'd use your hypothetical product; few will actually pay for it or change behavior.

How Many Interviews?

Do 5-10 interviews before each major decision. More is better but has diminishing returns. Look for patterns, not outliers. If 8 out of 10 users mention the same pain point, you're onto something.
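To make pattern-spotting concrete, here is a minimal sketch (the pain points and counts are hypothetical) that tallies how many interviews mention each theme:

```python
from collections import Counter

# Hypothetical pain points tagged per interview (one set per user)
interviews = [
    {"slow exports", "confusing pricing"},
    {"slow exports"},
    {"slow exports", "no mobile app"},
    {"confusing pricing", "slow exports"},
    {"slow exports"},
]

# Count how many interviews mention each pain point
mentions = Counter(p for notes in interviews for p in notes)

for pain, count in mentions.most_common():
    print(f"{pain}: {count}/{len(interviews)} interviews")
```

If one theme shows up in most of your tallies, that is the pattern worth chasing; a theme mentioned once is an outlier.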


2. Jobs to Be Done Interviews

JTBD interviews focus specifically on the progress users are trying to make:

"Tell me about the last time you switched from X to Y. Walk me through what happened."

You're reconstructing the decision moment to understand what triggered the change.

Where the Insight Lives

The insight is in the details:

  • What alternatives did they consider?
  • What almost stopped them?
  • What were they expecting that they didn't get?

This reveals the hiring criteria for your product—what job it needs to do to win.


3. Contextual Inquiry

Watch users in their natural environment doing real tasks. If you're building a field service app, go on ride-alongs with technicians. If you're building a finance tool, sit next to analysts using their current workflow.

Users often can't articulate what they do because it's habitual. Observation reveals:

  • The workarounds
  • The post-it notes
  • The things they do that they'd never mention in an interview

Intuit famously does "Follow Me Homes" where they observe customers using their products in actual contexts.


4. Surveys

Surveys scale user research but sacrifice depth. Use them to:

  • Validate patterns from interviews
  • Measure feature preferences across a large user base
  • Segment users by behavior or attitude

Survey Best Practices

  • Keep surveys short (5-7 questions max)
  • Avoid leading questions
  • Be specific: "How often do you use feature X?" beats "Do you like feature X?"
  • Use standardized measures (NPS, etc.) for benchmarking

5. Prototype Testing

Before building real features, test prototypes. These can be:

  • Sketches
  • Wireframes
  • Clickable mockups
  • Fake-door tests

The goal is to learn whether your proposed solution addresses the problem before investing engineering time.

Figma prototypes are industry standard. In 30 minutes with a designer, you can create a clickable version of a feature. Put it in front of 5 users and you'll learn more than weeks of debate. Spotify, Airbnb, and Notion all use rapid prototyping extensively.


6. Wizard of Oz Testing

Before automating a process, do it manually to validate demand:

What they tested manually:

  • Zappos: manually fulfilled shoe orders to prove e-commerce demand
  • Airbnb: the founders personally took apartment photos to test professional photography

This technique is perfect for AI features or complex automation. Instead of building an ML model, have a human do the task behind the scenes and see if users value the output. If they don't, you saved months of development.


7. Fake Door / Smoke Tests

Add a button for a feature that doesn't exist yet. When users click, show a "coming soon" message and collect interest. If nobody clicks, you saved yourself from building something nobody wants.

Dropbox did this brilliantly: their famous demo video, released before the product was built, grew the waitlist from 5,000 to 75,000 signups overnight.

The technique is controversial (users might feel deceived) but extremely effective for validating demand before building.


8. Concierge MVP

Deliver the service manually before automating it. Food on the Table (a meal planning startup) had founders personally create meal plans for early users. This validated the concept and revealed nuances that would have been missed in a spec.

The manual work doesn't scale, but that's the point. You learn exactly what users need before encoding it in software. Most automation can wait until you're sure the value proposition works.


9. A/B Testing

For live products, A/B tests provide real behavioral data. Show variant A to half your users, variant B to the other half, and measure which performs better.

Good Use Cases

  • UI changes
  • Copy variations
  • Pricing experiments
  • Feature variants

Traffic Requirements

A/B testing requires traffic. If you have 100 users, you can't reliably A/B test—sample sizes are too small. Netflix runs thousands of tests concurrently because they have the traffic. Smaller companies should use tests selectively for high-impact decisions.
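To see why 100 users isn't enough, a back-of-the-envelope sample-size calculation helps. This sketch uses the standard two-proportion formula with fixed z-values (1.96 for 95% confidence, 0.84 for 80% power):

```python
import math

def sample_size_per_variant(p1: float, p2: float) -> int:
    """Approximate users needed per variant to detect a conversion
    change from p1 to p2 at 95% confidence with 80% power."""
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a lift from 5% to 6% conversion takes thousands of users
# per variant -- far beyond what a 100-user product can supply.
print(sample_size_per_variant(0.05, 0.06))
```

The smaller the effect you want to detect, the more users you need, which is why small teams should reserve A/B tests for changes expected to move metrics substantially.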


10. Analytics Deep Dives

Your existing data is a discovery goldmine:

  • Funnel analysis: where users drop off
  • Cohort analysis: how behavior changes over time
  • Feature usage data: what's actually valued vs. theoretically valuable

Amplitude, Mixpanel, and similar tools make this accessible. Spend time with your data before every major decision.

Often, the answer to "should we build X?" is already visible in user behavior—you just need to look.
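As a sketch of the funnel analysis mentioned above (event names and data are invented for illustration), drop-off can be computed directly from per-user event sets exported from your analytics tool:

```python
# Hypothetical per-user event sets from an analytics export
user_events = {
    "u1": {"signup", "create_project", "invite_teammate"},
    "u2": {"signup", "create_project"},
    "u3": {"signup"},
    "u4": {"signup", "create_project"},
}

funnel_steps = ["signup", "create_project", "invite_teammate"]

def funnel_counts(events: dict, steps: list) -> list:
    """Count users who completed each step plus every step before it."""
    counts = []
    remaining = set(events)
    for step in steps:
        remaining = {u for u in remaining if step in events[u]}
        counts.append(len(remaining))
    return counts

counts = funnel_counts(user_events, funnel_steps)
for step, n in zip(funnel_steps, counts):
    print(f"{step}: {n} users")  # the biggest drop marks the leak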


11. Competitive Analysis

Study competitors not to copy them but to understand the solution space:

  • What jobs are they solving?
  • What tradeoffs are they making?
  • What gaps exist?

Use their products yourself—signup, onboard, try to accomplish real tasks.

Don't Forget Non-Obvious Competitors

Also study the spreadsheets, workarounds, and status quo behaviors that your product will replace. Understanding what "good enough" looks like helps you understand what "significantly better" requires.


12. Assumption Mapping

Before diving into discovery, map your assumptions:

  1. What do you believe about users, problems, and solutions?
  2. Which assumptions are riskiest—most uncertain AND most consequential if wrong?
  3. Focus discovery on testing those first

The goal of discovery isn't to learn everything; it's to derisk the decisions that matter. If you're wrong about your core user, that's catastrophic. If you're wrong about button color, who cares. Prioritize your learning accordingly.
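A minimal way to operationalize this prioritization (the assumptions and 1-5 scores below are made up for illustration) is to score each assumption on uncertainty and consequence-if-wrong, then test the highest products first:

```python
# (assumption, uncertainty 1-5, consequence-if-wrong 1-5) -- illustrative
assumptions = [
    ("Teams will pay per seat", 4, 5),
    ("Users prefer dark mode by default", 3, 1),
    ("Admins are the buyers", 5, 5),
    ("Onboarding fits in five minutes", 2, 3),
]

# Risk = uncertainty x consequence; test the riskiest assumptions first
ranked = sorted(assumptions, key=lambda a: a[1] * a[2], reverse=True)

for name, uncertainty, impact in ranked:
    print(f"{uncertainty * impact:>2}  {name}")
```

The catastrophic-but-uncertain items float to the top; the button-color class of assumptions sinks to the bottom, where it belongs.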
