Five Tips for Designing Customer Discovery Experiments

As I’ve learned through my own experience and the experience of others around me, customer discovery is hard. It’s harder to talk to strangers than to your friends. It’s hard to ask these strangers the right questions. And it’s hard to handle rejection from these strangers. Yet, these strangers are the ones who will pay the bills. So, how do we make customer discovery easier? We treat it like an experiment. Here are five tips to help you do that.

1. Clarify the Objective of the Test

The first thing successful startups get right is solving an important problem for a qualified customer. This sounds easier than it is in practice. In many cases, the value we think we’re adding to the customer we think we’re serving is really just our best guess, with little or no evidence beyond a ‘gut feel’ to back it up. Recognizing this, we want to make sure that we articulate our assumptions (the problem we’re solving, the customer we’re serving) and use them as the basis for defining our tests.

In practice, this means writing out our assumptions on our business model canvases and then committing to prove each of those assumptions true or false. For example, for one product I tested, I had the following value proposition and customer assumptions:

[Figure: value proposition and customer segment assumptions from the business model canvas]

Seeing this example should raise a host of questions: Which value propositions are the most important? To which segment(s)? Do coaches/teachers/administrators in public schools or private schools behave differently? Do they have different priorities? Is one more likely to prioritize one value proposition over the other? Etc.

Identifying our assumptions allows us to turn our questions into statements, thus defining the objectives of our tests:

  1. Private/independent school principals need improved oversight of the curriculum map to do their job.
  2. Curriculum coordinators need to be able to identify gaps in the curriculum to do their job.
  3. Public school superintendents need to know which standards are being taught across their schools.
  4. Etc.

Each statement is now something whose truth or falsity we could ostensibly determine by designing good pass/fail tests.

2. Design Pass/Fail Experiments

Once we’ve identified the objectives of our tests, we want to design experiments that allow us to (in)validate our assumptions. The point of these experiments is to learn quickly, so we want to make sure we design tests that are as simple as possible to instrument and execute. For example, I might design a test like the following:

  1. Send an email (contents TBD) to 200 private school principals about improved oversight of the curriculum map.
  2. If 20 principals (10%) respond asking for more information, then the test passes. Otherwise, it fails.

Or, we might try something like this:

  1. Create an ad campaign on LinkedIn targeting superintendents. Budget $1,000. If conversion rates are less than 5%, the test fails. OR
  2. Create a landing page highlighting oversight into the standards being taught, with a button for a free trial. If there are fewer than 10 signups, the test fails.

While the specific tests are up to you to decide, notice that none of these requires much time. You do, however, want clear parameters for what constitutes passing or failing the test. Otherwise, it’s easy to convince yourself that your ‘gut feel’ was right when, in reality, you just didn’t create a meaningful test.
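One way to keep yourself honest is to write down the threshold before you run the test, so the result can’t be reinterpreted after the fact. Here is a minimal sketch of that idea; the experiment name, metric, and numbers are illustrative, not real results:

```python
# Minimal sketch: encode a pass/fail experiment as data, with the
# threshold fixed up front. All names and figures are hypothetical.

from dataclasses import dataclass


@dataclass
class Experiment:
    name: str
    metric: str        # what we measure, e.g. "response rate"
    threshold: float   # pass if observed >= threshold
    observed: float = 0.0

    def passed(self) -> bool:
        return self.observed >= self.threshold


# The email test above: 200 emails sent, pass if >= 10% respond.
email_test = Experiment(
    name="Principal email outreach",
    metric="response rate",
    threshold=0.10,
)
email_test.observed = 14 / 200  # 14 responses out of 200 emails

print(email_test.passed())  # 14/200 = 7%, below the 10% bar, so it fails
```

The point isn’t the code; it’s that the pass bar was written down before the responses came in.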

3. Test Across Channels

In the example above, I organized a few different tests across two channels (email, LinkedIn). When testing, you want to test across channels both to validate that the test passes because it is the right value proposition/customer combination and to determine whether the passing test changes based on the channel. Testing across channels also provides data on which channel(s) may be most effective, given your market.
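Comparing channels is just the same pass/fail bookkeeping with one tally per channel. A quick sketch, with made-up numbers standing in for real outreach results:

```python
# Sketch: compare the same value proposition test across channels by
# conversion rate. Channel names and figures are illustrative only.

results = {
    "email":    {"reached": 200, "converted": 14},
    "linkedin": {"reached": 500, "converted": 12},
}

# Conversion rate per channel
rates = {ch: r["converted"] / r["reached"] for ch, r in results.items()}

# Print channels from strongest to weakest
for channel, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {rate:.1%} conversion")
```

Even a table this small tells you where to spend your next dollar of testing budget.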

4. Tests Don’t Require Working Code

Organizing the tests above doesn’t require working code. Even landing pages can be created easily with good WYSIWYG editors like Wix, Weebly, or Unbounce (my favorite). It just takes some thought, creativity, resourcefulness, and initiative: traits you should already have if you want to be an entrepreneur.

5. Corroborate Test Results with Human Conversations

Steve Blank, one of the thought leaders on startups and entrepreneurship, famously tells entrepreneurs to “Get Out of the Building!” The idea is that it is impossible to learn facts – certainly at a deep level – from the comfort and confines of your office, conference room, or dorm. Instead, you need to be interacting with the human beings who will be buying your product. You should also use these digital tests as the basis for conducting good customer interviews.

Testing value proposition/customer hypotheses is hard work, but necessary. Keep in mind, if your business takes off, this will be a fundamental part of actually building a business…acquiring customers. The sooner you’re used to it, the better.

One final note about testing, from Steve Blank:

The goal of these experiments/tests is not just to collect data. Nor is it simply to get a “pass” on the pass/fail experiments…you’re looking past the data – you’re looking for insight. Did you get thrown out of sales calls time and again, but remember someone said, “Too bad you don’t sell x, because we could use a ton of those”?

Remember this as you’re planning and evaluating your tests – you might find you’re only a step to the left or right of a successful business.

Best of luck.
