What to do when your experiments are not valid?

How can you be sure you can trust the results? Here are some common cases where your results may not be valid:

Contaminated data – When several experiments run in parallel and their test groups overlap, it becomes difficult to tell which experiment caused the change in key performance indicators. This is where experimentation infrastructure becomes extremely important. Best practice is to label each customer with the variant they experience, with rules that prevent assigning conflicting variants to the same user.
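One common way to enforce such rules (a sketch with invented experiment names, not a description of any particular platform) is to group experiments into mutually exclusive "layers": a user is deterministically bucketed into exactly one experiment per layer, so two experiments that touch the same surface can never hand the same user conflicting variants.

```python
# Sketch of conflict-aware variant assignment. All experiment ids,
# layers, and variants below are hypothetical examples.
import hashlib

EXPERIMENTS = {
    # experiment id -> (layer, variants)
    "checkout_button_color": ("checkout", ["control", "blue", "green"]),
    "checkout_one_page":     ("checkout", ["control", "one_page"]),
    "homepage_banner":       ("homepage", ["control", "banner_v2"]),
}

def assign(user_id: str) -> dict:
    """Return {layer: (experiment, variant)} for a user.

    Within each layer the user lands in exactly one experiment, so
    conflicting variants from the same layer are never mixed.
    """
    by_layer: dict[str, list[str]] = {}
    for exp, (layer, _) in EXPERIMENTS.items():
        by_layer.setdefault(layer, []).append(exp)

    assignments = {}
    for layer, exps in sorted(by_layer.items()):
        # Deterministic hash: the same user always gets the same bucket.
        h = int(hashlib.sha256(f"{user_id}:{layer}".encode()).hexdigest(), 16)
        exp = sorted(exps)[h % len(exps)]              # one experiment per layer
        variants = EXPERIMENTS[exp][1]
        variant = variants[(h // len(exps)) % len(variants)]
        assignments[layer] = (exp, variant)
    return assignments
```

The deterministic hash doubles as the "label": logging `assign(user_id)` alongside each KPI event makes it unambiguous which experiment could have influenced that user.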

Small sample – Recommendations based on a small sample of data can easily be skewed by a few outliers. The required sample size for an experiment can be calculated from the estimated magnitude of the change and the variability in the data. Using a confidence level below 95% can lead you to believe that a random coincidence is a repeatable phenomenon. Obtaining an appropriate sample size can be difficult, especially for low-traffic products; in that case, running fewer tests over longer periods, or relying on qualitative studies, may be the best way to obtain information.

Insufficient time – Running an experiment over a short period (even with a large enough sample size) can cause you to miss the natural usage patterns of your product, capturing only peaks or troughs instead of a full usage cycle.

External events – Competitors’ sales, holidays, and macroeconomic events (such as a pandemic) will affect the validity of the data you collect. Avoid running experiments when you can predict such events, or remove data collected during those periods from your analysis.
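The second option, removing affected periods, can be sketched as a simple filtering step before analysis (the dates and helper names below are illustrative assumptions, not from the article):

```python
# Drop observations collected during known external events
# (holidays, competitor sales) before analysing experiment results.
from datetime import date

# Hypothetical exclusion windows, (start, end) inclusive.
EXCLUDED_WINDOWS = [
    (date(2023, 11, 24), date(2023, 11, 27)),  # Black Friday weekend
    (date(2023, 12, 24), date(2023, 12, 26)),  # Christmas
]

def is_clean(day: date) -> bool:
    """True if the day falls outside every excluded window."""
    return not any(start <= day <= end for start, end in EXCLUDED_WINDOWS)

def filter_observations(observations):
    """Keep only (day, value) pairs recorded on clean days."""
    return [(day, value) for day, value in observations if is_clean(day)]
```

Note that dropping days shrinks the sample, so the required sample size from the previous point should be rechecked after filtering.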

This is the second part of a series on product experimentation and growth. In the next article I will discuss “How to create a powerful experimentation system”.
