Ask Fuse: Testing (Part 2)
What are the most critical elements in analyzing a test?
Identifying Outliers
Uncharacteristically large gifts attributed to either the control or the test panel can produce a false win. If the average gift in your test or control looks high, check for outliers, and if they exist, remove them before comparing results.
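As a minimal sketch of what that check can look like (the gift amounts and the IQR-fence rule here are illustrative assumptions, not a prescribed method), a single major gift landing in one panel can swamp the average:

```python
import statistics

def remove_outliers(gifts, k=1.5):
    """Drop gifts outside the interquartile-range fences (Q1 - k*IQR, Q3 + k*IQR)."""
    q1, _, q3 = statistics.quantiles(gifts, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [g for g in gifts if low <= g <= high]

# Hypothetical test panel: one $5,000 gift inflates the average gift.
test_gifts = [20, 25, 30, 35, 40, 45, 50, 5000]
print(statistics.mean(test_gifts))                   # 655.625 -- looks like a big win
print(statistics.mean(remove_outliers(test_gifts)))  # 35.0 -- the truer picture
```

Whether you use IQR fences, a dollar cap, or simple inspection, the point is the same: decide on the outlier rule before you look at which panel won, and apply it to test and control alike.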
Incorporating Roll-Out Costs
While the actual cost of the test needs to be tracked against budget, test results should always be analyzed using roll-out pricing. This is especially true for direct mail campaigns where production savings will be realized with higher volumes. Be sure to swap out test costs for roll-out costs before analyzing results.
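The arithmetic behind that swap is simple but decisive. In this hypothetical (all figures are illustrative assumptions), the same package flips from a loser to a winner once test pricing is replaced with roll-out pricing:

```python
def net_revenue(revenue, quantity, unit_cost):
    """Net revenue for a panel at a given per-piece cost."""
    return revenue - quantity * unit_cost

# Hypothetical numbers: a 5,000-piece test billed at $0.95/piece,
# but the package would mail at $0.60/piece at roll-out volume.
revenue, qty = 4200.0, 5000
test_cost, rollout_cost = 0.95, 0.60

print(round(net_revenue(revenue, qty, test_cost), 2))     # -550.0 at test pricing
print(round(net_revenue(revenue, qty, rollout_cost), 2))  # 1200.0 at roll-out pricing
```

Judged at the price actually paid for the test, this package would be killed; judged at the price it would mail at in roll-out, it clears a healthy margin.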
Verifying Wins
Verify that your results are statistically valid (meaning they are likely to be repeated within a certain level of confidence). A quick search online provides free access to tools that do this. We recommend targeting at least a 90% confidence level. And for smaller test universes, check the p-value directly to evaluate significance.
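Those free online calculators typically run a two-proportion z-test on response rates. A minimal sketch of the same calculation (the panel sizes and gift counts are hypothetical assumptions for illustration):

```python
import math

def two_proportion_z(resp_a, n_a, resp_b, n_b):
    """Two-tailed two-proportion z-test on response rates; returns (z, p_value)."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)          # pooled response rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # p-value from the standard normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical panels: test mailed 10,000 with 580 gifts (5.8%);
# control mailed 10,000 with 520 gifts (5.2%).
z, p = two_proportion_z(580, 10000, 520, 10000)
print(f"z = {z:.2f}, p = {p:.3f}, confidence = {1 - p:.0%}")
```

Here the result clears the 90% confidence bar (p < 0.10) but not 95%, which is exactly the kind of outcome the retest guidance below is meant for.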
Consider the Need to Retest
Perhaps your test showed strong directional results, or even some statistical significance at a confidence level below 90%. Consider a retest at a higher quantity or during an alternative time of year. Some minor adjustments may help push performance in that next iteration.
Leveraging All Learnings
Win, lose, or draw, every test provides a learning, and rarely is that learning limited to the campaign in which it was tested. Consider how each test result contributes to your collective strategic priorities. Then leverage all learnings across your program to minimize future testing spend while maximizing impact.
If you’re looking for more guidance in analyzing test results or want to learn more of our best practices, click here to connect!