Test Like You Mean It
We’ve all done it. We’ve seen an interesting tactic used in a direct mail package or digital ad and immediately slotted it in as the next promising test for our program. Sometimes these tests lead to great breakthroughs in a campaign (or even to program-changing strategies!). But sometimes they distract from what we truly need to learn, waste precious testing dollars, and take up limited testing bandwidth.
So, how do we identify the tests we should be focusing our attention on? And how do we make sure the results are actionable on the backend? We’re so glad you asked!
Identify the Tests that Matter and Make Sense for YOUR Program
Consider overall program-level objectives first. What are your core goals for your program? Improving retention? Increasing donor value? Decreasing investment? Are there organizational initiatives, like new messaging or branding, that you’re being tasked with rolling out without negatively impacting performance? These big-picture goals and factors should shape your overarching testing priorities.
Consider the metrics you’re trying to impact. Start with your desired outcome and work backwards to make sure the test you develop is driving toward that outcome. Do you want to increase response rate without negatively impacting average gift? Then the test you design should not include factors that may cause donors to give less (or multiple variables that leave you wondering what impacted what). Every test you recommend should have a clearly defined hypothesis, and that hypothesis should speak to the KPIs you want to impact (or not impact, as the case may be). This sets a clear path for analysis – you’ll know exactly what to look for in the performance data and what it’s telling you.
Consider what has resonated (and not resonated) with donors in the past. Take cues from the type of content donors have responded to before. What do long-time controls and past testing wins or losses tell you about donor preferences? Do plain outer envelopes work well? Have certain types of mission stories generated more interest than others? Does iconography tend to work better than photography when talking about mission programs and services? The list of possible insights is long and varied. But jotting these things down can reveal that you’ve accumulated more content-level insights than you might think, and it can illuminate testing paths you should (or shouldn’t) consider.
Create Realistic and Balanced Testing Objectives
Just about every program would love to do both of the following when it comes to testing:
o “Really move the needle” on performance
o Drive leaps in performance while also reducing investment
But neither of these is a realistic viewpoint in isolation. The former can push us to overlook simple tests that could deliver strong performance, or to choose tests that seek to “blow up” the control too often. The latter can set the unrealistic expectation that every test must meet multiple objectives. And both perspectives can cause us to lose sight of the fact that it’s a balanced mix of tests, working together, that drives holistic performance improvements across a program.
So, it’s important that no test is viewed in isolation, and that you keep the following in balance:
Innovation and Insights – First, don’t overlook what iterative tests can offer: a clear and definitive read on a key variable. These tests should always have a place in your testing mix. But it’s true that iterative testing can’t always drive program progress at an effective pace, so sometimes a test needs to affect multiple variables. In those cases, think about how the variables work together cohesively toward one overarching testing approach. If listing out all the variables in play within one planned test reads like a disparate testing wish list, you won’t know which variables elevated or depressed performance, and the test won’t yield any insights. So make sure you aren’t pushing for too much in a single test in the name of “innovation” – that will ultimately cost you the ability to collect actionable insights.
Efficiency and Performance – It’s rare that one test can both improve performance and decrease investment. So, if you’re after both improved performance and greater efficiency in your program (as many organizations are), the goal is instead to select a mix of tests that will:
o Drive cost savings at rollout, without the expectation of depressing response or giving
o Add no cost (or only minimal cost) at rollout, with the expectation of driving enough additional revenue to more than offset any added costs
So, the next time you come across an interesting testing tactic you’re excited about, save it in an ideas file and make sure it serves the greater good of your program when, where, and how you implement it.
In Part 2 of our Testing Series, we will dive into when you need more than one test to gain holistic and actionable insights.