A/B Testing: What Works in the New Zealand Market?

One of the huge advantages of digital communications is undoubtedly the ability to test and then adapt your approach based on audience response.

Internationally, marketers often won’t deploy a campaign without first checking that they’ve chosen the optimal subject line, messaging, calls to action, layout and images by sending it to a random segment of their multi-million-member database, and monitoring the reaction.

Here in New Zealand, that can be more of a challenge. After all, in this country more modest five- or six-figure customer databases are the norm, making it difficult to achieve test sample sizes large enough to provide statistically robust results.

Kiwi marketers also tend to work with smaller budgets; in many cases they simply don’t have the luxury of producing multiple creative executions.

The appetite for testing in New Zealand does, however, remain strong, with marketers here understanding the value it provides in optimising results.

Here’s our advice on the approaches we commonly use with clients to ensure robust testing given the constraints of smaller databases and budgets. These are simple but effective ways to start improving your programme of activity.


1) Apply the 10/10/80 or 50/50 rule

In the 10/10/80 pre-deployment approach, one version of an email is sent to 10% of the database, an alternative is sent to another 10%, and the “winning” version is then deployed to the final 80% based on what was learned. This approach is commonly used to test subject lines and can be very effective at optimising response. However, for the result to be statistically significant, you do need a minimum number of customers in your campaign.
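To make that last point concrete, here’s a minimal sketch, in Python with invented numbers, of how you might check whether the difference in open rates between two 10% test cells is bigger than chance alone would explain, using a standard two-proportion z-test:

```python
import math

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Is the difference in open rates between two test cells bigger
    than chance alone would explain? Standard two-proportion z-test."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis of no real difference
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Invented figures: two 10% cells of 5,000 from a 50,000-record database
z, p = two_proportion_z_test(opens_a=900, sent_a=5000, opens_b=1020, sent_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```

With smaller cells, the same open-rate gap produces a much larger p-value, which is exactly why database size matters here.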

If you have smaller numbers, the 50/50 rule may be more appropriate. In this approach, two different versions of the same email are deployed to the entire audience, split evenly, and the learnings are then applied to the next campaign. This is more useful for testing time of day or day of the week than for subject lines.
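For either rule, the test cells need to be genuinely random. A minimal sketch, assuming your database is simply a list of email addresses (the addresses and record count here are hypothetical):

```python
import random

# Hypothetical database of customer email addresses
database = [f"customer{i}@example.co.nz" for i in range(50_000)]

def split_database(records, proportions, seed=42):
    """Shuffle the database, then cut it into random segments.
    Use (0.1, 0.1, 0.8) for the 10/10/80 rule or (0.5, 0.5) for 50/50."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed = reproducible split
    segments, start = [], 0
    for i, p in enumerate(proportions):
        # The last segment takes whatever remains, so no record is dropped
        end = len(shuffled) if i == len(proportions) - 1 else start + round(p * len(shuffled))
        segments.append(shuffled[start:end])
        start = end
    return segments

cell_a, cell_b, remainder = split_database(database, (0.1, 0.1, 0.8))
print(len(cell_a), len(cell_b), len(remainder))  # 5000 5000 40000
```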


2) Mix it up depending on your goal

We recently helped a client test the best day of the week to send their customer emails using a 20/20/20/20/20 Monday-Friday test with a live campaign.

Random segments of their database were sent the same email with all other factors controlled – the only difference was the day of the week the email was sent. This provided some valuable insight into which days their customers are most likely to engage with emails.
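Once the results of a test like this come back, comparing the segments is straightforward. A sketch with invented open counts (the 10,000-per-segment figures are hypothetical):

```python
# Hypothetical results of a 20/20/20/20/20 day-of-week test:
# each segment received the identical email, sent on a different day.
results = {
    "Monday":    {"sent": 10_000, "opens": 1_850},
    "Tuesday":   {"sent": 10_000, "opens": 2_100},
    "Wednesday": {"sent": 10_000, "opens": 2_040},
    "Thursday":  {"sent": 10_000, "opens": 1_920},
    "Friday":    {"sent": 10_000, "opens": 1_600},
}

for day, r in results.items():
    print(f"{day:<10} open rate: {r['opens'] / r['sent']:.1%}")

best = max(results, key=lambda d: results[d]["opens"] / results[d]["sent"])
print(f"Best-performing send day: {best}")
```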

Of course, we have to be careful to review this on an ongoing basis, as consumer behaviour does change over time. For example, the rise of smartphones has meant that people are more likely to check emails on the go and at weekends.


3) Keep track of trends over time

Whichever way you choose to go, make sure you keep an eye not just on individual campaigns, but also on how the channel is performing for you as a whole over time.

It is helpful to take a step back on a regular basis and look across all your activity to see what you can learn, and how you might optimise future activity. 
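One simple way to do this is to keep a running log of each campaign’s headline metrics and smooth out campaign-to-campaign noise with a rolling average. A sketch with invented open rates:

```python
# Hypothetical log of campaign open rates over a year, in send order
open_rates = [0.21, 0.19, 0.22, 0.20, 0.18, 0.17, 0.19, 0.16, 0.15, 0.16]

def rolling_average(values, window=3):
    """Smooth campaign-to-campaign noise so the underlying trend is visible."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

trend = rolling_average(open_rates)
print([f"{r:.1%}" for r in trend])
# A steadily falling trend is a prompt to revisit content, frequency or list health.
```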


If you're interested in learning more about which approach to testing may work best for you, we can help you review your database and develop a framework to manage the process and meet your objectives. Get in touch to find out more.
