
Jun 07, 2008

Comments


Tim Wilson

Two other points to add on:

Depending on the size of your list and the time you have to execute the campaign, ideally, you would do the A/B test with a subset of the list, determine which performed better, and then execute the full campaign with the better-performing message/copy. This requires that you have a large enough overall list so that you still have enough people left over after the A/B test to make it worthwhile.
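That subset-then-rollout flow can be sketched in a few lines. This is a hypothetical sketch, not something from the post — `split_for_test` and its parameters are my own names, and it assumes your list is just a Python list of addresses:

```python
import random

def split_for_test(recipients, test_fraction=0.2, seed=None):
    """Randomly carve off a test pool, split it into A and B cells,
    and hold the remainder back for the winning message."""
    rng = random.Random(seed)
    shuffled = recipients[:]          # don't mutate the caller's list
    rng.shuffle(shuffled)
    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2
    cell_a = shuffled[:half]
    cell_b = shuffled[half:test_size]
    holdout = shuffled[test_size:]    # gets the better performer later
    return cell_a, cell_b, holdout
```

Passing a fixed `seed` makes the split reproducible, which is handy if you ever need to reconstruct who was in which cell.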

And...defining "enough" goes to the overall power and significance of the results. It seems awfully common for marketers to simply look for the higher percentage in an A/B test and jump to the conclusion that one message performed better than the other. If you understand that part of your results is due to "noise" (it always is, even in the most controlled of experiments), then you know that both the size of the test and the magnitude of the difference you see need to be factored into assessing the results. JT Buser at Bulldog Solutions put together a couple of free, downloadable Excel spreadsheets to help with that assessment: http://www.bulldogsolutions.com/ExcelABSplitcalculator/.
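For anyone without Excel handy, the standard way to do that assessment is a two-proportion z-test, which needs only the Python standard library. A minimal sketch (the function name and the example numbers below are my own, not from the spreadsheets):

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the observed lift more than noise?
    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

With 5,000 recipients per cell, 500 conversions vs. 600 is comfortably significant at the 5% level, while 500 vs. 510 is indistinguishable from noise — exactly the "higher percentage" trap described above.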

Tamara Gielen

Hi Soeren,

Thanks for your comment, I totally agree with you - splitting a list based on opt-in date is definitely going to skew your data. The best way to split a list is to do a random split. If your email service provider doesn't offer the option to do a random split, I'm pretty sure Excel can do it for you.
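If neither your ESP nor Excel is convenient, a random split is only a few lines in any scripting language. A hypothetical sketch in Python (`random_split` is my own name for it):

```python
import random

def random_split(recipients, seed=None):
    """Shuffle the full list, then deal it into two halves --
    no ordering by opt-in date or anything else."""
    rng = random.Random(seed)
    shuffled = recipients[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]
```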

Tamara

Soeren Sprogoe

Hi Tamara,

first time reader, first time poster here :-)

Excellent article! However, there's one point where I strongly disagree:

If you order your list by subscription date, and then cut it in half, you'll get:

- List A with either your best customers, or a ton of deadweight from dead-end email addresses that haven't bounced for some reason.

- List B with all the newest email addresses. These might be extremely ready to convert as your company/product is still fresh in their minds, or they might not yet be ready to convert.

Which of these you get in lists A and B very much depends on your industry and recruitment strategy. But my point is that splitting by subscription date will give you a completely wrong result. Try sending out exactly the same email to these two lists and see what happens.

To me, the most important part of a valid A/B test is how you do the split. But then again, I work as a Web Analyst, so I gotta say that my job is the most important one :-)

