A/B Testing in NationBuilder with Optimizely & List Splitter

Over the past year and a half, we’ve worked with a number of clients to run increasingly sophisticated A/B tests on their bulk email programs, testing calls-to-action, different messages for different users, and geographically and demographically differentiated messaging.

There are a handful of ways to perform A/B tests when using NationBuilder. Below I’ve highlighted some of our experience using different types of A/B tests to improve clients' results.

[Image: Fair Work Saskatchewan campaign site]

Page-Level Testing with Optimizely

We’re working with a coalition of labour groups in Saskatchewan who are organizing to prevent the passage of Bill 85, a radical overhaul of labour law in the province.  

For this project we used Optimizely to test a series of site elements in order to improve key campaign conversions. Optimizely works by splitting your site traffic into buckets, each of which sees a different variation of the same page. Once one variation is clearly outperforming the others, you can implement it as the default.
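Optimizely handles this assignment for you via its JavaScript snippet, but the underlying idea is simple to sketch. Here’s a minimal Python illustration of deterministic, weighted bucketing, with hypothetical variation names and a 50/50 split; this isn’t Optimizely’s actual implementation:

    import hashlib

    # Hypothetical test: two variations of a page, split 50/50.
    VARIATIONS = [("original", 0.5), ("new_headline", 0.5)]

    def assign_variation(visitor_id: str) -> str:
        # Hash a stable visitor ID to a number in [0, 1) so the same
        # visitor always lands in the same bucket across visits.
        digest = hashlib.md5(visitor_id.encode()).hexdigest()
        point = int(digest, 16) / 16 ** 32
        cumulative = 0.0
        for name, weight in VARIATIONS:
            cumulative += weight
            if point < cumulative:
                return name
        return VARIATIONS[-1][0]

    print(assign_variation("visitor-8675309"))  # same ID, same bucket, every time

Deterministic assignment matters because a visitor who saw a different page on every visit would contaminate both buckets.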

With Fair Work Saskatchewan we were able to increase the petition's signature conversion rate by 8.5%, and to lift conversions in a number of other places on the site.
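When you measure a lift like that yourself, it’s worth checking that it isn’t just noise. Here’s a rough sketch of a two-proportion z-test in Python; the sample sizes and conversion counts are made up for illustration, not Fair Work Saskatchewan’s actual data:

    from math import sqrt
    from statistics import NormalDist

    def lift_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        # Two-proportion z-test: how likely is a gap this large if the
        # two variations actually convert at the same underlying rate?
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (conv_b / n_b - conv_a / n_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

    # Hypothetical numbers: 200 of 2,000 sign in variation A, 240 of 2,000 in B.
    print(lift_p_value(conv_a=200, n_a=2000, conv_b=240, n_b=2000))

Optimizely surfaces its own significance estimates in its results dashboard, so in practice you rarely need to run this by hand.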

 

[Image: Fair Work Saskatchewan eblast test]

Bulk Email & SMS Testing Using NationBuilder's List Splitter 

I’ve grouped email and SMS together because NationBuilder's list splitter lives apart from your broadcasting tools (which is actually great, because it lets you split lists by percentages for phone banks, canvasses, and so on).

We’ve run testing programs in bulk email with a number of clients in recent months, and while I won’t recap all the results, here are some broad generalizations:

The more tests, the better the results. We started by splitting lists 10%-10%-80% and testing either subject lines or the “ask” in the two 10% lists, then sending the winning permutation to the 80% list (a sketch of this kind of split follows below). Over time we’ve moved towards smaller tests that first test subject lines (and sender names) for open rates, followed by variables in the body of the blast once the open-rate tests have produced results.
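NationBuilder performs the split for you in the control panel; purely to illustrate the mechanics, here’s a Python sketch of a reproducible 10%-10%-80% split (the addresses are placeholders):

    import random

    def split_list(people, weights=(0.10, 0.10, 0.80), seed=42):
        # Shuffle with a fixed seed so the same input always yields the
        # same split, then cut the shuffled list at the weight boundaries.
        shuffled = people[:]
        random.Random(seed).shuffle(shuffled)
        splits, start = [], 0
        for w in weights[:-1]:
            end = start + round(len(shuffled) * w)
            splits.append(shuffled[start:end])
            start = end
        splits.append(shuffled[start:])  # everything left goes to the big list
        return splits

    people = [f"supporter{i}@example.org" for i in range(1000)]
    test_a, test_b, remainder = split_list(people)
    print(len(test_a), len(test_b), len(remainder))  # 100 100 800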

As you might expect, the value of testing specific appeals is limited when engagement with supporters is weak. For instance, a small increase in donation conversions is easily outweighed by simply improving overall engagement with your supporters; the back-of-the-envelope numbers below show why.
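To make that concrete, here’s a quick calculation with made-up numbers. Tuning the ask only moves the people who already open the email, while lifting engagement moves the whole funnel:

    LIST_SIZE = 10_000

    def expected_donations(open_rate, donate_rate):
        # Donations = list size x share who open x share of openers who give.
        return LIST_SIZE * open_rate * donate_rate

    baseline = expected_donations(open_rate=0.12, donate_rate=0.020)           # 24
    better_ask = expected_donations(open_rate=0.12, donate_rate=0.022)         # ~26
    better_engagement = expected_donations(open_rate=0.18, donate_rate=0.020)  # 36

    print(baseline, better_ask, better_engagement)

On these illustrative numbers, a 10% improvement in the ask adds a couple of donations, while a six-point lift in open rate adds a dozen.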


originally published Tuesday, April 23, 2013