Obtaining a Clean Read on Your A/B Test Results!


Running Mutually Exclusive Optimization Activities


In my last blog post, I discussed how to set yourself up for success and properly plan and prioritize your A/B testing backlog. Now we'll look at an element of test planning that is critical to running a high-velocity A/B testing program: managing potential test collisions. You and your team put an incredible amount of time and energy into getting A/B tests live on your site. Let's talk about how to get the cleanest read possible on your test results data.

The risk of not running mutually exclusive tests is pushing test winners live to the site on the basis of misleading data. Let's say you run a test that changes the call-to-action on your product page, the test variant is a winner, and you push it live to the site. It turns out that, at the same time, you were also running a test on your category page, presenting huge sale items to your loyal visitors. How did the sale-item category test influence the results of your CTA test? Did the sale items cause the CTA test to be a winner, or was it the CTA change itself?

First, it's important to understand that many testing programs simply push all of their A/B tests to 100% of site traffic. In other words, any visitor who comes to the site is qualified to enter the test, regardless of which page they land on or which site they come from. This doesn't pose an issue if you are only running one test at a time on your site. However, if you're trying to up your game and drive more testing wins, that approach just won't work. Here's the problem with that approach when running multiple tests.

Let's say you have two test ideas that you need to run at the same time, based upon business needs and marketing schedules. The first test runs on your Product Detail Page and changes the location of your primary call-to-action button. The second test runs on your Cart page and reformats your Promo Code input box.

So you run each test live on the site, and now you are pulling results to review with your business partners in an upcoming A/B testing governance meeting. You are super excited to see that the Cart test has produced a winning test variation! The reformatted promo box has driven a 15% increase in revenue per visitor compared to the control treatment. The PDP test variation, with the new CTA placement, also showed a lift in revenue per visitor, but the lift only reached 75% confidence.

During your readout of the results, a key stakeholder asks a seemingly simple question: "What impact, if any, did the PDP test have on the results of the Cart test?" You pause for a moment and realize that 100% of your site traffic experienced both the PDP and Cart tests. You reply that you will need to do a deeper dive in your analytics tool to determine that impact, and that it may take a few days to figure out. Your stakeholder is disappointed, as she needed the answer that day for an upcoming Board meeting.

Here's a test planning and architecture method you can employ fairly easily to ensure that never happens again. This method is specific to Adobe Target, but other testing tools can be used with a similar approach. You will create a profile script in Adobe Target that allows you to run mutually exclusive tests on your site. So, if a visitor qualifies for one test, you can exclude that visitor from qualifying for any other tests. In the example above, you will ensure that any visitor who qualified for the PDP test is not included in your Cart Promo Box test.

Here's an example of what the Profile script could look like in Target. In this example, a visitor is assigned a random number between 0 and 99 and then falls into either GroupA or GroupB. When you create each Activity in Target, you would select the GroupA audience for the PDP test and the GroupB audience for the Cart test.
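Below is a minimal sketch of such a Profile script, assuming a simple 50/50 split and a script saved under a name like group (so it can be referenced as user.group when building the two audiences); the script name, group names, and split are illustrative assumptions, not a prescribed implementation.

```javascript
/* Adobe Target profile script sketch (Audiences > Profile Scripts).
   Script name ("group") and group names are illustrative assumptions. */

// Only bucket the visitor once; returning the stored value afterwards
// keeps the assignment sticky across visits and activities.
if (user.getLocal('group') == null) {
    // Random integer between 0 and 99
    var randomNumber = Math.floor(Math.random() * 100);

    // 50/50 split; adjust the threshold to weight traffic differently
    var assignedGroup = (randomNumber < 50) ? 'GroupA' : 'GroupB';

    // Persist the assignment on the visitor profile
    user.setLocal('group', assignedGroup);
}

// The returned value is what your audiences reference (e.g. user.group)
return user.getLocal('group');
```

The split doesn't have to be 50/50, either; you can carve the random number into more groups, or weight the thresholds to give each test the traffic it needs.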

Running mutually exclusive tests on your site will allow you to get the cleanest read on your test results. Good luck implementing this within your testing process!

Published by Jason Boal

Jason has over 10 years of experience working on both the client and agency sides, and across Retail, Financial Services, and Non-Profit industries. He always looks forward to helping clients build upon and improve the customer digital experience. He follows a data-driven strategy, as he learns more about the customer digital experience from both quantitative and qualitative information. Jason's philosophy is, "taking that data and developing a strategy centered around people, process, and technology will lead to tremendous results."
