What is Split Testing?
Split testing is the process of running two or more variants of a campaign component and dividing your traffic between them so you can see which variant performs better. You can split-test things like ads and landing pages. So for example, you can send half of your traffic to one landing page and half to another, and then see which landing page generates more leads or sales.
Split testing is definitely something you should be doing. But I see a lot of people who go about this the wrong way. Google itself guides you through horrible methods of split testing. So let me break down four testing myths, and tell you what can be done instead.
Myth # 1
“When you’re split testing ads, you should only look at click-through rate.”
I’ve heard people say this, but it’s simply not true. We want more customers clicking on our ads, and fewer non-customers clicking on our ads. That’s why we do things like adding negative keywords to campaigns. We want to improve the relevancy of our traffic.
Sometimes you will see an ad with a really high click-through rate, but a low conversion rate. What’s happening is: that ad is attracting too many non-customers.
For some reason, they are attracted to the ad, but they aren’t going to become customers. They never were in the first place.
Meanwhile, another ad might get a lower click-through rate, and fewer clicks, but the same number of customers.
The customers are clicking on both ads. But there are more non-customers clicking on the first ad.
So if you keep the ad with the higher click-through rate, you would end up spending more for the same number of customers, because you are paying for all those extra non-customers that are clicking that ad.
When you split test your ads, cost per conversion is the most important metric. Only if the cost per conversion is about the same between ads should you then look at the click-through rate.
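Here’s a quick sketch of that math with made-up numbers. Everything below is hypothetical (the impressions, cost per click, click-through rates, and conversion counts are all invented for illustration): both ads reach the same ten customers, but the high-CTR ad also attracts extra non-customer clicks.

```python
# Hypothetical numbers: both ads shown 10,000 times at $1.00 per click
impressions, cpc = 10_000, 1.00

ads = {
    "High-CTR ad": {"ctr": 0.05, "conversions": 10},  # extra non-customer clicks
    "Low-CTR ad":  {"ctr": 0.03, "conversions": 10},  # same customers, fewer clicks
}

for name, ad in ads.items():
    clicks = impressions * ad["ctr"]
    cost = clicks * cpc
    cost_per_conversion = cost / ad["conversions"]
    print(f"{name}: {clicks:.0f} clicks, ${cost:.0f} spent, "
          f"${cost_per_conversion:.0f} per conversion")
```

Same ten customers either way, but the higher click-through rate costs $50 per conversion instead of $30. Judged by CTR alone, you’d keep the more expensive ad.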
Myth # 2
“You always need to wait for a statistically significant result.”
In statistics, you learn about confidence levels, and something needs a 95% confidence level to be statistically significant.
It’s true that you shouldn’t declare a split test winner too early.
So if you’ve had 5 conversions from one ad and 2 conversions from another, you shouldn’t necessarily declare the 5-conversion ad the winner. It’s too early, and the ad with only 2 conversions could easily end up as the better-performing ad.
But let’s say you have 35 conversions from one ad, and 33 conversions from another ad. Assuming both ads have spent about the same amount of money and gotten about the same number of clicks, this is not a statistically significant result. It’s only around a 60% level of confidence.
So should you keep running this test? No! It could take forever to achieve a statistically significant result. It might never happen. Obviously the ads are performing about the same, so just pick one and move on.
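You can sanity-check that roughly-60% figure yourself with a standard two-proportion z-test (normal approximation). The 1,000 clicks per ad below is an assumed number, since only the conversion counts are given above:

```python
from math import erf, sqrt

def confidence_one_sided(conv_a, clicks_a, conv_b, clicks_b):
    """Approximate one-sided confidence that ad A's true conversion
    rate beats ad B's, via a pooled two-proportion z-test."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

# Hypothetical: 1,000 clicks each, 35 vs. 33 conversions
print(round(confidence_one_sided(35, 1000, 33, 1000), 2))  # → 0.6
```

At 60% confidence you’re barely better than a coin flip, and closing the gap to 95% with two near-identical ads could take a very long time.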
When I’m split testing ads, I want to find an ad that is going to give me significantly better results. That doesn’t mean I’m waiting for statistical significance. Instead, I’m just looking for big differences. If I don’t see a big difference, then I’ll try something new.
Myth # 3
“Tiny differences can make a big impact when split testing ads.”
There are gurus who talk about things like using a period in an ad vs. not using a period, and that this can cause big differences in performance. I’ve seen examples of this.
If you really think something like this is worth testing, do this instead. Put two identical ads in one of your ad groups. You’ll see that even with identical ads, you will get different results. Sometimes the differences look significant. But, again, these are identical ads. The differences are just random noise: with a limited number of clicks and conversions, some variation between the two is inevitable.
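You can see the same effect without spending a dime by simulating it. This sketch gives two “identical ads” the exact same true conversion rate (the 3% rate and 500 clicks are made-up numbers) and still gets different conversion counts on every run:

```python
import random

random.seed(7)  # fixed seed so the runs are reproducible
true_rate, clicks = 0.03, 500  # two identical ads: same true conversion rate

for run in range(1, 4):
    ad_a = sum(random.random() < true_rate for _ in range(clicks))
    ad_b = sum(random.random() < true_rate for _ in range(clicks))
    print(f"Run {run}: ad A got {ad_a} conversions, ad B got {ad_b}")
```

Any gap you see between ad A and ad B here is pure chance, which is exactly the gap a “period vs. no period” test would be measuring.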
The most progress will be made when you test big changes. Test completely different ads and landing pages.
Once you find a winner that can’t be beaten…only then should you worry about testing smaller changes.
And when I say smaller changes, I mean, like, different headlines, different pictures on your landing pages, maybe different words here and there. I’m not talking about tiny changes like the punctuation in your ad or the color of a button on your landing page – these things are simply not worth testing.
Myth # 4
“You should test a lot of ads or landing pages at the same time.”
So you might put ten different ads in an ad group.
For starters, Google isn’t going to rotate 10 ads evenly for you. Even if you tell the system not to optimize your ads, some of them will get early preference and be shown more than others. And of course, this skews your results.
Even if each ad gets shown the same number of times, it’s not as efficient as testing two ads at a time.
With a 50/50 split test (just testing two ads), you can test more quickly. You find winners faster, and more of your budget starts to go towards better performing ads.
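The budget math makes the point quickly. With hypothetical numbers (a $1,000 test budget, $1.00 per click, and a 3% conversion rate, all invented for illustration), here’s how much data each ad gets:

```python
# Hypothetical: $1,000 test budget, $1.00 CPC, ~3% conversion rate
budget, cpc, conv_rate = 1_000, 1.00, 0.03

for n_ads in (10, 2):
    clicks_each = budget / cpc / n_ads
    conversions_each = clicks_each * conv_rate
    print(f"{n_ads} ads: ~{clicks_each:.0f} clicks and "
          f"~{conversions_each:.0f} conversions per ad")
```

Ten ads leave each one with about 3 conversions, which tells you almost nothing. Two ads get about 15 conversions each from the same budget, so you can make a call far sooner.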
I’ve dumped a lot on you in this post. But the bottom line is that split testing should be done with only two options at a time. Look for big improvements, don’t waste time testing small changes, and don’t wait for statistically significant results.
Check out my Advanced Google Ads training for more information about what to do, and what not to do, in your Google Ads campaigns: https://secure.adleg.com/advanced-google-ads