AdWords Experiments: Is Your Campaign a Jekyll or a Hyde?

July 17, 2018

Have you been itching to change up an AdWords campaign but don’t want to rock the boat too much? Want to see if switching a bid strategy or ad delivery can produce different results? Then you should check out AdWords drafts and experiments.  

AdWords experiments allow you to create a copy of a campaign, change a variable or setting in that copy, and test it on a portion of your traffic. After a set period of time, you can analyze the results to see whether the draft change produced an overall net gain for your campaign, or whether it was merely one of life’s many learning experiences.
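The walkthroughs below use the AdWords web UI, but the same drafts-and-trials workflow is also exposed through the AdWords API. Here is a minimal sketch using the googleads Python client; the service and field names follow the v201809 DraftService and TrialService, but treat it as an outline to verify against the API docs rather than a drop-in script.

```python
# Sketch only: create a draft of a campaign, then promote it to a
# trial that takes 50% of traffic. Verify field names against the
# v201809 AdWords API docs before running.
from googleads import adwords

client = adwords.AdWordsClient.LoadFromStorage()  # reads googleads.yaml
BASE_CAMPAIGN_ID = 1234567890  # placeholder campaign ID

# 1. Create a draft (the editable copy of the campaign).
draft_service = client.GetService('DraftService', version='v201809')
draft = draft_service.mutate([{
    'operator': 'ADD',
    'operand': {
        'baseCampaignId': BASE_CAMPAIGN_ID,
        'draftName': 'Bid strategy draft',
    },
}])['value'][0]

# 2. Make your one change to the draft campaign here,
#    using draft['draftCampaignId'] with the usual campaign services.

# 3. Promote the draft to a trial that splits traffic 50/50.
trial_service = client.GetService('TrialService', version='v201809')
trial_service.mutate([{
    'operator': 'ADD',
    'operand': {
        'draftId': draft['draftId'],
        'baseCampaignId': BASE_CAMPAIGN_ID,
        'name': 'Bid strategy experiment',
        'trafficSplitPercent': 50,
    },
}])
```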

Below are ways, some obvious and some not so obvious, that you can use AdWords experiments to confirm or disprove your advertising hypotheses. In no time at all, you will feel like a scientist firing up tests in the lab (in a non-Breaking Bad sort of way).

Bid Strategies 

One of the most popular uses for experiments, and the one AdWords prompts you to try in the settings menu, is testing different bidding strategies. A colleague recently ran an experiment pitting Target CPA against Manual CPC bidding to see if the former would improve performance. She found that the results largely depended on each campaign’s conversion volume: Target CPA only worked effectively in high-volume campaigns (at least 30 conversions in the past 30 days). We ultimately used Target CPA in four of the six campaigns and kept Manual CPC bidding in the lower-volume ones. Because the experiment never committed 100% of traffic, we could take this more selective approach and pick up extra learning and discovery along the way.
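If you want to apply that conversion-volume heuristic across a whole account, a few lines of code will do it. This is a minimal sketch: the campaign names and totals are made up, and in practice you would pull the 30-day conversion figures from an AdWords report.

```python
# Decide which campaigns are reasonable Target CPA candidates using
# the heuristic from the experiment above: at least 30 conversions
# in the past 30 days. All campaign data here is illustrative.

CONVERSION_THRESHOLD = 30  # conversions in the past 30 days

campaigns = {
    "Brand - Core": 112,
    "Non-Brand - Broad": 47,
    "Competitor Terms": 12,
    "Display Remarketing": 8,
}

def recommend_bid_strategy(conversions_30d: int) -> str:
    """Return a bid strategy recommendation based on conversion volume."""
    if conversions_30d >= CONVERSION_THRESHOLD:
        return "Target CPA"
    return "Manual CPC"

for name, conversions in campaigns.items():
    print(f"{name}: {conversions} conversions -> "
          f"{recommend_bid_strategy(conversions)}")
```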

Ad Rotation 

Ad rotation can be another great use for your diabolical AdWords experiments. Recently, we watched a top-performing campaign’s conversion rate dip and its CPA rise. We suspected this was due, in part, to a low Quality Score stemming from a below-average expected CTR. Since we had been testing a few pieces of ad copy on the “Rotate Indefinitely” setting with no consistent winner, we ran an experiment in which half the traffic stayed on that setting while the other half switched to “Optimize for Best Performing Ad”. After more than a month, the “Optimize” arm showed a 44% lower CPA. When we made the switch in the actual campaign, performance promptly picked up. Hypothesis: confirmed.
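Before acting on a result like that, it is worth checking that the gap isn’t just noise. Here is a minimal sketch that compares CPA across the two arms and runs a two-proportion z-test on conversion rate; all of the click, conversion, and cost figures are invented for illustration.

```python
import math

# Compare the control and experiment arms of an ad rotation test.
# Plug in your own report data; these numbers are made up.

def cpa(cost: float, conversions: int) -> float:
    """Cost per acquisition."""
    return cost / conversions

def conversion_rate_z_test(clicks_a, conv_a, clicks_b, conv_b):
    """Two-proportion z-test on conversion rate; returns (z, two-sided p)."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

control = {"clicks": 4200, "conversions": 110, "cost": 9600.0}  # Rotate Indefinitely
variant = {"clicks": 4150, "conversions": 168, "cost": 8100.0}  # Optimize for Best Performing Ad

cpa_control = cpa(control["cost"], control["conversions"])
cpa_variant = cpa(variant["cost"], variant["conversions"])
print(f"Control CPA: ${cpa_control:.2f}")
print(f"Variant CPA: ${cpa_variant:.2f} ({1 - cpa_variant / cpa_control:.0%} lower)")

z, p = conversion_rate_z_test(control["clicks"], control["conversions"],
                              variant["clicks"], variant["conversions"])
print(f"Conversion rate z = {z:.2f}, p = {p:.4f}")
```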

Targeting 

Our final example is less traditional, but it shows how unique scenarios can benefit from a little creative thinking, mad-scientist style. A few months back, a client came to me with five sets of display ads they wanted to target to the same audience as an existing campaign that had only one set of ads, with the traffic split evenly between the original and new ad sets. Running all of these ads in the same ad group would have cut the original campaign’s share of traffic to just a sixth, so a colleague suggested an experiment as a workaround. We set up a draft of the original campaign, deleted the old ads, and added the five new sets, which let us split the traffic 50/50 as requested. That way we avoided both drastically reducing traffic to the original set and the possibility of competing against ourselves for placements. The scientific method at work!
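The arithmetic behind the workaround is simple enough to sanity-check in a few lines. The ad set counts come from the example above; everything else is just division.

```python
# Compare the original ad set's traffic share under the two setups.

original_sets = 1
new_sets = 5

# Option A: all ads in one ad group with even rotation.
total = original_sets + new_sets
print(f"Same ad group: original set gets {original_sets / total:.1%} of traffic")

# Option B: a 50/50 experiment where the draft holds only the new sets.
experiment_split = 0.50
print(f"Experiment: original set gets {1 - experiment_split:.0%}, "
      f"each new set gets {experiment_split / new_sets:.0%}")
```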

The above examples represent just three methods for using experiments in your accounts to both optimize performance and divide traffic. A useful rule to keep in mind is to make only one meaningful change between your draft and the original campaign, as this makes it easy to isolate the variable affecting performance. As with any A/B test, too many variables can muddy the waters and invalidate the experiment. With that in mind, strap on your goggles, heat up your Bunsen burner, and start experimenting! 
