As an e-commerce advertiser and marketer, your business objectives depend heavily on the success of your ad campaigns on the Meta ad platform. But as we discussed in part 1 of our 3-part blog series, consumer buying journeys are diverse and increasingly difficult to track in a cookieless, privacy-focused environment. That makes it important to rigorously test every aspect of your Meta campaigns.
Without real-time insights from the Conversions API (C-API), a server-side tracking solution, you can't test which ad campaigns perform best or make informed decisions about your marketing strategy.
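To make this concrete, here is a minimal sketch of what a server-side C-API call can look like: a single Purchase event posted to Meta's Conversions API. The pixel ID, access token, customer email, and order values are placeholders, and the Graph API version is an assumption; check Meta's Conversions API documentation for the exact fields your integration needs.

```python
import time
import hashlib
import requests

# Placeholders: substitute your own pixel ID and access token.
PIXEL_ID = "YOUR_PIXEL_ID"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
API_VERSION = "v19.0"  # assumption; use the version your integration targets

def hash_email(email: str) -> str:
    """Meta expects customer identifiers normalized and SHA-256 hashed."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# A single server-side Purchase event.
payload = {
    "data": [
        {
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {"em": [hash_email("customer@example.com")]},
            "custom_data": {"currency": "USD", "value": 129.99},
        }
    ],
}

resp = requests.post(
    f"https://graph.facebook.com/{API_VERSION}/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"events_received": 1, ...}
```

Because the event is sent from your server rather than the browser, it is not lost to blocked cookies or client-side pixel failures.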
But choosing the right C-API for your business objectives can be challenging, given the range of options available: basic, advanced, and predictive solutions.
In this article, we aim to shed light on conducting A/B tests to measure the performance of your ad campaigns with different C-API solutions.
This will help you evaluate your C-API options effectively and drive better consumer engagement, increasing conversions, sales, and revenue from your ads.
Let’s dive in!
Before we begin: if you need a refresher on the distinctions between basic, advanced, and predictive C-API solutions, check out this blog.
For a quick overview of how Angler AI's Predictive C-API handles sophisticated use cases that basic or advanced C-APIs do not support, consider the following business objectives and success criteria:
Now let's get started with setting up A/B tests in the Meta Ads Manager Experiments tool.
When you create an A/B test, you want to compare two versions of an ad strategy by changing only one variable at a time, such as the events you're optimizing for, ad images, ad text, target audience, or placement.
Meta will show each version of your ad to a segment of your audience, ensuring nobody sees both, and then determine which version performs best. Meta will also declare the winner of the test and will provide you with the statistical significance of the results.
In other words, it indicates how likely you would be to get the same winner if you reran the same test.
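Meta calculates this significance for you, but the underlying idea is standard. As a rough illustration (not Meta's internal methodology, which is not public), here is a sketch of a two-proportion z-test on hypothetical conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: variant A converts at 1.2%, variant B at 1.0%.
p_value = two_proportion_z_test(conv_a=600, n_a=50_000, conv_b=500, n_b=50_000)
print(f"p-value: {p_value:.3f}")  # smaller = more confident the winner repeats
```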
Before selecting a variable, formulate a hypothesis for your test.
For example, you might hypothesize that campaigns optimized with Angler's predictive C-API will outperform your current campaign optimization option (e.g., a basic or advanced C-API solution) on a cost-per-order (or purchase ROAS) basis.
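For reference, both metrics in that hypothesis are simple ratios, sketched below with purely hypothetical spend, order, and revenue figures:

```python
def cost_per_order(spend: float, orders: int) -> float:
    return spend / orders

def purchase_roas(revenue: float, spend: float) -> float:
    return revenue / spend

# Hypothetical campaign totals, purely for illustration.
spend, orders, revenue = 5_000.00, 250, 20_000.00
print(f"Cost per order: ${cost_per_order(spend, orders):.2f}")  # $20.00
print(f"Purchase ROAS: {purchase_roas(revenue, spend):.1f}x")   # 4.0x
```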
To create an A/B test, you can either duplicate an existing campaign, ad set, or ad and change a variable, or compare two existing campaigns or ad sets.
If you duplicate an existing campaign, the ad budget is also duplicated. Otherwise, the current ad budgets are used, which you can change in Meta Ads Manager at the ad set level.
We recommend using the same budget for both versions in a test to ensure a fair comparison. A/B testing can then measure the performance of each strategy on a cost per result basis or cost per conversion lift basis.
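If you manage campaigns programmatically, you could sanity-check budget parity with a quick Marketing API call like the sketch below. The ad set IDs, access token, and API version are placeholders, and it assumes both ad sets use a daily budget; note that the Graph API reports daily_budget in your account's minor currency units (e.g., cents).

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
API_VERSION = "v19.0"               # assumption
AD_SET_IDS = ["VARIANT_A_AD_SET_ID", "VARIANT_B_AD_SET_ID"]  # placeholders

budgets = {}
for ad_set_id in AD_SET_IDS:
    resp = requests.get(
        f"https://graph.facebook.com/{API_VERSION}/{ad_set_id}",
        params={"fields": "name,daily_budget", "access_token": ACCESS_TOKEN},
    )
    resp.raise_for_status()
    data = resp.json()
    # daily_budget is returned in minor currency units (e.g., cents).
    budgets[data["name"]] = int(data["daily_budget"])

a, b = budgets.values()
print("Budgets match: fair comparison." if a == b else f"Budget mismatch: {budgets}")
```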
We also recommend you conduct A/B tests when you want to measure changes to your advertising or quickly compare two different ad strategies.
We do not recommend informal testing, such as manually turning ad sets or campaigns on and off. This can lead to inefficient ad delivery and unreliable test results.
For example, informal testing can lead to overlapping audiences, whereas a proper A/B test ensures your audiences are evenly split and statistically comparable.
Put another way: in an A/B test, the same user will not see both variants during the testing period.
For instance, consider the A/B test setup below:
The challenger (variant A) optimizes the Purchase event using Angler's predictive C-API, while the control (variant B) optimizes the Purchase event using either a basic C-API solution (e.g., Shopify) or an advanced C-API solution.
Edit one of the campaigns (let's say it is called Brand | Broad Interests | Angler Pixel).
Scroll down to the middle where you will see the A/B test option, then click on the Create A/B test button.
You should see a pop-up screen like the one shown below, provided you already have both campaigns set up.
Now, select another existing ad for the other variant.
From the drop-down menu, select the campaigns that are going to be used as variants A and B, respectively.
When done, publish the test.
When you run A/B tests on Meta, you will see results readouts that identify:
In the example below, there is an 84% chance you would get the same winner (Angler's predictive C-API) if you ran this test again.
Furthermore, inside Meta Ads Manager, you can see performance by attribution window: 1-day view, 1-day click, 7-day click, and 28-day click.
The campaign shown above was optimized for a 7-day click attribution window.
Interestingly, according to Facebook's attribution, Angler and business-as-usual (BAU) performed similarly on a 1-day click basis (also known as in-market buyer optimization), but Angler began outperforming BAU between days 2 and 7 and continued to do so through day 28.
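If you export your own results, you can surface the same pattern by comparing cost per order window by window. The sketch below uses purely hypothetical placeholder numbers, not results from the test above:

```python
# Hypothetical spend and attributed orders per window -- placeholders only.
SPEND = {"angler": 10_000.00, "bau": 10_000.00}
ORDERS_BY_WINDOW = {
    "1-day click":  {"angler": 180, "bau": 178},
    "7-day click":  {"angler": 320, "bau": 265},
    "28-day click": {"angler": 410, "bau": 330},
}

for window, orders in ORDERS_BY_WINDOW.items():
    # Cost per order for each variant within this attribution window.
    cpo = {variant: SPEND[variant] / n for variant, n in orders.items()}
    leader = min(cpo, key=cpo.get)
    print(f"{window:>13}: angler ${cpo['angler']:.2f} vs bau ${cpo['bau']:.2f}"
          f" -> lower CPO: {leader}")
```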
To effectively evaluate and leverage C-API solutions for your ad campaigns on the Meta platform, follow these concluding action steps:
As a performance marketer, the success of your ad campaigns on the Meta ad platform is crucial for achieving your business goals.
Conducting A/B tests is essential to evaluate the performance of different server-side integration (C-API) solutions for your ad campaigns.
By following the steps in this guide to conduct A/B tests, you can measure the effectiveness of two or more server-side integrations (C-APIs) and identify which solution best suits your marketing goals and requirements.
And finally, make sure to join us for the third part of our blog series, where we'll learn how to measure performance across different ad platforms (Meta, Google Ads, etc.), CRMs, and other marketing channels to paint a holistic picture of your marketing performance and make macro budgeting decisions.
Book a demo to learn more, or try our Predictive C-API free for 30 days, and start optimizing your paid media spend with AI!