Meta Ad Campaign A/B Testing: Evaluating Conversions API Options for Better Performance

As an e-commerce advertiser and marketer, your business objectives depend heavily on the success of your ad campaigns on the Meta ad platform. But as we discussed in part 1 of our 3-part blog series, consumer buying journeys are diverse and difficult to track in a cookieless, privacy-focused environment. Thus, it's important to rigorously test all aspects of your Meta campaigns.

Without testing which ad campaigns perform best using real-time insights from the Conversions API (C-API), a server-side tracking solution, you won't be able to make the best decisions for your marketing strategy.
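To make that concrete: a C-API integration sends conversion events from your server straight to Meta's Graph API instead of relying on the browser pixel alone. Below is a minimal sketch of a server-side Purchase event in Python using the `requests` library; the pixel ID, access token, and email address are placeholders you would swap for your own.

```python
import hashlib
import time

import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def hash_pii(value: str) -> str:
    """Meta requires customer identifiers to be normalized (trimmed,
    lowercased) and SHA-256 hashed before they are sent."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

payload = {
    "data": [
        {
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {"em": [hash_pii("shopper@example.com")]},
            "custom_data": {"currency": "USD", "value": 129.99},
        }
    ]
}

resp = requests.post(
    f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"events_received": 1, ...}
```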

But choosing the right C-API for your business objectives can be challenging, considering the options available: basic, advanced, and predictive C-API solutions.

In this article, we aim to shed light on conducting A/B tests to measure the performance of your ad campaigns with different C-API solutions.

This will help you evaluate various C-API options effectively and drive better consumer engagement to increase conversions, sales, and revenue on your ads.

Let’s dive in!

Before we begin, for a refresher on the distinctions between basic, advanced, and predictive C-API solutions, check out this blog.

For a quick overview of how Angler AI's Predictive C-API solution handles sophisticated use cases that basic or advanced C-API solutions do not, consider the business objectives and success criteria it supports:

How to set up A/B tests to evaluate C-API options

Now let’s walk through setting up A/B tests in the Meta Ads Manager Experiments tool.


A/B test one variable at a time

When you create an A/B test, you want to compare two versions of an ad strategy by changing only one variable at a time, such as the events you're optimizing for, ad images, ad text, target audience, or placement.

Meta will show each version of your ad to a segment of your audience, ensuring nobody sees both, and then determine which version performs best. Meta will also declare the winner of the test and will provide you with the statistical significance of the results.

This means if you rerun the same test, it will indicate how likely it is to find the same winner.
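Meta computes this readout for you, but a rough sketch of the underlying idea helps interpret it. The snippet below runs a standard two-proportion z-test on hypothetical conversion counts; it illustrates the statistics involved, not Meta's exact methodology.

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: variant A converts 380/10,000, variant B 310/10,000
p = two_proportion_z(380, 10_000, 310, 10_000)
print(f"p-value: {p:.3f}")  # smaller means the winner is less likely due to chance
```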


Choose A/B test hypothesis

Before selecting a variable to test, choose a hypothesis for your test.

For example, you might hypothesize that campaigns optimizing with Angler's predictive C-API will outperform your current campaign optimization option (e.g., a basic or advanced C-API solution) based on the cost per order (or purchase ROAS) metric.
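For clarity on those two metrics, here is a small worked example with made-up spend, order, and revenue figures:

```python
def cost_per_order(spend: float, orders: int) -> float:
    return spend / orders

def purchase_roas(revenue: float, spend: float) -> float:
    return revenue / spend

# Hypothetical two-week results for one variant
spend, orders, revenue = 5_000.00, 250, 20_000.00
print(f"Cost per order: ${cost_per_order(spend, orders):.2f}")  # $20.00
print(f"Purchase ROAS: {purchase_roas(revenue, spend):.1f}x")   # 4.0x
```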


A/B test creation

To create an A/B test, you can either duplicate an existing campaign, ad set, or ad and change a variable, or compare two existing campaigns or ad sets.

If you duplicate an existing campaign, the ad budget is also duplicated. Otherwise, the current ad budgets are used, which you can change in Meta Ads Manager at the ad set level.

We recommend using the same budget for both versions in a test to ensure a fair comparison. A/B testing can then measure the performance of each strategy on a cost per result basis or cost per conversion lift basis.
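If you manage campaigns through the Marketing API rather than the Ads Manager UI, equalizing budgets is one update per ad set. A hedged sketch follows; the ad set IDs and token are placeholders, and note that `daily_budget` is expressed in your account currency's minor units (e.g., cents).

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
ADSET_IDS = ["<variant_a_adset_id>", "<variant_b_adset_id>"]  # placeholders
DAILY_BUDGET_CENTS = 10_000  # $100.00/day for both variants

for adset_id in ADSET_IDS:
    resp = requests.post(
        f"https://graph.facebook.com/v18.0/{adset_id}",
        data={"daily_budget": DAILY_BUDGET_CENTS, "access_token": ACCESS_TOKEN},
    )
    resp.raise_for_status()
    print(adset_id, resp.json())  # {"success": true} on a successful update
```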

We also recommend you conduct A/B tests when you want to measure changes to your advertising or quickly compare two different ad strategies.

We do not recommend informal testing, such as manually turning ad sets or campaigns on and off. This can lead to inefficient ad delivery and unreliable test results.


For example, A/B testing helps ensure your audiences are evenly split and statistically comparable, whereas informal testing can lead to overlapping audiences.

Put another way: in an A/B test, the same user will not see both variants during the testing period.
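Conceptually, this mutual exclusivity works like deterministic bucketing: each user hashes into exactly one variant, so the split stays even and non-overlapping. The sketch below is a simplified illustration of the idea, not Meta's actual assignment logic.

```python
import hashlib

def assign_variant(user_id: str, test_name: str) -> str:
    """Deterministically bucket a user into variant A or B (illustrative only)."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always lands in the same bucket, never both
print(assign_variant("user_123", "capi_test"))  # stable across calls
```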

For instance, consider the A/B test setup below:

The challenger (variant A) optimizes the Purchase event using Angler’s predictive C-API, while the control (variant B) optimizes the Purchase event using either a basic C-API solution (e.g., Shopify) or an advanced C-API solution.

Set up A/B tests

Edit one of the campaigns (let's say it is called Brand | Broad Interests | Angler Pixel).

Scroll down to the middle of the page, where you will see the A/B test option, then click the Create A/B test button.

You should see a pop-up screen like the one shown below, provided you already have both campaigns set up.

Now, select another existing ad for the other variant.

From the drop-down menu, select the campaigns that are going to be used as variants A and B, respectively.

We recommend the following settings:

  • Test name: give it a descriptive name that will help you and your team identify the objective of the A/B test.
  • Key Metric: select it based on your test hypothesis, e.g., that optimizing with Angler's predictive C-API improves (lowers) cost per website purchase.
  • Toggle on Include upper funnel metrics in test report: these metrics gauge user activities that precede your key metric and can provide insights when your key metric cannot determine a winner. For instance, a test may not find a winner using cost per purchase as the key metric, but if it finds a top performer based on cost per add-to-cart, you can use that information to make decisions.
  • Start: the start date of the test. If you are ready to launch, the earliest the test can start is at midnight the next day.
  • End: the end date of the test; we recommend a 2-week test.
  • Toggle on the End test early if a winner is found option.

When done, publish the test.
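As an aside for teams that script their campaign management: Meta's Marketing API also exposes split tests as "ad studies." The sketch below outlines creating one with the settings above; the business ID, token, and campaign IDs are placeholders, and you should verify the field names against the current Ad Study documentation before relying on it.

```python
import json
import time

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
BUSINESS_ID = "YOUR_BUSINESS_ID"    # placeholder

start = int(time.time()) + 86_400  # earliest start: the next day
end = start + 14 * 86_400          # the recommended 2-week window

cells = [
    {"name": "Variant A - Angler predictive C-API", "treatment_percentage": 50,
     "campaigns": ["<variant_a_campaign_id>"]},  # placeholder campaign ID
    {"name": "Variant B - BAU C-API", "treatment_percentage": 50,
     "campaigns": ["<variant_b_campaign_id>"]},  # placeholder campaign ID
]

resp = requests.post(
    f"https://graph.facebook.com/v18.0/{BUSINESS_ID}/ad_studies",
    data={
        "name": "Purchase optimization: Angler predictive C-API vs. BAU",
        "type": "SPLIT_TEST",
        "start_time": start,
        "end_time": end,
        "cells": json.dumps(cells),
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
print(resp.json())  # {"id": "<study_id>"} on success
```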

Evaluating Performance

When you run A/B tests on Meta, you see result readouts that identify:

  • the winning variant
  • the winning margin
  • and the statistical confidence of the result.

For example, in the readout below, there is an 84% chance you would get the same winner (Angler’s predictive C-API) if you ran this test again.

Furthermore, inside Meta Ads Manager, you can see performance by attribution window: 1-day view, 1-day click, 7-day click, and 28-day click.
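Those per-window numbers can also be pulled programmatically: the Insights API accepts an `action_attribution_windows` parameter. A sketch, with the ad account ID and token as placeholders (note that Meta has deprecated the 28-day window in recent API versions):

```python
import json

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"       # placeholder
AD_ACCOUNT_ID = "act_<your_account_id>"  # placeholder

resp = requests.get(
    f"https://graph.facebook.com/v18.0/{AD_ACCOUNT_ID}/insights",
    params={
        "level": "campaign",
        "fields": "campaign_name,spend,actions",
        "action_attribution_windows": json.dumps(
            ["1d_view", "1d_click", "7d_click", "28d_click"]
        ),
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
for row in resp.json().get("data", []):
    # Each action (e.g. "purchase") reports one value per requested window
    print(row["campaign_name"], row.get("actions"))
```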

The campaign in this example was optimized for a 7-day click attribution window.

It is interesting to note that according to Facebook’s attribution, both Angler and business as usual (BAU) performed similarly on a 1-day click basis (also known as in-market buyer optimization), but Angler started outperforming BAU between days 2 and 7, continuing through day 28.

Meta Ad Campaign A/B Test Action Summary

To effectively evaluate and leverage C-API solutions for your ad campaigns on the Meta platform, follow these concluding action steps:

  1. Pick one ad variable to test at a time, e.g., ad images, ad text, target audience, or placement.
  2. Choose your A/B test hypothesis.
  3. Create the A/B test using the same budget for both variants.
  4. Pick the recommended settings for your A/B test mentioned above.
  5. Evaluate performance to choose the right C-API solution or, more generally, the winning ad campaign.


Conclusion

As a performance marketer, the success of your ad campaigns on the Meta ad platform is crucial for achieving your business goals.

Conducting A/B tests is essential to evaluate the performance of different server-side integration (C-API) solutions for your ad campaigns.

By following the simple steps in this guide to conduct A/B tests, you can measure the effectiveness of two or more server-side integrations (C-API), choose the best option to optimize your marketing goals, and identify which solution best suits your requirements.

And finally, make sure to join us again for the third part of our blog series, where we’ll learn how to measure performance across different ad platforms (Meta, Google Ads, etc.), CRMs, and other marketing channels to paint a holistic picture of your marketing performance and make macro budgeting decisions.

Ready to Supercharge Your Growth With Angler AI?

Learn more about our Predictive C-API, or start a 30-day free trial to optimize your paid media spend with AI and see the results for yourself.