Hi there,
Thanks for reaching out! Happy to give you some of my initial thoughts on your Advantage+ question. It's a really common situation to be in – you've got something that's working well and you're rightly cautious about breaking it. The short answer is you should definitely avoid touching your main campaign.
The way Advantage+ (ASC) works means that making significant changes, like adding a bunch of new creatives, will almost certainly reset the learning phase and could easily wreck the stable performance you're seeing. Instead, the approach I'd take, and what we do for our clients, is to build a systematic, separate testing framework. This lets you find your next winning ads without risking the revenue from your main campaign. I'll walk you through how to do that below.
TL;DR:
- Do NOT add new creatives directly to your working Advantage+ campaign. It will reset the learning phase and you risk destroying its current stable performance.
- Set up a separate "Creative Testing" campaign. This should be a standard conversions campaign using Ad Set Budget Optimisation (ABO) to isolate variables and give each new creative a fair test.
- Use this testing campaign to identify statistically proven "winners" based on metrics like ROAS and CPA before even thinking about introducing them to your main campaign.
- The most important piece of advice is to treat your main scaling campaign (ASC) as sacred. Your goal is to feed it only proven winners from your testing environment, not use it as a test bed itself.
- I've included a flowchart below that visualises the testing structure, as well as an interactive calculator to help you analyse and compare creative performance.
The Advantage+ Campaign is a Black Box - Don't Shake It
First off, it’s great that you've got an Advantage+ campaign that's out of learning and delivering steady results. That's the goal, and a lot of advertisers struggle to get there. The temptation to tweak and 'optimise' is always strong, but with ASC, it's a temptation you need to resist.
You need to think of a successful ASC campaign as a finely tuned engine. The algorithm has processed a huge amount of data about your customers, your products, and your existing creatives. It has found a specific 'pocket' of performance – a combination of audiences, placements, and bidding strategies that works. When you add new creatives, you're not just adding more options; you're forcing the entire system to re-evaluate everything from scratch. You're telling the algorithm, "That stable, profitable model you built? Forget it. Here's a load of new, unproven stuff to figure out."
This is what triggers the learning phase to reset. It's not just a status message; it’s a period of high volatility and increased spend where the algorithm is essentially gambling with your money to find a new stable path. Sometimes it finds a better one, but more often than not, especially when the original campaign was already working well, performance gets worse. You can easily see your Cost Per Acquisition (CPA) double overnight and your Return on Ad Spend (ROAS) get cut in half. I've seen it happen countless times in accounts we've taken over. An advertiser has a 'golden' campaign, gets impatient, throws in a dozen new ads, and kills it dead.
The core principle here is stability. Your main, money-making campaign's job is to scale predictably. Its job is not to test new ideas. You need a different place for that, a sandpit where you can experiment without knocking over the castle you’ve already built.
You'll need a "Challenger" Testing Framework
So, if you can't touch your main campaign, how do you test new creatives? You build a separate, dedicated testing environment. This is what we call a "Challenger" or "Creative Testing" campaign. Its only job is to put new ad creatives through their paces in a controlled way to find the next winner that can eventually be graduated to your main scaling campaign.
This approach completely de-risks the process. Your main ASC campaign keeps running, bringing in steady sales. Meanwhile, on the side, your testing campaign is methodically working to find a creative that can outperform your current best ads. For instance, I remember working with a women's apparel brand that was hesitant to touch their main campaign. By implementing a separate testing framework, we were able to identify new winning creatives that ultimately helped drive their return on ad spend to 691%. You get clean, reliable data because you are isolating the variable you want to test: the creative itself.
Think about it like a Formula 1 team. They have their main race car, which is optimised to the absolute limit for winning the current race. They don't try out a brand-new, untested engine design during the Grand Prix on Sunday. That would be insane. Instead, they have a separate test track and a test car where they run new parts through rigorous trials. Only once a new part has proven, with data, that it's faster and more reliable does it earn a place on the main race car. Your advertising should operate under the exact same logic.
[Flowchart: testing campaign structure with one ad set each for Creative A, Creative B, and Creative C]
You need to structure the test properly
Setting up the testing campaign correctly is vital for getting clear results. A poorly structured test is just as bad as not testing at all, because you can't trust the data. Here’s a simple but effective way to set it up:
- Campaign Objective: Always choose 'Sales' (or 'Leads' if that's your goal). You must test using the same objective as your main campaign. Testing with a 'Traffic' or 'Engagement' objective tells you nothing about a creative's ability to actually convert a customer. It's a classic mistake that wastes a lot of money.
- Budgeting: Use Ad Set Budget Optimisation (ABO), not Campaign Budget Optimisation (CBO). This is a point of contention for some, but for pure creative testing, ABO is superior. It allows you to set a specific daily budget for each ad set, ensuring that every creative you're testing gets a fair amount of spend to prove itself. With CBO, Meta's algorithm will quickly shift the budget to the ad it *thinks* will win based on early signals, which can often kill a potentially great ad before it has a chance to gather enough data.
- Ad Set Structure: The golden rule is one creative per ad set. If you put multiple creatives in the same ad set, you're back to the CBO problem – Facebook will pick an early favourite and the other ads will barely get any impressions. By isolating each new creative in its own ad set, you force an equal test. Let's say you have three new video ads to test. You would create three ad sets, each with the exact same targeting and budget, but with a different one of the new videos in each.
- Audience Targeting: In your testing campaign, the audience should not be the variable. The creative is. Therefore, you should use a single, reliable, and broad audience for all ad sets in the test. A proven lookalike audience (like 1% of Purchasers) or a broad targeting stack that has worked for you in the past is perfect. The goal is to keep the audience consistent so that any difference in performance between the ad sets can be attributed directly to the creative.
- Budget Allocation: Your testing budget doesn't need to be huge. A good rule of thumb is to allocate around 10-20% of your main campaign's daily spend to testing. The key is to set the daily budget for each ad set high enough to get at least one or two conversions per day, based on your average CPA. If your CPA is £30, a £10/day budget per ad set won't give you meaningful data quickly. You might need to set it to £40-£50/day to get the fast feedback you need (there's a quick budget-sizing sketch just after this list).
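To make the budget maths concrete, here's a minimal sketch in Python of how I'd size a test. The 1.5 conversions-per-day default and the example numbers are illustrative assumptions; plug in your own target CPA and creative count.

```python
# Rough budget-sizing helper for a creative test.
# Assumptions (plug in your own numbers): target CPA, number of creatives,
# and how many conversions per day you want each ad set to have a shot at.

def min_daily_budget_per_ad_set(target_cpa: float, conversions_per_day: float = 1.5) -> float:
    """Daily budget needed for one ad set to plausibly land ~1-2 conversions a day."""
    return target_cpa * conversions_per_day

def total_test_budget(target_cpa: float, num_creatives: int, conversions_per_day: float = 1.5) -> float:
    """Total daily spend for the whole testing campaign (one ad set per creative)."""
    return min_daily_budget_per_ad_set(target_cpa, conversions_per_day) * num_creatives

if __name__ == "__main__":
    target_cpa = 30.0   # e.g. £30 target cost per purchase
    creatives = 3       # three new videos to test
    per_ad_set = min_daily_budget_per_ad_set(target_cpa)
    total = total_test_budget(target_cpa, creatives)
    print(f"Per ad set: £{per_ad_set:.0f}/day, total test spend: £{total:.0f}/day")
    # With a £30 CPA this comes out at roughly £45/day per ad set and £135/day
    # across three ad sets, which you can then sanity-check against the
    # 10-20% of main campaign spend rule of thumb above.
```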
By following this structure, you create a scientific experiment. The only significant difference between each ad set is the ad creative itself. This means when you look at the results, you can be highly confident that the winning ad set contains the winning creative.
Analyse the data, not your gut feeling
Once your test is running, the next challenge is knowing how to interpret the results and when to make a decision. It's easy to get impatient, but this is where the biggest gains are made. For one B2B software client, a methodical testing process like this was key to reducing their cost per user acquisition from a staggering £100 all the way down to just £7. Declaring a winner after just one day is tempting, but it often leads to poor choices based on statistical noise.
Here’s what to look for and how long to wait:
- Give it Time: Let each ad set run until it has spent at least 1-2x your target CPA. If your target Cost Per Purchase is £25, don't even look at the data until each ad set has spent £25-£50. This ensures you're moving beyond initial luck and getting a more stable picture of performance. Ideally, you want to wait 3-7 days to smooth out any daily fluctuations in performance.
- The Primary Metric is ROAS (or CPA): While it's tempting to focus on vanity metrics like Click-Through Rate (CTR) or Cost Per Click (CPC), they don't pay the bills. The ultimate decider is efficiency. Which creative is generating sales at the lowest cost? That's your winner. A creative might have a lower CTR but a much higher conversion rate on the website, making it far more profitable. Always prioritise the bottom-of-funnel metrics.
- Look for Statistical Significance: A creative with 2 sales at a £10 CPA isn't necessarily better than one with 1 sale at a £15 CPA. The data set is too small. You need to see a clear and sustained trend. The winning creative should be consistently outperforming the others on your key metric over several days.
To help with this, you can use a simple comparison framework. Look at the performance of your existing ads in your main ASC campaign – what's your average ROAS and CPA there? That's your benchmark. A new creative is only a "winner" if it can confidently beat that benchmark in your testing campaign.
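If it helps, here's a rough sketch (in Python, with made-up numbers) of the kind of comparison I'm describing: take spend, purchases, and revenue per ad set from Ads Manager, work out CPA and ROAS for each creative, and only call something a winner once it has spent at least 1-2x your target CPA and is beating your main campaign's ROAS benchmark. The class and function names are purely illustrative, not anything from Meta's tooling.

```python
from dataclasses import dataclass

@dataclass
class CreativeResult:
    name: str
    spend: float      # total spend for the ad set, in £
    purchases: int    # conversions attributed to the ad set
    revenue: float    # purchase conversion value

    @property
    def cpa(self) -> float:
        return self.spend / self.purchases if self.purchases else float("inf")

    @property
    def roas(self) -> float:
        return self.revenue / self.spend if self.spend else 0.0

def evaluate(results, benchmark_roas: float, target_cpa: float, min_spend_multiple: float = 2.0):
    """Flag creatives that have spent enough to judge AND beat the main campaign's ROAS benchmark."""
    min_spend = target_cpa * min_spend_multiple
    for r in results:
        enough_data = r.spend >= min_spend
        beats_benchmark = r.roas > benchmark_roas
        verdict = "WINNER" if enough_data and beats_benchmark else ("keep running" if not enough_data else "cut")
        print(f"{r.name}: spend £{r.spend:.0f}, CPA £{r.cpa:.2f}, ROAS {r.roas:.2f}x -> {verdict}")

# Example with made-up numbers:
evaluate(
    [
        CreativeResult("Video A", spend=60, purchases=3, revenue=270),
        CreativeResult("Video B", spend=55, purchases=1, revenue=80),
        CreativeResult("Video C", spend=20, purchases=1, revenue=95),
    ],
    benchmark_roas=3.0,   # average ROAS of your main ASC campaign
    target_cpa=25.0,      # £25 target cost per purchase
)
```

Even if you just run this kind of check in a spreadsheet, the point is the same: a creative only graduates when it clears both the minimum-spend bar and your existing benchmark.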
You'll need a process for scaling the winners
Okay, so you've run your test and you've found a clear winner: a new creative that's delivering a 4.0x ROAS while your old ads are averaging 3.0x. Now what? How do you get this new ad working at scale without, again, breaking your main campaign?
You still need to be cautious. My preferred method is to duplicate your existing, successful Advantage+ campaign. In this new, duplicated campaign, you would turn off all the old creatives and add *only* your newly proven winner (or maybe your top 2-3 all-time winners including the new one). You then launch this new "Champion" ASC campaign with a similar budget to the original.
Now you have two campaigns running side-by-side: the "Incumbent" (your original, untouched campaign) and the "Challenger" (the new one with the winning creative). Let them run for a few days. Very often, the new campaign with just the lean, proven creative will outperform the older one. If it does, you can begin to scale the budget up on the new one while scaling it down on the old one, eventually phasing the original campaign out completely. This ensures a smooth transition and continuous improvement without any catastrophic drops in performance.
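For the budget shift itself, one simple rule of thumb (my own assumption, not an official Meta guideline) is to move budget across in steps of around 20% every few days while the Challenger keeps winning, rather than cutting the Incumbent off overnight. A toy sketch of that rule:

```python
def shift_budget(incumbent_budget: float, challenger_budget: float,
                 incumbent_roas: float, challenger_roas: float,
                 step: float = 0.20) -> tuple[float, float]:
    """If the challenger is winning on ROAS, move ~20% of the incumbent's daily
    budget across; otherwise hold steady. Re-run every few days, not every few hours."""
    if challenger_roas > incumbent_roas:
        moved = incumbent_budget * step
        return incumbent_budget - moved, challenger_budget + moved
    return incumbent_budget, challenger_budget

# Example: £200/day each, challenger at 4.0x vs incumbent at 3.0x ROAS
print(shift_budget(200, 200, incumbent_roas=3.0, challenger_roas=4.0))  # (160.0, 240.0)
```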
This whole process might sound like a lot of work compared to just clicking "edit" and adding the new ad to your existing campaign. And it is. But it's a system. And systems are what allow you to scale your ad spend reliably and profitably. It turns creative refreshment from a risky gamble into a predictable process of improvement. This is the difference between amateur 'ad boosting' and professional media buying.
This is the main advice I have for you:
To put it all together, here is a simple, actionable plan you can follow. This is the exact process we'd use for a client in your position.
| Step | Action to Take | Why You're Doing It |
|---|---|---|
| Step 1: Isolate | Do not edit your existing, working Advantage+ Campaign. Treat it as a sacred, money-making machine that should not be disturbed while it's performing well. | To protect your primary source of revenue and avoid resetting the learning phase, which could cause a major and unpredictable drop in performance. |
| Step 2: Create | Launch a new, separate "Creative Testing" campaign. Set the objective to Sales and use Ad Set Budget Optimisation (ABO). Allocate 10-20% of your main campaign spend to it. | To create a controlled 'sandpit' environment where new ideas can be tested without risking your main campaign's stability. |
| Step 3: Test | Inside the testing campaign, create one ad set for each new creative you want to test. (1 Creative per Ad Set). Use the same broad/proven audience for all ad sets. | This isolates the creative as the only variable, ensuring that any performance difference is due to the ad itself, giving you clean, reliable data. |
| Step 4: Analyse | Let the test run until each ad set has spent at least 1-2x your target CPA. Identify the winner based on the best ROAS or CPA, not on vanity metrics like CTR. | To make data-driven decisions rather than guessing. You need enough spend to ensure the results are statistically significant and not just luck. |
| Step 5: Graduate | Duplicate your original ASC campaign. In the duplicated version, pause all old creatives and add ONLY the proven winner from your test. Launch this as a "Challenger". | This is the safest way to introduce a new creative to the powerful ASC algorithm, by giving it a clean slate with a proven ad to scale, minimising disruption. |
| Step 6: Scale | Monitor the "Challenger" vs the "Incumbent" campaign. If the new campaign outperforms the old one, gradually shift budget to it until it becomes your new primary scaling campaign. | To ensure a smooth transition of budget and scale up what's working best, creating a cycle of continuous, data-backed improvement. |
This might seem like a complex process, but once you've done it once, it becomes second nature. It's the foundation of scalable and sustainable advertising on Meta. Having a reliable system for testing and scaling creatives is arguably the most valuable asset an advertiser can have.
Managing this process of continuous testing, analysis, and scaling is a full-time job, and it's where expert help can make a significant difference. A seasoned eye can spot winning trends faster, structure tests more efficiently, and knows from experience which creative angles are most likely to work for your market. This systematic approach is precisely how we manage client accounts to deliver consistent growth, taking the guesswork out of creative strategy.
If you'd like to chat through your account in more detail and see how a framework like this could be implemented for your specific business, we offer a completely free, no-obligation initial consultation call. We can review your setup together and identify some immediate opportunities.
Hope this helps!
Regards,
Team @ Lukas Holschuh