Hi there,
Thanks for reaching out!
That's a classic situation to be in, and a good problem to have. Finding a strong new ad is half the battle, but you're right to be cautious. Introducing it the wrong way can mess up a perfectly good scaling campaign. It’s something we see all the time when auditing new client accounts – people either get too excited and break what's working, or they're too scared to test and leave potential gains on the table.
The short answer is that there's a better way than just duplicating or adding creatives into your live ad sets. You need a proper system for testing that protects your winners while giving new ads a fair shot. Let's walk through how I'd approach this.
TL;DR:
- Stop editing your live scaling CBO campaign directly. You risk resetting the learning phase and messing up your performance. Protect what's already working at all costs.
- Create a separate, dedicated testing campaign using Ad Set Budget Optimisation (ABO). This lets you force spend to new creatives to get data quickly and fairly.
- Your current "interest" targeting of 600-700M is basically broad targeting. This is fine while it works, but for predictable scaling you need a more structured approach with different audience types (Lookalikes, retargeting).
- The most important advice is to only "graduate" a proven winning creative from your testing campaign into your main scaling campaign. This methodical process removes guesswork and is how you scale reliably.
- This letter includes a flowchart for a testing framework and an interactive calculator to model how small creative improvements can impact your ROAS.
We'll need to look at how you're testing...
First off, your instincts about the risks are spot on. Let's break them down:
Adding the new ad to existing ad sets: You're right to worry about restarting the learning phase. Any significant edit to an ad set can trigger it. Even if it doesn’t, the algorithm will have a bias towards your existing ads that already have performance history and social proof. Your new ad might not get a fair share of the budget to prove itself. It also makes analysing the data a complete nightmare, as you can't cleanly compare the performance of the new ad versus the old ones over time. It just muddies the waters.
Duplicating each ad set with the new creative: This is a much safer option, and closer to the right idea. It isolates the new creative. Your concern about audience overlap is understandable, but with CBO, it's less of an issue than people think. The whole point of CBO is to manage the budget across ad sets and find the cheapest conversions, so it inherently deals with some level of internal auction competition. The real problem with this approach is that you're still conducting a major test inside your main scaling campaign. If the new ad bombs, you've just wasted a chunk of your scaling budget on a loser. It's inefficient.
The solution isn't to choose between these two options. It's to separate your testing from your scaling entirely. What works is having two distinct types of campaigns running: one for scaling proven winners, and one for finding the *next* winner.
I'd say you need to run a proper creative test...
Your current CBO campaign is your 'Scaling Campaign'. Leave it alone. Let it keep doing its thing. You're going to build a new, separate 'Testing Campaign'.
Here’s how you set it up:
- Use Ad Set Budget Optimisation (ABO), not CBO. This is important. In a testing environment, you want to control the budget at the ad set level. This spreads spend evenly across your test variables, ensuring your new creative gets enough budget to generate meaningful data. CBO would just push all the money to what it already knows works, defeating the purpose of the test.
- Pick Your Best Audience. Duplicate the single best-performing ad set from your scaling CBO campaign into this new ABO testing campaign. Use the exact same interest targeting. You want to test your creative against a proven audience to remove that variable.
- Structure the Test. Inside this new ad set, you'll have at least two ads. One will be your current best-performing ad (the 'control' or 'champion'). The other will be your new ad (the 'challenger'). If you have more than one new ad, you can add them here too, but I'd suggest testing one big idea at a time to keep it clean.
- Set a Controlled Budget. Give this testing ad set a modest daily budget. A good rule of thumb is enough to get at least one or two conversions per day, based on your average CPA. If your CPA is £50, a £50-100/day budget is a decent start. Let it run for 3-5 days, or until it has spent at least 2-3x your target CPA. Don't touch it during this time.
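To make that budget maths concrete, here's a rough back-of-the-envelope sketch in Python. Every figure in it (the £50 CPA, £75/day budget, 2.5x spend multiple) is an illustrative placeholder, not your real numbers:

```python
# A rough sketch of the test-budget rule of thumb above.
# Every figure here is an illustrative placeholder; swap in your own numbers.
import math

target_cpa = 50.0         # your average cost per purchase (GBP)
daily_budget = 75.0       # daily budget for the testing ad set (GBP)
min_spend_multiple = 2.5  # judge the test only after spending 2-3x target CPA

min_test_spend = target_cpa * min_spend_multiple              # 125 in this example
days_to_hit_spend = math.ceil(min_test_spend / daily_budget)  # 2 days here

# Run for 3-5 days OR until the spend threshold is hit, whichever is later.
recommended_days = max(3, days_to_hit_spend)

print(f"Minimum spend before judging the test: £{min_test_spend:.0f}")
print(f"Leave it untouched for at least {recommended_days} days (up to 5).")
```

The exact numbers matter less than the discipline: decide your spend and time thresholds up front, and don't judge the test until both have been met.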
What are you looking for? Don't just look at ROAS or CPA initially. Those are lagging indicators. Look at the leading indicators first:
- Click-Through Rate (CTR): Is the new ad grabbing more attention? A significantly higher CTR is a massive green flag.
- Cost Per Click (CPC): A higher CTR usually leads to a lower CPC, meaning you're getting cheaper traffic.
- Cost Per Add to Cart / Initiate Checkout: Are people taking those initial steps?
Sometimes a new creative might have a slightly higher CPA initially but a much better CTR. This tells you the ad is resonating, but maybe the landing page experience needs tweaking for that specific angle. A great creative can make your entire funnel cheaper and more efficient, as you can see with the calculator below. Even a tiny lift in CTR can have a huge impact on your final numbers.
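I can't embed the interactive calculator in plain text, so here's a rough Python sketch of the same idea. It assumes your CPM and on-site conversion rate stay flat, so a CTR lift passes straight through to a cheaper CPC; every number (and the `funnel` helper itself) is purely illustrative:

```python
# A rough sketch of the "CTR lift -> ROAS" logic the calculator illustrates.
# Assumes CPM and on-site conversion rate stay flat, so a better CTR passes
# straight through to a cheaper CPC. All figures are made up for illustration.

def funnel(cpm, ctr, conv_rate, aov):
    """Return (CPC, CPA, ROAS) for a simplified click-to-purchase funnel."""
    cpc = cpm / (1000 * ctr)   # cost per click: CPM buys 1,000 impressions
    cpa = cpc / conv_rate      # cost per purchase
    roas = aov / cpa           # return on ad spend
    return cpc, cpa, roas

baseline = funnel(cpm=10.0, ctr=0.015, conv_rate=0.03, aov=60.0)
improved = funnel(cpm=10.0, ctr=0.018, conv_rate=0.03, aov=60.0)  # +20% CTR

for label, (cpc, cpa, roas) in [("Current ad", baseline),
                                ("New ad, +20% CTR", improved)]:
    print(f"{label}: CPC £{cpc:.2f}, CPA £{cpa:.2f}, ROAS {roas:.2f}x")
```

In that made-up example, a 20% lift in CTR turns a roughly 2.7x ROAS into roughly 3.2x without touching anything else in the funnel, which is exactly why creative testing is worth the extra structure.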
You probably should re-evaluate your audience strategy...
You mentioned you're using a couple of interests with a 600-700M size. To be honest, that's not really interest targeting anymore; that's effectively broad targeting. An audience that large means you've either stacked dozens of unrelated interests or you're targeting one massive one like "shopping". While it's working now, which is great, it's not a very repeatable or strategically sound way to scale in the long run. You've found something that works, but you probably don't know *why* it works.
For more predictable growth, you need to structure your audiences more deliberately. Think of it like a funnel.
Your current interest-based ad sets are at the Top of the Funnel (ToFu). As you gather more data (website visitors, purchasers), you can start building audiences for the middle and bottom of the funnel. Retargeting people who added a product to their cart (BoFu) will almost always give you a better return than targeting cold interests.
The most powerful ToFu audiences are often Lookalikes. Once you have at least 100-200 purchases tracked by your pixel, you can create a Lookalike audience of those purchasers. This tells Facebook to "go find me more people who look just like the people who already bought from me." We worked on a campaign for a women's apparel brand where we saw this exact principle in action. Initially, they were using broad interests, but once we introduced a 1% Purchaser Lookalike audience, the return on ad spend jumped, eventually reaching a 691% return. This should definitely be your next testing frontier after you've nailed down your creative testing process.
You'll need a clear graduation path for winners...
Okay, so you've run your test in the ABO campaign, and your new creative is a clear winner. It's getting a lower CPA, or a much higher CTR with a similar CPA. Now what? It's time to "graduate" it to your main scaling CBO campaign.
Here’s the simplest, safest way to do it:
- Go into your scaling CBO campaign.
- Select your three existing, high-performing ad sets.
- Duplicate them. This creates three new ad sets with the exact same audience targeting. I'd name them something like "[Interest] - [New Creative Name]" to keep track.
- In these three NEW ad sets, go to the ad level, turn off all the old creatives, and add ONLY your new winning creative.
- Now you have your three original ad sets running the old ads, and three new ad sets running the new winning ad, all inside the same CBO campaign.
- Let them all run together for 24-48 hours. The CBO algorithm is smart. It will quickly see that the new ad sets are performing better and will start to shift the campaign's budget towards them automatically.
- After a day or two, once you see the budget shifting and the new ad sets performing well, you can safely pause the three original ad sets.
That's it. You've now successfully swapped in a new, better-performing creative without disrupting the campaign's learning, resetting its history, or flying blind. It's a methodical, low-risk process that protects your current revenue while introducing new growth drivers.
This is the main advice I have for you:
I know this seems like a lot more work than just clicking 'duplicate', but this separation of testing and scaling is really what separates casual ad-buyers from professionals. It's this exact process that allows for sustainable scaling. One campaign we managed for an online course creator, for instance, used this methodical testing and graduation process to generate over $115,000 in revenue in just six weeks, all while maintaining a profitable return on ad spend. You build a machine for finding winners, and a separate machine for milking those winners for all they're worth.
Here’s a summary of the plan I've laid out:
| Step | Action | Rationale |
|---|---|---|
| 1. Isolate | Create a new, separate ABO (Ad Set Budget Optimisation) campaign solely for testing. Do not edit your live CBO campaign. | Protects your profitable scaling campaign from risky tests. ABO ensures each test variable gets a fair budget. |
| 2. Test | Duplicate your best performing ad set into the ABO campaign. In this ad set, run your new creative against your current best creative. | Creates a clean, controlled 'champion vs. challenger' test against a proven audience to get reliable data. |
| 3. Analyse | Let the test run for 3-5 days. Analyse leading indicators (CTR, CPC) and lagging indicators (CPA, ROAS) to find the true winner. | A holistic view of the data helps you understand not just *what* won, but *why* it won, leading to better future creatives. |
| 4. Graduate | Once a winner is declared, duplicate the ad sets in your main CBO campaign, replace the old ads with the new winner, and run them concurrently for 24-48 hours. | Allows CBO to seamlessly shift budget to the better-performing ad sets without resetting the learning phase. |
| 5. Pause | After confirming the new ad sets are stable and receiving budget, pause the original ad sets. | Completes the transition, leaving your campaign stronger and more efficient than before, ready for further scaling. |
This is the kind of methodical, data-driven framework we implement for all our clients. It takes the emotion and guesswork out of managing ad accounts and replaces it with a predictable process for growth. If you ever feel like you've hit a scaling plateau or just want a second pair of expert eyes to look over your entire strategy, we offer a completely free, no-obligation strategy session where we can do just that.
Hope this helps!
Regards,
Team @ Lukas Holschuh