Hi there,
Thanks for reaching out! That's a really common question and it's actually one of the most important things to get right when you're trying to scale your Facebook ads. Messing up your testing can cost you a lot of money and kill momentum, but getting it right is how you find those winning ads that you can really put budget behind. I'm happy to give you some initial thoughts and guidance on how I'd approach this, based on what we do for our clients.
The short answer is you should definitely not add new creatives to the ad set that's already working well. It's a sure-fire way to disrupt its performance. Instead, you need a proper, repeatable system for testing. Let's break down what that looks like.
TL;DR:
- Never add new creatives to a live, winning ad set. This resets the learning phase and messes up your performance data, making it impossible to know what's truly working.
- The best practice is to use a separate testing campaign. Create a dedicated campaign (or use specific ad sets within your main campaign) just for testing new creatives against your proven, winning audience.
- Isolate variables for clean data. When testing creatives, the only thing that should change is the creative itself. Keep the audience, budget, and placement the same to get a clear signal on which ad performs best.
- Focus on leading indicators, not just sales. In the early days of a test, look at metrics like Click-Through Rate (CTR) and Cost Per Click (CPC) to spot potential winners before they've had enough time to generate lots of sales.
- Below, I've included a step-by-step flowchart of the testing process and a simple budget calculation to help you figure out a sensible testing budget.
We'll need to look at why you shouldn't touch a winning ad set...
I know it's tempting. You have an ad set that's getting conversions, the algorithm seems to like it, and you think, "I'll just pop this new creative in here and see what happens". It feels efficient. But it's one of the biggest mistakes I see people make.
Every time you make a significant edit to an ad set—like adding a new ad—you risk sending it back into the 'Learning Phase'. During this phase, Meta's algorithm is frantically spending your money to figure out who to show your ads to. It's volatile, performance can be all over the place, and your costs can spike. Your previously stable, profitable ad set can suddenly become a cash furnace.
More importantly, though, it ruins your data. Let's say you add three new creatives to your winning ad set which already has one winning creative. A week later, the ad set's performance has tanked. What caused it? Was it one of the new ads? All of them? Did adding them just upset the algorithm's delivery of the original winning ad? You have no way of knowing for sure. You've mixed your variables, and now your results are meaningless. You can't make smart decisions with messy data. The goal of testing isn't just to find a winner; it's to understand *why* it's a winner. You can't do that by chucking everything into one pot.
You mentioned your broad ad set didn't get any conversions and you turned it off. That's a good instinct. But it also highlights the need for a structured approach. Sometimes broad targeting can work amazingly well, but only *after* your pixel has tons of data. For now, sticking with the interest-based audience that's working is the right move, especially for testing.
I'd say you need a dedicated testing framework...
So, what's the alternative? Instead of messing with your golden goose, you need to build a separate, safe environment to test your new ideas. This is what we call a testing campaign or a testing ad set.
The logic is simple: you have one campaign that is purely for 'scaling'. This campaign contains only your proven, winning ad sets and creatives. You don't touch it unless you're increasing the budget. Then, you have another campaign that is purely for 'testing'. This is where you experiment.
Here’s how the workflow looks in practice. It's a cycle of continuous improvement rather than random tweaks.
1. Isolate the winner: identify your best-performing audience and ad creative in your main campaign.
2. Create the test: in a separate testing campaign, duplicate the winning ad set (same audience) and add your new creatives there.
3. Run & analyse: run the test on a small, controlled budget, and analyse leading indicators (CTR, CPC) as well as conversions.
4. Graduate the winner: if a new creative outperforms the old one, move it into your main 'Scaling' campaign.
This structure is the secret to stable, scalable results. Your main campaign (we'll call it the 'Scaling Campaign') keeps chugging along, making you money. Meanwhile, your 'Testing Campaign' is your lab. It's where you take risks without jeopardising your main source of revenue. Once a creative proves itself in the testing campaign, you can then "graduate" it to the scaling campaign. Some people do this by creating a new ad set in the scaling campaign with the new winning creative, or adding it to an existing winning ad set that's using CBO (Campaign Budget Optimisation), but the key is that it's been validated first.
Your question was whether you should use a new ad set or a new campaign. Honestly, either can work.
-> New Ad Set Method: You can have one main campaign using CBO. Inside it, you have your proven 'scaling' ad set and a separate 'testing' ad set. You'd allocate a small part of the campaign budget to the testing ad set by setting ad set spend limits. The risk is that CBO might not give your test enough budget if the scaling ad set is performing really well.
-> New Campaign Method: This is cleaner. You have a "Scaling Campaign" and a "Testing Campaign". You set a separate, fixed budget for each. This guarantees your test gets the money it needs and the data is completely isolated. This is the approach I'd recommend for you right now, as it's the most straightforward and gives you the cleanest data.
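To make that structure concrete, here's a minimal sketch of how the two campaigns might be laid out, written as a simple Python data structure. The campaign names, the £30/day figure and the creative labels are purely illustrative placeholders, not settings pulled from Meta or from your account.

```python
# Illustrative sketch of the two-campaign structure described above.
# All names, budgets, and creative labels are hypothetical placeholders.
account_structure = {
    "[SCALING] - Main Campaign": {
        "role": "proven winners only; touched only to increase budget",
        "ad_sets": {
            "Winning interest audience": ["Winning creative"],
        },
    },
    "[TESTING] - Creative Test 1": {
        "role": "isolated lab with its own small, fixed budget",
        "daily_budget_gbp": 30,  # illustrative test budget
        "ad_sets": {
            "Duplicate of winning interest audience": [
                "Winning creative (control)",
                "New creative A",
                "New creative B",
                "New creative C",
            ],
        },
    },
}
```

Nothing about that layout is clever in itself; the point is simply that the two campaigns never share a budget or a learning phase, so a failed test can't drag your main source of revenue down with it.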
You probably should use your winning audience as a benchmark...
You've already done some of the hard work by finding an interest-based audience that converts. This is now your benchmark. When you're testing new creatives, you should test them against this *exact same audience*. Why? Because you want to isolate one variable at a time.
If you test a new creative on a new audience, and it fails, what have you learned? Nothing. You don't know if the creative was bad or the audience was bad. But if you test a new creative against your proven audience and it fails, you know with much higher certainty that the creative was the problem.
Think of it like a science experiment. Your winning audience is your 'control' group. The new creatives are your 'variables'. By keeping the control consistent, you can confidently attribute any changes in performance to the variable.
We do this for all our clients. We build a hierarchy of audiences to test against, starting with the ones most likely to convert. For an e-commerce client, it might look something like this:
| Funnel Stage | Audience Type | Example | Purpose |
|---|---|---|---|
| BoFu (Bottom) | Warmest Audience | Added to Cart in last 14 days (but not purchased) | Highest conversion potential. Good for retargeting with offers. |
| MoFu (Middle) | Engaged Audience | Visited website or engaged with Instagram in last 30 days | Interested but not ready to buy. Good for building trust. |
| ToFu (Top) | Proven Cold Audience | Your winning interest-based audience | This is your best audience for creative testing. It's stable and large enough for clear results. |
| ToFu (Top) | Lookalike Audience | 1% Lookalike of past purchasers | Good for finding new customers who behave like your existing ones. |
Your winning interest group falls into that 'Proven Cold Audience' category. It's the perfect place to test new creatives because it's broad enough to give you statistically significant data, but targeted enough that you know the people in it are generally receptive to your offer.
You'll need to know what to measure...
You mentioned your broad ad set got no conversions. For a sales campaign, conversions (purchases) are obviously the final goal. But when you're testing, especially with a small budget, you might not get enough sales on each creative variation to declare a clear winner quickly. This is where you need to look at 'leading indicators'. These are metrics that tell you if a creative is on the right track, even before the sales start rolling in.
The main ones are:
-> Click-Through Rate (CTR): This tells you what percentage of people who see your ad are actually clicking on it. A higher CTR means your ad is grabbing attention and is relevant to the audience. It's the first hurdle. If nobody clicks, nobody buys.
-> Cost Per Click (CPC): This is how much you're paying for each click. A lower CPC is generally better, as it means you can get more traffic to your site for the same budget. A high CTR often leads to a lower CPC because Facebook rewards engaging ads.
-> Cost Per Mille (CPM): This is the cost per 1,000 impressions, i.e. every 1,000 times your ad is shown. It's often an indicator of audience quality and competition. Sometimes a really good creative can even lower your CPM.
Imagine you're testing two new creatives (Ad A and Ad B) against your current winner (Control). After 3 days, here's what the data might look like:
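The numbers below are entirely made up, purely to show how the leading indicators are calculated and compared; here's a quick Python sketch of the three ads after those 3 days.

```python
# Hypothetical 3-day results for the control and two new creatives.
# Spend, impressions, and clicks are made up purely for illustration.
ads = {
    "Control": {"spend": 60.0, "impressions": 20_000, "clicks": 300},
    "Ad A":    {"spend": 60.0, "impressions": 15_000, "clicks": 90},
    "Ad B":    {"spend": 60.0, "impressions": 18_000, "clicks": 540},
}

for name, a in ads.items():
    ctr = a["clicks"] / a["impressions"] * 100    # click-through rate, %
    cpc = a["spend"] / a["clicks"]                # cost per click, £
    cpm = a["spend"] / a["impressions"] * 1000    # cost per 1,000 impressions, £
    print(f"{name}: CTR {ctr:.2f}% | CPC £{cpc:.2f} | CPM £{cpm:.2f}")

# With these made-up numbers, the output is:
# Control: CTR 1.50% | CPC £0.20 | CPM £3.00
# Ad A: CTR 0.60% | CPC £0.67 | CPM £4.00
# Ad B: CTR 3.00% | CPC £0.11 | CPM £3.33
```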
In this scenario, maybe none of the ads have made a sale yet. But Ad B has a CTR that's double your control ad. That's a massive signal. It tells you that something about that creative is resonating much more strongly with your audience. It's getting more people to your website for less money. It has a much higher *potential* to become a sales winner. Ad A, on the other hand, is a dud. You can probably turn it off now and allocate its budget to Ad B to get a result faster.
This is how you make decisions quickly without having to wait weeks for sales data to accumulate. Look for strong leading indicators, double down on what's working, and cut what's not.
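If it helps to see that decision rule written down, here's a rough sketch of the kind of early cut/keep call I'm describing. The minimum-impressions threshold and the 0.7x/1.3x cut-offs are assumptions you'd tune to your own account, not official Meta guidance or a fixed industry standard.

```python
def early_verdict(ad, control, min_impressions=3_000):
    """Rough early cut/keep call for a test creative versus the control.

    `ad` and `control` are dicts with 'impressions' and 'clicks'.
    The 0.7x / 1.3x thresholds below are illustrative assumptions.
    """
    if ad["impressions"] < min_impressions:
        return "wait"  # not enough delivery yet to judge fairly

    ad_ctr = ad["clicks"] / ad["impressions"]
    control_ctr = control["clicks"] / control["impressions"]

    if ad_ctr < 0.7 * control_ctr:
        return "cut"            # clear laggard: free up its budget
    if ad_ctr > 1.3 * control_ctr:
        return "watch closely"  # potential winner: let it spend towards 1.5-2x target CPA
    return "keep running"       # too close to call; let it gather more data
```

Run that against the made-up numbers in the previous sketch and Ad A comes back as "cut" while Ad B comes back as "watch closely", which is exactly the call you'd make by eye.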
You will want to set a proper testing budget...
A common question I get is, "How much should I spend on testing?" The answer isn't a fixed number; it's related to your cost per acquisition (CPA). You need to spend enough to give each ad a fair chance to get a conversion.
A good rule of thumb is to be willing to spend at least 1.5x to 2x your target CPA per creative before you make a final decision. If your target CPA for a sale is £50, you should be prepared to spend £75-£100 on each new creative you're testing. If after spending £100 it hasn't made a single sale, and its leading indicators (CTR, CPC) are poor, it's very unlikely to ever be profitable.
Here's the simple calculation to plan your testing budget based on your own numbers: take your target CPA, multiply it by 1.5-2 to get a per-creative budget, then multiply that by the number of creatives you want to test to get your total required budget.
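Here's that arithmetic as a short Python sketch. The 1.5-2x multiplier is just the rule of thumb above, and the £50 CPA and three creatives in the example are placeholder figures, not a recommendation.

```python
def testing_budget(target_cpa, num_creatives, multiplier=2.0):
    """Rough testing budget: spend `multiplier` x target CPA per creative.

    A multiplier of 1.5-2.0 reflects the rule of thumb above; treat the
    result as a planning guide, not a guarantee of a clear winner.
    """
    per_creative = target_cpa * multiplier
    return per_creative, per_creative * num_creatives

# Example: £50 target CPA, 3 new creatives, using the 2x end of the range.
per_creative, total = testing_budget(target_cpa=50, num_creatives=3)
print(f"Budget per creative: £{per_creative:.0f}")  # £100
print(f"Total test budget: £{total:.0f}")           # £300
```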
So this is the main advice I have for you:
To wrap this all up, here is a clear, actionable plan you can follow. This isn't just theory; this is the exact process we use to manage and scale accounts spending thousands of pounds a day. It removes guesswork and replaces it with a system.
| Step | Action | Reasoning |
|---|---|---|
| 1. Protect Your Winner | Do not edit your current winning ad set. Rename your current campaign to something like "[SCALING] - Main Campaign". | This protects your stable, profitable campaign from disruption and the learning phase reset. It's now your control group. |
| 2. Create a Test Campaign | Create a completely new campaign. Name it "[TESTING] - Creative Test 1". Set a small daily budget (e.g., £20-£50/day). | This isolates your test budget and data, ensuring your test gets a fair chance without affecting your main campaign's performance. |
| 3. Duplicate the Audience | In your new testing campaign, create one ad set. Use the 'Duplicate' function to copy your winning interest-targeted ad set into this campaign. Use the exact same targeting. | You are testing creatives, not audiences. Using a proven audience as your benchmark is the only way to get a clean read on creative performance. |
| 4. Add New Creatives | Inside this new ad set, create your new ads. Aim for 2-4 new variations. Make sure your original winning creative is also in here so you have a direct comparison. | This creates a head-to-head competition. The algorithm will naturally start to favour the ad that performs better within the ad set. |
| 5. Analyse & Decide | Let it run for 3-5 days. Turn off any clear losers (very low CTR, high CPC) early. After enough spend, compare the top performer to your original control ad. | Use leading indicators first, then conversions. You're looking for a new creative that beats your old one on key metrics. |
| 6. Graduate the Winner | If you find a new clear winner, pause the testing campaign. Then, add that new winning creative to your main "[SCALING]" campaign. | You've successfully found and validated a better ad without risking your main revenue stream. Now you can scale it with confidence. Repeat the process. |
This process might feel a bit slower than just dumping everything together, but it's methodical. It's how you build a truly resilient and scalable ad account rather than just getting lucky with one ad set that eventually dies off. This is what separates amateurs from professionals and what allows you to consistently improve your results over time, rather than just riding a rollercoaster of good weeks and bad weeks.
Running ads effectively is a science, and it requires a disciplined approach to testing and optimisation. Without a solid framework, you're essentially gambling. With one, you're making calculated investments based on clear data.
If you'd like to go over your account and strategy in more detail, we offer a free, no-obligation initial consultation where we can look at your setup together and provide some more specific advice. It might be helpful to have a second pair of expert eyes on it.
Hope this helps!
Regards,
Team @ Lukas Holschuh