Hi there,
Thanks for reaching out!
That's a great question, and one that gets asked a lot. It's a classic dilemma: you've got something that's working brilliantly, and you're terrified of touching it in case you break the magic. I'm happy to give you some initial thoughts and guidance on how you can keep testing and scaling without messing up your current winners. The short answer is yes, you can, but it's more about building a proper structure for the long run rather than just adding bits and pieces to a single campaign.
TL;DR:
- Don't worry about the learning phase: Adding a new ad set to a running ABO (Ad Set Budget Optimisation) campaign will not reset the learning phase for your existing, well-performing ad sets. Each ad set learns independently in an ABO setup.
- Think structure, not just campaigns: Instead of creating endless 'testing campaigns', you should build a permanent, structured account with dedicated campaigns for each stage of the customer journey (e.g., Prospecting, Retargeting).
- Prioritise your audience testing: There's a logical order to testing audiences that gets better results. I've laid out a priority list below, starting with detailed interests and moving to more powerful lookalike and retargeting audiences as you gather data.
- The real bottleneck is often the offer: No amount of clever targeting can fix a weak offer. Your ads need to speak directly to a specific, urgent pain point your ideal customer is experiencing.
- This letter includes a breakdown of the full funnel structure to help you organise your account, a simple calculator to budget your tests effectively, and an action plan table to get you started.
Let's bust a common myth: Adding new ad sets won't break your campaign... if you do it right.
First things first, let's clear up your main worry. In an Ad Set Budget Optimisation (ABO) campaign, which is what you're running, the budget and the 'learning phase' are managed at the ad set level. Each ad set is its own little ecosystem. When you introduce a new ad set to the campaign, that new ad set will enter the learning phase. It needs to gather its own data to figure out who to show ads to.
However, this process has absolutely no impact on your two existing 'winning' ad sets. They will continue to run as they were, using their own budget and their own accumulated learning. They won't be reset, they won't re-enter learning, and their performance won't be directly affected by the new ad set's presence. You've essentially ring-fenced them with their own budget, so Meta's algorithm won't steal from them to fund your new test.
The story is a bit different with Campaign Budget Optimisation (CBO), where Meta distributes a single campaign budget across all ad sets. In a CBO world, adding a new ad set could disrupt performance because the algorithm might start diverting budget from your proven winners to the new, unproven test ad set. But even then, it doesn't technically 'reset' the learning of the old ad sets. For your ABO setup, you are perfectly safe to add new ad sets without fear of breaking what already works.
So now that we've established it's safe, the real question becomes: is it the best way to do it? This is where we move from a simple tactical question to a much more important strategic one about how you structure your entire ad account for sustainable growth.
I'd say you need a dedicated testing structure, not just a "testing campaign".
What you're calling a "testing campaign" is really your 'prospecting' or 'Top of Funnel' (ToFu) campaign. Its job is to find new customers who've never heard of you before. The mistake many advertisers make is to treat testing as a temporary activity. They create a campaign, test a few things, find a winner, and then stop testing. But the market changes, creative fatigues, and what works today might not work next month.
The best accounts I've seen treat prospecting as a perpetual testing machine. You should have a long-term, always-on ToFu campaign. Inside this campaign, you keep your winners running (like your two current ad sets) and you continuously introduce new ad sets to challenge them. You're looking for new champions that can either outperform your current winners or find a new pocket of customers at a profitable cost.
This approach should be part of a wider account structure based on the marketing funnel. It sounds complicated, but it's quite simple:
- ToFu (Top of Funnel - Prospecting): This is your current campaign. Its only job is to reach cold audiences—people who don't know you. This is where all your audience and creative testing happens.
- MoFu (Middle of Funnel - Engagement): A separate campaign that retargets people who have shown some interest but haven't taken a high-intent action. This could be people who have watched a percentage of your videos or engaged with your Facebook/Instagram page.
- BoFu (Bottom of Funnel - Conversion): Another separate campaign that retargets people who are close to converting. This is your highest-intent audience: website visitors, people who have added items to their cart, or initiated checkout.
Structuring your account this way gives you incredible clarity. You know exactly what each campaign's job is, you can allocate budget logically, and you can tailor your messaging for each stage of the journey. A cold prospect needs a different message than someone who abandoned their shopping cart an hour ago. This structure is what allows you to scale effectively.
Here's that structure at a glance:

1. ToFu (Prospecting): targeting cold audiences who have never heard of you. This is your testing ground.
   - Winner 1 (keep running)
   - Winner 2 (keep running)
   - New test: women audience
   - New test: broad with new creatives
2. MoFu (Retargeting): targeting warm audiences who have engaged but not visited your site.
   - Facebook Page engagers
   - Instagram engagers
   - Video viewers (e.g., 50%)
3. BoFu (Retargeting): targeting hot audiences who are close to buying. Your highest ROAS will be here.
   - All website visitors (30d)
   - Viewed content / product pages (14d)
   - Added to cart (7d)
   - Initiated checkout (3d)
You'll need a clear priority for what you test next...
Once you have this structure, the question becomes: what audiences should you be testing in your ToFu campaign? A lot of people just guess or pick interests at random. But there's a logical progression that usually yields the best results. You should prioritise audiences based on how closely they resemble your ideal customer.
Here's how I typically prioritise audiences for an eCommerce account, starting with the coldest and moving towards the most valuable:
| Priority | Audience Type | Description & When to Use |
|---|---|---|
| 1. Start Here | Detailed Targeting | Interests, behaviours, and demographics. This is your starting point for any new account. You have to tell Meta who to look for. Test different interest 'stacks' in separate ad sets. |
| 2. When You Have Data | Lookalike Audiences | Once you have at least 100 purchases (ideally closer to 1,000), create lookalikes. Start with a lookalike of your purchasers, then add-to-carts, then website visitors. These almost always outperform interests. |
| 3. Retargeting (MoFu/BoFu) | Custom Audiences | These aren't for prospecting, but for your other campaigns. Audiences like Website Visitors, Add to Cart, etc. These are your money-makers. |
| 4. Scaling Phase | Broad Targeting | Once your pixel has thousands of conversion events and is really 'smart', you can test ad sets with no detailed targeting at all (just age, gender, location). Meta's algorithm is now good enough to find customers on its own. |
You're currently at stage 1, testing Detailed Targeting. Your plan to test a 'women' audience and 'broad' is a good next step. Just make sure your pixel has enough data to work with for the 'broad' test. If you've only had a few dozen conversions, broad targeting might struggle, but if you have hundreds, it's definitely worth a test. For instance, one campaign we managed for a client in the outdoor equipment space was structured specifically to drive high traffic, ultimately bringing in over 18,000 website visitors.
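One more note on stage 2 before we talk budgets: lookalikes are quick to set up in Ads Manager, but if you (or a developer) ever want to script the process, a rough sketch using Meta's facebook_business Python SDK might look like the below. The token and IDs are placeholders, and the exact lookalike_spec format is worth double-checking against the current Marketing API docs:

```python
import json

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

# Placeholders: substitute your own access token and IDs.
FacebookAdsApi.init(access_token="<ACCESS_TOKEN>")
account = AdAccount("act_<AD_ACCOUNT_ID>")

# Create a 1% UK lookalike seeded from an existing purchasers custom audience.
lookalike = account.create_custom_audience(params={
    "name": "Lookalike 1% - Purchasers - UK",
    "subtype": "LOOKALIKE",
    "origin_audience_id": "<PURCHASERS_AUDIENCE_ID>",
    "lookalike_spec": json.dumps({"ratio": 0.01, "country": "GB"}),
})
print(f"Created lookalike audience: {lookalike['id']}")
```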
We'll need to look at how much to spend before making a decision...
A common mistake when testing is not giving an ad set enough budget or time to prove itself. People often panic and turn off an ad set after it's spent £10 without a sale. But you can't make informed decisions with so little data.
So, how much should you spend on a test? A good rule of thumb is to be willing to spend between 2x and 3x your target Cost Per Acquisition (CPA) before you decide if an ad set is a winner or a loser. If you're aiming for a £20 CPA, you should let that ad set spend £40-£60 before you kill it. If it gets sales within that budget, great! If not, it's likely not going to be a winner, and you can confidently turn it off and test something else.
This prevents you from making emotional decisions and ensures you're giving each audience a fair chance to succeed. It's a numbers game, and you need to let the numbers speak for themselves. I've built a simple calculator below to help you figure out your minimum test budget per ad set.
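Here's that logic as a short Python snippet (the function names are just my shorthand, and the £20 example uses the numbers above):

```python
# Minimum test budget per ad set: the 2-3x target CPA rule from above.

def test_budget(target_cpa: float) -> tuple[float, float]:
    """Spend range to give a new ad set before judging it."""
    return 2 * target_cpa, 3 * target_cpa

def should_kill(spend: float, purchases: int, target_cpa: float) -> bool:
    """True once an ad set has hit the top of its test budget
    without a single purchase: turn it off and test the next idea."""
    _, max_spend = test_budget(target_cpa)
    return purchases == 0 and spend >= max_spend

# Example: a £20 target CPA means letting each test spend £40-£60.
low, high = test_budget(20)
print(f"Test budget per ad set: £{low:.0f}-£{high:.0f}")
print(should_kill(spend=60, purchases=0, target_cpa=20))  # True: kill it
```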
This is the main advice I have for you:
So, pulling all this together, here’s a clear, actionable plan for what you should do next. This approach will keep your current success safe while creating a robust system for finding your next winning ad set.
| Action | Reasoning | How to Implement |
|---|---|---|
| Keep Your Campaign Running | Your ABO campaign is performing well. Adding new ad sets won't harm the performance of your existing winners, so there is no need to duplicate or start over. | Simply rename the campaign to something like "[ToFu] - Prospecting - ABO" for clarity. Do not touch the winning ad sets. |
| Add New Ad Sets Directly | This is the most efficient way to test. It keeps all your prospecting efforts in one place, allowing you to easily compare performance against your current winning ad sets. | Inside your renamed prospecting campaign, create two new ad sets. One targeting 'women', and one for 'broad' with your new general creatives. Set a daily budget for each based on your test budget calculations. |
| Systematically Kill Losers | Not every test will be a winner. It's vital to cut underperformers quickly so you can reallocate that budget to your proven winners or new tests. | Use the calculator above. Once a new test ad set has spent 2-3x your target CPA without achieving a purchase, turn it off. Don't get emotional about it. |
| Build Retargeting Campaigns | Prospecting is only half the battle. A lot of your profit will come from retargeting interested users. This is a separate but equally important job. | Create two new campaigns: "[MoFu] - Retargeting - Engagers" and "[BoFu] - Retargeting - Website Visitors". Start populating them with the audiences outlined in the funnel structure above. |
| Plan Your Next Tests | Testing should be continuous. Always have your next audience or creative idea ready to go for when a test concludes (either as a winner or a loser). | Look at the Audience Prioritisation table. Once you have enough purchase data, your next test should absolutely be a 1% lookalike of your purchasers. |
This systematic approach might seem like more work upfront than just duplicating a campaign, but it will save you a huge amount of time, money, and guesswork in the long run. It's the difference between randomly gambling and professionally managing an advertising portfolio.
Navigating all these layers—the structure, audience prioritisation, creative testing, and budgeting—is exactly where professional expertise makes a difference. It's about moving from reacting to problems to proactively building a system that generates predictable results. While you can certainly implement this framework yourself, having an experienced partner can help you accelerate the process, avoid common pitfalls, and scale your results much faster.
If you'd like to chat through your account in more detail and get a second opinion on your strategy, we offer a completely free, no-obligation initial consultation where we can look at your campaigns together. It's often really helpful for getting clarity on the next steps.
Regards,
Team @ Lukas Holschuh