Hi there,
Thanks for reaching out!
Happy to give you some initial thoughts on your question about scaling winning ads on Meta. To be honest, the way you've described your setup – a separate adset for testing and one for scaling – is super common, but it's often the very reason people run into the exact problem you're facing. It sounds logical on the surface, but it works against how the Meta algorithm actually learns and optimises.
The short answer is you're probably fighting the system rather than working with it. Let's break down why your winners are fizzling out and what a more robust scaling structure looks like. It's less about finding a magic "winner" and more about building a system that consistently finds and funds what's working.
TL;DR:
- The 'test adset' and 'scale adset' structure is likely your biggest problem. It forces winning ads back into the learning phase and resets social proof, causing them to fail.
- You should be using Campaign Budget Optimisation (CBO) to let Meta's algorithm automatically allocate budget to the best-performing creatives within a single, proven adset.
- Stop focusing on 'winning creatives' in isolation. A real winner is a combination of the right creative, the right audience, and the right offer. Focus on finding winning audiences first.
- Broad targeting is a trap for new accounts. You need to feed the pixel with high-quality data from more specific, detailed targeting first before you can expect broad to work effectively.
- This letter includes a flowchart visualising a better campaign structure and an interactive calculator to help you project your scaling efforts safely.
We'll need to look at your definition of a 'winner'...
Before we even get into structure, let's talk about this idea of a 'winner'. It's a term everyone throws around, but it's a bit of a misnomer. When you say you're duplicating a 'winner', what does that actually mean? A day of good results? A handful of cheap leads? This is a massive tripwire for a lot of advertisers.
The problem is statistical confidence. An ad that gets, say, 5 leads at £2 each in a test adset with a £20/day budget hasn't proven anything yet. It's had a good day, sure, but it's not statistically significant. When you move that ad to a 'scaling' adset with a £200/day budget, you're asking Meta to find 10x the number of people based on a tiny, unreliable dataset. The algorithm essentially has to start from scratch, and the initial 'good luck' often doesn't hold up under real pressure. It's like judging a footballer on one good kick in training and then being surprised when they don't score a hat-trick in the final.
A true winning creative needs to perform consistently over a longer period and with significant spend behind it. My rule of thumb is that an ad or adset should spend at least 3x your target CPA (Cost Per Acquisition) before you even think about making a call on its performance. If your target cost per lead is £15, an adset needs to have spent at least £45 before you can confidently say it's not working. The same logic applies to a winner – it needs to maintain a good CPA after spending a significant amount and generating a decent volume of conversions, not just a lucky few.
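The 3x-CPA rule of thumb above can be sketched as a quick sanity check. This is a minimal illustration: the multiplier and the £15 target are from this letter, but the function names and sample numbers are mine.

```python
def min_test_spend(target_cpa, multiplier=3):
    """Minimum spend before judging an adset, per the 3x-CPA rule of thumb."""
    return target_cpa * multiplier

def verdict(spend, conversions, target_cpa):
    """Only call an adset a winner or a loser once it has spent enough."""
    if spend < min_test_spend(target_cpa):
        return "keep testing"  # not enough data to judge yet
    cpa = spend / conversions if conversions else float("inf")
    return "winner" if cpa <= target_cpa else "loser"

# Target cost per lead: £15, so the judgement threshold is £45 of spend.
print(verdict(spend=30, conversions=2, target_cpa=15))  # -> keep testing
print(verdict(spend=60, conversions=5, target_cpa=15))  # -> winner (CPA = £12)
```

The point of the guard clause is that a cheap CPA on £30 of spend is noise, not signal; the same CPA after £45+ of spend is a decision you can act on.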
Think about it from the algorithm's perspective. It needs data points (conversions) to learn who your ideal customer is. The more data it gets, the smarter it becomes at finding similar people. When you move an ad from a low-spend test, you're not really transferring its 'winning' DNA; you're just taking a single data point and hoping for the best. More often than not, that hope isn't a viable strategy.
To help you get a feel for what you can afford to spend to get a lead, let's look at Lifetime Value (LTV). Most businesses only look at the immediate return, but understanding what a customer is worth over their entire relationship with you changes everything. It tells you how much you can *actually* afford to pay for a lead.
Customer Lifetime Value (LTV) Calculator
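As a rough sketch of what the calculator above works out: a common simple LTV formula is average order value × orders per year × years retained, and from that you can back into a ceiling on what a lead is worth. The formula choice, function names, and example figures here are all mine, purely for illustration.

```python
def lifetime_value(avg_order_value, orders_per_year, years_retained):
    """Simple LTV: what an average customer spends over their lifetime."""
    return avg_order_value * orders_per_year * years_retained

def max_cost_per_lead(ltv, gross_margin, lead_to_customer_rate):
    """Rough ceiling on what a lead is worth, before other costs."""
    return ltv * gross_margin * lead_to_customer_rate

# Illustrative numbers: £80 average order, 4 orders/year, 2-year retention.
ltv = lifetime_value(avg_order_value=80, orders_per_year=4, years_retained=2)
print(ltv)  # £640 of revenue per customer

# At a 40% gross margin and a 10% lead-to-customer rate:
print(max_cost_per_lead(ltv, gross_margin=0.40, lead_to_customer_rate=0.10))  # £25.60
```

Notice how different that £25.60 ceiling is from judging a £15 lead purely on first-purchase revenue; that's the shift in thinking LTV buys you.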
I'd say your campaign structure is the real issue...
Right, this is the main bit. The ABO (Ad Set Budget Optimisation) structure with one adset for testing and another for scaling is fundamentally flawed. I see it all the time, and it almost never works, for the reasons you're seeing.
Here’s what’s happening when you duplicate your 'winner':
1. You Reset the Learning Phase: Every time you duplicate an ad into a new adset, Meta treats it as a brand new ad. All the data, history, and learning associated with it in the test adset? Gone. It's forced back into the learning phase from zero. This is incredibly inefficient and is the primary reason for inconsistent performance.
2. You Lose Social Proof: Any likes, comments, and shares on your ad are tied to the specific ad ID. When you duplicate it, you create a new ad with a new ID. All that valuable social proof, which can lower your costs and increase trust, is left behind on the original ad in your test adset. It might not seem like a big deal, but it really does make a difference.
3. Budget Allocation is Unpredictable: In your scaling adset, even if you put five 'winners' in there, Meta doesn't know they're winners. It just sees five new ads and starts distributing the budget amongst them to see what works. The ad that won in a low-budget environment might not be the one the algorithm favours in a higher-budget, more competitive environment.
So, what’s the alternative? A much more stable and scalable method is to use Campaign Budget Optimisation (CBO). Instead of setting budgets at the adset level, you set one budget for the entire campaign. Then, you put your different audiences in separate adsets within that campaign. Meta's algorithm will then automatically distribute the budget to the best-performing adset in real-time. It's designed specifically for this purpose.
The workflow should be about testing *creatives* inside a *proven audience*, not moving 'winning' ads between different audiences. Here's a better way to structure things:
-> Step 1: Find a Winning Audience. Use a CBO campaign to test different adsets, each with a different audience (e.g., a lookalike audience, a specific interest stack, a retargeting list). Let Meta tell you which audience is the most promising by seeing where it allocates the spend.
-> Step 2: Consolidate and Test Creatives. Once you have a clear winning audience (or two), create a new CBO campaign dedicated just to that audience. Inside this campaign's single adset, you'll put your existing best-performing creatives. This is now your 'scaling' campaign.
-> Step 3: Introduce New Creatives. When you want to test a new creative, you don't put it in a separate 'testing' adset. You add it directly into your main scaling CBO adset alongside your proven ads. The CBO will give it a little budget to see how it fares. If it performs well, the algorithm will automatically start giving it more and more of the budget. If it flops, it'll get minimal spend and you can just turn it off. No duplication, no resetting, no lost social proof.
This approach lets the algorithm do the heavy lifting. You're working *with* its optimisation process instead of constantly disrupting it. It's a much more hands-off, stable, and truly scalable method.
The Flawed "Test & Scale Adset" Method
A Better CBO-Based Structure
You probably should rethink your approach to targeting...
You mentioned you're using broad targeting in both adsets. This is another potential red flag, especially if your pixel doesn't have a ton of conversion data yet. Broad targeting can be incredibly powerful, but you have to earn the right to use it. It works by telling Meta, "Here's my ad, go find the right people for it based on all the data you have about who has converted on my website before." If you haven't given it enough data, it's just guessing. It's a shot in the dark.
You need to think of your targeting strategy as a funnel, just like your sales process. You start more specific to gather data and then broaden out as the pixel gets smarter.
Here’s how I prioritise audiences for a new account or campaign:
1. Start with Detailed Targeting: This is your bread and butter when starting out. You need to do the work to define your Ideal Customer Profile (ICP). But forget demographics. Your ICP is not a person; it's a problem state. What podcasts do they listen to? What software do they use? What influencers do they follow? Who are your competitors? Target those interests. For example, if you're selling a project management tool, don't just target "small business owners." Target people interested in Asana, Trello, Monday.com, or who follow pages about agile methodology. This is how you feed the pixel high-quality, relevant data about who your real customers are.
2. Build High-Intent Retargeting Audiences: As soon as you have traffic (you need at least 100 people in an audience), start building retargeting lists. But don't just lump all website visitors together. Prioritise them based on intent. Someone who visited your pricing page is way more valuable than someone who just read a blog post. My priority order is usually: Initiated Checkout > Added to Cart > Viewed Product/Service Page > All Website Visitors. Target these high-intent groups first (BoFu - Bottom of Funnel).
3. Create High-Quality Lookalike Audiences: Once you have enough conversion data (at least 100 purchases or leads, but honestly more is better), you can create lookalike audiences. Again, quality matters. A lookalike of your *customers* is infinitely more valuable than a lookalike of your website visitors. Start with a 1% lookalike of your highest-value audience (e.g., purchasers) in your target country. This will be your most potent cold audience.
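The retargeting priority in step 2 can be sketched as a simple ranking. The intent ordering and the 100-person minimum are from this letter; the function names and sample audience sizes are mine, purely for illustration.

```python
# Intent-ranked retargeting priority, highest intent first.
INTENT_RANK = [
    "Initiated Checkout",
    "Added to Cart",
    "Viewed Product/Service Page",
    "All Website Visitors",
]

def targetable(audiences, min_size=100):
    """Return audiences big enough to target, ordered by intent."""
    usable = [a for a in audiences if a["size"] >= min_size]
    return sorted(usable, key=lambda a: INTENT_RANK.index(a["event"]))

audiences = [
    {"event": "All Website Visitors", "size": 5000},
    {"event": "Added to Cart", "size": 240},
    {"event": "Initiated Checkout", "size": 60},  # too small to use yet
]
for a in targetable(audiences):
    print(a["event"], a["size"])
```

Here the checkout audience is the highest-intent group, but at 60 people it's below the minimum, so the cart abandoners come first until it grows.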
Only after you have success with these more defined audiences and your pixel has thousands of conversion events should you really start testing broad targeting. By then, the algorithm will have a much clearer picture of who to show your ads to, and "broad" will actually mean "broad but relevant," not just "anyone with a pulse." Your current approach of jumping straight to broad might be sending the algorithm on a wild goose chase, leading to inconsistent results and wasted spend.
You'll need a better way to think about budgets and scaling...
Scaling isn't just about dumping more money into an adset. Doing that too quickly can shock the algorithm and push you back into the learning phase, which tanks performance. You have to be more strategic about it. There are two main ways to scale a successful campaign:
1. Vertical Scaling (Increasing the Budget): This is the simplest method. You take a winning CBO campaign or ABO adset and you gradually increase its daily budget. The key word here is *gradually*. The general rule of thumb is to not increase the budget by more than 20-30% every 2-3 days. This gives the algorithm time to adjust and find more people without disrupting its learning. Any sudden, large increase will likely cause your CPA to spike.
2. Horizontal Scaling (Expanding the Audience): This is where you take what's working and duplicate it to new, similar audiences. For example, if your 1% lookalike of purchasers is performing brilliantly, you can duplicate that adset and test a 1-2% lookalike, or a 2-5% lookalike. Or if a particular stack of interests is working, you can create a new adset that targets a different but related stack of interests. This method helps you fight audience fatigue (where your existing audience has seen your ads too many times) and find new pockets of customers, allowing you to increase your overall daily spend without hammering a single adset.
A healthy scaling strategy uses a mix of both. You vertically scale your best-performing campaigns while horizontally scaling to discover new audiences that can also be scaled vertically. It’s a continuous cycle of optimising and expanding.
Your current method of duplicating a single ad into a high-budget adset is an extreme, and often fatal, form of vertical scaling. You're making a massive budget jump without giving the system any time to adapt. A slower, more methodical approach is almost always more profitable in the long run.
Safe Campaign Scaling Projector
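To give a concrete feel for what gradual vertical scaling looks like, here's a minimal sketch of the projection the tool above performs, using the 20%-every-3-days rule from this letter. The function name, starting budget, and horizon are my own illustrative choices.

```python
def project_budget(start_budget, pct_increase=0.20, days_between=3, horizon_days=30):
    """Project daily budget under gradual vertical scaling:
    raise the budget by pct_increase every days_between days."""
    budget = start_budget
    schedule = [(0, round(budget, 2))]
    for day in range(days_between, horizon_days + 1, days_between):
        budget *= 1 + pct_increase
        schedule.append((day, round(budget, 2)))
    return schedule

# Starting from £50/day, compounding 20% every 3 days:
for day, budget in project_budget(50):
    print(f"day {day:2d}: £{budget}")
```

Even at this "slow" pace, £50/day compounds to roughly £310/day within a month, a 6x increase, without ever giving the algorithm a single jump bigger than 20%. That's the case for patience.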
The main advice I have for you:
To pull this all together, here is a summary of the strategic shifts I'd recommend you make. This isn't just about tweaking settings; it's about adopting a more professional and robust approach to managing your Meta ads that's built for long-term, stable growth.
| Problem Area | Recommendation | Rationale |
|---|---|---|
| Campaign Structure | Switch from separate ABO test/scale adsets to a single CBO campaign per stage of the funnel. | This prevents resetting the learning phase, preserves social proof, and lets Meta's algorithm automatically allocate budget to the best ads, which is more stable and efficient. |
| Creative Testing | Introduce new creatives directly into a proven, winning CBO adset alongside existing performers. | Allows for a true 'survival of the fittest' test in a live environment. Good ads get funded automatically; bad ads don't, and can be switched off without disrupting the campaign. |
| Defining a "Winner" | Judge performance based on statistically significant data (e.g., spending at least 3x your target CPA) rather than a few early, cheap conversions. | Avoids false positives and ensures you are scaling ads that have proven their ability to perform consistently, not just ones that got lucky on a low budget. |
| Targeting Strategy | Start with specific, detailed targeting and high-intent lookalikes to feed the pixel data. Only move to broad targeting once the pixel is mature. | Ensures you're giving the algorithm high-quality data to learn from, which makes its optimisation and eventual use of broad targeting far more effective and less costly. |
| Scaling Method | Use gradual vertical scaling (increase budget by 20-30% every 2-3 days) on winning campaigns, and horizontal scaling (testing new audiences) to expand reach. | This avoids shocking the algorithm with sudden budget changes, which maintains stable performance and allows for sustainable, profitable growth without constant performance dips. |
I know this is a lot to take in, and it's a fundamental shift from the way many people are taught to run Facebook ads. The truth is that scaling effectively is one of the hardest parts of paid advertising. It's where most businesses either waste a significant amount of money or leave a huge amount of potential revenue on the table because they can't break through their performance plateaus.
Following the principles I've outlined above will put you miles ahead of the competition. It requires more strategic thinking upfront but leads to far less day-to-day firefighting and much more predictable results.
This is, of course, a high-level overview based on your question. A proper deep-dive would involve getting into your account to analyse your audiences, creatives, and offer to see where the biggest levers for growth are. If you'd like to do that, we offer a completely free, no-obligation strategy session where we can do just that. It's often incredibly helpful for people to get a second pair of expert eyes on their campaigns.
Hope that helps!
Regards,
Team @ Lukas Holschuh