Hi there,
Thanks for reaching out!
That's a classic and incredibly frustrating problem with Meta's CBO, especially during hectic times like BFCM. I've seen this happen to loads of accounts. The platform's algorithm goes a bit haywire, dumps a ton of cash into a new, unproven adset, gets poor results, and leaves you wondering where your budget went. It feels like the system is working against you, taking money away from your winners to bet on a loser.
The good news is, this is solvable. The bad news is that it's usually a symptom of a deeper structural issue in the ad account, not just a random glitch. It’s less about the algorithm being "broken" and more about giving it the wrong instructions and an environment where it's bound to make mistakes. We'll need to change the way you test and structure things to give you back control.
I’m happy to give you some initial thoughts and guidance on how you can sort this out. The core idea is to move away from mixing unproven adsets with your proven performers and instead build a separate, controlled environment for testing.
TL;DR:
- Your new adsets are hogging the budget because Campaign Budget Optimisation (CBO) is designed to aggressively test new audiences, often misinterpreting early, cheap impressions as a sign of a winner, especially during competitive periods like BFCM.
- The root cause is likely mixing cold, unproven audiences (Top of Funnel) in the same campaign as your warm, proven retargeting audiences (Middle/Bottom of Funnel). This confuses the algorithm.
- The most important piece of advice is to create a dedicated 'Testing Campaign' using Ad Set Budget Optimisation (ABO) or CBO with strict adset spend limits. This quarantines new adsets, giving you full control over spend until they prove themselves.
- You need a clear hierarchy for testing audiences, starting with hyper-specific interests based on your customer's 'nightmare' problem, then moving to lookalikes of high-value actions (like purchases), and finally scaling winners into their own campaigns.
- This letter includes a worked Lifetime Value (LTV) calculation to help you understand how much you can truly afford to spend per lead, and an outline of the ideal campaign structure to prevent this issue from happening again.
First off, let's look at why your new adsets are hogging the budget...
What you're describing is a classic CBO behaviour. When you introduce a new adset into a Campaign Budget Optimisation campaign, Meta's algorithm has to figure out what to do with it. Its prime directive is to spend the total campaign budget as efficiently as possible to get the most results (based on your campaign objective, which I assume is conversions/sales).
So, it enters a rapid 'exploration' phase. It will push a significant amount of spend to the new adset very quickly to gather data. It’s looking for early signals – clicks, impressions, maybe even a few early, cheap conversions. During a hyper-competitive time like BFCM, the auction is volatile. The algorithm might find a pocket of cheap inventory and interpret that as a massive opportunity. It thinks, "Aha! This adset is getting cheap reach, let's pour the money in!"
The problem, as you've seen, is that these early signals are often misleading. Cheap impressions don't equal profitable customers. The algorithm spends half your daily budget before it has enough *conversion* data to realise the audience is a dud. By then, your consistent, profitable adsets have been starved of budget for hours, and your overall performance for the day is wrecked. Your Cost Controls (I'm assuming that's what you mean by CCs) are a good safety net, but they can't stop the algorithm from misallocating the budget *before* it hits the cap; they only prevent it from overspending on a single conversion.
To be honest, this behaviour is a feature, not a bug. CBO is designed for scaling proven campaigns, not for erratically testing new things. When you mix the two, you create chaos. You're essentially asking the algorithm to simultaneously exploit known goldmines (your proven adsets) and go prospecting for new ones (your test adset) with the same pot of money. It's not smart enough to do both well at the same time, so it defaults to the exciting new thing first. The solution is to separate these two jobs into different campaigns entirely.
I'd say the real issue isn't CBO, it's how you're structuring your audiences...
This leads us to the real culprit: campaign structure. I'd bet you're putting new, cold 'prospecting' adsets into the same campaign as your proven 'retargeting' adsets (people who have visited your site, added to cart, etc.). This is one of the most common and costly mistakes I see when auditing accounts.
Think of your audiences in three distinct stages:
- ToFu (Top of Funnel): Cold audiences. People who have never heard of you. These are your interest-based, behaviour-based, and broad lookalike audiences. They are the hardest and most expensive to convert.
- MoFu (Middle of Funnel): Warm audiences. People who have engaged with your brand but haven't bought yet. This includes video viewers, social media page engagers, and general website visitors.
- BoFu (Bottom of Funnel): Hot audiences. People who are on the verge of buying. This is your 'Added to Cart (last 7 days)', 'Initiated Checkout (last 7 days)', and other high-intent website actions.
MoFu and BoFu adsets will almost always have a higher conversion rate and lower cost per acquisition (CPA) than ToFu adsets. They know you, they trust you, and they're already considering a purchase. When you put a new, unproven ToFu adset into a CBO campaign with your high-performing BoFu adsets, you create a completely unfair fight. The algorithm sees the historically amazing performance of your BoFu adsets and gives the *campaign* a high performance baseline. It then tries to get the new ToFu adset to match that, fails, and causes the budget swings you're seeing.
The correct approach is to structure your campaigns to mirror this funnel. You should have, at a minimum, two separate campaigns:
- A Prospecting Campaign (ToFu): This campaign contains ONLY your cold audiences. This is where you test new interests and lookalikes.
- A Retargeting Campaign (MoFu/BoFu): This campaign contains ONLY your warm and hot audiences. This is where you focus on closing the deal with people who already know you.
This separation gives the algorithm clarity. The goal of the Prospecting campaign is to find new customers at an acceptable CPA. The goal of the Retargeting campaign is to recapture existing interest at a very high ROAS. The performance expectations are different, so the campaigns must be different.
Here's the campaign structure I'd recommend:

1. TESTING Campaign (ABO)
   - New Interest Adset 1
   - New Interest Adset 2
   - New Lookalike (LAL) Adset 1
   - New Lookalike (LAL) Adset 2
2. SCALING Prospecting Campaign (CBO)
   - PROVEN Interest Adset 1
   - PROVEN LAL Adset 1
3. SCALING Retargeting Campaign (CBO)
   - MoFu: Video Viewers
   - MoFu: Website Visitors (30d)
   - BoFu: Add to Cart (7d)
You'll need a better way to test new adsets...
So, how do you introduce new adsets safely? You need a dedicated, controlled testing process. Stop throwing them into your main campaigns and hoping for the best. Here's a framework I use:
Step 1: Create a Dedicated 'Testing Campaign'
This is the most important step. Create a brand new campaign with the sole purpose of testing new ToFu adsets. This campaign acts as a 'quarantine zone'. The results here, good or bad, won't affect your proven, money-making campaigns.
Step 2: Use Ad Set Budget Optimisation (ABO) for Testing
For this testing campaign, I’d strongly recommend using ABO instead of CBO. With ABO, you set the budget at the adset level. This gives you absolute control. If you want to test a new adset with £20 a day, it will spend exactly £20 a day. No more, no less. It can't steal budget from anywhere else. This is the simplest way to guarantee a fair test and prevent budget haemorrhaging.
If you're adamant about using CBO for testing, you MUST use Adset Spend Limits. In the adset settings, you can set a 'Minimum Daily Spend' and a 'Maximum Daily Spend'. For a new adset, you'd set a maximum spend (e.g., £20). This tells the CBO algorithm, "Feel free to optimise, but do not spend more than this amount on this adset today." It's a bit less direct than ABO but achieves a similar result.
Step 3: Define Your 'Graduation' Criteria
Before you even launch the adset, you need to know what success looks like. An adset is not a failure just because it isn't profitable on day one. You need to give it enough budget to exit the learning phase and show its true potential. A good rule of thumb is to let an adset spend 2-3x your target cost per acquisition (CPA). If your target CPA is £30, you need to be prepared to let a new adset spend £60-£90 before you make a decision.
If after that spend, it has zero purchases, it's probably a dud. Kill it. If it has one or two purchases around your target CPA, it has potential. Let it run. If it's a clear winner and is getting consistent results, then it has 'graduated'.
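To make the decision rule above concrete, here's a minimal sketch in Python. The 2x spend multiplier and the "within 1.5x of target CPA counts as potential" threshold are my illustrative assumptions based on the rule of thumb above, not Meta settings:

```python
def graduation_verdict(spend, purchases, target_cpa, spend_multiplier=2.0):
    """Judge a test adset only after it has spent enough to be judged.

    Rule of thumb: let an adset spend 2-3x your target CPA before deciding.
    Thresholds here are illustrative assumptions, not platform settings.
    """
    if spend < spend_multiplier * target_cpa:
        return "keep running"   # not enough spend to judge yet
    if purchases == 0:
        return "kill"           # spent 2x+ target CPA with zero sales: a dud
    actual_cpa = spend / purchases
    if actual_cpa <= target_cpa:
        return "graduate"       # promote to the CBO scaling campaign
    if actual_cpa <= 1.5 * target_cpa:
        return "keep running"   # has potential, give it more time
    return "kill"               # converting, but far too expensive

# Example: target CPA £30, adset has spent £75 and got 2 purchases (£37.50 CPA)
print(graduation_verdict(75, 2, 30))
```

The point of writing it down like this is that the verdict becomes mechanical: you decide the thresholds once, before launch, and stop making emotional calls on day-one numbers.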
Step 4: Promote Winners to a 'Scaling' Campaign
Once an adset has graduated from the Testing Campaign, you can then move it into your main 'Prospecting - CBO Scaling' campaign. You do this by duplicating the winning adset into the CBO campaign. This campaign is where your best-performing cold adsets live. Because all the adsets in this campaign are proven winners, CBO can do its job properly: allocating budget between several good options to find the best one on any given day.
This systematic approach—Test in ABO, Scale in CBO—removes the guesswork and volatility. It gives you control when you need it most (testing) and lets the algorithm take over when it's most effective (scaling).
You should define your customer by their pain, not just their interests...
Now, even with the perfect structure, your tests will fail if your targeting is rubbish. This is where most people get it wrong. They target broad interests that are only vaguely related to their product. If you're selling high-end coffee beans, targeting people with an interest in "Coffee" is a waste of money. That includes millions of people who are perfectly happy with their instant Nescafé.
You need to stop thinking about demographics and start thinking about nightmares. What is the specific, urgent, expensive problem that your product solves? Who feels that pain most acutely? Your Ideal Customer Profile (ICP) isn't "women aged 25-40"; it's a state of being. It's a problem.
Let's take an e-commerce example. Say you sell ergonomic office chairs. Your ICP isn't "people who work from home." It's the person who finishes their workday with a shooting pain in their lower back, who has spent money on physio, and who is terrified that this chronic pain will affect their ability to work and live their life. Their nightmare is permanent injury and a loss of income.
How does this translate to targeting? Instead of targeting "Work from Home" (too broad), you target interests that signal this pain:
- Competitors: People who like pages for Herman Miller, Steelcase, etc.
- Tools & Software: People who use project management tools like Asana or Slack (signals a desk job).
- Publications & Influencers: People who follow physiotherapists, chiropractors, or publications about workplace wellness.
- Layered Interests: People who are 'Small Business Owners' AND have an interest in 'Ergonomics'.
This is the work. You need to become an expert in your customer's specific problem and then find the digital breadcrumbs they leave across the internet that signal they have that problem. If you just target broad interests, you're asking the algorithm to find a needle in a haystack. If you target pain-point indicators, you're starting in the part of the haystack where all the needles are. This makes it far easier for your new adsets to find traction quickly, giving the algorithm positive early signals and making your tests far more likely to succeed.
Let's work out how much you can actually afford to spend...
One final point on profitability. You mentioned your new adsets aren't performing "profitably". This is a really common concern, but it's often based on a flawed understanding of the numbers. Are you judging profitability based on a single purchase? The real question isn't "What was my ROAS today?" but "What is this customer worth to me over their entire lifetime?"
This is where understanding your Customer Lifetime Value (LTV) is so important. If you only know the value of the first transaction, you will always be too conservative with your ad spend and you'll kill potentially great adsets too early. A customer might only spend £50 on their first purchase, which looks like a loss if your CPA was £60. But what if that same customer comes back and spends another £200 over the next year? Suddenly that £60 acquisition cost looks like a brilliant investment.
Let's calculate a rough LTV. You'll need three numbers:
- Average Order Value (AOV): What a customer spends on an average order.
- Purchase Frequency (F): How many times a customer buys from you in a year.
- Gross Margin %: Your profit margin after the cost of goods is taken out.
The calculation is simpler than it looks. Let's say your AOV is £80, your average customer buys 2.5 times a year, and your gross margin is 60%.
Customer Value per Year = £80 (AOV) * 2.5 (F) = £200
Gross Margin per Customer per Year = £200 * 60% = £120
If your customer sticks around for an average of 3 years, your LTV is £360 (£120 * 3). A healthy LTV:CAC (Customer Acquisition Cost) ratio is 3:1. This means you can afford to spend up to £120 to acquire that customer and still have a very healthy, profitable business. Suddenly, that £60 CPA that looked like a loss actually returns 2x in gross margin within the first year alone, and 6x over the customer's lifetime.
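The arithmetic above is simple enough to put in a few lines of Python so you can plug in your own numbers (the 3:1 LTV:CAC ratio is the benchmark from above; everything else is your own data):

```python
def lifetime_value(aov, purchases_per_year, gross_margin, retention_years):
    """Rough LTV: annual gross margin per customer times years retained."""
    return aov * purchases_per_year * gross_margin * retention_years

def max_affordable_cpa(ltv, ltv_to_cac_ratio=3.0):
    """With a healthy 3:1 LTV:CAC ratio, max CPA = LTV / 3."""
    return ltv / ltv_to_cac_ratio

# Worked example from above: £80 AOV, 2.5 purchases/year, 60% margin, 3 years
ltv = lifetime_value(80, 2.5, 0.60, 3)
print(ltv)                   # → 360.0 (£360 LTV)
print(max_affordable_cpa(ltv))  # → 120.0 (£120 max CPA)
```

Swap in your own AOV, frequency, margin, and retention figures and the output is your true ceiling on acquisition cost.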
Knowing this number frees you from the tyranny of day-to-day ROAS. It gives you the confidence to invest properly in testing new audiences and to weather the initial learning phase without panicking. Run the calculation above with your own figures to get a feel for where your real ceiling sits.
I've detailed my main recommendations for you below:
This is a lot to take in, I know. But fixing these structural issues is the only way to get predictable, scalable results from Meta ads, especially during crazy periods like BFCM. Here's a summary of the actionable plan I've outlined.
| Area of Focus | Problem | Actionable Solution |
|---|---|---|
| Campaign Structure | Mixing unproven cold adsets with proven warm adsets confuses the CBO algorithm. | Separate your campaigns. Create a 'Prospecting' campaign for cold ToFu audiences and a 'Retargeting' campaign for warm MoFu/BoFu audiences. |
| Audience Testing | New adsets are spending the budget too quickly and inefficiently without control. | Create a dedicated 'Testing Campaign'. Use Ad Set Budget Optimisation (ABO) to set strict daily budgets per adset, or use CBO with Adset Maximum Spend Limits. |
| Scaling Process | There is no clear process for moving a successful test into a scaled environment. | Once an adset in the Testing Campaign proves profitable (spends 2-3x CPA and hits targets), duplicate it into your main CBO 'Scaling' campaign. |
| Targeting Quality | Broad, generic interest targeting leads to poor quality traffic and failed tests. | Define your Ideal Customer Profile (ICP) by their 'nightmare' problem. Target interests related to competitors, tools, and influencers that signal this specific pain point. |
| Performance Metrics | Judging adsets as "unprofitable" based on day-one ROAS leads to killing potential winners too early. | Calculate your LTV to understand your true maximum affordable CPA. Use this number to give new adsets a fair chance to perform before making a decision. |
Implementing a robust structure like this takes some initial effort, but it pays off massively in the long run. It turns your ad account from a volatile, unpredictable money pit into a reliable engine for growth. You'll know exactly what's working, what's not, and how to scale your winners without breaking everything.
This is precisely the kind of strategic overhaul we specialise in. It's often difficult to see the bigger picture when you're in the trenches every day dealing with these frustrations. Getting a fresh pair of expert eyes on the account can make all the difference.
If you’d like to go through your account setup together and map out a more detailed plan, we offer a completely free, no-obligation strategy session. We can have a look at your exact campaigns and give you some more specific advice. Feel free to book one in if you think that would be helpful.
Hope this helps!
Regards,
Team @ Lukas Holschuh