Hi there,
Thanks for reaching out!
Happy to give you some initial thoughts on your question about CBO and scaling with new creatives. It's a common sticking point, and honestly, the way most people approach it is fundamentally flawed. You're trying to unlock new audience segments, but the method you're considering is probably going to do the opposite and just cannibalise your existing results.
Let's get into how I'd approach this. I've broken down my thinking below, which should give you a much clearer framework for testing and scaling your campaigns profitably.
TL;DR:
- Adding a new creative to a winning ad set won't find new audience segments; it will just compete for the same audience's attention.
- You must separate your winning campaigns from your testing campaigns. Never test new things in a campaign that's already working well.
- To find new audiences, you must test new audiences in new ad sets, not just new creatives in old ones. Your focus should be on their "nightmare" problems, not just demographics.
- Before you even think about scaling, you must calculate your Customer Lifetime Value (LTV) to understand how much you can actually afford to pay for a new customer.
- This letter includes a functional LTV calculator and a flowchart demonstrating a proper campaign testing structure to help you implement these ideas.
We'll need to look at how CBO really works...
Alright, so the first thing we need to clear up is a big misconception about Campaign Budget Optimisation (CBO). You've got this one winning ad set and you want to add a new creative to it, hoping it'll find a "new micro segment". I get the logic, but that's just not how the algorithm works.
When you have a CBO campaign, Meta's algorithm is looking at all the ad sets inside it and deciding which one is most likely to get you the results you want for the cheapest cost. It then pushes the budget towards that winning ad set. Inside that ad set, it does the same thing with your creatives—it finds the one that's performing best and gives it the lion's share of the impressions.
So, if you drop a new creative into your existing winning ad set, you're not telling Meta to go find new people. You're just giving it another option to show to the exact same audience it's already targeting. The new creative will simply compete with your old winner for the attention of the same pool of users. Best case scenario, the new creative is even better and takes over. Worst case, it's not as good, it pulls some budget away from your winner while it's being tested, and your overall performance dips. It's an inefficient way to test and it absolutely won't unlock new segments.
This is probably the most common mistake I see when I audit accounts. People treat their live, profitable campaigns like a science lab, constantly tinkering and adding new variables. You're effectively trying to fix something that isn't broken, and you risk breaking it in the process. A winning campaign should be protected, not used as a testing ground.
I'd say you need a proper testing framework...
So what's the solution? You need to completely separate your proven campaigns from your experimental ones. This is non-negotiable for stable scaling. Your current winning CBO campaign? Let's call it your 'PROVEN' campaign. You should duplicate it, turn off the original, and then leave the new one well alone. Don't touch it. Its job is to consistently bring in results.
Now, you create a completely new campaign. This is your 'TESTING' campaign. This is where you get to play, experiment, and try to find your next winner. Inside this TESTING campaign is where you'll build new ad sets to target the new "micro segments" you're looking for. The core principle here is to isolate variables. If you want to test a new audience, you create a new ad set for that audience. If you want to test a new creative, you put it in an ad set and test it against your current best-performing creative (the control).
Here’s a simple structure I use for this:
Campaign 1: PROVEN (CBO)
This is your money-maker. Do NOT touch it unless performance significantly drops.
- Winning Ad Set 1 (your current winner: audience + creative)
- Winning Ad Set 2 (once you find a new winner in testing, you can move it here)

Campaign 2: TESTING (ABO)
This is your lab. Isolate variables and find your next winner here. Use Ad Set Budgets (ABO) for control.
- Test Ad Set A: New Audience 1, with Creative A (winner) vs. Creative B (new)
- Test Ad Set B: New Audience 2, with Creative A (winner) vs. Creative B (new)
I'd recommend using Ad Set Budgets (ABO) for your TESTING campaign. This gives you direct control over how much you spend on each test. With CBO in a testing environment, the algorithm might quickly decide it doesn't like one of your new audiences and stop spending on it before you've gathered enough data to make a proper call. ABO ensures each new ad set gets a fair shot.
Once an ad set in your TESTING campaign consistently outperforms the baseline in your PROVEN campaign, you've found a new winner. You can then move that entire ad set (the new audience and winning creative combination) into your main PROVEN CBO campaign. Now your scaling campaign has two proven winners, and your budget will be optimised across both. This is how you scale methodically and safely.
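To make that graduation rule concrete, here's a minimal Python sketch. It assumes you pull daily cost-per-acquisition (CPA) figures for each ad set yourself; none of this touches the Meta API, and the function name and seven-day threshold are illustrative, not a fixed rule.

```python
# A sketch of the "graduate winners" decision, assuming you export
# daily CPA figures per ad set. All names here are illustrative.

def should_graduate(test_cpas: list[float], proven_cpas: list[float],
                    min_days: int = 7) -> bool:
    """Promote a test ad set only once it has at least `min_days` of data
    and its average CPA beats the PROVEN campaign's baseline."""
    if len(test_cpas) < min_days:
        return False  # not enough data to make a proper call yet
    test_avg = sum(test_cpas) / len(test_cpas)
    proven_avg = sum(proven_cpas) / len(proven_cpas)
    return test_avg < proven_avg

# Example: a week of daily CPAs from a test ad set vs. the proven baseline
print(should_graduate(
    test_cpas=[42, 38, 45, 40, 39, 41, 37],
    proven_cpas=[48, 50, 47, 49, 51, 46, 50],
))
```

The point of the `min_days` guard is the same as the ABO advice above: don't let one good (or bad) day decide a test before it's had a fair run.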
You probably should define who you're looking for...
This brings us to the next logical question: what new audiences should you be testing? "Unlocking micro segments" is a nice phrase, but it's meaningless if you don't have a clear idea of who you're trying to reach. This is where most advertising efforts fall apart. People spend all their time on the technical setup and almost no time on the human element.
You need to forget generic demographic profiles. "Men aged 25-45 who are interested in technology" is useless. That could be anyone. Instead, you need to define your customer by their pain. What is the specific, urgent, expensive nightmare that keeps them up at night? Your product or service is the solution to that nightmare.
For example, one of our B2B software clients sells a tool that helps with sales outreach. We could target people with the job title "Sales Manager". But that's lazy. What's the sales manager's nightmare? It's their team not hitting quota, their best reps threatening to quit because the lead lists are rubbish, and having to explain that failure to the board. The pain is real and it's urgent.
So instead of targeting "Sales Manager," we target interests that are proxies for that pain. We'd target people who are members of "SaaS Growth Hacks" groups, people who follow specific sales influencers known for talking about pipeline problems, or people interested in competing software like Outreach.io or SalesLoft. We're not targeting a demographic; we're targeting a problem state. This is how you find your "micro segments". They are pockets of people unified by a shared problem.
Generic Demographic Targeting (Wasteful)
- Interest: "Business"
- Interest: "Technology"
- Demographic: Age 30-50
- Behaviour: Small Business Owners
Pain-Based Targeting (Effective)
- Interest: "HubSpot" (They already use marketing tools)
- Interest: "Stratechery by Ben Thompson" (They follow industry analysis)
- Interest: "SaaS Growth Hacks" (They're actively trying to solve a problem)
- Lookalike: Top 1% of your existing highest-value customers
Do this work first. Map out 5-10 of these "nightmare scenarios" and the corresponding niche interests, influencers, software tools, or publications that your ideal customer would engage with. These become the audiences you'll build out in your TESTING campaign. Now you're not just shooting in the dark; you're systematically testing hypotheses about who your customer is and what they care about.
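If it helps to keep this exercise organised, the mapping can literally be a lookup table: one pain, one list of proxy interests, one future ad set. This sketch reuses the sales-manager example; the pain wording and the idea of one ad set per pain are mine, while the interests are the ones discussed above.

```python
# A sketch of the nightmare-to-audience mapping exercise.
# Each key is a pain; each value is a list of interest proxies for it,
# and each entry becomes one new ad set in the TESTING campaign.

pain_map = {
    "team missing quota, reps quitting over bad lead lists": [
        "SaaS Growth Hacks",  # actively trying to solve the problem
        "Outreach.io",        # competing software
        "SalesLoft",          # competing software
        "HubSpot",            # already uses marketing tools
    ],
}

for pain, interests in pain_map.items():
    print(f"TESTING ad set for pain: {pain}")
    for interest in interests:
        print(f"  - interest proxy: {interest}")
```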
You'll need to understand your numbers before you scale...
Let's say your testing framework is working. You've found a new audience that's converting. The temptation is to just crank up the budget. But this is where you can get into serious trouble if you don't know your numbers. The real question you should be asking isn't "How can I find a new audience?" but "How much can I afford to pay to acquire a customer from that new audience?"
The answer lies in a metric that surprisingly few businesses track properly: Customer Lifetime Value (LTV). This tells you the total profit you can expect to make from an average customer over the entire course of your relationship. Once you know this, everything else about your advertising becomes clearer.
Here’s the basic formula:
LTV = (Average Revenue Per Customer Per Month * Gross Margin %) / Monthly Churn Rate
Let's break it down with an example. I remember one campaign we worked on for a subscription box service where we achieved a 1000% return on ad spend once we got their numbers right.
-> Average Revenue Per Account (ARPA): £40/month
-> Gross Margin %: 60% (after cost of goods, shipping, etc.)
-> Monthly Churn Rate: 8% (the percentage of customers who cancel each month)
The calculation would be: (£40 * 0.60) / 0.08 = £24 / 0.08 = £300 LTV.
Each customer you acquire is worth £300 in gross profit to your business. A healthy business model often aims for a 3:1 LTV to Customer Acquisition Cost (CAC) ratio. This means you can afford to spend up to £100 to acquire a single new customer and still have a very profitable business. Suddenly, that £50 Cost Per Purchase from your Meta ads doesn't look so scary, does it? It looks like a bargain.
This is the maths that unlocks aggressive, intelligent scaling. It frees you from the tyranny of chasing cheap but low-quality leads and allows you to confidently invest in acquiring high-value customers. I've built a simple calculator for you below so you can plug in your own numbers and see for yourself.
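Here's that calculator as a small Python sketch. The function names are mine, and the defaults are the subscription-box numbers from the example above; swap in your own ARPA, margin, and churn figures.

```python
# Simple LTV / target-CAC calculator, a sketch of the formula above.
# LTV = (monthly revenue per customer * gross margin) / monthly churn rate

def ltv(arpa_per_month: float, gross_margin: float, monthly_churn: float) -> float:
    """Gross profit you can expect from an average customer over their lifetime."""
    if monthly_churn <= 0:
        raise ValueError("Monthly churn rate must be positive")
    return (arpa_per_month * gross_margin) / monthly_churn

def target_cac(ltv_value: float, ltv_to_cac_ratio: float = 3.0) -> float:
    """Maximum allowable cost to acquire a customer at a given LTV:CAC ratio."""
    return ltv_value / ltv_to_cac_ratio

if __name__ == "__main__":
    value = ltv(arpa_per_month=40.0, gross_margin=0.60, monthly_churn=0.08)
    print(f"LTV: £{value:.2f}")                            # £300.00
    print(f"Target CAC at 3:1: £{target_cac(value):.2f}")  # £100.00
```

Run it with your own numbers and the target CAC it prints is the ceiling you can comfortably pay per customer on Meta.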
My main recommendations for you are below:
So, to bring this all together, trying to find new customers by adding creatives to a winning ad set is a dead end. You need a more strategic approach that starts with understanding the customer, builds a rigorous testing process, and is grounded in solid financial metrics. It's a lot more work upfront, but it's the only way to build a scalable and predictable customer acquisition machine instead of just getting lucky with one ad set.
I've detailed my main recommendations for you in a table below. This is the exact process we'd follow if we were managing your account.
| Action | Rationale | Implementation Step |
|---|---|---|
| 1. Isolate Winners | Protect your profitable campaigns from risky experiments that can hurt performance. | Create a 'PROVEN' CBO campaign for your winning ad sets. Do not add new variables to it. |
| 2. Build a Testing Lab | To find new winning audiences and creatives without disrupting what works. | Create a separate 'TESTING' ABO campaign. Use this to test new audiences and creatives. |
| 3. Define the "Nightmare" | Find high-intent audiences by focusing on their problems, not just their demographics. | Brainstorm 5-10 specific pains your ideal customer has. Find interests that are proxies for this pain. |
| 4. Calculate Your LTV | To understand your true allowable cost per acquisition and enable profitable scaling. | Use the LTV formula above to find your LTV and target CAC. |
| 5. Graduate Winners | Systematically scale your budget towards proven audience/creative combinations. | When a test ad set consistently beats your PROVEN campaign's performance, move it into the PROVEN campaign. |
This systematic approach is really what separates professional campaign management from the guesswork that leaves most businesses stuck. It's about building systems, not just launching ads. It takes discipline, but it's the foundation for any successful paid acquisition strategy.
Hopefully this gives you a much better roadmap. It's a fundamental shift in thinking, from 'how do I tweak this ad set?' to 'how do I build a machine that finds new customers?'. This is the kind of strategic work we do with our clients every day, moving them from unpredictable results to a clear, data-driven growth plan.
If you'd like to walk through how these principles could apply specifically to your business and your ad account, feel free to book in a free, no-obligation consultation with us. We can have a proper look at your setup and give you some more tailored advice.
Regards,
Team @ Lukas Holschuh