Hi there,
Thanks for reaching out! I've read through your situation and I can see why you're feeling confused: Meta ads can feel like a total black box when you're used to the way Google works. It's a common frustration, so don't worry; you're not alone in this.
I'm happy to give you some initial thoughts and guidance. To be honest, the whole ABO vs CBO debate is a bit of a distraction from the real issues that are likely holding your account back. You've got an "okay" ROAS of 3.5, but you're right to feel like you're losing money, because your current process of testing new AI creatives daily is probably burning cash on ads that never had a chance to work in the first place. We'll need to sort out your testing methodology first, and then look at the bigger picture: your message and your offer.
TL;DR:
- Stop the daily creative churn. Uploading 3 new AI creatives every day is confusing the algorithm and preventing you from gathering any real learnings.
- The ABO vs CBO debate is the wrong question. CBO is generally better for scaling, but only once you've found winning creatives and audiences through a structured testing process in a separate campaign.
- Your AI avatars are likely a symptom of a bigger problem. Shiny creative tools can't fix a weak message. You need to focus on your customer's core problem, not just create visually interesting ads.
- The most important thing to fix is your offer and messaging. A 3.5 ROAS is okay, but a truly compelling offer would make your ads far more effective and scalable. We'll explore how to diagnose this.
- This letter includes a flowchart for a proper creative testing process and a few small worked examples, including one that shows how much you can truly afford to pay for a customer and how your conversion rate drives your ROAS.
We'll need to look at the ABO vs CBO debate... but not in the way you think
Alright, let's get this out of the way first. Everyone gets really hung up on ABO (Ad Set Budget Optimisation) vs CBO (Campaign Budget Optimisation), but honestly, it's not the magic bullet people think it is. You've noticed CBO performs better, and there's a simple reason for that: it's what Meta's algorithm is built for now. CBO lets the system do its job by automatically allocating your budget to the ad sets (and by extension, the ads) that it predicts will get you the best results.
Your observation that "performance is noticeably worse compared to CBO" is bang on. When you use ABO, you're forcing an equal or set amount of spend on each ad set, regardless of how it's performing. You're telling the algorithm, "I know better than you, spend $50 here even if it's getting terrible results." With CBO, you're telling it, "Here's $150 for the day, go find me the cheapest conversions within these ad sets." The algorithm will naturally gravitate towards the lowest-hanging fruit – the audience and creative combination that's working best.
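If it helps to see why that matters, here's a toy sketch in Python. To be clear, this is not Meta's actual algorithm, and every number in it is made up; the "CBO" branch is an idealised version that simply routes the whole budget to the cheapest conversions:

```python
# Toy illustration of ABO vs CBO budget allocation. This is NOT Meta's
# actual algorithm -- it assumes we already know each ad set's true cost
# per conversion, which in reality the system is constantly predicting.

daily_budget = 150.0
ad_sets = {
    "broad_audience": {"cost_per_conversion": 25.0},
    "lookalike_1pct": {"cost_per_conversion": 40.0},
    "interest_stack": {"cost_per_conversion": 70.0},
}

# ABO: equal forced spend on every ad set, regardless of performance.
abo_conversions = sum(
    (daily_budget / len(ad_sets)) / s["cost_per_conversion"]
    for s in ad_sets.values()
)

# CBO (idealised): the whole budget flows to the cheapest conversions.
cheapest = min(s["cost_per_conversion"] for s in ad_sets.values())
cbo_conversions = daily_budget / cheapest

print(f"ABO: ~{abo_conversions:.1f} conversions/day")  # ~4.0
print(f"CBO: ~{cbo_conversions:.1f} conversions/day")  # 6.0
```

Same $150 per day, roughly 50% more conversions, purely from where the budget is allowed to flow. That's the mechanical advantage you've been observing.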
So why do people still recommend ABO for testing? The theory is that it forces spend on new creatives to give them a 'fair chance'. But this is a flawed way of looking at it. If a creative is genuinely good, the CBO algorithm is desperate to find it and spend money on it because that's how it achieves its objective for you. The problem isn't CBO not spending; it's that most of your creatives probably aren't good enough to justify the spend in the first place. The algorithm figures this out much faster than we can, often within a few dollars of spend.
The fact that most of your creatives aren't even getting $10 of spend in your CBO campaign is not a sign that CBO is broken. It's actually a signal that the algorithm has quickly determined those creatives are unlikely to perform well and has shifted the budget to the one(s) it has more confidence in. You're essentially getting free, rapid-fire testing feedback. Wasting $50 in an ABO ad set just to 'prove' a creative is bad is a much more expensive way to learn the same lesson.
I'd say you need to flip your thinking. CBO is your scaling tool. It's for when you have a few proven audiences and a handful of proven creatives. ABO can have a place, but only within a very controlled, separate *testing campaign* with strict rules. You don't mix your testing and scaling together in one big CBO campaign and hope for the best, which is what it sounds like you're doing now.
Think of it like this. Your current CBO campaign is a battlefield where only the strongest survive. Throwing in three brand new, untested soldiers every single day is just sending them to an immediate slaughter. The seasoned veterans (your existing winning ads, if you have any) will get all the resources (the budget) because the general (the algorithm) knows they have the best chance of winning the battle (getting you conversions).
I'd say you are testing your creatives all wrong
This brings us to the core of the issue. Your problem isn't CBO vs ABO. Your problem is your entire approach to creative testing. Uploading three new AI-generated creatives every single day is, to be blunt, chaos. You're not actually *testing* anything; you're just throwing stuff at the wall and creating a massive mess for the algorithm to sort through.
Here’s why this approach is failing you:
- No Learning Period: Every time you add a new ad, you nudge the ad set back into the 'Learning Phase'. Doing this daily means your campaigns are *constantly* in learning, never achieving stability or gathering enough data to properly optimise. You're essentially resetting the system's progress every 24 hours.
- No Valid Comparison: How can you possibly know if "AI Avatar 7" is better than "AI Avatar 4" if they ran on different days, with different daily budgets, and against different competing ads in the auction? You can't. A proper test requires controlling the variables, and your current method has no controls at all.
- You're Wasting Your Winners: Let's say one of those AI ads actually shows promise. In your current system, it gets a tiny bit of spend for a day, and then tomorrow it's competing with three brand new ads. It never gets the chance to mature, gather data, and potentially become a long-term winning creative. You're likely pausing ads that could have been great if you'd just given them a chance.
You need to completely separate your testing from your scaling. This is probably the single biggest change you can make to your account right now. You need a dedicated, systematic process for identifying winning creatives *before* they ever make it into your main CBO scaling campaign.
Here's a much better way to do it:
- Create a dedicated "Creative Testing Campaign". This campaign can use ABO. Its only job is to test new creatives. You'll create one ad set per new creative idea (or a small group of very similar creatives). Give each ad set a small, identical daily budget – say $20-$30. The audience targeting in this campaign should be one of your broadest, most reliable audiences. The goal here isn't to be super efficient; it's to isolate the creative as the primary variable.
- Run the test for a set period. Let these ads run for at least 3-4 days without touching them. This gives them time to exit the learning phase and gather some initial data. Don't make snap judgements after 24 hours.
- Define your winning metric. How do you know if a creative is a "winner"? It's not just about ROAS. In a testing environment, you might look for leading indicators like a high Click-Through Rate (CTR), a low Cost Per Click (CPC), or a high number of 'Add to Carts'. Your main goal is to find ads that get a strong initial reaction from the audience. A good rule of thumb is to kill any ad that has spent 2-3 times your target Cost Per Acquisition (CPA) without a single conversion (there's a small sketch of this decision rule just after this list).
- Graduate the winners. Once you've identified a creative that clearly outperforms the others based on your metrics, you pause the testing ad set. Then, and only then, do you duplicate that winning creative into your main CBO "Scaling Campaign". This campaign should only contain your all-star creatives that have already proven themselves.
This process is methodical. It's scientific. It removes guesswork and allows you to build a library of proven ads that you can rely on. It means your main CBO campaign becomes a stable, predictable engine for growth, fed only the best fuel. You'll go from testing three new ads a day to maybe testing a batch of 3-5 new ads *per week*, and only one or two will actually make it into your main campaign.
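To make the 'graduate or kill' rule concrete, here's a minimal sketch in Python. Every number in it is hypothetical (an $80 average order value, a 60% gross margin, a 2% CTR threshold); the logic is simply the rules described above, not anything built into Meta's platform. It also shows how to work out the most you can truly afford to pay for a customer:

```python
# A minimal sketch of the 'graduate or kill' decision rule described above.
# All numbers are hypothetical -- plug in your own AOV, margin, and results.

average_order_value = 80.0   # revenue per purchase (assumed)
gross_margin = 0.60          # after product/fulfilment costs (assumed)

# The most you can afford to pay for a customer and still break even:
max_affordable_cpa = average_order_value * gross_margin  # $48.00
target_cpa = max_affordable_cpa * 0.7  # leave room for profit: $33.60

KILL_MULTIPLE = 2.5  # kill after spending 2-3x target CPA with no sale

def judge_creative(spend: float, conversions: int, ctr: float) -> str:
    """Apply the testing rules: kill clear losers, graduate clear winners."""
    if conversions == 0 and spend >= KILL_MULTIPLE * target_cpa:
        return "PAUSE: spent too much with no conversions"
    if conversions > 0 and spend / conversions <= target_cpa:
        return "GRADUATE: move to the scaling CBO campaign"
    if ctr >= 0.02:  # strong hook; worth letting it run longer
        return "KEEP TESTING: promising leading indicators"
    return "KEEP TESTING: not enough data yet"

# Example: judging two ads on day 4 of a test batch
print(judge_creative(spend=85.0, conversions=0, ctr=0.008))  # PAUSE
print(judge_creative(spend=60.0, conversions=2, ctr=0.025))  # GRADUATE
```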
*[Flowchart: the weekly creative testing loop. Batch 3-5 new angles/hooks → one ad set per idea → $20/day budget each → check CTR, CPA, and Add to Carts → winners move to the Scaling CBO campaign; losers are paused, and the learnings feed the next batch.]*
You should probably look at your creatives themselves...
Now, let's talk about these "AI avatar creatives". You're paying $299 for a tool, and you're excited about a new VEO3 feature. I get it. New tech is exciting. But I need you to be brutally honest with yourself: are these ads actually working because they connect with a customer's deep-seated problem, or do they just look cool?
In my experience, 99% of the time the secret to a great ad isn't the flashy editing or the AI-generated presenter. It's the message. It's the first three seconds – the hook – that speaks directly to a specific, urgent, and expensive nightmare your ideal customer is having. Your ad's only job is to make them stop scrolling and think, "How did they know? That's exactly my problem."
An AI avatar can deliver a message, sure. But can it truly convey the empathy and understanding of someone who lives and breathes your customer's pain? Usually not. These tools often spit out generic, feature-focused scripts that sound like every other ad on the internet. "Are you tired of X? Our solution offers Y and Z. Click here to learn more." It's boring, and it doesn't work.
Instead of focusing on the *tool* that makes the ad, you need to focus on the *intelligence* that goes into it. Who is your ideal customer? Not their demographics. I don't care if they're 25-34 and live in London. What keeps them up at night? What are they secretly terrified of failing at in their job or life? What problem do they have that's costing them time, money, or status?
Let's imagine you're selling a project management tool. A generic AI ad might say:
"Struggling to keep your projects on track? Our tool helps you manage tasks, deadlines, and team collaboration all in one place. Boost your productivity today!"
It's not wrong, but it's not compelling either. It speaks to no one in particular.
Now, let's use the 'Problem-Agitate-Solve' framework, targeting a marketing manager:
(Problem) "Your CMO just asked for an update on the Q3 launch, and your stomach drops. You've got five different spreadsheets, a dozen Slack channels, and you have no idea if you're on schedule or about to miss a critical deadline."
(Agitate) "Another missed deadline means another awkward conversation, and your competitors are launching new features every week. You feel like you're constantly putting out fires instead of doing the creative work you love."
(Solve) "Get a single source of truth for all your marketing projects. Our platform turns chaos into clarity, so you can confidently tell your CMO you're ahead of schedule. See how in a free trial."
See the difference? The second ad isn't selling a feature; it's selling relief from a career-threatening nightmare. An AI tool can't come up with that level of empathy. That comes from you doing the hard work of truly understanding your customer.
So, before you get excited about VEO3, I'd challenge you to spend a week doing nothing but customer research. Find the forums, the subreddits, the Facebook groups where they hang out. Read their complaints. Understand their language. Then, write ten different hooks based on those specific pain points. You could even film them yourself on your phone. I guarantee that a raw, authentic video that nails the customer's pain will outperform a slick, AI-generated video with a generic message every single time.
You'll need a better offer, not just better ads
This leads us to the final, and most important, piece of the puzzle. The number one reason campaigns fail isn't the targeting, the bidding strategy, or even the creative. It's the offer. A 3.5 ROAS is okay, as I said. It's profitable. But it's not great, and it suggests there might be a weakness in what you're actually asking people to do or buy.
Great advertising can't fix a mediocre offer. If your product is confusing, if your pricing is wrong, if your landing page doesn't build trust, or if you're asking for too much commitment too soon (like a "Request a Demo" button), you're forcing your ads to do all the heavy lifting. It's like trying to fill a leaky bucket. You can pour water in faster (spend more on ads), but you'll always be losing most of it through the holes in the bottom (your weak offer).
What is your offer? Are you selling a product directly? A subscription? Are you generating leads? You need to critically evaluate every step of the customer journey *after* they click the ad.
- Is the value proposition crystal clear on your landing page? Can someone understand what you do and why it matters within 5 seconds?
- Is there a strong element of proof? Testimonials, case studies, reviews, social proof – anything to show that real people have used and benefited from your solution.
- Is the Call to Action (CTA) low-friction and high-value? Instead of "Buy Now," could you offer a discount on the first purchase? Instead of "Request a Demo," could you offer a free, valuable resource like a checklist, a template, or an automated audit? You have to give value to get value.
The strength of your offer has a massive impact on your conversion rates, which in turn dictates your entire ad performance. A small improvement in your landing page conversion rate can make an unprofitable campaign profitable, or a profitable campaign massively scalable.
Let's look at the maths. A 3.5 ROAS means that for every $1 you spend, you get $3.50 back. It's decent. But what if you could improve your website's conversion rate from, say, 2% to 3% by clarifying your offer and adding some social proof? That one-percentage-point change has a huge ripple effect.
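Here's that maths as a small calculator sketch in Python. The cost per click and average order value are assumptions I've picked so that the 2% baseline lands exactly on your current 3.5 ROAS; swap in your real numbers:

```python
# A back-of-the-envelope ripple-effect calculator. The CPC and average
# order value below are assumptions, chosen so the 2% conversion-rate
# baseline matches a 3.5 ROAS -- replace them with your real figures.

def roas(cpc: float, conversion_rate: float, avg_order_value: float) -> float:
    """ROAS = revenue per purchase / ad cost per purchase."""
    cost_per_purchase = cpc / conversion_rate  # e.g. $0.50 / 0.02 = $25
    return avg_order_value / cost_per_purchase

CPC = 0.50    # assumed cost per click
AOV = 87.50   # assumed average order value

for cr in (0.02, 0.03, 0.04):
    print(f"Conversion rate {cr:.0%}: ROAS = {roas(CPC, cr, AOV):.2f}")

# Conversion rate 2%: ROAS = 3.50
# Conversion rate 3%: ROAS = 5.25
# Conversion rate 4%: ROAS = 7.00
```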
Play around with that sketch. See what happens to your ROAS when you increase the conversion rate from 2% to 3%, or 4%. The difference is massive. Often, it's far easier and cheaper to lift your conversion rate by a percentage point than it is to lift your ad's CTR by the same amount. You should be spending just as much time optimising your landing page and offer as you do making new creatives.
I've detailed my main recommendations for you below:
So, to pull this all together, here is the exact advice I have for you. Stop the daily chaos and start implementing a proper system. This is how you go from feeling confused and frustrated to being in control of your advertising.
| Area of Focus | Problem | Actionable Solution |
|---|---|---|
| Campaign Structure | Mixing testing and scaling in one CBO campaign, causing instability and low spend on new ads. | Immediately create two separate campaigns: 1) A "Creative Testing" campaign using ABO to isolate and test new ideas with small budgets. 2) A "Scaling" campaign using CBO that contains *only* creatives that have proven themselves in the testing campaign. |
| Creative Process | Uploading 3 new AI creatives daily is inefficient, prevents learning, and causes algorithm confusion. | Stop the daily churn. Move to a weekly testing cycle. Batch 3-5 new creative *ideas* (based on customer pain points), test them for 3-4 days in your new testing campaign, and only graduate the clear winners. |
| Messaging & Strategy | Over-reliance on a creative tool (AI avatars) instead of focusing on the underlying message. | Pause buying new creative tools. Spend the next week researching your ideal customer's biggest 'nightmare'. Write 10 ad hooks that speak directly to that pain. The message is more important than the medium. |
| Offer & Conversion | An "okay" ROAS of 3.5 suggests the offer or landing page may be leaking conversions and limiting scalability. | Critically audit your landing page and offer. Is the value prop clear? Is there enough social proof? Can you make the call-to-action lower friction? A/B test one major change on your landing page this month. |
This is a lot to take in, I know. But the core idea is simple: bring order to the chaos. Right now, you're operating without a system. By implementing a proper testing framework and shifting your focus from shiny tools to your customer's actual problems, you can build a much more stable, scalable, and profitable advertising machine.
Getting this right can be tricky, and it takes experience to spot the opportunities and avoid the common pitfalls. The difference between an okay 3.5 ROAS and a truly great one often comes down to this kind of rigorous methodology and strategic insight. For instance, I remember working on a Meta Ads campaign for a women's apparel company where implementing a similar structured testing process helped us achieve a 691% return. That's the kind of shift that can make a huge difference, saving you months of expensive trial and error.
If you'd like to go through your account together on a call and have us map out a more detailed strategic plan for you, we offer a completely free, no-obligation initial consultation. It might be helpful to have a second pair of eyes on everything.
Hope this helps!
Regards,
Team @ Lukas Holschuh