Hi there,
Thanks for your enquiry about your Meta campaign. It's a really common problem you're describing, but the reason for it (and the solution) isn't what most people think. It's less about Meta not giving ads a 'chance' and more about how you're giving instructions to the algorithm.
Happy to walk you through why this happens and how we'd go about fixing it.
TL;DR:
- The issue isn't a bug; Meta's algorithm is working as designed by rapidly allocating budget to what it thinks is the early winner, even if it's wrong.
- Putting 16 ads in a single ad set is a flawed testing method. It forces the algorithm to make a premature choice and starves potentially better ads of data. It's one of the most common mistakes we see.
- The solution is a structured testing approach. You need to group your ads into smaller, themed ad sets (3-4 ads max per set) using Ad Set Budget Optimisation (ABO) to ensure each concept gets a fair budget.
- Below, I've broken down the maths on how your current setup dilutes your budget, and sketched out the algorithm's 'greedy' decision-making process step by step.
- Your immediate action should be to pause the campaign, restructure it based on the advice below, and relaunch with a proper testing framework to find your *actual* winning ads.
We'll need to look at why Meta's algorithm is a 'Greedy Toddler' with your budget...
Right, so the first thing to get your head around is that the Meta algorithm isn't designed to be 'fair'. It's designed to be ruthlessly efficient at one thing: getting you the result you asked for (in your case, conversions) at the lowest possible cost, as fast as possible. Think of it less like a careful scientist and more like a greedy toddler in a sweet shop. It's going to grab the first sweet it sees and ignore everything else.
When you launch an ad set, it enters what Meta calls the "learning phase". During this initial period, the algorithm is frantically trying to figure out which ad in your set is going to work. It shows your ads to a small portion of your audience and watches the results like a hawk. The moment one ad gets a conversion or two, even if it's just down to pure luck, the algorithm gets a massive signal. It thinks, "Aha! This is the one!" and starts pouring the majority of the budget into it. In your case, 94% of it.
This creates a self-fulfilling prophecy. The ad that got the early lucky break gets more budget, so it gets shown to more people, which means it has more opportunities to get conversions, which then tells the algorithm to give it even *more* budget. Meanwhile, your other 15 ads, including the ones that you noticed have a better cost-per-result, are left starving in the corner with only 0.4% of the spend. They never get enough budget to gather enough data to prove they're actually the better performers over the long run. It's a vicious cycle.
This isn't a flaw in the system; it's the system working exactly as intended. Your instructions were "maximise conversions," and the algorithm found the quickest, most decisive path to do that based on the very first signals it got. The problem isn't the algorithm; it's the environment you've put it in.
Here's the 'greedy' feedback loop, step by step:
1. Start: 16 ads launched.
2. Early signals: Ad #7 gets a lucky early conversion.
3. Budget shift: 94% of the budget is given to Ad #7.
4. Result: Ad #7 dominates, even if other ads are more efficient.
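If you like seeing the mechanics spelled out, here's a toy simulation of that feedback loop in Python. To be clear, this is not Meta's actual delivery system; the conversion rate, the impressions-per-dollar figure and the 14-day window are all invented purely to illustrate how a greedy allocator latches onto early luck.

```python
import random

# Toy simulation of a 'greedy' budget allocator (not Meta's real delivery
# system) - every ad has the SAME true conversion rate, yet the allocator
# keeps shifting budget towards whichever ads converted most recently.

random.seed(42)
NUM_ADS = 16
TRUE_RATE = 0.02               # identical true conversion rate for every ad
IMPRESSIONS_PER_DOLLAR = 2     # made-up figure for the toy model
DAILY_BUDGET = 150             # the $150/day this ad set actually gets

shares = [1 / NUM_ADS] * NUM_ADS   # day 1: an even split of the budget

for day in range(14):
    conversions = []
    for share in shares:
        impressions = int(share * DAILY_BUDGET * IMPRESSIONS_PER_DOLLAR)
        conversions.append(sum(random.random() < TRUE_RATE for _ in range(impressions)))
    total = sum(conversions)
    if total:  # greedy step: tomorrow's budget follows today's conversions
        raw = [max(c / total, 0.004) for c in conversions]   # losers keep ~0.4% scraps
        shares = [r / sum(raw) for r in raw]

top = max(range(NUM_ADS), key=lambda i: shares[i])
print(f"After 2 weeks, Ad #{top + 1} gets {shares[top]:.0%} of the budget,")
print("even though every ad had the same true conversion rate.")
```

Change the random seed and a different ad 'wins' almost every run, which is exactly why an early front-runner in a 16-ad set tells you very little.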
I'd say you need to bust the "Mega Ad Set" myth...
This brings me to the core of the problem. Placing 16 different ads inside a single ad set is, quite frankly, a recipe for disaster and one of the most common mistakes I see when auditing new client accounts. It feels like you're doing a comprehensive test, but you're actually doing the opposite. You're creating chaos.
When you give the algorithm 16 options, you're not giving it 16 fair chances. You're diluting your budget so thinly across all of them that none of them can get enough spending behind them to generate statistically significant data. For an ad to truly prove itself, it needs to exit the learning phase, which Meta says requires about 50 conversions in a 7-day period. With a $300/day budget split between two ad sets ($150 each), and then that $150 split between 16 ads... you can see the problem. Each ad is getting a pittance.
You're not running a scientific test; you're running a lottery. A proper test requires controlling the variables. By lumping everything together, you create too much noise. The algorithm can't distinguish between a genuinely good ad and one that just got lucky. It's an issue of structure, not a fault of the platform.
Let's look at the maths. The figures below show just how little budget each of your ads is actually getting to work with, and how that impacts its ability to ever prove its worth.
- Average daily budget per ad: $9.38 ($150 per ad set split across 16 ads)
- Days needed per ad to exit learning (assuming ~50 conversions at a $25 cost per result): ~133
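For transparency, here's the arithmetic behind those two figures as a quick script. The $25 target cost per result is just the example figure I use later in this letter, and the 50-conversion threshold is Meta's rough learning-phase guideline, so treat the output as a ballpark rather than a precise forecast.

```python
# Rough arithmetic behind the figures above. The $25 target CPA is the
# example target used later in this letter; swap in your own numbers.

DAILY_CAMPAIGN_BUDGET = 300          # $ per day across the campaign
AD_SETS = 2                          # current structure: two ad sets
ADS_PER_AD_SET = 16                  # all creatives lumped into one ad set
TARGET_CPA = 25                      # $ per conversion (assumed example)
CONVERSIONS_TO_EXIT_LEARNING = 50    # Meta's rough guideline over 7 days

budget_per_ad_set = DAILY_CAMPAIGN_BUDGET / AD_SETS          # $150/day
budget_per_ad = budget_per_ad_set / ADS_PER_AD_SET           # ~$9.38/day

# Spend needed for a single ad to rack up ~50 conversions at the target CPA,
# and how long that takes at its current share of the budget.
spend_needed = CONVERSIONS_TO_EXIT_LEARNING * TARGET_CPA     # $1,250
days_needed = spend_needed / budget_per_ad                   # ~133 days

print(f"Average daily budget per ad: ${budget_per_ad:.2f}")
print(f"Days for one ad to gather ~50 conversions: {days_needed:.0f}")
```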
You probably should use a structured testing framework...
So, what’s the alternative? The solution is to move away from that chaotic "mega ad set" and adopt a structured, methodical approach to creative testing. This means more work upfront, but it's the only way to get reliable data that lets you scale your campaigns profitably.
The goal is to give each distinct creative *idea* a fair shot. Here’s how we do it:
1. Switch to Ad Set Budget Optimisation (ABO): For testing, you want to control the budget at the ad set level, not the campaign level (CBO). This is critical. By setting a specific budget for each ad set (e.g., $30-$50 per day), you guarantee that each one gets the exact amount of spend you want it to, preventing one audience or concept from hogging all the money.
2. Group Ads by Concept: Look at your 16 ads. They're probably not 16 entirely unique ideas. You likely have a few core concepts, with variations in headlines, images, or copy. Your first job is to group them. Maybe you have a "Testimonial" angle, a "Problem-Agitate-Solve" angle, a "Features & Benefits" angle, and a "UGC-style" angle. These are your concepts.
3. Build Themed Ad Sets: Create a separate ad set for each of these concepts. Inside each ad set, you should only have 3-4 ads at the absolute maximum. These ads should be variations of that single concept. For example, in your "Testimonial" ad set, you might test the same testimonial video with three different opening hooks or headlines. This allows the algorithm to optimise the smaller details within a controlled environment, rather than trying to compare apples with oranges (and pears, and bananas...).
This structure gives you much cleaner data. You'll be able to see not just which individual ad performs best, but which overall *creative angle* or *message* resonates most with your audience. That's a far more valuable insight for scaling your business.
Your current (flawed) setup:
- Ad Set 1 (ABO: $150/day), with all 16 ads inside it

Our recommended testing structure:
- Ad Set 1 - Testimonials (ABO: $40/day)
- Ad Set 2 - Problem/Solve (ABO: $40/day)
- Ad Set 3 - UGC (ABO: $40/day)
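If you prefer to see that structure written out explicitly, here's a minimal sketch in plain Python (not the Meta Marketing API) that encodes the recommended layout and sanity-checks the 3-4 ads-per-concept rule. The concept names, budgets and ad labels are just the illustrative ones from above.

```python
# Illustrative sketch of the recommended ABO testing structure - plain
# Python only (not the Meta Marketing API). Concept names, budgets and
# ad labels mirror the example above; swap in your own.

testing_campaign = {
    "name": "Creative Testing - ABO",
    "budget_level": "ad_set",   # ABO: budget is controlled per ad set
    "ad_sets": [
        {"concept": "Testimonials",  "daily_budget": 40, "ads": ["Hook A", "Hook B", "Hook C"]},
        {"concept": "Problem/Solve", "daily_budget": 40, "ads": ["Pain point", "Agitate", "Solution demo"]},
        {"concept": "UGC",           "daily_budget": 40, "ads": ["Creator 1", "Creator 2", "Creator 3"]},
    ],
}

def check_structure(campaign, max_ads_per_set=4):
    """Warn about any ad set that breaks the 3-4 ads-per-concept rule."""
    for ad_set in campaign["ad_sets"]:
        count = len(ad_set["ads"])
        if count > max_ads_per_set:
            print(f"'{ad_set['concept']}': {count} ads - too many, split into another concept")
        else:
            print(f"'{ad_set['concept']}': {count} ads at ${ad_set['daily_budget']}/day - OK")
    total = sum(s["daily_budget"] for s in campaign["ad_sets"])
    print(f"Total daily testing budget: ${total}")

check_structure(testing_campaign)
```

Writing it out this explicitly has a practical benefit: anyone on your team can see at a glance which concept each ad belongs to and how much budget that concept is guaranteed.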
You'll need to know when to kill an ad, and when to scale it...
Once you've got your new structured campaign running, the next question is how to analyse the results. You spotted that some of your under-spent ads had a lower cost per result, which is good, but making decisions based on such a tiny amount of data is dangerous. You need clear rules for when to turn an ad off ("kill") and when to give it more budget ("scale").
Here are some simple rules of thumb we use (I've sketched them in code just after this list):
- Don't be impatient. Let an ad set run for at least 3-4 days before you even look at it. The algorithm needs time to stabilise, and daily performance will fluctuate wildly at the start.
- Spend Equals CPA. A common rule is to wait until a specific ad has spent at least your target Cost Per Acquisition (CPA). If your goal is to get leads for $25, wait until an ad has spent $25. If it has zero conversions by that point, it's probably not a winner. I like to be a bit more generous and go for 1.5x or 2x the target CPA, just to be sure.
- Look for Outliers. After a few days, you'll start to see clear winners and losers within each ad set. If one ad has a CPA of $15 and another has a CPA of $50, the decision is obvious. Turn off the expensive one.
- Don't Get Distracted by Vanity Metrics. It's tempting to look at Click-Through Rate (CTR) or Cost Per Click (CPC). While these can be helpful for diagnosing problems (a very low CTR might mean your creative is boring), they are not your main goal. I've seen many ads with a low CTR and high CPC that have an amazing CPA, and vice versa. Your campaign is optimised for conversions, so Cost per Conversion is the *only* metric that truly matters when deciding winners. Everything else is secondary.
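To make those rules concrete, here's a rough sketch of the kill/scale decision as a small function. The 1.5x spend threshold and the $25 target CPA are the example numbers from above, and the '2x target CPA means kill' line is simply my reading of the $15-vs-$50 example; tune all of them to your own account.

```python
# Sketch of the kill/scale rules of thumb above. The thresholds are the
# illustrative numbers from this letter, not official Meta guidance.

def review_ad(spend, conversions, days_running, target_cpa=25.0,
              spend_multiplier=1.5, min_days=3):
    """Return 'wait', 'kill' or 'scale' for a single ad."""
    if days_running < min_days:
        return "wait"                      # too early: let the algorithm stabilise
    if spend < target_cpa * spend_multiplier:
        return "wait"                      # not enough spend to judge yet
    if conversions == 0:
        return "kill"                      # spent 1.5x target CPA with nothing to show
    cpa = spend / conversions
    if cpa <= target_cpa:
        return "scale"                     # at or below target: candidate for scaling
    if cpa >= 2 * target_cpa:
        return "kill"                      # well above target: clear loser
    return "wait"                          # grey zone: give it a little more data

# Example: two ads from the same ad set after four days
print(review_ad(spend=45.0, conversions=3, days_running=4))   # CPA $15 -> 'scale'
print(review_ad(spend=50.0, conversions=1, days_running=4))   # CPA $50 -> 'kill'
```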
Once you've identified a winning creative concept (e.g., the "Testimonial" ad set is outperforming all others) and a winning ad within that set, you can then move that winner into a separate "scaling" campaign. This campaign can use Campaign Budget Optimisation (CBO) and a much larger budget, because you're no longer testing; you're scaling something you know works. This separation of testing and scaling is absolutely vital for long-term success.
This is the main advice I have for you:
Okay, that was a lot of theory. Let's make it practical. If I were in your shoes, here is the exact plan I would follow today to fix your campaign and start getting reliable results. I've detailed my main recommendations for you below:
| Step | Action | Why You Should Do This |
|---|---|---|
| 1 | Pause Your Current Campaign Immediately | Stop wasting budget on a flawed structure. Every dollar spent now is based on unreliable data and is preventing you from finding your true best performers. |
| 2 | Group Your 16 Ads into 4-5 "Concepts" | Analyse your existing ads. Group them by their core message or angle (e.g., testimonials, feature focus, problem/solution). This forms the basis of your new test structure. |
| 3 | Build a New ABO Testing Campaign | Create a completely new campaign. Set the budget at the Ad Set Level (ABO), not the Campaign Level (CBO). This is non-negotiable for clean testing. |
| 4 | Create One Ad Set Per Concept | Create 4-5 ad sets inside your new campaign, one for each concept you identified in Step 2. Put the 3-4 relevant ad variations inside each corresponding ad set. Set a budget of ~$30-$40 per day for each ad set. |
| 5 | Let It Run & Analyse with Rules | Let the new campaign run for 3-5 days without touching it. Then, analyse performance based on Cost Per Conversion. Turn off underperforming ads/ad sets that have spent at least 1.5x your target CPA with poor results. |
| 6 | Scale Winners in a Separate CBO Campaign | Once you have a clear winning ad and concept, duplicate it into a new "Scaling" campaign that uses CBO. You can allocate a much larger budget to this campaign, as you're now feeding the algorithm proven performers. |
This process is how professional media buyers build and scale accounts. It's disciplined, data-driven, and it removes the guesswork and randomness that's currently hurting your performance. It takes a bit more effort to set up, but the payoff in terms of efficiency and scalability is enormous.
Running ads effectively isn't just about making nice creatives; it's about building a machine that can reliably test those creatives, identify winners, and then pour fuel on the fire. This methodical approach is the difference between hoping for results and engineering them. You might find that with a proper structure, your cost per lead drops significantly, and you can scale your budget far beyond $300/day while remaining profitable.
If you'd like an expert pair of eyes to look over your account and help you implement this kind of structure, we offer a completely free, no-obligation initial consultation. We can jump on a call, share screens, and I can give you some specific feedback on your ads and audiences. It's often the quickest way to spot opportunities and get things moving in the right direction.
Hope this helps!
Regards,
Team @ Lukas Holschuh