Hi there,
Thanks for reaching out!
Happy to give you some initial thoughts on the issue you're having with your ads. It's a really common problem, and honestly, the solution isn't what most people think. It's not a technical glitch or a bug you need to fix. It's the Meta algorithm giving you a massive, flashing signal about your ads and your campaign structure. Most people try to fight the algorithm, but the trick is to understand what it's telling you and use that to your advantage.
So, let's get into why only one of your ads is getting any love from Meta, and more importantly, how you can build a proper testing system that actually gives you winners instead of just burning your budget.
TL;DR:
- Your problem isn't a bug; the Meta algorithm is deliberately choosing what it thinks is the 'best' ad based on very early data and starving the others, because it's trying to maximise results for your campaign objective.
- Putting multiple ads in one ad set with a small budget isn't a real test. It's a lopsided fight where one ad gets all the budget before the others have a chance. You need to change your campaign structure.
- The best way to find winning ads is through methodical testing. This means isolating variables (testing one thing at a time, like the image or the headline) and using a campaign structure that forces the budget to be spent evenly across your test variants.
- The most important piece of advice is to stop focusing on impressions and start focusing on the metrics that actually matter: cost per lead, cost per sale, and ultimately, how much you can afford to spend to acquire a customer. This changes everything.
- This letter includes a detailed breakdown of better campaign structures, ad copy frameworks, and a simple calculator to help you figure out your target customer acquisition cost based on customer lifetime value.
We'll need to look at why Facebook's ad delivery is a ruthless (but smart) auction...
Right, first thing we need to get straight. Just because an ad is marked 'active' doesn't mean Meta has any obligation to show it to people. Thinking it does is probably the most common and costly mistake I see people make. You're not paying for a guaranteed spot in a newspaper; you're entering a brutal, real-time auction against millions of other advertisers for a finite amount of space in people's feeds.
When you launch your ad set with those three ads, the algorithm doesn't treat them equally. It's not a fair race where they all get a chance to run a few miles. It's more like a split-second audition. It shows each ad to a tiny handful of people, a micro-audience, just to see what happens. It looks at who stops scrolling, who clicks, who even hovers for a second longer. Based on this tiny, initial sliver of data, it makes a prediction. It says, "Okay, based on these first 50 impressions, Ad A looks like it's going to get me the best results for the objective this advertiser set (whether that's clicks, leads, or whatever). So, to be efficient and not waste their money, I'm going to put the entire £30 daily budget behind Ad A."
The other two ads? They get sidelined. The algorithm has decided they're the weaker contenders and, in its cold, robotic logic, it's actually trying to help you by not spending your money on what it predicts will be losers. The problem is, its initial prediction can sometimes be wrong, especially on a small budget. But it's doing exactly what it was designed to do: maximise results ruthlessly and quickly.
This is why you see one ad get all the impressions while the others sit at zero. It's not broken. It's the system working as intended. The real issue is that your current setup doesn't allow for a proper test. You've asked the algorithm to pick a winner, and it has. Your job isn't to force the other two ads to run; it's to create a structure where you can learn which ad is *actually* the best, based on real data, not a premature guess.
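If you want an intuition for how shaky that premature guess can be, here's a toy simulation in Python. To be clear, this is not Meta's real delivery system, and the click rates are invented; it just shows how often a 50-impression 'audition' backs the wrong ad when the true differences between creatives are small.

```python
import random

# Toy model only: NOT Meta's real delivery algorithm. It simply shows why a
# decision made after ~50 impressions per ad is unreliable when the ads'
# true click rates are close together (the rates below are made up).
TRUE_CTR = {"Ad A": 0.020, "Ad B": 0.025, "Ad C": 0.022}  # Ad B is genuinely best
AUDITION_IMPRESSIONS = 50
TRIALS = 10_000

def pick_winner_after_audition() -> str:
    """Give each ad a tiny audition, then back whichever looked best."""
    observed = {}
    for ad, ctr in TRUE_CTR.items():
        clicks = sum(random.random() < ctr for _ in range(AUDITION_IMPRESSIONS))
        observed[ad] = clicks / AUDITION_IMPRESSIONS
    return max(observed, key=observed.get)

wins = sum(pick_winner_after_audition() == "Ad B" for _ in range(TRIALS))
print(f"Truly-best ad backed in only {wins / TRIALS:.0%} of simulated auditions")
```

With numbers that close and a sample that small, the early 'winner' is mostly noise, which is exactly why you don't want a 50-impression audition making the call for you.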
I'd say you need to rethink your entire testing framework...
So, how do we fix this? We stop letting the algorithm make premature decisions for us. We need to create a testing environment where each ad gets a fair shot. Your current setup, with three ads in one ad set on a £30 budget, hands the budget decision entirely to Meta, much like its Dynamic Creative feature does. That kind of setup is built for budget consolidation, not for rigorous testing.
To conduct a true, clean test, you must isolate the variables. The most important variable you need to control at the start is the budget itself. You need to force Meta to spend money on each ad you want to test. The best way to do this is by changing your campaign structure from one ad set to multiple.
There are two main ways to set up budgets: Campaign Budget Optimisation (CBO, now labelled 'Advantage campaign budget' in Ads Manager) and Ad Set Budget Optimisation (ABO). With CBO, you set the budget at the campaign level, and Meta decides which ad set (and which ad within it) gets the money. This is powerful for scaling winning campaigns, but it's terrible for testing because you'll run into the exact same problem you have now, just at the ad set level. For testing, you want to use ABO.
With Ad Set Budget Optimisation (ABO), you set a specific daily or lifetime budget for each individual ad set. This gives you granular control and is perfect for testing. Here's what a proper testing structure would look like for your situation:
Campaign: [Your Test Objective - e.g., "Lead Gen Test"] - Budgeting: ABO (budgets set at ad set level)
- Ad Set 1: [Audience Name] - [Creative Angle 1] - Budget: £10/day
  - Ad 1: [The actual ad for Creative Angle 1]
- Ad Set 2: [Audience Name] - [Creative Angle 2] - Budget: £10/day
  - Ad 2: [The actual ad for Creative Angle 2]
- Ad Set 3: [Audience Name] - [Creative Angle 3] - Budget: £10/day
  - Ad 3: [The actual ad for Creative Angle 3]
Look at the difference. In this structure, you're telling Meta, "You have no choice. You *will* spend £10 today trying to make Ad 1 work, £10 on Ad 2, and £10 on Ad 3." Now, all three ads are forced to get impressions. They're all forced to gather data. After a few days, you can look at the results and make a human, strategic decision about which one is the actual winner, based on the metrics that matter to your business, not just the early performance signals Meta was guessing with.
This method, putting one ad in each ad set, is the purest way to test. Each ad gets its own guaranteed budget and is judged on its own merit, rather than being starved by a sibling. It's a bit more work to set up, I'll grant you, but it's the foundation of any successful paid advertising campaign I've ever run. You have to build a reliable testing machine before you can even think about scaling.

It's one of those things that seems small but makes all the difference. I've seen clients come to us after spending thousands on ads with no idea what worked because their account was a mess of ad sets with dozens of ads fighting each other. We spend the first week just rebuilding the structure, and that alone often improves results before we've even launched a new creative.
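For what it's worth, if you ever build this structure by script rather than clicking it together in Ads Manager, it looks roughly like the sketch below using Meta's official facebook_business Python SDK. Treat it as a sketch only: the objective and optimisation values, the placeholder token, account ID and targeting are assumptions you'd want to check against the current Marketing API docs.

```python
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

# Rough sketch only: field values below are assumptions to verify against the
# current Marketing API docs. <ACCESS_TOKEN> and act_<AD_ACCOUNT_ID> are placeholders.
FacebookAdsApi.init(access_token="<ACCESS_TOKEN>")
account = AdAccount("act_<AD_ACCOUNT_ID>")

# One campaign with NO campaign-level budget: that's what makes it ABO.
campaign = account.create_campaign(params={
    "name": "Lead Gen Test",
    "objective": "OUTCOME_LEADS",
    "special_ad_categories": [],
    "status": "PAUSED",
})

# One ad set per creative angle, each with its own forced £10/day budget.
for angle in ["Creative Angle 1", "Creative Angle 2", "Creative Angle 3"]:
    account.create_ad_set(params={
        "name": f"[Audience Name] - {angle}",
        "campaign_id": campaign["id"],
        "daily_budget": 1000,  # minor currency units, i.e. £10.00
        "billing_event": "IMPRESSIONS",
        "optimization_goal": "LEAD_GENERATION",
        # NOTE: lead objectives usually also need a promoted_object (your Page ID).
        "targeting": {"geo_locations": {"countries": ["GB"]}},
        "status": "PAUSED",
    })
    # Each ad set would then get exactly one ad (via create_ad), so its £10/day
    # can only ever be spent on that single creative.
```

The point of the script is the same as the point of the diagram: the budget lives at the ad set level, and each ad set holds exactly one creative.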
You probably should define what you're actually testing...
Okay, so we've fixed the structure. Now each of your three ads is getting a fair slice of the budget. But what are these ads actually testing? This is the next level of thinking that separates amateurs from pros. If your three ads are all completely different (different images, different headlines, different body copy, different calls to action), then what are you learning when one of them wins?
You might know that 'Ad B' was the winner, but you have no idea *why*. Was it the image? The headline? The offer? You've learned nothing you can apply to your next round of ads. It's a dead end.
A professional testing methodology is about isolating one single variable at a time. You form a hypothesis and you test it. For example:
- Hypothesis: "I believe a picture of a person using my product will perform better than a simple product shot."
  Test: Create two ads that are IDENTICAL in every way (same headline, same copy, same audience), except for the image. One has the person, one has the product shot. Now when one wins, you've learned something tangible about your audience.
- Hypothesis: "I believe a headline that asks a question will get more engagement than a headline that makes a statement."
  Test: Two identical ads, but you change only the headline.
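If it helps to see the 'one variable per test' rule written down, here's a trivial sketch in plain Python. The field names are purely illustrative, not anything Meta requires; the point is simply that every variant is a copy of the control with exactly one thing changed.

```python
from copy import deepcopy

# Hypothetical ad definition: field names and values are for illustration only.
control_ad = {
    "image": "product_shot.jpg",
    "headline": "Know your cash flow in 30 seconds",
    "body": "Stop wrestling with spreadsheets every weekend.",
    "call_to_action": "Start free trial",
}

def make_variant(control: dict, field: str, new_value: str) -> dict:
    """Copy the control ad and change exactly one field (one variable per test)."""
    variant = deepcopy(control)
    variant[field] = new_value
    return variant

# Test 1: image only (person vs product shot); everything else identical.
image_test = [control_ad, make_variant(control_ad, "image", "person_using_product.jpg")]

# Test 2: headline only (question vs statement); run as a separate test.
headline_test = [control_ad,
                 make_variant(control_ad, "headline",
                              "Still guessing what your cash flow is?")]
```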
This is how you generate real insights. You need to stop thinking about ads and start thinking about your customer. Who are they, really? What is the single biggest, most frustrating problem in their life or work that your product solves? This is what I call their 'Nightmare Scenario'. Your ad's only job is to show up in their feed and speak directly to that nightmare. Forget demographics like "women aged 25-34 who like yoga". That's useless. Think about the *pain*. The Head of Sales who is terrified of missing his quarterly target. The new mum who hasn't had a full night's sleep in six months. The SaaS founder who watches his AWS bill creep up every month without knowing why.
Once you know the pain, you can write copy that resonates. I often use a framework called 'Before-After-Bridge'. It's simple but powerful.
The 'Before-After-Bridge' Copywriting Framework
| Framework Step | Generic (Bad) Copy | Before-After-Bridge (Good) Copy |
|---|---|---|
| Before: Describe their current pain. | "Our accounting software is private and secure. It helps businesses manage their finances. Request a demo today." | "Another weekend spent wrestling with spreadsheets? You're chasing invoices instead of growing your business, and you have no clear picture of your cash flow." |
| After: Paint a picture of the dream outcome. | | "Imagine knowing your exact financial position in 30 seconds. Invoices are paid automatically, and you have the confidence to make big decisions because your data is crystal clear." |
| Bridge: Show how your product gets them there. | | "Our platform is the bridge. It automates your bookkeeping and gives you the insights you need. Start a free 30-day trial and get your weekends back." |
Look at the difference. The 'bad' copy talks about the product. The 'good' copy talks about the customer's life. This is what you should be testing. Not just random images, but different ways of articulating their pain and your solution. I've worked on campaigns for B2B software where simply reframing the message like this, from features to benefits, has reduced the cost per trial by over 50%. We took one client selling to medical recruiters from a £100 Cost Per User Acquisition down to just £7 by focusing their ads on the nightmare of finding qualified candidates, rather than the features of their platform.
You'll need to understand what 'winning' even means...
This brings me to the most important point of all. Let's say you've set up your ABO test campaign. You're testing three different headlines. After three days, you have your data. How do you decide which ad is the 'winner'?
Most people will look at the vanity metrics. They'll look at Click-Through Rate (CTR) or Cost Per Click (CPC). Ad A might have a great CTR of 3% and a low CPC of £0.50. Ad B has a lower CTR of 1.5% and a higher CPC of £1.00. The obvious winner is Ad A, right? Wrong. This is a trap.
What if Ad A brought 100 clicks to your website, but only one of them signed up for your service? Your Cost Per Lead (CPL) is £50. What if Ad B only brought 50 clicks, but 5 of them signed up? Your CPL is only £10. Ad B, despite looking worse on the surface, is actually 5 times more effective at achieving your business goal. You should turn off Ad A immediately and put all the budget behind Ad B.
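If it helps to see the arithmetic laid out, here's that exact comparison in a few lines of Python: spend is just clicks times CPC, and CPL is spend divided by leads.

```python
def cost_per_lead(clicks: int, cpc: float, leads: int) -> float:
    """Spend = clicks x CPC; CPL = spend / leads."""
    return (clicks * cpc) / leads

# The example above: Ad A "looks" better on CTR/CPC, but Ad B wins where it matters.
ad_a = cost_per_lead(clicks=100, cpc=0.50, leads=1)  # £50 spend -> £50 per lead
ad_b = cost_per_lead(clicks=50, cpc=1.00, leads=5)   # £50 spend -> £10 per lead
print(f"Ad A CPL: £{ad_a:.2f}, Ad B CPL: £{ad_b:.2f}")
```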
You have to optimise for the metric that happens closest to the money. If you're selling a product, that's Return On Ad Spend (ROAS). If you're generating leads, it's Cost Per Lead (CPL). Everything else is just noise. This is also why you must have your conversion tracking set up perfectly. If you don't know how many leads or sales each specific ad is generating, you're flying completely blind.
But we can go even deeper. What's a 'good' CPL? Is a £10 lead good and a £50 lead bad? The answer is: it depends. The only way to know is to understand your Customer Lifetime Value (LTV). LTV is the total profit you can expect to make from a single customer over the entire course of their relationship with your business. Once you know this number, everything else falls into place.
Let's do some quick maths. Say you run a subscription box service.
- Average monthly subscription price: £30
- Your gross margin (profit after cost of goods): 60%
- Average number of months a customer stays subscribed: 8 months
Your LTV is (£30 * 0.60) * 8 = £144. Each customer you acquire is worth £144 in pure profit to your business. A common rule of thumb is to maintain a 3:1 LTV to Customer Acquisition Cost (CAC) ratio. This means you can afford to spend up to £144 / 3 = £48 to acquire a new customer. Now you have your target CAC.
If your website converts 10% of leads into customers, you can afford to pay up to £48 * 0.10 = £4.80 per lead. Suddenly, that £10 CPL from Ad B doesn't look so great anymore. It's unprofitable. You see how this changes your entire perspective? It frees you from chasing cheap clicks and allows you to make intelligent, data-driven decisions about your ad spend.
I've built a small calculator here so you can play with your own numbers. This is the single most important calculation for any business using paid ads.
LTV & Target CPA Calculator
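If it's easier to play with the numbers as a script, the whole calculation is only a few lines. The figures below are just the subscription-box example from above, and the function and parameter names are mine, so swap in your own numbers.

```python
def ltv_and_targets(monthly_price: float, gross_margin: float,
                    avg_months_retained: float, lead_to_customer_rate: float,
                    ltv_to_cac_ratio: float = 3.0) -> dict:
    """LTV, target CAC (LTV / ratio) and target CPL (CAC x lead-to-customer rate)."""
    ltv = monthly_price * gross_margin * avg_months_retained
    target_cac = ltv / ltv_to_cac_ratio
    target_cpl = target_cac * lead_to_customer_rate
    return {"LTV": round(ltv, 2), "Target CAC": round(target_cac, 2),
            "Target CPL": round(target_cpl, 2)}

# Subscription-box example: £30/month, 60% margin, 8 months, 10% of leads become customers.
print(ltv_and_targets(30, 0.60, 8, 0.10))
# {'LTV': 144.0, 'Target CAC': 48.0, 'Target CPL': 4.8}
```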
And finally, the offer is probably your biggest problem...
I've left the most important part for last. You can have the best ad creative in the world, the most perfect targeting, and a flawless campaign structure, but if your offer is weak, you will fail. Full stop. The number one reason I see campaigns fail is a bad offer.
What is a bad offer? It's anything that is high-friction and low-value for the prospect. The classic example in B2B is the "Request a Demo" button. It's the most arrogant call to action in marketing. It asks a busy, important person to give up 30-60 minutes of their time to be sold to. It offers them zero immediate value and requires a huge commitment. No wonder conversion rates are terrible.
Your offer's only job is to provide a moment of undeniable value. An "aha!" moment that makes the prospect sell themselves on your solution. You must solve a small, real problem for them for free to earn the right to solve the whole thing later.
What does a good offer look like?
- For a SaaS company: A free trial (no credit card). A freemium plan. Let them use the actual product and see the value for themselves. We helped one SaaS client generate 1535 trials on Meta simply by making their trial completely frictionless.
- For a service business/agency: A free, automated tool. A 'Website SEO Audit' that shows their top 3 keyword opportunities. A 'Data Health Check' that flags issues in their database. For us, it's a free 20-minute strategy session where we audit a failing ad campaign. It provides immense value upfront.
- For an eCommerce store: A compelling discount on the first purchase (e.g., 20% off). A free gift with the first order. Free shipping is table stakes now; you need more than that. I remember one store launch where we generated 1500 leads at just $0.29 each by offering an entry into a prize draw for early subscribers.
Think hard about your offer. Is it genuinely valuable to your ideal customer? Is it low-friction? Does it solve a small piece of their nightmare scenario, right now? If not, no amount of ad tinkering will save you. A better offer will do more for your results than a better headline ever will. It's the engine of the entire system. Without a powerful engine, you're not going anywhere.
This is the main advice I have for you:
I know this is a lot to take in, especially when you started with what seemed like a simple question. But as you can see, the problem of one ad getting all the impressions is just the tip of the iceberg. It's a symptom of a deeper need for a more strategic and methodical approach. I've broken down my main recommendations into a table for you to use as a checklist.
| Problem Area | Your Current Situation (The Symptom) | The Underlying Cause | Recommended Solution | Your First Actionable Step |
|---|---|---|---|---|
| Campaign Structure | Only one of three ads gets impressions. | A single ad set with multiple ads allows the algorithm to pick a premature 'winner' and ignore the rest. This isn't a valid test. | Switch to an Ad Set Budget Optimisation (ABO) campaign. Create a separate ad set for each ad creative you want to test. | Duplicate your existing ad set twice. In each of the three ad sets, pause two of the ads, so only one unique ad is active in each. Assign a third of your total budget to each ad set. |
| Testing Methodology | Unsure why one ad 'wins' and can't replicate success. | Testing too many variables at once (different images, headlines, copy) means you don't learn anything specific from the results. | Isolate one variable per test. Form a clear hypothesis (e.g., "This headline will beat that headline") and test only that single element. | Decide on ONE thing to test next. For example, find your best-performing ad so far and create two new versions that only change the headline. |
| Performance Metrics | Focusing on surface-level metrics like impressions or clicks. | Optimising for vanity metrics (CTR, CPC) often leads to high traffic but few actual conversions, wasting your budget. | Define your business goal (Lead, Sale, etc.) and optimise for that. Calculate your LTV to find your target Cost Per Acquisition (CPA). | Use the interactive calculator above to get a rough estimate of your LTV and target CPA. Make 'Cost Per Result' your primary success metric in Ads Manager. |
| The Offer | Low conversion rate on your landing page. | Your Call to Action is likely high-friction (e.g., "Buy Now", "Request Demo") and offers little immediate value to the prospect. | Develop a low-friction, high-value offer that solves a small problem for your customer instantly (e.g., free trial, checklist, automated audit). | Brainstorm three potential 'free' offers you could create that would be genuinely helpful for your ideal customer, even if they never buy from you. |
Getting this stuff right is what makes the difference between ads that are a cost centre and ads that are a predictable growth engine for your business. It takes time and it takes a lot of testing, but by following a methodical process like the one I've outlined, you stop gambling and start investing intelligently.
This is obviously a complex process, and it can feel overwhelming. You have to be a strategist, a copywriter, a data analyst, and a technician all at once. Avoiding the common pitfalls and speeding up the learning curve is exactly where professional help can make a huge difference. We've run hundreds of these tests across dozens of industries, from B2B SaaS to eCommerce, and that experience allows us to identify the highest-leverage opportunities in an account much faster.
If you'd like to go through your specific situation in more detail, we offer a completely free, no-obligation 20-minute strategy call where we can take a look at your campaigns together and give you some more tailored advice.
Hope that helps!
Regards,
Team @ Lukas Holschuh