Hi there,
Thanks for reaching out! It’s a really common and frustrating situation to be in, seeing the platform spend your hard-earned cash on what looks like the weaker ad. It feels completely counterintuitive, doesn't it?
The short answer is that Facebook’s algorithm is often playing a longer game than we are, and what looks like a mistake is actually part of its process. But to really get to the bottom of it and fix it for good, we need to look a bit deeper into how the algorithm 'thinks' and how you should be structuring your tests to get clear, reliable results. I’m happy to give you some initial thoughts and guidance on this.
TL;DR
- Facebook isn't 'wasting' money; its algorithm is in an 'exploration' or 'learning' phase, testing which ad will deliver the best long-term results, not just looking at yesterday's conversions. It's predicting future performance based on thousands of signals.
- Your current testing setup (one ad set, two nearly identical ads) is unreliable. You need a structured approach that tests significant variables (like offers, angles, or audiences) rather than minor media changes.
- Stop focusing only on conversion count. You need to calculate your Customer Lifetime Value (LTV) to understand the maximum you can afford to pay for a customer (your CAC). This is the key metric for scaling profitably. Use our LTV calculator inside to find your number.
- The real issue often isn't the algorithm, it's the creative and the offer. A weak message to the wrong audience won't convert, no matter how the budget is spent.
- You must give the algorithm enough time and data before making decisions. Turning off an ad too early based on one or two days of data is one of the most common ways to kill a potentially winning campaign.
We'll need to look at why Facebook is 'wasting' your money... which it probably isn't
Okay, let's tackle the main frustration head-on. When you see your budget flowing to the ad with fewer purchases, the immediate reaction is that the system is broken. You've given it one job – find purchases – and it seems to be failing. But this is one of the biggest myths in paid advertising.
You have to understand that Meta's algorithm is not a simple machine that just rewards past performance. It's a predictive engine: it weighs thousands of signals to estimate which ad, shown to which person, at which time, is most likely to lead to a conversion in the future at the lowest cost. The 'learning phase' is this exact process in action.
When you launch a new ad set, the algorithm is in full exploration mode. It doesn't have much data yet, so it has to test. It might send more budget to Ad B (the one with fewer conversions so far) for a number of reasons:
- Audience Pockets: It might have found a small, cheaper 'pocket' of users within your target audience who respond to Ad B. Even if they haven't converted yet, the algorithm might see positive early signals – like high click-through rates, low cost-per-click, or long landing page dwell times – and predict that this pocket will eventually convert more cheaply than the audience responding to Ad A.
- Creative Fatigue Prediction: The algorithm might predict that Ad A, while getting early wins, has characteristics that will lead to it fatiguing faster. Maybe its imagery is more aggressive or its copy is very direct, which burns out quickly. It might see Ad B as having more long-term potential for stable performance.
- Auction Competitiveness: The ad auction is dynamic. It could be that the specific users who are most likely to respond to Ad A are more expensive to reach right now because of high competition. The algorithm might be pushing spend to Ad B to reach a less competitive, and therefore cheaper, segment of the audience, trying to balance out your overall cost per purchase in the long run.
Think of it like a football manager. In the first 10 minutes of a match, Striker A scores a goal. But the manager notices that Striker B, who hasn't scored yet, is making better runs, pulling defenders out of position, and creating more chances for the team. A novice would shout, "Keep passing to Striker A!" But the experienced manager knows that sticking with Striker B might win the entire game, not just score one early goal. The Facebook algorithm is that experienced manager. It's looking at all the data points, not just the one obvious 'goal'.
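Meta doesn't publish its delivery algorithm, but its exploration behaviour is often described as working like a multi-armed bandit. If it helps to see the idea concretely, here's a toy Thompson-sampling sketch in Python. This is an assumption about the general class of algorithm, not Meta's actual code, and all numbers are made up; it simply shows why an ad with fewer conversions so far can still win a large share of the budget:

```python
import random

# Two ads with made-up early results: Ad A is the apparent 'winner',
# Ad B has fewer conversions but also far less data behind it.
ads = {
    "Ad A": {"conversions": 3, "impressions": 400},
    "Ad B": {"conversions": 1, "impressions": 150},
}

def pick_ad():
    # Sample a plausible conversion rate for each ad from a Beta
    # distribution. Uncertainty (fewer impressions) widens the samples,
    # so the 'losing' ad still wins plenty of draws, i.e. budget.
    samples = {
        name: random.betavariate(1 + s["conversions"],
                                 1 + s["impressions"] - s["conversions"])
        for name, s in ads.items()
    }
    return max(samples, key=samples.get)

# Simulate 10,000 budget allocations and print the split.
picks = [pick_ad() for _ in range(10_000)]
for name in ads:
    print(name, picks.count(name) / len(picks))
```

Run it and you'll see the budget split between both ads rather than piling onto the early leader: with this little data, the system genuinely cannot tell which ad is better yet, so it keeps exploring.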
I'd say you need to rethink your testing structure...
The bigger issue here, to be brutally honest, isn't the algorithm's choice; it's the test you've set up. Testing two nearly identical ads ("small media change") within the same ad set is one of the least effective ways to find a winner. This is a common mistake and it's holding your account back.
Why is it a bad test? Because you're not testing a meaningful hypothesis. If the ads are too similar, any difference in performance is likely down to random chance and the whims of the algorithm, rather than one ad being genuinely better. You don't learn anything actionable. Did the blue background really outperform the slightly different blue background? Probably not. It's just statistical noise.
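To put a number on "statistical noise", here's a quick back-of-the-envelope check: a plain two-proportion z-test in Python, with made-up conversion figures, showing that the kind of gap you'd see between two near-identical ads is usually indistinguishable from chance:

```python
from math import sqrt, erf

def z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Two-proportion z-test: is the gap between two conversion rates real?"""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 12 purchases from 800 clicks vs 8 from 750.
z, p = z_test(conv_a=12, clicks_a=800, conv_b=8, clicks_b=750)
print(f"z = {z:.2f}, p-value = {p:.2f}")  # ~0.76 and ~0.45: pure noise
```

A p-value around 0.45 means you'd need several times more data before you could call a winner between those two ads. Tests of genuinely different angles tend to separate far faster.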
A proper testing structure isolates one big variable at a time. You should be testing things that can cause a major shift in performance, like:
- The Offer: Free Shipping vs 10% Off.
- The Angle: Focusing on the product's luxury quality vs its practical durability.
- The Creative Concept: A user-generated style video vs a polished studio product shot.
- The Audience: A Lookalike audience of past purchasers vs a broad interest-based audience.
Instead of putting two similar ads in one ad set, you should have dedicated campaigns for each stage of the funnel. This is how we structure accounts for clients, from eCommerce brands to B2B SaaS companies. It gives you clarity and control.
| Campaign (Funnel Stage) | Objective | Typical Audiences to Test | Main Goal |
|---|---|---|---|
| ToFu (Top of Funnel) - Prospecting | Conversions (Purchases) | Lookalikes of Purchasers; Lookalikes of Highest-Value Customers; Broad Interest/Behaviour Targeting | Find new customers who've never heard of you. This is where you test your big creative ideas. |
| MoFu (Middle of Funnel) - Retargeting | Conversions (Purchases) | Website Visitors (last 30 days); Video Viewers (50%+); Social Media Engagers | Bring back people who showed interest but didn't buy. Show them testimonials or different product benefits. |
| BoFu (Bottom of Funnel) - Retargeting | Conversions (Purchases) or Catalog Sales | Added to Cart (last 7 days); Initiated Checkout (last 7 days); Viewed Specific Products (Dynamic Ads) | Close the deal with people who were on the verge of buying. Offer a small incentive or remind them what they left behind. |
Within your ToFu Prospecting campaign, you would use Campaign Budget Optimisation (CBO). You'd create multiple ad sets, each targeting a different audience (e.g., Ad Set 1: Lookalike Purchasers, Ad Set 2: Interest - "Handcrafted Jewelry"). Then, within each ad set, you place 2-3 ads that are significantly different from each other (e.g., Ad 1: Video Testimonial, Ad 2: Carousel of best-sellers, Ad 3: Static image with a bold offer). CBO will then automatically shift budget not just between the ads, but between the *audiences* that are performing best. This is a much more powerful and informative way to test.
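To make that hierarchy explicit, here's the same structure sketched as plain data in Python. The names and audiences are hypothetical and this is not the Meta Marketing API; it's just a way to see the campaign → ad set → ad nesting at a glance:

```python
# Hypothetical sketch of the ToFu test structure described above.
tofu_campaign = {
    "name": "ToFu - Prospecting",
    "budget": "CBO",  # one budget at campaign level, shifted between ad sets
    "ad_sets": [
        {
            "audience": "Lookalike - Purchasers",
            "ads": [
                "Video testimonial",
                "Carousel of best-sellers",
                "Static image with bold offer",
            ],
        },
        {
            "audience": "Interest - Handcrafted Jewelry",
            "ads": [
                "Video testimonial",
                "Carousel of best-sellers",
                "Static image with bold offer",
            ],
        },
    ],
}
```

Note that the same three genuinely different creatives run in every ad set, so the test tells you two things at once: which audience is cheapest to convert, and which creative concept wins within it.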
You probably should focus on the bigger picture...
Let's take a step back. The question you're asking is about a single ad set. But the question you *should* be asking is "How much can I afford to pay for a customer and still be wildly profitable?". The answer to that changes everything.
Most advertisers are trapped in a cycle of chasing a lower and lower Cost Per Purchase (CPP). They see a £15 CPP and panic. But what if I told you that each customer you acquire is actually worth £300 to your business over their lifetime? Suddenly, paying £15, £20, or even £50 to acquire that customer looks like an incredible bargain. This is the difference between thinking like an ad manager and thinking like a business owner.
You need to calculate your Customer Lifetime Value (LTV). It's the most important number in your business. It tells you how much you can really afford to spend on acquisition (your Customer Acquisition Cost, or CAC). A healthy business model typically aims for an LTV:CAC ratio of at least 3:1.
Let's break down the maths. It's simpler than it looks.
- Average Revenue Per Account (ARPA): How much revenue does a typical customer generate per month (or year, if that's easier)?
- Gross Margin %: What's your profit margin on that revenue after accounting for the cost of goods sold?
- Monthly Churn Rate: What percentage of customers do you lose each month? (If you don't track churn directly, estimate it from the average customer lifetime: a 24-month lifetime implies 1/24 ≈ 4.17% monthly churn.)
The formula is: LTV = (ARPA * Gross Margin %) / Monthly Churn Rate
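Here's that formula as a small Python sketch, with a target CPA derived from the 3:1 LTV:CAC rule of thumb. All the input numbers are illustrative; swap in your own:

```python
def ltv(arpa, gross_margin, monthly_churn):
    """Customer Lifetime Value = (ARPA * Gross Margin %) / Monthly Churn Rate."""
    return (arpa * gross_margin) / monthly_churn

monthly_arpa = 30.0      # £30 revenue per customer per month (illustrative)
gross_margin = 0.65      # 65% margin after cost of goods
monthly_churn = 1 / 24   # 24-month average lifetime ~= 4.17% monthly churn

value = ltv(monthly_arpa, gross_margin, monthly_churn)
target_cpa = value / 3   # keep acquisition cost at no more than a third of LTV

print(f"LTV: £{value:.0f}")           # £468
print(f"Target CPA: £{target_cpa:.0f}")  # £156
```

With these example inputs, a £30/month customer at 65% margin and a 24-month lifetime is worth £468, so you could pay up to roughly £156 to acquire one and still hold a 3:1 ratio.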
Once you know your LTV, you know your ceiling for ad spend. I remember one eCommerce client selling subscription boxes. They were fixated on their £20 cost per new subscriber. We did this calculation and found their LTV was over £250. This gave them the confidence to scale their ad spend aggressively, knowing they could afford a CPA of up to £80 and still be profitable. The result? We hit a 1000% Return On Ad Spend because we weren't afraid to spend what was necessary to acquire high-value customers.
Use this calculator to get a rough idea of your own numbers. It might just change your perspective on what a 'good' cost per purchase really is.
You'll need a message they can't ignore...
So we've established the algorithm is trying its best and that you need a better testing structure. But here's the final piece of the puzzle, and it's the most important one: none of this matters if your ads are rubbish.
Often, when an advertiser sees budget being 'wasted', it's because both ads are underperforming, and the algorithm is just picking the lesser of two evils. The solution isn't to force budget to the ad with one conversion; it's to write an ad that's so good it starts getting conversions consistently.
Your ad needs to speak directly to the pain point of your ideal customer. You're not selling a product; you're selling a solution to a problem, a transformation from a "before" state to a desired "after" state. For eCommerce, this is crucial. You don't sell a handcrafted necklace; you sell the feeling of confidence and the compliments a person will receive when they wear it.
We use a simple but powerful framework for this called Before-After-Bridge.
- Before: Describe the customer's current, frustrating reality. What problem do they have?
- After: Paint a picture of their ideal future, where that problem is solved.
- Bridge: Position your product as the bridge that gets them from Before to After.
Here's an example for a fictional store selling ergonomic office chairs:
(Before) Another day ending with a sore back and stiff neck? That cheap office chair is costing you more than you think in productivity and comfort.
(After) Imagine finishing your workday feeling energised, focused, and completely pain-free, ready to enjoy your evening.
(Bridge) The ErgoFlex Pro is the bridge to that reality. Designed by physiotherapists, it provides dynamic lumbar support that adapts to your every move. Click to feel the difference.
This is what you need to be testing. Not a "small media change", but fundamentally different ways of communicating your value. A good test would be pitting an ad using the Before-After-Bridge framework against one using a different angle, like focusing on social proof (testimonials) or scarcity (limited edition).
| ❌ Poor Split Test (What you're doing) | ✅ Good Split Test (What you should do) |
|---|---|
| Two nearly identical ads in one ad set, differing only by a small media change. Any gap in results is mostly noise, so you learn nothing either way. | Two genuinely different angles, e.g. a Before-After-Bridge ad vs a social-proof (testimonial) ad. Whichever wins, you learn what actually moves your customers. |
So, what should you do now?
It's easy to get lost in the weeds with this stuff, tweaking individual ads and worrying about daily spend. But scaling a business with paid ads requires a more strategic, high-level approach. The good news is that you're asking the right questions, which is the first step.
The problem you described is a symptom of a few underlying issues that, once fixed, will make your advertising far more effective and less frustrating. It's not about forcing the algorithm to do what you want; it's about giving it the right inputs (strong creative, clear offers, well-structured tests) so that it can do its job properly.
I've detailed my main recommendations for you below:
| Recommendation | Why It Matters | Your First Action |
|---|---|---|
| Trust the Learning Phase | The algorithm needs time (at least 3-5 days) and data (Meta's guideline is roughly 50 conversions per ad set within a seven-day window) to optimise. Making changes too early resets the learning phase, which hurts performance and prevents you from ever getting stable results. | Leave your current ad set running for another 3 days without touching it. Observe what happens to the cost per purchase over a longer period. |
| Build a Proper Testing Structure | Testing tiny variables gives you no actionable learnings. You need to test significant differences in angles, offers, and creative to understand what truly drives your customers to buy. A muddled setup produces muddled data. | Plan your next campaign. Create one CBO campaign with two ad sets targeting two different audiences. In each ad set, create two ads with completely different copy and images. |
| Calculate Your LTV | You can't know if your ad performance is 'good' or 'bad' without knowing how much a customer is worth. This is the single most important metric for making smart decisions about scaling your ad spend. | Use the calculator in this letter to get an estimate of your LTV. Then, set a target Cost Per Acquisition (CPA) that is no more than 1/3 of your LTV. |
| Focus on Your Message | The algorithm can only work with what you give it. World-class targeting can't save a weak or confusing message. A powerful ad addresses a customer's pain point directly and offers a clear solution. | Take your best-selling product and write two new ad headlines for it: one using the "Before-After-Bridge" framework and one featuring your best customer testimonial. |
I know this is a lot to take in. Moving from a tactical "ad-by-ad" view to a strategic, full-funnel approach is a big shift, but it's the only way to get consistent, scalable results from platforms like Facebook. It requires expertise not just in clicking the right buttons in Ads Manager, but in understanding business metrics, customer psychology, and creative strategy.
This is where working with a specialist can make a huge difference. We spend all day, every day inside ad accounts across dozens of industries, from eCommerce to high-ticket B2B services. We've seen these patterns play out hundreds of times and have developed systems to test effectively, find winning formulas, and scale them profitably.
If you'd like to go through your ad account and strategy together on a call, we offer a completely free, no-obligation initial consultation. We can take a look at your specific setup, your numbers, and give you a clear, actionable plan to move forward. It's often the quickest way to get clarity and stop wasting time and money on tactics that don't work.
Regards,
Team @ Lukas Holschuh