Hi there,
Thanks for reaching out with your question. It's a classic one that catches a lot of people out, so I'm happy to share some initial thoughts and guidance. It sounds like you're wrestling with Meta's algorithm, and the good news is there's a much better way to handle this than guessing which ad to turn off. The real problem isn't which shirt colour is better, but how you're testing and measuring in the first place.
Let's get this sorted.
TL;DR:
- Stop focusing on likes. They're a vanity metric that tells you almost nothing about what actually drives sales. Your only goal is purchases and Return on Ad Spend (ROAS).
- Your current test is probably statistically insignificant. Spending just $100 is not nearly enough data for Meta's algorithm to make a smart decision, or for you to judge which ad is a true winner.
- Meta's budget optimisation can be lazy. It often picks an early, random 'winner' based on cheap clicks or engagement and starves other, potentially better ads. You need to control the test yourself.
- The core issue is your testing methodology. You need a proper structure to test creatives reliably, which involves forcing an equal budget across your ads to get real data.
- This letter includes an interactive calculator to help you understand what you can actually afford to pay for a customer, which is a far more important number than your cost per like.
We'll need to look at the real enemy here: vanity metrics
Right, first things first, we need to have a brutally honest chat about 'likes'. Likes don't pay your bills. They don't cover your stock costs, and they certainly don't pay your ad spend. In my experience, they are one of the most dangerous distractions in advertising. An ad full of likes with no sales is a failure. An ad with zero likes but a 5x return on ad spend is a massive success. You've got to be ruthless about this.
So, why is the ad with more likes getting less spend? Because Meta's algorithm isn't optimising for likes; it's chasing whatever result it can buy most cheaply right now. When an ad gets a lot of engagement early on (likes, comments, shares), the algorithm sees it as 'relevant' and may reward it with a lower CPM, but the budget still flows to whichever ad is delivering the cheapest version of the result the campaign is optimising for. And the people who are quick to 'like' a picture of a t-shirt are often not the same people who pull out their wallets to buy it. They're browsers, not buyers.
I've seen this countless times, especially with eCom clients. I remember one campaign we worked on for a women's apparel brand that saw a 691% return. Some of their highest-selling ads had very average engagement. Meanwhile, some beautiful lifestyle shots got loads of likes but sold next to nothing. The algorithm, if left to its own devices on a 'Brand Awareness' objective, would have poured money into the pretty pictures. But because we forced it to optimise for *purchases*, it learned to ignore the vanity metrics and focus on what made money. This is a critical distinction. You are paying Meta to find you customers, not fans.
I'd say you're making decisions blindfolded
Your second problem, and it's a big one, is data. Or rather, the lack of it. You've spent $100. In the world of paid ads, that's practically nothing. It's not enough for the algorithm to exit its 'learning phase' properly, and it's definitely not enough for you to declare one ad a dud and the other a potential winner.
Think of it like a coin toss. If you toss a coin 10 times and get 7 heads, you might think the coin is biased, but intuitively you know it's probably just random chance. If you tossed it 1,000 times and got 700 heads, you'd be pretty certain something's up. It's the same with ads. You need a large enough sample size (impressions, clicks and, most importantly, purchases) to make a statistically significant decision.
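If you want to see that intuition in numbers, here's a quick back-of-the-envelope check. It's plain Python using the textbook binomial formula, nothing Meta-specific, and the figures are just the coin-toss example from above:

```python
from math import comb

def prob_at_least(heads: int, tosses: int) -> float:
    """Probability of seeing at least `heads` heads in `tosses` fair coin tosses."""
    favourable = sum(comb(tosses, k) for k in range(heads, tosses + 1))
    return favourable / 2 ** tosses

# 7 heads out of 10 tosses happens by pure chance roughly 17% of the time,
# so it tells you almost nothing about whether the coin is biased.
print(f"P(>=7/10 heads)     = {prob_at_least(7, 10):.3f}")

# 700 heads out of 1,000 tosses is effectively impossible with a fair coin,
# so at that sample size you can be confident something really is up.
print(f"P(>=700/1000 heads) = {prob_at_least(700, 1000):.2e}")
```

A handful of results can easily be noise; the same ratio at scale is a real signal. Purchases from a $100 test sit much closer to the 10-toss end of that spectrum.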
Right now, Meta is favouring the ad that's getting cheaper results, which in this case might be cheap clicks or impressions, not sales. It found a pocket of people who are happy to click or look but not buy, and because that's cheaper, it's pouring the budget there. This is a classic Campaign Budget Optimisation (CBO) problem when you're in the early testing phase. CBO is great for scaling *proven* winners, but it's terrible for finding them in the first place because it doesn't give each ad a fair shot.
What you should be doing is using Ad Set Budget Optimisation (ABO). This lets you set a specific budget for each ad set (or in your case, you could put each ad in its own ad set). This forces Meta to spend your designated amount on each variation, regardless of its early performance. Yes, it might be less 'efficient' in the short term, but your goal right now isn't efficiency; it's learning. You are paying for data. Once you have enough data to prove one shirt colour consistently outperforms the other in actual sales, then you can switch off the loser and consider moving the winner into a CBO scaling campaign.
I would let each ad run until it has spent at least one to two times your target Cost Per Acquisition (CPA) before judging it; there's a rough sketch of that decision rule below. If you don't have a target CPA yet, you need one, and that's what the next section is for.
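To make that rule concrete, here's a minimal sketch of the check I'd run on each ad once the data comes in. The function name and thresholds are just my own shorthand, and the target CPA figure is a placeholder until you've derived your own:

```python
def judge_ad(spend: float, purchases: int, target_cpa: float) -> str:
    """Rough testing-phase rule: don't judge an ad until it has spent at
    least your target CPA; give borderline ads up to 2x before killing them."""
    if spend < target_cpa:
        return "keep testing - not enough spend to judge yet"
    cpa = spend / purchases if purchases else float("inf")
    if cpa <= target_cpa:
        return "winner so far - keep it running, consider scaling"
    if spend >= 2 * target_cpa:
        return "likely loser - turn it off"
    return "borderline - let it spend up to 2x target CPA before deciding"

# Placeholder $24 target CPA (worked out in the next section):
print(judge_ad(spend=75, purchases=0, target_cpa=24))  # -> likely loser
print(judge_ad(spend=25, purchases=2, target_cpa=24))  # -> winner so far
```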
You need to know your numbers first
Before you spend another dollar, you need to answer the most important question in your business: "How much can I afford to spend to acquire a customer?" Most business owners can't answer this, and it's why they fail at paid advertising. They're obsessed with getting a low Cost Per Lead (CPL) or Cost Per Click (CPC) instead of focusing on the big picture: Lifetime Value (LTV).
Let's do a quick, rough calculation. This is the maths that separates professional advertisers from amateurs. Answering this tells you if that ad that hasn't made a sale yet is actually a failure, or if you just haven't spent enough on it.
Average Revenue Per Account (ARPA): What's the average order value for a customer? Let's say it's $40 for a shirt. But do they buy more than once a year? If they buy 3 times, your annual revenue per customer is $120.
Gross Margin %: After the cost of the shirt, shipping, etc., what percentage is profit? Let's say it's 60%.
Monthly Churn Rate: This is more of a subscription metric, but for eCom you can think of it as customer lifespan: how long before a customer is unlikely to buy again? To keep things simple, let's model everything over a single year.
A simpler eCom model is just: LTV = (Average Order Value x Purchase Frequency) x Gross Margin %
LTV = ($40 x 3) x 0.60 = $72
So, each customer is worth roughly $72 in gross margin to you. A healthy business model aims for at least a 3:1 LTV to Customer Acquisition Cost (CAC) ratio, which means you can afford to spend up to $24 ($72 / 3) to acquire a single new customer and still run a very healthy, profitable business. Suddenly, spending $100 and not getting a sale doesn't seem so catastrophic, but it does tell us something. Based on these numbers, you'd need around 4 sales from that $100 to be on track. Your underperforming ad (the one with no sales) has already spent $75 of that budget; at a target CPA of $24, it should have generated about 3 sales by now. The fact that it hasn't, while the other ad is getting better engagement on a smaller spend, is a signal, but we still need a proper test to be sure.
Play around with this calculator below. Put in your own numbers. This will give you your 'breakeven' CPA and your 'target' CPA. This is your new north star. Not likes.
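If you'd rather run the numbers yourself, here's the same maths as a short script. The figures plugged in are the illustrative ones from above ($40 order value, 3 purchases a year, 60% margin), so swap in your own:

```python
def customer_economics(avg_order_value: float,
                       purchases_per_year: float,
                       gross_margin: float,
                       ltv_to_cac_ratio: float = 3.0) -> dict:
    """Simple eCom model: LTV, breakeven CPA and target CPA.
    gross_margin is a fraction, e.g. 0.60 for 60%."""
    ltv = avg_order_value * purchases_per_year * gross_margin
    return {
        "ltv": ltv,                            # gross margin per customer per year
        "breakeven_cpa": ltv,                  # spend more than this and you lose money
        "target_cpa": ltv / ltv_to_cac_ratio,  # what to actually plan around (3:1 ratio)
    }

print(customer_economics(avg_order_value=40, purchases_per_year=3, gross_margin=0.60))
# -> {'ltv': 72.0, 'breakeven_cpa': 72.0, 'target_cpa': 24.0}
```

Your breakeven CPA is the absolute ceiling; the target CPA is what you plan around, and it's also the minimum each ad should spend before you judge it.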
You'll need a proper testing structure
Okay, so let's stop guessing and start building a proper system. You don't just throw two ads in an ad set and hope for the best. You need a structure that allows for methodical testing and scaling. This is how we structure campaigns for our eCom clients, from those selling cleaning products to those earning an 8x return selling maps.
The basic idea is to separate your audiences by their temperature: Cold, Warm, and Hot. In Meta's language, this is Prospecting (ToFu - Top of Funnel), Retargeting (MoFu - Middle of Funnel), and Cart Recovery (BoFu - Bottom of Funnel).
1. ToFu (Top of Funnel) - Prospecting Campaign:
This is where you find new customers. Your audience targeting will be based on interests (e.g., people interested in competing brands, fashion magazines, ethical clothing) and Lookalike audiences (e.g., a lookalike of your past purchasers). This is where you do your creative testing. You'd set up an ABO campaign, with multiple ad sets. Each ad set would target a different audience, and inside each ad set, you'd test your creative variables (like the shirt colour).
2. MoFu (Middle of Funnel) - Retargeting Campaign:
This campaign targets people who have shown some interest but haven't bought yet. Audiences would include website visitors, people who have engaged with your Instagram page, or people who watched 50% of one of your video ads. The messaging here is different. It's less about introduction and more about overcoming objections or reminding them why they were interested.
3. BoFu (Bottom of Funnel) - Abandoned Cart Campaign:
This is your highest-intent audience. They've added a product to the cart or even started the checkout but didn't finish. You target them with very specific ads, maybe offering a small discount or free shipping to get them over the line. These campaigns often have the highest ROAS.
Here's a simple flowchart of how that looks:
- ToFu (Prospecting):
  - Interest Targeting
  - Lookalike Audiences
  - Broad Targeting (later)
  - ABO for testing
  - CBO for scaling
- MoFu (Retargeting):
  - Website Visitors
  - Social Engagers
  - Video Viewers
  - Usually ABO/CBO
  - Conversion Objective
- BoFu (Cart Recovery):
  - Added to Cart
  - Initiated Checkout
  - Viewed Content
  - Dynamic Product Ads
  - Offer/Urgency focus
Your test with the two shirts belongs firmly in the ToFu campaign. You would create one ad set for a specific audience (e.g., "Lookalike of Purchasers 1%"). Inside that ad set, you put your two ads. Then you duplicate that ad set and just change the audience (e.g., "Interest: Sustainable Fashion"). By using ABO, you can set a $20/day budget on each ad set, forcing an equal test. After a few days, once each ad has had the chance to spend at least your target CPA, you'll have much clearer data on which shirt colour sells better to which audience.
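To make that concrete, here's a plain-data sketch of what that testing campaign looks like. This isn't code for Meta's API, just a description of the structure you'd build in Ads Manager; the audience names, budgets, and ad names are illustrative:

```python
creative_test_campaign = {
    "name": "ToFu - Creative Testing",
    "budget_type": "ABO",            # budgets live on the ad sets, not the campaign
    "objective": "purchases",        # always optimise for sales, never engagement
    "ad_sets": [
        {
            "audience": "Lookalike of Purchasers 1%",
            "daily_budget_usd": 20,  # forces equal spend on this audience
            "ads": ["Shirt - Colour A", "Shirt - Colour B"],
        },
        {
            "audience": "Interest: Sustainable Fashion",
            "daily_budget_usd": 20,
            "ads": ["Shirt - Colour A", "Shirt - Colour B"],
        },
    ],
}
```

Once one colour consistently wins on actual purchases across audiences, that creative is what you move into a separate CBO scaling campaign.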
This is the main advice I have for you:
I know this is a lot to take in, especially when you came with what seemed like a simple question. But just 'turning off the bad ad' is a temporary fix for a symptom, not a cure for the underlying problem. The problem is a lack of strategy and process. Getting this right is how we take clients from burning money to achieving a 1000% return on ad spend like we did for a subscription box client.
Here is a summary of my recommendations for you to implement:
| Area of Concern | Recommended Action | First Step to Take |
|---|---|---|
| Metric Focus | Shift focus from vanity metrics (likes) to business metrics (Purchases, ROAS). Your goal is profit, not popularity. | 1. Customise your Ads Manager columns to prominently display Purchase ROAS, Cost Per Purchase, and Outbound CTR. Hide the 'Likes' column. |
| Budget Allocation | Stop using Campaign Budget Optimisation (CBO) for testing. It prevents a fair test by favouring early, often misleading, signals. | 2. Create a new campaign with Ad Set Budget Optimisation (ABO) turned on. This will be your dedicated 'Creative Testing' campaign. |
| Testing Methodology | Run a structured A/B test. Isolate your variables and give each ad a fair chance to perform by forcing an equal budget. | 3. Inside your new ABO campaign, create one ad set. Place both shirt ads inside it. Duplicate the ad set for each new audience you want to test against. Set a daily budget for each ad set. |
| Data Analysis | Avoid making decisions on too little data. Wait until spend is statistically significant before declaring a winner or loser. | 4. Let each ad run until it has spent at least your target CPA (which you calculated earlier). Only then should you evaluate performance and decide to turn one off. |
| Campaign Structure | Your current 'one ad set' approach is not scalable. You need a full-funnel structure to capture users at every stage of their journey. | 5. Plan out three separate campaigns: one for Prospecting (ToFu), one for Retargeting (MoFu), and one for Cart Recovery (BoFu). Your test will live in the ToFu campaign. |
As you can see, running paid ads effectively is a lot more involved than just boosting a post. It requires a deep understanding of the platform's mechanics, statistical analysis, and a robust strategic framework. It's a full-time job, and doing it wrong is a very fast way to lose money. For a lot of business owners, their time is better spent on the parts of the business they are experts in - product design, customer service, operations.
Getting this stuff right from the start can be the difference between a failed clothing brand and one that scales profitably. If this feels a bit overwhelming and you'd rather have an expert team build and manage the entire system for you, we should probably have a proper chat. We offer a free, no-obligation initial consultation where we can look at your ad account together and lay out a more detailed growth plan specific to your brand.
Regards,
Team @ Lukas Holschuh