Hi there,
Thanks for reaching out! Happy to give you some initial thoughts on what you're seeing. It's a really common situation to be in when you're starting a new store, so don't worry. You're asking the right questions, but I suspect you might be focusing on the wrong part of the problem.
Honestly, the whole CBO vs. ABO debate is a massive red herring for most new advertisers. It's like arguing about what brand of petrol to put in a car that doesn't have an engine yet. Neither will get you anywhere if the fundamentals aren't right. The real problem isn't how Meta distributes your budget; it's what you're telling it to do with that budget in the first place. Your 'engine' is your targeting, your creative, and your offer. Let's get that built first.
We'll need to look at your testing methodology...
So, your first CBO campaign got 4 purchases. That feels like a win, and it is! But with broad targeting on a brand new pixel, I'd say you got lucky. The pixel has no idea who your customer is, so it's essentially throwing darts in the dark. It hit the board four times, which is great, but you can't build a reliable business on random chance. When you try to repeat it, as you saw, it often falls flat. You can't scale luck.
Your second ABO campaign fell flat for a few reasons. Firstly, you changed your website. This is a huge variable. What you thought was "streamlining" might have actually hurt your conversion rate. Secondly, you introduced more variance in the ads. More variance isn't always better, especially if the new hooks weren't as good as the originals. You basically changed the car's engine, wheels, and driver all at once and then wondered why it crashed. Too many variables make it impossible to know what actually worked or didn't.
The biggest myth out there right now, especially for new stores, is that you should start with broad targeting. This is terrible advice. Broad targeting is powerful, but only *after* you've fed the Meta pixel thousands of high-quality conversion events. Once the pixel knows, with a high degree of certainty, what your ideal customer looks like, you can let it go 'broad' and it will find more of those people for you. Starting broad is like asking an intern on their first day to run the entire company. The algorithm needs direction first. Your job right now is to provide that direction.
This means you need a structured, systematic way to test. Not just throwing things at the wall. You're in the information gathering stage. Every dollar you spend should be buying you data on what works, not just chasing immediate sales. The sales will come once you have that data.
I'd say you should build a simple, logical testing structure...
For a new store, everything you do should be about teaching the pixel. The best way to do this is by starting with audiences that are much more specific. You need to tell Meta, "Hey, ignore the millions of people in Australia, I think my customers look more like *this* specific group". This is where detailed interest targeting comes in.
But you have to be clever about it. Picking the right interests is an art. I see so many accounts just targeting something like "Clothing" because they sell t-shirts. That's way too broad. It includes everyone from people who buy luxury fashion to people who buy budget basics. Your ad will be shown to millions of irrelevant people. You need to find interests that your ideal customer is much more likely to have than the general population.
For example, if you're selling high-end streetwear, instead of "Clothing", you might target interests like "Hypebeast", "Supreme" (the brand), "Complex Magazine", or specific designers known in that niche. Someone interested in those is far more likely to be your customer. You're giving Meta a massive head-start. The key is to think about the unique publications, brands, public figures, and tools your ideal customer engages with.
I'd structure your testing campaign using ABO for now. This gives you direct control over how much you spend on each audience test. A typical structure I'd use for a new ecom store would be:
Campaign: AU - Prospecting - Conversions (Sales)
- -> Ad Set 1 (ABO Budget: $20/day): Competitor Targeting
- Interests: Brands your customers also buy from.
- -> Ad Set 2 (ABO Budget: $20/day): Media Targeting
- Interests: Magazines, blogs, influencers your customers follow.
- -> Ad Set 3 (ABO Budget: $20/day): Related Hobbies/Activities
- Interests: What else are your customers into that relates to your product?
- -> Ad Set 4 (ABO Budget: $20/day): Software/Tools Targeting
- Interests: Any specific apps, tools or software your ideal customer might use. More for B2B but can work for some consumer products too.
Inside each ad set, you'd run your 3-4 best ads. Let them run for at least 3-4 days without touching them. This gives Meta enough time to figure things out. After a few days, you can analyse the results and turn off what's not working, then eventually move the winners into a scaling campaign (which could be CBO). But you're not there yet. You're still in the lab, figuring out the formula.
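To help you budget for this, here's a quick back-of-the-envelope sketch (in Python, purely illustrative) of what one testing round under the structure above costs. The budgets and run time come straight from the structure described; the totals are simple arithmetic:

```python
# Rough cost of one testing round under the ABO structure above.
# Budgets and duration are taken from the structure described;
# the totals are simple arithmetic, not benchmarks.
ad_sets = 4
daily_budget_per_ad_set = 20  # AUD per ad set, as in the structure
test_days = 4                 # "let them run for at least 3-4 days"

daily_spend = ad_sets * daily_budget_per_ad_set  # total AUD per day
round_cost = daily_spend * test_days             # AUD per test round

print(f"Daily spend: ${daily_spend}, cost per test round: ${round_cost}")
```

Knowing that one clean round of data costs roughly $320 AUD helps you treat that spend as tuition rather than lost sales.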
You probably should focus on validating your creative and offer first...
This is probably the most overlooked part. People spend ages on targeting and then run ads that are just... average. The truth is, with the power of Meta's algorithm today, a truly killer ad creative and an irresistible offer can make even a mediocre audience work. But the best targeting in the world won't save a bad ad or a weak offer.
When I say 'offer', I don't just mean your product. It's the whole package: the price, the shipping cost (free shipping is a massive lever), your return policy, any discounts, and crucially, the trust and credibility of your website. Your first campaign got sales, but your second didn't after you 'streamlined' the site. I'd be taking a very hard look at what changes you made. Did you remove customer reviews? Did you make the 'Add to Cart' button less obvious? Is the new design slower to load?
Based on my experience looking at new stores, some common failure points are:
- -> Poor Product Photography: Images are everything in e-commerce. They need to be professional, clear, and show the product in the best possible light. User-generated content (UGC) style photos and videos often perform brilliantly.
- -> Weak Product Descriptions: Don't just list features. Sell the benefit. How does your product make the customer's life better? Use persuasive language.
- -> Lack of Trust Signals: Do you have reviews? A clear returns policy? Secure payment badges (like Shopify Payments, PayPal)? An 'About Us' page that tells your story? Without these, people are hesitant to give you their card details. A site can look 'clean' but feel untrustworthy.
Your ad creative is just as important. In your second test, you had 'more variance'. But was it purposeful variance? You should be testing completely different *angles* or *concepts*, not just slightly different hooks. For example:
- -> Ad Concept 1: The Problem/Solution. Show the 'before' state (the problem your product solves) and the 'after' state (the better life with your product). This is a classic for a reason.
- -> Ad Concept 2: The Social Proof. Use customer testimonials, reviews, or UGC showing real people loving your product. This builds immense trust.
- -> Ad Concept 3: The 'How It's Made'. Show the quality and craftsmanship that goes into your product. This works really well for handcrafted or high-quality goods.
Running ads with these distinct concepts will give you much clearer data on what messaging resonates with your audience. It's a much better test than just changing the first sentence of your ad copy.
You'll need to understand your numbers to make good decisions...
You mentioned clicks and landing page views, which is a good start. But to really diagnose what's happening, you need to look at the whole funnel. The journey from someone seeing your ad to becoming a customer has several steps, and a failure at any one of them will kill your results.
Here's what I'd be looking at:
- Click-Through Rate (CTR): Out of everyone who saw your ad, how many clicked? If this is low (e.g., under 1%), your ad creative probably isn't grabbing attention. The image, video, or headline isn't working.
- Landing Page View Rate: Of those who clicked, how many actually waited for your page to load? If there's a big drop-off here, your website is too slow. This is a conversion killer.
- Add to Cart Rate (ATC): Of those who viewed your product page, how many added an item to their cart? If this is low, the problem is likely on the product page itself. Is the price too high? Are the photos unconvincing? Is the description weak?
- Initiate Checkout Rate (IC): Of those who added to cart, how many started the checkout process? A drop-off here can point to unexpected shipping costs being revealed in the cart.
- Purchase Rate: Of those who started checkout, how many finished? If people drop off here, your checkout process might be too long, complicated, or seem untrustworthy.
Analysing this funnel tells you *where* to focus your efforts. A low CTR means you need better ads. A low ATC rate means you need a better product page. Just looking at the final number of purchases doesn't give you this insight. For e-commerce, we see a lot of campaigns that generate lots of interest and clicks but no sales, and it's almost always a problem with the offer or the website experience, not the ads themselves.
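To make the funnel analysis concrete, here's a minimal sketch in Python. All the counts are made-up placeholders (your real numbers come from your Ads Manager export); it just computes each step-to-step rate and applies the ~1% CTR rule of thumb mentioned above:

```python
# Minimal funnel-rate sketch. All counts are illustrative
# placeholders, not benchmarks — swap in your real export numbers.
funnel = [
    ("impressions", 50_000),
    ("clicks", 600),
    ("landing_page_views", 450),
    ("adds_to_cart", 45),
    ("initiated_checkouts", 20),
    ("purchases", 8),
]

# Compute the conversion rate at each step of the funnel.
rates = {}
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rates[f"{prev_name} -> {name}"] = n / prev_n
    print(f"{prev_name} -> {name}: {n / prev_n:.1%}")

# Rule of thumb from the text: a CTR under ~1% suggests the
# creative isn't grabbing attention.
ctr = rates["impressions -> clicks"]
if ctr < 0.01:
    print("Low CTR: look at the ad creative first.")
```

Each rate should be judged against its own step (a 1% CTR and a 40% checkout-completion rate are both normal), so look for the step that is unusually weak relative to what that step typically does, not the smallest raw number.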
Finally, you need to know your numbers. The most important question isn't "what should my Cost Per Purchase be?" but "what Cost Per Purchase can I *afford*?". This comes down to understanding your Customer Lifetime Value (LTV). If a customer buys from you once for $50, but on average they come back and spend another $100 over the next year, their LTV is $150. This means you can afford to spend more than you might think to acquire them in the first place. Knowing this number gives you confidence during the testing phase, when costs are often higher.
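Here's that LTV maths as a worked example, using the $50 first order and $100 of repeat purchases from above. The 60% gross margin is an assumption I've plugged in for illustration; use your own margin:

```python
# Worked LTV example using the numbers from the text: a $50 first
# order plus $100 of repeat purchases over the following year.
first_order_value = 50.0
repeat_value_12_months = 100.0

ltv = first_order_value + repeat_value_12_months  # $150 LTV

# Assumed 60% gross margin, purely for illustration. The margin on
# LTV is the ceiling on what you can pay to acquire a customer and
# still break even over the year.
gross_margin = 0.60
break_even_cpa = ltv * gross_margin

print(f"LTV: ${ltv:.2f}")
print(f"Break-even CPA over 12 months: ${break_even_cpa:.2f}")
```

On these assumed numbers you could pay up to $90 to acquire a customer and still break even over the year, which is why a first-order Cost Per Purchase of $50-$60 during testing isn't necessarily a disaster.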
So, what's the plan?
You're in the learning phase, and that's okay. The key is to be methodical. Forget scaling, forget huge budgets, and definitely forget broad targeting for now. Your only goal for the next few weeks should be to find one or two audiences and one or two ad creatives that can consistently bring in sales, even if it's just a few. One campaign we worked on for a women's apparel brand drove a 691% return, but it all started with this exact kind of methodical testing to find the initial winning formula.
I've detailed my main recommendations for you below:
| Area of Focus | Recommendation | Why it Matters |
|---|---|---|
| Campaign Structure | Use one ABO (Ad Set Budget Optimisation) campaign for testing. Create 3-5 different ad sets, each with a small daily budget ($15-$20 AUD). | This gives you control to see exactly which audience performs best. It stops Meta from spending your whole budget on one audience that gets a lucky early sale. |
| Targeting Strategy | Pause all broad targeting. Each of your 3-5 ad sets should target a different 'theme' of specific interests (e.g., Competitor Brands, Related Magazines, etc.). | You need to teach the new pixel who your customer is. Specific interests give it the clear data it needs to learn quickly. |
| Creative Testing | In each ad set, test 2-3 completely different ad *concepts* (e.g., Problem/Solution vs. Social Proof). Don't just change headlines. | This tells you what messaging actually resonates with people, which is far more valuable data than just knowing which headline gets a slightly higher CTR. |
| Measurement | Analyse the full funnel: CTR, Landing Page Views, Add to Carts, Initiated Checkouts, and Purchases. Identify your biggest drop-off point. | This shows you where the real problem is. You might be spending all your time fixing your ads when the real issue is your product page or shipping costs. |
| Decision Making | Let tests run for at least 3-4 days. Turn off any ad set that has spent ~2x your target cost per purchase without a single sale. | This provides a simple, data-driven rule for making decisions and prevents you from wasting money on clear losers. It stops emotional decision-making. |
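The 'Decision Making' rule in the table is mechanical enough to write down as code. Here's a small sketch of it: pause any ad set that has spent roughly 2x your target cost per purchase without a single sale. The ad set names, spend figures, and $40 target are all made up for illustration:

```python
# Sketch of the kill rule from the table: pause any ad set that has
# spent ~2x the target cost per purchase with zero sales.
# All names and numbers below are hypothetical examples.
TARGET_CPA = 40.0  # assumed target cost per purchase, AUD

ad_sets = [
    {"name": "Competitor Targeting", "spend": 85.0, "purchases": 0},
    {"name": "Media Targeting",      "spend": 78.0, "purchases": 2},
    {"name": "Related Hobbies",      "spend": 95.0, "purchases": 0},
]

def should_pause(ad_set, target_cpa=TARGET_CPA):
    """Pause only clear losers: 2x target CPA spent, zero sales."""
    return ad_set["purchases"] == 0 and ad_set["spend"] >= 2 * target_cpa

to_pause = [a["name"] for a in ad_sets if should_pause(a)]
print("Pause:", to_pause)
```

The point of a rule like this isn't sophistication; it's that the decision is made before you see the numbers, which removes the temptation to give a losing ad set "one more day".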
This whole process is about replacing guesswork with a system. It takes patience, and it won't always feel like you're making huge strides every day, but it's the only reliable way to build a profitable advertising engine for a new store.
It can feel like a lot to juggle, and getting this initial testing framework right is often where people get stuck. It's the difference between building a solid foundation or building on sand. If you feel like you could use an expert eye to help you build out this initial testing plan and analyse the results properly, we offer a free, no-obligation strategy session where we can go through your account together.
Hope this helps!
Regards,
Team @ Lukas Holschuh