Hi there,
Thanks for reaching out!
I had a look at your question about Meta ads testing strategies. It's a great question, and your current approach is definitely more structured than what I see in many accounts. But you're right to ask whether there's a more efficient way. To be honest, what you're doing is a very detailed "creative bake-off", and it may be focusing on the smaller details while missing the bigger picture that really moves the needle.
I'm happy to give you some initial thoughts and outline a framework that we've found to be much more effective for systematically finding winning combinations and scaling them profitably. It's less about endlessly refining single ads and more about building a robust machine that finds new customers predictably.
TL;DR:
- Your current testing method is too granular and likely too slow. You risk optimising for the best creative within a poor-performing audience, which is a waste of budget.
- The hierarchy of importance in paid ads is: Audience > Offer > Creative. Your strategy needs to reflect this, starting with rigorous audience testing first.
- The most important piece of advice is to structure all your activity around a proper sales funnel (ToFu, MoFu, BoFu). This allows for clearer measurement, tailored messaging, and more efficient scaling.
- Use Dynamic Creative Testing (DCT) for initial audience discovery, but switch to static ads in a separate scaling campaign once you've found a winning audience-creative combination.
- I've included an interactive flowchart to help you prioritise your audiences and a calculator to estimate your target Cost Per Acquisition (CPA).
You're testing tactics, not strategy...
Your current process—refining creatives, then copy, then headlines, then audiences—is a logical, step-by-step approach. It’s a micro-level optimisation. The problem is that you could spend weeks and hundreds of pounds finding the perfect headline for an audience that was never going to buy your product in the first place. You're polishing a part of the engine without checking if the engine is connected to the wheels.
The Meta algorithm is incredibly powerful, but it's not a mind reader. When you give it a Dynamic Creative ad with 5 creatives, 4 copy options, and 4 headlines, you're asking it to test 80 different combinations. It will find the combination that gets the best result *within that ad set's audience*. However, this often leads to what's called a 'local maximum'. It finds the best possible outcome in a small, isolated environment. But what if a completely different audience, tested with just your 'average' creative, would have performed 3x better? Your current method doesn't really allow for that kind of discovery efficiently.
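To put a number on that combinatorial explosion, here's a minimal sketch (purely illustrative, the asset names are placeholders rather than anything from Ads Manager) showing how quickly a Dynamic Creative setup multiplies into distinct variants the algorithm has to split budget across:

```python
from itertools import product

# Hypothetical Dynamic Creative inputs -- placeholder names, not real assets
creatives = [f"creative_{i}" for i in range(1, 6)]   # 5 creatives
copies    = [f"copy_{i}" for i in range(1, 5)]       # 4 primary text options
headlines = [f"headline_{i}" for i in range(1, 5)]   # 4 headlines

# Every creative x copy x headline combination competes for budget inside ONE ad set
combinations = list(product(creatives, copies, headlines))
print(len(combinations))  # 80
```

With 80 variants fighting over a single ad set's budget, most combinations never get enough spend to prove themselves, which is exactly why the 'local maximum' problem bites.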
The fundamental shift you need to make is from asking "What is the best ad?" to asking "Who are the best people to show my ads to?" Once you find the *who*, figuring out the *what* becomes infinitely easier and cheaper. This is why we build our entire testing framework around the sales funnel.
I'd say you need to structure by the funnel...
Instead of thinking in terms of "testing" and "scaling" as two separate, sequential phases, you should think in terms of a perpetual, always-on funnel structure. This is the absolute foundation for any successful Meta ads account. It organises everything and gives you a clear dashboard for what's working and what isn't.
The funnel is typically split into three stages:
1. Top of Funnel (ToFu) - Prospecting:
- Who: Cold audiences. These are people who have never heard of you before. This is where you test your broad targeting, interests, behaviours, and lookalike audiences.
- Goal: To find and attract new potential customers at an acceptable cost. You're filling the top of your funnel with fresh traffic.
- Your Role: This is where 90% of your audience testing should happen. Your aim here is to identify new pockets of customers who respond well to your offer.
2. Middle of Funnel (MoFu) - Nurturing:
- Who: Warm audiences. People who have shown some interest but haven't taken a key action yet. This includes website visitors (who didn't buy), video viewers, social media page engagers, etc.
- Goal: To build trust and move people closer to a decision. You're reminding them of your value and overcoming objections.
- Your Role: The messaging here is different. You might show them testimonials, case studies, or different angles of your product.
3. Bottom of Funnel (BoFu) - Closing:
- Who: Hot audiences. These are people on the verge of converting. They've added a product to the cart, initiated checkout, or visited a key page multiple times.
- Goal: To get the conversion over the line. Simple as that.
- Your Role: This is where you might use urgency, scarcity, or special offers to prompt immediate action. This is often your most profitable campaign.
Why is this structure superior? Because it lets you allocate budget strategically. If you need more new customers, you increase the ToFu budget. If your cart abandonment is high, you boost the BoFu budget. It also allows you to tailor your message perfectly to the audience's "temperature," which dramatically increases conversion rates. Running the same ad to a cold prospect and a cart abandoner is a recipe for wasted spend.
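If it helps to see that laid out, here's a rough sketch of the three-stage structure as a simple data map. The audience sources, goals, and message angles are illustrative examples drawn from the descriptions above, not a prescriptive setup:

```python
# Illustrative funnel map -- examples only, adapt to your own account
funnel = {
    "ToFu (Prospecting)": {
        "audiences": ["broad", "interest stacks", "lookalikes"],
        "goal": "find new customers at an acceptable CPA",
        "message": "introduce the offer, lead with the core value proposition",
    },
    "MoFu (Nurturing)": {
        "audiences": ["site visitors (no purchase)", "video viewers", "page engagers"],
        "goal": "build trust and overcome objections",
        "message": "testimonials, case studies, alternative product angles",
    },
    "BoFu (Closing)": {
        "audiences": ["add to cart", "initiated checkout", "repeat key-page visitors"],
        "goal": "get the conversion over the line",
        "message": "urgency, scarcity, special offers",
    },
}

for stage, plan in funnel.items():
    print(f"{stage}: {plan['goal']}")
```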
We'll need to look at your audience priorities...
Once you have the funnel structure, the next question is which audiences to test and in what order. A lot of advertisers just throw a bunch of random interests into the machine and hope for the best. That's a mistake. There's a clear hierarchy of audience quality, and you should prioritise your testing budget accordingly. The closer an audience is (or a lookalike of it is) to the final conversion event, the better it's likely to perform.
I've put together a flowchart that visualises the decision-making process for prioritising audiences. This is the exact framework we use for our clients. The top of that priority list looks like this:
1. LAL of Purchasers
2. LAL of Add to Cart
3. LAL of Website Visitors
You should probably streamline your testing process...
So, how do we turn this theory into a practical, efficient testing process? Here's a revised workflow that focuses on finding winning audiences first, then scaling them.
Phase 1: Audience Discovery & Creative Exploration (ToFu Campaign)
The goal of this phase is to answer one question: "Which groups of people are most responsive to my offer?"
- Campaign Setup: Start with an Ad Set Budget Optimisation (ABO) campaign so each audience gets a fair, controlled test spend. Campaign Budget Optimisation (CBO), which lets Meta allocate budget to the best-performing ad sets automatically, is better suited to scaling proven winners (Phase 2 below). For this phase, we'll assume ABO for control.
- Structure:
- Campaign 1: ToFu Prospecting (Conversion Objective)
- Ad Set 1: Audience A (e.g., Interest Stack: Shopify, WooCommerce, eCommerce)
- Ad Set 2: Audience B (e.g., Interest Stack: Competitor Pages, Industry Publications)
- Ad Set 3: Audience C (e.g., Lookalike 1% of Past Purchasers)
- Ad Set 4: Audience D (e.g., Lookalike 1% of Website Visitors)
- The Ads: Inside each ad set, you can now use your Dynamic Creative setup, but keep it focused. Instead of 5 creatives, pick 2-3 of your strongest *concepts*. For example, a User-Generated Content (UGC) style video, a clean product shot, and a graphic with a strong headline. Pair these with 2 different copy angles (e.g., one focused on pain points, one on benefits) and 2 headlines. This is enough variety for the algorithm to work with without becoming unmanageable.
- Measurement: Your number one metric is Cost Per Acquisition (CPA) or Return on Ad Spend (ROAS). Don't get distracted by vanity metrics like CTR or CPC. Set a rule: if an ad set spends 2x your target CPA without a conversion, turn it off. Be ruthless. The goal is to quickly eliminate losers and identify potential winners.
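To make that kill rule concrete, here's a minimal sketch of the decision you'd make when reviewing each ad set. The target CPA, field names, and results are hypothetical, entered by hand rather than pulled from any Meta API:

```python
TARGET_CPA = 30.0  # hypothetical target in GBP -- use the figure from your own numbers

# Hypothetical ad set results, copied manually from Ads Manager
ad_sets = [
    {"name": "Audience A (Interest stack)", "spend": 70.0, "conversions": 0},
    {"name": "Audience B (Competitors)",    "spend": 55.0, "conversions": 2},
    {"name": "Audience C (LAL Purchasers)", "spend": 90.0, "conversions": 4},
]

for ad_set in ad_sets:
    if ad_set["conversions"] == 0 and ad_set["spend"] >= 2 * TARGET_CPA:
        verdict = "KILL (spent 2x target CPA with no conversion)"
    else:
        cpa = ad_set["spend"] / ad_set["conversions"] if ad_set["conversions"] else None
        verdict = f"KEEP (CPA so far: £{cpa:.2f})" if cpa else "KEEP (still within test budget)"
    print(f"{ad_set['name']}: {verdict}")
```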
Phase 2: Validation & Scaling (Scaling Campaign)
Once you've run Phase 1 for a few days (or long enough to get statistically significant data), you'll have a winner. Let's say Ad Set 3 (LAL of Purchasers) is delivering conversions well below your target CPA.
- Campaign Setup: Create a *new* CBO campaign specifically for scaling. CBO is perfect here because you want Meta to dynamically push budget to your proven winners.
- Structure:
- Campaign 2: CBO Scaling (Conversion Objective)
- Ad Set 1: Winning Audience C (LAL of Purchasers)
- The Ads: Now, look at the results from your DCT inside the winning ad set from Phase 1. Meta will tell you which creative, copy, and headline combination performed best. You take that winning combination and build it as a "static" ad (one creative, one copy, one headline) inside your new scaling ad set. We call this 'graduating' the ad. Why do this? Because it gives you stability and full control. You can now duplicate this winning ad and test *small variations* on it—a different hook in the first 3 seconds of the video, a different call to action, etc. This is where micro-optimisation belongs, *after* you've found the winning audience and message.
- Scaling: You scale by gradually increasing the budget of this CBO campaign (no more than 20-30% every 48 hours to avoid resetting the learning phase). As you find more winning audiences in your Phase 1 campaign, you 'graduate' them into this scaling campaign as new ad sets. Over time, your scaling campaign becomes a powerhouse of your best-performing audiences and creatives.
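To show what that pacing looks like in practice, here's a small sketch of how a CBO budget compounds when you raise it every 48 hours. The £50/day starting point and the 20% step are illustrative assumptions, not recommendations for your account:

```python
# Illustrative budget pacing: +20% every 48 hours from a hypothetical £50/day start
budget = 50.0
step_pct = 0.20

for day in range(0, 15, 2):  # review every 48 hours over roughly two weeks
    print(f"Day {day:>2}: £{budget:,.2f}/day")
    budget *= 1 + step_pct  # apply the next 20% increase
```

Even at a conservative 20% step, the daily budget roughly doubles every week without ever shocking the learning phase, which is the whole point of scaling gradually.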
This two-pronged approach allows you to constantly explore for new opportunities (Phase 1) while simultaneously exploiting your proven winners for profit (Phase 2). It's a much more robust and scalable system.
You'll need to know your numbers...
This entire process hinges on knowing what a "good" result looks like. You can't kill a losing ad set if you don't know your target CPA. You can't scale a winner if you don't know what a profitable ROAS is for your business.
The cost per result can vary massively depending on your industry, your offer, and the countries you're targeting. For simple conversions like a lead or a signup in developed countries (UK, US, etc.), you can expect a CPC of £0.50-£1.50 and a landing page conversion rate of 10-30%. For eCommerce sales, that conversion rate drops to 2-5%.
To help you get a feel for this, I've built a simple interactive calculator. You can adjust the sliders for your average Cost Per Click (you can find this in your ads manager) and your estimated Landing Page Conversion Rate (you can find this in Google Analytics) to see what your target Cost Per Acquisition should be. This will give you a concrete benchmark to measure your testing campaigns against.
Remember, the goal isn't just to get cheap conversions; it's to get profitable ones. Knowing your numbers is the only way to make informed, data-driven decisions instead of guessing.
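For reference, the logic behind that calculator is straightforward: estimated CPA is simply your cost per click divided by your landing page conversion rate. Here's a minimal sketch you can run with your own figures; the example inputs below are hypothetical:

```python
def estimated_cpa(cpc: float, conversion_rate: float) -> float:
    """Estimated Cost Per Acquisition = cost per click / landing page conversion rate."""
    return cpc / conversion_rate

# Hypothetical eCommerce scenario: £1.00 CPC, 3% landing page conversion rate
print(f"£{estimated_cpa(1.00, 0.03):.2f}")  # ~£33.33 per sale

# Hypothetical lead-gen scenario: £0.75 CPC, 20% signup conversion rate
print(f"£{estimated_cpa(0.75, 0.20):.2f}")  # ~£3.75 per lead
```

If the estimated CPA that comes out of this is higher than the margin (or lifetime value) you can afford per customer, the offer or the funnel needs work before any amount of creative testing will save it.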
This is the main advice I have for you:
To pull this all together, here is a table outlining the recommended structure and process. This framework moves you away from a simple creative bake-off towards a strategic system for growth.
| Phase | Campaign Setup | Core Objective | Key Actions & KPIs |
|---|---|---|---|
| 1. Audience Discovery (ToFu) | ABO Campaign with 3-5 ad sets, each targeting a distinct high-priority audience (Interests, LALs). | To find new, scalable audiences that convert below your target CPA. | Use DCT within each ad set to explore creative angles. KPI: CPA. Kill any ad set that spends >2x target CPA without converting. |
| 2. Scaling Winners | Separate CBO Campaign. Add winning audiences from Phase 1 into their own ad sets here. | To profitably scale spend on your proven audience/creative combinations. | 'Graduate' winning DCT combos into static ads. Scale budget by 20-30% every 48 hours. KPI: ROAS / CPA at scale. |
| 3. Retargeting (MoFu/BoFu) | A dedicated, always-on CBO Campaign with ad sets for different retargeting windows (e.g., 7-day, 30-day). | To maximise conversions from your existing warm and hot traffic. | Tailor messaging to user actions (e.g., testimonials for website visitors, discount codes for cart abandoners). KPI: ROAS. |
Adopting this kind of strategic framework is what separates amateur advertisers from professionals. It requires more setup upfront, but it provides clarity, control, and a clear path to scaling your results far more efficiently than your current method allows.
This might seem like a lot to take in, and implementing it correctly does take experience. Getting the structure right, choosing the right initial audiences, and correctly interpreting the test results are areas where mistakes can be costly. This is often where working with an expert can make a huge difference, as we've implemented and refined this exact process across dozens of accounts and can help you avoid the common pitfalls.
We offer a completely free, no-obligation initial consultation where we can take a look at your current ad account and provide some more specific, actionable recommendations. If that's something you'd be interested in, we'd be happy to set up a call.
Hope that helps!
Regards,
Team @ Lukas Holschuh