Hi there,
Thanks for reaching out!
Happy to give you some initial thoughts and guidance on your Meta ads structure for creative testing. It's a really common question, and getting it right from the start can save you a lot of money and time. It's good that you're putting so much thought into the testing phase before you even launch; that already puts you ahead of many others.
Your current idea for a structure is a decent starting point, but I think we can refine it quite a bit to get you much clearer, more reliable data on which of your 7 videos and 3 images are actually the winners. Let's walk through it.
I'd say you need a more granular testing setup...
Right, so your proposed campaign is a CBO at €100/day with two adsets: one for all the videos, one for all the images, both targeting broad.
The main issue here is how CBO (Campaign Budget Optimisation) works. When you give the budget to the campaign level, Meta's algorithm will try to get you the most results for the lowest cost. It's smart, but in a testing scenario, its efficiency can actually work against you. What will likely happen is that Meta will quickly decide which of your two adsets (video or image) it 'likes' more, and it'll start funnelling the majority of the €100 budget into that one. The other adset will barely get any spend, so you won't have a fair test between videos and images as a format.
It gets worse inside the adsets themselves. Within the adset that gets the budget (let's say it's the video one), the same thing will happen. Meta will pick one or two of your 7 videos that it thinks will perform best and give them most of the spend. The other five or six videos might get a few impressions here and there, but not enough to give you any real idea of whether they could have been winners. You'll end the test thinking 'Video X is the best', when actually Video Y just never got a proper chance to be seen.
You also mentioned a potential conflict between the adsets. It's not so much a conflict as a competition for the budget, and CBO is designed to pick a winner very quickly. For scaling, this is brilliant. For testing, it's a bit of a nightmare, because you need to force the budget to be distributed evenly to get clean data. You need to know not just which creative wins, but why and by how much. The proposed CBO structure just won't give you that clarity.
Basically, you'll end up with skewed results that are based on Meta's initial, rapid-fire optimisation rather than a true, controlled test of each individual creative asset. We need to isolate the variables, and your current structure lumps too many variables together.
You'll need to think about ABO vs CBO for testing...
This leads us to the ABO (Ad Set Budget Optimisation) vs CBO debate. As a general rule of thumb that I've seen work time and time again: use ABO for testing, and CBO for scaling.
With ABO, you set the budget at the ad set level. This gives you complete control over how much you spend on testing each element. It forces an even spend, which is exactly what we want in a test. It might seem less efficient on the surface, and your overall CPA might be higher during the test phase, but the quality of the data you get is far superior and will save you money in the long run.
So, how should we structure this? There are a couple of ways to approach it, depending on how rigorous you want to be.
Method 1: The Adset-Level Test
This is a solid middle ground. You'd create one campaign using ABO.
- -> Campaign: [PRODUCT] - Creative Test - ABO
Inside this campaign, you'd create your adsets. For a new product, we need to test audiences as well as creatives. The best creative in the world will fail if you show it to the wrong people. So let's pick 2 or 3 distinct audiences to start with. These should be ToFu (Top of Funnel) audiences based on interests. For an e-commerce product, think about magazines your audience reads, brands they like, influencers they follow, etc. Let's say we pick 'Audience A' and 'Audience B'.
Your structure would look like this:
- -> Adset 1: Audience A - €20/day budget (ABO)
- -> Ad 1: Image 1
- -> Ad 2: Image 2
- -> Ad 3: Image 3
- -> Ad 4: Video 1
- -> ...and so on for all 10 creatives.
- -> Adset 2: Audience B - €20/day budget (ABO)
- -> Ad 1: Image 1
- -> Ad 2: Image 2
- -> ...and so on for all 10 creatives.
In this setup, each audience is tested with all creatives. You'll start to see patterns. Maybe videos work best with Audience A, but images are better for Audience B. Or maybe Creative X is a clear winner across both audiences. It's a good approach, but it still has a milder version of the CBO problem: within each adset, Meta will still favour certain creatives over others. It's better than your original plan, but not perfect.
Method 2: The Pure Creative Test (My Recommended Approach)
This is the most scientifically sound way to do it. It's more work to set up, but the data is squeaky clean. This structure isolates every single creative so there's no doubt about what's working.
The structure is one creative per adset. Yes, that means 10 adsets.
- -> Campaign: [PRODUCT] - Creative Test - ABO
- -> Adset 1: Creative 1 (Video) - €10/day budget - Targeting Audience A
- -> Ad 1: Video 1
- -> Adset 2: Creative 2 (Video) - €10/day budget - Targeting Audience A
- -> Ad 1: Video 2
- -> ...and so on for all 10 creatives, each in their own adset, all with the same budget and same audience.
Why is this better? Because you've fixed the budget and the audience. The *only* variable is the creative itself. If Adset 1 gets a €5 CPA and Adset 2 gets a €20 CPA, you know that, with this audience and this level of spend, Video 1 is roughly four times more cost-effective than Video 2. There's no guesswork. You're not relying on Meta's algorithm to distribute spend within an adset; you are forcing the distribution. This gives you clear, defensible proof.
Your total budget would be 10 adsets x €10/day = €100/day, the same as your proposed CBO. But the results will be a world apart in terms of clarity. After you've run this test on Audience A, you can duplicate the campaign and run it again on Audience B to see if the results hold up. I remember working with a women's apparel brand where, using this exact method, we achieved a 691% return simply by being this rigorous with our initial creative testing. We found that one particular style of user-generated video outperformed studio-shot images by a factor of 5, something we would never have discovered with a lumped-together test.
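If it helps to see the analysis step concretely, here's a minimal sketch of how you'd rank the ten single-creative adsets once the test has run. It's plain Python with made-up numbers, not anything pulled from a real account; swap in the Spend and Purchases columns from your own Ads Manager export.

```python
# Hypothetical end-of-test numbers for the 10 single-creative adsets.
# Replace with the Spend / Purchases columns from Ads Manager.
results = {
    "Video 1": {"spend": 30.0, "purchases": 6},
    "Video 2": {"spend": 30.0, "purchases": 1},
    "Image 1": {"spend": 30.0, "purchases": 3},
    # ... one entry per creative / adset
}

def cpa(row):
    """Cost per purchase; None if the adset hasn't converted yet."""
    return row["spend"] / row["purchases"] if row["purchases"] else None

# Rank converting adsets from cheapest purchase to most expensive.
ranked = sorted(
    (name for name, row in results.items() if cpa(row) is not None),
    key=lambda name: cpa(results[name]),
)

for name in ranked:
    print(f"{name}: CPA €{cpa(results[name]):.2f}")
```

Because every adset had the same budget and audience, this simple ranking really is the test result; there's nothing hidden inside Meta's delivery decisions.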
We'll need to look at your audience targeting...
As I mentioned, the creative is only half the battle. You could have the best ad in the world, but if you show it to people who have zero interest in your product, you'll get zero sales. You've put 'BROAD' in your plan, which can work for accounts with a huge amount of pixel data and a mature product, but for a new product launch, it's like throwing darts in the dark. You're relying on Meta to find your customer from scratch, which is expensive and slow.
You need to give the algorithm a much stronger starting signal. This means using detailed targeting based on interests, behaviours, and demographics. This is your Top of Funnel (ToFu) strategy.
Think deeply about your ideal customer. Don't just think about age and gender.
- -> What websites do they visit?
- -> What influencers do they follow on Instagram?
- -> What brands (even competitors) do they buy from?
- -> What magazines or blogs do they read?
- -> What tools or software do they use?
The key is to pick interests that are *specific* to your audience. A common mistake is picking interests that are too broad. For instance, if you're selling high-end coffee beans, targeting the interest "Coffee" is a bad idea. Millions of people who just drink instant coffee fall into that category. You'd be better off targeting interests like "James Hoffmann", "Aeropress", "Speciality Coffee Association", or high-end coffee machine brands. These interests are far more likely to contain your actual target customer, and far less likely to include the people who aren't.
For your initial test, I'd suggest creating 2-3 distinct audience 'hypotheses'.
- -> Audience A (Competitor-based): A group of interests based on your direct competitors.
- -> Audience B (Interest-based): A group of interests based on related hobbies, magazines, influencers etc.
- -> Audience C (Demographic/Behaviour-based): A more layered audience, maybe combining an interest with a behaviour like 'Engaged Shoppers'.
You'd then run your 'Pure Creative Test' (Method 2) on Audience A first. Once you have your winning creatives, you can then test them against Audience B and C to see if you can beat the results. This structured approach to audience testing is just as important as the creative test.
Later on, once your pixel has gathered data (you want at least 100 purchase events, but honestly more like 500-1000 to be really effective), you can move into the more advanced stuff like Lookalike Audiences and Retargeting (MoFu/BoFu). A 1% Lookalike of your 'Purchasers' list will almost always outperform any interest-based audience you can build. But you have to earn the data to build that first. You start with interests.
You should probably focus on the right metrics...
So, you're running your test. How do you decide what's a 'winner'? It's easy to get lost in all the columns in Ads Manager. For an e-commerce store, you need to focus on the metrics that actually lead to money.
Here's how I'd analyse the results, in order of importance:
- Return On Ad Spend (ROAS): This is the king. If you spend €10 and make €40 in revenue, your ROAS is 4x. This is the ultimate measure of success. The creative/adset with the highest ROAS wins. Period.
- Cost Per Purchase (CPA): If ROAS data isn't available or you don't have enough purchases yet, CPA is your next best metric. How much does it cost you to get one sale? Lower is better.
- Cost per Add to Cart / Initiate Checkout: In the very early days of a test (first 24-48 hours), you might not have many purchases. These are your leading indicators. If one creative is getting Add to Carts for €2 and another is costing €10, it's a strong sign the first one is going to be your winner.
- Click-Through Rate (CTR - Link): This tells you how compelling your ad is. A high CTR (above 1% is okay, above 2% is good for ToFu) means your image/video and headline are grabbing attention. If a creative has a very low CTR, it's probably dead on arrival, no matter how good the offer is. It's failing at its first job.
- Cost Per Click (CPC - Link): This is closely related to CTR. A high CTR usually leads to a lower CPC. It's a good measure of how 'relevant' Meta thinks your ad is to the audience.
Don't get bogged down by vanity metrics like reach, impressions, or 'post engagement'. They don't pay the bills. Focus on the actions that happen on your website.
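To make those definitions concrete, here's a small sketch (plain Python, illustrative numbers only) that turns the raw Ads Manager columns into the four metrics above:

```python
def analyse(spend, revenue, purchases, link_clicks, impressions):
    """Compute the core e-commerce metrics from raw Ads Manager columns."""
    return {
        "ROAS": revenue / spend if spend else 0.0,             # revenue per euro spent
        "CPA": spend / purchases if purchases else None,        # cost per purchase
        "CPC": spend / link_clicks if link_clicks else None,    # cost per link click
        "CTR %": 100 * link_clicks / impressions if impressions else 0.0,
    }

# Example: €10 spend producing €40 revenue -> ROAS of 4.0, as in the text above.
print(analyse(spend=10, revenue=40, purchases=1, link_clicks=25, impressions=1800))
```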
You need a rule for when to kill a failing adset in your test. A good rule of thumb is to let it spend at least your target CPA. If your product is €50 and you're aiming for a €25 CPA, let each adset spend at least €25-€30 before you make a call. If it hasn't gotten a single purchase by then (or even an Add to Cart), it's probably not going to work. Turn it off and let the budget go to the other contenders. Some people use 2x or 3x their target CPA as the cutoff; it depends on how much risk you're willing to take. But definitely don't turn things off after just a few euros of spend.
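Here's that kill rule written out as a tiny sketch. The 1.2x multiplier is just my way of expressing the "€25-€30 on a €25 target" range above; raise it to 2-3x if you want to give creatives more rope.

```python
def should_kill(spend, purchases, add_to_carts, target_cpa, multiplier=1.2):
    """Turn an adset off once it has spent past the threshold with nothing to show."""
    threshold = target_cpa * multiplier
    if spend < threshold:
        return False  # too early to judge -- keep spending
    # Past the threshold: kill only if there isn't a purchase or even an Add to Cart.
    return purchases == 0 and add_to_carts == 0

# €28 spent against a €25 target CPA, no purchases, no add-to-carts -> kill it.
print(should_kill(spend=28, purchases=0, add_to_carts=0, target_cpa=25))  # True
```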
I've detailed my main recommendations for you below:
This is a lot to take in, I know, so here's a summary of the actionable strategy I'd recommend for your new product launch creative test. I remember a cleaning-products client for whom we used a similar approach and found a creative that drove a 633% return.
| Phase | Campaign Setup | Structure & Rationale | Key Metrics to Judge By |
|---|---|---|---|
| Phase 1: Pure Creative Test (3-5 days) | Campaign type: Sales. Budget optimisation: ABO (ad set budget). Total budget: €100/day. | One creative per adset (10 adsets at €10/day each), all targeting Audience A, exactly as in Method 2. The creative is the only variable, so every asset gets a fair, forced share of spend. | Primary: ROAS, Cost Per Purchase (CPA). Secondary: Cost per Add to Cart, Link CTR. Action: after 3-5 days, identify the top 2-3 winning creatives based on these metrics. |
| Phase 2: Audience Test (3-5 days) | Campaign type: Sales. Budget optimisation: ABO (ad set budget). Total budget: €60-€90/day. | Take the top 2-3 creatives from Phase 1 and run them against Audience B and Audience C, with equal budgets per adset. This tests whether the winners hold up beyond Audience A. | Primary: ROAS, Cost Per Purchase (CPA). Action: identify the single best creative + audience combination. This is your "golden adset". |
| Phase 3: Scaling (ongoing) | Campaign type: Sales. Budget optimisation: CBO (campaign budget). Total budget: start with €50-€100/day and increase slowly. | One CBO campaign containing only your proven creative + audience combinations, letting Meta distribute spend between known winners. | Primary: ROAS. Action: monitor ROAS closely. Increase the budget by 20% every 2-3 days as long as ROAS remains above your target. |
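The Phase 3 rule in that last row is mechanical enough to sketch out too. This is purely illustrative and assumes you check in every 2-3 days with your own target ROAS in mind:

```python
def next_budget(current_budget, roas, target_roas, step=0.20):
    """Apply the scaling rule: +20% every 2-3 days while ROAS stays above target."""
    if roas >= target_roas:
        return round(current_budget * (1 + step), 2)
    return current_budget  # hold (and review) if ROAS has dipped below target

# Example: €100/day at a 3.5x ROAS against a 3.0x target -> raise to €120/day.
print(next_budget(current_budget=100, roas=3.5, target_roas=3.0))
```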
As you can see, a proper testing framework is a multi-stage process. It's methodical and requires patience, but it's the foundation for building a truly profitable advertising account. Just throwing things at the wall with a broad CBO campaign is a recipe for wasted ad spend and confusing results. This way, you build from a position of strength, knowing exactly what works and why.
This process is time-consuming and requires a fair bit of experience to analyse the results correctly and not make knee-jerk reactions. Getting it wrong can be costly, not just in ad spend but in the missed opportunity of failing to identify a creative that could have been a huge winner for your business.
This is often where expert help can make a significant difference. An experienced eye can help you set up these tests correctly, interpret the data without emotion, and make the right decisions to scale your campaigns profitably. We've run these kinds of tests for countless clients across dozens of niches, from eCommerce and SaaS to course creators, and that experience helps us get to the winning formula much faster.
If you'd like to chat through this in more detail and have us take a look at your specific plans, we're happy to offer a free initial consultation. It's a no-obligation call where we can give you some more tailored advice and you can get a better sense of how we work.
Hope this helps you get started on the right foot!
Regards,
Team @ Lukas Holschuh