Hi there,
Thanks for reaching out. It's a really common situation to be in – you've followed a testing strategy you've read about, but the results are confusing and Meta seems to be working against you. The short answer to your question is yes, you should probably turn off the video ad set, but the real issue is a bit deeper than that. Your entire testing structure is likely what's holding you back and costing you money.
TL;DR:
- Your current testing method (Dynamic Creative + CBO) is flawed. It masks which specific ad combinations are working and allows Meta's algorithm to spend your budget inefficiently, especially when testing different formats like video vs. image.
- Stop using CBO to test fundamentally different creative types. Use Ad Set Budget Optimisation (ABO) for testing to force equal spend and get clean data on what really works. CBO is for scaling winners, not finding them.
- The most important advice is to simplify your testing. Instead of dynamic creative, build 3-5 complete, individual ads (one image, one headline, one primary text per ad) and run them against each other to find a clear winner.
- Your current CPA of $39 against a $69 AOV gives you a very tight margin. This means disciplined testing is absolutely essential to find a truly profitable ad you can scale.
- This letter includes a step-by-step flowchart for a better testing process and a simple break-even calculation to help you figure out your true break-even ROAS.
We'll need to look at why Meta is sabotaging your results...
First off, let's be clear about what's happening in your account. You've put two very different types of creative – videos and static images – into a cage match and told the referee (CBO) to just give the prize money to whoever lands the first punch. The problem is, the first punch isn't always the knockout blow.
Campaign Budget Optimisation (CBO) is designed to dynamically allocate your campaign's budget to the ad set that it predicts will get you the most results for the lowest cost. On paper, this sounds great. In practice, especially during the testing phase, it's often a liability. The algorithm is heavily biased by early data. If your video ad set happened to get one or two cheap link clicks or even a lucky conversion in the first 24 hours, CBO can latch onto it and decide "This is the winner!" It will then starve your other, potentially better-performing ad set of budget before it ever has a chance to prove itself.
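To see why this goes wrong, here is a toy "winner-take-most" allocator in Python. To be clear, this is not Meta's actual algorithm, just a minimal sketch of the feedback loop: one lucky early conversion makes an ad set look cheapest, it grabs most of the budget, and the starved ad set never collects enough data to overturn the verdict.

```python
# Toy "winner-take-most" budget allocator. NOT Meta's real logic --
# just an illustration of how early data can lock in a bad choice.

def allocate(budget: float, observed_cpa: dict) -> dict:
    """Send 90% of the budget to whichever ad set currently looks cheapest."""
    leader = min(observed_cpa, key=observed_cpa.get)
    return {k: budget * (0.9 if k == leader else 0.1) for k in observed_cpa}

# Day 1: videos fluke one cheap conversion; images haven't converted yet.
observed_cpa = {"videos": 12.0, "images": float("inf")}
print(allocate(50.0, observed_cpa))  # {'videos': 45.0, 'images': 5.0}

# At $5/day, the image ad set may never record the conversions that
# would prove it is actually the cheaper format -- the verdict is stuck.
```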
This is especially true when comparing videos and images. Videos often generate higher engagement metrics initially (views, reactions, shares) which can signal to the algorithm that it's a "good" ad, even if the people engaging aren't actually buying. Static images might have lower initial engagement but a much higher click-to-purchase intent. By putting them in the same CBO campaign, you're not comparing apples to apples; you're comparing apples to oranges, and the algorithm is getting confused. It's spending the most on your video ad set because it thinks it's doing its job based on early, often misleading, signals.
You've seen this yourself. The video ad set is "underperforming massively" in terms of actual sales, yet it's eating all the budget. This isn't a bug; it's a feature of how CBO works. You've given it the freedom to make a choice, and it's made the wrong one based on the flawed premise of your test.
I'd say you need a more structured testing framework...
So, how do we fix this? We take back control. The goal of testing isn't just to find a winner, but to do so with clean, unambiguous data. You need to know *why* something is winning. To do that, we need to move away from CBO for testing and embrace Ad Set Budget Optimisation (ABO).
With ABO, you set the budget at the ad set level. This means you can create two ad sets (one for images, one for videos) and give each an equal, fixed daily budget. For example, if your daily budget is $50, you give the Image Ad Set $25/day and the Video Ad Set $25/day. This forces a fair fight: Meta can no longer shift spend from one format to the other, so both gather enough data for you to make an informed decision based on the metric that actually matters, your Cost Per Acquisition (CPA).
This structured approach forms the foundation of a proper testing and scaling process. You use a separate, dedicated ABO campaign purely for testing new creatives and audiences. Once you've run the test and have statistically significant data that clearly shows one ad or ad set is a winner (e.g., the image ad set is getting a $25 CPA while the video is at $55), you then take that winning ad and move it into a *different* campaign. This second campaign is your 'Scaling Campaign', and *that* is where you can use CBO effectively, because you are now feeding it with proven, high-performing ads.
I've mapped this out for you below. This is the kind of systematic process that separates amateurs from professionals and turns ad spend from a gamble into a calculated investment.
- Step 1: Create a new ABO campaign (for testing only).
- Step 2: Isolate the formats: Ad Set 1 for Images ($25/day), Ad Set 2 for Videos ($25/day).
- Step 3: Test 2-3 individual, complete ads inside each ad set. No dynamic creative yet.
- Step 4: Analyse after 3-5 days and identify the single best-performing ad based on CPA and ROAS.
- Step 5: Scale by moving the ONE winning ad into a new CBO campaign (for scaling only).
You should probably rethink your creative testing process...
Now let's tackle the second part of the problem: the "3:2:2" dynamic creative setup. This feature is one of Meta's most misunderstood tools. Marketers often use it as a shortcut for creative testing, but it's actually a black box that hides valuable insights.
When you give Meta 3 creatives, 2 headlines, and 2 primary texts, you're creating 12 possible ad combinations (3 x 2 x 2 = 12). What happens next is very similar to the CBO problem. The algorithm will serve a few of these combinations, and as soon as it finds one that gets slightly better initial results, it will pour the majority of the budget into that single combination. The other 10 or 11 variations might never get a real chance to run. You might have a killer headline or a winning image sitting dormant in your ad set, completely undiscovered, because the algorithm gave up on it after just a few impressions.
You end up knowing that *something* in that ad set is working, but you don't know the exact combination. Is it Image A with Headline B, or Image A with Headline C? You have no idea. This makes it impossible to learn and iterate. You can't take the winning elements and build new, better ads from them because you can't be certain what the winning elements actually are.
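If it helps to see the combinatorics concretely, here is a short Python sketch (the creative names are made up for illustration) that enumerates every ad a 3:2:2 setup can assemble. Meta reports results for the ad set as a whole, not for any of these twelve rows:

```python
from itertools import product

creatives = ["Image A", "Image B", "Video A"]
headlines = ["Headline 1", "Headline 2"]
primary_texts = ["Text 1", "Text 2"]

# Dynamic creative can serve any of these 3 x 2 x 2 = 12 combinations,
# but you never see per-combination purchase data.
combos = list(product(creatives, headlines, primary_texts))
print(len(combos))  # 12
for creative, headline, text in combos:
    print(f"{creative} + {headline} + {text}")
```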
The solution is, again, simplification and control. For initial testing, you should build each ad out manually. This means you create:
- Ad 1: Best Image 1 + Best Headline 1 + Best Primary Text 1
- Ad 2: Best Image 2 + Best Headline 2 + Best Primary Text 2
- Ad 3: Best Video 1 + Best Headline 1 + Best Primary Text 2
And so on. You create 3-5 complete, distinct "ad concepts" and run them against each other in your ABO testing campaign. This way, the performance data is tied to the ad as a whole. You will know, without a doubt, that Ad 1 outperformed Ad 2 and Ad 3. You have a clear, undeniable winner.
Once you've found that winning ad concept, *that's* when Dynamic Creative can be useful. You can take your winning image and winning primary text, and then use Dynamic Creative to test 5 different headlines against it to see if you can squeeze out a bit more performance. But it should be used for optimisation of a known winner, not for initial discovery. Right now, you're trying to do both at once, and it's leading to messy data and wasted spend.
[Visual: 3 creatives × 2 headlines × 2 primary texts = 12 possible combinations, but Meta likely only tests 1-2 of them properly.]
You'll need to understand your numbers to make decisions...
Finally, let's talk about your metrics. You've mentioned your Average Order Value (AOV) is $69 and your CPA is currently $39. This gives you a Return On Ad Spend (ROAS) of $69 / $39 = 1.77x. While this might seem okay on the surface (you're making more than you're spending), for most e-commerce businesses, this is dangerously close to being unprofitable, if not already there.
You have to account for your Cost of Goods Sold (COGS), shipping costs, payment processing fees, and other overheads. A common rule of thumb for e-commerce is that you need a ROAS of at least 3x to be comfortably profitable and have room to grow. A 1.77x ROAS means your ad performance is, at best, marginal.
This adds urgency to what we've been discussing. You can't afford to let Meta waste your budget on underperforming video ads or untested dynamic combinations. Every dollar needs to be working as hard as possible to acquire customers at a much lower CPA. Your goal shouldn't just be to get your campaign's CPA down from $39, but to find an ad that can consistently achieve a CPA of $23 or less, which would get you to that 3x ROAS target ($69 / $23 ≈ 3.0).
To really understand what you need to aim for, calculate your Break-Even ROAS: the point at which you are neither making nor losing money on your ads. Any ROAS above this number is profit. The formula is simple: take your profit margin (AOV minus COGS, shipping, fees, and so on, divided by AOV), then Break-Even ROAS = 1 / profit margin. This shows you the minimum ROAS you need to survive; the sketch below walks through the numbers.
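Since an interactive calculator doesn't travel well in a letter, here is the same calculation as a minimal Python sketch. The cost figures in the example are placeholders, not your real numbers; plug in your own COGS, shipping, and fees.

```python
# Minimal break-even calculator. The example costs below are
# illustrative assumptions -- replace them with your actual figures.

def break_even(aov: float, cogs: float, shipping: float, fees: float) -> None:
    profit_per_order = aov - cogs - shipping - fees
    margin = profit_per_order / aov
    print(f"Profit margin:   {margin:.0%}")
    print(f"Break-even ROAS: {aov / profit_per_order:.2f}x")
    print(f"Break-even CPA:  ${profit_per_order:.2f}")

break_even(aov=69.00, cogs=20.00, shipping=8.00, fees=2.50)
# Profit margin:   56%
# Break-even ROAS: 1.79x
# Break-even CPA:  $38.50
```

Notice how, with these illustrative costs, a $39 CPA sits almost exactly at break-even, which is why a 1.77x ROAS leaves no real profit.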
So, what's the immediate plan?
Knowing all this, the path forward becomes much clearer. You need to stop the current campaign, take a step back, and rebuild with a structure that prioritises clean data and disciplined testing. It might feel like you're slowing down, but you're actually building a much more solid foundation for profitable scaling.
I've detailed my main recommendations for you below in a clear, actionable table. This is the exact process we'd use for a new client in your situation to diagnose creative problems and find a path to profitability.
| Step | Action | Reasoning |
|---|---|---|
| 1. PAUSE | Immediately pause your current CBO campaign. | It is wasting money on an underperforming ad set and providing messy data. Stop the bleeding first. |
| 2. REBUILD | Create a new campaign using Ad Set Budget Optimisation (ABO). Name it "[TESTING] - Creatives". | This gives you full control over the budget allocation to ensure a fair test between your creative concepts. |
| 3. SEGMENT | Inside the new campaign, create at least two ad sets: one for your best Image ads, one for your best Video ads. Give them equal daily budgets. | This isolates the performance of each creative format, preventing CBO from making a premature decision. |
| 4. SIMPLIFY | In each ad set, create 2-3 individual ads. Do NOT use Dynamic Creative. Each ad should be a complete, self-contained unit (1 creative + 1 headline + 1 primary text). | This allows you to track performance at the specific ad level, giving you clear, actionable data on which full concept works best. |
| 5. ANALYSE | Let the campaign run for 3-5 days, or until each ad set has spent at least 1-2x your target CPA. Analyse the results at the ad level. | Patience is key. You need enough data to make a statistically valid decision. Look for the single ad with the best CPA and ROAS (a worked decision rule follows this table). |
| 6. SCALE | Once you have identified a clear winning ad, duplicate it into a new, separate CBO campaign. Name it "[SCALING] - [Winning Ad Name]". | Now that you have a proven winner, you can give the algorithm the freedom to find the cheapest conversions for you at scale using CBO. |
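If you want to make step 5 mechanical, here is one possible decision rule in Python. The 2x-spend gate and the kill/scale multipliers are our rule-of-thumb assumptions, not anything Meta publishes; tune them to your own margins.

```python
# Rule-of-thumb gate for judging a test ad. All thresholds are assumptions.

def judge_ad(spend: float, purchases: int, target_cpa: float) -> str:
    if spend < 2 * target_cpa:
        return "WAIT: not enough spend to judge yet"
    if purchases == 0:
        return "KILL: spent 2x target CPA with zero purchases"
    cpa = spend / purchases
    if cpa <= target_cpa:
        return f"SCALE: CPA ${cpa:.2f} beats target ${target_cpa:.2f}"
    if cpa <= 1.5 * target_cpa:
        return f"KEEP TESTING: CPA ${cpa:.2f} is close to target"
    return f"KILL: CPA ${cpa:.2f} is too far above target"

# With the $23 target CPA discussed above:
print(judge_ad(spend=60.0, purchases=3, target_cpa=23.0))
# SCALE: CPA $20.00 beats target $23.00
```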
Implementing a system like this is what separates reactive advertisers from strategic ones. It's not about finding a magic "hack" like the 3:2:2 method; it's about applying a rigorous, scientific process to your advertising. It takes a bit more work up front, but it saves an enormous amount of money and time in the long run.
This is precisely the kind of challenge where professional guidance can make a massive difference. We've spent years refining these testing methodologies across countless accounts. We can help you navigate this process, avoid common pitfalls, and accelerate your path to finding a profitable, scalable ad campaign.
If you'd like to go over your account together and get a more personalized plan, we offer a free, no-obligation strategy session. It's a chance for us to look at your specific setup and provide some more concrete advice.
Regards,
Team @ Lukas Holschuh