Hi there,
Thanks for reaching out!
That's a really common question, and you've already figured out that Ads Manager doesn't give you a simple "breakdown by ad format" button. Your current method of using naming conventions is what a lot of people do to get by, but it doesn't really solve the underlying issue of how to properly test them against each other.
I'm happy to give you some of my thoughts on how we'd approach this. It's less about finding a hidden report and more about setting up your campaigns in a way that gives you clear, reliable answers. It takes a bit more work up front, but it's the only way to know for sure what's working and what's not.
We'll need to look at why Facebook's algorithm makes this tricky...
The first thing to realise is how the Facebook algorithm works, especially if you're using Campaign Budget Optimisation (CBO). When you put multiple ads—in your case, a video, a carousel, and a single image—into the same ad set, the system will try to find a "winner" very quickly. It'll start pushing the budget towards the ad that gets the first few cheap clicks or conversions, and it might stop showing the others almost completely.
The problem is, the algorithm's first choice isn't always the best one in the long run. A video ad, for instance, might have a higher cost per click initially but lead to much better quality traffic that converts at a higher rate on your website. If the algorithm chokes its budget off after a few hours because a simple image got a couple of cheap clicks, you'll never know the video's true potential. This is probably what's happening in your campaigns and why a direct comparison is so difficult.
So, your naming convention is good for organising and filtering your results after the fact, but it doesn't fix this core delivery problem. It's a reactive way to analyse skewed data, rather than a proactive way to get clean data you can actually trust. You're trying to compare the performance of three runners, but the judge is only letting one of them run the full race.
I'd say you need to set up a proper testing framework...
To get around this, you have to take control back from the algorithm and force it to give each ad format a fair shot. The way to do this is with a structured A/B test, sometimes called a split test. It sounds technical, but the concept is simple: you change only one thing at a time to see what difference it makes. In your case, that one thing is the ad format.
Here's how I would set it up, step-by-step:
1. Campaign and Objective
Start with a single campaign. Make sure it's set to the conversion objective that actually matters to your business, whether that's sales, leads, or something else. Don't test this with a traffic or engagement objective if your real goal is to get conversions, as you'll get misleading results.
2. Ad Set Structure is Everything
This is the most important part. Instead of one ad set with all your different ads in it, you're going to create multiple ad sets within that one campaign.
-> Ad Set A: Single Image
-> Ad Set B: Video
-> Ad Set C: Carousel
Each ad set will contain *only one* ad format. So, Ad Set A will have your single image ad(s), Ad Set B will have your video ad(s), and so on. This is non-negotiable for a clean test.
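If it helps to see the structure laid out, here's a quick sketch of the test setup as plain data — every name, budget figure, and audience label below is purely illustrative, not taken from any real account. The point is that everything except the format is shared:

```python
# Hypothetical sketch of the split-test structure: one campaign,
# three single-format ad sets, identical settings apart from the creative.
SHARED_SETTINGS = {
    "objective": "conversions",          # match your real business goal
    "daily_budget_gbp": 20,              # identical budget per ad set (ABO, not CBO)
    "audience": "example_lookalike_1pct",  # placeholder audience name
    "placements": ["facebook_feed", "instagram_feed"],
}

AD_SETS = [
    {"name": "Test | Single Image", "format": "single_image", **SHARED_SETTINGS},
    {"name": "Test | Video",        "format": "video",        **SHARED_SETTINGS},
    {"name": "Test | Carousel",     "format": "carousel",     **SHARED_SETTINGS},
]

# Sanity check: every field except name/format must be identical across ad sets.
shared = [{k: v for k, v in s.items() if k not in ("name", "format")} for s in AD_SETS]
assert all(s == shared[0] for s in shared)
```

Writing it down like this (even just in a spreadsheet) makes it obvious when a variable has accidentally drifted between ad sets.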
3. Control All Other Variables
For this test to be valid, the *only* difference between these ad sets must be the ad format. Everything else has to be identical.
-> Budget: Do NOT use CBO (Campaign Budget Optimisation) for this test. Instead, set budgets at the ad set level (Ad Set Budget Optimisation). Give each ad set the exact same daily or lifetime budget. This ensures each format gets the same amount of spend to prove itself.
-> Audience: The targeting for each ad set must be exactly the same. Same locations, same age, same gender, same detailed interests or custom audiences.
-> Placements: Use the same placements for all of them. Don't let one run on Audience Network while another is only on Instagram Feeds.
-> Ad Copy: This one is often overlooked. Your headline, primary text, and call-to-action should be the same across all ad formats. If you change the text on the video ad, you're no longer just testing the format; you're also testing the copy. The idea is to isolate the impact of the visual format itself.
4. Let the Test Run
Patience is a real virtue here. You can't call a winner after 24 hours and £10 of spend. You need to let the ad sets run long enough to gather a sufficient amount of data. What does "sufficient" mean? It depends on your business, but a good rule of thumb is to wait until each ad set has at least 50 conversions. If conversions come in slowly, run the test for longer before you trust the results. You need to give the test enough time and budget to produce a clear winner, otherwise you're just guessing.
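The 50-conversion rule of thumb is really a shorthand for statistical confidence. If you'd rather sanity-check the numbers than go on gut feel, a standard two-proportion z-test does the job — this is a generic statistics sketch with made-up figures, not a Meta tool:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Rough two-proportion z-score for comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative numbers: 50 vs 70 conversions from 2,000 clicks each.
z = two_proportion_z(50, 2000, 70, 2000)
print(round(z, 2))  # → -1.85; |z| below ~1.96 means keep the test running
```

Here a 40% difference in raw conversions still isn't quite significant at the usual 95% level — which is exactly why calling a winner early is so dangerous.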
I recall a campaign where we helped a software client achieve 4,622 registrations at a cost of $2.38 per registration using Meta Ads. Setting up a similar testing framework allowed us to pinpoint the most effective ad format for their specific audience and conversion goals.
You probably should analyse more than just the final conversion...
Once your test has run its course, the job isn't just to look at the "Amount Spent" and "Purchases" columns and declare a winner. The real expertise comes from looking at the *entire* funnel to understand *why* one format beat the others. This tells you a lot about your customers' behaviour.
Here are the metrics I'd compare across your different ad sets:
-> Cost Per Mille (CPM): How much does it cost to show your ad 1,000 times? Video ads often have a higher CPM. This isn't necessarily a bad thing, it's just the entry cost.
-> Click-Through Rate (CTR): What percentage of people who see the ad click on it? This tells you how good the format is at grabbing attention. Carousels often do well here because they are interactive.
-> Cost Per Click (CPC): How much are you paying for each click? A high CTR usually leads to a lower CPC, but not always.
-> Landing Page View Rate: Of the people who click, how many actually wait for your website to load? If this is low, it might be an issue with your site speed, but sometimes a specific ad format can attract more accidental clicks from less interested people.
-> Cost Per Acquisition (CPA) or Return On Ad Spend (ROAS): This is your bottom line. How much does it cost to get a sale or lead? Or what's your return for every pound spent? This is usually the most important metric, but the others above tell you the story of how you got there.
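All of these metrics come from the same four raw numbers, so they're easy to compute yourself if you export the data. Here's a minimal sketch (the example figures are invented, chosen to roughly match the fictional video ad set in the table further down):

```python
def funnel_metrics(spend, impressions, clicks, purchases):
    """Derive the standard comparison metrics from four raw numbers."""
    return {
        "cpm": spend / impressions * 1000,   # cost per 1,000 impressions
        "ctr": clicks / impressions,         # click-through rate
        "cpc": spend / clicks,               # cost per click
        "cpa": spend / purchases if purchases else None,  # cost per purchase
    }

# Hypothetical ad set: £550 spend, 50,000 impressions, 500 clicks, 40 purchases.
m = funnel_metrics(550, 50_000, 500, 40)
print({k: round(v, 2) for k, v in m.items()})
```

Running the same function over each ad set's export gives you a like-for-like comparison table in seconds.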
To make this clearer, imagine you ran your test and got these results. Looking at them in a table can make the story jump out.
| Metric | Ad Set A (Single Image) | Ad Set B (Video) | Ad Set C (Carousel) | What it might mean... |
|---|---|---|---|---|
| CTR (Link) | 1.8% | 1.1% | 2.2% | The Carousel is the best at getting an initial click. The Video is the least effective at getting a click, but that may not be the whole story. |
| CPC (Link) | £0.60 | £1.10 | £0.55 | The Video is much more expensive for each click. The Carousel is the cheapest. This is a direct result of the CTR. |
| Add to Cart Rate (on site) | 3% | 8% | 4% | This is interesting. Visitors from the Video Ad are more than twice as likely to add a product to their cart. They are highly qualified. |
| Cost Per Purchase (CPA) | £33.33 | £13.75 | £13.75 | In this fictional scenario, the Video and Carousel have the exact same winning CPA. The Single Image is a clear loser. Even though the video clicks were expensive, they converted so well that it made up for it. The carousel got cheap clicks that converted at an average rate, leading to the same great result. |
In the scenario above, you'd probably want to run both the Video and Carousel formats going forward, and drop the Single Image. Without analysing the full picture, you might have wrongly dropped the video ad because of its high CPC.
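The arithmetic behind that table is worth making explicit: CPA is just CPC divided by the click-to-purchase rate, so you can back the rate out from the fictional figures and see exactly why the image's cheap clicks still lose:

```python
# Implied click→purchase rate = CPC ÷ CPA, from the fictional table above.
TABLE = [("Single Image", 0.60, 33.33), ("Video", 1.10, 13.75), ("Carousel", 0.55, 13.75)]

implied_rates = {name: cpc / cpa for name, cpc, cpa in TABLE}
for name, rate in implied_rates.items():
    print(f"{name}: {rate:.1%} of clicks become purchases")
```

Roughly 8% of video clicks purchase versus under 2% of image clicks — the video's expensive traffic is more than four times as likely to buy, which is the "whole story" the CPC column hides.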
You'll need to match the format to your objective and funnel stage...
The final layer of complexity is that there's no single "best" ad format. The winner of your test will depend heavily on what you're selling and who you're selling it to. This is where you have to think strategically about what job you want the ad to do.
Single Image Ads: As we've seen in our examples, these are brilliant for getting a simple message across quickly. If you have a visually stunning product, an easily understood offer, or you're retargeting someone with the exact product they looked at, a single image is often the most efficient way to do it. It's direct and to the point.
Video Ads: Video is king when you need to tell more of a story or demonstrate something. For software, showing the interface in action is far more powerful than a screenshot. We've seen great results from User-Generated Content (UGC) style videos—they feel more authentic and do a great job of showing a product being used in a real-world context. A video can pre-qualify viewers, so when they do click, they're much more likely to be serious. This often leads to a higher CPA initially but can result in more qualified leads and a lower cost per lead down the line.
Carousel Ads: These are my go-to for eCommerce clients. You can show a range of different products, or use each card to highlight a different feature or benefit of a single product. They can also be used to tell a mini-story, with each card building on the last. Because they're interactive, they can get great engagement and a high CTR. They are a very versatile ad format.
It's also worth thinking about where your audience is in their journey with you. The best ad format for a completely cold audience is often different from the best format for someone who has already added a product to their cart.
-> Top of Funnel (ToFu - Cold Audiences): These are people who've never heard of you. You need to grab their attention and introduce your brand. An engaging, problem-solving video often works best here. You're trying to educate and build trust.
-> Middle of Funnel (MoFu - Warm Audiences): These people have shown some interest—maybe they've visited your website or watched a video. Here, a carousel showing them different products or a testimonial video can work well to nudge them along.
-> Bottom of Funnel (BoFu - Hot Audiences): This is your retargeting audience of people who have added to cart or initiated checkout. Don't overcomplicate it. A simple, direct single image ad (often a Dynamic Product Ad showing the exact item they abandoned) with an offer like "Free Shipping" is often the most effective way to close the deal.
Testing formats against each other at each stage of this funnel is how you build a truly optimised advertising machine.
So, this is the main advice I have for you:
I've covered a lot of ground here, so I've put together a summary table of the main recommendations for you to follow. This is the exact process we use to determine which creative formats will drive the best results for our clients.
| Recommendation | Detailed Action | Why It's Important |
|---|---|---|
| 1. Implement a Structured Test | Create a new campaign with 3 separate ad sets (Image, Video, Carousel). Use Ad Set Budgets, not CBO. Keep all other variables (audience, text, offer, placements) identical across all ad sets. | This is the only scientifically reliable way to isolate the impact of the ad format and get clean data. It removes the guesswork and the bias of Facebook's algorithm, giving each format a fair chance. |
| 2. Analyse the Full Funnel | Don't just look at the final CPA or ROAS. Compare CTR, CPC, and crucial on-site metrics like Add to Cart Rate or Lead Form Conversion Rate for each format. | This tells you the full story of customer behaviour. A format might have a high CPC but lead to more qualified traffic and a better final result, which you'd miss if you only looked at one surface-level metric. |
| 3. Test by Funnel Stage | Recognise that the winning format might be different for cold audiences (ToFu) versus hot retargeting audiences (BoFu). Consider running separate tests for each. | Different formats serve different purposes. A video that works to introduce your brand to a new customer might be overkill for someone who just needs a reminder to complete their purchase. This maximises efficiency. |
| 4. Iterate and Scale the Winner | After running the test long enough to get sufficient data (e.g., at least 50+ conversions per ad set if possible), confidently pause the losing formats and scale up the budget for the winner(s). | Don't let underperforming ads drain your budget based on a hunch. Systematically find what works and then put your money behind it to improve your overall campaign performance and profitability. |
Following this process will move you from guessing what works to knowing what works. While the steps themselves are straightforward, the real value often comes from experience in interpreting the results and deciding what to test next. For example, we've managed campaigns where we've reduced a client's Cost Per Lead by over 84% simply by testing different formats and finding the right combination for their audience.
It's not just about setting up the test; it's about knowing what hypotheses to test in the first place, how to read the nuances in the data, and how to use those insights to build a bigger, better strategy. This kind of ongoing, structured testing is the bedrock of any successful paid advertising campaign.
This is the sort of deep dive we would typically do during an initial consultation. If you'd ever like a second pair of expert eyes to go through your account and strategy in more detail, we offer a free, no-obligation review where we can give you some more specific pointers. Just let me know.
Hope this helps!
Regards,
Team @ Lukas Holschuh