Hi there,
Thanks for reaching out!
That's a sharp question, and it gets right to the heart of how to test and scale campaigns properly on platforms like Meta. A lot of advertisers get this wrong, waste a load of money, and then wonder why their ads aren't working. They either change too many things at once or are too scared to touch anything that seems to be working.
The short answer is no, it is absolutely not bad to use the same creative across multiple ad sets. In fact, it's the correct and most methodical way to figure out what truly works. The real issue isn't about reusing creatives, but about having a robust framework to test them in a way that gives you clear, actionable data. I'll walk you through my thoughts on how to build that framework, moving from guesswork to a predictable system.
TL;DR:
- Reusing creatives is essential: Using the same creative across different ad sets isn't bad; it's the foundation of A/B testing. It allows you to isolate the audience as the variable, so you know exactly why performance changes.
- Structure is everything: Don't just test random audiences. Organise your account into a ToFu (Top of Funnel), MoFu (Middle of Funnel), and BoFu (Bottom of Funnel) structure. This lets you test new audiences systematically while retargeting engaged users effectively.
- Audience quality over quantity: Prioritise your audience testing. Start with audiences closest to your desired conversion event (e.g., lookalikes of purchasers) before moving to broader interests. A structured approach is much better than throwing everything at the wall.
- Performance is more than clicks: Stop obsessing over cheap clicks. The only metrics that matter are Cost Per Acquisition (CPA) and Return On Ad Spend (ROAS). Use the quick calculation further down to understand how much you can truly afford to pay for a customer.
- Your original ad set is safe: Testing a winning creative elsewhere won't negatively impact the original ad set's performance in any meaningful way. Audience overlap is a factor at huge scale, but for most advertisers, it's a non-issue.
We'll need to look at the core principle: Isolating Variables...
Think of your advertising like a scientific experiment. If you want to know if a new fertiliser helps plants grow, you don't change the fertiliser, the amount of water, and the amount of sunlight all at the same time. You'd have no idea what caused the change. You change just one thing—the fertiliser—and keep everything else the same. That's the only way to get a clean result.
Advertising is exactly the same. The two biggest variables you control are:
- The Creative: The ad itself (the image/video and the copy).
- The Audience: Who you show the ad to.
When you have a creative that's performing well in one ad set, you've found a potentially winning 'message'. The logical next step is to see which other groups of people (audiences) also respond to this message. To do that, you must keep the creative identical and only change the audience. This is called audience testing.
The common mistake is to do the opposite. An advertiser will create a new ad set for a new audience and, at the same time, create a brand new ad for it. When it fails (or succeeds), they have no idea why. Was it the new audience? Or was it the new ad? They've learned nothing. By using your proven creative in new ad sets, you are correctly isolating the audience as the variable. You are testing the question: "Does this message that works for Audience A also work for Audience B, C, and D?" This is how you find pockets of scalability.
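To make the test design concrete, here's a minimal sketch of the setup. Every name in it is invented for illustration; the point is simply that the creative stays fixed while the audience varies:

```python
# One proven creative held constant; only the audience changes per ad set.
# All names here are illustrative, not real Meta objects.
winning_creative = "UGC video v3 + 'free shipping' copy"

test_audiences = [
    "Audience A (original ad set, the control)",
    "Audience B: LLA 1% purchasers",
    "Audience C: Interest - Shopify",
    "Audience D: Interest - WooCommerce",
]

# Each ad set pairs the SAME creative with a DIFFERENT audience.
for audience in test_audiences:
    print(f"Ad set -> {audience} | creative: {winning_creative}")
```

If performance differs between ad sets, you know the audience caused it, because nothing else changed.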
Your second question, about it impacting the original ad set's performance, is a valid concern but largely unfounded for 99% of advertisers. The theoretical issue is 'audience overlap'. If you target two very similar audiences (e.g., 'people who like Nike' and 'people who like Adidas'), some users will be in both. This means your own ad sets are, in a tiny way, competing against each other in the auction. However, Meta's algorithm is smart enough to manage this, and unless you're spending tens of thousands per day on nearly identical audiences, the impact is negligible. The value of the data you get from testing far outweighs any tiny increase in cost from overlap. Don't let theoretical problems stop you from practical testing.
I'd say you need a proper campaign structure...
Randomly testing audiences is better than not testing at all, but it's still inefficient. To do this professionally, you need a structure that separates users based on their awareness of your brand. We call this a funnel-based approach: Top of Funnel (ToFu), Middle of Funnel (MoFu), and Bottom of Funnel (BoFu).
- ToFu (Top of Funnel): This is for reaching cold audiences—people who have never heard of you. This is where you'll do most of your audience testing with your winning creatives. You're trying to find new customer pools.
- MoFu (Middle of Funnel): This is for re-engaging people who have shown some interest but haven't gone deep. They might have watched a video or visited your website. You're warming them up.
- BoFu (Bottom of Funnel): This is for closing the deal with people who are close to converting. They've added a product to their cart or initiated a checkout. These are your hottest prospects.
Having separate campaigns for each part of this funnel allows you to control your budget and messaging precisely. You wouldn't show a "20% Off Your First Order!" ad to someone who has never heard of you, but you absolutely would show it to someone who abandoned their cart.
This structure provides the perfect playground for your testing. Your winning creative gets tested across various ToFu audiences. Once people engage, they are automatically moved into your MoFu and BoFu campaigns to receive different, more direct messaging. It becomes a machine for converting strangers into customers.
| Funnel Stage | Audiences |
|---|---|
| ToFu | Cold Audiences (Interests, Lookalikes) |
| MoFu | Warm Audiences (Website Visitors, Video Viewers) |
| BoFu | Hot Audiences (Add to Cart, Checkout Initiated) |
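If it helps to see it laid out in one place, here's a rough sketch of how that structure might look. The audience windows and messages are examples of the kind of thing you'd use, not fixed rules:

```python
# Hedged sketch: each funnel stage pairs an audience pool with its own message.
funnel = {
    "ToFu": {
        "audiences": ["Lookalikes of purchasers", "Researched interests"],
        "message": "Introduce the brand; test winning creatives here",
    },
    "MoFu": {
        "audiences": ["Website visitors (30 days)", "Video viewers (50%+)"],
        "message": "Build trust: testimonials, feature highlights",
    },
    "BoFu": {
        "audiences": ["Added to cart / initiated checkout (7-14 days)"],
        "message": "Close the sale: reminders, direct offers",
    },
}

for stage, config in funnel.items():
    print(f"{stage}: {', '.join(config['audiences'])} -> {config['message']}")
```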
You probably should prioritise your audiences systematically...
Okay, so you have a structure. Now, which audiences do you test first? This is another area where people waste money. They test obscure interests while ignoring the highest-potential audiences available to them.
You should always prioritise audiences based on how closely they resemble your ideal customer. The closer they are to the money, the better they will usually perform. Here is the hierarchy I use when building out campaigns, especially for eCommerce, though the principle applies to any business.
ToFu (Cold) Audiences - In Order of Priority:
- High-Value Lookalikes: The best audience you can possibly target is a Lookalike of your existing customers, specifically your *best* customers (e.g., highest LTV). This is literally telling Meta "go find more people exactly like the ones who already give me the most money." This should always be your first test; see the sketch after this list for one way to build that seed. If you don't have enough data for that, move down the list.
- Lower-Funnel Lookalikes: Next, create Lookalikes of people who have taken actions closer to a purchase, like 'Initiated Checkout' or 'Added to Cart'. These are still very high-intent audiences.
- Detailed Targeting (Interests/Behaviours): This is what most people think of as "targeting." The trick here is to be specific. For example, targeting the interest 'Amazon' to find eCommerce store owners is a common mistake because it's too broad. You'd be better off targeting interests like 'Shopify', 'WooCommerce', or followers of eCommerce-focused publications. You need to think, "What interest is a strong signal that this person is my ideal customer, and not just a member of the general public?"
- Broad Targeting: Once your pixel has seasoned with thousands of conversion events, you can sometimes test 'broad' targeting (no interests, just age/gender/location). This gives the algorithm maximum freedom to find buyers based on its data. Do not start with this; it only works on mature accounts.
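As promised in the first point above, here's a rough sketch of how you might build that high-LTV seed list, assuming you can export your orders as a CSV. The file name and column names are hypothetical; adapt them to your own export:

```python
# A minimal sketch: find your highest-LTV customers to seed a lookalike.
# 'orders.csv' and its columns are hypothetical; adapt to your own data.
import pandas as pd

orders = pd.read_csv("orders.csv")  # expects 'email' and 'order_value' columns

# LTV per customer = the sum of everything they've ever spent with you.
ltv = orders.groupby("email")["order_value"].sum()

# Keep the top 25% of customers by LTV as the lookalike seed.
top_customers = ltv[ltv >= ltv.quantile(0.75)]
top_customers.reset_index().to_csv("high_ltv_seed.csv", index=False)

print(f"Seed list: {len(top_customers)} customers out of {ltv.size}")
```

Upload the resulting file as a customer list Custom Audience, then build your Lookalike from it.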
MoFu/BoFu (Retargeting) Audiences - In Order of Priority:
These aren't for testing as much as they are for converting. You should always have these campaigns running. They are your safety net, catching interested people and bringing them back.
| Funnel Stage | Audience | Why It's a Priority |
|---|---|---|
| BoFu (Hot) | Added to Cart / Initiated Checkout (in last 7-14 days) | Highest purchase intent. These people were one step away from buying. A simple reminder or offer can close the sale. |
| MoFu (Warm) | Viewed Product / Visited Website (in last 30 days) | They've shown direct interest in what you offer. They need more convincing, perhaps with testimonials or feature highlights. |
| MoFu (Cool) | Video Viewers (e.g., 50%+) / Social Engagers | Lower intent than website visitors, but they are brand-aware. Good for showing them more value-driven content. |
You'll need to understand what 'performance' really means...
Let's go back to your concern about "impacting performance." What performance metric are you worried about? Clicks? Likes? Comments? None of that matters. The only performance that matters is the one that puts money in your bank account.
A creative might get a fantastic Click-Through Rate (CTR) and low Cost Per Click (CPC) in one audience because that audience is full of 'clicky' people who engage with everything but never buy. In another audience, the same creative might get a lower CTR and higher CPC, but every single click converts into a high-value sale. Which ad set has better 'performance'? Obviously the second one.
You must shift your mindset from chasing vanity metrics to optimising for profit. The two most important metrics are:
- Cost Per Acquisition (CPA): How much does it cost you to get one customer?
- Return On Ad Spend (ROAS): For every £1 you put into ads, how many pounds in revenue do you get back?
To really understand what you can afford for your CPA, you need to know your Customer Lifetime Value (LTV). One campaign we worked on for a B2B SaaS client had leads costing $22 each. On the surface, that might seem expensive to some. But their LTV was over $5,000. Paying $22 to acquire a $5,000 customer is a trade you should make all day long. Without knowing your numbers, you're flying blind.
Rather than an interactive calculator, here's a simple way to run the numbers yourself. Play around with them and see how a small increase in revenue from a better audience can dramatically improve your profitability, even if the ad spend stays the same. This is why testing is so vital.
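A minimal sketch with made-up figures; swap in your own spend, customer count, and revenue:

```python
# Stand-in for an interactive ROAS/CPA calculator. All figures are examples.
def cpa(ad_spend: float, customers: int) -> float:
    """Cost Per Acquisition: what one new customer costs you."""
    return ad_spend / customers

def roas(revenue: float, ad_spend: float) -> float:
    """Return On Ad Spend: revenue back for every £1 put in."""
    return revenue / ad_spend

spend, customers, revenue = 1000.0, 20, 3000.0
print(f"CPA:  £{cpa(spend, customers):.2f}")   # £50.00 per customer
print(f"ROAS: {roas(revenue, spend):.1f}x")    # 3.0x

# The SaaS example above: a $22 lead against a $5,000 LTV.
print(f"LTV to cost-per-lead ratio: {5000 / 22:.0f}:1")  # roughly 227:1
```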
I've detailed my main recommendations for you below:
To bring this all together, here is a simple, actionable plan for taking your winning creative and using it to find new, profitable audiences. This process removes guesswork and builds a repeatable system for growth.
| Step | Action | Why It Matters |
|---|---|---|
| 1. Identify Winner | Review your existing ad sets. Identify the single creative with the best business metric (lowest CPA or highest ROAS), not vanity metrics. This is your 'control' creative. | Ensures you are scaling what is already proven to make you money, not just what gets clicks. This is your benchmark. |
| 2. Select Test Audiences | Choose 3-5 new, distinct ToFu audiences to test against. Use the prioritisation list: start with high-value lookalikes if possible, then move to specific, well-researched interests. | This provides a structured way to find new customer pools. Keeping the audiences distinct prevents major overlap and gives you cleaner data. |
| 3. Structure the Test | Create a new Campaign Budget Optimisation (CBO) campaign. Inside it, create one ad set for each of your 3-5 test audiences. In each ad set, use the *exact same* winning creative from Step 1. | This is the perfect scientific setup. The CBO setting lets Meta's algorithm allocate budget to the best-performing audience automatically. The only variable between ad sets is the audience. |
| 4. Analyse & Iterate | Let the campaign run until each ad set has spent at least 1-2x your target CPA. Analyse the results based on CPA/ROAS. Turn off the losing ad sets. The winners become your new evergreen audiences. | This data-driven approach removes emotion. You keep what works, kill what doesn't, and now you have a new winning creative/audience combination to try and beat in the next test. |
Following this process will definitely change the way you approach your advertising. You'll move from a state of uncertainty to one of clarity, where every pound spent is an investment in learning and growth. It takes discipline, but it's the only way to build a scalable and profitable advertising engine.
While these principles are straightforward, implementing them consistently—doing the research for audiences, analysing the data correctly, and knowing when to make changes—takes a significant amount of time and experience. It's what we do for our clients every day, allowing them to focus on running their business while we focus on growing it.
If you'd like to have an expert pair of eyes on your ad account, we offer a free, no-obligation initial consultation where we can review your current strategy and provide some actionable insights. It's often the quickest way to spot opportunities and fix costly mistakes.
Regards,
Team @ Lukas Holschuh