Hi there,
Thanks for reaching out! I saw your post and thought I'd share some thoughts on it. It's a really common question, and getting the process for testing and scaling creatives right is one of the biggest things that separates campaigns that do okay from ones that really fly. What you're asking about is right at the heart of managing Meta ads properly.
You've basically stumbled upon a classic paid ads dilemma: do you isolate new variables (creatives, in this case) in a new ad set, or do you add them to a proven environment? The short answer for Meta is that your mate is generally on the right track. Adding the new winning ad into the currently winning ad set and turning off the underperformers is almost always the better approach. The method the TikTok rep suggested is more of a TikTok-specific thing, and trying to apply it to Meta can cause a few problems.
I'm happy to walk you through why that is and give you a more robust framework for thinking about this. This goes a bit deeper than just where to put a new ad; it's about setting up your account for stable, scalable growth. Getting this right is how we've been able to achieve strong results for clients.
We'll need to look at your campaign structure...
First up, let's talk about the main reason why your current method feels like it's working but is actually holding you back. When you create a new ad set with the exact same targeting as an existing one, you're making them compete against each other. This is called audience overlap.
Essentially, you're telling Meta to enter two of your own ad sets into the same auction to try and reach the same person. This drives up your own costs because you're bidding against yourself. It also splits your data. The algorithm learns based on the performance data within a single ad set. When you split that data across two, three, or more ad sets, each one learns slower and less efficiently. You're making it harder for Meta to optimise for you.
The reason your new ad set "overtakes" the old one is most likely that Meta's algorithm initially favours the new ad set (what we sometimes call 'new ad energy'), or simply that budget gets fragmented and performance becomes unstable in both. You shut down the old one, but then the cycle repeats when you launch the next new creative. It's a constant churn that prevents you from ever building up a strong, stable ad set with lots of performance history that the algorithm can really work with.
A much better way is to consolidate. Think of your ad set as the container for your audience targeting, and the ads inside it as the messages you're testing on that audience. Your goal should be to find the best combination of audience and creative. By keeping the audience consistent in one ad set, you can more reliably test which creatives resonate best with them.
This is where Campaign Budget Optimisation (CBO) comes in really handy. Are you using it? If not, you should probably consider it for your main scaling campaigns. With CBO, you set the budget at the campaign level, and Meta automatically distributes it to the best-performing ad sets within that campaign. This principle also works at the ad set level - Meta will automatically spend more on the best-performing ads within an ad set. By keeping your winning creatives in one ad set, you let the algorithm do the heavy lifting of figuring out which ad to show to get the best results, rather than you manually trying to balance budgets between two identical ad sets. It’s just much more efficient.
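As a side note, if you (or a developer) ever set this up programmatically, the structure is the same: the budget sits at the campaign level and Meta distributes it across the ad sets inside. Here's a minimal sketch assuming the facebook-business Python SDK; the access token, account ID, objective and bid values are placeholders and vary by API version, so treat it as an illustration rather than copy-paste code:

```python
# Minimal sketch: a campaign with a campaign-level (CBO) budget via the
# facebook-business Python SDK. All IDs/tokens are placeholders and the
# objective/bid values depend on your Marketing API version.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")   # placeholder
account = AdAccount("act_YOUR_AD_ACCOUNT_ID")           # placeholder

campaign = account.create_campaign(params={
    "name": "Scaling - Prospecting - CBO",
    "objective": "OUTCOME_SALES",             # version-dependent value
    "status": "PAUSED",                       # review before switching on
    "special_ad_categories": [],
    # Setting the budget here, at the campaign level, is what makes it CBO:
    # Meta then shifts spend towards the best-performing ad sets inside it.
    "daily_budget": 10000,                    # minor currency units (e.g. 100.00)
    "bid_strategy": "LOWEST_COST_WITHOUT_CAP",
})
print(campaign)
```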
I'd say you need a proper testing framework...
Your question suggests you're already thinking about testing, which is great. But let's refine that process. Instead of just picking a "winner" and creating a new ad set, you should have a more structured approach. A lot of advertisers separate their testing from their scaling.
Here’s a simple way to think about it:
1. Your 'Scaling' Campaign: This is your main, always-on campaign. It uses CBO, contains your best-performing, 'proven' ad sets (based on your audiences), and within those ad sets, your 'proven' winning ads. This campaign should be pretty stable and you shouldn't mess with it too much.
2. Your 'Testing' Campaign: This is where you experiment. You can run this with a smaller budget. Here, you can create new ad sets to test completely new audiences, or you can create an ad set that mirrors your winning audience from the scaling campaign and use it to test a batch of new creatives against each other. Some people use ABO (Ad Set Budget Optimisation, where the budget sits at the ad set level) for testing campaigns to ensure each variable gets a fair amount of spend.
The process would look something like this:
-> You come up with 3-5 new creative ideas (videos, images, new copy angles).
-> You launch them in your testing campaign, inside an ad set targeting your best audience.
-> You let them run until they've each had enough impressions to make a decision (this depends on your budget and CPA, but you need statistically significant data; there's a rough sketch of how to check this just after this list). Don't make a decision after just one day or a handful of conversions.
-> You analyse the results. Look at your main KPI (like CPA or ROAS), but also look at secondary metrics. Is a creative getting a really high click-through rate (CTR) but not converting? Maybe the message is good but the landing page doesn't match. Is one getting great engagement? It could be good for retargeting.
-> Once you have a clear winner (or two) from the test, then you introduce that proven creative into your main 'Scaling' campaign. You add it to the relevant, existing, winning ad set. At the same time, you look at the ads already in that scaling ad set and pause the one that's performing the worst, usually because it's suffering from creative fatigue (performance declining as the audience sees the same ad too many times).
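A quick note on the "statistically significant data" step above. A rough sanity check you can run yourself is a two-proportion z-test comparing the conversion rates of two creatives. The sketch below is plain Python with made-up numbers, just to illustrate the idea; it doesn't replace judgement about spend levels and time windows, but it does stop you crowning a winner off a handful of conversions:

```python
# Rough sketch: two-proportion z-test comparing the conversion rates of two ads.
# The numbers are made up - plug in your own clicks and conversions.
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for conversion rate A vs B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal CDF
    return z, p_value

# Ad A: 52 purchases from 3,100 clicks. Ad B: 28 purchases from 2,950 clicks.
z, p = z_test_two_proportions(52, 3100, 28, 2950)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below ~0.05 suggests a real difference
```

If the p-value comes back high, the honest answer is "keep spending on the test" rather than "pick whichever ad happens to be ahead today".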
This way, you're continuously cycling fresh, proven creatives into your main campaign without disrupting its performance with untested ads or messing up the structure by creating duplicate ad sets. It's a system for continuous improvement. For a subscription box client, this systematic testing and cycling of creatives inside their core campaign helped us achieve a 1000% Return On Ad Spend. The structure enabled the creative to work its magic.
You probably should think about your funnel...
This brings me to a bigger point. Not all creatives are created equal, because not all audiences are the same. A "winning" ad for people who've never heard of you before is probably going to be very different from a "winning" ad for someone who has already visited your site and added a product to their cart.
This is why structuring your campaigns around a marketing funnel is so important. A typical funnel on Meta looks like this:
-> Top of Funnel (ToFu): This is your prospecting. You're reaching cold audiences – people who don't know your brand. Here you'd use your detailed targeting (interests, behaviours) and lookalike audiences. Your creatives need to grab attention, introduce the problem you solve, and establish your brand.
-> Middle of Funnel (MoFu): This is your warm audience. People who have engaged with you somehow but haven't gone deep into your site. Maybe they watched 50% of your video ad or visited your landing page. You're retargeting them. The creative here is about building more trust and showing them more about your product or service.
-> Bottom of Funnel (BoFu): This is your hot audience. People who have shown strong intent, like adding a product to the cart, initiating checkout, etc. You're retargeting them to get them over the finish line. Creatives here can be more direct, maybe featuring a discount, testimonials, or reminding them of the specific product they looked at (using Dynamic Product Ads).
When you talk about a "winning ad," you need to ask: winning for which part of the funnel? An ad with a strong call-to-action and a discount code might be a huge winner in your BoFu retargeting, but it would likely fail with a cold ToFu audience who have no idea who you are. Your creative testing should be done within the context of the funnel. You should be testing different ads for your ToFu, MoFu, and BoFu audiences.
I often see people just lump all their retargeting audiences together. That's a mistake. The person who just visited your homepage needs a different message than the person who abandoned a full shopping cart. Getting this granular is how you make your budget work much harder.
You'll need to understand audience prioritisation...
So, if you're going to structure your campaigns by the funnel, you need to know which audiences to prioritise. When we take on a new account, this is one of the first things we map out. For an eCommerce account, the priority we generally follow looks like this. The further down the funnel the audience is, the higher the priority because they are more likely to convert.
| Funnel Stage | Audience Type | Example Audiences (in order of priority) |
|---|---|---|
| BoFu (Bottom) | Hot Retargeting | Added to Cart (but not purchased); Initiated Checkout (but not purchased); Previous Customers (for repeat purchases) |
| MoFu (Middle) | Warm Retargeting | Website Visitors (excl. purchasers); Video Viewers (e.g. viewed 50% or more); Social Media Engagers (IG/FB) |
| ToFu (Top) | Cold Prospecting | Lookalikes of your best customers (e.g. LAL of Purchasers); Lookalikes of high-intent actions (e.g. LAL of Add to Cart); Detailed Targeting (Interests/Behaviours) |
You start by building audiences at the bottom of the funnel first, as they will give you the quickest and highest return. Then you work your way up. When testing creatives, you test them against the relevant audience. You'd test a direct-response, "buy now" ad on your BoFu audience, and a brand story video on your ToFu audience.
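To make that priority order and the exclusions concrete, here's how I'd sketch the plan as a simple config. This is plain Python and purely illustrative (the audience names, lookback windows and exclusions are examples, not anything from your account); the point is that each stage explicitly excludes the stages below it, so a homepage visitor and a cart abandoner never see the same message:

```python
# Illustrative funnel plan, ordered by priority (bottom of funnel first).
# Audience names, lookback windows and exclusions are examples only.
FUNNEL_PLAN = [
    {
        "stage": "BoFu",
        "audience": "Added to Cart / Initiated Checkout (last 14 days)",
        "exclude": ["Purchasers (last 180 days)"],
        "creative_angle": "Direct response: reminder, testimonial, offer",
    },
    {
        "stage": "MoFu",
        "audience": "Site visitors, 50%+ video viewers, IG/FB engagers (last 30 days)",
        "exclude": ["Added to Cart / Initiated Checkout (last 14 days)",
                    "Purchasers (last 180 days)"],
        "creative_angle": "Trust-building: product education, social proof",
    },
    {
        "stage": "ToFu",
        "audience": "Lookalike of Purchasers + interest/behaviour targeting",
        "exclude": ["All site visitors (last 30 days)",
                    "Purchasers (last 180 days)"],
        "creative_angle": "Attention-grabbing: introduce the problem and the brand",
    },
]

for tier in FUNNEL_PLAN:
    print(f"{tier['stage']}: target {tier['audience']} | exclude {', '.join(tier['exclude'])}")
```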
And on that note, about the TikTok rep. They aren't necessarily wrong, but they're talking about a different platform. TikTok's algorithm seems to favour newness and its 'learning phase' works differently. Ad formats and user behaviour are also completely different. On TikTok, ad fatigue can set in incredibly fast, so constantly launching new ad sets might be their way of forcing the algorithm to find new pockets of users. But on Meta, the algorithm is more mature and benefits from consolidation and rich data history within an ad set. Applying TikTok's advice to Meta is a classic case of a square peg in a round hole. What works on one platform rarely works the same way on another. We've run campaigns across Meta, TikTok, Apple and Google for clients and they all need their own tailored approach.
This is the main advice I have for you:
I know that's a lot to take in, so I've put the main recommendations into a table for you. This is a basic blueprint for a more professional and scalable Meta Ads setup. It's the kind of structure that lets you grow without constantly having to reinvent the wheel.
| Area of Focus | Recommended Action | Why It's Better |
|---|---|---|
| Campaign Structure | Stop creating new ad sets for winning ads. Consolidate into a single scaling campaign, ideally using CBO. | Prevents audience overlap, avoids you bidding against yourself, concentrates data for faster learning, and lets the algorithm optimise budget more effectively. |
| Creative Management | Add new, tested, winning creatives into your existing, proven ad sets. Pause the worst-performing ad in that ad set to make room. | Keeps your best ad sets fresh without resetting the learning phase. Creates a sustainable system for managing creative fatigue and continuous improvement. |
| Testing Process | Use a separate, dedicated 'Testing Campaign' with a smaller budget to find your next winning creatives before they go into your main scaling campaign. | Protects the performance and stability of your main campaign. Allows for controlled experiments to get clear data on what works and what doesn't. |
| Audience Strategy | Structure your campaigns and ad sets around the ToFu, MoFu, and BoFu funnel. Prioritise audiences based on their intent level. | Ensures you're showing the right message to the right person at the right time. This dramatically increases conversion rates and overall ROAS. |
Implementing a structure like this can feel like a bit of work upfront, but it pays off massively in the long run. It gives you clarity on what's working and why, and it turns your ad account from a chaotic mess of ad sets into a predictable, scalable machine. I remember one B2B software client where we generated 4,622 registrations at just $2.38 each using Meta ads. This was only possible because we had a tight funnel structure and a rigorous creative testing process feeding it.
It's clear you're already on the right path by thinking critically about your process. The next step is to elevate that process into a full strategy. While the principles are what I've laid out here, the real skill comes in the execution: choosing the right interests, crafting compelling lookalikes, writing copy that converts, producing creatives that stop the scroll, and analysing the data to make the right decisions day-in and day-out. It's a full-time job.
This is often where bringing in an expert can make a huge difference. We live and breathe this stuff every day, so we can implement these advanced structures quickly and manage them efficiently to get you the best possible results from your ad spend.
If you'd like to go over your account and strategy in more detail, we offer a free initial consultation. We can have a look together and I can give you some more specific pointers based on what I see. No strings attached, just a helpful chat.
Hope this helps!
Regards,
Team @ Lukas Holschuh