Hi there,
Thanks for reaching out!
I’ve had a look at the situation with your Meta campaigns, and I’m happy to give you some initial thoughts. It's a really common problem to see high click-through rates but no actual leads, and the cause is often not what you'd expect. It’s less about ad fatigue and more about the fundamental structure of your testing strategy.
Let's get this sorted.
TL;DR:
- Your testing method of quickly picking "winners" from 8 identical ad sets is the main reason for the unstable results. It prevents the algorithm's learning phase from completing and relies on random luck.
- A high CTR is often a vanity metric. Clicks don't equal leads. You might be attracting people who click on everything but never convert. You need to focus on lead form submission rate instead.
- The "link clicks" metric is misleading. It includes clicks to your profile, shares, and comments, not just clicks that open the lead form. The metric you really care about is "Leads".
- The most important advice is to simplify your campaign structure. Use fewer ad sets with distinct audiences, give them enough budget and time to gather meaningful data, and then make decisions based on Cost Per Lead, not CTR.
- This letter includes a diagnostic funnel to pinpoint where you're losing people and a simple calculation to work out how much you can actually afford to pay per lead.
The real problem isn't ad fatigue; it's your testing method...
Okay, let's get straight to the point. Your theory about targeting people who've already seen the ad is plausible, but it's not the real issue here. The root of the problem is the way you’re testing.
This approach of duplicating lots of ad sets, running them on a tiny budget for a couple of days, and then picking a "winner" is a very common 'hack' you see online. The idea is to game the "randomness" of the algorithm. But in reality, it does the opposite. It actively sabotages your campaigns and leads to exactly the kind of unstable performance you're seeing now.
Here’s why: Meta's algorithm has a "learning phase". During this period, it needs to gather about 50 conversions (in your case, leads) per ad set in a week to understand who your ideal customer is. It's exploring, testing different pockets of your audience, and figuring out who is most likely to fill out your form. When you run 8 ad sets, each with a daily budget that barely covers the cost of a single lead, you're not giving any of them enough data or time to even begin learning properly. You're starving the algorithm of the information it needs to do its job.
So what happened? Those one or two "best performing" ad sets didn't win because they were genuinely better. They won because of random luck. In a tiny dataset over 48 hours, one ad set might randomly be shown to two people who were going to click anyway, while another wasn't. You then took this random fluctuation, declared it a winner, and shut down the others. When you gave the "winner" more budget, the luck ran out, and its true (poor) performance was revealed. You haven't found a winning ad set; you've just scaled up a random one.
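If it helps to see how easily a tiny test crowns a false winner, here's a minimal simulation sketch in Python. Everything in it is a made-up assumption (200 impressions per ad set, a 1% true lead rate), and all eight ad sets are identical by construction:

```python
import random

# Assumption: 8 identical ad sets, ~200 impressions each over 48 hours,
# every person converting to a lead with the same true probability of 1%.
TRUE_LEAD_RATE = 0.01
IMPRESSIONS_PER_AD_SET = 200
AD_SETS = 8

random.seed(1)  # fix the seed so the illustration is repeatable

leads = [
    sum(random.random() < TRUE_LEAD_RATE for _ in range(IMPRESSIONS_PER_AD_SET))
    for _ in range(AD_SETS)
]
print("Leads per ad set:", leads)
print("Apparent 'winner': ad set", leads.index(max(leads)) + 1, "with", max(leads), "leads")
# Even though all eight ad sets are identical, one of them will usually show
# 2-4x the leads of another purely by chance at this sample size.
```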
Ad fatigue is a real thing, but it happens after an audience has seen your ad many times over a longer period. It wouldn't cause a complete drop-off to zero leads overnight, especially when your CTR is actually increasing. The problem is the flawed foundation of the campaign itself.
We'll need to look at why 'High CTR' is lying to you...
This brings me to the next point. Everyone loves a high CTR, but in your case, it's a trap. It’s making you think the ad is working when it's not. A high CTR with no conversions usually points to one of two things:
- You're attracting the wrong people. The algorithm, now with a bigger budget, might be optimising for clicks because it can't find people who will convert into leads. It finds the path of least resistance, which is serving your ad to people who are habitual clickers. They click on lots of ads but rarely follow through with an action. They're curious, but not qualified. This would explain why your CTR went up while your leads dropped to zero.
- There’s a major drop-off after the click. Your ad creative is clearly getting attention (which is good!), but something is stopping people from completing the next step. With on-Facebook lead forms, the friction point is the form itself.
It's also important to understand that the "link clicks" metric on Meta is very broad. It counts clicks on any link in the ad, which could be a click to your Facebook page, a click to expand the text, a share, or a comment. The metric that matters is "Leads". If you want to see how many people opened the form, you'd need to look at a metric like "Lead form views". The flowchart below shows where things can go wrong.
Ad Impression (user sees the ad) → Link Click (user clicks somewhere) → Form Opens (user sees the form) → Lead Submitted (user converts)
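To see where your own drop-off is, you can pull the count at each stage from Ads Manager and compare consecutive steps. A minimal sketch in Python; the figures below are placeholders, not your data:

```python
# Hypothetical weekly figures for one ad set -- replace with your own exports.
funnel = {
    "Impressions": 12000,
    "Link clicks": 480,
    "Lead form views": 150,
    "Leads": 6,
}

stages = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
    rate = (count / prev_count * 100) if prev_count else 0.0
    print(f"{prev_name} -> {name}: {rate:.1f}% carried through")

# A steep drop between "Link clicks" and "Lead form views" suggests low-quality
# clicks; a steep drop between "Lead form views" and "Leads" points at the form.
```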
I'd say you need to rebuild your campaign for stable results...
To fix this, you need to stop trying to outsmart the algorithm and start working with it. This means building a simple, logical campaign structure that allows for proper learning and gives you reliable data to make decisions.
Forget the 8 ad sets. I'd recommend starting with one campaign and just 2 to 4 ad sets. The key is that each ad set should target a genuinely different audience. This allows you to conduct a real test to see which type of audience responds best to your offer, rather than testing the same audience 8 times.
Here’s a structure I’d suggest starting with:
| Level | Setup Details | Rationale |
|---|---|---|
| Campaign | Objective: Leads. Optimisation: On-Facebook Lead Forms. Budget: Campaign Budget Optimisation (CBO) ON. | This centralises the budget and lets Meta automatically allocate more spend to the best-performing ad set over time, once it has enough data. |
| Ad Set 1 | Audience: Broad Targeting (Men). Details: Location, Age, Gender (Men). No interests. | Once your pixel has data, broad targeting can work extremely well. It gives the algorithm maximum freedom to find converters. |
| Ad Set 2 | Audience: Interest Stack (Men). Details: Group 5-10 highly relevant, related interests together. | A clean test of a specific interest group against a broad audience. This tells you if layering interests helps or hinders the algorithm. |
| Ad Sets 3 & 4 | Replicate Ad Sets 1 and 2, but for Women. | This maintains your split-test by gender, but within a much more robust and logical structure. |
| Ads | Use your 2-3 best performing ad creatives inside EACH ad set. | This ensures you're testing audiences, not creatives. Consistent ads across all ad sets mean the only major variable is the audience itself. |
The most important part of this is patience. You need to give this new campaign a proper budget (I’d recommend at least 3-5x your target CPL per day) and let it run for at least 5-7 days without touching it. Let it exit the learning phase. Only then can you look at the Cost Per Lead for each ad set and make an informed decision about which one to scale and which to turn off.
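To put rough numbers on "a proper budget", here's a minimal sketch combining the 3-5x-CPL rule of thumb with the learning phase's ~50 leads per ad set per week mentioned earlier. The £20 target CPL and 3 ad sets are assumptions for illustration only:

```python
TARGET_CPL = 20.0             # assumption: £20 target cost per lead
AD_SETS = 3                   # the simplified structure suggested above
LEARNING_LEADS_PER_WEEK = 50  # rough learning-phase target per ad set

# Rule of thumb: daily campaign budget of at least 3-5x your target CPL.
min_daily = 3 * TARGET_CPL
comfortable_daily = 5 * TARGET_CPL

# What the learning phase would ideally want across all ad sets -- usually far
# more than a small test budget, which is why tiny 48-hour tests can't learn.
learning_ideal_daily = LEARNING_LEADS_PER_WEEK * TARGET_CPL * AD_SETS / 7

print(f"Minimum daily budget:     £{min_daily:.0f}")
print(f"Comfortable daily budget: £{comfortable_daily:.0f}")
print(f"Learning-phase ideal:     £{learning_ideal_daily:.0f}/day")
```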
You probably should focus on the lead form itself...
If you implement the new structure and still see people clicking but not converting, then the problem is almost certainly the lead form itself. Your ad has made a promise, and the lead form experience is breaking it.
Here are some things to check:
- Number of Questions: Are you asking for too much information? Every additional field you add will cause more people to drop off. Stick to the absolute essentials. Name and Email/Phone Number is often enough to start a conversation.
- Sensitive Information: Are you asking for personal or sensitive information upfront? People are very hesitant to share things like their full address or company name with someone they don't know.
- Clarity of Value: Does the headline or intro text on your lead form remind them why they should fill it out? Reiterate the benefit. What do they get in return for their details?
- Use Pre-filled Fields: Meta can auto-fill information like name and email from the user's profile. Make sure this is enabled. The less they have to type, the higher your conversion rate will be.
Your cost per lead is directly tied to your business goals. How much can you actually afford to spend per lead? A high CPL might seem expensive, but if those leads convert into high-value customers, it could be a bargain. The simple calculation below will help you figure out your maximum affordable CPL. This shifts the focus from chasing cheap leads to acquiring profitable customers.
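A minimal sketch of that calculation in Python; every input figure is a placeholder assumption, so swap in your own lead-to-customer rate, average customer value, and the share of that value you're comfortable spending on acquisition:

```python
# Placeholder inputs -- replace with your real numbers.
lead_to_customer_rate = 0.10   # 10% of leads become paying customers
avg_customer_value = 600.0     # average revenue (or lifetime value) per customer, in £
max_acquisition_share = 0.30   # spend at most 30% of that value on acquisition

value_per_lead = lead_to_customer_rate * avg_customer_value
max_affordable_cpl = value_per_lead * max_acquisition_share

print(f"Each lead is worth roughly £{value_per_lead:.2f} in expected customer value")
print(f"Maximum affordable CPL:    £{max_affordable_cpl:.2f}")
# With these placeholders: £60 of value per lead, so a CPL of up to £18 is still
# profitable -- even if £18 'feels' expensive compared to the cost of a click.
```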
You'll need an action plan...
I know this is a lot to take in, especially when you thought you were on the right track. The world of paid advertising is full of these myths and 'hacks' that sound good but cause more harm than good. The key to long-term success isn't tricks; it's a disciplined process of testing, learning, and optimising based on solid data.
I've detailed my main recommendations for you below in a simple action plan. This is the process we would follow to stabilise and scale a campaign like yours.
| Step | Action | Why It's Important |
|---|---|---|
| 1. PAUSE & REBUILD | Immediately pause your current winning ad sets. Rebuild the campaign from scratch using the simplified 1 Campaign, 2-4 Ad Set structure I outlined above. | Stops you from wasting more money on a flawed setup. A clean build ensures you're starting from a solid foundation without any legacy issues. |
| 2. LAUNCH & LEARN | Launch the new campaign with a sufficient budget (e.g., £50-£100/day if possible) and DO NOT TOUCH IT for 7 days. Let it complete the learning phase. | This is the hardest but most important step. It gives the algorithm the time and data it needs to stabilise and find your customers. Resisting the urge to tweak things early is vital. |
| 3. ANALYSE CORRECTLY | After 7 days, analyse the performance. Ignore CTR and Link Clicks. Focus solely on two metrics: number of Leads and Cost Per Lead (CPL) for each ad set. | This forces you to make decisions based on business outcomes, not vanity metrics. The ad set with the lowest CPL is your true winner. |
| 4. OPTIMISE & SCALE | Turn off the ad set(s) with a significantly higher CPL. Keep the winner(s) running. If you want to scale, increase the budget on the winning ad set by no more than 20% every 2-3 days. | Gradual budget increases prevent the ad set from being thrown back into the learning phase, allowing for stable, predictable scaling of your lead generation. |
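On step 4, the 20% cap can feel painfully slow, but it compounds quickly. A quick sketch; the £100/day starting budget is an assumption:

```python
budget = 100.0               # assumption: £100/day on the winning ad set after week one
days_between_increases = 3   # the slower end of the 2-3 day window
increase = 0.20              # the 20% cap from step 4

for step in range(1, 9):     # eight increases, roughly 24 days
    budget *= 1 + increase
    print(f"Day {step * days_between_increases:2d}: £{budget:.0f}/day")

# The budget roughly doubles by day 12 and quadruples by day 24, without
# repeatedly knocking the ad set back into the learning phase.
```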
This process requires a shift in mindset from quick wins to sustainable growth. It can feel slow at first, but it's how you build campaigns that deliver consistent results month after month. I remember one campaign we worked on for a B2B client in the environmental controls sector; by moving from a chaotic testing structure to a disciplined one like I've described, we were able to reduce their cost per lead by 84%.
Navigating this can be complex, and making the right decisions on budget, audience, and creative requires experience. If you feel like you'd benefit from an expert eye to guide you through this process and help you avoid the common pitfalls, we offer a free, no-obligation initial consultation. We can take a look at your account together and lay out a more detailed strategy.
Hope this helps!
Regards,
Team @ Lukas Holschuh