You're right to be confused; there's loads of conflicting advice on this. Your method A is how most people start, but it's not a real test. Facebook's algorithm is built to find a 'winner' as fast as possible, and it will pour most of the budget into one ad, often before the others have had a proper chance. So you're not really testing; you're just letting the algorithm pick its early favourite, which might not be the best one for you long term.
For a proper, clean test, method C is the way to go:
- Create one campaign with the budget set at the ad set level (ABO).
- Then make separate ad sets for each creative you want to test. Keep the audience and copy identical across them all.
- This forces Facebook to spend the budget you set on each creative, so you get clean data and can compare performance properly. It's a bit more work to set up, but the results are much more reliable.
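If it helps to see why that matters, here's a toy simulation of the difference. This is purely illustrative – the numbers and the "greedy" reallocation rule are made up, not Facebook's actual algorithm – but it shows how a CBO-style campaign skews spend towards the early leader while an ABO setup forces an even split:

```python
import random

def simulate(mode, true_ctrs, daily_budget=30, days=7, cpm=10.0, seed=0):
    """Toy model: each ad set buys impressions and records clicks.
    'cbo' shifts budget toward whichever ad has the most clicks so far
    (a stand-in for the algorithm picking an early favourite);
    'abo' splits the budget evenly every day."""
    rng = random.Random(seed)
    n = len(true_ctrs)
    spend = [0.0] * n
    clicks = [0] * n
    for day in range(days):
        if mode == "abo" or day == 0:
            shares = [1.0 / n] * n  # equal split across ad sets
        else:
            # CBO-style: weight today's budget by clicks observed so far
            total = sum(clicks) or 1
            shares = [(c + 0.1) / (total + 0.1 * n) for c in clicks]
        for i, share in enumerate(shares):
            budget_i = daily_budget * share
            impressions = int(budget_i / cpm * 1000)
            clicks[i] += sum(rng.random() < true_ctrs[i] for _ in range(impressions))
            spend[i] += budget_i
    return spend, clicks
```

Run it both ways with the same creatives and the ABO spend comes out identical per ad set, while the CBO spend piles up on whichever ad got lucky early – which is exactly why the ABO data is comparable and the CBO data isn't.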
Dynamic creative (your method B) is useful, but for a different job. It's great when you want to test lots of little elements at once – like 5 images, 3 headlines, and 2 bits of copy – and let Facebook find the best combination. It's more for optimisation than a straight A/B test of a whole creative concept. We find it works best once you already have a winning creative angle. Definitely stick with C for your main testing.
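To put a number on why that's an optimisation tool rather than a clean test: those elements multiply. A quick sketch (the element names are placeholders, obviously):

```python
from itertools import product

images = [f"image_{i}" for i in range(1, 6)]        # 5 images
headlines = [f"headline_{i}" for i in range(1, 4)]  # 3 headlines
copies = [f"copy_{i}" for i in range(1, 3)]         # 2 bits of copy

# Every combination dynamic creative could serve
combos = list(product(images, headlines, copies))
print(len(combos))  # 5 * 3 * 2 = 30 combinations
```

Thirty variants sharing one budget means most combinations barely get any spend, so you'll never get statistically clean per-combination data – you're trusting Facebook's optimisation, not reading results yourself.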
Hope this helps!