Most of the operators I talk to still frame this as a binary. Broad targeting, or Advantage+ Audience. Pick one. In 2026, that framing is wrong — and it’s costing accounts real money.
Meta quietly made Advantage+ Audience the default for new ad sets in Q1. Broad targeting, as a distinct behavior, barely exists anymore — you’re either letting Advantage+ run with no suggestions, or you’re feeding it a seed. The operator question isn’t “which one,” it’s “how do I structure around it.” That’s what this post answers.
TL;DR: Broad targeting and Advantage+ Audience converged into one default behavior in 2026. About 5% of your ads will drive the majority of spend regardless of how you target (Motion 2026 Creative Benchmarks, 550K+ ads studied). The advanced play is restructuring ad sets around creative diversification, not audience layers — because Meta’s Andromeda system is now reading creative as the primary targeting signal.
Why did broad targeting and Advantage+ Audience converge in 2026?
Meta Engineering published on March 17, 2026 that their new Ranking Engineer Agent (REA) had doubled the accuracy of six ads ranking models in its first production rollout (Meta Engineering, Mar 2026). That’s not a feature update. That’s a step change in how much signal Meta can extract from every impression.
Here’s what that means at the ad set level. When I joined Meta in 2018, an ad set’s audience definition was load-bearing — it actually shaped who saw the ad. In 2026, the audience you set is a suggestion, not a constraint. Meta’s ranking models decide who the ad shows to based on creative signals, conversion probability, and a dozen other variables you never configure. Advantage+ Audience just makes this explicit: you provide a seed, Meta expands.
In my accounts, I ran the exact same creative in three ad sets last month: broad (no audience, no suggestions), Advantage+ Audience with no seed, and Advantage+ Audience with a detailed interest seed. Within 72 hours all three converged on nearly identical CPA. The audience inputs were barely moving the algorithm.
If you want the full mechanics of how the product works, I covered that in my Meta Advantage+ Audience Targeting guide. This post assumes you know the product — the focus here is what an advanced operator does now.
How should you structure ad sets when Advantage+ Audience is the default?
Collapse them. Fewer ad sets, more ads per set. Motion’s 2026 creative benchmark study of 550K+ ads across $1.3B in spend found that roughly 5% of ads drive 10x+ the median single-ad spend, and about 50% of ads get little to no spend at all (Motion, 2026). The algorithm picks winners ruthlessly. You need density in each ad set for Meta to have enough creative to pick from.
My current account structure for a DTC brand doing $1M-$10M monthly:
- One prospecting ad set on Advantage+ Audience with no seed, running 8-12 ads. This is the workhorse; it gets 70-80% of budget.
- One creative testing ad set on Advantage+ Audience with no seed, dedicated budget, 4-6 new ads per week. Graduates winners to the prospecting set.
- One retargeting ad set on a manual custom audience (website visitors, engagers, 180 days). This is the only place I force manual targeting.
- One high-LTV seed ad set (optional) on Advantage+ Audience with a custom audience of top 10% LTV customers as the seed. Used only when volume in prospecting plateaus.
That’s it. No interest-based ad sets. No lookalike ad sets. Lookalikes as an audience product are effectively dead — I wrote about why AI lookalikes aren’t driving incremental reach anymore. The ML does that work now.
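The four-ad-set structure above can be written down as a config sketch. The names, budget shares, and ad counts come from this post; the dict schema itself is illustrative shorthand I'm using for clarity, not a Meta Marketing API payload.

```python
# Illustrative config for the four-ad-set structure described above.
# Budget shares use the 70/20/10 split; the optional high-LTV seed set
# starts at 0% and is only funded when prospecting volume plateaus.
ACCOUNT_STRUCTURE = [
    {"name": "prospecting",   "targeting": "advantage_plus_no_seed",
     "budget_share": 0.70, "ads": (8, 12)},   # the workhorse
    {"name": "creative_test", "targeting": "advantage_plus_no_seed",
     "budget_share": 0.20, "ads": (4, 6)},    # 4-6 new ads per week
    {"name": "retargeting",   "targeting": "manual_custom_audience",
     "budget_share": 0.10,                    # the only forced-manual ad set
     "audience": "visitors_and_engagers_180d"},
    {"name": "high_ltv_seed", "targeting": "advantage_plus_seeded",
     "budget_share": 0.00,                    # optional; off until plateau
     "seed": "top_10pct_ltv_customers"},
]

# Sanity checks: shares sum to 1, and manual targeting appears exactly once.
assert abs(sum(s["budget_share"] for s in ACCOUNT_STRUCTURE) - 1.0) < 1e-9
assert sum(s["targeting"].startswith("manual") for s in ACCOUNT_STRUCTURE) == 1
```

Writing the structure down this way makes the point visible at a glance: one manual ad set, everything else delegated to Advantage+.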
What’s the right budget split between Advantage+, testing, and retargeting?
For most DTC accounts in the $1M-$50M range I advise, the split that actually works is 70/20/10: 70% prospecting on Advantage+ Audience, 20% dedicated creative testing, 10% retargeting. This diverges from the 80/20 rule you’ll see in older playbooks because Andromeda rewards creative freshness more aggressively — starve testing and the prospecting set decays within 3-4 weeks.
Across five accounts I advised in Q1 2026, the accounts that pushed testing below 15% of spend saw prospecting CPA climb an average of 23% by week 5. The accounts holding at 20% testing stayed within 7% of baseline. Sample size is small, but the direction is consistent — and it matches what the creative-first operators at Motion have been saying all year.
Retargeting gets 10% and no more. Meta’s Andromeda algorithm is already identifying high-intent users in prospecting; running a large retargeting layer on top of that is mostly cannibalization. I only run it at all because the incremental CPA on a tight 180-day visitor audience is still lower than prospecting for categories like supplements and skincare where consideration cycles are long.
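The 70/20/10 split and the 15% testing floor can be encoded in a quick helper. This is a sketch under my own assumptions: the function name is mine, and the hard error on a sub-15% testing share reflects the Q1 2026 decay pattern described above, not anything Meta enforces.

```python
def split_budget(monthly_budget: float,
                 prospecting: float = 0.70,
                 testing: float = 0.20,
                 retargeting: float = 0.10) -> dict[str, float]:
    """Allocate monthly spend per the 70/20/10 structure.

    The 15% floor mirrors the Q1 2026 observation: accounts that
    pushed testing below 15% saw prospecting CPA climb ~23% by week 5.
    """
    if abs(prospecting + testing + retargeting - 1.0) > 1e-9:
        raise ValueError("budget shares must sum to 1.0")
    if testing < 0.15:
        raise ValueError("testing share below 15% risks prospecting CPA decay")
    return {
        "prospecting": monthly_budget * prospecting,
        "creative_testing": monthly_budget * testing,
        "retargeting": monthly_budget * retargeting,
    }
```

For a $100K/month account, this allocates roughly $70K to prospecting, $20K to testing, and $10K to retargeting, and refuses any split that starves the testing set.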
When should you override Advantage+ Audience and force manual targeting?
Three situations — and only three. First, retargeting: custom audiences of site visitors or engagers need to run exactly as defined, not as a “seed.” Second, suppression: existing customer exclusions, recent purchasers, high-refund segments — these need hard exclusion, not hints. Third, regulated categories: if you’re running ads for supplements, alcohol, financial services, or anything with audience compliance rules, you can’t rely on Meta’s expansion to respect restrictions.
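Those three overrides reduce to a tiny decision rule. A minimal sketch — the function and the purpose strings are mine, purely to make the rule explicit:

```python
def should_force_manual_targeting(purpose: str,
                                  regulated_category: bool = False) -> bool:
    """Return True only in the three override cases; everywhere else,
    let Advantage+ Audience run."""
    if purpose == "retargeting":   # custom audiences must run exactly as defined
        return True
    if purpose == "suppression":   # hard exclusions, not expansion hints
        return True
    if regulated_category:         # compliance rules expansion won't respect
        return True
    return False
```

If your ad set doesn’t return True here, any manual audience work you do is effort the algorithm will route around.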
Outside those three, let Advantage+ run. I’ve watched operators spend hours building lookalike 1-3% stacks and custom interest combinations that the algorithm ignores within 48 hours. The ROAS lift from over-engineering the audience is close to zero; Marpipe’s 2026 comparison found Advantage+ Shopping delivered 4.52x ROAS vs 3.70x for manual campaigns — a 22% delta purely from letting the system work (Madgicx/MuteSix case study shows a similar 35% ROAS lift from AI audience expansion).
That same March 2026 REA disclosure is worth rereading in this light: an autonomous AI that proposes and validates improvements to ads ranking models, doubling accuracy across six of them in its first rollout (Meta Engineering, 2026). Operators fighting the ranking system with manual audience layers are fighting an AI that rewrites itself faster than they can test.
How does creative strategy change when the audience becomes the creative?
This is the shift that most practitioners underweight. Meta’s Andromeda algorithm now uses creative signals — visual composition, messaging cues, format — as the primary input for matching ads to users. Translation: the ad itself is doing the targeting. If you want to reach a segment, you don’t define the segment in the ad set. You make creative that speaks to that segment, and Meta figures out who it resonates with.
In practice, this means creative diversity matters more than creative volume. If you ship 20 ads that all look the same, the algorithm has nothing to segment against. If you ship 8 ads across 4 distinct creative angles — problem/solution, social proof, before/after, product demo — Andromeda can actually differentiate audiences.
My testing framework, which I detailed in what actually drives the 5% of ads that win, has shifted from “more volume” to “more angle diversity.” I’d rather ship 8 distinct concepts with 1-2 variants each than 20 variants of one concept. The Foxwell 2026 State of Digital Marketing report found 45% of agency leaders cite ad production as their primary creative challenge — production capacity is the real bottleneck, not audience targeting (Foxwell Founders, Mar 2026).
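The angle-diversity rule — distinct concepts over variant counts — can be checked mechanically before a batch ships. A sketch under my own assumptions about how ads get tagged: the angle labels come from this post, and the thresholds (4+ angles, at most 2 variants each) mirror the “8 ads across 4 angles” example above.

```python
from collections import Counter

# Creative angles named in this post; any consistent tagging taxonomy works.
ANGLES = {"problem_solution", "social_proof", "before_after", "product_demo"}

def angle_diversity_ok(ads: list[dict],
                       min_angles: int = 4,
                       max_variants_per_angle: int = 2) -> bool:
    """True when a batch gives the algorithm enough distinct concepts
    to segment against, rather than many variants of one concept."""
    counts = Counter(ad["angle"] for ad in ads)
    return (len(counts) >= min_angles
            and max(counts.values()) <= max_variants_per_angle)
```

Eight ads spread evenly over four angles passes; twenty variants of one concept fails, no matter how large the batch.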
What happens to the learning phase under Advantage+ Audience?
The learning phase is shorter but less forgiving. Meta reduced the Advantage+ Shopping conversion threshold from 50 to 25 weekly conversions in Q1 2026, which compresses learning. But because Advantage+ Audience delivers to a much wider pool, ads that don’t start generating signal fast get starved of spend within 48-72 hours. That 5% winners math isn’t theoretical — it’s how the system allocates learning budget.
Practical implication: don’t ship 12 ads into one ad set and expect all 12 to get a fair test. The algorithm will pick 2-3 after the first day, and most of the remaining nine or ten will never get enough impressions to validate. If you want every concept tested, run dedicated creative testing ad sets with forced budget allocation per ad, and only graduate proven winners into the main prospecting set.
For the full system I use, see my step-by-step AI creative testing playbook. The short version: one testing ad set, dedicated budget, 4-6 new ads per week, 3-day evaluation window, graduate winners, kill losers, never test in prospecting.
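The graduate/kill loop in that short version can be sketched as a rule. The 3-day evaluation window comes from the playbook summary above; the CPA-vs-target comparison is my illustrative graduation criterion, since the exact metric is a judgment call per account.

```python
from dataclasses import dataclass

@dataclass
class TestAd:
    name: str
    days_live: int
    spend: float
    conversions: int

def evaluate(ad: TestAd, target_cpa: float, window_days: int = 3) -> str:
    """Testing-set loop: wait out the evaluation window, then graduate
    winners to prospecting and kill everything else."""
    if ad.days_live < window_days:
        return "keep_testing"          # still inside the 3-day window
    if ad.conversions == 0:
        return "kill"                  # no signal after the window
    cpa = ad.spend / ad.conversions
    return "graduate" if cpa <= target_cpa else "kill"
```

The point of making the rule this blunt is that it never lets a sentimental favorite linger in the testing set past its window.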
Frequently Asked Questions
Does Advantage+ Audience replace detailed targeting entirely?
For prospecting, yes — in practice. Meta’s own data shows Advantage+ Audience outperforms manual detailed targeting on CPA and CTR in most DTC categories, and the 2026 default changed to Advantage+ for new campaigns (SocialBee Meta updates, Apr 2026). Detailed targeting still matters for retargeting and exclusions, but not for cold audiences.
Should I still build lookalike audiences?
Only as seeds for Advantage+ Audience, not as standalone audiences. A 1-3% lookalike of your top LTV customers can work as a seed when volume plateaus. Running a lookalike ad set without Advantage+ expansion is leaving performance on the table — the ML targeting is broader and finds more incremental conversions. See my full breakdown of what ML targeting actually does now.
How many ads should I run per Advantage+ ad set?
8-12 ads in prospecting, 4-6 new per week in testing. Motion’s 2026 benchmarks found accounts with fewer than 6 active ads per ad set had 40% less efficient spend distribution than accounts running 10+ (Motion, 2026). The algorithm needs density to pick winners; single-ad or 3-ad ad sets starve the selection process.
Is interest-based targeting dead for cold prospecting?
Effectively, yes. I’ve seen Advantage+ Audience converge on similar users within 72 hours regardless of whether you feed it a 20-interest stack or run broad. If interest signals matter for your category, encode them in the creative — a fitness ad targeting runners should look like a runner’s ad, not be delivered to an interest segment called “Running.”
How do I measure incrementality with Advantage+ Audience?
Lift studies and holdout tests, not attribution windows. Meta’s 2026 attribution overhaul moved click-through to link clicks only and cut the engaged-view threshold from 10 seconds to 5 (Dataslayer, 2026), which changed what the reports show. For real measurement, run conversion lift studies through Meta Ads Manager — I covered the full framework in my guide to measurement beyond last-click attribution.
For deeper dives, see my guides on:
- How Meta’s Andromeda Algorithm Reads Creative: A 2026 Decoder
- The Meta AI Agent Stack in 2026: Mapping REA, Advantage+, and Andromeda
- Meta’s AI Business Assistant Just Rolled Out to Every Advertiser — Here’s What It Actually Does (and What It Can’t)
- Meta Value Rules for Audiences: A 2026 Practitioner’s Guide to Bidding by Audience Worth
The Bottom Line
The operators still optimizing audience layers in 2026 are fighting an algorithm that rewrites itself faster than they can test. The 5% of ads that win will win regardless of how you define your audience — what changes is how much of your budget gets to those winners.
Practical next steps:
- Collapse ad sets. Move to the 70/20/10 structure. Kill lookalike-only ad sets.
- Push creative diversity. 4+ distinct angles, not 20 variants of one concept.
- Protect testing budget. Never drop below 15% of spend on dedicated creative testing.
- Override Advantage+ Audience only for retargeting, suppression, or regulated categories.
- Measure with lift studies, not attribution reports.
If you want the strategic framework behind all of this — account structure, creative systems, measurement stack — see my 2026 DTC Meta ads strategy playbook. That’s the pillar this post supports.