Last summer, the apparel brand Snag Tights started noticing something strange. Their Meta ads looked “off.” The pictures had what their team would later call “that strange AI sheen.” By February 2026, the brand went public on Facebook, posting that some of their ad budget had been used to test AI-generated creative they never authorized. “We have opted out of every single AI feature we possibly can,” they wrote (Marketing Brew, April 2026). Their question is the question every Meta advertiser is now asking: should you opt out of Advantage+ AI creative auto-tweaks, or lean in?
TL;DR: Opt out by default if you’re running structured creative tests, working in regulated verticals (supplements, finance), or have a brand-safety mandate. Opt in if you’re a low-volume DTC account that needs Meta to expand creative variants for you. As of March 2026, opt-out preferences now persist across new campaigns (Meta Business Help, 2026) — the cost of opting out has dropped, so the default answer for most operators is opt out, then opt in selectively.
What Are Advantage+ Creative Auto-Tweaks — and Why Is Everyone Talking About Them in 2026?
Advantage+ creative enhancements are Meta’s suite of AI tools that modify your ads after you upload them. Image expansion stretches a 1:1 creative to 9:16. Music gets added to a static image. Brightness shifts. Text overlays move. Generative templates spin up new variants. And as of February 2026, every new campaign in Sales, Leads, and App Promotion objectives launches with all of these enhancements pre-selected (AdMove, 2026).
The reason everyone’s talking about it: the defaults shifted from opt-in to opt-out without most advertisers noticing. Marketing Brew’s April 2026 reporting documented agencies and brands seeing AI-modified creative in the wild — sometimes weeks before the operator caught it. Meghan Kelly of Formada Social described the symptom directly: “Even when we duplicate approved ads, AI features re-enable themselves, generate new visuals, and override decisions that were already made” (Marketing Brew, 2026).
Here’s the part nobody’s framing well: the question isn’t whether AI auto-tweaks are good or bad. It’s whether they’re good or bad for the test you’re running right now. A creative tweak that helps a low-volume account scale a winner is the same tweak that pollutes a structured 6-concept brief. Same feature, opposite outcomes, depending on what you’re measuring.
The Marketing Brew piece quoted Curtis Howland of Misfit Marketing being blunt about it: “The problem is that their AI ads are really bad” (Marketing Brew, 2026). He’s right that quality is uneven. He’s also describing one specific use case — full AI-generated ads — not the entire toolkit. The framework matters because the tools aren’t one thing.
What Changed About Opt-Out in March 2026 — and Why Does It Matter?
As of March 2026, opt-out preferences for Advantage+ creative now persist across future campaigns. Translation: if you turn off image animation once at the account level, new campaigns inherit that setting instead of resetting to default opt-in (Meta Business Help, 2026). Meta also confirmed advertisers can opt out at any time and aren’t penalized for doing so.
This is bigger than it sounds. Before persistence shipped, opting out was effectively a tax — you had to do it in every campaign, every time, and the toggles were scattered across three different surfaces in Ads Manager. Most operators tapped out after the third campaign. The default opt-in won by friction, not by performance.
Persistence changes the math. The cost of opting out is now one click at the account level, applied forever. The cost of staying opted in is a feature you didn’t evaluate, modifying creative you spent money to produce, on a campaign you didn’t flag. For most operators running real tests, that math is now obvious.
When Do Auto-Tweaks Accelerate Winner Discovery?
Motion’s 2026 study of 550,000+ Meta ads across $1.3 billion in spend found that ~5% of ads become real winners (defined as 10x or more of an account’s median single-ad spend), ~6% drive the majority of every account’s spend, and roughly 50% never receive meaningful spend at all (Foxwell Digital recap of Motion data, 2026). The implication is brutal: most ads die. The auction is ruthless. The job of creative testing is to find the 5% as fast as possible.
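Motion's winner definition is mechanical enough to compute yourself. A minimal sketch (with hypothetical spend figures, not Motion's data): an ad counts as a winner if its spend reaches at least 10x the account's median single-ad spend.

```python
# Toy illustration of the "winner" rule described above:
# winner = ad whose spend is >= 10x the account's median single-ad spend.
# The spend numbers are invented for illustration.
import statistics

def find_winners(spend_by_ad: dict[str, float]) -> list[str]:
    """Return ad IDs whose spend is at least 10x the account median."""
    median = statistics.median(spend_by_ad.values())
    return [ad for ad, spend in spend_by_ad.items() if spend >= 10 * median]

spend = {"ad_a": 50, "ad_b": 80, "ad_c": 100, "ad_d": 120, "ad_e": 2500}
print(find_winners(spend))  # only ad_e clears 10x the median (100)
```

Run against your own spend export, this gives a quick read on whether your account's winner rate is anywhere near the ~5% benchmark.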
Auto-tweaks help when an account doesn’t have enough creative volume to find winners on its own. As I’ve written previously in my creative testing system, “the stack cannot find the 5% winners from 10 variants of one concept” — you need 6 to 10 genuinely different creatives across 3 to 5 distinct themes for Andromeda to retrieve against. Some accounts can’t produce that volume in-house. For them, auto-tweaks are a creative-volume cheat code: image expansion alone can extend a 1:1 hero image into Reels-native 9:16 placements without re-shooting.
The accounts that benefit most from opting in:
- Solo founders and small DTC teams producing fewer than 8 net-new concepts per month. Auto-tweaks fill out the variant tree Meta’s auction needs to surface winners.
- New ad accounts without the 50-event-per-week minimum needed to feed Andromeda’s creative-signal extraction. Variant volume buys you faster signal accumulation.
- Lead-gen advertisers running broad-targeting Sales campaigns where the upside of one auto-generated variant outperforming static is real and the brand-safety bar is lower.
- Accounts running pure performance objectives (Sales, App Promotion) where you don’t care which creative wins, you just need a winner.
If that’s your account, the auto-tweaks aren’t the enemy. They’re an in-house creative team you didn’t hire. The catch: you have to actually want what they produce, and you have to be willing to put your brand name on the output even if AI made half of it.
When Do Auto-Tweaks Pollute Your Test Reads?
The flip side is structured testing. AppsFlyer estimates 70-80% of Meta ad performance is now driven by creative quality (Foxwell Digital, 2026). When the creative is the variable you’re testing, anything that mutates the creative mid-test is destroying the experiment.
I’ve made this point directly before: tests are measurement instruments, and mid-test interventions corrupt the instrument. Accepting an auto-tweak during a live creative test doesn’t optimize the test — it collapses it. You started measuring whether concept A beats concept B. Now you’re measuring whether concept A-with-image-expansion beats concept B-with-music-overlay. Same dollars, but you’ve lost the ability to attribute the lift.
This is also the place where Meta’s Andromeda algorithm matters. Andromeda extracts creative signals — visual composition, color, motion, text density, format — and uses them to retrieve ads against user embeddings. As I covered in my Andromeda explainer, the creative is the targeting signal. Auto-tweaks change the signal. Andromeda then retrieves your ad against a slightly different audience pool than the one you originally tested into. The cohort drift is invisible in your reporting, but it’s measurable in your CPA variance.
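The cohort-drift claim is easy to demonstrate with a toy model. This is not Meta's actual retrieval system; it's a sketch under the assumption that creatives and audience clusters live in a shared feature space and ads are retrieved by similarity. The feature vectors and cluster names are invented.

```python
# Toy sketch of retrieval drift: a creative is a feature vector
# [static, motion, music], and audience clusters are retrieved by
# cosine similarity. A small auto-tweak perturbation to the creative
# vector changes which cluster ranks first.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

clusters = {
    "minimal_static_fans": [1.0, 0.1, 0.0],  # affinity for static imagery
    "reels_motion_fans":   [0.2, 1.0, 0.9],  # affinity for motion + music
}

original = [1.0, 0.0, 0.0]  # static image, no motion, no music
tweaked  = [0.8, 0.6, 0.7]  # after animation + music auto-tweaks

def best(creative):
    """Return the cluster with highest similarity to the creative."""
    return max(clusters, key=lambda c: cosine(creative, clusters[c]))

print(best(original))  # minimal_static_fans
print(best(tweaked))   # reels_motion_fans
```

Same ad, same budget, different retrieved audience. That's the drift that never shows up as a line item in reporting.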
The accounts that should opt out by default:
- Structured creative testing programs running 6-10 concept briefs per month with explicit hypothesis-per-concept reads. Auto-tweaks add a confound to every cell.
- Regulated verticals — supplements, finance, healthcare, alcohol. Generative auto-tweaks can produce claims, imagery, or compositions that fail policy review or trigger advertiser strikes. The legal cost of a wrong tweak dwarfs the variant cost.
- Brand-led DTC accounts where visual identity is the asset. Snag Tights’ complaint isn’t about performance; it’s that the AI sheen damages brand trust with their existing audience. “If the picture isn’t of anything real, you’re effectively scamming the customer, right?” their team asked (Marketing Brew, 2026). For brand-equity advertisers, that’s a hard floor.
- Accounts with strict creative QA workflows — agency clients with sign-off chains, retail brands with brand guidelines, B2B advertisers with legal review.
What’s the Practitioner Decision Framework for 2026?
The opt-in/opt-out decision should be made at the account level first, then overridden at the campaign level only when there’s a specific reason. The decision is driven by three variables: creative volume, vertical risk, and testing intent. Here’s the matrix I’d use.
| Account Profile | Image Expansion | Music Add | Generative Variants | Text Tweaks |
|---|---|---|---|---|
| DTC, $1M-$10M spend, structured testing | Opt out | Opt out | Opt out | Opt out |
| DTC, $10M-$50M spend, brand-led | Selective | Opt out | Opt out | Opt out |
| Solo founder, low creative volume | Opt in | Opt in | Selective | Selective |
| Lead gen, broad targeting | Opt in | Opt in | Opt in | Opt in |
| Supplements / regulated | Opt out | Opt out | Opt out | Opt out |
| B2B / enterprise | Opt out | Opt out | Opt out | Opt out |
The reason “selective” appears for some cells: image expansion is the lowest-risk auto-tweak because it preserves the original creative content and only fills out aspect ratio. Text tweaks are the highest-risk because they can rewrite headlines that legal already approved. Most operators bundle all four into one opt-in/opt-out decision, but that’s leaving precision on the table. Per-feature toggling is in your control — use it.
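If you want the matrix above as something your team can reference programmatically, it encodes as a straight lookup table. The profile and feature labels mirror the table; "selective" means evaluate per-campaign rather than setting an account-wide posture.

```python
# The decision matrix above as a lookup table.
# Values per profile: (image_expansion, music_add, generative_variants, text_tweaks)
MATRIX = {
    "dtc_structured_testing": ("opt_out", "opt_out", "opt_out", "opt_out"),
    "dtc_brand_led":          ("selective", "opt_out", "opt_out", "opt_out"),
    "solo_low_volume":        ("opt_in", "opt_in", "selective", "selective"),
    "lead_gen_broad":         ("opt_in", "opt_in", "opt_in", "opt_in"),
    "regulated":              ("opt_out", "opt_out", "opt_out", "opt_out"),
    "b2b_enterprise":         ("opt_out", "opt_out", "opt_out", "opt_out"),
}

FEATURES = ("image_expansion", "music_add", "generative_variants", "text_tweaks")

def posture(profile: str) -> dict[str, str]:
    """Return the per-feature opt-in/opt-out posture for an account profile."""
    return dict(zip(FEATURES, MATRIX[profile]))

print(posture("solo_low_volume")["generative_variants"])  # selective
```

Dropping this into the account runbook means the next person duplicating a campaign doesn't have to re-derive the decision from memory.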
One frame I keep coming back to: most operator friction with Meta’s AI defaults is “trying to assert control at a layer where Meta has already taken it.” The answer for the long game isn’t to fight every auto-tweak — it’s to feed the stack the kind of creative diversity that makes auto-tweaks unnecessary in the first place. If you’re shipping 6-10 distinct concepts a month with intentional variation, you don’t need Meta to generate variants for you. The auto-tweaks become noise on top of signal you’re already producing.
How Do You Actually Opt Out of Advantage+ Creative Enhancements?
The mechanics are simple, but the toggles are scattered. Here’s the workflow:
1. Account-level opt-out (the most efficient lever): Go to Ads Manager → Account Settings → Generative AI Features. Disable the master toggle for “Use AI to test ad variations.” This is the change that, as of March 2026, persists across new campaigns. Most operators only need to do this once.
2. Campaign-level opt-out (for granular control): Inside each ad set, expand the “Optimization & Delivery” section. Under “Advantage+ Creative,” uncheck individual enhancements: Image Expansion, Image Animation, Music, Brightness & Contrast, Text Improvements, Visual Touch-ups, Standard Enhancements. Per-feature toggling is exposed at this level.
3. Verify on existing campaigns: Persistence applies to new campaigns. Existing live campaigns retain whatever settings they were launched with. Audit any campaign launched before March 2026 individually if you want consistency. Bulk-edit by ad set is supported via the Ads Manager bulk-edit panel.
4. Confirm in Ads Library: Pull up your Meta Ad Library entry for the brand. If you see ads you didn’t produce — especially with auto-generated visuals — that’s confirmation auto-tweaks have been running. Document and screenshot before opting out for a clean audit trail.
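For accounts managed through the Marketing API rather than the Ads Manager UI, the same opt-out is expressed on the ad creative via a `degrees_of_freedom_spec`. A hedged sketch follows: the overall shape matches Meta's Marketing API, but the individual feature key names vary across API versions, so verify them against the current Marketing API reference before use.

```python
# Hedged sketch: build an ad-creative payload fragment that opts out of
# Advantage+ enhancements via the Marketing API's degrees_of_freedom_spec.
# Feature key names ("standard_enhancements", "image_uncrop") are
# illustrative; confirm them against the current API reference.
import json

def opt_out_creative_spec(feature_keys: list[str]) -> dict:
    """Return a creative_features_spec that opts out of each named feature."""
    return {
        "degrees_of_freedom_spec": {
            "creative_features_spec": {
                key: {"enroll_status": "OPT_OUT"} for key in feature_keys
            }
        }
    }

spec = opt_out_creative_spec(["standard_enhancements", "image_uncrop"])
print(json.dumps(spec, indent=2))
```

Merging this fragment into the creative payload at ad-creation time keeps API-launched campaigns consistent with the account-level posture set in the UI.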
Worth flagging: opting out doesn’t affect Advantage+ placement selection, Advantage+ audience, or Advantage+ budget. Those are different systems. The opt-out only applies to creative-level enhancements — the things that modify the actual ad asset.
Is the Snag Tights Story Meta’s Fault — or the Advertiser’s?
Both. Meta shipped a default that pre-selected AI features in new campaigns starting February 2026 (AdMove, 2026). That’s a UX choice that knowingly exploited the friction of opt-out. When a default opts users into a behavior they wouldn’t choose if asked, the platform is responsible for what happens. Snag Tights wasn’t lazy; they were caught by a setting change they didn’t see communicated.
But the practitioner side: any advertiser running a brand-led account in 2026 should be auditing settings every quarter. Defaults shift constantly. Jon Loomer documented 83 separate Meta advertising changes in 2025 alone (Jon Loomer, 2025). The pace is faster in 2026. If you’re not running a settings audit, you’re trusting that Meta’s defaults match your strategy — and they don’t.
The deeper read: this isn’t a Meta problem or a Snag Tights problem. It’s a 2026 Meta-buyer problem. The platform is increasingly moving toward AI-by-default. Operators who don’t install a quarterly settings audit will keep getting surprised. The question isn’t whether to fight the next default change — it’s whether you’ve built the operational habit of checking what your account is actually doing.
Frequently Asked Questions
Does opting out of Advantage+ creative hurt your performance?
Meta states advertisers aren’t penalized for opting out (Meta Business Help, 2026). The auction doesn’t weight your bid lower for it. What can change is your reach: with fewer auto-generated variants, Andromeda has fewer creative signals to retrieve against, which can compress impression volume on small accounts. For accounts running structured testing with 6-10 net-new concepts monthly, the impact is negligible. Test it, don’t assume.
What’s the difference between Advantage+ creative and Advantage+ Shopping?
Advantage+ Shopping is a campaign type that automates audience, placement, and budget for ecommerce objectives. Advantage+ creative is a set of creative-level enhancements that modify your ad asset itself. They’re different systems with different opt-out paths. You can run Advantage+ Shopping campaigns with creative auto-tweaks fully disabled.
Will opt-out persistence cover campaigns I duplicate from old templates?
Not reliably. As of March 2026, opt-out preferences set at the account level apply to new campaigns. Duplicated campaigns inherit the source campaign’s settings — if you duplicate from a campaign that had auto-tweaks enabled, the duplicate keeps them. The Formada Social complaint quoted in Marketing Brew — AI features re-enabling on duplication — reflects this gap. Re-audit any duplicated campaign before launch.
Are auto-tweaks visible in Ads Library after they’re generated?
Yes. The Meta Ad Library shows the actual creative being served, including any AI-modified variants. If you opted in to auto-tweaks and Meta generated variants, those variants appear as separate creative entries in your library. Pull the Library at least monthly to verify nothing is running that you didn’t intend.
How does this interact with the new Meta AI Business Assistant?
The Business Assistant (rolled out April 24, 2026) makes opportunity-score recommendations for opt-in to AI features. Treat its recommendations as inputs, not directives. As I covered in my Meta AI agent stack guide, the assistant operates one layer above creative auto-tweaks — it can suggest you re-enable features you turned off. Hold your opt-out unless your decision matrix says otherwise.
What about AI-generated vs human creative performance?
Different question, different answer. Auto-tweaks modify human creative. Fully AI-generated creative is a separate workflow (and product) where Meta produces the asset from scratch. Performance data is mixed and vertical-dependent. Opting out of auto-tweaks doesn’t commit you to either side of the AI-generated debate — it just keeps your existing assets unmodified.
For a deeper dive, see my guide on Meta ads AI connectors: how the one-click data layer changes reporting and attribution workflows.
The Bottom Line
For most operators in 2026, the answer is opt out by default, then opt in selectively where the decision matrix says it pays. The cost of opting out has dropped now that preferences persist (Meta Business Help, 2026). The cost of staying opted in by accident, especially for brand-led DTC and regulated verticals, can show up as wasted budget, polluted test reads, or a Snag Tights moment of seeing your ads in the wild and not recognizing them.
The bigger play: stop making this a one-time decision. Install a quarterly settings audit. Document your opt-out posture. Check your Ad Library monthly. The defaults will keep moving. Your job is to make sure your account’s posture moves on purpose, not on whatever Meta shipped last week.
If you’re building structured creative tests, my creative testing system walks through the brief format and concept-diversity targets that make auto-tweaks irrelevant. If you’re trying to make sense of how Andromeda actually reads what you ship, the Andromeda creative decoder covers what signals matter most. The two together — structured tests plus knowing what the algorithm is reading — reduce the surface area where Meta’s defaults can hurt you.