
The hardest part of running Meta ads in 2026 isn’t picking audiences anymore. It’s understanding what the algorithm sees when it looks at your creative, because the creative is what the algorithm grades now. Most of the coverage of Meta’s Andromeda retrieval engine has been news-summary work: what it is, when it launched, which Meta Engineering post announced it. That’s useful context, but it doesn’t tell you what to change on Monday morning. I’ve spent the last few months reconciling the primary engineering papers with what I’m actually seeing in accounts, and the result is this decoder: which creative signals Andromeda actually extracts, how it weighs them, and what AI-native media buyers should do differently.
TL;DR: Meta’s Andromeda retrieval engine delivered +6% recall improvement and +8% ads quality on targeted segments (Meta Engineering, 2024). Under Andromeda, the creative is read as a dense feature vector — visual composition, on-screen text, audio entities, format fit — and matched to users through embedding-space neighbors, not predefined audiences. Your job is to feed it diverse, well-differentiated creative, not to over-segment ad sets.
What Does It Mean That Andromeda “Reads” Your Creative?
Andromeda is Meta’s retrieval engine: the first stage of ranking that decides which of millions of ads even get considered for a given user. It replaced the legacy retrieval stack with a custom neural network designed around NVIDIA’s Grace Hopper chips, giving Meta roughly 10,000x the model capacity of the previous system (Meta Engineering, 2024). When media buyers say Andromeda “reads” creative, what we actually mean is that each ad gets encoded into a dense feature vector — a numerical representation of everything the model can extract from the asset itself — and then matched against a user-context vector at retrieval time.
The practical consequence: the ad creative is no longer just the message delivered to a pre-selected audience. The creative is the targeting signal. As I wrote in a prior note, “audience is a suggestion, not a constraint” — the audience you set at the ad-set level narrows the candidate pool, but the actual matching inside that pool is driven by creative features and a set of learned user embeddings. If your creatives don’t differ in what the encoder can see, you’re effectively asking Andromeda to rank six identical vectors.
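To make that last point concrete, here's a toy two-tower retrieval sketch in Python. It's my illustration of the mechanic, not Meta's architecture: the encoder, the 128-dimension vectors, and the scoring are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

def encode(raw: np.ndarray) -> np.ndarray:
    """Stand-in for the creative encoder: L2-normalize a raw feature vector."""
    return raw / np.linalg.norm(raw)

# One user-context vector, as produced at retrieval time.
user_context = encode(rng.normal(size=128))

# Six "variants": one base creative plus perturbations too small for the encoder to see.
base = rng.normal(size=128)
variants = [encode(base + 0.01 * rng.normal(size=128)) for _ in range(6)]

# One genuinely different concept.
distinct = encode(rng.normal(size=128))

for i, ad in enumerate(variants):
    print(f"variant {i}: retrieval score = {ad @ user_context:+.4f}")
print(f"distinct : retrieval score = {distinct @ user_context:+.4f}")

# The six variants score within a rounding error of each other: retrieval has
# almost nothing to differentiate them on. The distinct concept lands somewhere
# else in the space entirely.
```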
Which Creative Signals Does the Algorithm Actually Extract?
This is the part no competitor has mapped directly. Meta doesn’t publish a spec sheet of which features the encoder uses, but you can triangulate from three places: the Andromeda paper, the Meta Adaptive Ranking Model (MARM) paper from March 2026, and what’s observable in delivery data. Four signal categories appear to matter most.
1. Visual composition and attention cadence
The encoder pulls a time series of visual features across the first few seconds of video: motion, color contrast, subject placement, shot changes. A static hero frame on a product shot reads very differently from a 3-second hook with a face-on subject, a benefit overlay, and a cut at second 2. Thumb-stop rate is still the shorthand, but the algorithm isn't looking at one metric — it's looking at the shape of the visual signal.
2. On-screen text, audio, and spoken entities
Meta’s encoders include OCR and speech-to-text layers. That means your overlay text (“clinically tested,” “free returns,” a price point) and your voice-over script both enter the feature vector as entities. Two creatives that look nearly identical but say different things become distinct candidates in retrieval.
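Here's a toy way to picture that, with the caveat that the hashing trick and the dimensions are my construction, not anything Meta has published: fold the extracted entities into a text vector, concatenate it with the visual features, and two visually identical ads stop being identical candidates.

```python
import hashlib
import numpy as np

DIM = 64

def entity_vector(entities: list[str], dim: int = DIM) -> np.ndarray:
    """Hashing-trick bag of entities: each OCR/ASR-extracted phrase bumps one slot."""
    vec = np.zeros(dim)
    for entity in entities:
        slot = int(hashlib.md5(entity.encode()).hexdigest(), 16) % dim
        vec[slot] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Pretend both ads share exactly the same visual features.
visual = np.ones(DIM) / np.sqrt(DIM)

ad_a = np.concatenate([visual, entity_vector(["clinically tested", "free returns"])])
ad_b = np.concatenate([visual, entity_vector(["$29.99", "limited time offer"])])

cos = ad_a @ ad_b / (np.linalg.norm(ad_a) * np.linalg.norm(ad_b))
print(f"cosine similarity: {cos:.3f}")  # below 1.0: same pixels, distinct candidates
```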
3. Format fit and placement-native cues
Aspect ratio, duration, and whether the creative was built for Reels vs. Feed are features too. Reels-native 9:16 creative with vertical motion reads differently than a 1:1 asset zoom-cropped to fit. This is why the same creative can perform three times better in one placement than another even when the audience is identical.
4. Embedding-space neighbors
The most important and least-discussed feature: every creative gets a location in the embedding space relative to every other creative Meta has ever served. If your creative’s vector sits near ads that have worked well for high-LTV users in your vertical, retrieval picks up on that similarity even before your own performance data exists. That’s why fresh creative can go live with measurable traction immediately — it’s not “learning” in the old sense, it’s being matched to neighbors that already have history.
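If you had access to the vectors (you don't; this is purely for intuition), the cold-start prior would look something like this minimal sketch, where the k, the similarity measure, and the averaging are all assumptions:

```python
import numpy as np

def neighbor_prior(new_ad: np.ndarray,
                   history: np.ndarray,    # (n_ads, dim) historical creative vectors
                   outcomes: np.ndarray,   # (n_ads,) e.g. conversion rate per ad
                   k: int = 5) -> float:
    """Average outcome of the k historical creatives most similar to the new one."""
    sims = history @ new_ad / (
        np.linalg.norm(history, axis=1) * np.linalg.norm(new_ad)
    )
    nearest = np.argsort(sims)[-k:]  # indices of the k most similar historical ads
    return float(outcomes[nearest].mean())

# Toy usage: a new ad inherits a prior before serving a single impression.
rng = np.random.default_rng(0)
history = rng.normal(size=(1000, 128))
outcomes = rng.uniform(0.005, 0.05, size=1000)
print(neighbor_prior(rng.normal(size=128), history, outcomes))
```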
Why Did Meta Shift to Creative-First Ranking?
The shift was a performance play, not a philosophy one. Andromeda delivered a +6% recall improvement in retrieval and +8% ads quality on targeted segments (Meta Engineering, 2024). Advantage+ campaigns running AI-native targeting drove +22% ROAS vs. manual. Meta’s Q4 2025 earnings reported an additional +3% YoY conversion-rate lift and +12% YoY ads quality improvement from the ranking and runtime model stack (Meta Q4 2025). MARM, which went live on Instagram in Q4 2025, added +3% ad conversions and +5% ad CTR for targeted users (Meta Engineering, 2026).
Those numbers look modest until you stack them. Meta's ads business ran at roughly $165B in 2025. A 6% recall improvement compounded across that base pays for years of capex. And the 2026 infrastructure guidance of $115-135B tells you how committed they are to letting the models get larger: MARM is already around one trillion parameters with sub-second inference latency, which is LLM-scale capacity applied to ad ranking.
Translation for the media buyer: the algorithm is getting sharper faster than most accounts’ creative programs are. I covered why the audience side stopped mattering in my piece on broad targeting + Advantage+ audience strategies; this post is the creative side of that same shift.
How Many Creatives Should You Run Under Andromeda?
Motion's 2026 Creative Benchmarks report analyzed 550,000+ Meta ads across $1.3B in spend (window: Sep 2025 – Jan 2026) and found the distribution most operators already suspect but rarely plan for: roughly 5% of ads become "real winners" (spend ≥10x the account median), about 6% drive the majority of spend in any given account, and ~50% get minimal or zero spend (Motion, 2026). As Andrew Foxwell summarized it: the algorithm is picking winners ruthlessly, and it's resolving them faster than the learning-phase framework most buyers still reference would suggest.
The operational read is counterintuitive. Under Andromeda, stuffing an ad set with 20+ creatives often hurts you, because the ranker fragments signal across too many candidates and can't resolve a winner fast enough. Monica Shukla at Mile Marker flagged this on AdExchanger in early 2026, and it tracks with what I see: a broad ad set with 6-10 genuinely different creatives outperforms one loaded with 20+ minor variants almost every time. The goal isn't creative volume — it's embedding-space coverage.
This is the part where the Motion distribution stops being a curiosity and starts being a planning constraint: if ~6% of your creatives will drive the majority of spend, you need a creative pipeline producing distinct-enough ads that a 6% winner rate actually surfaces real winners. I broke down the creative-generation side of this in my post on AI-generated ads on Meta and what full automation means for media buyers.
What Changes in Your Workflow When the Algorithm Reads Creative?
Here’s the decoder mapped to the three points in your workflow where it actually matters.
At the brief stage: stop briefing variants. Start briefing distinct concepts that hit different embedding-space neighborhoods. A "hook variation" where one ad says "lose 10 pounds" and another says "drop a size fast" is one concept with two overlays. A problem-aware testimonial, a demo, a founder-to-camera story, and a UGC-style comparison are four concepts — those spread across the embedding space and give Andromeda distinguishable candidates. Three diverse concepts beat ten variants of one.
At the edit stage: treat the first 3 seconds as if they were the only thing the encoder weights heavily, because a disproportionate share of its signal comes from there. That means aggressively differentiating the hook, the overlay, and the audio in the opening beat. If two of your edits share the same hook frame and opening voice-over, they'll likely land next to each other in the embedding space and cannibalize each other's learning.
At the launch stage: fewer, more different ads per ad set. I default to 6-10 creatives in broad Advantage+ ad sets, built around 3-5 distinct concepts. Let the algorithm pick the 6% winner, then double down on what worked by generating variants only within that concept’s embedding neighborhood. That’s how you compound. For the full framework I use to evaluate what worked, see my creative analysis systems post and the AI creative testing pillar.
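A sketch of that "variants only within the winner's neighborhood" filter, assuming you embed candidate briefs with any encoder of your choice. Both thresholds are pure judgment calls:

```python
import numpy as np

def in_winner_neighborhood(candidate: np.ndarray,
                           winner: np.ndarray,
                           lo: float = 0.5,
                           hi: float = 0.9) -> bool:
    """Close enough to inherit the winner's signal, far enough not to clone it."""
    sim = candidate @ winner / (np.linalg.norm(candidate) * np.linalg.norm(winner))
    return lo < sim < hi
```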
What Does REA Mean for How Fast This Will Change?
The Ranking Engineer Agent (REA) paper from March 17, 2026, matters because of what it implies about iteration speed, not because it's a creative-reading system itself. REA is an autonomous AI agent that runs ML experimentation on Meta's ads ranking stack. According to Meta, REA drove roughly a 2x improvement in model accuracy across six models and about a 5x gain in engineering output — three engineers running eight models where historically two engineers ran a single model each. That's the rate at which Meta's ranking system is improving itself.
The media buyer’s takeaway is uncomfortable but clear. The ranking model you’re optimizing against this quarter will not be the same one you’re optimizing against next quarter, and the gap between them is getting wider. As I put it in an earlier piece, operators fighting the ranking system with manual audience layers are fighting an AI that rewrites itself faster than they can test. The work that compounds now is on the creative input side — because that’s the lever you actually control, and it’s the one the algorithm keeps weighting more heavily. More context on this in my piece on AI agents inside Meta Ads Manager.
Frequently Asked Questions
Does Andromeda read static images the same way it reads video?
No. The feature vector for a static image is thinner — composition, OCR, embedding-space placement, format. Video adds motion, audio entities, and temporal attention patterns, which gives the encoder substantially more to work with. In 2026, video creative generally gets richer retrieval matching than static, which is one reason Motion’s 2026 benchmarks show video dominating the winners tier.
How is Andromeda different from Advantage+ audience targeting?
Andromeda is the retrieval layer — the first stage that decides which ads are even candidates for a given user. Advantage+ audience is an ad-set configuration that tells Andromeda how much latitude to take with targeting. Same system, different control points. The full explainer is here.
Should I still use ASC or manual campaigns under Andromeda?
Advantage+ Shopping Campaigns give Andromeda the most freedom to match creative-to-user, which tends to compound well with the creative-first shift. Manual campaigns still have a role for testing narrow hypotheses, but for scaled prospecting, ASC + broad targeting is what I default to.
Does uploading more creatives hurt or help performance?
It depends on differentiation, not count. Twenty near-identical variants tend to fragment signal and slow winner resolution (flagged by Shukla on AdExchanger, 2026). Six truly different concepts almost always outperform them.
How do I know if my creative is actually covering the embedding space?
You can't see the embedding space directly, but you can proxy it by forcing concept diversity — different actors, settings, hooks, pain points, and formats per ad set. If you can describe all your creatives in the same sentence, they're probably embedding-space neighbors.
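In practice I proxy this with text embeddings of one-line creative descriptions. This is a rough heuristic of mine, not a Meta tool; the sentence-transformers model and the 0.8 threshold are arbitrary choices:

```python
from sentence_transformers import SentenceTransformer  # any text encoder works here

descriptions = [
    "UGC testimonial, woman in kitchen, hook: 'I almost returned this'",
    "Founder-to-camera story, office setting, hook: 'We nearly went under'",
    "Hands-only product demo, hook: 'Watch the stain disappear'",
    "Split-screen comparison against the leading brand",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(descriptions, normalize_embeddings=True)
similarities = embeddings @ embeddings.T

for i in range(len(descriptions)):
    for j in range(i + 1, len(descriptions)):
        if similarities[i, j] > 0.8:  # likely neighbors; rework or cut one
            print(f"too similar ({similarities[i, j]:.2f}): #{i} vs #{j}")
```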
For a deeper dive, see my guides on the Meta AI agent stack in 2026 (mapping REA, Advantage+, and Andromeda), how to use Manus AI in Meta Ads Manager (and what the China block changes), whether you should opt out of Meta's Advantage+ AI creative auto-tweaks, and how to test Meta ads creative on a low budget with an AI-powered framework for DTC brands under $10k/month.
The Bottom Line
Andromeda and MARM have moved Meta’s ads system toward reading your creative the way a recommendation system reads a piece of content: dense feature vectors, neighbor matching, LLM-scale ranking. The +22% Advantage+ ROAS lift and 5-6% winner rate in Motion’s 550K ad study are telling the same story from two directions: the algorithm is getting better at finding winners, and the winners are concentrating. The media buyers who compound in 2026 are the ones who feed Andromeda distinguishable creative and stop trying to out-target the retrieval engine. The decoder isn’t complicated — it’s just different from how we used to run accounts.