A Facebook ad account audit is a structured, top-to-bottom review of a Meta Ads account that diagnoses where ad budget is leaking and prioritises the fixes that will most improve return on ad spend. A complete audit is not a campaign review. It checks the underlying infrastructure your campaigns sit on.
The 9-pillar framework in this guide:
- Account access and hygiene
- Pixel, Conversions API, and Aggregated Event Measurement
- Attribution settings
- Account and campaign structure
- Audience architecture
- Creative inventory and fatigue
- Bidding, budgets, and pacing
- Placements and delivery
- Reporting and cross-platform sanity-check
Each pillar below has a checklist, the exact thresholds that separate healthy from broken, the fix path, and the revenue impact of ignoring it. Total read time: 21 minutes.
Most Facebook ad audit guides assume you run an e-commerce store. This one does not. The framework below works just as well for lead-gen, B2B SaaS, services, and local accounts as it does for direct-to-consumer brands — because the infrastructure that makes a Meta Ads account healthy is the same regardless of vertical. Tracking, attribution, structure, audience logic, and creative hygiene either work or they do not.
We have run this audit on accounts spending five thousand dollars a month and accounts spending three hundred thousand dollars a month. The findings cluster around the same handful of issues: a Conversions API that is either missing or improperly deduplicated, Aggregated Event Measurement priorities that nobody has touched since setup, attribution windows that are inconsistent across campaigns, audience overlap that quietly inflates CPMs by twenty to forty percent, and creatives that fatigued weeks before anyone noticed.
None of these are exotic. All of them are fixable. Most of them are leaking money right now in your account. The point of this guide is to give you a framework that finds every one of them in a single, structured pass — so you can either fix them yourself or know exactly what a paid audit should be checking when you hire one.
If you run an e-commerce brand specifically and want the vertical-specific deep-dive, our Facebook ads audit guide for e-commerce is the companion to this one. If you want the scannable 60-point checklist version, see the Meta ads audit checklist.
What a Facebook Ad Account Audit Actually Is (and What It Is Not)
A Facebook ad account audit is a structured diagnostic of the entire account — not a campaign performance review, not a creative critique, not a media plan. It checks the plumbing. It asks: is this account configured to run ads profitably, and if not, where are the leaks?
The distinction matters because most under-performing accounts are not under-performing because of a single bad campaign. They under-perform because something at the account level is broken — usually tracking, attribution, or structure — and that broken thing is silently degrading the performance of every campaign in the account. Tuning individual campaigns when the account-level infrastructure is broken is like rearranging deck chairs on the Titanic.
A campaign review optimises within the existing setup. An account audit decides whether the existing setup is worth optimising or whether it needs to be rebuilt.
When to Run a Facebook Ad Account Audit
Five trigger events make an audit non-negotiable:
- A sudden ROAS or CPA shift. If your performance changed by more than fifteen percent in a two-week window without an obvious cause (new launch, seasonal effect, deliberate budget change), audit before you tune. The cause is almost always upstream.
- A change of manager or agency. Whenever a new person takes over an account, audit immediately. Inherited accounts are full of things the previous owner knew about but did not document — paused-but-not-archived campaigns, custom UTMs, audience definitions that depend on undocumented rules.
- A planned scale step. Going from ten thousand to fifty thousand a month, or fifty to two hundred, exposes every structural weakness that did not matter at smaller spend. Audit before you scale, not after.
- A platform update. Major Meta platform shifts — new Advantage+ defaults, attribution changes, AEM re-architectures — rewrite the rules of what is optimal. Audit within thirty days of any major platform release.
- An iOS or browser privacy update. Anything that affects browser tracking — Safari ITP iterations, Chrome cookie deprecation steps, iOS releases — degrades Pixel data and forces a Conversions API health check.
Outside of these triggers, run a full audit quarterly. Accounts spending over twenty-five thousand dollars per month benefit from a monthly mini-audit limited to the three pillars most likely to drift: tracking, attribution, and creative fatigue.
1. Account Access and Hygiene
Access is the pillar nobody audits — until an ex-employee runs a competitor campaign from your ad account, or an offboarded agency keeps a partner link active for ten months and pulls your performance data into a pitch deck. Access is not just a security checklist item. It is the foundation of who can see, change, and break the account that runs your revenue.
What to Check
- Every user in Business Manager — admins, employees, and finance roles — has a clear, current reason to be there
- Two-factor authentication is enforced for every admin, not optional
- No partner agencies retain access months after offboarding
- Ad account, Pixel, page, catalog, and domain ownership all sit inside the same Business Manager
- Payment methods are current, with a backup card on file to prevent campaigns auto-pausing on failed billing
- Naming conventions exist and are followed — campaigns, ad sets, and ads are findable by name without opening them
- The account-level spending limit (if set) reflects current monthly spend, not a number from two years ago
What Bad Looks Like
Twelve people listed as admins, four of whom no longer work at the company. Two ex-agencies retain partner access. Two-factor authentication is off for the founder account. The Pixel is owned by a personal Business Manager that belongs to a freelancer who set up the account three years ago. Campaigns are named "Test 1", "Test 1 - copy", and "FINAL FINAL v3". There is no backup payment method, so campaigns auto-paused last quarter when the primary card expired and nobody noticed for four days.
What Good Looks Like
Every Business Manager user is current, with two-factor authentication enforced for admins. Ex-staff and former agencies have been removed within seven days of offboarding. The Pixel, ad account, page, catalog, and domain are all owned by the brand's Business Manager — not a partner's. Naming conventions are documented and followed. Two payment methods are on file. Campaign names follow a consistent pattern that makes filtering and reporting straightforward.
How to Fix It
Open Meta Business Manager, go to Business Settings, and audit People, Partners, and Pages one by one. Remove anyone without a current, justified reason for access. Enforce two-factor authentication on admins. If your Pixel or ad account is owned by an external Business Manager, request a transfer to your own — this is the single most important fix because losing access to your own Pixel means losing access to your audience-build history. Document a naming convention and rename existing campaigns over the next two weeks. Add a backup payment method.
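For teams starting from nothing, here is a sketch of what "documented" can look like in practice. The pattern and tokens below are a hypothetical convention, not a Meta requirement; adjust them to your own taxonomy and run the check against an export of your campaign names.

```python
import re

# Hypothetical convention: TYPE-EVENT-AUDIENCE-YYYY-MM,
# e.g. "PROSP-PUR-Broad-2026-01". Every token is illustrative.
NAME_PATTERN = re.compile(
    r"^(PROSP|RETARG|BRAND)-"   # campaign type
    r"(PUR|LEAD|ATC)-"          # optimisation event
    r"[A-Za-z0-9]+-"            # audience shorthand
    r"\d{4}-\d{2}$"             # launch year-month
)

def non_conforming(campaign_names):
    """Return every campaign name that breaks the convention."""
    return [n for n in campaign_names if not NAME_PATTERN.match(n)]

print(non_conforming(["PROSP-PUR-Broad-2026-01", "Test 1 - copy"]))
# -> ['Test 1 - copy']
```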
Revenue Impact
Hygiene fixes do not lift ROAS directly. They prevent catastrophic loss. We have seen a brand lose three months of audience-build data because their ex-agency deleted the Pixel during an offboarding dispute. We have seen a four-day campaign outage because of an expired card and no notification. The expected value of fixing this pillar is not measured in incremental ROAS — it is measured in not waking up to a five-figure preventable problem.
2. Pixel, Conversions API, and Aggregated Event Measurement
This is the pillar everything else depends on. If tracking is broken, every metric in the account is a fiction. The algorithm optimises against whatever conversion signal it receives — accurate or not — and nothing else you do downstream can compensate for bad upstream data.
The post-iOS 14.5 environment shrank browser-based Pixel data by twenty to sixty percent depending on audience device mix. Conversions API restores the missing signal by sending events server-side, but only if it is configured correctly and deduplicated against the browser Pixel. Aggregated Event Measurement (AEM) then determines which event Meta optimises toward when a user has not opted into tracking. All three have to work together.
What to Check
- Pixel fires on every priority event: Page View, View Content, Add to Cart, Initiate Checkout, Purchase, Lead (and any custom events you optimise toward)
- Conversions API is active and sending server-side events for every priority event
- Event Match Quality (EMQ) is 6.0 or higher for every priority event in Events Manager
- Browser Pixel and CAPI events are deduplicated using a shared event_id
- Aggregated Event Measurement priority places your highest-value event (usually Purchase or Lead) in the top slot
- Domain is verified in Business Manager
- Meta-reported conversion volume is within ten to fifteen percent of your backend (Shopify, Stripe, CRM)
What Bad Looks Like
Pixel is installed, Conversions API is not configured. Or CAPI is technically active but missing customer parameters (no hashed email, no IP, no user agent), leaving Event Match Quality stuck at 4.2. Deduplication is broken — Meta-reported purchases are 1.6x your actual order count because every conversion is being counted twice. AEM events are in default order, so Meta is optimising for "Add to Cart" instead of "Purchase". Domain is not verified, and nobody has noticed because the warnings are buried in Events Manager.
What Good Looks Like
Pixel and CAPI both fire for every priority event. EMQ is 7.0+ across the board. Deduplication via shared event_id holds Meta-reported volume within ten to fifteen percent of backend reality. AEM priority is explicitly configured with the highest-value event first. Domain verified. A weekly cross-check between Meta data and Shopify (or your CRM) is part of the reporting routine, with a documented variance band so drift gets caught early.
How to Fix It
If CAPI is not set up, install it. Shopify, WooCommerce, and BigCommerce all have native CAPI integrations now — use those before resorting to a custom server-side implementation. For more control, route through a server-side Google Tag Manager container. When configuring CAPI, send every available customer parameter (hashed email, hashed phone, client IP, user agent, fbc/fbp). For deduplication, generate a unique event_id per event and pass it identically from both browser and server. In Events Manager, manually set AEM priority with your primary conversion event at the top. Verify your domain. Cross-reference Meta-reported volume against backend weekly until the variance band is stable. The official Meta reference documents are Conversions API setup and Aggregated Event Measurement.
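For illustration, here is a minimal Python sketch of the server-side Purchase event described above, with hashed customer parameters and a shared event_id. The pixel ID, access token, and API version are placeholders; in practice the native Shopify/WooCommerce integrations or a server-side GTM container should generate this for you.

```python
import hashlib
import time

import requests

PIXEL_ID = "YOUR_PIXEL_ID"        # placeholder
ACCESS_TOKEN = "YOUR_CAPI_TOKEN"  # placeholder

def sha256(value: str) -> str:
    """Meta expects identifiers normalised (trimmed, lowercased), then SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

def send_purchase(order_id: str, email: str, ip: str, user_agent: str,
                  value: float, currency: str = "USD") -> dict:
    event = {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        # The browser Pixel must pass the same ID --
        # fbq('track', 'Purchase', {...}, {eventID: order_id}) --
        # so Meta can deduplicate the two copies of the event.
        "event_id": order_id,
        "action_source": "website",
        "user_data": {
            "em": [sha256(email)],
            "client_ip_address": ip,
            "client_user_agent": user_agent,
        },
        "custom_data": {"currency": currency, "value": value},
    }
    resp = requests.post(
        f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",  # pin a current API version
        json={"data": [event], "access_token": ACCESS_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()
```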
Revenue Impact
Going from broken to healthy tracking typically lifts reported and actual ROAS by fifteen to thirty-five percent within four to six weeks. The mechanism is not that ads suddenly perform better — it is that the algorithm finally optimises against complete data. We audited an account spending forty-five thousand a month that was missing forty-two percent of conversions due to a broken CAPI deduplication. Fixing tracking alone moved attributed ROAS from 2.1x to 3.4x without changing a single ad, audience, or budget.
For Business Owners
Ask the person managing your Meta Ads what your Event Match Quality score is for the Purchase event. If they do not know, or if it is below 6.0, the performance numbers they are reporting to you are partial. Fixing this is not optional. It is the highest-leverage change in the account.
3. Attribution Settings
Attribution is the lens through which Meta reports your performance. Change the lens, change the picture. Two campaigns with identical real-world performance can show different ROAS numbers if their attribution windows are different — and most accounts have inconsistent windows set across campaigns without anyone realising.
The post-iOS default is 7-day-click plus 1-day-view. Older accounts often still carry 28-day-click windows from before Meta deprecated them, or 1-day-click windows set during early iOS panic that are now too narrow to capture the full conversion path. Both produce systematically biased reporting.
What to Check
- Every campaign uses the same attribution window (default: 7-day-click + 1-day-view)
- Comparison window in Ads Manager reporting matches optimisation window — you are not comparing 7-day-click optimisation against 1-day-click reporting
- A second, independent attribution source is wired up: GA4 with proper UTMs, Triple Whale, Northbeam, or a custom server-side attribution layer
- A documented variance band exists between Meta-reported and independent-source revenue (typical: ten to twenty-five percent)
- UTM parameters are applied consistently at the campaign or ad set level, not haphazardly per-ad
What Bad Looks Like
Three different attribution windows in use across the account: prospecting on 7-day-click, retargeting on 1-day-click, brand on 28-day-click. Reports from the agency use a different window than the optimisation setting. No independent attribution source — Meta is the only voice in the room. UTMs are missing or inconsistent, so GA4 cannot reconcile traffic against Meta-reported clicks. The team has no idea whether Meta is over-reporting by twenty percent or sixty percent because they have nothing to compare it to.
What Good Looks Like
All campaigns on a unified 7-day-click + 1-day-view window. Reporting and optimisation windows match. UTMs follow a documented schema, applied at campaign or ad set level. A second attribution source produces a weekly comparison; variance lives inside a known band and breaches trigger an investigation. The team has calibrated trust in the Meta number — they know how much to discount it for cross-channel conversations.
How to Fix It
Standardise the attribution window. Update each campaign to 7-day-click + 1-day-view unless you have a specific reason to deviate (long consideration cycles for B2B can justify wider; subscription LTV-driven testing can justify narrower). Update reporting comparison windows to match. Implement consistent UTM tagging using Meta's dynamic URL parameters at the campaign level so every ad inherits the same structure. Wire up a second source — GA4 is the minimum bar; a dedicated attribution platform is better — and document the expected variance band.
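As one example of a documented schema: the macros below are Meta's dynamic URL parameters, which expand per ad at delivery time, while the specific utm field mapping is just one reasonable choice.

```python
# Paste this template into the "URL parameters" field at campaign level;
# every ad underneath inherits it, so GA4 sees a consistent structure.
UTM_TEMPLATE = (
    "utm_source=facebook"
    "&utm_medium=paid"
    "&utm_campaign={{campaign.name}}"
    "&utm_content={{ad.name}}"
    "&utm_term={{adset.name}}"
)
```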
Revenue Impact
Attribution fixes do not increase actual revenue. They correct what you believe about revenue, which then fixes downstream decisions. We have seen accounts pause profitable campaigns because 1-day-click reporting under-reported their ROAS by forty percent. The reverse also happens: campaigns kept alive on inflated 28-day-click numbers that were actually losing money. Standardising attribution typically reveals one or two campaigns the team has been wrong about.
4. Account and Campaign Structure
Meta's 2026 algorithm rewards consolidation. The hyper-segmented account structure that worked in 2019 — fifteen campaigns, narrow interest stacks, separate ad sets per persona — is now the single biggest reason accounts under-perform. Each ad set needs roughly fifty conversion events per week to fully exit Learning Phase. Fragment a budget across thirty ad sets and most of them never reach that threshold.
The right structure balances consolidation with diagnostic clarity. Too consolidated and you cannot tell what is working. Too fragmented and nothing works long enough to learn from.
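A back-of-envelope helper makes the fragmentation math concrete; the budget and CPA figures are assumptions for illustration.

```python
def max_ad_sets(monthly_budget: float, target_cpa: float,
                conversions_needed: int = 50) -> int:
    """How many ad sets a budget can feed past the roughly
    50-conversions-per-week Learning Phase threshold."""
    weekly_budget = monthly_budget * 12 / 52
    weekly_cost_per_ad_set = conversions_needed * target_cpa
    return int(weekly_budget // weekly_cost_per_ad_set)

# A $30,000/month account at a $40 CPA supports about 3 ad sets, not 30.
print(max_ad_sets(30_000, 40))  # -> 3
```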
What to Check
- Total active campaigns: no more than 5–8 for accounts under fifty thousand a month, no more than 12–15 for accounts up to two hundred thousand a month
- Each campaign maps to a single funnel stage and a single objective
- No ad set has been stuck in Learning Limited longer than seven days
- Either a deliberate Advantage+ Shopping (or Advantage+ Sales) implementation, or a deliberate manual structure — not a half-mix that competes with itself
- Clear separation between prospecting and retargeting, or a clear reason for combining them
- Naming conventions encode campaign type, objective, audience, and launch date
What Bad Looks Like
Eighteen active campaigns. Six of them stuck in Learning Limited. Naming is "Sales Campaign 4" next to "Copy of Sales Campaign 4". An Advantage+ Shopping campaign running alongside three manual prospecting campaigns targeting the same audience, all bidding against each other. No documented logic for why a campaign exists, no clear owner, no archive process. The team adds new campaigns when something is not working instead of fixing the existing ones.
What Good Looks Like
Five to seven campaigns. Each one has a documented purpose. Either Advantage+ is doing the prospecting work and manual campaigns handle the niches Advantage+ cannot reach, or manual campaigns do the prospecting with a clean broad-audience setup and Advantage+ is not running. Ad sets receive enough conversions per week to exit Learning. Names follow a documented pattern. Underperformers are paused and archived, not duplicated.
How to Fix It
Consolidate. Take a list of every active campaign and ask: what is this for? Anything without a clear answer pauses. Combine ad sets that target overlapping audiences with the same objective. Decide deliberately whether Advantage+ Shopping is the prospecting engine or whether you run manual — do not run both pulling in opposite directions. Document a naming convention and rename. For the deeper consolidation playbook, our e-commerce campaign structure guide walks through the architectures we use across spend tiers.
Revenue Impact
Consolidating from fifteen-plus campaigns to five to seven typically lifts ROAS by ten to twenty-five percent within three weeks — not because consolidation is magic, but because the algorithm finally has enough data density per ad set to optimise. Learning Limited ad sets under-perform their potential by twenty to forty percent.
5. Audience Architecture
Audience strategy in 2026 looks almost nothing like it did in 2019. The interest-stack era is over. Meta's algorithm finds the right audience faster than any manually defined interest combination — provided you feed it accurate conversion signal (Pillar 2) and do not fight it with overlapping ad sets. The audit job is to remove the friction that prevents the algorithm from doing what it is now built to do.
What to Check
- Audience overlap across active prospecting ad sets is under 30% (use Meta's Audience Overlap tool)
- Broad audience is part of the prospecting mix — not the entire mix, and not absent
- Lookalikes use 1–3% similarity for prospecting and only update when source seed crosses meaningful thresholds (every 5–10K new customers, not weekly)
- Retargeting audiences exclude existing customers and high-frequency repeat exposures
- Custom audiences are refreshed — website (Pixel-based) audiences retain users for at most 180 days, so an audience built once and never rebuilt quietly cycles out and its displayed size is misleading
- Exclusion stack is documented: who gets excluded from what, and why
What Bad Looks Like
Twenty interest-stack ad sets with sixty percent overlap, all bidding against each other. No broad audience anywhere. Lookalikes built on a Pixel event that has not fired correctly in a year. Retargeting pulling in existing customers because the exclusion list was never set up. CPMs twenty to forty percent higher than benchmark because the account is competing with itself in every auction.
What Good Looks Like
Two or three prospecting ad sets: one broad, one 1% lookalike on a healthy purchase seed, one interest-based for niches the algorithm cannot reach organically. Overlap below 25%. A clean retargeting ad set with proper exclusions. Custom audiences refreshed quarterly. Documented exclusion logic so new ad sets inherit the same rules.
How to Fix It
Run the Audience Overlap tool on every active prospecting ad set. Anything above thirty percent overlap gets consolidated or removed. Add or expand a broad-audience ad set to give the algorithm room to find buyers it would not have found through interest targeting. Rebuild lookalikes off a clean Purchase audience (which only works if Pillar 2 is solid). Document the exclusion stack and apply it consistently.
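If you want to sanity-check overlap outside the Meta tool, you can only do it on audiences built from first-party data, since Meta audiences cannot be exported. A rough sketch of the calculation:

```python
from itertools import combinations

def overlap_pct(a: set, b: set) -> float:
    """Share of the smaller audience that also sits in the larger one."""
    if not a or not b:
        return 0.0
    return 100 * len(a & b) / min(len(a), len(b))

# Hypothetical first-party seed lists (customer IDs, hashed emails, etc.).
audiences = {
    "lookalike_seed": {"u1", "u2", "u3", "u4"},
    "interest_stack": {"u3", "u4", "u5"},
    "broad_seed":     {"u6", "u7"},
}

for (name_a, a), (name_b, b) in combinations(audiences.items(), 2):
    pct = overlap_pct(a, b)
    flag = "  <- consolidate" if pct > 30 else ""
    print(f"{name_a} x {name_b}: {pct:.0f}%{flag}")
```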
Revenue Impact
Eliminating audience overlap typically reduces CPMs by fifteen to thirty-five percent within three weeks. On a thirty thousand a month account, that is roughly four to ten thousand a month back into impressions that actually drive incremental sales rather than self-competition.
6. Creative Inventory and Fatigue
Creative is the single biggest lever in a 2026 Meta Ads account. The algorithm is good enough at delivery and targeting that creative is now the limiting factor on performance for most accounts. Fatigue is the silent killer: a creative that worked three months ago is now dragging the campaign down, but reports still attribute past performance to it.
What to Check
- Frequency on prospecting ad sets is under 3.0 — above that, fatigue compounds quickly
- 14-day rolling click-through rate is stable or rising on every active ad
- Cost per result on individual ads is not climbing week-over-week
- Creative pipeline produces 3–5 new ads per ad set per month minimum
- Format mix includes static, motion, UGC, and at least one founder-led or talking-head asset
- Hook (first 3 seconds) varies across creatives — not every video opens with the same product shot
- Winner ads are systematically iterated on, not just left to fatigue
What Bad Looks Like
Eight active ads, all launched in the same week three months ago. Frequency above 5.0 on the top spender. CTR has halved over the past month and CPA has nearly doubled, but nobody has caught it because reports show the lifetime average. No creative pipeline — the team plans to "refresh creative next quarter". Every video opens with the same product hero shot.
What Good Looks Like
Twelve to twenty active ads with rotating freshness — no ad older than six weeks unless it is an outlier winner. Frequency held below 3.0 on prospecting. Weekly fatigue review identifies any ad with two of three negative signals (rising frequency, declining CTR, rising CPA). A documented creative pipeline delivers fresh hooks weekly. Format mix is balanced. Winners are iterated — same proven concept, new hook, new opening frame.
How to Fix It
Build a fatigue dashboard that flags any ad hitting two of three negative signals. Pause flagged ads. Replace with iterations on proven winners — new hook, new opening frame, new format — before introducing brand-new concepts. The deeper playbook is in our creative fatigue audit guide. If you do not have a creative pipeline, build one — the 3-C content framework is the system we use to generate ideas at scale.
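A minimal version of the two-of-three flag, assuming you export each ad's current and previous 14-day windows from Ads Manager into rows like the dataclass below:

```python
from dataclasses import dataclass

@dataclass
class AdWindow:
    """One ad's rolling stats: current 14-day window vs the previous one."""
    name: str
    frequency_now: float
    frequency_prev: float
    ctr_now: float
    ctr_prev: float
    cpa_now: float
    cpa_prev: float

def fatigue_signals(ad: AdWindow) -> list[str]:
    signals = []
    if ad.frequency_now > ad.frequency_prev:
        signals.append("rising frequency")
    if ad.ctr_now < ad.ctr_prev:
        signals.append("declining CTR")
    if ad.cpa_now > ad.cpa_prev:
        signals.append("rising CPA")
    return signals

# Hypothetical export row: frequency up, CTR down, CPA up -> flagged.
for ad in [AdWindow("UGC-hook-A", 3.4, 2.9, 0.9, 1.3, 41.0, 28.0)]:
    signals = fatigue_signals(ad)
    if len(signals) >= 2:
        print(f"PAUSE CANDIDATE: {ad.name}: {', '.join(signals)}")
```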
Revenue Impact
Replacing fatigued top-spenders with fresh iterations of the same winning concepts typically lifts ROAS by twenty to forty percent within fourteen days. The largest single-fix lift we have measured was eighty-three percent, on an account where the same three creatives had been carrying eighty percent of spend for five months.
Halfway through the framework. If you would rather skip the spreadsheet and have us run all nine pillars against your account, we do free 48-hour Quick Scans — public-data only, no account access required, video walkthrough delivered.
Money-back guarantee: if we do not surface at least three things you did not already know, we send you fifty dollars. The next four pillars cover the rest of the framework.
Get your free Quick Scan
7. Bidding, Budgets, and Pacing
Bidding is where most teams over-engineer. The default setting — Highest Volume (formerly "Lowest Cost") with no bid cap — is the right choice for almost every account most of the time. Bid caps and cost caps are advanced tools that need conversion density to work, and the accounts that need bid controls usually do not have the data density to use them safely.
What to Check
- Each ad set delivers at least 50 conversions per week — the threshold for fully exiting Learning Phase
- Bidding strategy matches the campaign objective (Highest Volume for prospecting at most spend tiers, Cost Cap only when you have stable historical CPA data to anchor it)
- Campaigns are not hitting daily delivery caps before the end of the day in the ad account's timezone (a sign of under-budgeting on a winning campaign)
- Budget changes follow a +20% / -20% rule — larger swings reset Learning Phase
- Campaign Budget Optimization (CBO) is used where it makes sense, ad-set budgets (ABO) where you need diagnostic visibility
- No ad set has been paused for over 30 days — pause is not archive; clean it up
What Bad Looks Like
Cost caps applied to ad sets receiving twelve conversions a week — the algorithm has nothing to optimise against, so delivery throttles. Budget doubles and triples mid-week, resetting Learning repeatedly. Top-performing ad set is hitting its daily cap by 4pm and missing the evening conversion window every day. Forty paused-but-not-archived ad sets clog the interface. CBO and ABO mixed seemingly at random across the account.
What Good Looks Like
Highest Volume is the default. Cost caps appear only on ad sets with three weeks of stable conversion data. Budgets adjust in 20% steps, in writing, with a documented reason. Top-spending ad sets are consistently spending their full daily budget — never hitting the cap mid-day, never under-pacing. CBO is used at the prospecting layer, ABO at retargeting. Paused ad sets get archived weekly.
How to Fix It
Switch every bid-capped ad set without sufficient data density to Highest Volume. Set a budget review cadence — weekly, with documented changes. Where a winning ad set hits its cap before end-of-day, raise its budget twenty percent. Archive paused ad sets older than thirty days. Decide deliberately on CBO vs ABO at the campaign level rather than per-instinct.
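The 20% step rule is easy to encode so budget moves stop being judgment calls. A small sketch:

```python
def next_budget(current: float, target: float, max_step: float = 0.20) -> float:
    """Move a daily budget toward its target in steps of at most +/-20%,
    so a single large swing does not reset Learning Phase."""
    ceiling = current * (1 + max_step)
    floor = current * (1 - max_step)
    return round(min(max(target, floor), ceiling), 2)

# Scaling $100/day toward $250/day takes several reviews, not one jump:
budget = 100.0
for review in range(1, 6):
    budget = next_budget(budget, 250.0)
    print(f"review {review}: ${budget}")
# -> 120.0, 144.0, 172.8, 207.36, 248.83
```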
Revenue Impact
Bidding fixes are usually small — five to twelve percent ROAS improvement — but they recover spend that is being throttled at the auction level. The largest single bidding fix we have measured was a campaign that was capping out at 4pm; raising the budget twenty percent recaptured fifteen percent of weekly revenue.
8. Placements and Delivery Diagnostics
Placements are where blended numbers hide bad performance. Advantage+ Placements is the right default, but it does not guarantee that every placement is contributing — it only optimises overall delivery. Some accounts have placements quietly burning fifteen percent of budget at 0.3x ROAS while the blended number looks acceptable.
What to Check
- Placement-level breakdown by spend, impressions, ROAS, and CPA across Facebook Feed, Instagram Feed, Instagram Stories, Instagram Reels, Facebook Reels, Marketplace, Audience Network, and Messenger
- Device breakdown — mobile vs desktop performance differences for accounts where landing-page experience varies by device
- Creative format matches placement — vertical 9:16 for Stories and Reels, square 1:1 or 4:5 portrait for Feed
- No single placement consumes 60%+ of spend without delivering proportional results
- Audience Network is reviewed specifically — it is the placement most likely to under-perform and most likely to be left on by default
What Bad Looks Like
Audience Network consuming twelve percent of spend at one-fifth of the blended ROAS, and nobody has looked at the placement breakdown in months. Vertical-format creatives running on Feed placements where they get cropped awkwardly. Mobile delivers 92% of spend but nobody has tested the mobile checkout flow recently — if a mobile checkout bug exists, every placement is silently capped by it.
What Good Looks Like
Monthly placement breakdown documented and reviewed. Under-performing placements are excluded at the ad set level (when you have data confidence), or accepted as cost of broad delivery (when you do not). Creative formats match placement aspect ratios. Mobile and desktop checkout flows are tested quarterly. Audience Network is either deliberately on with documented performance, or off.
How to Fix It
Open Ads Manager, run a Placement breakdown for the last 30 days, and identify any placement spending more than five percent of budget at less than half blended ROAS. Either exclude it at the ad set level (with enough data) or upload placement-appropriate creative formats and re-test. Test mobile and desktop checkout paths. If Audience Network is on without a clear performance reason, exclude it.
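That filter is simple enough to script against a placement breakdown export. A sketch with hypothetical numbers:

```python
def wasteful_placements(rows, min_spend_share=0.05, roas_ratio=0.5):
    """Flag placements spending over 5% of budget at under half blended ROAS.
    `rows` is a list of dicts from a Placement breakdown export."""
    total_spend = sum(r["spend"] for r in rows)
    total_revenue = sum(r["revenue"] for r in rows)
    blended_roas = total_revenue / total_spend
    return [
        r["placement"] for r in rows
        if r["spend"] / total_spend > min_spend_share
        and (r["revenue"] / r["spend"]) < blended_roas * roas_ratio
    ]

rows = [
    {"placement": "instagram_feed",   "spend": 5000, "revenue": 17500},
    {"placement": "facebook_feed",    "spend": 3500, "revenue": 11200},
    {"placement": "audience_network", "spend": 1200, "revenue":   900},
]
print(wasteful_placements(rows))  # -> ['audience_network']
```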
Revenue Impact
Placement fixes typically recover three to twelve percent of wasted spend. Smaller than tracking or creative fixes, but the work is fast and the recovered budget compounds when redirected to placements that are actually converting.
9. Reporting and Cross-Platform Sanity-Check
Reporting is the audit's validation layer. If everything in pillars 1–8 is healthy, reporting is the proof. If something is broken, reporting catches it before it costs another month of budget. Most accounts rely on a single source — Meta's own dashboard — which is the equivalent of letting the team grade its own homework.
What to Check
- Meta-reported revenue compared weekly against backend truth (Shopify, Stripe, CRM)
- GA4 with proper UTMs as a second attribution voice
- A third source for higher-spend accounts: Triple Whale, Northbeam, Rockerbox, or a custom server-side attribution layer
- Variance band documented and monitored — alerts when weekly variance breaches the band
- Reporting cadence is regular (weekly minimum) and written down — not ad-hoc when something feels off
- Custom columns in Ads Manager surface the metrics that matter: ROAS, CPA, frequency, hook rate, CTR all-time vs 14-day, conversion rate
What Bad Looks Like
Meta is the only number anyone looks at. Variance against backend has not been checked in three months. GA4 is connected but the property has UTMs missing on seventy percent of clicks, so it is unusable for cross-reference. No documented cadence — reporting happens when the founder asks. Custom columns in Ads Manager are the default ones, missing frequency and hook rate.
What Good Looks Like
Weekly reconciliation between Meta-reported revenue and backend within a documented variance band (typical ten to twenty-five percent). GA4 with clean UTMs as second voice. A third attribution source for accounts over twenty-five thousand a month. Custom columns surface the diagnostic metrics. Reporting cadence is weekly, owned by a named person, with a written rule for what triggers a deeper review.
How to Fix It
Set up the weekly Meta-vs-backend reconciliation. Build it once, run it forever. Wire UTMs through Meta's dynamic URL parameters at campaign level so GA4 sees every click correctly. For accounts above the twenty-five thousand a month threshold, evaluate Northbeam, Triple Whale, or Rockerbox — the cost is meaningful but small relative to spend. Document the variance band and the rule for what breaches mean. Save a custom Ads Manager column set the team uses by default.
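The reconciliation itself is a few lines once both weekly revenue numbers are exported. A sketch with an assumed variance band:

```python
def weekly_variance(meta_revenue: float, backend_revenue: float) -> float:
    """Signed variance of Meta-reported revenue vs backend truth, as a percent."""
    return 100 * (meta_revenue - backend_revenue) / backend_revenue

# Hypothetical band: tolerate Meta over-reporting up to 25%, and
# under-reporting up to 10%; anything outside triggers an investigation.
BAND = (-10.0, 25.0)

v = weekly_variance(meta_revenue=48_200, backend_revenue=41_000)
status = "OK" if BAND[0] <= v <= BAND[1] else "INVESTIGATE"
print(f"variance {v:+.1f}% -> {status}")
# -> variance +17.6% -> OK
```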
Revenue Impact
Reporting fixes do not directly lift ROAS. They prevent the next slow-rolling failure — a tracking drift, a creative fatigue cycle, an attribution change — from going undetected for weeks. Across the portfolio of accounts we audit, having a healthy reporting layer cuts time-to-detection on performance drops from three weeks to three days.
DIY or Paid Audit: Which Makes Sense?
This framework is the same one we use internally at BTB Audits. There is nothing proprietary or hidden in it — any senior performance marketer can run it well. The questions worth asking before deciding DIY versus paid are time, perspective, and benchmark access.
DIY makes sense when: you have a senior in-house team, you have spend under twenty-five thousand a month, you have time to run a full audit (three to five hours of focused work), and you do not need external benchmark data.
A paid audit makes sense when: you are spending above twenty-five thousand a month and the cost of a missed fix dwarfs the audit fee, you are founder-led without an internal performance marketing hire, you have changed agencies recently and want an independent opinion before committing to the new one, or you simply do not have three to five hours to spare.
We have written about this trade-off in more depth in the real cost of free ad audits. The short version: most "free" audits are thirty-minute sales calls disguised as audits. There are exceptions — ours included — but the default assumption should be that "free" means "qualified for a sales pitch." A genuine free audit produces written, video, or report-format findings without requiring a discovery call.
Facebook Ad Account Audit FAQ
What is a Facebook ad account audit?
A structured, top-to-bottom review of a Meta Ads account that diagnoses where ad budget is leaking and prioritises the fixes that will most improve return on ad spend. A complete audit covers nine pillars: access, tracking, attribution, structure, audiences, creatives, bidding, placements, and reporting.
How is an account audit different from a campaign review?
A campaign review checks whether individual campaigns are performing against their goals. An account audit checks the underlying infrastructure: tracking, attribution, structure, access, and reporting. Campaigns can look healthy in a campaign review while an account audit reveals that conversions are double-counted, attribution windows are inconsistent, or the Pixel is missing thirty percent of events.
How long does a Facebook ad account audit take?
A self-audit takes three to five hours of focused work for accounts spending under fifty thousand a month. Larger accounts take eight to twelve hours. A professional audit delivered as a video walkthrough typically returns within forty-eight to seventy-two hours.
How often should I audit my Facebook ad account?
Quarterly at minimum. Run an immediate audit on any sudden ROAS drop, agency or manager change, planned scale step, major Meta platform update, or iOS/browser privacy update. Accounts above twenty-five thousand a month benefit from a monthly mini-audit on tracking, attribution, and creative fatigue.
What is the most important pillar of the audit?
Pixel and Conversions API health. Every other metric is downstream of tracking accuracy. If tracking is partial, broken, or duplicated, the algorithm optimises against incomplete data and every other fix you make is built on sand.
Do I need account access to run an audit?
A complete audit requires read-only access to Ads Manager, Events Manager, and Business Manager. A partial audit using only public data — such as our Quick Scan — can identify creative fatigue, landing-page issues, structural red flags visible in Meta Ad Library, and competitive positioning gaps without account access. The deeper diagnostic of tracking accuracy, audience overlap, and attribution requires access.
Can I audit my Facebook ad account myself?
Yes. The framework here is the same one used by professional auditors. The benefit of an outside audit is benchmark data, an independent perspective, and faster execution. The benefit of self-auditing is depth of context: you know your customers, margin structure, and historical campaigns better than any external auditor.
What are the most common audit findings?
Across the audits we have run, seven findings recur most often: Conversions API not configured or deduplication broken, AEM event priority left at default with Purchase not at the top, audience overlap above thirty percent, creative fatigue with frequency above three on prospecting, fragmented account structure with too many Learning Limited ad sets, attribution windows inconsistent across campaigns, and ex-agencies retaining partner access months after offboarding.
What is a free Facebook ad account audit?
A genuine free audit is a public-data review that delivers concrete findings without requiring account access or committing the advertiser to a retainer. Most "free" audits are sales-call qualifiers disguised as audits. Our Quick Scan is one example of a real free audit: it analyses the Meta Ad Library, landing pages, and visible creative patterns, then delivers a private video walkthrough with a Leak Score and findings within forty-eight hours.
What attribution window should I use?
7-day-click + 1-day-view as the primary window. This is Meta's post-iOS 14.5 default and gives the algorithm enough conversion data to learn from while staying close to user intent. Apply it consistently across every campaign so cross-campaign comparisons stay valid. Layer one independent attribution source (GA4, Triple Whale, Northbeam) to cross-reference weekly.
How do I check if my Conversions API is working?
Open Meta Events Manager and verify three things. First, server events should be actively firing for Purchase, Initiate Checkout, Add to Cart, and View Content. Second, Event Match Quality should be 6.0 or higher. Third, the deduplication rate should show that browser and server events are being matched on a shared event_id. If any of these three fails, CAPI is not delivering its full value.
How much does a paid Facebook ad audit cost?
Paid audits range from roughly $300 for a standalone audit to $2,500 for an audit bundled with implementation support. Our Forensic Report is $499 — full account audit, competitor intelligence, prioritised implementation plan, and a strategy call. The Forensic + Implementation tier is $2,499 and adds four weeks of monitoring and three bottleneck calls.
What does a Facebook ad audit template include?
A complete audit template includes the checklist of items to verify across all nine pillars, the threshold values that separate healthy from broken on every metric, a scoring rubric to convert the checklist into an account health score, and a prioritisation matrix for sequencing fixes by impact and effort. The framework on this page is that template — every section above doubles as a checklist item with explicit thresholds.
Should I audit Instagram ads separately?
No — Instagram ads run on the same Ads Manager, Pixel, and Conversions API as Facebook. The technical audit is identical. During Pillar 8 (placements), break results down by Instagram Feed, Stories, Reels, Facebook Feed, Marketplace, and Audience Network so the blended number does not hide an under-performing placement.
What happens after the audit?
A useful audit ends with a prioritised action plan. Sequence by impact-first, effort-second: tracking and attribution go first because everything else is downstream of accurate data, then structural consolidation, then audience and creative work, then bidding and placement tuning. Most accounts implement the highest-impact items in two to four weeks and see measurable ROAS lift in four to six weeks.
Skip the Spreadsheet — Get a Free Quick Scan
Every pillar in this guide comes from real findings in real Meta Ads accounts. Most accounts have at least four of these problems active right now, and each one is silently leaking budget every day it goes unfixed.
We run all nine pillars against your account in a free Quick Scan. Public-data only — no account access needed. You get a private video walkthrough, a Leak Score, and a prioritised list of findings within forty-eight hours. If we do not surface at least three things you did not already know, we send you fifty dollars. No retainer. No sales pitch.
Get your free Quick Scan
48-hour turnaround. Money-back guarantee. No account access required.
Continue Reading
Facebook Ads Audit for E-Commerce
The vertical-specific deep dive for D2C brands running Meta Ads.
Meta Ads Audit Checklist
The scannable, numbered companion checklist version of this framework.
Auditing for Creative Fatigue
Pillar 6, expanded — the metrics, timeline, and rotation playbook.