Ad Inventory Hygiene: Combining Account-Level Exclusions with Placement Reports

2026-02-17
9 min read

Operational playbook to audit placement reports, push account-level exclusions, and automate recurring cleanups to protect brand and performance.

Stop firefighting bad placements: a repeatable operational playbook

If your team wastes hours each week hunting down low-quality placements, or you’ve lost budget and brand trust to inappropriate inventory, this playbook is for you. In 2026, Google Ads’ account-level placement exclusions change the game — but the real win comes from pairing that feature with disciplined placement-report audits and automation. Below is a pragmatic, step-by-step operational workflow to audit placement reports, apply account-level exclusions, and automate recurring cleanups so you protect brand and performance at scale.

The 2026 context: why account-level exclusions matter now

In January 2026 Google Ads launched account-level placement exclusions, allowing advertisers to block placements once and have the block apply across Performance Max, Demand Gen, YouTube, and Display campaigns. That update answered a core pain point: placements were siloed at campaign and ad-group level, creating administrative overhead and missed guardrails as Google’s automated formats expanded.

Two trends make this playbook timely:

  • Automation-first buying: Performance Max and AI-based inventory selection increased reliance on account-level controls to protect brands.
  • Scale of inventory: more impressions across apps, connected-TV, and long-tail domains — manual blocking is no longer sustainable.

Top-level playbook (inverted pyramid)

  1. Pull placement reports and flag suspect inventory using clear thresholds.
  2. Classify placements into block, monitor, or whitelist buckets.
  3. Create or update an account-level exclusion list and apply it across the account.
  4. Automate the audit-exclude cycle: scheduled reports, scoring, and exclusion pushes with a dry-run step.
  5. Maintain governance: review queues, exception handling, and KPI tracking.

Step 1 — Audit placement reports: what to pull and how to prioritize

Start with a monthly (and weekly for large accounts) placement review. Use the Google Ads UI report builder or the Google Ads API (GAQL) and export to a CSV, BigQuery, or Google Sheets. Prioritize placements by spend and risk, not just impressions.

Fields to include

  • Placement URL or YouTube channel/playlist ID
  • Campaign & ad group
  • Impressions, clicks, cost, conversions, value, view-through conversions
  • Engagement metrics: CTR, view rate (for video), average CPV/CPC
  • Timestamp and device type
  • Content category / IAB category if available

Actionable filters and thresholds

Customize thresholds to your account, but use these starting rules to triage quickly:

  • High-spend, low-return: placements with >1% of account spend and ROAS < 30% of account target.
  • High impressions, zero conversions: >10k impressions and 0 conversions over 30 days.
  • High bounce / low engagement: video view rate < 10% or CTR < 0.05% on display after >5k impressions.
  • Brand-risk signals: placements with negative keywords in URL paths (e.g., hate, scam, illegal) or flagged IAB categories (sex, illicit drugs, extreme content).
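
The triage rules above can be sketched as a single flagging function. This is a minimal sketch, not a library API: `triage_flags`, the `ACCOUNT_SPEND` and `ACCOUNT_TARGET_ROAS` constants, and the dict field names are assumptions you would adapt to your own export schema.

```python
ACCOUNT_SPEND = 100_000.0     # assumption: total 30-day account spend in dollars
ACCOUNT_TARGET_ROAS = 4.0     # assumption: account ROAS target

def triage_flags(p):
    """Return the triage rules a placement trips (p is a dict of 30-day metrics)."""
    flags = []
    spend_share = p["cost"] / ACCOUNT_SPEND
    roas = (p["conv_value"] / p["cost"]) if p["cost"] else 0.0
    # High-spend, low-return: >1% of account spend and ROAS < 30% of target
    if spend_share > 0.01 and roas < 0.30 * ACCOUNT_TARGET_ROAS:
        flags.append("high_spend_low_return")
    # High impressions, zero conversions over the window
    if p["impressions"] > 10_000 and p["conversions"] == 0:
        flags.append("high_impr_zero_conv")
    # Low engagement: display CTR < 0.05% after >5k impressions
    ctr = p["clicks"] / p["impressions"] if p["impressions"] else 0.0
    if p["impressions"] > 5_000 and ctr < 0.0005:
        flags.append("low_engagement")
    # Brand-risk signal: negative keywords in the placement URL
    if any(term in p["url"] for term in ("hate", "scam", "illegal")):
        flags.append("brand_risk")
    return flags
```

Running each exported row through this function gives you a ready-made triage column to sort and filter on.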

GAQL example to pull placements (30-day window)

SELECT
  campaign.id,
  ad_group.id,
  group_placement_view.target_url,
  metrics.impressions,
  metrics.clicks,
  metrics.cost_micros,
  metrics.conversions,
  metrics.conversions_value
FROM group_placement_view
WHERE segments.date DURING LAST_30_DAYS
  AND metrics.impressions >= 100
ORDER BY metrics.cost_micros DESC

Step 2 — Classify placements: score and bucket

Turn raw placement lists into decision-ready buckets. Use a combined risk score (heuristic or ML) plus a business-rule layer.

Simple heuristic scoring formula

Score = (SpendWeight * normalized_spend) + (CTRWeight * inverse_ctr) + (ConvWeight * inverse_conv_rate) + (CategoryWeight * category_risk). Example weights: Spend 0.35, CTR 0.20, Conv 0.25, Category 0.20.

Then assign buckets:

  • Score > 0.75 — Auto-block candidate (high confidence)
  • 0.4 < Score ≤ 0.75 — Review & monitor (manual QA before blocking)
  • Score ≤ 0.4 — Keep or whitelist
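
The formula and buckets above translate directly into code. This is a sketch: the `ctr_benchmark` and `conv_benchmark` normalization constants are assumptions (the article does not define how `inverse_ctr` and `inverse_conv_rate` are normalized), so tune them to your account's baselines.

```python
WEIGHTS = {"spend": 0.35, "ctr": 0.20, "conv": 0.25, "category": 0.20}

def risk_score(normalized_spend, ctr, conv_rate, category_risk,
               ctr_benchmark=0.005, conv_benchmark=0.02):
    """Composite risk score in [0, 1]; higher = stronger block candidate.

    normalized_spend and category_risk are expected in [0, 1]; CTR and
    conversion rate are inverted against assumed account benchmarks.
    """
    inverse_ctr = max(0.0, 1.0 - min(ctr / ctr_benchmark, 1.0))
    inverse_conv = max(0.0, 1.0 - min(conv_rate / conv_benchmark, 1.0))
    return (WEIGHTS["spend"] * normalized_spend
            + WEIGHTS["ctr"] * inverse_ctr
            + WEIGHTS["conv"] * inverse_conv
            + WEIGHTS["category"] * category_risk)

def bucket(score):
    """Map a score onto the block / review / keep buckets."""
    if score > 0.75:
        return "auto_block"
    if score > 0.4:
        return "review"
    return "keep"
```

A placement with maximal spend share, zero engagement, and a high-risk category scores 1.0 and lands in the auto-block bucket; a cheap, converting placement scores near zero and is kept.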

Enrich signals to reduce false positives

  • Cross-check domains against third-party brand-safety feeds (IAB, Blocklists, proprietary lists).
  • Apply semantic checks: run landing-page content through an LLM classifier for sensitive topics.
  • Flag YouTube channels with disputed content by combining channel metadata and comments sentiment (if available).

Step 3 — Apply account-level exclusions (UI and API)

With Google Ads’ 2026 update, you can apply one exclusion list at the account level — no more repeating exclusions across dozens of campaigns. Use the UI for small changes and the API for scale and automation.

Before you push: run a dry run

Always perform a dry-run that simulates impact (how much spend will be excluded) and shows the campaigns affected. Store the dry-run output as the audit trail.

Quick UI steps (manual)

  1. Go to Tools & Settings > Shared Library > Placement Exclusions (or the new Account-level exclusions pane).
  2. Create or update an exclusion list; paste domains or channel IDs.
  3. Save and confirm the dry-run impact report.

API snippet (Python, conceptual)

Use the Google Ads API (not exact code but a pattern you can adapt). This example shows a scheduled script that upserts domains to an account-level exclusion list.

from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()
customer_id = "1234567890"  # customer IDs are passed to the API without dashes
exclusions = ["badsite.example", "clickfarm.example"]

def upsert_account_exclusions(client, customer_id, exclusions):
    """Create account-level negative placement criteria, one per domain."""
    service = client.get_service("CustomerNegativeCriterionService")
    operations = []
    for domain in exclusions:
        op = client.get_type("CustomerNegativeCriterionOperation")
        op.create.placement.url = domain
        operations.append(op)
    return service.mutate_customer_negative_criteria(
        customer_id=customer_id, operations=operations
    )

# Run against a test account (or behind a dry-run flag) before the real push
upsert_account_exclusions(client, customer_id, exclusions)

Note: Work with your engineering team or Google Ads API specialist to adapt this pattern to your account. API names and resource structures evolve; use dry-run and logging.

Step 4 — Automate recurring cleanups: architecture and scripts

Automation reduces time and risk. Build a simple pipeline that runs nightly or weekly:

  1. Scheduled job queries placements (GAQL).
  2. Scoring engine classifies placements into buckets.
  3. A dry run produces an impact report (sent to Slack/email).
  4. Approved items are pushed to the account-level exclusion list via API.
  5. All actions logged and versioned (CSV/BigQuery) for audit.

Minimal automation pseudo-script (workflow)

# Nightly job outline (pseudo)
1. Run GAQL to pull last 7/30 days placements
2. Apply scoring function and tag placements
3. Save candidate list to BigQuery
4. Run dry-run to estimate impacted spend
5. If automation policy threshold met (e.g., predicted savings > $X), push to account exclusions
6. Notify channel with summary and link to audit file
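
The nightly outline above can be condensed into one orchestration function. This is a sketch under stated assumptions: `nightly_cleanup`, its dict schema, and the injected `score_fn`/`push_fn` callables are hypothetical names standing in for your own GAQL pull, scoring engine, and exclusion-push step.

```python
def nightly_cleanup(placements, score_fn, push_fn,
                    min_predicted_savings=500.0, dry_run=True):
    """One pass of the audit-exclude loop over already-pulled placement rows.

    placements: list of dicts with at least "url" and "cost" (window spend).
    score_fn:   callable returning a risk score in [0, 1] for a placement.
    push_fn:    callable that receives the list of URLs to exclude.
    """
    # Step 2: tag auto-block candidates (score threshold from the playbook)
    candidates = [p for p in placements if score_fn(p) > 0.75]
    # Step 4: estimate impacted spend before touching the account
    predicted_savings = sum(p["cost"] for p in candidates)
    report = {
        "candidates": [p["url"] for p in candidates],
        "predicted_savings": predicted_savings,
        "pushed": False,
    }
    # Step 5: push only outside dry-run and above the savings threshold
    if not dry_run and predicted_savings > min_predicted_savings:
        push_fn(report["candidates"])
        report["pushed"] = True
    # Step 6: the returned report is what you'd post to Slack and archive
    return report
```

Running it once with `dry_run=True` yields the impact report for approval; a second invocation with `dry_run=False` performs the actual push if the policy threshold is met.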

Step 5 — Governance: avoid over-blocking and manage exceptions

Automation must be safe. Implement these governance controls:

  • Auto-block thresholds: Only auto-block when score > 0.9 and estimated excluded spend > $Y, or when a placement matches a high-risk category.
  • Manual review queue: Medium-risk placements go to a daily QA list for a human reviewer.
  • Whitelist exceptions: Maintain a domain whitelist for partners and known good publishers.
  • Rollback plan: Keep a 14–30 day undo window (store changes and who approved them).
  • Audit logs: Persist every change with timestamp, user/API caller, and dry-run output.
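
The auto-block policy in the first bullet is small enough to encode as a guard function. A minimal sketch: `should_auto_block`, `HIGH_RISK_CATEGORIES`, and the `min_spend` default stand in for the "$Y" threshold and category list your team defines.

```python
HIGH_RISK_CATEGORIES = {"extreme_content", "illicit_drugs", "adult"}  # assumption

def should_auto_block(score, estimated_spend, category, min_spend=100.0):
    """Auto-block only on very high confidence or a high-risk category match."""
    # High-risk categories bypass the score gate entirely
    if category in HIGH_RISK_CATEGORIES:
        return True
    # Otherwise require score > 0.9 AND material excluded spend
    return score > 0.9 and estimated_spend > min_spend
```

Everything this function rejects falls through to the manual review queue, which keeps false positives cheap.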

Measurement: what to track and expected impact

Measure both performance and operational efficiency:

  • Blocked spend prevented (monthly cost removed from low-quality placements)
  • Change in account-level CTR, conversion rate, and ROAS after exclusions
  • Brand-safety incidents (qualitative reports or third-party flags)
  • Hours saved in weekly ops (time previously spent on manual blocking)

Realistic short-term benefits (first 60–90 days): expect a reduction in wasted display/video spend of 10–30% on marginal placements, and a 10–20% improvement in conversion efficiency for the remaining inventory if you aggressively prune low-performing placements.

Case study (anonymized example)

Mid-market e-commerce account, monthly display/video spend $120k. Before account-level exclusions, the team manually applied campaign-level blocks and missed many long-tail placements.

  • Action: Implemented the audit pipeline with weekly scoring, auto-blocking at score > 0.9, manual review for 0.5–0.9.
  • Results (90 days): 23% reduction in wasted display/video spend, 18% increase in conversion rate on remaining placements, and saved ~6 hours per week of PPC ops time. Brand-incident alerts fell to zero after the first month.

That outcome demonstrates how combining account-level exclusions with disciplined automation preserves both brand and performance.

Advanced strategies & 2026-forward predictions

Prepare for the next wave of inventory complexity:

  • Contextual and semantic blocking: Deploy LLM classifiers to detect sensitive topic contexts on landing pages and exclude at account level when necessary.
  • Cross-channel exclusion harmonization: Consolidate exclusions across DSPs, CTV partners, and YouTube to avoid gaps as programmatic inventory grows.
  • Adaptive thresholds: Use reinforcement learning to tune auto-block thresholds based on outcome metrics (ROAS uplift, brand incidents).
  • Privacy-aware signals: As user-level signals continue to be constrained, rely more on contextual and aggregated indicators for scoring placements.

Operational hygiene is not a one-off project — it’s a recurring control loop. Account-level exclusions are the mechanism; disciplined audits and automation are the process.

Common pitfalls and how to avoid them

  • Aggressive auto-blocking without dry-runs: always simulate impact first.
  • Blind reliance on one signal (e.g., low CTR): use composite scores.
  • No rollback plan: keep versioned change logs and quick restore steps.
  • Poor stakeholder communication: notify brand and campaign owners prior to mass exclusions.

Actionable checklist you can run this week

  1. Export the last 30 days of placement data and sort by spend.
  2. Run the heuristic scoring above and tag candidates for auto-block, review, or whitelist.
  3. Create an account-level exclusion list in Google Ads UI with the top 10 high-confidence domains (dry-run first).
  4. Set up a nightly GAQL job to refresh placements and a Slack webhook to post the daily review file.
  5. Establish a 14-day rollback window and an approval owner for manual reviews.

Final takeaways

  • Account-level exclusions + automated audits = scalable hygiene. The Google Ads 2026 update removes a major administrative bottleneck; ops must adapt processes to leverage it.
  • Balance automation with governance. Use auto-blocks for high-confidence cases, and manual review when the cost of a false positive is material.
  • Measure both performance and time savings. The ROI of this work is both reduced wasted spend and reduced ops overhead.

Get started

Start a 30-day account-level exclusions audit this week: export 30 days of placement data, run the scoring script, and push a small, high-confidence exclusion list as a dry-run. If you want the sample GAQL queries and a starter Python automation template adapted to your account, reach out to your Ads API engineer or use this playbook as a blueprint.

Call to action: Implement the checklist above this week, run a dry-run, and schedule a 30-minute review to approve the first exclusion wave. Protect your brand and recover wasted spend with one repeatable loop.
