
Audience And Targeting Tests

Learn Audience And Targeting Tests for free with explanations, exercises, and a quick test (for Marketing Analysts).

Published: December 22, 2025 | Updated: December 22, 2025

Why this matters

Audience and targeting tests help you discover which people to show your message to and how broad or specific your targeting should be. As a Marketing Analyst, you’ll be asked to:

  • Decide whether to use broad targeting or specific segments (e.g., interest stacks, lookalikes, CRM lists).
  • Prove incremental lift vs. simple optimization (e.g., does retargeting add incremental sales?).
  • Allocate budget between prospecting and retargeting based on evidence, not hunches.
  • Prevent audience overlap and contamination that can invalidate results.

Concept explained simply

Audience and targeting tests compare two or more audience definitions under controlled conditions to see which group produces better business outcomes at comparable spend. You keep everything else as constant as possible (creative, bid, budget, schedule), change only the audience, and measure conversions or lift.

Mental model

Think of audiences as fishing zones. You have the same bait and boat (creative and budget). You test different zones (audiences) to learn where you catch more fish per hour (conversion rate and cost per result) without depleting the lake (scalability and reach). The best audience is not just the cheapest today; it’s the one that remains efficient as you scale.

Designing audience and targeting tests

  1. Define the core question. Example: “Is broad targeting as effective as a 1% lookalike?”
  2. Choose one primary success metric. Use a business outcome (e.g., purchases per 1,000 impressions, CPA, incremental lift).
  3. Keep variables constant. Same creative set, budget split, schedule, bid/optimization event.
  4. Randomize cleanly. Use mutually exclusive audience definitions; avoid overlap and cross-delivery.
  5. Size the test. Estimate needed impressions or conversions so your difference is detectable. As a rule of thumb, aim for ≄ 100 converting events per cell when in doubt (a sizing sketch follows this list).
  6. Run long enough. At least one full purchase cycle or conversion window (e.g., 7–14 days for many ecommerce flows).
  7. Check for SRM. If you aimed for 50/50 traffic but got 70/30, investigate sample ratio mismatch (SRM) before reading results.
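
To make step 5 concrete, here is a minimal sizing sketch in Python. The conversion rate per impression, CPM, and daily budget used below are illustrative assumptions, not platform defaults; swap in figures from your own account history.

```python
# Rough test-sizing sketch for step 5: how many impressions, how much spend,
# and how many days does one cell need to reach a target number of conversions?
# cvr_per_impression, cpm, and daily_budget_per_cell are illustrative assumptions.

def size_cell(target_conversions=100, cvr_per_impression=0.00035,
              cpm=8.0, daily_budget_per_cell=500.0):
    impressions_needed = target_conversions / cvr_per_impression
    spend_needed = impressions_needed / 1000 * cpm
    days_needed = spend_needed / daily_budget_per_cell
    return {
        "impressions_needed": round(impressions_needed),
        "spend_needed": round(spend_needed, 2),
        "days_needed": round(days_needed, 1),
    }

print(size_cell())
# e.g. {'impressions_needed': 285714, 'spend_needed': 2285.71, 'days_needed': 4.6}
```
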
Pre-launch checklist
  • Clear hypothesis and success metric defined.
  • Audiences are mutually exclusive (no user can be in two cells).
  • Creative set identical across cells.
  • Budgets equalized or proportionally constrained.
  • Attribution/conversion window aligned across the test.
  • Tracking events verified (fires consistently).
  • Planned runtime and minimum sample size documented.

During-test checks
  • Delivery split close to planned (e.g., 50/50 ± 5–10%).
  • No sudden creative swaps or bid strategy changes.
  • Frequency and reach monitored; avoid extreme frequency in one cell.

Post-test wrap-up
  ‱ Primary metric compared with confidence interval or practical significance threshold (a readout sketch follows this list).
  • Scalability assessed: reach, frequency, cost stability.
  • Document decision rule (e.g., adopt if CPA improves ≄ 10% with similar spend and stable delivery).
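
For the first wrap-up item, here is a hedged readout sketch: it compares CPA between two cells and checks the conversion-count gap against rough Poisson noise and a practical-significance threshold. The readout function and the 10% default are illustrations for this lesson, not platform features, and the check assumes both cells received comparable delivery.

```python
import math

# Hedged post-test readout sketch: compares CPA between two cells and checks
# (1) whether the conversion-count gap is larger than rough Poisson noise and
# (2) whether the CPA improvement clears a practical-significance threshold.
# Assumes both cells had comparable delivery (similar spend and schedule).

def readout(spend_a, conv_a, spend_b, conv_b, practical_threshold=0.10):
    cpa_a, cpa_b = spend_a / conv_a, spend_b / conv_b
    improvement = (cpa_b - cpa_a) / cpa_b          # > 0 means cell A is cheaper
    # Rough 95% intervals on each conversion count (Poisson, normal approximation).
    half_a, half_b = 1.96 * math.sqrt(conv_a), 1.96 * math.sqrt(conv_b)
    intervals_overlap = (conv_a - half_a) <= (conv_b + half_b) and \
                        (conv_b - half_b) <= (conv_a + half_a)
    return {
        "cpa_a": round(cpa_a, 2),
        "cpa_b": round(cpa_b, 2),
        "cpa_improvement_a_vs_b": round(improvement, 3),
        "conversion_intervals_overlap": intervals_overlap,
        "clears_practical_threshold": improvement >= practical_threshold,
    }

# With the Broad vs. lookalike numbers from Worked example 1 below:
print(readout(spend_a=5000, conv_a=220, spend_b=5100, conv_b=215))
# -> similar CPA: the intervals overlap and the 10% threshold is not cleared.
```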

Worked examples

Example 1: Broad vs. 1% lookalike

Setup: Two ad sets, same creatives and budget. Cell A = Broad; Cell B = 1% lookalike from past purchasers.

  • Spend: A = $5,000, B = $5,100
  • Purchases: A = 220, B = 215
  • CPA: A = $22.73, B = $23.72
  • Reach: A = 120k, B = 70k
  • Frequency: A = 2.0, B = 3.0

Interpretation: Similar CPA, but Broad has higher reach and lower frequency, suggesting better scalability. Decision: Prefer Broad for scaling, keep LAL as a control or niche layer.
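
A short sketch that recomputes the figures above and adds a scalability view (purchases per 1,000 people reached, frequency, reach). The idea that lower frequency plus larger reach means more headroom to scale is this lesson's heuristic, not a platform metric.

```python
# Efficiency vs. scalability view of Example 1. Numbers are copied from the
# bullets above; "headroom" is the lesson's heuristic that lower frequency
# plus larger reach leaves more room to scale spend.

cells = {
    "Broad":        {"spend": 5000, "purchases": 220, "reach": 120_000, "frequency": 2.0},
    "1% Lookalike": {"spend": 5100, "purchases": 215, "reach": 70_000,  "frequency": 3.0},
}

for name, c in cells.items():
    cpa = c["spend"] / c["purchases"]
    per_1k_reached = c["purchases"] / (c["reach"] / 1000)
    print(f"{name:13s} CPA=${cpa:5.2f}  purchases per 1k reached={per_1k_reached:.2f}  "
          f"frequency={c['frequency']:.1f}  reach={c['reach']:,}")

# Note: a tighter audience often converts its reach better, but at higher
# frequency and smaller total reach -- the scalability question is whether
# efficiency holds as spend grows.
```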

Example 2: Interest stack vs. two narrow interests

Setup: A = Interest stack (5 related interests). B = Narrow Interest 1. C = Narrow Interest 2. Same creatives and budgets.

  • Spend: A = $3,000, B = $3,000, C = $3,000
  • Purchases: A = 120, B = 92, C = 78
  • CPA: A = $25.00, B = $32.61, C = $38.46

Interpretation: The stacked interests outperform the narrow single interests, likely because of greater scale and more room for algorithmic exploration. Decision: Use stacked or broad targeting; avoid overly narrow splits unless they prove incrementally valuable.
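
The same CPA arithmetic, reproduced in a few lines together with each cell's gap versus the best performer:

```python
# Reproduces the CPA arithmetic from Example 2 and shows each cell's gap vs. the best.
cells = {
    "Interest stack (A)":    (3000, 120),
    "Narrow interest 1 (B)": (3000, 92),
    "Narrow interest 2 (C)": (3000, 78),
}

cpas = {name: spend / purchases for name, (spend, purchases) in cells.items()}
best = min(cpas.values())
for name, cpa in cpas.items():
    print(f"{name:22s} CPA=${cpa:5.2f}  (+{cpa / best - 1:.0%} vs. best)")
```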

Example 3: Retargeting frequency cap

Setup: A = Retargeting with cap 2/day. B = Cap 1/day. Same audience definition and budget.

  • Spend: A = $2,500, B = $2,500
  • Purchases: A = 95, B = 88
  • CPA: A = $26.32, B = $28.41
  • Frequency: A = 5.2, B = 3.8

Interpretation: Higher frequency improves conversions with acceptable CPA. Decision: Keep cap 2/day, monitor fatigue over time.

Guardrails and diagnostics

  ‱ Sample Ratio Mismatch (SRM): If you planned 50/50 but see a strong skew (e.g., 70/30), treat results as invalid until fixed (a quick check is sketched after this list).
  • Overlap & contamination: Ensure users can’t land in multiple cells. Use mutually exclusive audiences (e.g., include/exclude logic).
  • Seasonality & promotions: Keep tests away from major sales spikes unless applied equally to all cells.
  • Learning phase effects: Give time for delivery to stabilize; don’t judge cells in the first 1–2 days only.
  • Attribution consistency: Same conversion window and measurement method across cells.
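
One way to operationalize the SRM guardrail is a chi-square goodness-of-fit test of observed delivery against the planned split. The sketch below uses scipy (an extra dependency). Strictly, SRM tests apply to randomized units such as users, so running it on impressions is a heuristic, and with very large counts even small skews will flag, so read the p-value alongside the size of the skew.

```python
# Hedged SRM check: chi-square goodness-of-fit of observed delivery vs. the
# planned split. A very low p-value together with a large skew (e.g. 68/32
# when 50/50 was planned) means "fix the setup before reading results".
from scipy.stats import chisquare

def srm_check(observed_counts, planned_ratio, alpha=0.001):
    total = sum(observed_counts)
    expected = [total * p for p in planned_ratio]
    statistic, p_value = chisquare(observed_counts, f_exp=expected)
    return {"p_value": p_value, "srm_suspected": p_value < alpha}

# The Exercise 2 scenario: a 68/32 impression split when 50/50 was planned.
print(srm_check([680_000, 320_000], [0.5, 0.5]))   # srm_suspected: True
```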

Exercises (do these, then compare with solutions)

Exercise 1: Design a clean audience split test

Scenario: You want to compare Broad vs. 1% lookalike for a new product launch. Budget is $10,000 for 10 days. Your KPI is purchase CPA.

Tasks:

  • Write a hypothesis and decision rule.
  • Define mutually exclusive audience cells.
  • Specify creative, budget split, and runtime.
  • List pre-launch checks and what you’ll monitor mid-flight.

Expected output: A short plan with hypothesis, KPI, audience definitions, equal budgets, runtime, and a checklist.

Show solution

Hypothesis: “Broad will achieve equal or lower CPA than 1% LAL while reaching more unique users.”

  • Cells: A = Broad (country-level), B = 1% LAL from past 180-day purchasers; exclude B from A and vice versa.
  • KPI: Purchase CPA; secondary: purchases per 1,000 impressions, reach, frequency.
  • Setup: Same 3 creatives, identical bid strategy, budget split 50/50 ($500/day each) for 10 days.
  • Checks: delivery 50/50 ± 10%, stable frequency, same conversion window, no creative swaps mid-test.
  ‱ Decision: Adopt Broad if its CPA is no more than 10% above the LAL CPA, with at least 100 purchases per cell and stable reach/frequency (the rule is sketched below).
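
The decision rule above, written out as a small function; this encodes the plan's own rule and is not a platform feature. Reach/frequency stability still needs a manual check against the delivery reports.

```python
# Illustrative encoding of the Exercise 1 decision rule: adopt Broad if its CPA
# is no more than `tolerance` above the lookalike CPA and both cells reached
# the minimum purchase count.

def adopt_broad(cpa_broad, cpa_lal, purchases_broad, purchases_lal,
                tolerance=0.10, min_purchases=100):
    enough_data = min(purchases_broad, purchases_lal) >= min_purchases
    cpa_acceptable = cpa_broad <= cpa_lal * (1 + tolerance)
    return enough_data and cpa_acceptable

# With the Worked example 1 numbers:
print(adopt_broad(cpa_broad=22.73, cpa_lal=23.72,
                  purchases_broad=220, purchases_lal=215))   # True
```
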
Exercise 2: Diagnose SRM and overlap

Scenario: Two audiences intended 50/50 delivery show 68/32 impressions after 3 days. CPA differences are small but you worry about bias.

Tasks:

  • Name the likely issue and why it matters.
  • List 3–4 causes and fixes.

Expected output: A brief diagnosis and a remediation checklist.

Show solution

Issue: Sample Ratio Mismatch (SRM). It suggests randomization or delivery bias; results may be invalid.

  • Causes: Audience overlap; different exclusions; platform learning skew; budget caps or pacing differences; geography/device filters mismatched.
  • Fixes: Make audiences mutually exclusive; re-align exclusions; equalize budget caps and pacing; match locations/devices; restart the test after corrections.
  ‱ Before restarting, document: hypothesis, success metric, audience definitions, overlap prevention method, runtime, decision rule.

Common mistakes and how to self-check

  • Optimizing on CTR instead of outcomes: Self-check: Is your primary KPI tied to revenue (e.g., purchases)?
  • Overlapping audiences: Self-check: Can a user qualify for more than one cell? If yes, fix excludes.
  • Judging too early: Self-check: Have you reached your minimum sample size or full conversion window?
  • Changing multiple variables: Self-check: Did you keep creatives/bids identical? If not, isolate variables.
  • Ignoring scalability: Self-check: Does the winner maintain efficiency as spend grows (reach/frequency stable)?

Practical projects

  • Build a one-pager test plan comparing Broad vs. Lookalike audiences for your product or a dataset you have.
  ‱ Run a small geo holdout: randomly assign 4 regions to control and 4 to test; measure lift in conversions per region (a minimal sketch follows this list).
  • Retargeting incrementality: create a holdout group excluded from retargeting for 2 weeks; compare revenue per user.
  • Historical simulation: split past users by last-touch channel or geo to estimate variance and sample size for a future test.
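
For the geo-holdout project, here is a minimal assignment-and-readout sketch. The region names and conversion counts are invented for illustration, and a real readout would also want a pre-period comparison or a variance estimate before calling the lift real.

```python
import random
import statistics

# Geo-holdout sketch: randomly assign 8 regions to test/control, then compare
# mean conversions per region after the campaign has run in the test regions.
# Region names and conversion counts are invented for illustration only.

regions = ["North", "South", "East", "West", "Centre", "Coast", "Valley", "Metro"]
random.seed(7)            # fixed seed so the assignment is reproducible
random.shuffle(regions)
test_regions, control_regions = regions[:4], regions[4:]

# Post-campaign conversions per region (placeholder numbers from reporting).
conversions = {"North": 130, "South": 118, "East": 142, "West": 125,
               "Centre": 98, "Coast": 105, "Valley": 92, "Metro": 110}

test_mean = statistics.mean(conversions[r] for r in test_regions)
control_mean = statistics.mean(conversions[r] for r in control_regions)
lift = (test_mean - control_mean) / control_mean

print("Test regions:   ", test_regions)
print("Control regions:", control_regions)
print(f"Mean conversions per region: test={test_mean:.1f}, control={control_mean:.1f}")
print(f"Estimated lift: {lift:+.1%}")
```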

Who this is for

  • Marketing Analysts who own test design and reporting.
  • Performance marketers deciding budget splits across audiences.
  • Growth teams validating incrementality before scaling.

Prerequisites

  • Basic A/B testing concepts (control vs. treatment, randomization, significance).
  • Comfort with campaign metrics (reach, frequency, CTR, CVR, CPA, ROAS).
  • Ability to read platform reports and export data.

Learning path

  • Start: Refresh A/B test fundamentals and sample size basics.
  • Next: Learn audience architecture (prospecting vs. retargeting, lookalikes, CRM lists).
  • Then: Design a clean mutually exclusive test with a single KPI.
  • Finally: Practice diagnostics (SRM, overlap, seasonality) and scaling decisions.

Next steps

  • Turn one worked example into a real test plan.
  • Schedule a 10–14 day run with clear success criteria.
  • Document results and a roll-out plan if successful.

Mini challenge

You have three audiences: Broad, 1% LAL, and CRM past buyers (retargeting). Budget is limited. Propose a 2-cell test that maximizes learning speed and decision value. State your KPI and how you’ll prevent overlap.

One possible approach

Test Broad vs. 1% LAL for prospecting efficiency (CPA as KPI). Exclude CRM buyers from both cells. Keep creatives identical and budgets 50/50 for 10 days. Ensure mutual exclusions so past buyers never enter prospecting cells.

Quick Test

Take the quick test to check your understanding: 7 questions, 70% or higher to pass.
