Why this matters
Dashboards drive budget, product, and campaign decisions. If teams compute metrics differently, you get conflicting answers and bad bets. As a Marketing Analyst, you must define metrics once, compute them reliably everywhere, and flag any breaking change.
- You align marketing and finance on CAC, LTV, and ROI.
- You prevent “multiple truths” across dashboards and slides.
- You speed up analysis because definitions are reusable and testable.
Concept explained simply
A metric is a repeatable answer to a specific question, calculated the same way every time. Consistency means the definition, data, and filters do not change silently across tools.
Helpful mental model
Think of each metric as a recipe card:
- Name and the question it answers
- Ingredients (tables, fields, filters)
- Steps (formula and attribution rules)
- Pan size and cooking time (grain and time window)
- Taste test (validation checks)
Anyone should be able to cook the same dish (compute the same metric) and get the same result.
Standard components of a metric definition
- Name: clear and unique.
- Business question: what decision it supports.
- Formula: exact expression with numerator and denominator.
- Filters/Segments: included and excluded data (e.g., channels, geos, organic vs paid).
- Grain: at what level the metric is computed (day, campaign, user, session).
- Time window: attribution window and reporting period (e.g., 7-day click).
- Attribution model: last click, first touch, data-driven, etc.
- Source of truth: specific tables/views and field names.
- Freshness: expected update schedule and latency.
- Edge cases: how to handle missing values, zeros, test traffic, refunds.
- Owner and version: who approves changes and version history.
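One lightweight way to keep these components attached to the computation is a "metric card" header on the SQL model that produces the metric. Below is a minimal template sketch; every placeholder is yours to fill in, and the comment syntax assumes a standard SQL warehouse.

```sql
-- ============ Metric Card (template) ============
-- Name:         <unique metric name>
-- Question:     <decision this metric supports>
-- Formula:      <numerator> / <denominator>
-- Filters:      <inclusions and exclusions, exact flags>
-- Grain:        <day | campaign | user | session | ...>
-- Time window:  <attribution window and reporting period>
-- Attribution:  <last click | first touch | data-driven | ...>
-- Source:       <schema.table and field names>
-- Freshness:    <update schedule and expected latency>
-- Edge cases:   <nulls, zeros, test traffic, refunds>
-- Owner:        <approver>        Version: <vX.Y>
-- ================================================
-- The SELECT that computes the metric goes directly below this header,
-- so the definition and the code are versioned together.
```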
Worked examples
Example 1: CTR (Click-Through Rate)
- Question: How often do ad impressions result in clicks?
- Formula: CTR = clicks / impressions.
- Filters: Exclude internal IPs and test campaigns; include paid channels only.
- Grain: campaign_daily.
- Time window: reporting day (UTC), no attribution window.
- Source: ad_platform.fact_ad_performance (fields: clicks, impressions, campaign_id, date).
- Edge cases: If impressions = 0, display as null (not 0%).
- Owner & version: Growth Analytics v1.2.
Example 2: CAC (Customer Acquisition Cost)
- Question: How much do we spend to acquire one new paying customer?
- Formula: CAC = marketing_cost / new_customers.
- Filters: Include paid media and agency fees; exclude brand sponsorships unless tagged as acquisition; exclude refunds from new_customers.
- Grain: month and channel.
- Time window: Costs and new_customers in the same calendar month.
- Attribution: Last non-direct click.
- Source: finance.marketing_costs, crm.customers (signup_date, first_payment_date, acquisition_channel).
- Edge cases: If new_customers = 0, display as null; flag partial months where data freshness is below 95%.
- Owner & version: Marketing Ops v2.0.
Example 3: Repeat Purchase Rate (RPR)
- Question: What share of customers make 2+ purchases within 90 days of first order?
- Formula: RPR = customers_with_2plus_purchases_90d / cohort_customers.
- Filters: Ecommerce only; exclude employee orders and fraud flags.
- Grain: cohort_month.
- Time window: 90 days from each customer’s first_purchase_date.
- Attribution: Customer-level; no channel attribution.
- Source: dw.orders, dw.customers (fields: customer_id, order_id, order_ts, first_purchase_ts).
- Edge cases: If first_purchase_ts missing, exclude from cohort.
- Owner & version: Product Analytics v1.0.
Consistency playbook
- Draft the spec: Fill in the standard components above.
- Review with stakeholders: Marketing, Finance, Product agree on filters and windows.
- Set a single source of truth: One modeled view per metric if possible.
- Version control definitions: Keep a version and change log in the spec.
- Change management: When a metric changes, add a new version, update dashboards, and communicate impact.
- Guardrails in dashboards: Add tooltips with definition, last updated, and owner.
- Automated checks: Freshness alerts, volume anomalies, denominator=0 checks (a sketch follows this list).
- Reconciliation: Periodically compare the metric across sources and explain any deltas.
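As one concrete form of the automated checks above, here are two guardrail queries as they might run in a scheduled job; the table reuses the illustrative CTR source, and the staleness threshold is an assumption to tune to your load schedule.

```sql
-- 1) Freshness: flag the source as stale if today's data hasn't landed.
SELECT MAX(date) < CURRENT_DATE AS is_stale
FROM ad_platform.fact_ad_performance;

-- 2) Denominator guard: surface campaign-days with zero or missing
--    impressions so CTR renders as NULL rather than a misleading 0%.
SELECT campaign_id, date
FROM ad_platform.fact_ad_performance
GROUP BY campaign_id, date
HAVING COALESCE(SUM(impressions), 0) = 0;
```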
Quick self-audit checklist
- Do we have a documented attribution model?
- Are data sources and fields explicitly named?
- Are edge cases and exclusions written and tested?
- Is there a version and owner?
- Do dashboards display when the metric was last updated?
Exercises
Complete these tasks, then compare with the solutions below. You can copy-paste the specs into your notes.
- Spec a conversion metric: Define “Session-to-Purchase Conversion Rate” for ecommerce, including formula, grain, window, sources, filters, and edge cases.
- Diagnose inconsistent CAC: Two dashboards show CAC $78 vs $92 for the same month. List at least 3 plausible definition differences and propose a remediation plan.
Exercise checklist
- Every spec includes grain, window, attribution, filters, and edge cases.
- Every diagnosis ties a difference to a reconciliation step.
- Your remediation plan includes owner, timeline, and communication.
Common mistakes and how to self-check
- Missing grain: Metrics computed at different levels will not reconcile, and averaging ratios across grains gives wrong answers (see the sketch after this list). Self-check: Is “at what level” explicitly written?
- Unstated attribution: Different teams default to different models. Self-check: Is last click vs first touch documented?
- Soft exclusions: “We usually exclude tests” is not enough. Self-check: Are test flags and internal traffic filters exact?
- Changing definitions silently: Dashboards drift. Self-check: Is there a version and change log?
- Using raw platform totals: Platforms differ in spam filtering and timezone handling. Self-check: Is there a modeling layer and a timezone standard?
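To make the missing-grain pitfall concrete, here is the mismatch in miniature, using the illustrative CTR view from the worked example above: the unweighted mean of daily ratios and the ratio of summed totals generally disagree.

```sql
-- Wrong at the monthly grain: AVG of daily CTRs weights a 10-impression
-- day the same as a 10,000-impression day.
SELECT campaign_id, AVG(ctr) AS monthly_ctr_wrong
FROM metrics.ctr_campaign_daily
GROUP BY campaign_id;

-- Right: re-aggregate numerator and denominator, then divide once.
SELECT campaign_id,
       SUM(clicks) * 1.0 / NULLIF(SUM(impressions), 0) AS monthly_ctr
FROM metrics.ctr_campaign_daily
GROUP BY campaign_id;
```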
Practical projects
- Create a 1-page “Metric Card” template and fill it for 5 core marketing metrics (CTR, CPC, CPA, CAC, ROAS). Add owners and versions.
- Build a reconciliation view that computes CAC from both ad platform data and finance costs; report the delta and the top 3 drivers (a starting sketch follows this list).
- Add tooltips to an existing dashboard that show the metric definition, freshness, and owner.
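For the reconciliation project, a starting sketch: metrics.cac_from_finance and metrics.cac_from_platform are hypothetical views, one per cost source, each at the month x channel grain.

```sql
-- Side-by-side CAC from two sources, biggest discrepancies first.
SELECT f.month,
       f.channel,
       f.cac AS cac_finance,
       p.cac AS cac_platform,
       f.cac - p.cac AS delta
FROM metrics.cac_from_finance AS f
JOIN metrics.cac_from_platform AS p USING (month, channel)
ORDER BY ABS(f.cac - p.cac) DESC;
```

Explaining the top deltas usually leads straight back to definition differences named in the playbook: filters, attribution windows, and timezone standards.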
Who this is for
- Marketing Analysts building or maintaining BI dashboards.
- Growth and Ops analysts who need one version of the truth.
Prerequisites
- Basic SQL and familiarity with data models (facts/dimensions).
- Understanding of marketing concepts (impressions, clicks, sessions, conversions).
- Comfort with attribution and time windows.
Learning path
- Learn metric definition components.
- Practice with 3+ worked examples.
- Apply to your org with a Metric Card template.
- Set up checks and guardrails in dashboards.
- Take the quick test and refine.
Next steps
- Document your top 5 metrics using the template.
- Run a 30-minute review with stakeholders to agree on definitions.
- Implement tooltips and freshness indicators on your dashboard.
Mini challenge
Your product launches a free trial. How would “New Customers” change? Draft a v1.1 definition including trial-to-paid conversion and note impacts on CAC and ROI. Keep the prior definition as v1.0 for historical comparability.