Who this is for
BI Analysts, Analytics Engineers, and Data Product Owners who need consistent, reusable metrics and calculations across dashboards, reports, and self-serve tools.
Prerequisites
- Basic SQL (SELECT, WHERE, GROUP BY, JOIN)
- Understanding of dimensions, facts, and time series
- Familiarity with business KPIs (e.g., Revenue, Conversion Rate)
Why this matters
In real BI work, stakeholders expect a single source of truth: the same metric should give the same number everywhere. Without reusable calculations and KPI governance, teams waste time fixing mismatches and debating definitions instead of finding insights.
Typical tasks this enables
- Ship a new dashboard in hours by composing pre-defined metrics and dimensions.
- Guarantee revenue matches Finance numbers across all reports.
- Change a KPI definition once and have it update everywhere safely.
- Allow self-serve users to explore governed metrics without breaking logic.
Concept explained simply
Reusable calculations are centrally defined formulas (metrics) with clear rules for inputs, filters, time, and aggregation. KPI governance is the lightweight process that keeps those definitions consistent, versioned, and trusted.
Mental model
Think of the semantic layer as a cookbook. Each metric is a well-tested recipe with ingredients (columns), steps (logic), serving size (grain), and plating rules (filters/time windows). Governance is the kitchen discipline that ensures every cook serves the same dish, every time.
Core principles
- Single source: define each KPI once in the semantic layer; reference it everywhere.
- Explicit grain: state the level at which the metric is valid (e.g., by day, by customer, by order).
- Time rules: define the calendar (UTC vs. local, fiscal vs. calendar) and default window.
- Filters baked in: include business-logic filters (e.g., exclude test orders) in the metric.
- Type/format: specify data type, units, currency handling, and rounding.
- Ownership: assign a clear owner and review cadence.
- Versioning: communicate changes with deprecation windows.
Worked examples
Example 1: Gross Revenue (USD)
Business rule: Sum of paid order amounts in USD, excluding canceled or refunded orders; default time is UTC calendar day.
- Base table: fact_orders
- Filters: order_status IN ('paid','shipped','completed'); is_test = false
- Currency handling: order_amount * fx_rate_to_usd
- Aggregation: SUM
- Grain: day
SQL sketch
SELECT
    DATE_TRUNC('day', created_at_utc)  AS day,
    SUM(order_amount * fx_rate_to_usd) AS gross_revenue_usd
FROM fact_orders
WHERE order_status IN ('paid', 'shipped', 'completed')
  AND is_test = false
GROUP BY 1;
Example 2: Conversion Rate
Business rule: Orders divided by sessions in the same time window; both exclude bots and test traffic.
- Numerator: distinct orders
- Denominator: distinct sessions
- Filters: is_bot = false, is_test = false
- Grain: day, with consistent attribution window
Metric formula sketch
conversion_rate = orders_count / sessions_count
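SQL sketch (a minimal version, assuming a Postgres-style dialect and a hypothetical fact_sessions table with session_id, started_at_utc, is_bot, and is_test columns alongside fact_orders; NULLIF guards against dividing by zero on days with no sessions):
WITH orders AS (
    SELECT
        DATE_TRUNC('day', created_at_utc) AS day,
        COUNT(DISTINCT order_id)          AS orders_count
    FROM fact_orders
    WHERE is_test = false
    GROUP BY 1
),
sessions AS (
    SELECT
        DATE_TRUNC('day', started_at_utc) AS day,
        COUNT(DISTINCT session_id)        AS sessions_count
    FROM fact_sessions                    -- hypothetical session-level table
    WHERE is_bot = false
      AND is_test = false
    GROUP BY 1
)
SELECT
    s.day,
    COALESCE(o.orders_count, 0)::numeric
        / NULLIF(s.sessions_count, 0) AS conversion_rate
FROM sessions s
LEFT JOIN orders o ON o.day = s.day;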
Example 3: Active Users (30D Rolling)
Business rule: Distinct users with any qualifying event in the last 30 days, excluding internal users.
- Window: last 30 days from selected date
- Event condition: session_count > 0 OR event_type IN ('login','purchase')
- Filters: is_internal_user = false
- Aggregation: COUNT DISTINCT user_id
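SQL sketch (for a single selected date, assuming hypothetical fact_sessions, fact_events, and dim_users tables; the column names and the date literal are illustrative):
WITH qualifying_activity AS (
    -- Sessions count as activity (covers session_count > 0)
    SELECT user_id, started_at_utc AS activity_at_utc
    FROM fact_sessions
    UNION ALL
    -- Qualifying events also count as activity
    SELECT user_id, event_at_utc AS activity_at_utc
    FROM fact_events
    WHERE event_type IN ('login', 'purchase')
)
SELECT
    COUNT(DISTINCT a.user_id) AS active_users_30d
FROM qualifying_activity a
JOIN dim_users u ON u.user_id = a.user_id
WHERE u.is_internal_user = false
  AND a.activity_at_utc >= DATE '2024-06-30' - INTERVAL '30 days'  -- selected date minus 30 days
  AND a.activity_at_utc <  DATE '2024-06-30' + INTERVAL '1 day';   -- through end of the selected date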
How to design a reusable calculation
- Name it clearly: metric names should be descriptive and unique (e.g., gross_revenue_usd).
- State the question: what business question does it answer?
- Define inputs: tables, columns, filters, time zone, currency.
- Set grain: at what level is it valid? (day, week, month, customer)
- Specify formula and aggregation: SUM, AVG, COUNT DISTINCT, ratios.
- Defaults: default time window, rounding, display format.
- Quality checks: thresholds, reconciliations, and test queries.
- Ownership and version: who maintains it; how changes are communicated.
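As a sketch of how these decisions can live in one place, here is the Gross Revenue metric from Example 1 packaged as a documented view; the metrics schema, owner, and version shown are illustrative, not a required convention.
CREATE OR REPLACE VIEW metrics.gross_revenue_usd_daily AS
-- Metric:   gross_revenue_usd (v1)
-- Owner:    Finance Analytics
-- Question: How much paid order revenue, in USD, did we book per UTC calendar day?
-- Grain:    day (UTC); Units: USD; Rounding: 2 decimals at display time
SELECT
    DATE_TRUNC('day', created_at_utc)  AS day,
    SUM(order_amount * fx_rate_to_usd) AS gross_revenue_usd
FROM fact_orders
WHERE order_status IN ('paid', 'shipped', 'completed')
  AND is_test = false
GROUP BY 1;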
KPI governance that scales
- Owners: each KPI has a named owner (team + person).
- Change control: propose & review updates; provide migration notes.
- Versioning: support v1, v2; mark deprecation dates.
- Documentation: plain-English definition and do/don't examples.
- Testing: unit tests on logic and reconciliations to source-of-truth numbers.
- Discoverability: searchable catalog with tags and owners.
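To make the testing point concrete, here is a sketch of an assertion-style check against the view defined earlier (a hypothetical name); the test passes when the query returns zero rows.
SELECT day, gross_revenue_usd
FROM metrics.gross_revenue_usd_daily
WHERE gross_revenue_usd < 0        -- revenue should never be negative
   OR gross_revenue_usd IS NULL    -- every row should have a value
   OR day > CURRENT_DATE;          -- no future-dated rows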
Quality checks
Common checks
- Null-safe math and zero-division protection
- Consistent time zone and calendar
- Bot/test data excluded
- Currency conversion applied correctly
- Reconciliation vs. Finance or a trusted report
Self-check method
- Pull 3 independent samples (e.g., last day, last month, a holiday week).
- Recompute metric with a simple SQL and compare to semantic layer output.
- Explain any mismatch > 1% (or a threshold you set) before publishing.
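One way to run the recompute-and-compare step, assuming the governed metric is materialized as metrics.gross_revenue_usd_daily (hypothetical); any rows returned are mismatches above the 1% threshold that need explaining before publishing.
WITH recompute AS (
    SELECT
        DATE_TRUNC('day', created_at_utc)  AS day,
        SUM(order_amount * fx_rate_to_usd) AS gross_revenue_usd
    FROM fact_orders
    WHERE order_status IN ('paid', 'shipped', 'completed')
      AND is_test = false
    GROUP BY 1
)
SELECT
    m.day,
    m.gross_revenue_usd AS semantic_layer_value,
    r.gross_revenue_usd AS recomputed_value
FROM metrics.gross_revenue_usd_daily m
JOIN recompute r ON r.day = m.day
WHERE ABS(m.gross_revenue_usd - r.gross_revenue_usd)
      / NULLIF(ABS(r.gross_revenue_usd), 0) > 0.01;   -- flag differences above 1%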
Exercises
These exercises help you practice defining reusable metrics with clear governance.
Exercise 1: Governed Revenue KPI
Create a reusable KPI definition for Gross Revenue (USD) with filters, grain, time rules, and owner.
- Columns available: order_id, customer_id, order_amount, order_status, is_test, fx_rate_to_usd, created_at_utc
- Exclude: test orders; only include paid/shipped/completed
- Default time grain: day; Time zone: UTC
Guidance
- Include explicit currency handling and rounding.
- Write both the plain-English definition and a formula sketch.
- Assign an owner and a version.
Exercise 2: Refactor Active Users
Two dashboards use different "Active Users" logic. Unify them as a single reusable metric with a clear rule and tests.
- Dashboard A: users with session_count > 0
- Dashboard B: users with any event_type IN ('login','purchase')
- Exclude: internal users; Window: last 30 days rolling
Guidance
- Define one qualifying rule combining both conditions.
- Set the window and time zone; specify COUNT DISTINCT user_id.
- Propose a test to prevent regressions.
Checklist
- Clear, unique metric name
- Plain-English definition and formula
- Explicit grain and time rules
- Filters and exclusions defined
- Type/format, units, rounding set
- Owner and version noted
- Quality checks documented
Common mistakes and how to self-check
- Silent time zone drift: Numbers differ by region. Self-check: compare UTC vs. local; pick one standard and stick to it.
- Hidden filters in dashboards: A chart adds extra filters. Self-check: ensure baked-in filters live in the metric, not only in the chart.
- Mixed grains: Summing daily ratios to monthly totals. Self-check: define valid grains; compute ratios at the right level.
- Currency mismatch: Summing different currencies. Self-check: convert to a base currency in the metric logic.
- Ambiguous names: "Revenue" means many things. Self-check: add qualifiers (gross/net, refunds included/excluded, currency).
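To illustrate the mixed-grains mistake above, here is a sketch that rolls a daily conversion rate up to month by summing the numerator and denominator first (the daily table name is hypothetical); averaging the daily ratios instead would weight quiet days the same as busy days.
SELECT
    DATE_TRUNC('month', day) AS month,
    SUM(orders_count)::numeric
        / NULLIF(SUM(sessions_count), 0) AS conversion_rate   -- correct: ratio of sums
    -- AVG(conversion_rate) here would be the mixed-grain mistake
FROM metrics.conversion_rate_daily
GROUP BY 1;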
Practical projects
- Build a KPI catalog: document 10 core metrics with name, owner, definition, tests, and examples.
- Refactor 3 dashboards to use governed metrics only; record before/after discrepancies and fixes.
- Add versioning: deprecate one outdated KPI, announce replacement, and set a removal date.
Learning path
- Define two foundational metrics (Revenue, Active Users).
- Add a ratio metric (Conversion Rate) using composed building blocks.
- Introduce time windows (7D, 30D) and ensure consistency across all three.
- Implement tests and a simple review workflow with owners.
Mini challenge
Draft a governed metric for Net Revenue (USD) that subtracts refunds within the same period. Include:
- Name, owner, version
- Plain-English definition
- Formula sketch and filters
- Grain, time rules, and rounding
- One quality test you would automate
Next steps
- Complete the exercises and take the quick test below.
- Implement one governed metric in your workspace and have a peer review it.
- Schedule a monthly KPI review to keep definitions current.