Why this matters
Translating stakeholder needs into BI specifications turns fuzzy requests into buildable dashboards, semantic models, and metrics. As a BI Analyst, you will often face requests like 'We need a dashboard for growth' or 'Show me churn'. Without a clear spec, teams misinterpret goals, ship the wrong metrics, or spend weeks on rework.
- Real tasks you will do: define KPIs and their formulas; map data sources; choose metric grain; document business rules; sketch visuals; write acceptance tests; set refresh SLAs; plan security and access.
- Outcome: a spec that engineers and analysts can build, test, and maintain without guesswork.
Concept explained simply
A BI spec is a bridge between intent and implementation. It translates business questions into unambiguous instructions a data team can build.
Mental model
Think in layers: Why → Who → What → How → Quality → Guardrails.
- Why: business objective and decisions supported
- Who: users, scenarios, and required drill paths
- What: KPIs, definitions, filters, and visuals
- How: sources, joins, grain, transformations
- Quality: data freshness, completeness, and validation checks
- Guardrails: security, edge cases, and acceptance criteria
Copy-ready BI spec template
1. Objective
   - Problem/goal:
   - Decisions enabled:
2. Users & scope
   - Primary users:
   - Use cases & frequency:
   - Out of scope:
3. KPIs & definitions (one per metric)
   - Name:
   - Business meaning:
   - Formula:
   - Grain (per day, per account, etc.):
   - Filters (included/excluded):
   - Dimensions available (slice by):
   - Edge cases & default values:
   - Data lineage (table.field):
4. Data model & logic
   - Sources:
   - Joins & keys:
   - Transformations & rules:
   - Data freshness (SLA):
5. Visuals & interactions
   - Views (charts/tables):
   - Drill paths:
   - Required filters:
   - Layout notes:
6. Security & governance
   - Row-level rules:
   - PII handling:
   - Access roles:
7. Acceptance criteria (tests)
   - Sample calculations with expected results:
   - Threshold checks (e.g., nulls & duplicates):
8. Rollout & maintenance
   - Owner:
   - Review cadence:
   - Change log:
Worked examples
E-commerce conversion dashboard
- Objective: improve checkout conversion
- KPI: Conversion rate
- Definition: orders / sessions
- Grain: daily
- Filters: include paid and organic; exclude internal traffic
- Formula (pseudo): daily_cr = count_distinct(order_id where is_test=false) / count_distinct(session_id where is_internal=false)
- Data: web.sessions, sales.orders joined on visitor_id
- Acceptance test: on 2025-01-05, expected daily_cr ≈ 2.1% based on sample data
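The conversion-rate rule above can be sketched as a small, self-contained computation. The field names (`session_id`, `is_internal`, `order_id`, `is_test`) follow the spec's lineage, but the records here are invented sample data, not real traffic:

```python
# Minimal sketch of the daily_cr formula, with both exclusions applied.
# Data is illustrative; in practice these rows come from web.sessions
# and sales.orders joined on visitor_id.

sessions = [
    {"session_id": "s1", "is_internal": False},
    {"session_id": "s2", "is_internal": False},
    {"session_id": "s3", "is_internal": True},   # internal traffic: excluded
    {"session_id": "s4", "is_internal": False},
]
orders = [
    {"order_id": "o1", "is_test": False},
    {"order_id": "o2", "is_test": True},         # test order: excluded
]

def daily_cr(sessions, orders):
    """orders / sessions at daily grain, after the spec's exclusions."""
    n_sessions = len({s["session_id"] for s in sessions if not s["is_internal"]})
    n_orders = len({o["order_id"] for o in orders if not o["is_test"]})
    return n_orders / n_sessions if n_sessions else 0.0

print(round(daily_cr(sessions, orders), 4))  # 1 valid order / 3 valid sessions = 0.3333
```

Note that both counts are distinct counts, matching the `count_distinct` in the pseudo-formula; duplicated session or order rows would not inflate the rate.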
B2B sales pipeline health
- Objective: increase qualified opportunities
- KPI: SQL to Win Rate
- Definition: won_deals / sql_opportunities
- Grain: monthly, per sales team
- Filters: stage_date between month start/end; exclude partners
- Join: crm.opportunities join crm.accounts on account_id
- Edge case: opportunities with zero amount excluded
- Acceptance: month=2025-02, team=East, expected 21–23%
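The same pattern works for the pipeline KPI, including its zero-amount edge case and a range-style acceptance check. The opportunity records and flag names (`is_won`, `is_partner`) are illustrative stand-ins for fields from crm.opportunities:

```python
# Sketch of SQL-to-Win Rate with the spec's exclusions; sample data only.

opportunities = [
    {"opp_id": 1, "is_won": True,  "amount": 5000, "is_partner": False},
    {"opp_id": 2, "is_won": False, "amount": 1200, "is_partner": False},
    {"opp_id": 3, "is_won": True,  "amount": 0,    "is_partner": False},  # zero amount: excluded
    {"opp_id": 4, "is_won": False, "amount": 800,  "is_partner": True},   # partner: excluded
    {"opp_id": 5, "is_won": False, "amount": 3000, "is_partner": False},
]

def sql_to_win_rate(opps):
    """won_deals / sql_opportunities after applying exclusions."""
    eligible = [o for o in opps if o["amount"] > 0 and not o["is_partner"]]
    won = [o for o in eligible if o["is_won"]]
    return len(won) / len(eligible) if eligible else 0.0

rate = sql_to_win_rate(opportunities)
# Acceptance-style check: a tolerance range, not a single exact value
assert 0.30 <= rate <= 0.40
```

Writing the acceptance criterion as a range mirrors the spec's '21-23%' style: small data corrections should not break the test, but a broken join or filter will.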
Customer support SLA
- Objective: meet first-response SLA
- KPI: % tickets responded within 4 hours
- Definition: tickets with first_response_mins ≤ 240 / all tickets
- Grain: weekly by queue
- Data: support.tickets; measure first reply from agent only
- Security: hide PII; row-level by queue owners
- Acceptance: Week 12, Queue Billing: 88–90%
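The SLA metric is a threshold share, which makes the boundary case worth pinning down in the spec. A minimal sketch, assuming a `first_response_mins` field and treating exactly 240 minutes as within SLA:

```python
# Share of tickets whose first response met the 4-hour (240-minute) SLA.
# Ticket data is illustrative; real rows come from support.tickets.

tickets = [
    {"ticket_id": "t1", "first_response_mins": 35},
    {"ticket_id": "t2", "first_response_mins": 250},  # missed SLA
    {"ticket_id": "t3", "first_response_mins": 120},
    {"ticket_id": "t4", "first_response_mins": 240},  # boundary: counts as met
]

def sla_pct(tickets, threshold_mins=240):
    met = sum(1 for t in tickets if t["first_response_mins"] <= threshold_mins)
    return met / len(tickets) if tickets else 0.0

print(sla_pct(tickets))  # 3 of 4 tickets within SLA -> 0.75
```

Whether the boundary uses `<=` or `<` is exactly the kind of business rule the spec's 'Edge cases & default values' section should record.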
How to derive formulas and grain
- Start from the decision: what will change when the KPI moves?
- Choose the unit of counting (denominator first), then the successful events (numerator).
- Set the grain to match decision cadence (e.g., daily ops vs monthly strategy).
- List exclusions and default handling (nulls, tests, refunds, duplicates).
- Trace every field to its source table and key.
Data mapping mini-table
Metric: Net revenue
Fields: orders.amount, refunds.amount
Rule: net_revenue = sum(order_amount) - sum(refund_amount)
Grain: daily, per country
Edge: currency conversion at order_date FX rate
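The FX edge case in this mapping is easy to get wrong, so it helps to spell it out: both orders and refunds convert at the original order_date rate, not the refund date. A sketch with invented rates and amounts:

```python
# Net revenue = sum(order_amount) - sum(refund_amount), converted at the
# order_date FX rate. Rates, currencies, and amounts are illustrative.

fx_rates = {("EUR", "2025-01-05"): 1.10, ("EUR", "2025-01-06"): 1.08}

orders = [
    {"amount": 100.0, "currency": "EUR", "order_date": "2025-01-05"},
    {"amount": 200.0, "currency": "EUR", "order_date": "2025-01-06"},
]
refunds = [
    # Refund keeps the order_date of its original order, per the edge rule
    {"amount": 50.0, "currency": "EUR", "order_date": "2025-01-05"},
]

def to_usd(row):
    return row["amount"] * fx_rates[(row["currency"], row["order_date"])]

def net_revenue(orders, refunds):
    return sum(to_usd(o) for o in orders) - sum(to_usd(r) for r in refunds)

print(round(net_revenue(orders, refunds), 2))  # 110 + 216 - 55 = 271.0
```

If the business instead wants refunds converted at the refund date, that is a different rule and a different number; the spec must name one.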
Handling ambiguity
- Ask for decisions, not just dashboards: 'What decisions will this inform?'
- Clarify user scenarios: 'Who needs this? How often?'
- Probe definitions: 'What exactly counts as a churned customer?'
- Confirm exclusions: 'Should internal traffic or test orders be excluded?'
- Set quality: 'How fresh must this be? What accuracy is acceptable?'
Ambiguity to clarity — question prompts
- Success looks like...
- This metric goes up when...
- If two sources disagree, which is authoritative?
- What is out of scope for the first release?
Edge cases and constraints
- Late-arriving data: define backfill windows and reprocessing rules.
- Time zones: set canonical time zone for grain (e.g., UTC vs local market).
- Identity stitching: document identity rules for user/account joins.
- Seasonality: prefer rolling windows to smooth volatility when relevant.
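The time-zone constraint above is concrete: the same event lands in different daily buckets depending on the zone, so the spec must fix one canonical zone before counting. A minimal sketch using an illustrative UTC-5 market zone:

```python
# One event, two calendar days: why the spec needs a canonical time zone
# for the grain. The local offset (UTC-5) is an illustrative example.
from datetime import datetime, timezone, timedelta

event_utc = datetime(2025, 1, 6, 2, 30, tzinfo=timezone.utc)
local = event_utc.astimezone(timezone(timedelta(hours=-5)))

print(event_utc.date())  # 2025-01-06 under the canonical UTC grain
print(local.date())      # 2025-01-05 in the local market zone: a different daily bucket
```

A spec that says 'daily grain' without naming the zone will reconcile against one stakeholder's numbers and not another's.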
Exercises (do these before the quick test)
- Define a KPI properly
Task: Stakeholder asks for a 'Signup Conversion' metric. Produce a spec snippet with business meaning, formula, grain, filters, and acceptance criteria.
Checklist:
- Clear denominator and numerator
- Explicit grain (daily/weekly etc.)
- Inclusions/exclusions listed
- Source tables and keys
- One acceptance value or range
- Choose the right grain and logic
Task: Revenue by product category should match finance monthly numbers. Specify grain, currency handling, returns policy, and acceptance tests.
Checklist:
- Monthly grain aligned with fiscal calendar
- FX conversion rule with date
- Refunds and cancellations treatment
- Sample month reconciliation range
- Rewrite a vague ask
Task: 'We need a dashboard for growth'. Rewrite into objective, users, top 3 KPIs with definitions, visuals, and SLA.
Common mistakes and self-check
- Missing grain. Self-check: can someone compute it at the wrong level?
- Undefined exclusions. Self-check: list test data, staff usage, and edge events.
- Ambiguous joins. Self-check: write the join keys per source.
- No acceptance tests. Self-check: provide one date/team with expected value or range.
- Misaligned freshness. Self-check: confirm decision cadence vs refresh SLA.
Practical projects
- Spec and build: Marketing funnel dashboard with at least 5 KPIs, daily grain, and 6 acceptance checks.
- Reconciliation: Create a spec that reconciles BI revenue to finance GL with documented differences and tests.
- Support SLA: Draft a spec including row-level security and a privacy section for PII handling.
Who this is for
- BI Analysts and Analytics Engineers designing dashboards and semantic layers
- Product Analysts translating product questions into KPIs
- Data PMs coordinating measurement plans
Prerequisites
- Basic SQL and joins
- Understanding of dimensions, facts, and metric grain
- Ability to read simple data models
Learning path
- Practice defining KPIs with clear numerator/denominator and grain
- Map fields to sources and write transformation rules
- Add acceptance tests and data quality checks
- Draft visuals and interactions that match decisions
- Review with stakeholders; iterate and finalize spec
Next steps
- Do the exercises above and compare to the provided solutions
- Take the quick test below to check understanding
- Apply the template to one real business request
Mini challenge
Given a churn analysis request, produce a 1-page spec that includes: churn definition, grain, cohort logic, exclusions, and one acceptance test using last quarter data.