Why this matters
Slow measures make dashboards lag, time out, or show inconsistent results across pages. As a BI Developer, you often need to:
- Speed up KPIs such as Revenue, Margin %, and DAU/MAU.
- Remove duplicated logic across visuals.
- Push heavy work to the right layer (database vs. BI tool).
- Ensure measures scale as data grows.
- Scenario: Finance asks why Revenue is inconsistent between two pages. Root cause: slightly different measure filters.
- Scenario: A DISTINCTCOUNT measure makes your daily dashboard take 40 seconds. You need to pre-aggregate or change approach.
- Scenario: Sales margin measure iterates rows unnecessarily. A rewrite cuts time by 70%.
Concept explained simply
A measure is a calculation evaluated at query time. Optimizing it means making it do less work and moving that work to the most efficient place. Start with the data model: the best measure on a poor model will still be slow.
Mental model
- Do less: avoid scanning unnecessary rows; reduce distinct values; pre-aggregate where stable.
- Do earlier: push computation down to the database or ETL if it rarely changes.
- Do smarter: leverage aggregations and filter context instead of row-by-row loops.
Common building blocks to remember
- Star schema with clean relationships reduces filters and joins (see the sketch after this list).
- Low-cardinality columns group faster than high-cardinality ones.
- Pre-aggregations (daily or monthly) serve most dashboards better than raw-level scans.
- Avoid repeated complex expressions inside visuals; centralize in one measure.
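To make the first bullet concrete, here is a minimal star-schema sketch in SQL; the table and column names (dim_date, dim_product, fact_sales) are illustrative, not taken from any specific source system.
-- Minimal star schema: two small dimensions, one fact table keyed by compact integer keys
CREATE TABLE dim_date (
    date_key      INT PRIMARY KEY,
    calendar_date DATE,
    month_name    TEXT
);
CREATE TABLE dim_product (
    product_key  INT PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE fact_sales (
    date_key        INT REFERENCES dim_date (date_key),
    product_key     INT REFERENCES dim_product (product_key),
    quantity        INT,
    extended_amount NUMERIC(12, 2)
);
Each visual filter then maps to a join from a small dimension into the fact table, rather than to filters on wide, high-cardinality columns.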
Key principles of measure optimization
- Prefer aggregations over row iteration when possible.
- Reduce DISTINCTCOUNT with precomputed surrogate keys or pre-aggregated tables (sketched after this list).
- Limit filter scopes; avoid expensive filters applied repeatedly across visuals.
- Push heavy, stable calculations to the source (views, materialized views) or ETL.
- Cache and reuse: create reusable base measures; avoid duplicating logic per visual.
- Choose efficient data types; trim unused columns; reduce text/high-cardinality where possible.
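One way to read the surrogate-key principle in SQL: map a wide user identifier to a compact integer key once in ETL, and let fact rows carry the integer. This is only a sketch; dim_user, user_key, and v_events are illustrative names, and the events table matches the one used later in this lesson.
-- Build the surrogate keys once (in ETL or a scheduled job)
CREATE TABLE dim_user AS
SELECT user_id,
       ROW_NUMBER() OVER (ORDER BY user_id) AS user_key
FROM (SELECT DISTINCT user_id FROM events) u;

-- Expose facts with the small integer key, which groups and distinct-counts faster than a wide ID
CREATE VIEW v_events AS
SELECT e.event_time, d.user_key, e.country
FROM events e
JOIN dim_user d ON d.user_id = e.user_id;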
Worked examples
Example 1: Replace row-by-row revenue with an aggregated approach
Problem: A measure sums price times quantity by iterating every row.
// Slow pattern (row iterator)
Revenue (slow) = SUMX(Sales, Sales[Quantity] * Sales[UnitPrice])
Optimized options:
- Option A (ETL/source): add a precomputed Sales[ExtendedAmount] = Quantity * UnitPrice column so the measure becomes SUM(Sales[ExtendedAmount]); see the sketch after this list.
- Option B (pre-aggregated table): create a daily aggregated table by product and day, summing ExtendedAmount. Most visuals can query it instead of the raw table.
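For Option A, a minimal source-side sketch (assuming a relational sales table; v_sales and extended_amount are illustrative names) is a view that exposes the precomputed line amount:
-- Option A: expose the precomputed line amount at the source
CREATE VIEW v_sales AS
SELECT order_date,
       product_id,
       quantity,
       unit_price,
       quantity * unit_price AS extended_amount
FROM sales;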
-- Option B: source-side pre-aggregation (daily)
CREATE MATERIALIZED VIEW mv_sales_daily AS
SELECT order_date, product_id, SUM(quantity * unit_price) AS revenue
FROM sales
GROUP BY order_date, product_id;
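With the aggregate in place, a visual can query the much smaller mv_sales_daily instead of the raw fact table; a sketch of the kind of query it would issue (the 90-day range is illustrative):
-- Query served by the pre-aggregated view rather than the raw sales table
SELECT order_date, SUM(revenue) AS revenue
FROM mv_sales_daily
WHERE order_date >= CURRENT_DATE - INTERVAL '90 day'
GROUP BY order_date
ORDER BY order_date;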
Result: Less row iteration at query time, faster visuals, predictable performance.
Example 2: Taming DISTINCTCOUNT for active users
Problem: Daily Active Users uses DISTINCTCOUNT on a giant events table.
-- Slow pattern
SELECT event_date, COUNT(DISTINCT user_id) AS dau
FROM events
WHERE event_date BETWEEN CURRENT_DATE - INTERVAL '30 day' AND CURRENT_DATE
GROUP BY event_date;
Optimized approach: pre-derive daily unique users in ETL or via a materialized view, so the distinct count is computed once per day at refresh time and then served quickly.
CREATE MATERIALIZED VIEW mv_dau AS
SELECT event_date, COUNT(*) AS dau
FROM (
SELECT DISTINCT event_date, user_id FROM events
) d
GROUP BY event_date;
In BI, use a simple SUM over mv_dau[dau] filtered by date. Benefit: stable, repeatable performance and consistent numbers across pages.
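Where the BI tool issues SQL directly, the pattern is just as simple; a sketch (the 30-day window is illustrative):
-- Read the precomputed daily uniques instead of re-counting distinct users
SELECT event_date, dau
FROM mv_dau
WHERE event_date >= CURRENT_DATE - INTERVAL '30 day'
ORDER BY event_date;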
Example 3: Margin % without repeated filters
Problem: Each visual computes margin % with repeated filters, causing redundant evaluation.
// Before
Margin Amount = SUMX(Sales, Sales[Revenue] - Sales[Cost])
Margin % (slow) = DIVIDE([Margin Amount], [Revenue]) // both measures iterate the Sales table row by row
Optimized approach: build base measures for revenue and cost that use pre-aggregated columns or source views, then define Margin % once and reuse it everywhere.
// After
Revenue = SUM(Sales[ExtendedAmount])
Cost = SUM(Sales[ExtendedCost])
Margin % = DIVIDE([Revenue] - [Cost], [Revenue])
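These measures assume precomputed ExtendedAmount and ExtendedCost columns exist. One way to supply them is a source-side view; this is only a sketch with illustrative names (v_sales_extended, unit_cost), assuming cost is stored per unit:
-- Source view feeding both precomputed line amounts to the BI model
CREATE VIEW v_sales_extended AS
SELECT order_date,
       product_id,
       quantity * unit_price AS extended_amount,
       quantity * unit_cost  AS extended_cost
FROM sales;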
Result: Centralized calculation, less duplicate work across visuals, easier validation.
Quick self-check checklist
- Is your model a star schema with clear one-to-many relationships?
- Did you avoid row-by-row iterators where a simple SUM or COUNT works?
- Are high-cost DISTINCTCOUNTs replaced by pre-aggregations when possible?
- Did you push stable calculations to the source or ETL?
- Are measures reusable and referenced by other measures instead of copy/paste logic?
- Are data types compact (integers for keys, numeric where needed, minimized text)?
Exercises
Exercise 1 — Rewrite a slow measure
You have a fact table Sales(order_date, product_id, quantity, unit_price, cost). Your current BI measure is:
Revenue (slow) = SUMX(Sales, Sales[quantity] * Sales[unit_price])
Task:
- Propose a faster approach using either: (a) a source-side column or materialized view, or (b) a more efficient measure strategy.
- Provide the SQL or BI expression you would use.
- Explain how your approach reduces work at query time.
Exercise 2 — Optimize DISTINCTCOUNT of users
Given a table events(event_time, user_id, country), analysts need DAU and 7-day rolling DAU. The current approach runs DISTINCTCOUNT(user_id) per day on the raw table.
Task:
- Create a source-side strategy (view or materialized view) that precomputes daily unique users.
- Show the BI-side measure(s) to compute DAU and a 7-day rolling DAU using the pre-aggregated data.
- Explain how this improves dashboard performance.
Common mistakes and how to self-check
- Mistake: Using row iterators by default. Fix: Check if a simple SUM on a precomputed column would work.
- Mistake: High-cardinality joins everywhere. Fix: Validate your star schema; avoid joining multiple big fact tables in a single visual.
- Mistake: DISTINCTCOUNT in every visual. Fix: Pre-aggregate distinct counts by the needed grain.
- Mistake: Copy-pasted measures with minor variations. Fix: Build base measures and reference them.
- Mistake: Unbounded filters. Fix: Limit date ranges and unnecessary dimensions in visuals.
Self-check routine
- Profile the slowest visual: what measure dominates time?
- Inspect the measure: any iterators or DISTINCTCOUNT?
- Can you push logic to a view or materialized view?
- Can you reduce grain or re-use a base measure?
- Re-test with the same filters to verify gains.
Who this is for
- BI Developers building dashboards and KPIs in tools like Power BI, Looker, or Tableau.
- Analytics Engineers who model data for BI consumption and want predictable performance.
Prerequisites
- Basic SQL (SELECT, JOIN, GROUP BY).
- Familiarity with your BI tool's measure syntax (e.g., DAX, LookML, calculated fields).
- Understanding of star schemas (facts and dimensions).
Practical projects
- Project 1: Convert three slow measures to source-side pre-aggregations and compare dashboard load time before/after.
- Project 2: Replace DISTINCTCOUNT-based DAU with a daily unique users materialized view and build a weekly trend visual.
- Project 3: Create a reusable measures layer (Revenue, Cost, Margin %, Avg Order Value) and remove duplicate logic from visuals.
Learning path
- Model foundations: star schema and relationships.
- Measure patterns: aggregations vs iterators; filter context basics.
- Pre-aggregation strategies: views and materialized views.
- Performance checks: profiling visuals and queries.
- Governance: central measure definitions and naming.
Mini challenge
Your dashboard shows: Revenue, Margin %, and DAU by day. Current load time is 25 seconds. Propose a three-step plan to:
- Reduce the load to under 5 seconds by adjusting measures and/or data model.
- Ensure consistent numbers across all pages.
- Minimize maintenance for future metrics.
One possible approach
- Source-side: create mv_sales_daily (revenue and cost by day and product) and mv_dau (dau by day); refresh daily (see the sketch below).
- BI layer: define base measures over these views; reference them for Margin % and rolling windows.
- Visuals: limit default date to last 90 days; remove redundant filters; reuse base measures.
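For the refresh step, a minimal sketch assuming a PostgreSQL-style warehouse (the scheduling mechanism itself varies by platform):
-- Run once per day from your scheduler or orchestration tool
REFRESH MATERIALIZED VIEW mv_sales_daily;
REFRESH MATERIALIZED VIEW mv_dau;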
Next steps
- Apply pre-aggregations to your top two slowest dashboards.
- Create a shared measure catalog to prevent duplication.
- Set up a weekly review to profile query times and iterate.
Quick Test
Take the test to check your understanding.