Topic 7 of 8

Measure Optimization Basics

Learn Measure Optimization Basics for free with explanations, exercises, and a quick test (for BI Developers).

Published: December 24, 2025 | Updated: December 24, 2025

Why this matters

Slow measures make dashboards lag, time out, or show inconsistent results across pages. As a BI Developer, you often need to: speed up KPIs like Revenue, Margin %, DAU/MAU; remove duplicated logic across visuals; push heavy work to the right layer (database vs BI tool); and ensure measures scale as data grows.

  • Scenario: Finance asks why Revenue is inconsistent between two pages. Root cause: slightly different measure filters.
  • Scenario: A DISTINCTCOUNT measure makes your daily dashboard take 40 seconds. You need to pre-aggregate or change approach.
  • Scenario: Sales margin measure iterates rows unnecessarily. A rewrite cuts time by 70%.

Concept explained simply

A measure is a calculation evaluated at query time. Optimization means making it do less work and push work to the most efficient place. Start with the data model: the best measure on a poor model will still be slow.

Mental model

  • Do less: avoid scanning unnecessary rows; reduce distinct values; pre-aggregate where stable.
  • Do earlier: push computation down to the database or ETL if it rarely changes.
  • Do smarter: leverage aggregations and filter context instead of row-by-row loops.

Common building blocks to remember
  • Star schema with clean relationships reduces filters and joins.
  • Low-cardinality columns group faster than high-cardinality ones.
  • Pre-aggregations (daily or monthly) serve most dashboards better than raw-level scans.
  • Avoid repeated complex expressions inside visuals; centralize in one measure.
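
As a sketch of the last point, a base measure can be defined once and referenced by its variants instead of repeating the expression in each visual. The measure names below are illustrative, and the time-intelligence variant assumes a marked 'Date' table:

// Hypothetical DAX sketch: one base measure, reused by variants
Total Revenue = SUM ( Sales[ExtendedAmount] )
Revenue LY = CALCULATE ( [Total Revenue], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
Revenue YoY % = DIVIDE ( [Total Revenue] - [Revenue LY], [Revenue LY] )

If the definition of revenue later changes, only the base measure needs updating.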

Key principles of measure optimization

  • Prefer aggregations over row iteration when possible.
  • Reduce DISTINCTCOUNT with precomputed surrogate keys or pre-aggregated tables.
  • Limit filter scopes; avoid expensive filters applied repeatedly across visuals.
  • Push heavy, stable calculations to the source (views, materialized views) or ETL.
  • Cache and reuse: create reusable base measures; avoid duplicating logic per visual.
  • Choose efficient data types; trim unused columns; reduce text/high-cardinality where possible.
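
For the surrogate-key idea, one hedged sketch (table and column names are assumptions) is to map high-cardinality text ids to compact integers during ETL, so distinct counts and joins operate on small integer keys:

-- Hypothetical ETL step: integer surrogate keys for text user ids
CREATE TABLE dim_user AS
SELECT user_id, ROW_NUMBER() OVER (ORDER BY user_id) AS user_key
FROM (SELECT DISTINCT user_id FROM events) u;
-- Fact rows then store user_key; distinct counts over integers
-- are cheaper than over long text values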

Worked examples

Example 1: Replace row-by-row revenue with an aggregated approach

Problem: A measure sums price times quantity by iterating every row.

// Slow pattern (row iterator)
Revenue (slow) = SUMX(Sales, Sales[Quantity] * Sales[UnitPrice])

Optimized options:

  • Option A (ETL/source): add Sales[ExtendedAmount] = Quantity * UnitPrice, then measure becomes SUM(Sales[ExtendedAmount]).
  • Option B (pre-aggregated table): create a daily aggregated table by product and day, summing ExtendedAmount. Most visuals can query it instead of the raw table.

-- Source-side pre-aggregation (daily)
CREATE MATERIALIZED VIEW mv_sales_daily AS
SELECT order_date, product_id, SUM(quantity * unit_price) AS revenue
FROM sales
GROUP BY order_date, product_id;
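
Dashboards can then query the much smaller aggregate instead of the raw fact table; for example (the 90-day range is illustrative):

-- Visual-level query against the pre-aggregation
SELECT product_id, SUM(revenue) AS revenue
FROM mv_sales_daily
WHERE order_date >= CURRENT_DATE - INTERVAL '90 day'
GROUP BY product_id;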

Result: Less row iteration at query time, faster visuals, predictable performance.

Example 2: Taming DISTINCTCOUNT for active users

Problem: Daily Active Users uses DISTINCTCOUNT on a giant events table.

-- Slow pattern
SELECT event_date, COUNT(DISTINCT user_id) AS dau
FROM events
WHERE event_date BETWEEN CURRENT_DATE - INTERVAL '30 day' AND CURRENT_DATE
GROUP BY event_date;

Optimized approach: pre-derive daily unique users in ETL or via a materialized view. Distinct is computed once per day, then served quickly.

CREATE MATERIALIZED VIEW mv_dau AS
SELECT event_date, COUNT(*) AS dau
FROM (
  SELECT DISTINCT event_date, user_id FROM events
) d
GROUP BY event_date;

In BI, use a simple SUM over mv_dau[dau] filtered by date. Benefit: stable, repeatable performance and consistent numbers across pages.

Example 3: Margin % without repeated filters

Problem: Each visual computes margin % with repeated filters, causing redundant evaluation.

// Before
Margin Amount = SUMX(Sales, Sales[Revenue] - Sales[Cost])
Margin % (slow) = DIVIDE([Margin Amount], [Revenue])  // both may depend on row iterators

Optimized approach: build base measures for revenue and cost on pre-aggregated columns or source views, then define Margin % once and reuse it everywhere.

// After
Revenue = SUM(Sales[ExtendedAmount])
Cost = SUM(Sales[ExtendedCost])
Margin % = DIVIDE([Revenue] - [Cost], [Revenue])

Result: Centralized calculation, less duplicate work across visuals, easier validation.

Quick self-check checklist

  • Is your model a star schema with clear one-to-many relationships?
  • Did you avoid row-by-row iterators where a simple SUM or COUNT works?
  • Are high-cost DISTINCTCOUNTs replaced by pre-aggregations when possible?
  • Did you push stable calculations to the source or ETL?
  • Are measures reusable and referenced by other measures instead of copy/paste logic?
  • Are data types compact (integers for keys, numeric where needed, minimized text)?

Exercises

Note: Everyone can take the exercises and test. Only logged-in users will have their progress saved.

Exercise 1 — Rewrite a slow measure

You have a fact table Sales(order_date, product_id, quantity, unit_price, cost). Your current BI measure is:

Revenue (slow) = SUMX(Sales, Sales[quantity] * Sales[unit_price])

Task:

  • Propose a faster approach using either: (a) a source-side column or materialized view, or (b) a more efficient measure strategy.
  • Provide the SQL or BI expression you would use.
  • Explain how your approach reduces work at query time.

Exercise 2 — Optimize DISTINCTCOUNT of users

Given a table events(event_time, user_id, country), analysts need DAU and 7-day rolling DAU. The current approach runs DISTINCTCOUNT(user_id) per day on the raw table.

Task:

  • Create a source-side strategy (view or materialized view) that precomputes daily unique users.
  • Show the BI-side measure(s) to compute DAU and a 7-day rolling DAU using the pre-aggregated data.
  • Explain how this improves dashboard performance.

Common mistakes and how to self-check

  • Mistake: Using row iterators by default. Fix: Check if a simple SUM on a precomputed column would work.
  • Mistake: High-cardinality joins everywhere. Fix: Validate your star schema; avoid joining multiple big fact tables in a single visual.
  • Mistake: DISTINCTCOUNT in every visual. Fix: Pre-aggregate distinct counts by the needed grain.
  • Mistake: Copy-pasted measures with minor variations. Fix: Build base measures and reference them.
  • Mistake: Unbounded filters. Fix: Limit date ranges and unnecessary dimensions in visuals.

Self-check routine
  1. Profile the slowest visual: what measure dominates time?
  2. Inspect the measure: any iterators or DISTINCTCOUNT?
  3. Can you push logic to a view or materialized view?
  4. Can you reduce grain or re-use a base measure?
  5. Re-test with the same filters to verify gains.
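
For step 1, if your BI tool exposes the generated SQL, the database's plan output is a quick way to see where the time goes. The syntax below is PostgreSQL-style and purely illustrative:

-- Inspect the plan and actual timings of the slowest visual's query
EXPLAIN ANALYZE
SELECT event_date, COUNT(DISTINCT user_id) AS dau
FROM events
GROUP BY event_date;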

Who this is for

  • BI Developers building dashboards and KPIs in tools like Power BI, Looker, or Tableau.
  • Analytics Engineers who model data for BI consumption and want predictable performance.

Prerequisites

  • Basic SQL (SELECT, JOIN, GROUP BY).
  • Familiarity with your BI tool's measure syntax (e.g., DAX, LookML, calculated fields).
  • Understanding of star schemas (facts and dimensions).

Practical projects

  • Project 1: Convert three slow measures to source-side pre-aggregations and compare dashboard load time before/after.
  • Project 2: Replace DISTINCTCOUNT-based DAU with a daily unique users materialized view and build a weekly trend visual.
  • Project 3: Create a reusable measures layer (Revenue, Cost, Margin %, Avg Order Value) and remove duplicate logic from visuals.

Learning path

  1. Model foundations: star schema and relationships.
  2. Measure patterns: aggregations vs iterators; filter context basics.
  3. Pre-aggregation strategies: views and materialized views.
  4. Performance checks: profiling visuals and queries.
  5. Governance: central measure definitions and naming.

Mini challenge

Your dashboard shows: Revenue, Margin %, and DAU by day. Current load time is 25 seconds. Propose a three-step plan to:

  • Reduce the load to under 5 seconds by adjusting measures and/or data model.
  • Ensure consistent numbers across all pages.
  • Minimize maintenance for future metrics.

One possible approach
  1. Source-side: create mv_sales_daily (revenue, cost by day and product) and mv_dau (dau by day). Refresh daily.
  2. BI layer: define base measures over these views; reference them for Margin % and rolling windows.
  3. Visuals: limit default date to last 90 days; remove redundant filters; reuse base measures.
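
The rolling windows in step 2 can be sketched in DAX as follows. This assumes mv_dau is loaded into the model with a related 'Date' table; measure names are illustrative:

// Hypothetical base measure and rolling average over pre-aggregated DAU
DAU = SUM ( mv_dau[dau] )
DAU 7d Avg =
AVERAGEX (
    DATESINPERIOD ( 'Date'[Date], MAX ( 'Date'[Date] ), -7, DAY ),
    [DAU]
)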

Next steps

  • Apply pre-aggregations to your top two slowest dashboards.
  • Create a shared measure catalog to prevent duplication.
  • Set up a weekly review to profile query times and iterate.

Quick Test

Take the test to check your understanding: 8 questions, 70% or higher to pass. Everyone can take it; only logged-in users will have their progress saved.

Expected output for Exercise 1

A solution that uses either a precomputed ExtendedAmount column or a materialized view (e.g., mv_sales_daily) and a simple SUM-based measure, with a justification of fewer row iterations at query time.

