
Freshness, Completeness, and Volume Checks

Learn freshness, completeness, and volume checks for free with explanations, exercises, and a quick test (for Data Platform Engineers).

Published: January 11, 2026 | Updated: January 11, 2026

Why this matters

Freshness, completeness, and volume are the three fastest signals that a pipeline has broken: stale tables, missing partitions or null keys, and abnormal row counts each point to a distinct failure mode. Checking all three right after ingestion keeps bad data out of downstream dashboards and models.

Example 2 — Completeness checks: null rates and partition presence

Completeness asks two questions: are key columns (such as primary keys) arriving non-null, and do the expected partitions exist for recent days or hours? A sketch of both checks follows.
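A minimal sketch of both checks in BigQuery-flavored SQL, assuming a hypothetical orders table partitioned by order_date with a customer_id key column (all three names are placeholders; adapt the date functions to your warehouse):

  -- Null-rate check: share of yesterday's rows with a NULL key column.
  SELECT
    'orders' AS dataset,
    SAFE_DIVIDE(COUNTIF(customer_id IS NULL), COUNT(*)) AS pk_null_rate
  FROM orders
  WHERE order_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY);

  -- Partition-presence check: flag any of the last 7 days with no rows at all.
  SELECT
    expected_day,
    IF(o.order_date IS NULL, 'MISSING', 'PRESENT') AS partition_status
  FROM UNNEST(GENERATE_DATE_ARRAY(
         DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY),
         DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))) AS expected_day
  LEFT JOIN (SELECT DISTINCT order_date FROM orders) AS o
    ON o.order_date = expected_day
  ORDER BY expected_day;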
Step 1. Compute freshness, completeness, and volume metrics for each critical dataset after every load.
Step 2. Store metrics in a table (e.g., data_quality.metrics) with columns: dataset, partition, metric_type, metric_value, observed_at, status (see the DDL sketch after these steps).
Step 3. Schedule checks right after ingestion/transform tasks; fail fast on red alerts.
Step 4. Thresholds: start static (e.g., freshness <= 26h); evolve to dynamic bands (rolling median/IQR).
Step 5. Ownership: each dataset has an on-call contact and a short runbook for common failures.
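One way to materialize the metrics table from Step 2, sketched as BigQuery DDL (the types, comments, and status values are assumptions; partition is quoted because it is a reserved word in several dialects):

  CREATE TABLE IF NOT EXISTS data_quality.metrics (
    dataset      STRING NOT NULL,    -- e.g., 'sales.orders'
    `partition`  STRING,             -- partition value checked, e.g., '2026-01-10'
    metric_type  STRING NOT NULL,    -- 'freshness_minutes' | 'pk_null_rate' | 'row_count'
    metric_value FLOAT64,
    observed_at  TIMESTAMP NOT NULL, -- when the check ran
    status       STRING              -- 'PASS' | 'WARN' | 'FAIL'
  );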

Exercises you can run

These mirror the exercises below. You can complete them using your warehouse. The quick test at the bottom is available to everyone; sign in to save your progress.

Exercise 1 — Table freshness snapshot

Write a query that, for a list of tables, emits freshness in minutes and a PASS/FAIL based on an SLA column.

  • Inputs: table_name, event_ts_column, expected_interval_hours
  • Output: table_name, max_event_ts, freshness_minutes, status
Hint: Use MAX() over the event timestamp and compare to CURRENT_TIMESTAMP(), then CASE on a threshold.
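A hedged sketch for a single table, assuming a hypothetical web_events table with an event_ts column and a 24-hour SLA (BigQuery-flavored; swap TIMESTAMP_DIFF for your warehouse's equivalent):

  SELECT
    'web_events' AS table_name,
    MAX(event_ts) AS max_event_ts,
    TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(event_ts), MINUTE) AS freshness_minutes,
    -- PASS while the newest event is within the 24h SLA, else FAIL
    IF(TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(event_ts), MINUTE) <= 24 * 60,
       'PASS', 'FAIL') AS status
  FROM web_events;

Generalizing this across many tables with per-table timestamp columns requires dynamic SQL; see the scripting sketch under Practice Exercises below.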

Exercise 2 — Volume anomaly detector

Compute daily row counts for the last 30 days and flag the most recent day as ALERT if it is outside a ±35% band around a 7-day rolling median.

  • Inputs: dataset and date column
  • Output: dt, cnt, baseline_median, status
Hint: Use window functions (PERCENTILE_CONT or APPROX_QUANTILES) to compute a rolling median.
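One possible shape, again assuming a hypothetical marketing_events table with an event_ts column. PERCENTILE_CONT cannot take a window frame in every warehouse, so this sketch collects the 7 prior days into an array and takes an approximate median (BigQuery-flavored):

  WITH daily AS (
    SELECT DATE(event_ts) AS dt, COUNT(*) AS cnt
    FROM marketing_events
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY dt
  ),
  windowed AS (
    SELECT dt, cnt,
           ARRAY_AGG(cnt) OVER (ORDER BY dt
             ROWS BETWEEN 7 PRECEDING AND 1 PRECEDING) AS prev7
    FROM daily
  ),
  scored AS (
    SELECT dt, cnt,
           (SELECT APPROX_QUANTILES(c, 2)[OFFSET(1)]
            FROM UNNEST(prev7) AS c) AS baseline_median
    FROM windowed
  )
  SELECT dt, cnt, baseline_median,
         CASE
           WHEN baseline_median IS NULL THEN 'NO_BASELINE'  -- first days lack history
           WHEN cnt < 0.65 * baseline_median
             OR cnt > 1.35 * baseline_median THEN 'ALERT'
           ELSE 'OK'
         END AS status
  FROM scored
  ORDER BY dt;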

Operational checklist

  • Each critical dataset has freshness, completeness, and volume checks
  • Partition-level checks for daily/hourly tables
  • Metrics land in a centralized table with dataset ownership
  • Alerts include runbook links and suppression rules for known windows
  • Dynamic thresholds for seasonality; static thresholds for strict SLAs

Common mistakes and self-check

  • Using load_time when event_time is available. Self-check: compare both and quantify skew (see the sketch after this list).
  • Only table-level checks. Self-check: do partitions exist for the last N days?
  • Static thresholds on seasonal data. Self-check: examine weekly cycles before setting bounds.
  • Alert storms. Self-check: group alerts per dataset and use cool-down periods.
  • Ignoring small null rates. Self-check: trend nulls; small drifts predict future breaks.
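For the first self-check, skew between event time and load time can be quantified along these lines, assuming hypothetical event_ts and load_ts columns on an events table (BigQuery-flavored):

  SELECT
    DATE(load_ts) AS load_day,
    -- p50 and p95 of the lag between when events happened and when they landed
    APPROX_QUANTILES(TIMESTAMP_DIFF(load_ts, event_ts, MINUTE), 100)[OFFSET(50)] AS p50_skew_minutes,
    APPROX_QUANTILES(TIMESTAMP_DIFF(load_ts, event_ts, MINUTE), 100)[OFFSET(95)] AS p95_skew_minutes
  FROM events
  WHERE load_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  GROUP BY load_day
  ORDER BY load_day;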

Practical projects

  • Build a metrics table and a daily job that records freshness, completeness (PK null rate), and volume for your top 10 tables.
  • Create a dashboard showing last 14 days of metrics with red/amber/green status.
  • Implement dynamic volume bands using rolling medians and compare vs a static threshold approach.

Learning path

  • Start: Implement basic freshness checks for your ingestion layer.
  • Next: Add completeness metrics (PK nulls, partition presence).
  • Then: Add volume checks with dynamic baselines and alerting.
  • Advanced: Per-dimension completeness and schema drift monitoring.

Next steps

  • Automate metrics collection post-pipeline runs
  • Adopt runbooks for top failure modes
  • Roll out to all critical datasets and review thresholds monthly

Mini challenge

Your marketing_events table is hourly. Yesterday 18:00–21:00 shows normal counts but 22:00 is 0. Draft the minimal queries and thresholds to catch this with freshness, completeness, and volume checks. Hint: partition-level freshness and a per-hour volume band will expose the anomaly.
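One possible shape for the hourly presence part, assuming marketing_events has an event_ts column (a sketch, not the only answer; pair it with a per-hour volume band like the one in Exercise 2):

  -- Flag any hour yesterday with zero rows; a silent 22:00 shows up as MISSING.
  WITH hours AS (
    SELECT h AS expected_hour
    FROM UNNEST(GENERATE_TIMESTAMP_ARRAY(
      TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)),
      TIMESTAMP_ADD(TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)), INTERVAL 23 HOUR),
      INTERVAL 1 HOUR)) AS h
  ),
  actual AS (
    SELECT TIMESTAMP_TRUNC(event_ts, HOUR) AS hr, COUNT(*) AS cnt
    FROM marketing_events
    GROUP BY hr
  )
  SELECT expected_hour,
         COALESCE(cnt, 0) AS cnt,
         IF(COALESCE(cnt, 0) = 0, 'MISSING', 'PRESENT') AS status
  FROM hours
  LEFT JOIN actual ON actual.hr = expected_hour
  ORDER BY expected_hour;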

Practice Exercises

2 exercises to complete

Instructions

Create a single SQL query that produces a freshness report for multiple tables using a parameters table named dq_config with columns dataset (STRING), event_ts_column (STRING), and expected_interval_hours (INT). For each dataset, compute max_event_ts, freshness_minutes (now minus max_event_ts), and status, where status is PASS if freshness_minutes <= expected_interval_hours * 60 + 120 (a 2-hour grace period) and FAIL otherwise.

Assume datasets exist and event_ts_column names are valid for each dataset.

Expected Output
A result set with columns: dataset, max_event_ts, freshness_minutes, status (PASS/FAIL) for all rows in dq_config.
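Because event_ts_column differs per dataset, the report has to be assembled dynamically. A hedged sketch using BigQuery scripting (EXECUTE IMMEDIATE); other warehouses need their own dynamic-SQL mechanism, and dq_config is the parameters table defined above:

  -- Build one UNION ALL branch per row of dq_config, then run the result.
  DECLARE stmt STRING;

  SET stmt = (
    SELECT STRING_AGG(
      FORMAT("""
        SELECT
          '%s' AS dataset,
          MAX(%s) AS max_event_ts,
          TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(%s), MINUTE) AS freshness_minutes,
          IF(TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(%s), MINUTE) <= %d * 60 + 120,
             'PASS', 'FAIL') AS status
        FROM %s""",
        dataset, event_ts_column, event_ts_column, event_ts_column,
        expected_interval_hours, dataset),
      ' UNION ALL ')
    FROM dq_config
  );

  EXECUTE IMMEDIATE stmt;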

Freshness, Completeness, and Volume Checks — Quick Test

Test your knowledge with 8 questions. Pass with 70% or higher.

