Why this matters
Product Analysts use retention dashboards to answer questions that drive growth: Did the new onboarding improve D7 retention? Which countries churn faster? Do notifications actually bring users back? A solid dashboard turns events into decisions.
- Track core health: D1/D7/D30 retention, WAU/MAU stickiness, churn.
- Compare cohorts before vs. after a release.
- Spot segments with abnormal drop-offs (platform, country, acquisition channel).
- Quantify reactivation and the impact of lifecycle campaigns.
Concept explained simply
Retention measures how many users return after starting to use your product (their cohort date). Cohorts group users by when they started (e.g., sign-up date or first key action). We then check who comes back on day 1, day 7, day 30, etc.
Mental model: a leaky bucket
Think of your user base as a bucket. New users pour in, some activate and keep returning, others leak out. Product work either patches leaks (improves retention) or increases inflow. A retention dashboard shows where the leaks are by cohort and segment.
Key metrics and definitions
- Cohort: users grouped by first key action date (e.g., first session or first value moment).
- Activation: your definition of a meaningful first value event (e.g., completes onboarding + creates first project).
- Classic retention (Dn): users from the cohort who were active on day n, divided by the cohort size.
- Rolling retention (Dn): users from the cohort who returned on or after day n (less strict, so usually higher).
- Survival curve: percentage of the cohort still active by day n.
- Churn rate (period): 1 minus the survival rate of that period.
- Stickiness: DAU/MAU (or WAU/MAU) showing how frequently users return within a month.
- Reactivation: previously churned users who came back after a defined inactivity window (e.g., 14 days).
Formulas (plain language)
- Classic D7 retention = users from the cohort with an event on day 7 / total users in the cohort.
- Rolling D7 retention = users from the cohort with an event on day 7 or later / total users in the cohort.
- DAU/MAU stickiness = unique users active in the last day / unique users active in the last 30 days.
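A minimal sketch of the stickiness formula in the same BigQuery-style SQL as the outline later in this lesson, assuming the product_activity(user_id, event_date) table used there; the reference date is illustrative.
-- DAU/MAU stickiness for one reference day (here 2025-03-31) over the preceding 30-day window.
WITH dau AS (
  SELECT COUNT(DISTINCT user_id) AS n
  FROM product_activity
  WHERE event_date = DATE '2025-03-31'
),
mau AS (
  SELECT COUNT(DISTINCT user_id) AS n
  FROM product_activity
  WHERE event_date BETWEEN DATE '2025-03-02' AND DATE '2025-03-31'
)
SELECT SAFE_DIVIDE(dau.n, mau.n) AS stickiness
FROM dau CROSS JOIN mau;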
Worked examples
Example 1: Compute classic D1 and D7
Cohort 2025-03-01 has 1,000 users. 380 were active on day 1; 210 on day 7.
- D1 retention = 380 / 1,000 = 38%
- D7 retention = 210 / 1,000 = 21%
If rolling D7 shows 28%, that means some users skipped day 7 but returned later.
Example 2: Activation choice changes retention
If the cohort is defined by first app open, D1 retention might look low because many never reach value. If you define activation as "onboarded + first project created", cohort size shrinks but retention quality improves. Choose the event that best represents first value, and keep it consistent.
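A hedged sketch of the two cohort definitions in SQL; the events table and the event names (app_open, onboarding_completed, first_project_created) are illustrative placeholders for your own tracking plan.
-- Loose cohort: first app open. Large cohort, but many users never reach value.
WITH first_open AS (
  SELECT user_id, MIN(event_date) AS cohort_date
  FROM events
  WHERE event_name = 'app_open'
  GROUP BY user_id
),
-- Strict cohort: onboarded AND created a first project. Smaller cohort;
-- cohort_date is the day the user completed both (the first value moment).
activated AS (
  SELECT user_id,
         GREATEST(
           MIN(IF(event_name = 'onboarding_completed', event_date, NULL)),
           MIN(IF(event_name = 'first_project_created', event_date, NULL))
         ) AS cohort_date
  FROM events
  WHERE event_name IN ('onboarding_completed', 'first_project_created')
  GROUP BY user_id
  HAVING COUNT(DISTINCT event_name) = 2
)
SELECT * FROM activated;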
Example 3: Segment comparison
D7 retention for iOS = 24%, Android = 18%. After filtering Android to version 5.2 only, D7 = 13%. This points to a version-specific issue rather than Android overall. Segment deeper before concluding.
Build a retention dashboard in your BI tool
- Define cohorts and events
- Pick activation event (e.g., onboarding_completed or first_value_event).
- Set cohort date = date of user’s first activation event.
- Prepare data
- Compute first_seen per user.
- Label each event with day offset from cohort date (0, 1, 2, ...).
- Mark activity flags per day (active = 1 if any qualifying event on that day).
- Aggregate
- For each cohort date, compute cohort_size and percent active on day 1, 7, 14, 30.
- Calculate rolling retention if needed (active on or after day n).
- Visualize
- Heatmap: cohorts (rows) × day offsets (columns), cells show retention %.
- Line chart: survival curve per cohort or overall.
- Bar/line: D1/D7/D30 by segment (platform, country, channel).
- Add controls
- Date range picker for cohort start.
- Segment filters: platform, app version, country, acquisition channel.
- Toggle classic vs rolling retention.
- QA checks
- Cohort sizes match user acquisition logs.
- Day 0 always 100% by definition for activated cohorts.
- Retention never exceeds 100% (classic).
Example SQL outline (conceptual)
-- 1) First seen (activation) per user
WITH first_seen AS (
  SELECT user_id, MIN(activation_date) AS cohort_date
  FROM user_activation_events
  GROUP BY user_id
),
-- 2) Daily activity flags relative to cohort
activity AS (
  SELECT a.user_id,
         f.cohort_date,
         DATE_DIFF(a.event_date, f.cohort_date, DAY) AS day_offset,
         1 AS active
  FROM product_activity a
  JOIN first_seen f USING (user_id)
  WHERE a.event_date >= f.cohort_date
),
-- 3) Aggregate to retention table
retention AS (
  SELECT cohort_date,
         COUNT(DISTINCT user_id) AS cohort_size,
         SAFE_DIVIDE(COUNT(DISTINCT IF(day_offset = 1, user_id, NULL)), COUNT(DISTINCT user_id)) AS d1,
         SAFE_DIVIDE(COUNT(DISTINCT IF(day_offset = 7, user_id, NULL)), COUNT(DISTINCT user_id)) AS d7,
         SAFE_DIVIDE(COUNT(DISTINCT IF(day_offset = 30, user_id, NULL)), COUNT(DISTINCT user_id)) AS d30
  FROM activity
  GROUP BY cohort_date
)
SELECT * FROM retention ORDER BY cohort_date DESC;
Adapt functions to your SQL dialect.
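Two hedged extensions of the outline, using the same assumed tables. Rolling retention only changes the day-offset condition, and a long-format grid (one row per cohort date and day offset) is convenient input for the heatmap described earlier.
-- Rolling D7 variant of the retention CTE: swap day_offset = 7 for day_offset >= 7, e.g.
-- SAFE_DIVIDE(COUNT(DISTINCT IF(day_offset >= 7, user_id, NULL)), COUNT(DISTINCT user_id)) AS d7_rolling

-- Long-format retention grid for a heatmap.
WITH first_seen AS (
  SELECT user_id, MIN(activation_date) AS cohort_date
  FROM user_activation_events
  GROUP BY user_id
),
activity AS (
  SELECT a.user_id, f.cohort_date,
         DATE_DIFF(a.event_date, f.cohort_date, DAY) AS day_offset
  FROM product_activity a
  JOIN first_seen f USING (user_id)
  WHERE a.event_date >= f.cohort_date
),
cohort_sizes AS (
  SELECT cohort_date, COUNT(DISTINCT user_id) AS cohort_size
  FROM first_seen
  GROUP BY cohort_date
)
SELECT act.cohort_date,
       act.day_offset,
       SAFE_DIVIDE(COUNT(DISTINCT act.user_id), ANY_VALUE(s.cohort_size)) AS retention
FROM activity act
JOIN cohort_sizes s ON act.cohort_date = s.cohort_date
GROUP BY act.cohort_date, act.day_offset
ORDER BY act.cohort_date, act.day_offset;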
Common mistakes and self-checks
- Mixing classic and rolling retention in one chart. Self-check: labels clearly say "classic" or "rolling".
- Wrong denominator (e.g., using active users on day n instead of cohort size). Self-check: Day 0 shows 100% for activated cohorts.
- Timezone drift leading to off-by-one days. Self-check: lock the dashboard to a single timezone, or use user-local time with a clear label.
- Activation too broad (first open). Self-check: pick the first value event; compare metrics before and after change.
- Dirty segments (including test users/bots). Self-check: filter out internal/test accounts.
- Version rollups hiding outages. Self-check: break down by app version when retention drops.
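A couple of these self-checks can be automated as small queries, assuming you materialize the outline's retention CTE as a table or view; the internal_users exclusion table in the second check is an illustrative assumption, so adapt it to however you flag test accounts.
-- Classic retention should never exceed 100%; any rows returned here deserve investigation.
SELECT cohort_date, d1, d7, d30
FROM retention
WHERE d1 > 1 OR d7 > 1 OR d30 > 1;

-- Events from internal/test accounts should be excluded before cohorting.
-- Assumes an internal_users(user_id) exclusion table (illustrative name).
SELECT COUNT(*) AS leaking_internal_events
FROM product_activity
WHERE user_id IN (SELECT user_id FROM internal_users);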
Exercises
These mirror the tasks below. Do them in SQL or your BI tool. Use the checklist to verify.
Exercise 1: Build a cohort table
Create a simple retention table with columns: cohort_date, cohort_size, d1, d7, d30 (classic). Assume you have user_activation_events(user_id, activation_date) and product_activity(user_id, event_date).
- Define cohorts using the activation event.
- Compute classic D1, D7, D30.
- Output last 3 cohort rows.
Checklist
- Day 0 equals 100% (implied by definition).
- Dn never exceeds 100%.
- Last 3 cohorts show cohort_size > 0.
Exercise 2: Diagnose a drop
Given the following D7 retention by platform for the same two weeks of cohorts:
Week A: iOS 24%, Android 22%
Week B: iOS 23%, Android 15%
Android v5.1 in Week B: 22%
Android v5.2 in Week B: 12%
Identify the segment most likely responsible and list two follow-up actions.
Checklist
- Segment identified at version level.
- At least two actionable follow-ups (e.g., rollback, targeted message).
Practical projects
- Build a retention heatmap with cohort rows and day columns; add platform and country filters.
- Create a survival curve comparing pre-release vs post-release cohorts.
- Add a reactivation panel: reactivated users per week after 14 days inactivity.
- Design an alert rule: flag when D7 drops more than 5 percentage points week-over-week for any segment with cohort_size ≥ 500.
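For the reactivation panel above, a minimal sketch assuming the same product_activity(user_id, event_date) table; the 14-day window and the weekly grain are illustrative choices.
-- A user is reactivated on an active day if their previous active day was more than 14 days earlier.
WITH daily AS (
  SELECT DISTINCT user_id, event_date
  FROM product_activity
),
gaps AS (
  SELECT user_id,
         event_date,
         DATE_DIFF(event_date,
                   LAG(event_date) OVER (PARTITION BY user_id ORDER BY event_date),
                   DAY) AS days_since_prev
  FROM daily
)
SELECT DATE_TRUNC(event_date, WEEK) AS week,
       COUNT(DISTINCT user_id) AS reactivated_users
FROM gaps
WHERE days_since_prev > 14
GROUP BY week
ORDER BY week;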
Who this is for
- Product Analysts and data-savvy PMs who need actionable retention views.
- Data/BI Analysts standardizing cohort logic across teams.
Prerequisites
- Basic SQL or comfort with BI data modeling.
- Understanding of user events and unique user IDs.
- Access to activation and activity event data.
Learning path
- Clarify activation and cohort rules with stakeholders.
- Prototype a small cohort table for one month of data.
- Add classic and rolling retention toggles.
- Introduce segmentation (platform, version, country, channel).
- QA with spot checks; document definitions on the dashboard.
- Set alerts and a weekly review ritual.
Mini challenge
Your D1 improves 5 points but D7 stays flat for two consecutive weeks. In one paragraph, propose a hypothesis and 3 quick checks using your dashboard.
Possible directions
- Onboarding change increases early engagement but no sustained value.
- Check D1 to D3 drop curve, feature engagement depth, and segment-level D7.
- Compare survival curves for power users vs. new users.
Next steps
- Standardize a Retention Data Contract: exact events, timezones, filtering.
- Automate cohort generation daily and archive past results for stability.
- Share a one-page dashboard guide so others interpret the metrics correctly.
About saving progress
The quick test is available to everyone. If you are logged in, your progress will be saved automatically.