Why this matters
Feature adoption cohorts show how quickly and widely users start using a new or existing feature. Product Analysts use them to answer questions like:
- Did the launch last week drive higher adoption than previous launches?
- Which signup cohorts adopt the feature fastest?
- What is the time-to-first-use for the feature, and how does it vary by platform or plan?
- Are users retaining the feature after the first try?
These insights guide go-to-market tactics, onboarding improvements, and roadmap decisions.
Concept explained simply
A feature adoption cohort groups users by a starting event (e.g., signup month or release week) and measures the percent who use a specific feature over time since that start.
Mental model
Think of a grid: rows are cohorts (e.g., users who signed up in Jan, Feb, Mar). Columns are time since cohort start (day/week 0, 1, 2...). Each cell shows the share of the cohort that has used the feature at least once by that time. Warmer colors or higher numbers mean faster or higher adoption.
Key choices you must define
- Cohort key: signup month, first app open, feature release week, or first eligibility date.
- Eligibility: who can realistically use the feature (e.g., only team-plan users).
- Window: days or weeks since cohort start (7, 14, 30, 90 days).
- Metric: any use vs. repeated use vs. retained use (e.g., used in 2 distinct weeks; sketched in SQL after this list).
- Denominator: all users in cohort, or only eligible users exposed to the feature.
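For the "retained use" variant, one way to flag it per user is to count distinct calendar weeks of activity. A minimal PostgreSQL sketch, assuming the conceptual events table used in the skeleton below (feature_x_used is a placeholder event name):

-- Flag "retained use": feature used in at least 2 distinct calendar weeks
SELECT user_id,
       COUNT(DISTINCT DATE_TRUNC('week', event_time)) AS distinct_weeks,
       COUNT(DISTINCT DATE_TRUNC('week', event_time)) >= 2 AS retained_use
FROM events
WHERE event_name = 'feature_x_used'
GROUP BY user_id;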
Step-by-step: Build a feature adoption cohort
- Define the feature event: The event that indicates use (e.g., Feature X Used).
- Define eligibility: Who can use it? Gate by plan, platform, country, or app version if needed.
- Choose cohort key: Commonly signup month; for launches, use release week; for gated features, use first eligibility date.
- Choose the time axis: Days or weeks since cohort start. Pick a maximum window (e.g., 8 weeks).
- Create adoption flags: For each user and time bucket, flag if the feature was used at least once by that time.
- Aggregate: Compute percent adopted = adopted_users / eligible_users per cohort and time bucket.
- Interpret: Compare cohort curves and detect shifts after changes or campaigns.
Starter SQL skeleton (conceptual)
-- Tables (conceptual):
-- events(user_id, event_name, event_time)
-- users(user_id, signup_date, plan, platform)
-- feature_release(release_date)
-- 1) Choose cohort: by signup month
WITH base AS (
  SELECT u.user_id,
         u.signup_date,
         DATE_TRUNC('month', u.signup_date) AS cohort_month,
         u.plan, u.platform
  FROM users u
), feature_use AS (
  -- First time each user triggered the feature event
  SELECT e.user_id,
         MIN(e.event_time) AS first_use_time
  FROM events e
  WHERE e.event_name = 'feature_x_used'
  GROUP BY 1
), joined AS (
  SELECT b.user_id,
         b.cohort_month,
         b.signup_date,
         b.plan,
         b.platform,
         fu.first_use_time
  FROM base b
  LEFT JOIN feature_use fu USING (user_id)
), expanded AS (
  -- One row per cohort per day offset (0-56 days = an 8-week window)
  SELECT cohort_month,
         generate_series(0, 56) AS day_since
  FROM (SELECT DISTINCT cohort_month FROM joined) c
), adoption AS (
  SELECT e.cohort_month,
         e.day_since,
         COUNT(*) AS eligible,  -- inner join: every row is an eligible cohort member
         COUNT(*) FILTER (
           -- Adopted = first use within day_since days of the user's own signup date
           WHERE j.first_use_time IS NOT NULL
             AND j.first_use_time <= j.signup_date + e.day_since * INTERVAL '1 day'
         ) AS adopted
  FROM expanded e
  JOIN joined j USING (cohort_month)
  GROUP BY 1, 2
)
SELECT cohort_month,
       day_since,
       adopted::float / NULLIF(eligible, 0) AS adoption_rate
FROM adoption
ORDER BY cohort_month, day_since;
Spreadsheet approach (conceptual)
Use a pivot:
- Rows: Cohort month (from signup date)
- Columns: Day/Week since cohort start (difference between usage date and cohort start)
- Values: % users with min(feature_use_date) within that day/week threshold
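If you prefer to start from a flat export, a sketch of the per-user rows you would pivot, assuming the same conceptual tables as in the SQL skeleton (a NULL days_to_first_use means the user never adopted):

-- One row per user: cohort plus days from signup to first feature use
SELECT u.user_id,
       DATE_TRUNC('month', u.signup_date) AS cohort_month,
       MIN(e.event_time)::date - u.signup_date AS days_to_first_use
FROM users u
LEFT JOIN events e
       ON e.user_id = u.user_id
      AND e.event_name = 'feature_x_used'
GROUP BY u.user_id, u.signup_date;

Pivot cohort_month into rows and bucket days_to_first_use against each threshold (<= 7, <= 14, <= 30) to fill the grid.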
Worked examples
Example 1: Signup-month cohorts for Favorites
Objective: What percent of new signups use Favorites within 7, 14, 30 days?
- Cohort: signup month
- Eligibility: all new users
- Event: favorites_add
- Windows: 7/14/30 days
Interpretation: If March shows 40% by day 7 vs 28% in February, onboarding changes in March likely helped early adoption.
Mini check
- Denominator = all new users each month
- Metric = any favorites_add at least once
- Compare month-over-month (a SQL sketch of this check follows)
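A minimal SQL version of this measurement, assuming a favorites_add event and the conceptual users and events tables from the skeleton above (names are illustrative):

-- % of each signup-month cohort adding a Favorite within 7/14/30 days
WITH first_fav AS (
  SELECT user_id, MIN(event_time) AS first_use
  FROM events
  WHERE event_name = 'favorites_add'
  GROUP BY user_id
)
SELECT DATE_TRUNC('month', u.signup_date) AS cohort_month,
       COUNT(*) AS users,
       COUNT(*) FILTER (WHERE f.first_use <= u.signup_date + INTERVAL '7 day')::float
         / COUNT(*) AS pct_7d,
       COUNT(*) FILTER (WHERE f.first_use <= u.signup_date + INTERVAL '14 day')::float
         / COUNT(*) AS pct_14d,
       COUNT(*) FILTER (WHERE f.first_use <= u.signup_date + INTERVAL '30 day')::float
         / COUNT(*) AS pct_30d
FROM users u
LEFT JOIN first_fav f USING (user_id)
GROUP BY 1
ORDER BY 1;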
Example 2: Release-week cohorts for Dark Mode
Objective: After releasing Dark Mode, did users adopt it faster in later releases?
- Cohort: release week (the week a user is first exposed, i.e., first session on the qualifying app version)
- Eligibility: users on app version >= 5.2
- Event: dark_mode_toggle_on
- Windows: week 0–8
Interpretation: If week 0 adoption increases from 5% to 12% after adding a hint card, the nudge is effective.
Exposure gating tip
Only include sessions after the user updates to version 5.2. Pre-update activity is not eligible.
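A sketch of the exposure-gated cohort, assuming each event row carries an app_version column (an assumption; your schema may record versions elsewhere, and comparing version strings directly is a simplification):

-- Cohort start = first event on app version 5.2+ (the exposure date)
WITH exposure AS (
  SELECT user_id, MIN(event_time) AS exposed_at
  FROM events
  WHERE app_version >= '5.2'  -- simplification: assumes sortable version strings
  GROUP BY user_id
), first_toggle AS (
  SELECT user_id, MIN(event_time) AS first_use
  FROM events
  WHERE event_name = 'dark_mode_toggle_on'
  GROUP BY user_id
)
SELECT DATE_TRUNC('week', x.exposed_at) AS release_week,
       COUNT(*) AS eligible,
       COUNT(*) FILTER (
         WHERE t.first_use >= x.exposed_at  -- pre-update activity is not eligible
           AND t.first_use < x.exposed_at + INTERVAL '7 day'
       )::float / COUNT(*) AS week0_adoption
FROM exposure x
LEFT JOIN first_toggle t USING (user_id)
GROUP BY 1
ORDER BY 1;

To produce the full week 0-8 grid, cross this with a generate_series of week offsets, as in the starter skeleton.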
Example 3: Eligibility-based cohorts for Team Sharing
Objective: How quickly do team-plan accounts adopt Team Sharing?
- Cohort: first day on Team plan
- Eligibility: team-plan users only
- Event: share_with_team
- Windows: day 0–30
Interpretation: If adoption by day 7 is 18% for SMB vs 9% for Enterprise, consider tailored onboarding.
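A sketch using first eligibility as the cohort start, assuming a hypothetical plan_changes(user_id, plan, changed_at) history table (your billing data may live elsewhere):

-- Cohort start = first day on the Team plan; day-7 adoption of Team Sharing
WITH team_start AS (
  SELECT user_id, MIN(changed_at)::date AS day0  -- hypothetical plan history table
  FROM plan_changes
  WHERE plan = 'team'
  GROUP BY user_id
), first_share AS (
  SELECT user_id, MIN(event_time) AS first_use
  FROM events
  WHERE event_name = 'share_with_team'
  GROUP BY user_id
)
SELECT DATE_TRUNC('month', t.day0) AS eligibility_cohort,
       COUNT(*) AS eligible,
       COUNT(*) FILTER (
         WHERE s.first_use <= t.day0 + INTERVAL '7 day'
       )::float / COUNT(*) AS day7_adoption
FROM team_start t
LEFT JOIN first_share s USING (user_id)
GROUP BY 1
ORDER BY 1;

Join account attributes (e.g., a segment column, if you have one) and add them to the GROUP BY to compare SMB vs Enterprise.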
Diagnose and improve adoption
- Time-to-adoption: Median days to first use (see the SQL sketch after this list)
- Depth: Share of adopters with 3+ uses in 14 days
- Retention: % who use feature again in week 2 after first use
- Segmentation: platform, country, plan, acquisition channel
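To quantify time-to-adoption, a minimal sketch of median and p90 days from signup to first use, using the conceptual tables above. Note it covers adopters only, so read it alongside the adoption rate:

-- Median and p90 days from signup to first use, per signup cohort (adopters only)
WITH first_use AS (
  SELECT user_id, MIN(event_time) AS first_use
  FROM events
  WHERE event_name = 'feature_x_used'
  GROUP BY user_id
)
SELECT DATE_TRUNC('month', u.signup_date) AS cohort_month,
       PERCENTILE_CONT(0.5) WITHIN GROUP (
         ORDER BY f.first_use::date - u.signup_date) AS median_days,
       PERCENTILE_CONT(0.9) WITHIN GROUP (
         ORDER BY f.first_use::date - u.signup_date) AS p90_days
FROM users u
JOIN first_use f USING (user_id)
GROUP BY 1
ORDER BY 1;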
Practical levers
- Onboarding placement: earlier, clearer, contextual
- Education: tooltips, empty states, checklists
- Eligibility: simplify gating; reduce friction
- Performance: load time and reliability strongly affect early adoption
Exercises
Complete these in your tool of choice. Solutions are provided below each exercise.
Exercise 1: Define the right cohort and denominator
You launched Push Notifications Preferences on March 1. You want to measure adoption in the first 14 days by users who updated to app version 3.4.
- Decide: cohort key
- Decide: eligibility and denominator
- Decide: event and windows
Solution
Cohort: First day a user is on app version 3.4 (release exposure date).
Eligibility/denominator: All users who updated to 3.4 (eligible sessions after update). Exclude pre-update days.
Event: notifications_pref_saved (any save).
Windows: Day 0–14 since update day.
Metric: % of eligible users with at least one notifications_pref_saved by each day.
Exercise 2: Write a 30-day adoption query skeleton
Compute the percent of users who used Feature X within 30 days of signup, by signup month.
Solution
WITH base AS (
  SELECT user_id,
         DATE_TRUNC('month', signup_date) AS cohort_month,
         signup_date
  FROM users
  -- Keep only cohorts old enough to have a full 30-day window
  WHERE signup_date <= CURRENT_DATE - INTERVAL '30 day'
), first_use AS (
  SELECT user_id, MIN(event_time) AS first_use
  FROM events
  WHERE event_name = 'feature_x_used'
  GROUP BY 1
), joined AS (
  SELECT b.user_id, b.cohort_month, b.signup_date, f.first_use
  FROM base b
  LEFT JOIN first_use f USING (user_id)
)
SELECT cohort_month,
       COUNT(*) AS users,
       COUNT(*) FILTER (
         WHERE first_use <= signup_date + INTERVAL '30 day'
       ) AS adopted_30d,
       COUNT(*) FILTER (
         WHERE first_use <= signup_date + INTERVAL '30 day'
       )::float / NULLIF(COUNT(*), 0) AS adoption_rate_30d
FROM joined
GROUP BY 1
ORDER BY 1;
Self-check checklist
- Cohort start is unambiguous and reproducible
- Denominator matches eligibility
- Time window is relative to cohort start
- Event truly represents feature use (not just viewing)
- You can explain each cell: who, when, why
Common mistakes and how to self-check
- Mixing exposure and non-exposure time: Make sure the cohort start and eligibility rules guarantee that users could actually use the feature during the window.
- Wrong denominator: Using all active users instead of eligible users understates adoption. Confirm denominator logic.
- Double-counting users: Deduplicate by user before aggregations; use first-use timestamps.
- Using absolute counts only: Always normalize to percents to compare cohorts of different sizes.
- Too short/long windows: Use relevant windows (e.g., 7/14/30 days). For slow-moving B2B, consider weeks.
- Ignoring seasonality or campaigns: Annotate launches, promos, or outages to explain shifts.
Self-audit mini task
Pick one recent cohort cell. Can you trace 5 random users and confirm the event and timestamp rules hold? If not, fix definitions.
Practical projects
- Heatmap dashboard: Build a cohort heatmap for a key feature (7, 14, 30 days). Add platform and plan filters.
- Treatment compare: Create A/B segmented cohorts to compare adoption curves for two onboarding variants.
- Time-to-first-use: Compute median and p90 days-to-adoption for the last 6 signup cohorts; present insights and recommendations.
Who this is for
Product Analysts, Data Analysts, Growth Analysts, and PMs who need trustworthy adoption insights to guide product decisions.
Prerequisites
- Basic SQL or spreadsheet pivot skills
- Understanding of events, users, and timestamps
- Comfort with percentages and time windows
Learning path
- Understand cohort definitions (signup, release, eligibility)
- Define the adoption event and denominator
- Build cohort tables (SQL or spreadsheet)
- Add time windows and visualize
- Segment and interpret patterns
- Automate and monitor over time
Mini challenge
Pick a recently shipped feature. Define:
- Cohort: choose one
- Eligibility: concise rule
- Event: exact name
- Windows: 7/14/30 days
- Success threshold: what good looks like (e.g., 25% by day 14)
Draft your interpretation rule: "If day-14 adoption decreases by 5pp vs prior cohort, then investigate onboarding screens."
Next steps
- Instrument missing events if your adoption signal is ambiguous.
- Add retention-after-first-use to distinguish trials from true adoption.
- Schedule weekly refresh and annotate launches in dashboards.