
Making Recommendations

Learn Making Recommendations for free, with explanations, exercises, and a quick test (for Data Analysts).

Published: December 20, 2025 | Updated: December 20, 2025

Who this is for

This subskill is for Data Analysts who turn findings into clear, actionable recommendations that decision-makers can trust and implement.

Prerequisites

  • Basic descriptive statistics (means, proportions, trends)
  • Comfort building simple charts (line, bar, funnel)
  • Ability to frame a business question and define a primary metric

Why this matters

In real roles, you will not be judged only by charts—you will be asked: What should we do and why? Strong recommendations help teams choose, act, and measure results. Typical tasks include:

  • Proposing A/B tests or rollouts after identifying a conversion drop
  • Prioritizing product or marketing actions using impact vs effort
  • Summarizing trade-offs and risks for leadership decisions
  • Defining success metrics, owners, and timelines

Concept explained simply

A recommendation is a concise decision proposal backed by evidence and designed for action. It should answer: What should we do, why, what impact do we expect, how will we do it, what could go wrong, and how will we know it worked?

Mental model: The WWH-HR-D loop

  • What: The action or decision to take
  • Why: Evidence and logic that justify it
  • How: Steps, owner, timeline, resources
  • Hypothesis & Impact: Expected change in a key metric
  • Risks & Alternatives: What to watch, options if wrong
  • Decision & Measurement: Go/no-go rule and success criteria

A simple structure for recommendations

  1. Decision: Approve X (or run an experiment)
  2. Objective metric: Target metric and baseline
  3. Expected impact: Direction and magnitude (even if a range)
  4. Rationale: Key facts, not the entire analysis
  5. Plan: Steps, owner, timeline
  6. Risks & mitigations: Biggest uncertainties and how to monitor
  7. Success check: What result qualifies as success and next step

Tip: Keep it to one screen

Executives scan. Aim for 5–8 bullets and a supporting chart if needed.

Prioritization methods

When you have multiple ideas, choose the best few to propose.

  • Impact vs Effort: High-impact, low-effort first
  • ICE: Impact × Confidence ÷ Effort (Impact and Effort 1–10, Confidence 0–1; see the scoring sketch below)
  • RICE: Reach × Impact × Confidence ÷ Effort (consider audience size)

How to score fairly
  • Define a shared scale (1 = minimal, 10 = massive)
  • Calibrate with examples (“previous banner test yielded +2% CTR = Impact 3”)
  • Use ranges when uncertain; reflect uncertainty in lower Confidence
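
If you score more than a handful of ideas, a few lines of code keep the arithmetic consistent. Below is a minimal Python sketch, assuming 1–10 scales for Impact and Effort and a 0–1 Confidence multiplier; the function names and sample ideas are illustrative, not from any specific tool:

    # Rank ideas by ICE = Impact x Confidence / Effort.
    # RICE adds Reach (audience size) as an extra multiplier.
    def ice(impact, confidence, effort):
        return impact * confidence / effort

    def rice(reach, impact, confidence, effort):
        return reach * ice(impact, confidence, effort)

    ideas = {  # name: (Impact, Confidence, Effort) -- illustrative values
        "Speed up checkout": (7, 0.8, 3),
        "Redesign homepage": (8, 0.5, 7),
        "Fix broken search filters": (5, 0.9, 2),
    }
    for name, scores in sorted(ideas.items(), key=lambda kv: -ice(*kv[1])):
        print(f"{name}: {ice(*scores):.2f}")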

Confidence and assumptions

Always state your confidence and the assumptions your recommendation depends on. If confidence is low, recommend a test with a decision rule.

  • Confidence bands: High (≥80%), Medium (50–79%), Low (<50%)
  • Assumptions: e.g., seasonality stable, data complete, no major channel changes
  • Decision rule: e.g., Ship if uplift ≥ +1.5pp at p < 0.05 for 2 weeks
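
Once a test ends, a decision rule like the last bullet can be checked mechanically. Here is a minimal sketch using a standard one-sided two-proportion z-test; the function name and the counts in the example call are illustrative placeholders:

    # Ship/no-ship check: uplift >= threshold (in pp) and p < alpha.
    from math import sqrt
    from statistics import NormalDist

    def decide(conv_a, n_a, conv_b, n_b, min_uplift_pp=1.5, alpha=0.05):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        uplift_pp = (p_b - p_a) * 100
        p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        p_value = 1 - NormalDist().cdf((p_b - p_a) / se)   # one-sided
        return uplift_pp, p_value, uplift_pp >= min_uplift_pp and p_value < alpha

    print(decide(800, 10_000, 980, 10_000))  # ~+1.8pp uplift -> ship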

Worked examples

1) E-commerce: Mobile add-to-cart drop

  • Finding: Mobile add-to-cart rate fell from 8.2% to 7.1% (−1.1pp) after new image carousel; page weight +1.4MB; LCP +1.2s.
  • Recommendation (draft): Run an A/B test replacing carousel with a static hero on mobile.
  • Why: Performance correlation across 4 product pages; historical tests show +0.6–1.0pp when LCP −1s.
  • Impact: Expect +0.6–0.9pp add-to-cart (Medium–High impact), Confidence: Medium.
  • Plan: Eng team implements variant (2 days), QA (1 day), test for 2 weeks. Owner: Web PM.
  • Risks: Creative consistency; mitigate with brand-approved static image.
  • Decision rule: Ship if uplift ≥ +0.5pp at p < 0.05; otherwise iterate on image size.

2) SaaS: Reducing churn in first 30 days

  • Finding: 35% of churned users never completed onboarding step 2; emails have 9% CTR.
  • Recommendation: Add in-product checklist with progress bar + contextual tooltips; deprecate email nudges.
  • Why: Session replays show confusion at data import; best practices and internal micro-test suggest +12–18% step completion.
  • Impact: Predict churn reduction −2–3pp (Medium), Confidence: Medium.
  • Plan: Design (3 days), build (5 days), release behind feature flag. Owner: Growth PM.
  • Success: Step-2 completion +15% and churn −2pp vs control after 4 weeks.

3) Marketing: Reallocate spend

  • Finding: Paid Social CPA $78 vs Paid Search CPA $49; marginal CPA on Social rising week over week.
  • Recommendation: Shift 15% budget from Social to Search branded + high-intent non-brand.
  • Why: Diminishing returns curve; Search impression share 72% with available inventory.
  • Impact: Forecast blended CPA −6–9% (High), Confidence: High.
  • Plan: Reallocate for 14 days; monitor daily CPA and impression share. Owner: Performance Lead.
  • Risk: Lower top-funnel; mitigate with retargeting cap adjustment.
  • Decision: Maintain the shift if blended CPA improves by ≥5% with no drop in MQL quality.
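
A reallocation forecast like this can be sanity-checked with simple arithmetic. The sketch below assumes a 50/50 starting spend split and constant channel CPAs, both illustrative; a real forecast should use marginal CPA curves, which is why the memo projects a larger gain:

    # Blended CPA = total spend / total conversions across channels.
    CPA_SOCIAL, CPA_SEARCH = 78, 49           # from the finding above

    def blended_cpa(social_spend, search_spend):
        conversions = social_spend / CPA_SOCIAL + search_spend / CPA_SEARCH
        return (social_spend + search_spend) / conversions

    social, search = 100_000, 100_000         # assumed starting split
    shift = 0.15 * social                     # move 15% of Social budget
    before = blended_cpa(social, search)
    after = blended_cpa(social - shift, search + shift)
    print(f"${before:.2f} -> ${after:.2f} ({after / before - 1:+.1%})")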

Exercises

Exercise 1: Turn findings into a recommendation

Scenario: Mobile add-to-cart rate dropped about 13% relative (8.2% → 7.1%) after launching an image-heavy carousel. LCP worsened by 1.2s. Desktop unaffected. Your task: write a recommendation using the structure above.

Hint

Propose an A/B test with a static hero image, define success as add-to-cart uplift threshold, and assign an owner.

Suggested solution

  • What: A/B test a static hero replacing the carousel on mobile.
  • Why: LCP +1.2s correlates with −1.1pp add-to-cart; historical performance tests improved conversion.
  • Impact: +0.6–0.9pp add-to-cart; Confidence: Medium.
  • How: Eng (2 days) + QA (1 day), 2-week test. Owner: Web PM.
  • Risks: Brand consistency; mitigate by using approved imagery.
  • Decision rule: Ship if ≥ +0.5pp uplift at p < 0.05; else iterate on asset size.

Exercise 2: Prioritize with ICE

Score each idea (Impact 1–10, Confidence 0.5–1.0, Effort 1–10; higher Effort = harder). Compute ICE = Impact × Confidence ÷ Effort and rank.

  • A) Reduce image size (I=6, C=0.8, E=2)
  • B) New referral program (I=7, C=0.6, E=6)
  • C) Improve search relevance (I=9, C=0.7, E=8)
  • D) Add onboarding checklist (I=5, C=0.9, E=2)
  • E) Reallocate ad spend (I=6, C=0.9, E=3)

Hint

Lower effort increases the ICE score. Watch how Confidence affects borderline cases.

Suggested solution

Compute ICE: A=2.4, B=0.7, C≈0.79, D=2.25, E=1.8. Rank: A (1st), D (2nd), E (3rd), C (4th), B (5th). Recommend starting with A and D; E as quick follow-up.
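
You can verify these scores with a few lines of Python (values copied from the exercise):

    # ICE = Impact x Confidence / Effort, printed highest first.
    ideas = {"A": (6, 0.8, 2), "B": (7, 0.6, 6), "C": (9, 0.7, 8),
             "D": (5, 0.9, 2), "E": (6, 0.9, 3)}
    ranked = sorted(ideas.items(), key=lambda kv: -(kv[1][0] * kv[1][1] / kv[1][2]))
    for name, (i, c, e) in ranked:
        print(f"{name}: {i * c / e:.2f}")
    # Output: A: 2.40, D: 2.25, E: 1.80, C: 0.79, B: 0.70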

Quality checklist (self-review)

Common mistakes and how to self-check

  • Vague actions: Fix by starting with a verb and a decision (Approve X / Test Y).
  • No metric: Always name the primary metric and baseline.
  • Overstuffed rationale: Keep 2–4 key facts; move details to appendix if needed.
  • No owner/timeline: Assign one person and a realistic timeframe.
  • Ignoring risk: State uncertainties and define a monitor/rollback.
  • Overconfidence: Use confidence bands and ranges for impact.

Self-check prompt

If this were wrong, what would have to be true? How would we learn fast and cheaply?

Practical projects

  1. Conversion rescue memo: Analyze a recent traffic or conversion change (real or sample data). Produce a one-page recommendation with decision rule and owner.
  2. Prioritization board: List 10 growth ideas, score with ICE or RICE, and propose the top 3 with short rationale.
  3. Experiment playbook: Write 3 test cards (hypothesis, metric, MDE, duration rule of thumb) and a go/no-go template.
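
For the test cards in project 3, a common rule of thumb puts the required sample per variant at about 16·p(1−p)/MDE² for roughly 80% power at α = 0.05. A minimal sketch, with an assumed traffic figure:

    # Rough sample size per variant for a conversion-rate test.
    def sample_size(baseline, mde_pp):
        mde = mde_pp / 100                       # percentage points -> fraction
        return 16 * baseline * (1 - baseline) / mde ** 2

    n = sample_size(baseline=0.08, mde_pp=0.5)   # detect +0.5pp on an 8% rate
    daily_visitors = 6_000                       # assumed traffic per variant/day
    print(f"~{n:,.0f} users per variant, ~{n / daily_visitors:.0f} days")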

Learning path

  1. Confirm your business goal and primary metric
  2. Summarize findings into 2–4 key facts
  3. Draft a recommendation using the structure
  4. Estimate impact and confidence; decide test vs ship
  5. Prioritize with ICE/RICE
  6. Get feedback from a peer and revise
  7. Present and capture decisions and follow-ups

Mini challenge

You find that 22% of users abandon at checkout step 3 (payment). A new payment provider promises faster load times but a 0.3% fee increase. Draft a 6-bullet recommendation (What, Why, Impact, How, Risks, Decision rule). Keep it under 120 words.

Example answer

Switch 30% of payment traffic to Provider B via A/B. Why: Step-3 latency −800ms historically yields +0.4–0.7pp completion; fee +0.3% offsets partially. Impact: +0.5pp conversion; net revenue +0.2–0.4% (Medium), Confidence: Medium. How: Payments team config (1 day), run 2 weeks. Risks: Gateway reliability; monitor error rate. Decision: Roll to 100% if completion ≥ +0.4pp with no error increase and revenue net ≥ +0.2%.
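
The net-revenue figure can be reproduced with quick arithmetic. This sketch assumes the fee applies to all processed volume and revenue scales directly with completion; both assumptions are simplifications:

    # Net revenue impact = relative completion gain - extra provider fee.
    base_completion = 0.78      # 22% abandon at step 3
    uplift_pp = 0.5             # expected completion gain
    fee_increase = 0.003        # +0.3% fee on processed volume
    gross = (base_completion + uplift_pp / 100) / base_completion - 1
    print(f"gross {gross:+.2%}, net {gross - fee_increase:+.2%}")
    # gross +0.64%, net +0.34% -- inside the memo's +0.2-0.4% range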

Next steps

  • Complete the quick test below to lock in the structure and prioritization concepts.
  • Note: Anyone can take the test; if you log in, your progress will be saved.

Practice Exercises


Instructions

Use the scenario: Mobile add-to-cart rate dropped about 13% relative after an image-heavy carousel launch; LCP worsened by 1.2s; desktop unaffected. Write a 6–8 bullet recommendation: What, Why, Impact (with a range) and Confidence, How (owner, timeline), Risks, Decision rule.

Expected Output

A concise, action-focused recommendation with metric, impact range, confidence, owner, timeline, risks, and a clear decision rule.

Making Recommendations — Quick Test

Test your knowledge with 8 questions. Pass with 70% or higher.

