
Landing Page Experiment Basics

Learn Landing Page Experiment Basics for free with explanations, exercises, and a quick test (for Marketing Analysts).

Published: December 22, 2025 | Updated: December 22, 2025

Why this matters

Marketing Analysts frequently need to prove which landing page version drives more sign-ups, purchases, or lead form submissions. Solid A/B tests let you make confident recommendations, reduce wasted ad spend, and systematically improve conversion rate (CVR). In this lesson, you'll learn to:

  • Prioritize landing page ideas by expected impact and risk.
  • Write clear hypotheses stakeholders can align on.
  • Estimate sample size and duration before launching.
  • Choose the right primary metric and guardrails (e.g., bounce rate, page speed).
  • Analyze results without bias or early peeking.

Concept explained simply

A landing page A/B test randomly assigns visitors to two versions (A: control, B: variation). After enough visitors see each version, you compare outcomes (e.g., conversion rate) and decide if B truly outperforms A or if the difference is just noise.

Mental model

Think of your test like a fair coin flip for each eligible visitor: heads → control, tails → variant. Keep everything else the same. Let the coin flips pile up until you have enough to tell whether the coin changed the outcome in a meaningful way.
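
In practice, teams often implement this coin flip by hashing a stable user ID, which makes assignment both random-looking and sticky (the same visitor always sees the same variant). Below is a minimal Python sketch of that idea; the function name, the experiment ID salt, and the 50/50 split are illustrative assumptions, not any specific tool's API.

  import hashlib

  def assign_variant(user_id: str, experiment_id: str, split: float = 0.5) -> str:
      """Deterministically assign a user to 'control' or 'variant'.

      Hashing (experiment_id + user_id) gives a stable assignment: the same
      user always lands in the same bucket for this experiment, and
      different experiments get independent splits.
      """
      key = f"{experiment_id}:{user_id}".encode("utf-8")
      # Map the first 8 hex digits of the hash to a number in [0, 1].
      bucket = int(hashlib.sha256(key).hexdigest()[:8], 16) / 0xFFFFFFFF
      return "control" if bucket < split else "variant"

  # The same user gets the same answer on every call.
  print(assign_variant("user-123", "headline-test"))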

Core components of a landing page experiment

  • Objective: What business outcome are we trying to improve?
  • Hypothesis (clear template): Because of [insight], changing [element] for [audience] will increase [primary metric] from [baseline] to [target] within [timeframe].
  • Unit of randomization: Usually user-level (to avoid users seeing both variants). Session-level only if users rarely return and you cannot persist assignment.
  • Variants & split: A (control) vs B (variant), often 50/50 split.
  • Eligibility rules: Who gets included/excluded (e.g., new visitors from paid campaigns, excluding internal IPs)?
  • Primary metric: A single decision-driving metric (e.g., form submit rate).
  • Guardrail metrics: Watch for harm (bounce rate, First Contentful Paint, error rate).
  • Sample size & duration: Estimate before the test, and run through at least a full business cycle (typically ≥7 days).
  • Decision rule: Predefine success criteria (e.g., statistical significance + no guardrail harm).

Quick sample-size rule of thumb (back-of-the-envelope)

For conversion rate tests, a common rough estimate per variant is:

n per variant ≈ 16 × p × (1 − p) / d²

  • p = baseline conversion rate (as a decimal)
  • d = minimum absolute lift you want to detect (the minimum detectable effect, MDE). If baseline is 4% and you want a 20% relative lift → target 4.8% → d = 0.8 percentage points = 0.008

This is a quick approximation (roughly 80% power, 5% significance). Use it to plan; exact calculators may differ.
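
If you want to script the estimate, the rule translates directly into a few lines of Python. This is a sketch of the quick rule only (not an exact power calculation), reusing the 4% baseline and d = 0.008 from the example above.

  import math

  def sample_size_per_variant(p: float, d: float) -> int:
      """Rough per-variant sample size at ~80% power, 5% significance."""
      return math.ceil(16 * p * (1 - p) / d ** 2)

  # Baseline 4% CVR, 20% relative lift -> d = 0.008 (0.8 percentage points)
  print(sample_size_per_variant(0.04, 0.008))  # ≈ 9,600 per variant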

Worked examples

Example 1 — Headline change for lead form

  • Baseline form submit rate (p): 3.5% (0.035)
  • Target (MDE): +15% relative → ≈4.03% (0.04025), so d ≈ 0.00525
  • n per variant ≈ 16 × 0.035 × 0.965 / 0.00525² ≈ 19,600 visitors (round up to ~20,000; see the check after this list)
  • Traffic: 10,000 eligible visitors/day → 5,000 per variant/day → ~4 days for sample, but run at least 7–10 days to cover weekday effects and guard against variability.
  • Decision rule: Ship if variant improves submit rate and guardrails show no degradation.
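
As a quick arithmetic check on Example 1, using the rough rule above and this example's 10,000 eligible visitors/day:

  import math

  n = math.ceil(16 * 0.035 * (1 - 0.035) / 0.00525 ** 2)  # ≈ 19,607 per variant
  days = math.ceil(2 * n / 10_000)                        # ≈ 4 days of eligible traffic
  print(n, days)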

Example 2 — Compress hero image to improve speed

  • Primary metric: Form submit rate
  • Guardrails: LCP (Largest Contentful Paint, a page-speed metric), bounce rate, console errors
  • Hypothesis: Smaller image improves speed → lower bounce → higher submits.
  • Outcome: If form submit increases and LCP does not worsen (preferably improves), variant is a win.

Example 3 — CTA color/contrast update

  • Risk: Users must not see both versions. Use user-level randomization with sticky assignment.
  • Metric sanity: Count unique conversions per user to avoid double-counting repeated clicks (see the sketch after this list).
  • Decision: Only ship if primary metric improves and no accessibility issues (contrast) are introduced.
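
Here is a minimal pandas sketch of that sanity check. The table and column names (user_id, variant, event) are illustrative assumptions about how your raw events might look.

  import pandas as pd

  events = pd.DataFrame({
      "user_id": ["u1", "u1", "u2", "u3", "u3"],
      "variant": ["A", "A", "A", "B", "B"],
      "event": ["submit", "submit", "view", "submit", "submit"],
  })

  # Keep one conversion per user so repeated clicks don't inflate the rate.
  unique_converters = (
      events[events["event"] == "submit"]
      .drop_duplicates(subset=["user_id", "variant"])
      .groupby("variant")["user_id"]
      .nunique()
  )
  print(unique_converters)  # A: 1 unique converter, B: 1 (despite repeat clicks)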

Run it: step-by-step

  1. Define objective: e.g., increase form submissions from paid search traffic.
  2. Write hypothesis with baseline and target.
  3. Choose unit of randomization and traffic split.
  4. Select metrics: one primary, 2–3 guardrails.
  5. Estimate sample size and expected duration.
  6. Lock the plan (decision rule, data checks).
  7. Implement variant, QA instrumentation and assignment.
  8. Launch; monitor guardrails daily, but don't peek at the primary metric for early ship decisions.
  9. Finish when sample and time criteria are met.
  10. Analyze, decide, and document learnings.

Instrumentation & QA checklist

  • Events fire once per user (or as intended) and with correct properties.
  • Assignment is random and sticky; users don’t switch variants.
  • Eligibility/exclusions work (no employees, bots, or test traffic; see the sketch after this list).
  • Page speed and error tracking enabled for both variants.
  • All metrics visible in your analytics before launch.
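
As one concrete sketch of the eligibility check, assuming raw rows with ip and user_agent fields (the names and addresses are illustrative):

  INTERNAL_IPS = {"10.0.0.5"}  # hypothetical internal address

  rows = [
      {"ip": "10.0.0.5", "user_agent": "Mozilla/5.0"},       # internal -> excluded
      {"ip": "203.0.113.7", "user_agent": "Googlebot/2.1"},  # bot -> excluded
      {"ip": "203.0.113.9", "user_agent": "Mozilla/5.0"},    # eligible
  ]

  def is_eligible(row: dict) -> bool:
      """Exclude internal traffic and obvious bots before analysis."""
      return row["ip"] not in INTERNAL_IPS and "bot" not in row["user_agent"].lower()

  eligible = [r for r in rows if is_eligible(r)]
  print(len(eligible))  # 1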

Analyze results (simple and safe)

  • Compute observed conversion rates for A and B, absolute and relative lift.
  • Use your stats tool to get significance; don't make calls before the planned sample/time is reached (a minimal example follows this list).
  • Check guardrails: if any harm is detected (e.g., bounce up, LCP worse), be cautious even if primary improves.
  • Decide per your pre-set rule: Ship, don’t ship, or iterate.
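
For illustration, here is a minimal two-sided, two-proportion z-test in plain Python. The conversion counts are hypothetical; in practice your experimentation platform or a stats library does this calculation for you.

  import math

  def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
      """z-test for the difference between two conversion rates."""
      p_a, p_b = conv_a / n_a, conv_b / n_b
      p_pool = (conv_a + conv_b) / (n_a + n_b)
      se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
      z = (p_b - p_a) / se
      # Two-sided p-value from the standard normal CDF.
      p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
      return p_a, p_b, z, p_value

  # Hypothetical counts: 700/20,000 (A) vs 800/20,000 (B)
  p_a, p_b, z, p = two_proportion_ztest(700, 20_000, 800, 20_000)
  print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4f}")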

Common decision rules
  • Frequentist: p-value < 0.05 on primary metric, no guardrail harm, plan met.
  • Bayesian: the 95% credible interval for the lift excludes zero (and is positive), no guardrail harm, plan met.

Pick one approach in advance and stick to it for the test.

Exercises

Complete these mini-tasks, then compare with the solutions. Your answers won’t be saved unless you’re logged in.

  1. Exercise 1: Draft a test plan for a headline change using the template below.
  2. Exercise 2: Estimate sample size and test duration from given inputs.

Exercise 1 — Instructions

Use this template:

  • Objective: [what to improve]
  • Hypothesis: Because of [insight], changing [element] for [audience] will increase [primary metric] from [baseline]% to [target]%.
  • Unit of randomization: [user/session]
  • Split: [e.g., 50/50]
  • Eligibility: [who’s in/out]
  • Primary metric: [one]
  • Guardrails: [2–3]
  • Sample size (rough): [show calculation]
  • Planned duration: [≥7 days, cover weekdays]
  • Decision rule: [what must be true to ship]

Exercise 2 — Instructions
  • Baseline CVR: 5%
  • Target lift: +10% relative
  • Eligible traffic: 8,000 visitors/day
  • Split: 50/50
  • Estimate per-variant sample size using n ≈ 16 × p × (1−p) / d², then estimate days.

Common mistakes and how to self-check

  • Peeking early: Don’t stop when p-value briefly dips below 0.05; wait for planned sample/time.
  • Multiple primary metrics: Choose one. Others are secondary/guardrails.
  • Randomization leaks: Users seeing both variants corrupts results. Verify sticky assignment.
  • Too-short tests: Run at least a full week unless traffic is extremely high and seasonality is controlled.
  • Ignoring performance: New images/scripts can slow pages and drop conversions.

Self-check prompts
  • Is my hypothesis specific and measurable?
  • Did I predefine sample size, duration, and decision rules?
  • Can any user land in both variants?
  • Are guardrails monitored daily?
  • Will my analysis method match the plan?

Who this is for

Marketing Analysts, Growth Marketers, and Product Marketers who need to increase landing page conversion with evidence-based decisions.

Prerequisites

  • Basic understanding of conversion funnels
  • Comfort with percentages and simple arithmetic
  • Access to analytics or experimentation reporting (any standard tool)

Learning path

  • Start with landing page test fundamentals (this lesson)
  • Advance to sample size, power, and MDE trade-offs
  • Learn test analysis (confidence intervals or Bayesian intervals)
  • Scale with multiple concurrent tests and shared guardrails

Practical projects

  • Redesign a hero section headline and subtext; ship a measured test.
  • Speed-focused variant: compress media, measure LCP and conversion effect.
  • Form optimization: reduce fields and test submit rate changes.

Mini challenge

Pick one element on your current landing page that adds friction (e.g., long headline, vague CTA, heavy hero image). Write a one-sentence hypothesis and one primary metric. Estimate a rough sample size using the quick rule and set a realistic duration. What guardrail could block you from shipping even if CVR improves?

Next steps

  • Create a reusable test plan template for your team.
  • Standardize guardrails for all landing page experiments (bounce, LCP, error rate).
  • Document learnings in a shared log to avoid repeating tests.

Progress & saving

The quick test below is available to everyone. If you log in, your progress and quiz results will be saved to your profile.

Practice Exercises

2 exercises to complete

Instructions

Use the template to create a concise, testable plan. Include baseline and target, unit of randomization, metrics, guardrails, sample size estimate, duration, and decision rule. Keep it to 8–10 bullets.

Expected Output
A clear plan with one primary metric, 2–3 guardrails, a rough per-variant sample size, and a duration covering at least one full week.

Landing Page Experiment Basics — Quick Test

Test your knowledge with 9 questions. Pass with 70% or higher.

