Why this matters
Strong handoff documentation turns business intent into shippable software with fewer surprises. As a Business Analyst, your handoff guides developers on what to build and enables QA to verify it. Done well, it prevents scope creep, cuts rework, and speeds releases.
- Real task: Convert a feature idea into a single, testable package.
- Real task: Align Dev and QA on scope, edge cases, and data rules.
- Real task: Track changes and sign-offs across sprints.
Who this is for
Business Analysts, Product Analysts, and junior PMs who collaborate with engineering and QA.
Prerequisites
- Basic user story writing and acceptance criteria (Given/When/Then).
- Familiarity with software delivery stages (grooming, sprint planning, QA, UAT).
- Comfort with simple flows and data fields (IDs, enums, status codes).
Concept explained simply
Handoff documentation is a compact, authoritative package that answers three questions: Why are we building this? What exactly must work? How will we verify it?
Mental model: The 3-layer package
- Narrative: Business goal, scope, risks.
- Requirements: Stories, acceptance criteria, flows, data contracts, NFRs.
- Evidence: UX assets, examples, test scenarios, traceability, change log.
When each layer is present, Dev knows the target, QA knows how to test it, and stakeholders know what to expect.
What goes into a solid handoff package
- Overview & Goal: One-sentence purpose and success metric.
- Scope & Out of Scope: What is included and explicitly excluded.
- User Stories & Acceptance Criteria: Testable, INVEST-friendly.
- User Flows: Happy paths + key edge paths.
- Data Contract: Fields, types, formats, enums, validation, error codes (see the sketch after this list).
- UI/UX Assets: Screens, states, microcopy, empty/error/loading.
- Non-Functional Requirements: Performance, security, accessibility, localization, analytics.
- Dependencies: Services, feature flags, migrations, third parties.
- Risks & Assumptions: What might fail; what we assume is true.
- Rollout Plan: Environments, toggles, migration steps, rollback.
- QA Test Plan: Scenarios, data sets, negative cases, edge cases.
- Traceability: Story-to-criteria-to-test-case mapping.
- Versioning & Change Log: What changed, when, and why.
- Sign-offs & Contacts: Who approved, who to ping for blockers.
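A data contract can be as small as a few typed shapes. The sketch below is illustrative only: it assumes a hypothetical order endpoint, and every field name, enum value, and error code is invented for the example.

```typescript
// Hypothetical data contract sketch — all names and codes are
// invented for illustration, not taken from a real API.
type OrderStatus = "pending" | "shipped" | "canceled";

interface CreateOrderRequest {
  customerId: string;                     // UUID, required
  items: { sku: string; qty: number }[];  // qty must be >= 1
  couponCode?: string;                    // optional
}

type OrderErrorCode = "INVALID_SKU" | "QTY_OUT_OF_RANGE" | "COUPON_EXPIRED";

interface ErrorResponse {
  code: OrderErrorCode;  // machine-readable, for QA assertions
  message: string;       // exact user-facing copy, for UI tests
}
```

Writing the contract at this level of precision is what lets Dev and QA work in parallel: Dev implements against the types, QA asserts against the codes and messages.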
Ready-for-Dev checklist
- Problem and goal are clear and measurable.
- Each story has unambiguous acceptance criteria.
- All dependencies and feature flags identified.
- Data fields and error handling defined.
- Designs include all critical states (loading/empty/error/success).
- Non-functional requirements are explicit.
- Risks and rollback approach noted.
- Change log and version labeled (v1.0, v1.1...).
Ready-for-Test (QA) checklist
- Traceability: Each acceptance criterion maps to at least one test case.
- Negative cases and edge cases listed.
- Test data requirements and test users described.
- Environment, toggles, and logs/telemetry instructions included.
- Expected errors/status codes and messages documented.
- Accessibility and localization notes provided where applicable.
Worked examples
Example 1: Add MFA to Login
Goal: Reduce account takeover by adding optional OTP.
- Scope: Email/password + OTP via email. SMS excluded.
- Flow: Login -> OTP prompt when a risk signal is detected -> Verify -> Success.
- Data Contract: POST /auth/otp with body { userId: string, otp: string (6 digits), expiresIn: number }. Errors: OTP_EXPIRED, OTP_INVALID, TOO_MANY_ATTEMPTS (sketched in code after this example).
Acceptance Criteria (samples):
- Given a valid user and OTP, when user submits OTP within expiry, then login succeeds and session is created.
- Given 6 invalid OTP attempts, when the user retries, then account is temporarily locked for 15 minutes.
- Given expired OTP, when user submits, then user sees "Code expired. Request a new code."
NFRs: OTP verification under 2s; lockout persists across services; audit log entries for OTP sent/verified/failed.
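A minimal TypeScript sketch of the contract above. The request and error shapes mirror the bullet; the format-check helper is an assumption added for illustration.

```typescript
// Mirrors the POST /auth/otp contract from Example 1.
interface OtpRequest {
  userId: string;
  otp: string;        // exactly 6 digits
  expiresIn: number;  // seconds, per the contract above
}

type OtpError = "OTP_EXPIRED" | "OTP_INVALID" | "TOO_MANY_ATTEMPTS";

type OtpResult =
  | { ok: true; sessionId: string }  // session created on success
  | { ok: false; code: OtpError };

// Hypothetical helper: format check only — expiry and attempt
// counting happen server-side against stored state.
function checkOtpFormat(req: OtpRequest): OtpError | null {
  return /^\d{6}$/.test(req.otp) ? null : "OTP_INVALID";
}
```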
Example 2: Refund API Endpoint
Goal: Allow customer support to issue refunds via internal tool.
- Scope: Refunds for settled card payments only. Partial refunds allowed.
- Data Contract: POST /payments/{id}/refund with body { amount_cents: int, reason: enum["duplicate","customer_request","other"], note?: string }. Responses: 201 with { refund_id, status: "pending"|"succeeded"|"failed" } (sketched in code after this example).
Acceptance Criteria (samples):
- Given a settled payment, when a partial refund amount is less than or equal to remaining balance, then the API returns 201 and status pending.
- Given an unsettled payment, when a refund is requested, then respond 400 with code PAYMENT_NOT_SETTLED.
- Given reason "other", when the note is empty, then respond 400 with code NOTE_REQUIRED.
NFRs: Idempotency via Idempotency-Key; 99.9% availability; audit trails.
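The same contract expressed as TypeScript types, with the NOTE_REQUIRED rule from the criteria encoded as a small validator. This is a sketch of the bullet above, not a real payments API.

```typescript
// Mirrors the POST /payments/{id}/refund contract from Example 2.
type RefundReason = "duplicate" | "customer_request" | "other";

interface RefundRequest {
  amount_cents: number;  // > 0 and <= remaining balance
  reason: RefundReason;
  note?: string;         // required when reason === "other"
}

interface RefundResponse {
  refund_id: string;
  status: "pending" | "succeeded" | "failed";
}

// Encodes the NOTE_REQUIRED acceptance criterion above.
function validateRefund(req: RefundRequest): "NOTE_REQUIRED" | null {
  if (req.reason === "other" && !req.note?.trim()) return "NOTE_REQUIRED";
  return null;
}
```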
Example 3: Report Filter Enhancement
Goal: Add date range and status filter to Orders report.
- Scope: created_at date range (UTC) and status enum [pending, shipped, canceled].
- UI States: Empty state message when no results; remember last used filters per user.
Acceptance Criteria (samples):
- Given a date range and status are selected, when the user applies the filters, then the results table shows only matching orders.
- Given an invalid date range (start after end), when the user applies, then show an inline error and do not send the request (see the guard sketch after this example).
- Given the user returns to the page, when it loads, then the last-used filters auto-populate.
NFRs: Query returns first page under 3s for 100k orders; accessible labels for screen readers.
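The invalid-date-range criterion translates directly into a guard. A sketch, assuming ISO 8601 date strings; the error message text is illustrative, not final copy.

```typescript
// Guard for the "start after end" criterion in Example 3.
interface OrderFilter {
  startDate: string;  // ISO 8601 date, UTC
  endDate: string;    // ISO 8601 date, UTC
  status?: "pending" | "shipped" | "canceled";
}

// Returns the inline error to show, or null if the range is valid.
function validateDateRange(f: OrderFilter): string | null {
  if (new Date(f.startDate).getTime() > new Date(f.endDate).getTime()) {
    return "Start date must be on or before end date."; // illustrative copy
  }
  return null;
}
```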
Build your handoff in 6 steps
1) Clarify goal and scope
Write a one-line goal, define scope and out of scope, and set a success metric.
2) Model flows and states
Sketch the happy path and 2–3 edge paths. Include loading/error/empty states.
3) Write acceptance criteria
Use Given/When/Then. Each criterion must be testable and observable.
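A quick way to check testability is to sketch the criterion as an automated test: Given maps to setup, When to the action, Then to the assertion. A hypothetical Jest-style sketch for the lockout criterion from Example 1 — the helpers are declared but assumed, not real.

```typescript
// Assumed test helpers — sketch only, these do not exist anywhere.
declare function createTestUser(): Promise<{ id: string }>;
declare function attemptLogin(userId: string, otp: string): Promise<{ code?: string }>;

test("6 invalid OTP attempts temporarily lock the account", async () => {
  const user = await createTestUser();            // Given: a valid user
  for (let i = 0; i < 6; i++) {
    await attemptLogin(user.id, "000000");        // When: six invalid attempts
  }
  const result = await attemptLogin(user.id, "000000");
  expect(result.code).toBe("TOO_MANY_ATTEMPTS");  // Then: locked out
});
```

If a criterion cannot be sketched this way, it is probably not observable enough yet.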
4) Define data and errors
List fields, formats, validations, enums, and error codes/messages.
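Pairing each error code with its exact user-facing message in one place keeps Dev and QA aligned on copy. A sketch reusing Example 1's codes; only the expired-code message is quoted from the criteria above, the others are invented.

```typescript
// Error catalog sketch: code -> exact UI copy.
const OTP_ERRORS = {
  OTP_EXPIRED: "Code expired. Request a new code.",                  // from Example 1
  OTP_INVALID: "That code is not valid. Try again.",                 // illustrative
  TOO_MANY_ATTEMPTS: "Too many attempts. Try again in 15 minutes.",  // illustrative
} as const;

type OtpErrorCode = keyof typeof OTP_ERRORS;  // "OTP_EXPIRED" | ...
```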
5) Specify NFRs and dependencies
Performance, security, accessibility, analytics. Note feature flags and services.
6) Map to tests and version
Create a traceability table and start a change log (v1.0). Get sign-offs.
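A traceability table needs only three columns. The IDs below are illustrative:

| Story | Acceptance criterion | Test case(s) |
|-------|----------------------|--------------|
| US-12 | AC-12.1 Valid OTP within expiry logs the user in | TC-101 |
| US-12 | AC-12.2 Six invalid attempts lock the account | TC-102, TC-103 |
| US-12 | AC-12.3 Expired OTP shows the resend message | TC-104 |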
Exercises
Complete these in your own words. Compare with the sample solution to self-check.
Exercise 1: Acceptance criteria for password reset
Write 5–7 Given/When/Then criteria for a password reset flow that sends an email link, expires in 30 minutes, and enforces strong passwords.
- Include success, expired link, reused link, weak password, and rate limiting cases.
Sample solution (acceptance criteria):
- Given a registered email, when the user requests a reset, then show confirmation and send a reset link valid for 30 minutes.
- Given an expired link, when opened, then show "Link expired" and offer to resend.
- Given a used link, when opened again, then show "Link already used" and offer to resend.
- Given a new password not meeting policy (8+ chars, 1 number, 1 symbol), when submitted, then show inline errors and do not change the password (see the check after this list).
- Given a valid new password and valid link, when submitted, then update password and invalidate the link.
- Given more than 5 reset requests in an hour, when another is requested, then throttle and show "Try again later."
- Given success, when the user logs in, then the old password no longer works.
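The policy in the weak-password criterion maps to a one-line check. A sketch, assuming "symbol" means any non-alphanumeric character:

```typescript
// Password policy from the sample criteria: 8+ chars, 1 number, 1 symbol.
// "Symbol" is assumed to mean any non-alphanumeric character.
function meetsPolicy(pw: string): boolean {
  return pw.length >= 8 && /\d/.test(pw) && /[^A-Za-z0-9]/.test(pw);
}
```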
Exercise 2: Mini handoff checklist
Create a one-page handoff for adding a "Remember me" checkbox to login.
- Define goal, scope, acceptance criteria (3–5), data fields (e.g., token TTL), NFRs, and a simple QA plan.
Sample solution (example outline):
- Goal: Reduce repeat logins by 40%.
- Scope: Checkbox defaults to off; persists the session for 14 days. A shared/public device warning banner is out of scope.
- Acceptance Criteria: When checked, keep the user logged in for 14 days; when unchecked, use the default session length; when the user logs out, invalidate the remember-me session; limit of 5 remembered devices, with the oldest dropped on the 6th.
- Data: cookie remember_token (httponly, secure), expiry=14d, device_id hashed.
- NFRs: Login under 2s; security: rotate token every 7 days; audit log.
- QA Plan: Test check/uncheck, logout invalidates, multiple devices behavior, cookie attributes present, negative cases (expired token).
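For the QA check on cookie attributes, it helps to spell out the expected values. A hypothetical sketch of the settings implied by the outline above; the SameSite value is an assumption.

```typescript
// Expected remember_token cookie attributes, per the outline above.
const REMEMBER_COOKIE = {
  name: "remember_token",
  maxAgeSeconds: 14 * 24 * 60 * 60, // 14 days = 1,209,600 seconds
  httpOnly: true,   // not readable by page scripts
  secure: true,     // sent over HTTPS only
  sameSite: "lax",  // assumption — not stated in the outline
} as const;
```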
Common mistakes and how to self-check
- Vague criteria: Replace "works as expected" with observable outcomes.
- Missing error states: Always define messages and codes.
- Ignoring data contracts: Types, enums, and formats must be explicit.
- Forgetting NFRs: Performance, accessibility, analytics are part of scope.
- No traceability: Link each criterion to at least one test case.
- Poor versioning: Keep a change log with what/why, not just dates.
Self-check: 5-minute spot audit
- Can QA write tests without asking you anything?
- Can Dev implement without guessing a field name or error message?
- Are the top 3 edge cases listed?
- Is rollback defined if the rollout fails?
- Is there one page/file that's the single source of truth?
Practical projects
- Project A: Payment method add/remove handoff. Deliver: stories, criteria, flows, data contract, test plan, change log.
- Project B: Notification preferences center. Deliver: state diagram, i18n notes, accessibility checklist, analytics events.
- Project C: Export CSV feature. Deliver: field mapping, size limits, timeouts, pagination, negative test cases.
Learning path
- Step 1: Practice Given/When/Then on simple flows.
- Step 2: Add data contracts and errors to your specs.
- Step 3: Build traceability from criteria to test cases.
- Step 4: Layer in NFRs and rollout/rollback.
- Step 5: Pilot your handoff with a developer and a QA, collect feedback, iterate.
Mini challenge
Your team adds a "Deactivate account" feature. List 5 acceptance criteria covering confirmation flow, data retention, reactivation, audit logging, and access revocation. Keep them specific and testable.