Topic 5 of 8

Retries And Alerts

Learn Retries And Alerts for free with explanations, exercises, and a quick test (for ETL Developer).

Published: January 11, 2026 | Updated: January 11, 2026

Why this matters

In ETL and orchestration, things fail: APIs rate-limit, databases lock, networks hiccup. Smart retries and clear alerts keep data fresh, SLAs met, and on-call humans sane.

  • Meet SLAs: Recover automatically from transient errors without waking someone up at 2 AM.
  • Reduce noise: Alert only when human action is needed; suppress flapping alerts.
  • Protect downstream: Stop bad data from propagating by failing fast on permanent errors.

Concept explained simply

Think of retries as a thermostat for failures. When heat (errors) spikes briefly, the thermostat (retries with backoff) stabilizes the temperature (pipeline health) without human intervention. If the spike persists, raise an alert so a human can fix the root cause.

Key pieces

  • Transient vs permanent failure: Transient (e.g., 429, timeouts) often succeed if retried; permanent (e.g., 400 bad request, schema mismatch) need a fix, not retries.
  • Backoff and jitter: Wait longer between retries (backoff) and add randomness (jitter) to avoid thundering herd.
  • Idempotency: Safe to run multiple times without breaking data. Design tasks to be idempotent if they may retry.
  • Timeouts and circuit breakers: Cap how long a single try runs; stop retrying after a sensible limit.
  • Alerts and escalation: Notify the right person with context; escalate only if unresolved.

Mental model: Traffic lights for failures
  • Green: automatic recovery; silent retries.
  • Yellow: repeated transient failures; warn and keep retrying within limits.
  • Red: permanent or exhausted retries; alert with context and stop.
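
The backoff-and-jitter behavior described above can be sketched as a small helper. This is an illustrative function, not tied to any particular orchestrator; the name and parameters are our own.

```python
import random

def retry_delay(attempt, base=120, cap=900, jitter_factor=0.2):
    """Exponential backoff with jitter: base * 2^(attempt-1), capped,
    plus up to +/-20% randomness to de-synchronize retrying tasks."""
    delay = min(base * 2 ** (attempt - 1), cap)
    jitter = delay * jitter_factor
    return delay + random.uniform(-jitter, jitter)

# Attempts 1..5 with the 80/20 defaults (2m start, 15m cap):
# un-jittered delays are 120, 240, 480, 900, 900 seconds.
```

Without jitter, a hundred tasks that failed in the same second all retry in the same second; the random spread breaks up that thundering herd.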

Design defaults that work 80% of the time

  1. Classify errors: Retry on timeouts, connection errors, 429, 5xx. Do not retry on 4xx (except 429).
  2. Retry policy: 5 attempts, exponential backoff starting at 2 minutes, max delay 15 minutes, add ±20% jitter.
  3. Timeouts: Per-try timeout 10–20 minutes; total task budget within SLA.
  4. Alerting: Alert only on final failure or after X consecutive runs fail. Include run ID, task name, impact, last error, and next steps.
  5. Escalation: Pager only when data freshness or revenue is at risk; otherwise send channel/email and create a ticket.
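
Default 1 (classify errors) can be expressed as a small predicate; the function and its arguments are hypothetical names for illustration.

```python
RETRYABLE_STATUS = {429}  # the one 4xx worth retrying

def is_retryable(status_code=None, is_timeout=False, is_connection_error=False):
    """Retry on timeouts, connection errors, 429, and 5xx;
    fail fast on every other 4xx (a fix is needed, not patience)."""
    if is_timeout or is_connection_error:
        return True
    if status_code is None:
        return False
    if status_code in RETRYABLE_STATUS:
        return True
    return 500 <= status_code <= 599
```

A task wrapper can call this once per failure and either re-raise (to trigger the orchestrator's retry) or raise a non-retryable error to fail fast.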

Worked examples

Example 1: Airflow task with retries and alerts

Goal: Extract from a rate-limited API. Retry on 429, 5xx, and timeouts; alert on final failure.

# Airflow-style default_args (real Airflow parameter names; delays use timedelta)
from datetime import timedelta

default_args = {
    'retries': 5,
    'retry_exponential_backoff': True,
    'retry_delay': timedelta(minutes=2),        # starting delay
    'max_retry_delay': timedelta(minutes=15),
    'execution_timeout': timedelta(minutes=20),
    'email_on_failure': False,                  # avoid noisy per-try alerts
}

# Task-level handling
# - 429/5xx or timeout -> raise an ordinary exception so Airflow retries the task
# - Other 4xx -> raise AirflowFailException to fail fast (skips remaining retries)

# Alerting (conceptual):
# On final failure: send message with DAG id, run id, task id, impact, and last 50 lines of logs.

Why it works: Transient errors self-heal; permanent errors surface quickly with context.

Example 2: Prefect task retries and notification
# Conceptual Prefect-style config (retries, retry_delay_seconds,
# retry_jitter_factor, and timeout_seconds are real Prefect task arguments)
@task(retries=5, retry_delay_seconds=120, retry_jitter_factor=0.2, timeout_seconds=1200)
def load_to_warehouse():
    ...
    # Raise on connection errors/timeouts: Prefect retries any exception.
    # Validate inputs first so schema mismatches fail before retries are spent.

@flow
def pipeline():
    load_to_warehouse()
    # On final failure: post to the on-call channel and open a ticket
    # (e.g., via an on_failure hook).

Note: Jitter reduces synchronized retries when many tasks fail together.

Example 3: Cron + Bash wrapper
#!/usr/bin/env bash
set -euo pipefail

attempts=5
base_delay=120   # seconds
max_delay=900

for i in $(seq 1 $attempts); do
  if ./run_job.sh; then
    echo "Success"; exit 0
  fi
  if [[ $i -eq $attempts ]]; then
    echo "Final failure: alerting" >&2
    # send alert with last logs snippet and run metadata
    exit 1
  fi
  # exponential backoff with jitter
  delay=$(( base_delay * 2**(i-1) ))
  if (( delay > max_delay )); then delay=$max_delay; fi
  jitter=$(( RANDOM % (delay / 5 + 1) ))  # ~20% jitter
  sleep $(( delay + jitter ))
done

Tip: Make run_job.sh idempotent or guarded by upserts/merge to avoid duplicates.
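
The idempotency tip can be illustrated with a toy in-memory upsert keyed by a primary key: re-running the same batch leaves the table unchanged. In a real warehouse the same property comes from MERGE/UPSERT statements.

```python
def upsert(table, rows, key="id"):
    """Toy MERGE: insert new rows, overwrite existing ones by primary key.
    Running the same batch twice yields the same table (idempotent)."""
    for row in rows:
        table[row[key]] = row
    return table

table = {}
batch = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
upsert(table, batch)
upsert(table, batch)  # simulated retry: no duplicate rows appear
```

Contrast with a plain append: two runs of the same batch would double-count every row, which is exactly what a retried, non-idempotent load does to a fact table.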

Checklist before you ship

  • Retries only on transient errors; permanent errors fail fast.
  • Exponential backoff with jitter configured.
  • Per-try timeout and max total time respect the SLA.
  • Task is idempotent or has compensating actions.
  • Alerts include owner, impact, last error, and next steps.
  • Escalation path and quiet-hours policy agreed with stakeholders.

Who this is for

ETL Developers, Data Engineers, and Analytics Engineers who schedule and operate pipelines and want reliable, low-noise operations.

Prerequisites

  • Basic knowledge of your orchestrator (Airflow, Prefect, Dagster, dbt Cloud, or cron).
  • Understanding of HTTP status codes, database errors, and SLAs.
  • Ability to read logs and identify error patterns.

Learning path

  1. Identify transient vs permanent failures in your pipelines.
  2. Add backoff, jitter, and timeouts to retryable tasks.
  3. Make tasks idempotent; add guards to writes.
  4. Define alert content and escalation thresholds.
  5. Run a failure drill and tune noise down.

Common mistakes and how to self-check

  • Retrying non-retryable errors: If an error is 4xx (not 429) or a schema mismatch, retries waste time. Self-check: sample last failures; if repeats unchanged, stop retrying.
  • No jitter: Simultaneous retries can overwhelm services. Self-check: Did many tasks retry at the exact same second?
  • No timeouts: A hung task blocks the queue. Self-check: Any task running longer than historical P95 without timing out?
  • Alert fatigue: Per-try alerts spam channels. Self-check: Alert volume vs incidents—ratio should trend down over time.
  • Lack of context: Alerts without links or run IDs prolong MTTR. Self-check: Can a new on-call person act within 5 minutes using the alert alone?

Practical projects

  • Retrofit one flaky pipeline with a robust retry policy, then compare success rates week-over-week.
  • Build an alert template that includes run metadata, impact, and runbook steps; adopt it across two pipelines.
  • Implement idempotent loads using MERGE/UPSERT; verify duplicates do not occur after forced retries.
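
An alert-template builder along the lines of the second project might look like this; all field names are illustrative.

```python
def build_alert(task, run_id, impact, last_error, next_steps, owner):
    """Format a final-failure alert with the context an on-call person
    needs to act quickly: what failed, why it matters, what to do next."""
    return (
        f"[FINAL FAILURE] {task} (run {run_id})\n"
        f"Owner: {owner}\n"
        f"Impact: {impact}\n"
        f"Last error: {last_error}\n"
        f"Next steps: {next_steps}"
    )

msg = build_alert(
    task="extract_orders", run_id="2026-01-11T06:00",
    impact="orders mart stale past 07:00 SLA",
    last_error="HTTP 429 after 5 attempts",
    next_steps="check API quota; see runbook", owner="data-oncall",
)
```

The test of a good template: a new on-call engineer can act within five minutes using the message alone.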

Exercises

Exercise 1: Design a retry and alert policy for a rate-limited API

Context: A daily extract hits HTTP 429 and occasional timeouts. SLA is data ready by 07:00; typical run is 15 minutes starting at 06:00.

  1. Define retryable vs non-retryable errors.
  2. Propose attempts, backoff schedule (with jitter), and per-try timeout.
  3. Ensure total time fits SLA with buffer.
  4. Draft the final-failure alert content and escalation path.
  • Retry policy written
  • SLA impact checked
  • Alert template drafted
Tips
  • Start 2m backoff, cap 15m, add ±20% jitter.
  • Timeout per try 10–20m; stop if total exceeds 45m.
  • Alert only on final failure or 3 consecutive daily failures.
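
One way to sanity-check the SLA budget from the tips above: sum the worst-case gaps between attempts (capping first, then applying maximum positive jitter, which is one reasonable convention) plus the run time of every try. The function and its defaults are illustrative.

```python
def worst_case_minutes(attempts=5, base=2, cap=15, jitter=0.2, run_minutes=15):
    """Worst-case wall clock: every try takes `run_minutes`, every gap
    hits its capped backoff plus maximum positive jitter."""
    gaps = [min(base * 2 ** i, cap) * (1 + jitter) for i in range(attempts - 1)]
    return attempts * run_minutes + sum(gaps)

# With 15-minute runs and 5 attempts this comes to ~110 minutes,
# well past a 60-minute window: trim attempts or the per-try timeout.
```

This is why the tips cap total time at 45 minutes: attempt count, backoff, and per-try timeout have to be tuned together against the SLA.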

Exercise 2: Triage the logs

Given logs: Try1: 500; Try2: timeout; Try3: 429; Try4: 400 Bad Request (invalid parameter). Decide:

  1. Where should retries stop? Why?
  2. What code change or guard prevents this error next time?
  3. Update the policy to fail fast when the invalid parameter appears.
  • Failure type identified
  • Policy updated
  • Preventive action proposed
Tips
  • Stop on 4xx (except 429) and alert immediately.
  • Validate parameters before calling the API.
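
The "validate before calling" tip can be a small guard that raises before any HTTP call is made, so a permanent error never consumes retries. The exception class and required-field names here are hypothetical.

```python
class NonRetryableError(Exception):
    """Permanent failure: alert and stop; do not retry."""

def validate_params(params, required=("date", "api_key")):
    """Fail fast on requests that would earn a 400,
    before spending any retry attempts on them."""
    missing = [k for k in required if not params.get(k)]
    if missing:
        raise NonRetryableError(f"missing/empty parameters: {missing}")
    return params
```

In the log triage above, this guard would have turned Try4's 400 into an immediate, well-explained failure before Try1 even ran.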

Mini challenge

Pick one pipeline with more than 3 failures last month. Classify top 2 failure types as transient or permanent, implement one policy improvement (retry tuning or fast-fail), and measure the next two weeks: success rate, average duration, and alert count.

Next steps

  • Apply the 80/20 defaults to one real task this week.
  • Schedule a failure drill with your team: force a timeout and review alert clarity.
  • Document your retry/alert standards and reuse them across pipelines.

Practice Exercises

2 exercises to complete

Instructions

Context: Daily extract may hit 429 and timeouts. SLA 07:00, start 06:00, typical 15 minutes.

  1. List retryable vs non-retryable errors for this job.
  2. Propose attempts, backoff (start 2m, cap 15m), ±20% jitter, per-try timeout 10–20m.
  3. Ensure total time stays within 60 minutes start-to-end, leaving buffer before 07:00.
  4. Draft final-failure alert: include task name, run ID, impact, last error, next steps, and owner.
Expected Output
A concise policy document (5–10 lines) covering retries, backoff, jitter, timeouts, final-failure alert content, and escalation.

Retries And Alerts — Quick Test

Test your knowledge with 7 questions. Pass with 70% or higher.
