
Environments And Deployments

Learn Environments And Deployments for free with explanations, exercises, and a quick test (for Analytics Engineers).

Published: December 23, 2025 | Updated: December 23, 2025

Why this matters

As an Analytics Engineer, you ship data models that power dashboards, finance reports, and product decisions. Proper environments (dev, staging, prod) and reliable deployments make your work safe, auditable, and fast to iterate.

  • Ship new models without breaking production.
  • Promote changes with confidence using CI and automated tests.
  • Control costs and runtime via targeted runs and schedules.
  • Enable quick rollback if something goes wrong.

Concept explained simply

An environment is a safe place to run dbt with specific settings: credentials, schema/database names, and runtime behaviors. Deployments are automated runs of dbt (CI, scheduled jobs) that build and test models in those environments.

Mental model

Think of three lanes on a road:

  • Dev: Your shoulder lane. Experiment, iterate, fail safely.
  • Staging: The middle lane. Final checks that mirror production.
  • Prod: The express lane. Locked down, predictable, stable outputs.

You move changes from dev → staging → prod using branches, pull requests, CI, and scheduled jobs.

Key building blocks

Profiles and targets (profiles.yml)

Define multiple targets inside one profile, e.g., dev, staging, prod. Each target sets credentials, database/warehouse, schema, and threads.

my_profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      project: my-proj
      dataset: analytics_dev
      threads: 4
    staging:
      type: bigquery
      project: my-proj
      dataset: analytics_stg
      threads: 8
    prod:
      type: bigquery
      project: my-proj
      dataset: analytics
      threads: 16
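
To keep dev and prod credentials separated (and secrets out of version control), targets can read values from environment variables with dbt's env_var function. A minimal sketch; the variable names and the service-account method are assumptions:

```yaml
# Illustrative only: pull credentials from environment variables so each
# environment uses its own service account (variable names are assumptions).
prod:
  type: bigquery
  method: service-account
  project: my-proj
  dataset: analytics
  keyfile: "{{ env_var('DBT_PROD_KEYFILE') }}"
  threads: 16
```

This way a developer's local profile simply never has the prod variable set, so accidental writes to prod fail fast.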

Environment-specific configs

Use target-aware config in dbt_project.yml or model config blocks to change behavior per environment (e.g., schema suffixes, tags, materializations).

models:
  +schema: "{{ target.name }}"
  marts:
    +materialized: table
    +tags: ["mart"]
    staging:
      +materialized: view

Selectors, state, and slim CI

Selectors filter what to run. State-based selection runs only changed models and their dependents:

dbt build --select state:modified+ --defer --state path/to/prod_artifacts
  • state:modified+ = changed models plus downstream.
  • --defer = reuse prod-built refs when unchanged.
  • --state = where prod artifacts (manifest.json) live.
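
Selection rules that jobs reuse can be captured as named selectors in a selectors.yml file, so a job runs `dbt build --selector nightly_marts` instead of repeating flags. A minimal sketch; the selector names are illustrative:

```yaml
# selectors.yml — selector names are illustrative
selectors:
  - name: nightly_marts
    description: "Mart models plus tagged staging models"
    definition:
      union:
        - method: tag
          value: mart
        - method: tag
          value: staging
  - name: ci_changed
    description: "Changed models and their downstream dependents"
    definition:
      method: state
      value: modified+
```

Note that state-based selectors still need --state (and usually --defer) on the command line to locate prod artifacts.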

Jobs: CI vs. scheduled
  • CI jobs: run on pull requests; quick, selective, block merges if tests fail.
  • Staging jobs: run on merge to main; near-prod checks.
  • Prod jobs: scheduled (e.g., hourly, nightly) for stable outputs.
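
As one concrete wiring, a slim CI job in GitHub Actions might look like the sketch below. This is illustrative, not a drop-in config: the adapter, Python version, and the assumption that prod artifacts are already synced into ./prod_state all depend on your setup.

```yaml
# .github/workflows/dbt-ci.yml — illustrative sketch
name: dbt CI
on: pull_request
jobs:
  slim-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-bigquery   # adapter choice is an assumption
      - run: dbt deps
      # ./prod_state is assumed to hold manifest.json from the last prod run
      - run: >
          dbt build --target staging
          --select state:modified+ --defer --state ./prod_state
```

Because the job runs on pull_request and dbt build includes tests, a failing test fails the check and blocks the merge.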

Worked examples

1) Profiles with clean schemas per environment

Goal: isolate dev/staging/prod objects.

  1. Create targets in profiles.yml with distinct schemas/datasets.
  2. Set default schema suffix in dbt_project.yml using {{ target.name }}.
  3. Run locally in dev: dbt build --target dev.

Result: dev models land in analytics_dev, staging in analytics_stg, prod in analytics. One caveat: dbt's default behavior appends a custom schema to the target's schema (producing e.g. analytics_dev_dev), so when each target already points at its own dataset the {{ target.name }} suffix is optional; to use the custom schema name verbatim, override the generate_schema_name macro.
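
If you want the rendered schema to be exactly the custom value, rather than dbt's default `<target schema>_<custom schema>` concatenation, the documented pattern is to override the generate_schema_name macro. A minimal sketch:

```sql
-- macros/generate_schema_name.sql — minimal sketch of a common override
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- if custom_schema_name is none -%}
        {{ target.schema }}
    {%- else -%}
        {{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```

With this override, +schema: "{{ target.name }}" would send models to schemas named dev, staging, and prod exactly, so pick one isolation strategy (distinct datasets per target, or custom schemas) rather than stacking both.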

2) Prod nightly with weekly full-refresh

Goal: fast nightly builds, deeper weekly rebuild for incremental models.

Commands
# Nightly (incremental where possible)
dbt build --target prod --select tag:mart tag:staging

# Weekly deep rebuild (Sunday)
dbt build --target prod --full-refresh --select tag:mart

Result: nightly runs are fast and cheap, while the weekly job resets drift and handles schema changes.
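
With plain cron (dbt Cloud or an orchestrator would express the same schedules in its own UI), the two jobs above could be wired roughly like this; the paths and times are assumptions:

```
# Illustrative crontab: nightly Mon–Sat at 02:00, weekly deep rebuild Sunday
0 2 * * 1-6  cd /opt/dbt-project && dbt build --target prod --select tag:mart tag:staging
0 2 * * 0    cd /opt/dbt-project && dbt build --target prod --full-refresh --select tag:mart
```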

3) Pull request slim CI

Goal: validate only changed models and their dependents using prod state.

Commands
dbt deps
# Assume prod artifacts synced to ./prod_state
# Or downloaded from object storage/previous job artifacts

dbt build \
  --target staging \
  --select state:modified+ \
  --defer \
  --state ./prod_state

Result: quick CI that blocks breaking changes without rebuilding the whole project.

Set it up in 7 steps

  1. Name environments: dev, staging, prod.
  2. Create targets in profiles.yml with separate schemas/datasets.
  3. Adopt naming: use {{ target.name }} in schema to isolate objects.
  4. Tag models (e.g., staging, mart) for selective runs.
  5. Enable tests in CI/staging to block merges when failing.
  6. Store artifacts from prod jobs so CI can use --state.
  7. Schedule: nightly prod runs, a weekly full-refresh, and CI on every pull request.
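
Step 6 (persisting artifacts) can be as simple as copying the manifest after a successful prod run. This sketch assumes dbt's default target/ directory and guards against a first run where no manifest exists yet:

```shell
# Persist prod artifacts so CI jobs can point --state at them.
set -eu
mkdir -p prod_state
if [ -f target/manifest.json ]; then
  cp target/manifest.json prod_state/
fi
# In a real pipeline you would also sync prod_state/ to object storage;
# bucket names and CLI tooling are deployment-specific, so omitted here.
```

CI jobs then download prod_state/ before running with --defer --state ./prod_state.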

Exercises

Do these in a sample dbt project, keeping the exercise IDs below so your progress aligns with the quick test. Your progress is saved if you are logged in; otherwise you can still complete everything for free.

Exercise ex1: Three targets + schema isolation

Create dev, staging, prod targets in profiles.yml and ensure each writes to a different schema/dataset. Run a model in each.

Exercise ex2: Slim CI and Prod schedule

Write the exact commands for a PR CI job (state-based selective build) and for a prod nightly job plus a weekly full-refresh job.

Self-check checklist

  • Dev, staging, prod targets exist and select different schemas
  • dbt build --target dev created objects in the dev schema
  • CI command uses state:modified+ and --defer --state
  • Prod nightly avoids --full-refresh
  • Weekly job includes --full-refresh for incremental marts

Common mistakes and how to self-check

  • One shared schema for all envs: leads to collisions. Fix: use separate schemas or {{ target.name }} suffixes.
  • No CI gating: merges break prod later. Fix: PR CI with dbt build and tests.
  • Rebuilding everything on CI: slow and costly. Fix: state:modified+ with --defer.
  • Missing artifacts: CI cannot defer. Fix: persist prod manifest.json and point --state to it.
  • Forgetting tests in staging: bugs slip into prod. Fix: run dbt test in CI/staging.
  • Using full-refresh nightly: excessive cost. Fix: reserve full-refresh weekly or when needed.
  • Credentials reuse: dev writer in prod is risky. Fix: separate service accounts/roles per environment.
  • Not tagging models: hard to select. Fix: add tags in dbt_project.yml.

Quick self-audit
  1. List your targets: dev/staging/prod exist and point to different schemas?
  2. Run dbt ls --select state:modified+ --state ./prod_state on a PR branch: does it return only changed models?
  3. Open your last prod run logs: are artifacts stored and accessible to CI?
  4. Check schedules: is full-refresh limited to weekly or specific maintenance windows?

Practical projects

  • Spin up a toy project with 10 models; implement dev/staging/prod and promote a change via PR to prod.
  • Add tags (staging, intermediate, mart) and create selectors to build only marts nightly.
  • Store prod artifacts in a folder, run a local CI simulation using --defer and confirm unchanged models are reused.

Mini challenge

You need to change the grain of a key mart (breaking change). Draft a safe rollout:

  1. Create a new version of the model (e.g., fct_orders_v2) and tag it mart + v2.
  2. Run in dev, then staging with CI gates (state:modified+).
  3. Backfill in prod during a scheduled window (weekly full-refresh).
  4. Switch downstream models to reference v2, then deprecate v1.

Hint

Use parallel models (v1 and v2) so current dashboards keep working while you validate v2.
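
One illustrative way to keep v1 live while validating v2 is to switch downstream refs behind a project variable; the model and variable names here are hypothetical:

```sql
-- models/marts/orders_enriched.sql — hypothetical downstream model
-- Flip the var once fct_orders_v2 is validated in staging, e.g.:
--   dbt build --vars '{use_orders_v2: true}'
{% if var('use_orders_v2', false) %}
select * from {{ ref('fct_orders_v2') }}
{% else %}
select * from {{ ref('fct_orders') }}
{% endif %}
```

Because both versions stay in the DAG, dashboards on v1 keep working until you cut every downstream model over and deprecate v1.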

Who this is for

  • Analytics Engineers and Data Analysts using dbt Core or dbt Cloud.
  • Engineers responsible for stable BI and reporting pipelines.

Prerequisites

  • Basic dbt knowledge (models, seeds, tests).
  • Access to a SQL warehouse (e.g., BigQuery, Snowflake, Redshift, Postgres).

Learning path

  1. dbt basics: models, tests, documentation.
  2. Selectors and tags for targeted runs.
  3. Environments and deployments (this lesson).
  4. Performance and cost controls (incremental models, concurrency).

Next steps

  • Create or update your profiles.yml to include all targets.
  • Set up a PR CI job using slim CI.
  • Schedule nightly and weekly prod jobs.
  • Run the quick test below to confirm understanding.

Progress & test

The quick test is available to everyone for free. If you are logged in, your progress will be saved automatically.

Practice Exercises

2 exercises to complete

Instructions

  1. In your profiles.yml, create three targets: dev, staging, prod with distinct schemas/datasets (e.g., analytics_dev, analytics_stg, analytics).
  2. In dbt_project.yml, set +schema: "{{ target.name }}" at the project or models level.
  3. Run dbt build --target dev, then --target staging, and confirm objects land in the correct schemas.
Expected Output
Models and tests execute successfully in dev and staging. Objects appear in the correct environment-specific schemas/datasets.

Environments And Deployments — Quick Test

Test your knowledge with 8 questions. Pass with 70% or higher.

