Why this matters
As an Analytics Engineer, you ship data models that power dashboards, finance reports, and product decisions. Proper environments (dev, staging, prod) and reliable deployments keep that work safe, auditable, and fast to iterate on.
- Ship new models without breaking production.
- Promote changes with confidence using CI and automated tests.
- Control costs and runtime via targeted runs and schedules.
- Enable quick rollback if something goes wrong.
Concept explained simply
An environment is a safe place to run dbt with specific settings: credentials, schema/database names, and runtime behaviors. Deployments are automated runs of dbt (CI, scheduled jobs) that build and test models in those environments.
Mental model
Think of three lanes on a road:
- Dev: Your shoulder lane. Experiment, iterate, fail safely.
- Staging: The middle lane. Final checks that mirror production.
- Prod: The express lane. Locked down, predictable, stable outputs.
You move changes from dev → staging → prod using branches, pull requests, CI, and scheduled jobs.
Key building blocks
Profiles and targets (profiles.yml)
Define multiple targets inside one profile, e.g., dev, staging, prod. Each target sets credentials, database/warehouse, schema, and threads.
my_profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-proj
      dataset: analytics_dev
      threads: 4
    staging:
      type: bigquery
      method: oauth
      project: my-proj
      dataset: analytics_stg
      threads: 8
    prod:
      type: bigquery
      method: oauth   # prod typically uses service-account with a keyfile instead
      project: my-proj
      dataset: analytics
      threads: 16
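A quick way to sanity-check that each target resolves and can authenticate is dbt's built-in debug command, run once per target:

# Confirms profile, credentials, and warehouse connection for each target
dbt debug --target dev
dbt debug --target staging
dbt debug --target prod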
Environment-specific configs
Use target-aware config in dbt_project.yml or model config blocks to change behavior per environment (e.g., schema suffixes, tags, materializations).
models:
  my_project:          # your project name from dbt_project.yml
    +schema: "{{ target.name }}"
    marts:
      +materialized: table
      +tags: ["mart"]
    staging:
      +materialized: view
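Note that by default dbt appends a custom schema to the target's schema, so the config above yields names like analytics_dev_dev. If you want the custom name used verbatim, the usual pattern is to override the generate_schema_name macro; a minimal sketch, placed in macros/:

{% macro generate_schema_name(custom_schema_name, node) -%}
    {#- Use the custom schema as-is instead of <target schema>_<custom schema> -#}
    {%- if custom_schema_name is none -%}
        {{ target.schema }}
    {%- else -%}
        {{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}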
Selectors, state, and slim CI
Selectors filter what to run. State-based selection runs only changed models and their dependents:
dbt build --select state:modified+ --defer --state path/to/prod_artifacts
- state:modified+ selects changed models plus everything downstream.
- --defer reuses prod-built refs for models that have not changed.
- --state points at the directory holding prod artifacts (manifest.json).
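Selection logic you reuse across jobs can also live in a selectors.yml file at the project root; a minimal sketch (the selector name is illustrative):

selectors:
  - name: nightly_marts
    description: "Everything tagged mart, for the nightly prod job"
    definition:
      method: tag
      value: mart

You can then run it with dbt build --selector nightly_marts.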
Jobs: CI vs. scheduled
- CI jobs: run on pull requests; quick, selective, block merges if tests fail.
- Staging jobs: run on merge to main; near-prod checks.
- Prod jobs: scheduled (e.g., hourly, nightly) for stable outputs.
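As one possible wiring, here is a minimal GitHub Actions sketch of the PR CI job; the adapter package, artifact path, and how profiles/credentials reach the runner are assumptions you would adapt:

name: dbt-pr-ci
on: pull_request

jobs:
  slim-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Adapter assumed to be BigQuery to match the profiles above
      - run: pip install dbt-bigquery
      - run: dbt deps
      # Assumes a previous step restored prod artifacts to ./prod_state
      # and that profiles.yml/credentials are provided to the runner
      - run: dbt build --target staging --select state:modified+ --defer --state ./prod_state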
Worked examples
1) Profiles with clean schemas per environment
Goal: isolate dev/staging/prod objects.
- Create targets in profiles.yml with distinct schemas/datasets.
- Set a default schema suffix in dbt_project.yml using {{ target.name }}.
- Run locally in dev: dbt build --target dev.
Result: dev models land in analytics_dev, staging in analytics_stg, prod in analytics.
2) Prod nightly with weekly full-refresh
Goal: fast nightly builds, deeper weekly rebuild for incremental models.
Commands
# Nightly (incremental where possible)
dbt build --target prod --select tag:mart tag:staging
# Weekly deep rebuild (Sunday)
dbt build --target prod --full-refresh --select tag:mart
Result: nightly runs are fast and cheap, while the weekly job resets drift and handles schema changes.
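With a plain cron scheduler (dbt Cloud or an orchestrator would express the same schedules differently), the two jobs might look like this; the project path is illustrative:

# Nightly at 02:00: incremental build of staging + marts
0 2 * * * cd /opt/analytics/dbt && dbt build --target prod --select tag:mart tag:staging
# Sundays at 04:00: weekly deep rebuild of marts
0 4 * * 0 cd /opt/analytics/dbt && dbt build --target prod --full-refresh --select tag:mart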
3) Pull request slim CI
Goal: validate only changed models and their dependents using prod state.
Commands
dbt deps
# Assume prod artifacts synced to ./prod_state
# Or downloaded from object storage/previous job artifacts
dbt build \
  --target staging \
  --select state:modified+ \
  --defer \
  --state ./prod_state
Result: quick CI that blocks breaking changes without rebuilding the whole project.
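Getting prod artifacts into CI usually means copying manifest.json from the latest prod run; for example, with GCS (the bucket path is hypothetical):

# Pull the manifest produced by the last prod job into ./prod_state
mkdir -p ./prod_state
gsutil cp gs://my-dbt-artifacts/prod/manifest.json ./prod_state/manifest.json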
Set it up in 7 steps
1) Name environments: dev, staging, prod.
2) Create targets in profiles.yml with separate schemas/datasets.
3) Adopt naming: use {{ target.name }} in schema configs to isolate objects.
4) Tag models (e.g., staging, mart) for selective runs.
5) Enable tests in CI/staging to block merges when they fail.
6) Store artifacts from prod jobs so CI can use --state.
7) Schedule: nightly prod, a weekly full-refresh, and PR CI on every push to an open pull request.
Exercises
Do these in a sample dbt project, keeping the same exercise IDs below so your progress aligns with the quick check. Your progress is saved if you are logged in; otherwise you can still complete everything for free.
Exercise ex1: Three targets + schema isolation
Create dev, staging, prod targets in profiles.yml and ensure each writes to a different schema/dataset. Run a model in each.
Exercise ex2: Slim CI and Prod schedule
Write the exact commands for a PR CI job (state-based selective build) and for a prod nightly job plus a weekly full-refresh job.
Self-check checklist
- Dev, staging, and prod targets exist and select different schemas
- dbt build --target dev created objects in the dev schema
- The CI command uses state:modified+ and --defer --state
- Prod nightly avoids --full-refresh
- The weekly job includes --full-refresh for incremental marts
Common mistakes and how to self-check
- One shared schema for all envs leads to collisions. Fix: use separate schemas or {{ target.name }} suffixes.
- No CI gating: merges break prod later. Fix: gate PRs with dbt build and tests.
- Rebuilding everything in CI: slow and costly. Fix: state:modified+ with --defer.
- Missing artifacts: CI cannot defer. Fix: persist the prod manifest.json and point --state at it.
- Forgetting tests in staging: bugs slip into prod. Fix: run dbt test in CI/staging.
- Using full-refresh nightly: excessive cost. Fix: reserve full-refresh for the weekly job or when needed.
- Credentials reuse: a dev writer in prod is risky. Fix: separate service accounts/roles per environment.
- Not tagging models: hard to select. Fix: add tags in dbt_project.yml.
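One way to keep credentials separated per environment is to read them from environment variables in profiles.yml via dbt's env_var function; a sketch for a prod service account (the variable name and keyfile path are illustrative):

prod:
  type: bigquery
  method: service-account
  keyfile: "{{ env_var('DBT_PROD_KEYFILE') }}"  # injected only on the prod runner
  project: my-proj
  dataset: analytics
  threads: 16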
Quick self-audit
- List your targets: do dev, staging, and prod exist and point to different schemas?
- Run dbt ls --select state:modified+ --state ./prod_state on a PR branch: does it return only the changed models and their dependents?
- Open your last prod run logs: are artifacts stored and accessible to CI?
- Check schedules: is full-refresh limited to the weekly job or specific maintenance windows?
Practical projects
- Spin up a toy project with 10 models; implement dev/staging/prod and promote a change via PR to prod.
- Add tags (staging, intermediate, mart) and create selectors to build only marts nightly.
- Store prod artifacts in a folder, run a local CI simulation using --defer, and confirm unchanged models are reused.
Mini challenge
You need to change the grain of a key mart (breaking change). Draft a safe rollout:
- Create a new version of the model (e.g., fct_orders_v2) and tag it mart and v2 (see the sketch after this list).
- Run it in dev, then in staging with CI gates (state:modified+).
- Backfill in prod during a scheduled window (the weekly full-refresh).
- Switch downstream models to reference v2, then deprecate v1.
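Tagging the new model so it can be selected on its own might look like this in dbt_project.yml (the project and model names are illustrative, assuming the file lives at models/marts/fct_orders_v2.sql):

models:
  my_project:
    marts:
      fct_orders_v2:
        +tags: ["mart", "v2"]

A v2-only check is then dbt build --select tag:v2 --target dev.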
Hint
Use parallel models (v1 and v2) so current dashboards keep working while you validate v2.
Who this is for
- Analytics Engineers and Data Analysts using dbt Core or dbt Cloud.
- Engineers responsible for stable BI and reporting pipelines.
Prerequisites
- Basic dbt knowledge (models, seeds, tests).
- Access to a SQL warehouse (e.g., BigQuery, Snowflake, Redshift, Postgres).
Learning path
- dbt basics: models, tests, documentation.
- Selectors and tags for targeted runs.
- Environments and deployments (this lesson).
- Performance and cost controls (incremental models, concurrency).
Next steps
- Create or update your profiles.yml to include all targets.
- Set up a PR CI job using slim CI.
- Schedule nightly and weekly prod jobs.
- Run the quick test below to confirm understanding.
Progress & test
The quick test is available to everyone for free. If you are logged in, your progress will be saved automatically.