
dbt environments

dbt makes it easy to maintain separate development, CI, and production environments through the use of targets within a profile. A typical profile will have a dev target set as the default so that, while making changes, your objects are built in your development environment without affecting production queries made by end users. Once you are confident in your changes, you can deploy the code to production by running your dbt project with a prod target.

Running dbt in production

Learn more about different approaches to running dbt in production in this guide.

Separation strategies

Targets give you flexibility in how to separate your environments. The three main approaches are:

| Approach | How | When to use |
| --- | --- | --- |
| Separate schemas (recommended) | Each environment writes to a different schema in the same database. | Works for most teams; lowest cost and easiest to set up. |
| Separate databases | Each environment targets a different database. | Useful when schema-level access controls are insufficient. |
| Separate accounts or clusters | Each environment connects to a completely different warehouse account or cluster. | Needed for strict network or compliance isolation between environments. |

We recommend separate schemas within one database for most teams. It is the easiest to set up and the most cost-effective solution on modern cloud data warehouses.

Setting up schemas per developer

When multiple developers use dbt, each person should write to their own development schema so they don't overwrite each other's work. A pattern that works well is naming your dev target schema dbt_<username>:

```yaml
# ~/.dbt/profiles.yml
my_project:
  target: dev
  outputs:
    dev:
      type: postgres  # replace with your adapter
      host: localhost
      user: "{{ env_var('DBT_DEV_USER') }}"
      password: "{{ env_var('DBT_DEV_PASSWORD') }}"
      port: 5432
      dbname: analytics
      schema: "dbt_{{ env_var('DBT_USERNAME') }}"  # e.g. dbt_alice, dbt_bob
      threads: 4

    prod:
      type: postgres
      host: prod-warehouse.example.com
      user: "{{ env_var('DBT_PROD_USER') }}"
      password: "{{ env_var('DBT_PROD_PASSWORD') }}"
      port: 5432
      dbname: analytics
      schema: analytics
      threads: 8
```
Credentials in profiles.yml

Use env_var() for sensitive values like passwords and usernames — keep them out of version control. See using environment variables in profiles.yml below.

There is no need to create your target schema ahead of time — dbt checks whether it exists at run time and creates it if it doesn't.

Setting your local environment variables

Each developer needs to export the variables that their profiles.yml references. Add these to your shell profile (for example, ~/.zshrc, ~/.bashrc, or ~/.bash_profile) so they are set automatically:

```shell
# ~/.zshrc or ~/.bashrc
export DBT_USERNAME="alice"
export DBT_DEV_USER="alice"
export DBT_DEV_PASSWORD="my_dev_password"
```

After editing, reload your shell:

```shell
source ~/.zshrc
```

Verify the variables are set:

```shell
echo $DBT_USERNAME
# alice
```

Using environment variables in profiles.yml

Any field in profiles.yml can reference an environment variable using the {{ env_var('VAR_NAME') }} Jinja function. You can also supply a default value as the second argument to avoid compilation errors in environments where a variable isn't set:

```yaml
schema: "dbt_{{ env_var('DBT_USERNAME', 'default') }}"
```

This approach follows the twelve-factor app methodology — credentials and environment-specific values live in the environment, not in code.
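In plain Python terms, `env_var()` with a default behaves like an environment lookup with a fallback. A minimal sketch (`render_schema` is a hypothetical name for illustration, not a dbt function):

```python
import os

def render_schema() -> str:
    # Mirrors: schema: "dbt_{{ env_var('DBT_USERNAME', 'default') }}"
    # env_var() reads the variable and falls back to the second argument
    # when the variable is unset.
    return "dbt_" + os.environ.get("DBT_USERNAME", "default")
```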

For more, see the env_var reference.

Setting up a CI environment

In a CI pipeline (such as GitHub Actions, GitLab CI, or CircleCI), set environment variables at the pipeline level so that dbt can connect to your warehouse and build into an isolated schema.

A recommended CI schema naming pattern is dbt_cloud_pr_<PR_NUMBER> or simply ci — this prevents CI runs from writing over production or development schemas.

```yaml
# .github/workflows/dbt-ci.yml
name: dbt CI

on:
  pull_request:

jobs:
  dbt-check:
    runs-on: ubuntu-latest
    env:
      DBT_USERNAME: ci
      DBT_DEV_USER: ${{ secrets.DBT_PROD_USER }}
      DBT_DEV_PASSWORD: ${{ secrets.DBT_PROD_PASSWORD }}

    steps:
      - uses: actions/checkout@v3

      - name: Install dbt
        run: pip install dbt-postgres  # replace with your adapter

      - name: Run dbt
        run: dbt build --target dev --profiles-dir .
```

Store secrets (passwords, tokens) in your CI platform's secret store — GitHub Actions Secrets, GitLab CI/CD Variables, etc. — rather than in your repository.

Separating environments at the database or account level

Sometimes schema-level separation is insufficient. For example:

  • Your warehouse enforces different network policies per account.
  • Compliance requirements prevent dev and prod data from sharing an account.
  • You want to use a lower-spec warehouse tier for development.

To target a different database, update the dbname (Postgres/Redshift) or database (Snowflake/BigQuery) field per target in your profile:

```yaml
my_project:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
      database: analytics_dev  # separate database for dev
      schema: "dbt_{{ env_var('DBT_USERNAME') }}"
      warehouse: dev_warehouse
      ...

    prod:
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
      database: analytics  # production database
      schema: analytics
      warehouse: prod_warehouse
      ...
```

To target a different account or cluster entirely, change the account (Snowflake), host (Postgres/Redshift), or project (BigQuery) value per target:

```yaml
my_project:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_DEV_ACCOUNT') }}"  # separate dev account
      ...

    prod:
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_PROD_ACCOUNT') }}"  # production account
      ...
```

How dbt names schemas across environments

By default, dbt builds all models into the schema defined in target.schema. When you use custom schemas (for example, +schema: marketing), dbt appends the custom schema to the target schema: <target_schema>_<custom_schema>.

This means:

| Environment | target.schema | Custom schema config | Resulting schema |
| --- | --- | --- | --- |
| Dev (alice) | dbt_alice | none | dbt_alice |
| Dev (alice) | dbt_alice | marketing | dbt_alice_marketing |
| Prod | analytics | none | analytics |
| Prod | analytics | marketing | analytics_marketing |

The target schema prefix ensures that no two environments write to the same location, even when custom schemas are in use.

For advanced patterns — such as using the raw custom schema in production while using only the target schema in dev and CI — refer to Custom schemas.
