Building Isolated Developer Environments with GitHub Actions Branch Conventions
TL;DR: By combining GitHub Actions branch patterns with Terraform workspaces,
I’ve created a system where developers get their own isolated AWS environment
simply by pushing to a {name}/dev branch. No tickets, no waiting, no conflicts.
The Problem: Environment Contention
If you’ve worked on a team deploying to shared cloud infrastructure, you’ve felt this pain:
- “Can I use the dev environment today? I need to test a breaking change.”
- “Who deployed to sandbox? My feature is broken now.”
- “I can’t test my Lambda changes because someone else is mid-deployment.”
Shared environments create bottlenecks: developers queue up or, worse, step on each other’s changes. The traditional solution - spin up a new environment manually - is slow, error-prone, and often forgotten (hello, cloud bill).
The Fix: Let Branch Names Drive Environments
Our current client already uses a pattern where specific Git branches are automatically deployed to specific environments. We wanted to build on that with a simple idea: if a developer pushes to a specially named branch, they get an environment all of their own. Here’s how I built it.
Branch Naming as Configuration
Our CI/CD pipeline inspects the branch name and derives the deployment target:
# .github/workflows/ci-cd.yml
on:
  push:
    branches:
      - main         # Production
      - test         # Staging
      - dev          # Shared development
      - "chris/dev"  # Chris's isolated environment
      - "john/dev"   # John's isolated environment
      - "smith/dev"  # Smith's isolated environment
The interesting bit is in the setup job:
jobs:
  setup:
    name: Determine environment
    runs-on: ubuntu-latest
    outputs:
      environment: ${{ steps.set-env.outputs.environment }}
      aws_account_id: ${{ steps.set-env.outputs.aws_account_id }}
      tf_working_dir: ${{ steps.set-env.outputs.tf_working_dir }}
      namespace: ${{ steps.set-env.outputs.namespace }}
    steps:
      - name: Set environment based on branch
        id: set-env
        run: |
          BRANCH="${GITHUB_HEAD_REF:-${GITHUB_REF_NAME}}"
          echo "Branch: $BRANCH"
          if [[ "$BRANCH" == "main" ]]; then
            echo "environment=primary-deploy" >> $GITHUB_OUTPUT
            echo "aws_account_id=${{ env.AWS_PRIMARY_ID }}" >> $GITHUB_OUTPUT
            echo "tf_working_dir=terraform/primary" >> $GITHUB_OUTPUT
            echo "namespace=dvai" >> $GITHUB_OUTPUT
          elif [[ "$BRANCH" == "test" ]]; then
            echo "environment=secondary-deploy" >> $GITHUB_OUTPUT
            echo "aws_account_id=${{ env.AWS_SECONDARY_ID }}" >> $GITHUB_OUTPUT
            echo "tf_working_dir=terraform/secondary" >> $GITHUB_OUTPUT
            echo "namespace=dvai" >> $GITHUB_OUTPUT
          else
            # Developer branches go to sandbox account
            echo "environment=sandbox-deploy" >> $GITHUB_OUTPUT
            echo "aws_account_id=${{ env.AWS_SANDBOX_ID }}" >> $GITHUB_OUTPUT
            # Extract developer name for isolated namespace
            if [[ "$BRANCH" =~ ^(chris|john|smith)/.+ ]]; then
              NAMESPACE="${BASH_REMATCH[1]}"
              echo "Using developer namespace: $NAMESPACE"
              echo "tf_working_dir=terraform/$NAMESPACE" >> $GITHUB_OUTPUT
            else
              NAMESPACE="dvai"
              echo "Using default namespace: $NAMESPACE"
              echo "tf_working_dir=terraform/sandbox" >> $GITHUB_OUTPUT
            fi
            echo "namespace=$NAMESPACE" >> $GITHUB_OUTPUT
          fi
Key decisions here:
- Allowlist approach: We explicitly list known developers (chris|john|smith) rather than accepting any {name}/ pattern. This prevents accidents like fix/bug creating a namespace called “fix”.
- Separate Terraform directories: Each developer gets their own Terraform state via tf_working_dir=terraform/$NAMESPACE. This means complete isolation. Chris can destroy and recreate his environment without affecting John.
- Shared sandbox account: All developer environments deploy to the same AWS account but with namespaced resources. This keeps costs predictable while maintaining isolation.
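Downstream jobs pick these values up through the setup job’s outputs. As a rough sketch of what a deploy job might look like when it consumes them (the job name, action versions, IAM role name, and the mapping of the environment output onto a GitHub Environment are illustrative, not the exact pipeline):

  deploy:
    name: Terraform deploy
    needs: setup
    runs-on: ubuntu-latest
    environment: ${{ needs.setup.outputs.environment }} # assuming these map to GitHub Environments
    defaults:
      run:
        # Each environment's Terraform lives in its own directory
        working-directory: ${{ needs.setup.outputs.tf_working_dir }}
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      # Assume a role in the account the setup job selected (role name is a placeholder)
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ needs.setup.outputs.aws_account_id }}:role/deploy
          aws-region: eu-west-1
      - name: Terraform apply
        run: |
          terraform init
          terraform apply -auto-approve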
Namespaced Resources
The namespace output flows through to Terraform and resource naming:
# terraform/chris/context.auto.tfvars
namespace = "chris"
stage = "sandbox"
environment = "dev"
This creates resources like:
- S3 bucket: chris-sandbox-dev-videos
- Lambda: chris-sandbox-dev-process-video
- API Gateway: chris-sandbox-dev-api
Compare to the shared environment:
- S3 bucket: dvai-sandbox-dev-videos
- Lambda: dvai-sandbox-dev-process-video
No collisions. No conflicts.
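Inside the shared module, the naming convention is just string composition. A minimal sketch of how the prefix might be derived (the variable names match the tfvars above; the local and the bucket resource are illustrative rather than the module’s exact code):

# Hypothetical naming local inside the shared module
locals {
  name_prefix = "${var.namespace}-${var.stage}-${var.environment}"
}

resource "aws_s3_bucket" "videos" {
  # Yields chris-sandbox-dev-videos when namespace = "chris"
  bucket = "${local.name_prefix}-videos"
}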
Docker Images with Content-Based Tagging
Developer environments share an ECR registry but use namespaced repositories. On the CI side of the pipeline I run linting and unit tests and build both Docker- and Zip-based Lambdas, but images are only built and pushed if the code has changed.
- name: Build, tag, and push Docker images to ECR
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    NAMESPACE: ${{ needs.setup.outputs.namespace }}
  run: |
    # Generate content hash from source files
    CONTENT_HASH=$(find . -type f \( -name "*.py" -o -name "Dockerfile" \) \
      -exec sha256sum {} \; | sort | sha256sum | cut -c1-16)
    ECR_REPOSITORY="${NAMESPACE}-generate-labelled-video"
    # Skip build if this exact version exists
    if aws ecr describe-images --repository-name $ECR_REPOSITORY \
        --image-ids imageTag=$CONTENT_HASH 2>/dev/null; then
      echo "Image already exists, skipping build"
    else
      docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$CONTENT_HASH .
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:$CONTENT_HASH
    fi
Benefits:
- Content-addressed: Same code = same hash = skip the build
- Namespaced repos: chris-generate-labelled-video vs dvai-generate-labelled-video
- No tag conflicts: Multiple developers can push simultaneously
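The content hash then has to reach Terraform so each Lambda points at the right image - for example by passing it to terraform apply as a variable. A minimal sketch of the Terraform side, assuming an image_tag variable, an ecr_registry variable, and an execution role defined elsewhere (none of these names are the module’s exact interface):

variable "image_tag" {
  description = "Content hash tag produced by the CI build step"
  type        = string
}

resource "aws_lambda_function" "generate_labelled_video" {
  function_name = "${var.namespace}-${var.stage}-${var.environment}-generate-labelled-video"
  package_type  = "Image"
  image_uri     = "${var.ecr_registry}/${var.namespace}-generate-labelled-video:${var.image_tag}"
  role          = aws_iam_role.lambda_exec.arn # execution role defined elsewhere in the module
}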
Why bother with all this?
Faster feedback loops: Before, it was “I’ll test this in dev tomorrow when it’s free.” Now? Push to chris/dev, get a deployment in 5 minutes.
Fearless experimentation: Need to test a breaking database migration? Do it in your namespace. Worst case, you destroy and recreate your environment - no impact on teammates.
Realistic integration testing: Unlike local development with mocked services, these are real AWS resources. You’re testing actual IAM permissions, real S3 event triggers, genuine API Gateway configurations.
PR previews made simple: When Chris opens a PR from chris/dev to main, reviewers can actually interact with his deployed changes. The PR description can link to https://chris.sandbox.example.com for live testing.
Onboarding acceleration: New developer joins? Add their name to the branch pattern, create their Terraform directory, and they’re deploying on day one.
The Directory Structure
terraform/
├── module-video-ai/        # Shared infrastructure module
│   ├── lambda.tf
│   ├── buckets.tf
│   └── ...
├── primary/                # Production (main branch)
│   ├── backend.tf
│   ├── main.tf
│   └── context.auto.tfvars
├── secondary/              # Staging (test branch)
├── sandbox/                # Shared dev (dev branch)
├── chris/                  # Chris's environment (chris/* branches)
├── john/                   # John's environment (john/* branches)
└── smith/                  # Smith's environment
Each developer directory is minimal - mostly just variable overrides:
# terraform/chris/context.auto.tfvars
region = "eu-west-1"
namespace = "chris"
stage = "sandbox"
environment = "dev"
The heavy lifting lives in module-video-ai/, shared by all environments.
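For illustration, a developer directory’s main.tf can be little more than a call into the shared module - something along these lines (the module’s actual input list will differ, and the variable declarations for these inputs live alongside this file):

# terraform/chris/main.tf (illustrative)
module "video_ai" {
  source = "../module-video-ai"

  # Values come from context.auto.tfvars
  region      = var.region
  namespace   = var.namespace
  stage       = var.stage
  environment = var.environment
}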
Gotchas and Lessons Learned
1. Terraform State Isolation is Critical
Each environment needs its own state file. We use separate S3 backend keys:
# terraform/chris/backend.tf
terraform {
  backend "s3" {
    bucket = "dvai-terraform-state"
    key    = "chris/terraform.tfstate" # Unique per developer
    region = "eu-west-1"
  }
}
2. Allowlist Branch Patterns
Our first iteration used ^([a-zA-Z]+)/.+ to extract any prefix. Then someone
pushed feature/login and created a “feature” namespace. Now we explicitly list
developers.
3. Resource Quotas
AWS accounts have service quotas. With multiple developers spinning up Lambda functions, API Gateways, and CloudFront distributions, you’ll hit limits faster than expected. Request increases proactively.
4. Cost Visibility
The environments we’re building are made up mostly of pay-per-use managed services, so there’s little or no cost to spin up a new one. If we were building something like EC2 instances, with ongoing hourly costs, this approach might not be appropriate. In any case, tag everything with the namespace so you can track costs per developer:
locals {
  tags = {
    Namespace   = var.namespace
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}
Then set up AWS Cost Explorer to group by the Namespace tag.
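To make sure those tags actually land on every resource, one option is the AWS provider’s default_tags block - a sketch, assuming the provider is configured alongside the locals above:

provider "aws" {
  region = var.region

  # Applied automatically to every taggable resource this provider creates
  default_tags {
    tags = local.tags
  }
}

Remember the Namespace tag also needs activating as a cost allocation tag in the billing console before Cost Explorer can group by it.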
5. Cleanup Automation
Developer environments accumulate. Consider a scheduled workflow that identifies stale environments (no commits in 30 days?) and sends Slack reminders - or auto-destroys with warning.
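A rough sketch of what that scheduled check could look like (the schedule, branch list, and warning step are placeholders for whatever notification or destroy step you prefer):

# .github/workflows/stale-environments.yml (illustrative)
name: Flag stale developer environments
on:
  schedule:
    - cron: "0 8 * * 1" # Monday mornings
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history needed to check commit dates
      - name: List developer branches with no commits in 30 days
        run: |
          for BRANCH in chris/dev john/dev smith/dev; do
            LAST_COMMIT=$(git log -1 --format=%ct "origin/$BRANCH" 2>/dev/null) || continue
            AGE_DAYS=$(( ( $(date +%s) - LAST_COMMIT ) / 86400 ))
            if [ "$AGE_DAYS" -gt 30 ]; then
              echo "::warning::$BRANCH has had no commits for $AGE_DAYS days - consider destroying its environment"
            fi
          done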
Extending the Pattern
This approach scales beyond our use case:
- Feature branch environments: Match feature/* patterns for PR preview environments that auto-destroy on merge
- Multi-region testing: Developer environments in eu-west-1, production in us-east-1 - same code, different contexts
- Compliance environments: Dedicated namespace for security testing with enhanced logging
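As a sketch of the first idea, the allowlist check in the setup job could gain an extra case that derives a short-lived namespace from the branch itself (the sanitisation and pr- prefix here are illustrative):

# Hypothetical extra case in the set-env step
if [[ "$BRANCH" =~ ^feature/(.+)$ ]]; then
  # e.g. feature/login-page -> namespace pr-login-page (lowercased, sanitised, truncated)
  NAMESPACE="pr-$(echo -n "${BASH_REMATCH[1]}" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9' '-' | cut -c1-20)"
  echo "namespace=$NAMESPACE" >> $GITHUB_OUTPUT
fi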
Wrapping up
Developers shouldn’t think about environments - they should think about features. By encoding environment configuration into branch names, we’ve made isolated deployments a side effect of normal git workflow.
Push a branch, get an environment. Done.
This pattern is part of our Video AI Analysis platform, which uses AWS Rekognition, Transcribe, and Bedrock to process videos and generate compliance reports. The full CI/CD pipeline handles Lambda deployments, Docker builds, and Terraform automation across four AWS accounts.
As always, you can add my RSS feed to your reader of choice, and if you made it this far, thanks for reading!
Chris