GitHub Actions has become my go-to CI/CD platform for AWS deployments. After setting up pipelines for over 30 projects, I've distilled the patterns that work best in production. This guide covers everything from secure authentication to multi-environment deployment strategies.
## Why GitHub Actions for AWS?
Before diving in, here's why I recommend GitHub Actions over alternatives like CodePipeline or Jenkins:
- Native GitHub integration: PR checks, branch protection, and code review workflows are seamless
- OIDC support: No long-lived AWS credentials needed
- Marketplace actions: Thousands of pre-built actions for common tasks
- Matrix builds: Parallel testing across multiple configurations
- Cost: Free for public repos, generous free tier for private repos
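The matrix-build point deserves a quick illustration. A minimal sketch of parallel testing across runtimes (the Node versions and `npm test` command are placeholders for your own test setup):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]  # one parallel job per version
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm test
```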
## Setting Up OIDC Authentication
Stop storing AWS access keys as GitHub secrets. OIDC (OpenID Connect) lets GitHub Actions assume an IAM role directly, with short-lived credentials that rotate automatically.
```hcl
# Terraform to set up the OIDC provider and role
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

resource "aws_iam_role" "github_actions" {
  name = "github-actions-deploy"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.github.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:myorg/myrepo:*"
        }
      }
    }]
  })
}
```
Then in your workflow:
```yaml
permissions:
  id-token: write
  contents: read

steps:
  - name: Configure AWS Credentials
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
      aws-region: us-east-1
```
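One hardening note: the `repo:myorg/myrepo:*` wildcard in the trust policy lets any branch, tag, or environment in the repo assume the role. If only main-branch deploys should have AWS access, you can tighten the `sub` condition instead (a sketch, using the same example repo name):

```hcl
Condition = {
  StringEquals = {
    "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
    # Only workflows running on the main branch may assume the role
    "token.actions.githubusercontent.com:sub" = "repo:myorg/myrepo:ref:refs/heads/main"
  }
}
```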
## Building and Pushing Docker Images to ECR
Here's my production-tested workflow for building Docker images with layer caching:
```yaml
name: Build and Push to ECR

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  ECR_REPOSITORY: myapp
  AWS_REGION: us-east-1

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Pull the previous image first so --cache-from has layers to reuse;
          # "|| true" keeps the first-ever build from failing when no image exists
          docker pull $ECR_REGISTRY/$ECR_REPOSITORY:latest || true
          docker build \
            --cache-from $ECR_REGISTRY/$ECR_REPOSITORY:latest \
            -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG \
            -t $ECR_REGISTRY/$ECR_REPOSITORY:latest .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest
```
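An alternative to `--cache-from` against the registry is BuildKit with the GitHub Actions cache backend via `docker/build-push-action`. A sketch, assuming the same `login-ecr` step id and a `myapp` repository as above:

```yaml
- uses: docker/setup-buildx-action@v3

- name: Build and push with BuildKit cache
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: |
      ${{ steps.login-ecr.outputs.registry }}/myapp:${{ github.sha }}
      ${{ steps.login-ecr.outputs.registry }}/myapp:latest
    # Store/restore layer cache in the GitHub Actions cache service
    cache-from: type=gha
    cache-to: type=gha,mode=max
```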
## Deploying to ECS
After pushing the image, deploy to ECS by updating the task definition:
```yaml
- name: Download task definition
  run: |
    aws ecs describe-task-definition \
      --task-definition myapp \
      --query taskDefinition > task-definition.json

- name: Update task definition with new image
  id: task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: myapp
    image: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ github.sha }}

- name: Deploy to ECS
  uses: aws-actions/amazon-ecs-deploy-task-definition@v2
  with:
    task-definition: ${{ steps.task-def.outputs.task-definition }}
    service: myapp-service
    cluster: production
    wait-for-service-stability: true
    wait-for-minutes: 10
```
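`wait-for-service-stability` will surface a failed deployment, but rolling back is still manual unless the service has ECS's deployment circuit breaker enabled. A one-off CLI sketch to turn it on, using the same cluster and service names as the example above:

```bash
# Auto-roll back to the last steady-state deployment when new tasks fail to stabilize
aws ecs update-service \
  --cluster production \
  --service myapp-service \
  --deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true}"
```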
## Terraform Plan/Apply Workflows
This is the workflow I use for infrastructure changes. Plan runs on PRs, apply runs on merge to main:
```yaml
name: Terraform

on:
  pull_request:
    paths: ['infrastructure/**']
  push:
    branches: [main]
    paths: ['infrastructure/**']

jobs:
  plan:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.7.0

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      - name: Terraform Init
        working-directory: infrastructure
        run: terraform init

      - name: Terraform Plan
        id: plan
        working-directory: infrastructure
        run: terraform plan -no-color -out=tfplan
        continue-on-error: true

      - name: Comment Plan on PR
        uses: actions/github-script@v7
        with:
          script: |
            const output = `#### Terraform Plan
            \`\`\`
            ${{ steps.plan.outputs.stdout }}
            \`\`\`
            *Pushed by: @${{ github.actor }}*`;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })

      # continue-on-error lets the comment post even when the plan fails,
      # so fail the job explicitly afterwards
      - name: Fail on Plan Error
        if: steps.plan.outcome == 'failure'
        run: exit 1

  apply:
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      - name: Terraform Init & Apply
        working-directory: infrastructure
        run: |
          terraform init
          terraform apply -auto-approve
```
## Multi-Environment Deployment
For deploying across dev, staging, and production, I use environment-based workflows with manual approval gates:
```yaml
jobs:
  deploy-staging:
    environment: staging
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # needed so ./deploy.sh exists in the workspace
      - name: Deploy to Staging
        run: ./deploy.sh staging

  deploy-production:
    needs: deploy-staging
    environment:
      name: production
      url: https://myapp.com
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to Production
        run: ./deploy.sh production
```
Configure environment protection rules in GitHub Settings to require manual approval before production deployments.
## Reusable Workflows
Don't repeat yourself across repositories. Create reusable workflows:
```yaml
# .github/workflows/reusable-deploy.yml
name: Reusable ECS Deploy

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      cluster:
        required: true
        type: string
    secrets:
      AWS_ROLE_ARN:
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}
    steps:
      # ... deployment steps
```
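A consuming repository then calls it with `uses:` at the job level. A sketch, where the `myorg/shared-workflows` path and cluster name are placeholders for wherever you host the reusable workflow:

```yaml
# .github/workflows/deploy.yml in the consuming repo
jobs:
  staging:
    uses: myorg/shared-workflows/.github/workflows/reusable-deploy.yml@main
    with:
      environment: staging
      cluster: staging-cluster
    secrets:
      AWS_ROLE_ARN: ${{ secrets.AWS_ROLE_ARN }}
```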
## Slack Notifications
Every pipeline should notify the team on success and failure:
```yaml
- name: Notify Slack
  if: always()
  uses: 8398a7/action-slack@v3
  with:
    status: ${{ job.status }}
    fields: repo,message,commit,author,action,eventName,ref,workflow
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
```
## Key Takeaways
- Always use OIDC instead of long-lived access keys
- Plan on PRs, apply on merge for infrastructure changes
- Use environments with protection rules for production gates
- Cache aggressively: Docker layers, dependencies, Terraform providers
- Notify your team on every deployment success and failure
- Reuse workflows across repositories to maintain consistency
These patterns have served me well across dozens of production deployments. Start simple, iterate, and always prioritize security over convenience.