Define staging and production deployment pipelines #104
Deployment Pipeline Implementation ✅
This PR implements staging and production deployment pipelines with automated workflows and comprehensive documentation.
Acceptance Criteria Progress
Containerize backend and web services with twelve-factor configuration
Create staging environment pipeline with migrations, fixtures, and smoke checks
Implement production deployment workflow with approval gates and rollback
Capture deployment playbook and environment variables
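The smoke checks in the staging criterion reduce to "retry a health endpoint until it responds". A minimal sketch of that retry helper, assuming illustrative URLs and ports (this is not the repo's actual `scripts/smoke-test.sh`):

```shell
# wait_for RETRIES DELAY CMD...: retry CMD until it succeeds, sleeping
# DELAY seconds between attempts; return 1 after RETRIES failed attempts.
wait_for() {
  retries="$1"
  delay="$2"
  shift 2
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example usage (endpoints are placeholders, not the repo's actual targets):
# wait_for 10 3 curl -fsS http://localhost:3000/api/health
# wait_for 10 3 curl -fsS http://localhost:3001/health
```

Keeping the retry logic in one POSIX-sh function lets the same script smoke-test both the API and web containers after a deploy.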
Recent Updates (PR Review Fixes)
All review comments from the Copilot review have been addressed.
Key Features Implemented
Testing & Validation
✅ All validation checks passed.
Related: #10 (Ops bucket - deployment paths and observability)
Pull Request Overview
This PR implements a complete Docker-based deployment pipeline for the Internet-ID application, supporting staging and production environments. The implementation follows the twelve-factor app methodology and includes comprehensive documentation for operational procedures.
Reviewed Changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 8 comments.
Files changed:
- `web/Dockerfile`
- `Dockerfile.api`
- `web/next.config.mjs`
- `.github/workflows/deploy-staging.yml`
- `.github/workflows/deploy-production.yml`
- `docker-compose.staging.yml`
- `docker-compose.production.yml`
- `scripts/smoke-test.sh`
- `docs/ops/*.md`
- `package.json`
`.github/workflows/deploy-production.yml` (`@@ -0,0 +1,320 @@`, `name: Deploy to Production`):
- Using `grep -v 'WARNING'` to filter Docker login output could suppress important warnings and still leak sensitive information if the grep pattern doesn't match. Use `docker login ... > /dev/null 2>&1` or redirect to a log file instead of filtering with grep.
- The `--no-recreate` flag prevents recreation of containers, but in a blue-green deployment you want to create new containers with the new image. This flag will prevent the deployment from using the newly pulled images. Remove `--no-recreate` or use `--force-recreate` instead.

`.github/workflows/deploy-staging.yml` (`@@ -0,0 +1,251 @@`, `name: Deploy to Staging`):
- Using `grep -v 'WARNING'` to filter Docker login output could suppress important warnings and still leak sensitive information if the grep pattern doesn't match. Use `docker login ... > /dev/null 2>&1` or redirect to a log file instead of filtering with grep.
- The `--build` flag will rebuild images locally on the server, but the images were already built and pushed in the build job. This defeats the purpose of the CI build step and could result in different images being deployed. Remove the `--build` flag to use the pre-built images from the registry.

`Dockerfile.api` (`@@ -1,35 +1,77 @@`, `# Dockerfile for Internet-ID API Server`):
- The deps stage installs all dependencies (including devDependencies), but the comment on lines 58-59 says it is copying "production dependencies from deps stage". The deps stage should use `npm ci --legacy-peer-deps --only=production` to match the intent. Currently, the runner stage will include devDependencies, which inflates the production image size unnecessarily.
- Copying node_modules from the deps stage and then running `npx prisma generate` on line 73 will modify the copied node_modules. This is fragile because the Prisma client should already be in node_modules from the deps stage. Consider either (1) generating the Prisma client in the deps stage before copying, or (2) accepting that devDependencies are needed in production for Prisma generation.

`docker-compose.production.yml` (`@@ -0,0 +1,194 @@`, `version: "3.9"`):
- Installing bash and aws-cli via `apk add` in the command runs on every container restart and is inefficient. Consider creating a custom Dockerfile for the backup service that includes these dependencies pre-installed.

`docker-compose.staging.yml` (`@@ -0,0 +1,139 @@`, `version: "3.9"`):
- Installing bash via `apk add` in the command runs on every container restart and is inefficient. Consider either (1) creating a custom Dockerfile for the backup service that includes bash, or (2) rewriting the backup scripts to use sh instead of bash, since Alpine uses busybox sh by default.

@copilot apply changes based on the comments in this thread
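Taken together, the three workflow comments above (redirect login output, drop `--build`, drop `--no-recreate`) point toward deploy steps along these lines. This is a hedged sketch: the registry, secret names, and compose file are placeholder assumptions, not the repo's exact workflow.

```yaml
# Illustrative GitHub Actions step fragment, not the actual deploy-production.yml.
- name: Log in to registry
  # Redirect all output instead of grep-filtering, so nothing sensitive leaks
  # and no warning is silently dropped.
  run: echo "${{ secrets.GHCR_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin > /dev/null 2>&1

- name: Pull pre-built images
  # No --build: the server uses the images built and pushed by the CI build job.
  run: docker compose -f docker-compose.production.yml pull

- name: Recreate containers with new images
  # --force-recreate (rather than --no-recreate) ensures the freshly pulled
  # images actually replace the running containers.
  run: docker compose -f docker-compose.production.yml up -d --force-recreate
```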
- `Dockerfile.api`: Fixed in 857b67d. Now copying node_modules from the builder stage, which includes the Prisma client generated during the build process, eliminating the need to regenerate it in the runner stage.
- `.github/workflows/deploy-staging.yml`: Fixed in 857b67d. Changed to use `> /dev/null 2>&1` for proper output redirection.
- `.github/workflows/deploy-staging.yml`: Fixed in 857b67d. Removed the `--build` flag to use pre-built images from the registry.
- `.github/workflows/deploy-production.yml`: Fixed in 857b67d. Changed to use `> /dev/null 2>&1` for proper output redirection.
- `Dockerfile.api`: Fixed in 857b67d. The deps stage now uses `--only=production`, and the Prisma client is copied from the builder stage, which has the complete node_modules with the generated client.
- `docker-compose.staging.yml`: Fixed in 857b67d. Created a custom `Dockerfile.backup` with bash pre-installed, and updated docker-compose.staging.yml to use this image.
- `docker-compose.production.yml`: Fixed in 857b67d. Created a custom `Dockerfile.backup` with bash and aws-cli pre-installed, and updated docker-compose.production.yml to use this image.
- `.github/workflows/deploy-production.yml`: Fixed in 857b67d. Removed the `--no-recreate` flag to allow proper container recreation with new images during blue-green deployment.