Add production deployment infrastructure with Kubernetes, Terraform, and multi-strategy CI/CD #145

Merged
Copilot merged 6 commits from copilot/design-production-deployment-strategy into main 2025-11-02 17:27:49 -06:00
Copilot commented 2025-11-02 11:14:17 -06:00 (Migrated from github.com)

Production Deployment Strategy & Infrastructure - Kubernetes & IaC

Addressed all PR review feedback with the following improvements:

Changes Made

  1. VPC Flow Logs IAM Policy - Scoped permissions to specific CloudWatch log group instead of wildcard
  2. RDS Final Snapshot - Removed timestamp() to prevent Terraform drift detection on every plan
  3. Canary Deployment - Added warning about replica-based traffic splitting limitations and recommendation for service mesh
  4. Smoke Tests - Fixed to use ingress URL or port-forward instead of expecting LoadBalancer on ClusterIP service
  5. Kubernetes Auth API - Updated from deprecated v1beta1 to v1 for EKS authentication
  6. Database Migrations - Moved from init container to separate Job to prevent race conditions

Security Improvements

  • Least-privilege IAM policies for VPC Flow Logs
  • Migration job prevents concurrent executions (completions: 1, parallelism: 1)
  • No race conditions or deadlocks during deployments

Infrastructure Improvements

  • Terraform plans no longer show drift from timestamp changes
  • Smoke tests work correctly with ClusterIP services
  • Future-proof Kubernetes API versions (v1 instead of v1beta1)
  • Better documentation for canary deployment limitations

Files Modified

  • terraform/modules/vpc/main.tf - Scoped IAM policy resource
  • terraform/modules/rds/main.tf - Removed timestamp from final snapshot
  • terraform/main.tf - Updated auth API version to v1
  • scripts/deployment/canary-deploy.sh - Added warning about traffic splitting
  • .github/workflows/deploy-production.yml - Fixed smoke tests, improved migration
  • k8s/base/backend-deployment.yaml - Removed init container
  • k8s/base/migration-job.yaml - New separate migration job (created)
  • k8s/base/kustomization.yaml - Added migration job to resources
  • k8s/README.md - Updated documentation for migrations
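
The new `k8s/base/migration-job.yaml` could look roughly like the sketch below. The image, migration command, and secret names are assumptions inferred from the deployment manifest later in this PR, not the actual file contents; the important parts are `completions: 1` and `parallelism: 1`, which guarantee a single migration pod runs at a time:

```yaml
# Sketch of k8s/base/migration-job.yaml (image, command, secret names assumed)
apiVersion: batch/v1
kind: Job
metadata:
  name: spywatcher-migrate
spec:
  completions: 1      # exactly one successful run required
  parallelism: 1      # never more than one pod at a time
  backoffLimit: 3     # retry a failed migration up to three times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: ghcr.io/onnwee/spywatcher-backend:latest
        command: ["npx", "prisma", "migrate", "deploy"]  # assumed migration command
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: spywatcher-secrets
              key: database-url
```

Because a Job with these settings runs one pod to completion before the Deployment rolls out, replicas never race each other on schema changes.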
Original prompt

This section details the original issue you should resolve

<issue_title>Production Deployment Strategy & Infrastructure - Kubernetes & IaC</issue_title>
<issue_description>## 🎯 Objective

Design and implement a production deployment strategy with CI/CD, infrastructure as code, and blue-green deployment capabilities.

📋 Requirements

Infrastructure as Code

  • Terraform/CloudFormation

    • VPC and networking setup
    • Compute resources (EC2/ECS/Kubernetes)
    • Database (RDS PostgreSQL)
    • Redis (ElastiCache)
    • Load balancers
    • Auto-scaling groups
    • DNS and certificates
  • Container Orchestration

    • Kubernetes cluster (EKS/GKE/AKS)
    • Docker Compose for simple deployments
    • Helm charts for Kubernetes
    • Service mesh (optional - Istio/Linkerd)

Deployment Environments

  • Environment Setup

    • Development (local Docker)
    • Staging (cloud mirror of production)
    • Production (highly available)
    • Separate databases per environment
    • Environment-specific configs
  • Cloud Provider Setup

    • AWS/GCP/Azure account
    • VPC and security groups
    • IAM roles and policies
    • Resource tagging strategy
    • Cost optimization

Deployment Strategy

  • Blue-Green Deployment

    • Two identical production environments
    • Traffic routing between blue/green
    • Instant rollback capability
    • Zero-downtime deployments
    • Health check validation
  • Canary Deployment

    • Gradual traffic shift (5% → 25% → 50% → 100%)
    • Automated rollback on errors
    • Metrics monitoring during rollout
    • User segmentation for testing
  • Rolling Deployment

    • Update instances one at a time
    • Maintain service availability
    • Automatic rollback on failure

Load Balancing & SSL

  • Load Balancer Configuration

    • Application load balancer (ALB)
    • SSL/TLS termination
    • Health checks
    • Session persistence
    • WebSocket support
  • SSL Certificates

    • Let's Encrypt automation
    • Wildcard certificates
    • Certificate renewal automation
    • HTTPS enforcement
    • HSTS headers

Auto-scaling

  • Horizontal Scaling

    • CPU-based scaling
    • Memory-based scaling
    • Request-based scaling
    • Scheduled scaling (predictable load)
    • Min/max instance configuration
  • Database Scaling

    • Read replicas
    • Connection pooling
    • Automatic failover
    • Backup automation

🛠️ Implementation Details

Kubernetes Deployment

# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spywatcher-backend
  labels:
    app: spywatcher
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spywatcher
      tier: backend
  template:
    metadata:
      labels:
        app: spywatcher
        tier: backend
    spec:
      containers:
      - name: backend
        image: ghcr.io/onnwee/spywatcher-backend:latest
        ports:
        - containerPort: 3001
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: spywatcher-secrets
              key: database-url
        - name: REDIS_URL
          valueFrom:
            secretKeyRef:
              name: spywatcher-secrets
              key: redis-url
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health/live
            port: 3001
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 3001
          initialDelaySeconds: 10
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: spywatcher-backend
spec:
  selector:
    app: spywatcher
    tier: backend
  ports:
  - port: 80
    targetPort: 3001
  type: LoadBalancer
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: spywatcher-backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: spywatcher-backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

Terraform Infrastructure

# terraform/main.tf
provider "aws" {
  region = "us-east-1"
}

# VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  
  tags = {
    Name = "spywatcher-vpc"
  }
}

# RDS PostgreSQL
resource "aws_db_instance" "postgres" {
  identifier        = "spywatcher-db"
  engine            = "postgres"
  engine_version    = "15.3"
  instance_class    = "db.t3.medium"
  allocated_storage = 100
  storage_encrypted = true
  
  db_name  = "spywatcher"
  username = var.db_username
  password =...


- Fixes subculture-collective/discord-spywatcher#22

onnwee (Migrated from github.com) reviewed 2025-11-02 11:14:17 -06:00
PatrickFanella (Migrated from github.com) reviewed 2025-11-02 11:14:17 -06:00
copilot-pull-request-reviewer[bot] (Migrated from github.com) reviewed 2025-11-02 15:39:03 -06:00
copilot-pull-request-reviewer[bot] (Migrated from github.com) left a comment

Pull Request Overview

This PR adds comprehensive production deployment infrastructure for the Spywatcher application, including Terraform modules for AWS infrastructure, Kubernetes manifests, Helm charts, deployment scripts, and CI/CD automation via GitHub Actions.

  • Complete AWS infrastructure using Terraform (VPC, EKS, RDS, Redis, ALB with WAF)
  • Production-ready Kubernetes manifests with Kustomize overlays for staging and production
  • Helm charts for simplified deployment and configuration management
  • Advanced deployment strategies: Rolling updates, Blue-Green, and Canary deployments
  • GitHub Actions CI/CD pipeline with automated build, test, deploy, and rollback

Reviewed Changes

Copilot reviewed 55 out of 55 changed files in this pull request and generated 6 comments.

| File | Description |
| ---- | ----------- |
| terraform/variables.tf | Root Terraform variables defining AWS region, VPC, EKS, RDS, Redis, and certificate configuration |
| terraform/outputs.tf | Root module outputs exposing VPC, EKS, RDS, Redis, and ALB resources |
| terraform/main.tf | Main Terraform configuration orchestrating VPC, EKS, RDS, Redis, and ALB modules |
| terraform/modules/vpc/main.tf | VPC module creating networking infrastructure with subnets, NAT gateways, and flow logs |
| terraform/modules/eks/main.tf | EKS module creating Kubernetes cluster with node groups and add-ons |
| terraform/modules/rds/main.tf | RDS module creating PostgreSQL database with encryption and backups |
| terraform/modules/redis/main.tf | Redis module creating ElastiCache cluster with encryption and auth |
| terraform/modules/alb/main.tf | ALB module creating load balancer with WAF protection |
| k8s/base/* | Base Kubernetes manifests for deployments, services, and resources |
| k8s/overlays/* | Environment-specific Kustomize overlays for staging and production |
| scripts/deployment/* | Blue-green and canary deployment automation scripts |
| helm/spywatcher/* | Helm chart templates and values for simplified deployment |
| .github/workflows/deploy-production.yml | GitHub Actions workflow for automated CI/CD pipeline |
| README.md | Updated README with production deployment section |
| DEPLOYMENT.md | Comprehensive deployment guide and procedures |
| INFRASTRUCTURE.md | Infrastructure architecture overview and operational details |


@@ -0,0 +1,291 @@
name: Deploy to Production
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-11-02 15:39:02 -06:00

The smoke test attempts to get the LoadBalancer hostname from a ClusterIP service (as defined in k8s/base/backend-service.yaml line 12: type: ClusterIP), which won't have a loadBalancer status. This will result in an empty BACKEND_URL and cause the smoke tests to fail. Consider testing via the ingress URL or using port-forwarding for internal service testing.

          # Port-forward backend service to localhost:8080
          kubectl port-forward svc/spywatcher-backend 8080:3000 -n spywatcher &
          PORT_FORWARD_PID=$!
          
          # Wait for port-forward to be ready
          sleep 5
          
          # Test health endpoints
          echo "Testing liveness endpoint..."
          curl -f "http://localhost:8080/health/live" || (kill $PORT_FORWARD_PID && exit 1)
          
          echo "Testing readiness endpoint..."
          curl -f "http://localhost:8080/health/ready" || (kill $PORT_FORWARD_PID && exit 1)
          
          echo "Smoke tests passed!"
          
          # Kill port-forward process
          kill $PORT_FORWARD_PID
@@ -0,0 +1,193 @@
apiVersion: apps/v1
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-11-02 15:39:03 -06:00

Running database migrations in an init container means migrations run for every pod replica during deployment, which can cause race conditions and deadlocks with multiple pods trying to migrate simultaneously. Consider running migrations as a separate Kubernetes Job before deployment or using a leader election mechanism.


@@ -0,0 +91,4 @@
# Calculate replica counts based on percentages
local total_replicas=3
local canary_replicas=$(( (total_replicas * canary_weight + 50) / 100 ))
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-11-02 15:39:02 -06:00

The replica calculation for canary_weight=5% will result in canary_replicas=1 due to integer arithmetic rounding, which is actually 33% of total traffic (1 out of 3 replicas), not 5%. This defeats the purpose of gradual canary rollout at low percentages. Consider using a service mesh or ingress controller for accurate traffic splitting instead of relying on replica counts.

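A quick arithmetic sketch shows how coarse replica-based splitting is with three replicas. The rounding expression comes from the script above; the floor-of-one-pod clamp is an assumption about how the script avoids a zero-replica canary:

```shell
#!/bin/sh
# Demonstrate the granularity of replica-based canary weights.
total_replicas=3
for canary_weight in 5 25 50 100; do
  # Round-to-nearest integer replica count, as in the deploy script
  canary_replicas=$(( (total_replicas * canary_weight + 50) / 100 ))
  # Assumed clamp: run at least one canary pod
  [ "$canary_replicas" -lt 1 ] && canary_replicas=1
  actual=$(( canary_replicas * 100 / total_replicas ))
  echo "requested ${canary_weight}% -> ${canary_replicas}/${total_replicas} replicas (~${actual}% of traffic)"
done
```

With three replicas the only achievable traffic fractions are roughly 33%, 67%, and 100%, so a 5% stage is impossible without a service mesh or weighted ingress.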
@@ -0,0 +1,141 @@
terraform {
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-11-02 15:39:03 -06:00

The client.authentication.k8s.io/v1beta1 API version for EKS authentication is deprecated as of Kubernetes 1.24. Consider using client.authentication.k8s.io/v1 instead for future compatibility with EKS cluster version 1.28 specified in variables.

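The fix amounts to bumping `api_version` in the provider's `exec` auth block. A minimal sketch, assuming the module exposes the usual EKS outputs (`cluster_endpoint`, CA data, `cluster_name` are assumptions about this repo's module interface):

```hcl
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1" # was v1beta1 (deprecated in k8s 1.24)
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
```

The same change applies to the Helm provider's `kubernetes {}` block; note the v1 API requires a reasonably recent AWS CLI that emits v1 `ExecCredential` responses.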
@@ -0,0 +1,164 @@
resource "aws_db_subnet_group" "main" {
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-11-02 15:39:02 -06:00

Using timestamp() in the final snapshot identifier will cause Terraform to detect changes on every plan since it's always different. Consider using a variable or removing the timestamp to avoid unnecessary drift detection. The snapshot identifier already includes the identifier which should be unique enough.

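A sketch of the stable form (attribute subset and `var.identifier` usage assumed from the module's variable names):

```hcl
resource "aws_db_instance" "postgres" {
  identifier = var.identifier
  # ... engine, instance_class, storage settings elided ...

  skip_final_snapshot       = false
  # Stable name: no timestamp(), so `terraform plan` stops showing a
  # perpetual in-place change on this attribute.
  final_snapshot_identifier = "${var.identifier}-final-snapshot"
}
```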
@@ -0,0 +1,213 @@
resource "aws_vpc" "main" {
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-11-02 15:39:02 -06:00

The IAM policy grants VPC Flow Logs permissions on all resources (`Resource = "*"`). This should be scoped to the specific CloudWatch log group to follow the principle of least privilege. Consider changing to `Resource = aws_cloudwatch_log_group.flow_log.arn`.

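A sketch of the scoped policy (role and resource names are assumptions; the `:*` suffix covers the log streams inside the group):

```hcl
resource "aws_iam_role_policy" "flow_log" {
  name = "vpc-flow-log-policy"
  role = aws_iam_role.flow_log.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "logs:CreateLogStream",
        "logs:PutLogEvents",
      ]
      # Scoped to the flow-log group (and its streams) instead of "*"
      Resource = "${aws_cloudwatch_log_group.flow_log.arn}:*"
    }]
  })
}
```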
onnwee commented 2025-11-02 17:13:50 -06:00 (Migrated from github.com)

@copilot apply changes based on the comments in [this thread](https://github.com/subculture-collective/discord-spywatcher/pull/145#pullrequestreview-3409042172)
copilot-swe-agent[bot] (Migrated from github.com) reviewed 2025-11-02 17:18:29 -06:00
@@ -0,0 +1,164 @@
resource "aws_db_subnet_group" "main" {
copilot-swe-agent[bot] (Migrated from github.com) commented 2025-11-02 17:18:29 -06:00

Fixed in commit 95fdde6. Removed the timestamp() call from final_snapshot_identifier to prevent Terraform drift. Now uses: `"${var.identifier}-final-snapshot"`
copilot-swe-agent[bot] (Migrated from github.com) reviewed 2025-11-02 17:18:29 -06:00
@@ -0,0 +1,213 @@
resource "aws_vpc" "main" {
copilot-swe-agent[bot] (Migrated from github.com) commented 2025-11-02 17:18:29 -06:00

Fixed in commit 95fdde6. Scoped the IAM policy resource to the specific CloudWatch log group ARN: `"${aws_cloudwatch_log_group.flow_log.arn}:*"`
copilot-swe-agent[bot] (Migrated from github.com) reviewed 2025-11-02 17:18:29 -06:00
@@ -0,0 +1,291 @@
name: Deploy to Production
copilot-swe-agent[bot] (Migrated from github.com) commented 2025-11-02 17:18:29 -06:00

Fixed in commit 95fdde6. Updated smoke tests to check for ingress first, and fall back to port-forwarding if ingress is not available. This correctly handles ClusterIP services without expecting LoadBalancer status.

copilot-swe-agent[bot] (Migrated from github.com) reviewed 2025-11-02 17:18:29 -06:00
@@ -0,0 +1,193 @@
apiVersion: apps/v1
copilot-swe-agent[bot] (Migrated from github.com) commented 2025-11-02 17:18:29 -06:00

Fixed in commit 95fdde6. Removed the init container and created a separate `migration-job.yaml` with `completions: 1` and `parallelism: 1` to prevent race conditions. The GitHub Actions workflow now creates unique migration jobs before deployment.
copilot-swe-agent[bot] (Migrated from github.com) reviewed 2025-11-02 17:18:29 -06:00
@@ -0,0 +1,141 @@
terraform {
copilot-swe-agent[bot] (Migrated from github.com) commented 2025-11-02 17:18:29 -06:00

Fixed in commit 95fdde6. Updated both Kubernetes and Helm provider configurations to use `client.authentication.k8s.io/v1` instead of the deprecated `v1beta1` API version.
copilot-swe-agent[bot] (Migrated from github.com) reviewed 2025-11-02 17:18:30 -06:00
@@ -0,0 +91,4 @@
# Calculate replica counts based on percentages
local total_replicas=3
local canary_replicas=$(( (total_replicas * canary_weight + 50) / 100 ))
copilot-swe-agent[bot] (Migrated from github.com) commented 2025-11-02 17:18:30 -06:00

Fixed in commit 95fdde6. Added a warning comment in the script about replica-based traffic splitting limitations and recommendation to use a service mesh (Istio, Linkerd) or ingress controller for accurate percentage-based traffic splitting.
