
Deploying a Microservices Todo Application


So, you’ve built a microservices TODO application that runs on your laptop. Maybe you have a frontend, a few APIs, a database. Heck, it even works perfectly at localhost:3000. But it’s too boring to have this masterpiece sit quietly on your local machine; you want others to access it! But:

  • How exactly do I get this online again?
  • How do I manage multiple services in production?
  • How do I deploy updates without breaking things?
  • How do I get HTTPS working?
  • How do I avoid clicking through AWS console menus for every deployment?

This guide answers those questions by teaching you the same tools used in production environments: Docker, Terraform, Ansible, and CI/CD pipelines.

Prerequisites

  • A working microservices application on your local machine (we have one ready)
  • Basic command line skills
  • Git fundamentals
  • AWS account (free tier works fine)
  • Domain name (a few dollars from Namecheap or Hostinger)

Let’s begin.

First Steps: Understanding the Application

You’ve forked and cloned this repo to your local machine. The application consists of:

  • Frontend - Vue.js UI
  • Auth API - Go-based authentication service (port 8081)
  • Todos API - Node.js CRUD operations (port 8082)
  • Users API - Java Spring Boot user profiles (port 8083)
  • Log Processor - Python worker processing Redis queue
  • Redis - Message queue

Each service has its own directory with a README explaining how to run it locally. You’ve installed dependencies (Node.js, Go, Java, Python) and verified each service works:

# each service in its own terminal:
cd frontend && npm install && npm run dev
cd auth-api && go run main.go
cd todos-api && npm install && npm start
cd users-api && ./mvnw clean install && java -jar target/users-api-0.0.1-SNAPSHOT.jar
cd log-message-processor && pip3 install -r requirements.txt && python3 main.py

But… running six terminals doesn’t sound right, does it? Let’s fix that.

Containerize Everything with Docker

You have verified each service runs locally. But (a) it’s stressful to start six separate processes manually, and (b) it doesn’t translate well to production. An easy solution is to package each service in a Docker container and orchestrate them with Docker Compose.

Implementation

Create a Dockerfile in each service directory:

users-api/Dockerfile:

FROM maven:3.8-openjdk-8-slim as build-stage

WORKDIR /app

COPY pom.xml ./
RUN mvn dependency:go-offline -B

COPY src ./src
RUN mvn clean install -DskipTests

FROM eclipse-temurin:8-jre-jammy as production-stage

ARG JAR_FILE=/app/target/users-api-0.0.1-SNAPSHOT.jar
COPY --from=build-stage ${JAR_FILE} /app/app.jar

EXPOSE 8083

ENTRYPOINT ["java", "-jar", "/app/app.jar"]

todos-api/Dockerfile:

FROM node:8 as deps

WORKDIR /app
COPY package*.json ./
RUN npm install

FROM node:8-slim as prod

WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

EXPOSE 8082

CMD ["npm", "start"]

log-message-processor/Dockerfile:

FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python3", "main.py"]

auth-api/Dockerfile:

FROM golang:1.21-alpine as build-stage

WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 go build -o /auth-api .

FROM alpine:latest

WORKDIR /
COPY --from=build-stage /auth-api /auth-api

EXPOSE 8081

ENTRYPOINT ["/auth-api"]

frontend/Dockerfile:

FROM node:14 as build-stage

WORKDIR /app
COPY package*.json ./
RUN npm install

COPY . .
RUN npm run build

FROM nginx:alpine as production-stage

COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

The frontend uses Nginx as a reverse proxy:

frontend/nginx.conf:

server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;

    location /login {
        proxy_pass http://auth-api:8081/login;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /todos {
        proxy_pass http://todos-api:8082/todos;
        proxy_set_header Host $host;
        proxy_set_header Authorization $http_authorization;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
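Before wiring everything together, it’s worth checking that each image actually builds. A quick sketch (the image tags are arbitrary; note that running the frontend container on its own would fail, because its nginx config references auth-api and todos-api hostnames that only exist on the Compose network):

for svc in frontend auth-api todos-api users-api log-message-processor; do
  docker build -t "todo-$svc:local" "./$svc"
done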

Orchestrate with Docker Compose

Create docker-compose.yml at the project root to manage all services:

services:
  traefik:
    image: traefik:v3.2
    container_name: traefik
    command:
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.myresolver.acme.email=${ACME_EMAIL}"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./letsencrypt:/letsencrypt"
    networks:
      - web

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: frontend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.frontend.rule=Host(`${DOMAIN}`)"
      - "traefik.http.routers.frontend.entrypoints=websecure"
      - "traefik.http.routers.frontend.tls.certresolver=myresolver"
      - "traefik.http.services.frontend.loadbalancer.server.port=80"
    networks:
      - web

  auth-api:
    build:
      context: ./auth-api
      dockerfile: Dockerfile
    container_name: auth-api
    environment:
      - AUTH_API_PORT=8081
      - USERS_API_ADDRESS=http://users-api:8083
      - JWT_SECRET=${JWT_SECRET}
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.auth-api.rule=Host(`${DOMAIN}`) && PathPrefix(`/api/auth`)"
      - "traefik.http.routers.auth-api.entrypoints=websecure"
      - "traefik.http.routers.auth-api.tls.certresolver=myresolver"
      - "traefik.http.middlewares.auth-api-stripprefix.stripprefix.prefixes=/api/auth"
      - "traefik.http.routers.auth-api.middlewares=auth-api-stripprefix"
      - "traefik.http.services.auth-api.loadbalancer.server.port=8081"
    networks:
      - web

  todos-api:
    build:
      context: ./todos-api
      dockerfile: Dockerfile
    container_name: todos-api
    environment:
      - JWT_SECRET=${JWT_SECRET}
      - REDIS_HOST=redis-queue
      - REDIS_PORT=6379
      - REDIS_CHANNEL=log_channel
      - TODOS_API_PORT=8082
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.todos-api.rule=Host(`${DOMAIN}`) && PathPrefix(`/api/todos`)"
      - "traefik.http.routers.todos-api.entrypoints=websecure"
      - "traefik.http.routers.todos-api.tls.certresolver=myresolver"
      - "traefik.http.middlewares.todos-api-stripprefix.stripprefix.prefixes=/api/todos"
      - "traefik.http.routers.todos-api.middlewares=todos-api-stripprefix"
      - "traefik.http.services.todos-api.loadbalancer.server.port=8082"
    networks:
      - web

  users-api:
    build:
      context: ./users-api
      dockerfile: Dockerfile
    container_name: users-api
    environment:
      - JWT_SECRET=${JWT_SECRET}
      - SERVER_PORT=8083
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.users-api.rule=Host(`${DOMAIN}`) && PathPrefix(`/api/users`)"
      - "traefik.http.routers.users-api.entrypoints=websecure"
      - "traefik.http.routers.users-api.tls.certresolver=myresolver"
      - "traefik.http.middlewares.users-api-stripprefix.stripprefix.prefixes=/api/users"
      - "traefik.http.routers.users-api.middlewares=users-api-stripprefix"
      - "traefik.http.services.users-api.loadbalancer.server.port=8083"
    networks:
      - web

  log-message-processor:
    build:
      context: ./log-message-processor
      dockerfile: Dockerfile
    container_name: log-message-processor
    command: sh -c "sleep 5 && python3 main.py"
    environment:
      - REDIS_HOST=redis-queue
      - REDIS_PORT=6379
      - REDIS_CHANNEL=log_channel
    networks:
      - web

  redis-queue:
    image: redis:6.2-alpine
    container_name: redis-queue
    volumes:
      - redis_data:/data
    networks:
      - web

volumes:
  redis_data:
    driver: local

networks:
  web:
    driver: bridge

Wait. Wait. WAIT! What’s “Traefik” doing here?

Traefik is a reverse proxy that handles:

  • Routing traffic to the right service based on URL paths
  • Automatic HTTPS certificates from Let’s Encrypt
  • HTTP to HTTPS redirection

The Traefik labels on each service are how Traefik knows to route https://your-domain.com/api/<service> to the right container.
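Once the stack is running, you can watch that routing in action with curl. The paths below are illustrative (after stripprefix, a request to /api/todos/todos reaches todos-api as /todos), so adjust them to your services’ actual routes:

# Illustrative only — adjust to your APIs' actual routes:
curl -i https://your-domain.com/api/auth/login    # Traefik -> auth-api:8081/login
curl -i https://your-domain.com/api/todos/todos   # Traefik -> todos-api:8082/todos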

Create Environment File

Copy .env.example to .env:

cp .env.example .env

Then edit .env:

DOMAIN=localhost
ACME_EMAIL=test@example.com
JWT_SECRET=your-super-secret-key-here
AUTH_API_PORT=8081
TODOS_API_PORT=8082
SERVER_PORT=8083
REDIS_HOST=redis-queue
REDIS_PORT=6379
Test Locally

docker compose up -d --build

Docker Compose will automatically:

  1. Build all service images
  2. Start all the containers
  3. Set up networking so services can find each other by name
  4. Start Traefik to handle routing

Visit http://localhost in your browser and you should see the login page! (You’ll be redirected to HTTPS; expect a certificate warning locally, since Let’s Encrypt can’t issue certificates for localhost and Traefik falls back to its self-signed default.)

Why is this so cool? Services can now find each other by name (like http://auth-api:8081), and your app now works the same on any machine with Docker installed.
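You can verify that container-to-container DNS yourself; Compose registers every service name on the web network (getent ships with the Debian- and Alpine-based images used here):

docker compose exec todos-api getent hosts redis-queue auth-api users-api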

Get a Domain and Configure DNS

Before deploying to the cloud, you need a domain. We want to access the app at a nice domain (todo.example.com, for example), not some ugly IP address. To do that:

Buy a Domain

  1. Go to Namecheap or Hostinger
  2. Search for an available domain
  3. Purchase!

Configure DNS

We’ll come back to this after we have a server IP address.

Deploy to AWS Manually

Okay, so. You got it to work locally, but your laptop isn’t a production server. You need something that runs 24/7 with a public IP, so others can reach it from anywhere in the world!

How do you do that? Simple: launch an AWS EC2 instance and deploy manually. Doing it by hand once helps you understand why automation is needed.

Implementation

  1. Launch an EC2 Instance

    • Create an AWS account (if you haven’t already)
    • Log into the AWS Console
    • EC2 > Launch Instance
    • Choose Ubuntu 22.04 LTS
    • Instance type: c7i-flex.large (2 vCPU, 4 GB RAM)
    • Create new key pair: deploy-key.pem
    • Security group: Allow ports 22 (SSH), 80 (HTTP), 443 (HTTPS)
    • Launch
  2. Connect to the Server

chmod 400 deploy-key.pem

ssh -i deploy-key.pem ubuntu@<your-instance-ip>

sudo apt update && sudo apt upgrade -y
  3. Install Dependencies

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker ubuntu

sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-linux-x86_64" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
  4. Reconnect for Group Changes

exit
ssh -i deploy-key.pem ubuntu@<your-instance-ip>
  5. Update DNS Now

You now have your server IP address:

  • Go to your domain registrar’s DNS management
  • Create an A record:
    • Type: A
    • Name: @ (for root domain) or todo (for subdomain)
    • Value: Your EC2 instance IP address
    • TTL: 300 seconds
  • Save and wait 5-30 minutes for propagation
  6. Test DNS propagation:
nslookup your-domain.com
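If you’d rather watch propagation than re-run nslookup by hand, a small loop works (assuming dig is installed, e.g. from the dnsutils package):

watch -n 30 'dig +short your-domain.com'   # Ctrl+C once it prints your EC2 IP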

Deploy Application

git clone https://github.com/yourusername/todo-application.git
cd todo-application

cp .env.example .env
nano .env

Update .env:

DOMAIN=your-domain.com
ACME_EMAIL=your-valid-email@example.com
JWT_SECRET=$(openssl rand -base64 32)

Start services:

docker compose up -d --build

Check status:

docker ps
docker logs traefik
docker logs frontend

After 2-3 minutes for certificate issuance, visit https://your-domain.com. You should see your application with a valid SSL certificate!

The Manual Pain Points

You just:

  • Clicked through multiple AWS console screens
  • Ran countless commands manually
  • Configured DNS separately
  • Have no record of what you installed or how
  • Would have to repeat everything to replicate this (staging? dev?)

These are the exact problems automation solves.

Automate Infrastructure with Terraform

Manual AWS setup isn’t repeatable, versionable, or shareable. The easiest solution to that problem is to define your infrastructure as code with Terraform so it’s repeatable, versionab… you get the idea.

Prerequisites: Remote State Setup

First, create storage for Terraform state in AWS:

# Create S3 bucket for Terraform state
aws s3api create-bucket \
  --bucket hng13-stage6-state-bucket \
  --region us-east-1

# Enable versioning
aws s3api put-bucket-versioning \
  --bucket hng13-stage6-state-bucket \
  --versioning-configuration Status=Enabled

# Create DynamoDB table for state locking
aws dynamodb create-table \
  --table-name hng13-stage6-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
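To confirm both resources exist before running Terraform (head-bucket prints nothing on success and errors otherwise):

aws s3api head-bucket --bucket hng13-stage6-state-bucket
aws dynamodb describe-table \
  --table-name hng13-stage6-state-lock \
  --query 'Table.TableStatus' --output text   # expect: ACTIVE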

Why? Terraform state contains everything about your infrastructure. S3 stores it safely with versioning while DynamoDB prevents two people from modifying infrastructure simultaneously. Genius, I know.

Create Infrastructure Directory

mkdir -p infra/terraform/templates
cd infra/terraform

Terraform Configuration Files

infra/terraform/main.tf:

terraform {
  required_version = ">= 1.0"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.4"
    }
  }

  backend "s3" {
    bucket         = "hng13-stage6-state-bucket"
    key            = "hng13-stage6/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "hng13-stage6-state-lock"
  }
}

provider "aws" {
  region = var.aws_region
  
  default_tags {
    tags = {
      Project     = "HNG13-Stage6"
      ManagedBy   = "Terraform"
      Environment = var.environment
    }
  }
}

infra/terraform/variables.tf:

variable "aws_region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Environment name"
  type        = string
  default     = "production"
}

variable "project_name" {
  description = "Project name for resource naming"
  type        = string
  default     = "hng13-stage6"
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.medium"
}

variable "key_name" {
  description = "SSH key pair name"
  type        = string
}

variable "domain_name" {
  description = "Domain name for the application"
  type        = string
}

variable "acme_email" {
  description = "Email for Let's Encrypt certificates"
  type        = string
}

variable "git_repo_url" {
  description = "Git repository URL"
  type        = string
}

variable "git_branch" {
  description = "Git branch to deploy"
  type        = string
  default     = "main"
}

variable "jwt_secret" {
  description = "JWT secret for application"
  type        = string
  sensitive   = true
}

infra/terraform/resources.tf:

# Get latest Ubuntu 22.04 LTS AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Security Group
resource "aws_security_group" "microservices" {
  name        = "${var.project_name}-sg"
  description = "Security group for microservices application"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "SSH access"
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTP access"
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTPS access"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound traffic"
  }

  tags = {
    Name = "${var.project_name}-sg"
  }

  lifecycle {
    create_before_destroy = true
  }
}

# EC2 Instance
resource "aws_instance" "microservices" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type
  key_name      = var.key_name

  vpc_security_group_ids = [aws_security_group.microservices.id]

  root_block_device {
    volume_size           = 30
    volume_type           = "gp3"
    delete_on_termination = true
    encrypted             = true
  }

  user_data = <<-EOF
              #!/bin/bash
              set -e
              apt-get update
              apt-get upgrade -y
              hostnamectl set-hostname ${var.project_name}
              EOF

  tags = {
    Name = "${var.project_name}-server"
  }

  lifecycle {
    ignore_changes = [user_data, ami]
  }
}

# Elastic IP
resource "aws_eip" "microservices" {
  instance = aws_instance.microservices.id
  domain   = "vpc"

  tags = {
    Name = "${var.project_name}-eip"
  }
}

# Generate Ansible Inventory
resource "local_file" "ansible_inventory" {
  content = templatefile("${path.module}/templates/inventory.tpl", {
    server_ip    = aws_eip.microservices.public_ip
    ssh_user     = "ubuntu"
    ssh_key_path = "~/.ssh/id_ed25519"
    domain_name  = var.domain_name
    acme_email   = var.acme_email
    git_repo_url = var.git_repo_url
    git_branch   = var.git_branch
    jwt_secret   = var.jwt_secret
  })

  filename        = "${path.module}/../ansible/inventory.ini"
  file_permission = "0644"

  depends_on = [aws_eip.microservices]
}

infra/terraform/outputs.tf:

output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.microservices.id
}

output "instance_public_ip" {
  description = "Public IP address"
  value       = aws_eip.microservices.public_ip
}

output "ssh_connection_string" {
  description = "SSH connection string"
  value       = "ssh -i ~/.ssh/id_ed25519 ubuntu@${aws_eip.microservices.public_ip}"
}

output "application_url" {
  description = "Application URL"
  value       = "https://${var.domain_name}"
}

infra/terraform/templates/inventory.tpl:

[microservices]
${server_ip} ansible_user=${ssh_user} ansible_ssh_private_key_file=${ssh_key_path}

[microservices:vars]
ansible_python_interpreter=/usr/bin/python3
domain_name=${domain_name}
acme_email=${acme_email}
git_repo_url=${git_repo_url}
git_branch=${git_branch}
jwt_secret=${jwt_secret}

infra/terraform/terraform.tfvars:

aws_region    = "us-east-1"
environment   = "production"
project_name  = "hng13-stage6"
instance_type = "t3.medium"
key_name      = "deploy-key"
domain_name   = "your-domain.com"
acme_email    = "your-email@example.com"
git_repo_url  = "https://github.com/yourusername/todo-application.git"
git_branch    = "main"
jwt_secret    = "your-generated-secret"
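A word of caution: don’t commit terraform.tfvars with a real jwt_secret. One way to keep the secret out of the file entirely is Terraform’s TF_VAR_ convention, where any matching environment variable feeds the variable of the same name:

# Keep the secret out of tfvars (and out of Git):
export TF_VAR_jwt_secret="$(openssl rand -base64 32)"
terraform plan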

Deploy with Terraform

cd infra/terraform

terraform init

terraform plan

terraform apply

Terraform will magically:

  1. Create a security group
  2. Launch an EC2 instance
  3. Allocate an Elastic IP (fancy word for a permanent public IP address… I know, right? AWS really screwed up the naming on this one)
  4. Output connection details

Now, update your DNS A record with the IP from terraform output instance_public_ip.

To destroy all you just did (cause sometimes, it’s best to let the world burrrrnnnnn):

cd infra/terraform

# DESTROYYYY
terraform destroy -auto-approve

Look at that! You can now create/destroy an entire infrastructure with one command. +1200 Aura points.

Yeah, but like, why tho?

  • Infrastructure is version controlled
  • Can recreate the environment from scratch
  • Changes are reviewable before applying
  • Can share setup with team
  • Can create multiple environments (dev, staging, prod); see the sketch below
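On that last point, a minimal sketch using Terraform workspaces (the names are hypothetical, and you’d also want per-environment domains and key pairs):

terraform workspace new staging    # keeps separate state per environment
terraform apply \
  -var environment=staging \
  -var domain_name=staging.your-domain.com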

Automate Configuration with Ansible

Yeah, yeah. Terraform created the server, but you still had to SSH in manually to install Docker, Git, and other dependencies, clone the code, and start the services. BORING.

All my homies automate server configuration with Ansible.

Create Ansible Structure

mkdir -p infra/ansible/roles/{dependencies,deploy}/{tasks,templates}

Ansible Configuration

infra/ansible/ansible.cfg:

[defaults]
inventory = inventory.ini
host_key_checking = False
retry_files_enabled = False
interpreter_python = auto_silent

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
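With this config in place (and the inventory.ini Terraform generated earlier), a quick connectivity check costs nothing:

ansible -i inventory.ini microservices -m ping   # expect "pong" from the server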

infra/ansible/playbook.yml:

---
- name: Deploy Microservices Application
  hosts: microservices
  become: yes
  gather_facts: yes

  pre_tasks:
    - name: Wait for system to be ready
      wait_for_connection:
        timeout: 300

    - name: Update apt cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

  roles:
    - dependencies
    - deploy

  post_tasks:
    - name: Display deployment information
      debug:
        msg:
          - "Deployment completed!"
          - "Application URL: https://{{ domain_name }}"
          - "Server IP: {{ ansible_host }}"

Role 1: Dependencies

infra/ansible/roles/dependencies/tasks/main.yml:

---
- name: Install required system packages
  ansible.builtin.apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - software-properties-common
      - git
      - python3-pip
      - unzip
    state: present
    update_cache: yes

- name: Add Docker GPG key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker repository
  ansible.builtin.apt_repository:
    repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Install Docker
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    state: present

- name: Start and enable Docker
  ansible.builtin.systemd_service:
    name: docker
    state: started
    enabled: yes

- name: Add ubuntu user to docker group
  ansible.builtin.user:
    name: ubuntu
    groups: docker
    append: yes

- name: Install Docker Compose standalone
  ansible.builtin.get_url:
    url: "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-linux-x86_64"
    dest: /usr/local/bin/docker-compose
    mode: "0755"

- name: Install Python Docker SDK
  ansible.builtin.pip:
    name:
      - docker
      - docker-compose
    state: present

Role 2: Deploy

infra/ansible/roles/deploy/tasks/main.yml:

---
- name: Create application directory
  file:
    path: /opt/app
    state: directory
    owner: ubuntu
    group: ubuntu
    mode: '0755'

- name: Check if repository exists
  stat:
    path: /opt/app/.git
  register: git_repo

- name: Clone repository
  git:
    repo: "{{ git_repo_url }}"
    dest: /opt/app
    version: "{{ git_branch }}"
    force: yes
  become_user: ubuntu
  when: not git_repo.stat.exists

- name: Pull latest changes
  git:
    repo: "{{ git_repo_url }}"
    dest: /opt/app
    version: "{{ git_branch }}"
    update: yes
  become_user: ubuntu
  register: git_pull
  when: git_repo.stat.exists

- name: Create .env file from template
  template:
    src: env.j2
    dest: /opt/app/.env
    owner: ubuntu
    group: ubuntu
    mode: '0644'
  register: env_file

- name: Create letsencrypt directory
  file:
    path: /opt/app/letsencrypt
    state: directory
    owner: ubuntu
    group: ubuntu
    mode: '0755'

- name: Create acme.json with correct permissions
  file:
    path: /opt/app/letsencrypt/acme.json
    state: touch
    owner: ubuntu
    group: ubuntu
    mode: '0600'

- name: Stop existing containers if changes detected
  shell: docker compose down
  args:
    chdir: /opt/app
  become_user: ubuntu
  when: git_pull.changed or env_file.changed
  ignore_errors: yes

- name: Start Docker Compose services
  shell: docker compose up -d --build
  args:
    chdir: /opt/app
  become_user: ubuntu

- name: Wait for services to start
  pause:
    seconds: 30

- name: Check running containers
  command: docker ps
  become_user: ubuntu
  register: docker_ps
  changed_when: false

- name: Display running containers
  debug:
    var: docker_ps.stdout_lines

infra/ansible/roles/deploy/templates/env.j2:

# Traefik Configuration
DOMAIN={{ domain_name }}
ACME_EMAIL={{ acme_email }}

# Frontend
PORT=8080
AUTH_API_ADDRESS=http://auth-api:8081
TODOS_API_ADDRESS=http://todos-api:8082

# Auth API
AUTH_API_PORT=8081
JWT_SECRET={{ jwt_secret }}
USERS_API_ADDRESS=http://users-api:8083

# Todos API
JWT_SECRET={{ jwt_secret }}
REDIS_HOST=redis-queue
REDIS_PORT=6379
REDIS_CHANNEL=log_channel

# Users API
SERVER_PORT=8083
JWT_SECRET={{ jwt_secret }}

# Log Processor
REDIS_HOST=redis-queue
REDIS_PORT=6379
REDIS_CHANNEL=log_channel

Run Ansible

After Terraform completes (inventory file created):

cd infra/ansible
ansible-playbook playbook.yml -v
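To preview what would change before touching the server, Ansible’s check mode does a dry run (shell tasks like docker compose up can’t be fully simulated, so expect some noise):

ansible-playbook playbook.yml --check --diff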

Ansible will:

  1. Install Docker and dependencies
  2. Clone your repository
  3. Create configured .env file
  4. Start all services with Docker Compose

Visit https://your-domain.com and your app is live!

Like the pros do it.

Why tho

  • Server configuration is codified
  • Idempotent (another fancy word that means “safe to run multiple times”)
  • Same setup on every server
  • No manual SSH sessions

Full Automation with CI/CD

Hm. So. Running terraform apply creates the infra and ansible-playbook handles the configuration. Cool. But running those manually for every change? Yeah, nope. We most certainly can do better.

We automate the entire deployment pipeline!

Prerequisites: GitHub Setup

1. Create AWS Resources for Terraform State (if you haven’t)

aws s3api create-bucket --bucket hng13-stage6-state-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket hng13-stage6-state-bucket --versioning-configuration Status=Enabled
aws dynamodb create-table --table-name hng13-stage6-state-lock --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST --region us-east-1

2. Create SSH Key Pair (if you haven’t)

ssh-keygen -t ed25519 -C "deployment-key" -f ~/.ssh/id_ed25519

Upload public key to AWS EC2 > Key Pairs, with the name “hng13-stage6-deploy-key”
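If you prefer the CLI over the console for that upload (using the same key name the workflows below expect):

aws ec2 import-key-pair \
  --key-name hng13-stage6-deploy-key \
  --public-key-material fileb://~/.ssh/id_ed25519.pub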

3. Configure GitHub Secrets

Using GitHub CLI:

gh auth login

# AWS credentials
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set AWS_REGION -b "us-east-1"

# SSH keys
gh secret set SSH_PRIVATE_KEY < ~/.ssh/id_ed25519
gh secret set SSH_PUBLIC_KEY < ~/.ssh/id_ed25519.pub

# Application configuration
gh secret set DOMAIN_NAME -b "your-domain.com"
gh secret set ACME_EMAIL -b "your-valid-email@example.com"
gh secret set GIT_REPO_URL -b "https://github.com/yourusername/repo.git"
gh secret set JWT_SECRET -b "$(openssl rand -base64 32)"

# SMTP for notifications (using Gmail)
gh secret set SMTP_SERVER -b "smtp.gmail.com"
gh secret set SMTP_PORT -b "587"
gh secret set SMTP_USERNAME -b "your-email@gmail.com"
gh secret set SMTP_PASSWORD  # Gmail App Password
gh secret set SMTP_FROM_EMAIL -b "DevOps Pipeline <your-email@gmail.com>"
gh secret set NOTIFICATION_EMAIL -b "your-email@gmail.com"

Generate a Gmail App Password by going to Google Account > Security > 2-Step Verification > App passwords.

4. Create Production Environment

Via GitHub UI: Settings > Environments > New environment > “production” > Add yourself as required reviewer

Infrastructure Workflow

Create .github/workflows/infrastructure.yml:

name: Infrastructure Deployment

on:
  push:
    branches: [main]
    paths:
      - 'infra/terraform/**'
      - 'infra/ansible/**'
      - '.github/workflows/infrastructure.yml'
  workflow_dispatch:

env:
  TF_VERSION: '1.6.0'
  AWS_REGION: ${{ secrets.AWS_REGION }}

jobs:
  terraform-plan:
    name: Terraform Plan & Drift Detection
    runs-on: ubuntu-latest
    outputs:
      has_drift: ${{ steps.drift_check.outputs.has_drift }}
    
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}
          terraform_wrapper: false

      - name: Create terraform.tfvars
        working-directory: ./infra/terraform
        run: |
          cat > terraform.tfvars <<EOF
          aws_region = "${{ secrets.AWS_REGION }}"
          environment = "production"
          project_name = "hng13-stage6"
          instance_type = "t3.medium"
          key_name = "hng13-stage6-deploy-key"
          domain_name = "${{ secrets.DOMAIN_NAME }}"
          acme_email = "${{ secrets.ACME_EMAIL }}"
          git_repo_url = "${{ secrets.GIT_REPO_URL }}"
          git_branch = "main"
          jwt_secret = "${{ secrets.JWT_SECRET }}"
          EOF

      - name: Terraform Init
        working-directory: ./infra/terraform
        run: terraform init

      - name: Terraform Validate
        working-directory: ./infra/terraform
        run: terraform validate

      - name: Terraform Plan
        id: plan
        working-directory: ./infra/terraform
        run: |
          terraform plan -no-color -out=tfplan
          terraform show -no-color tfplan > plan_output.txt

      - name: Check for Drift
        id: drift_check
        working-directory: ./infra/terraform
        run: |
          if grep -q "No changes" plan_output.txt; then
            echo "has_drift=false" >> $GITHUB_OUTPUT
            echo "βœ… No drift detected"
          else
            echo "has_drift=true" >> $GITHUB_OUTPUT
            echo "⚠️ DRIFT DETECTED"
          fi

      - name: Upload Plan
        uses: actions/upload-artifact@v4
        with:
          name: terraform-plan
          path: ./infra/terraform/tfplan

      - name: Upload Plan Output
        uses: actions/upload-artifact@v4
        with:
          name: plan-output
          path: ./infra/terraform/plan_output.txt

  send-drift-notification:
    name: Email Drift Alert
    runs-on: ubuntu-latest
    needs: terraform-plan
    if: needs.terraform-plan.outputs.has_drift == 'true'
    
    steps:
      - uses: actions/checkout@v4

      - name: Download Plan Output
        uses: actions/download-artifact@v4
        with:
          name: plan-output

      - name: Send Email
        uses: dawidd6/action-send-mail@v3
        with:
          server_address: ${{ secrets.SMTP_SERVER }}
          server_port: ${{ secrets.SMTP_PORT }}
          username: ${{ secrets.SMTP_USERNAME }}
          password: ${{ secrets.SMTP_PASSWORD }}
          subject: "⚠️ Terraform Drift Detected"
          to: ${{ secrets.NOTIFICATION_EMAIL }}
          from: ${{ secrets.SMTP_FROM_EMAIL }}
          body: |
            Infrastructure drift detected!
            
            Repository: ${{ github.repository }}
            Branch: ${{ github.ref_name }}
            Commit: ${{ github.sha }}
            
            Review and approve:
            ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          attachments: plan_output.txt

  wait-for-approval:
    name: Manual Approval Gate
    runs-on: ubuntu-latest
    needs: [terraform-plan, send-drift-notification]
    if: needs.terraform-plan.outputs.has_drift == 'true'
    environment:
      name: production
    
    steps:
      - name: Approval Required
        run: echo "Waiting for manual approval..."

  terraform-apply:
    name: Apply Infrastructure Changes
    runs-on: ubuntu-latest
    needs: [terraform-plan, wait-for-approval]
    if: |
      always() &&
      needs.terraform-plan.result == 'success' &&
      (needs.terraform-plan.outputs.has_drift == 'false' || needs.wait-for-approval.result == 'success')
    
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}
          terraform_wrapper: false

      - name: Create terraform.tfvars
        working-directory: ./infra/terraform
        run: |
          cat > terraform.tfvars <<EOF
          aws_region = "${{ secrets.AWS_REGION }}"
          environment = "production"
          project_name = "hng13-stage6"
          instance_type = "t3.medium"
          key_name = "hng13-stage6-deploy-key"
          domain_name = "${{ secrets.DOMAIN_NAME }}"
          acme_email = "${{ secrets.ACME_EMAIL }}"
          git_repo_url = "${{ secrets.GIT_REPO_URL }}"
          jwt_secret = "${{ secrets.JWT_SECRET }}"
          EOF

      - name: Terraform Init
        working-directory: ./infra/terraform
        run: terraform init

      - name: Download Plan
        if: needs.terraform-plan.outputs.has_drift == 'true'
        uses: actions/download-artifact@v4
        with:
          name: terraform-plan
          path: ./infra/terraform

      - name: Terraform Apply
        working-directory: ./infra/terraform
        run: |
          if [ "${{ needs.terraform-plan.outputs.has_drift }}" == "true" ]; then
            terraform apply -auto-approve tfplan
          else
            terraform apply -auto-approve
          fi

      - name: Get Server IP
        id: terraform_output
        working-directory: ./infra/terraform
        run: |
          echo "instance_ip=$(terraform output -raw instance_public_ip)" >> $GITHUB_OUTPUT

      - name: Add SSH Key
        uses: webfactory/ssh-agent@v0.9.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Wait for SSH
        run: |
          IP=${{ steps.terraform_output.outputs.instance_ip }}
          timeout 120 bash -c "until nc -z -w5 $IP 22; do sleep 5; done"

      - name: Install Ansible
        run: sudo apt-get update && sudo apt-get install -y ansible

      - name: Run Ansible Deployment
        working-directory: ./infra/ansible
        run: ansible-playbook -i inventory.ini playbook.yml -v

      - name: Send Success Email
        if: success()
        uses: dawidd6/action-send-mail@v3
        with:
          server_address: ${{ secrets.SMTP_SERVER }}
          server_port: ${{ secrets.SMTP_PORT }}
          username: ${{ secrets.SMTP_USERNAME }}
          password: ${{ secrets.SMTP_PASSWORD }}
          subject: "βœ… Infrastructure Deployed"
          to: ${{ secrets.NOTIFICATION_EMAIL }}
          from: ${{ secrets.SMTP_FROM_EMAIL }}
          body: |
            Infrastructure deployed successfully!
            
            Application: https://${{ secrets.DOMAIN_NAME }}
            Server IP: ${{ steps.terraform_output.outputs.instance_ip }}
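Since the workflow also declares workflow_dispatch, you can trigger and follow a run from the CLI:

gh workflow run infrastructure.yml
gh run watch   # pick the run to follow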

Application Workflow

Create .github/workflows/application.yml:

name: Application Deployment

on:
  push:
    branches: [main]
    paths:
      - 'frontend/**'
      - 'auth-api/**'
      - 'todos-api/**'
      - 'users-api/**'
      - 'log-message-processor/**'
      - 'docker-compose.yml'
      - '.github/workflows/application.yml'
  workflow_dispatch:

jobs:
  test-build:
    name: Test Docker Builds
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Test Frontend Build
        run: docker build -t frontend:test ./frontend

      - name: Test Auth API Build
        run: docker build -t auth-api:test ./auth-api

      - name: Test Todos API Build
        run: docker build -t todos-api:test ./todos-api

      - name: Test Users API Build
        run: docker build -t users-api:test ./users-api

      - name: Test Log Processor Build
        run: docker build -t log-processor:test ./log-message-processor

  deploy-application:
    name: Deploy to Server
    runs-on: ubuntu-latest
    needs: test-build
    environment: production
    
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: '1.6.0'
          terraform_wrapper: false

      - name: Get Server IP
        id: terraform_output
        working-directory: ./infra/terraform
        run: |
          terraform init
          echo "server_ip=$(terraform output -raw instance_public_ip)" >> $GITHUB_OUTPUT

      - name: Add SSH Key
        uses: webfactory/ssh-agent@v0.9.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Install Ansible
        run: sudo apt-get update && sudo apt-get install -y ansible

      - name: Generate Ansible Inventory
        working-directory: ./infra/ansible
        run: |
          cat > inventory.ini <<EOF
          [microservices]
          ${{ steps.terraform_output.outputs.server_ip }} ansible_user=ubuntu
          
          [microservices:vars]
          ansible_python_interpreter=/usr/bin/python3
          domain_name=${{ secrets.DOMAIN_NAME }}
          acme_email=${{ secrets.ACME_EMAIL }}
          git_repo_url=${{ secrets.GIT_REPO_URL }}
          git_branch=main
          jwt_secret=${{ secrets.JWT_SECRET }}
          EOF

      - name: Run Ansible Deployment
        working-directory: ./infra/ansible
        run: ansible-playbook -i inventory.ini playbook.yml -v

      - name: Send Email
        if: always()
        uses: dawidd6/action-send-mail@v3
        with:
          server_address: ${{ secrets.SMTP_SERVER }}
          server_port: ${{ secrets.SMTP_PORT }}
          username: ${{ secrets.SMTP_USERNAME }}
          password: ${{ secrets.SMTP_PASSWORD }}
          subject: "${{ job.status == 'success' && 'βœ…' || '❌' }} Application Deployment"
          to: ${{ secrets.NOTIFICATION_EMAIL }}
          from: ${{ secrets.SMTP_FROM_EMAIL }}
          body: |
            Application deployment ${{ job.status }}
            
            URL: https://${{ secrets.DOMAIN_NAME }}
            Commit: ${{ github.sha }}

How It Works

Infrastructure Workflow triggers when you change Terraform or Ansible files:

  1. Plan: Terraform plans changes
  2. Drift Detection: Checks if infrastructure drifted from desired state
  3. Email Alert: Sends email if drift detected
  4. Manual Approval: Pauses for your approval via GitHub environment
  5. Apply: Applies changes after approval (or automatically if no drift)
  6. Configure: Runs Ansible to set up the server
  7. Notify: Emails you when complete

Application Workflow triggers when you change application code:

  1. Test: Builds all Docker images to verify they work
  2. Deploy: Runs Ansible to pull latest code and restart services
  3. Notify: Emails you about success/failure

The Single Command Experience

Initial deployment:

git add infra/
git commit -m "Add infrastructure automation"
git push origin main

GitHub Actions handles everything:

  • Creates AWS resources
  • Configures server
  • Deploys application
  • Emails you when done

Testing Your Setup

1. Initial Deployment

# Ensure all secrets are configured
gh secret list

# Push infrastructure code
git add .
git commit -m "Initial infrastructure setup"
git push origin main

Watch GitHub Actions. Within 5-10 minutes, you’ll receive an email confirming deployment.

2. Verify Application

# Check DNS propagation
nslookup your-domain.com

# Test endpoints
curl -I https://your-domain.com  # Should return 200
curl https://your-domain.com/api/auth  # Should return error (expected - no auth provided)

Open browser: https://your-domain.com - you should see the login page with a valid SSL certificate.

3. Test Drift Detection

Make a manual change in AWS Console (e.g., edit a security group rule), then:

git commit --allow-empty -m "Trigger drift check"
git push origin main

You should:

  1. Receive email about drift detection
  2. See workflow paused in GitHub Actions
  3. Review the plan output attachment
  4. Approve or reject in GitHub environment

4. Test Application Deployment

# Make any code change
echo "// Test deployment" >> todos-api/routes/todos.js

git add .
git commit -m "Test deployment workflow"
git push origin main

GitHub Actions will:

  • Build Docker images
  • Deploy to server
  • Send success email

5. Verify Idempotency

The critical test: run Terraform again without making any changes:

cd infra/terraform
terraform plan

Output should be: “No changes. Your infrastructure matches the configuration.”

This proves:

  • No resource recreation
  • Drift detection works
  • Setup is truly idempotent

Final Thoughts

You started with services on localhost. Now you have:

  • Infrastructure that provisions itself
  • Configuration that applies itself
  • Deployments that happen automatically
  • Drift that detects itself
  • Approvals that gate critical changes
  • Notifications that keep you informed

All triggered by git push.

The beautiful part? It’s safer than manual deployments:

  • No “oops, forgot to update that config file”
  • No “which command did I run last time?”
  • No “it worked on my machine”
  • No surprise changes to infrastructure

Everything is:

  • Versioned - in Git
  • Reviewable - in pull requests
  • Testable - in CI before production
  • Auditable - in GitHub Actions logs
  • Repeatable - run it 100 times, same result

This is DevOps. Not just deploying faster, but deploying safer, with confidence that what worked yesterday will work today, and you’ll know immediately if something drifts.

Now go deploy something. You’ve earned it.

Quick Reference

Essential Commands

# Terraform
terraform init              # Initialize
terraform plan              # Preview changes
terraform apply             # Apply changes
terraform destroy           # Destroy infrastructure
terraform output            # Show outputs

# Ansible
ansible-playbook playbook.yml -v              # Run playbook
ansible -i inventory.ini all -m ping          # Test connectivity
ansible-playbook playbook.yml --check         # Dry run

# Docker
docker compose up -d --build    # Build and start
docker compose down             # Stop and remove
docker compose logs -f          # View logs
docker compose ps               # List containers
docker system prune -a          # Clean up

# GitHub CLI
gh secret list                  # List secrets
gh secret set KEY              # Set secret
gh workflow run workflow.yml   # Trigger workflow
gh run list                    # List workflow runs

Project Structure

todo-application/
├── .github/
│   └── workflows/
│       ├── infrastructure.yml    # Infra deployment pipeline
│       └── application.yml       # App deployment pipeline
├── infra/
│   ├── terraform/
│   │   ├── main.tf              # Provider & backend config
│   │   ├── variables.tf         # Input variables
│   │   ├── resources.tf         # AWS resources
│   │   ├── outputs.tf           # Outputs + inventory gen
│   │   ├── terraform.tfvars     # Variable values
│   │   └── templates/
│   │       └── inventory.tpl    # Ansible inventory template
│   └── ansible/
│       ├── ansible.cfg          # Ansible configuration
│       ├── playbook.yml         # Main playbook
│       ├── inventory.ini        # Auto-generated
│       └── roles/
│           ├── dependencies/    # Install Docker, etc.
│           │   └── tasks/
│           │       └── main.yml
│           └── deploy/          # Deploy application
│               ├── tasks/
│               │   └── main.yml
│               └── templates/
│                   └── env.j2
├── frontend/                    # Vue.js frontend
├── auth-api/                    # Go auth service
├── todos-api/                   # Node.js todos service
├── users-api/                   # Java users service
├── log-message-processor/       # Python log processor
├── docker-compose.yml           # Service orchestration
├── .env                         # Environment variables
└── README.md

Workflow Triggers

Infrastructure Workflow runs when:

  • Changes to infra/terraform/**
  • Changes to infra/ansible/**
  • Manual trigger via GitHub Actions UI

Application Workflow runs when:

  • Changes to any service directory
  • Changes to docker-compose.yml
  • Manual trigger via GitHub Actions UI