
Automated Deployment with Bash


You’ve built a cool new application. It works perfectly on your laptop. Now, how do you get it to run on an actual server where real users can access it? You could manually SSH into the server, install Docker, copy files, configure Nginx, and pray it all goes as planned. Or… you could just automate everything.

Because that’s exactly what we’re gonna do, with good ol’ Bash!

Our Bash script handles ten stages, each solving a real problem you’d face in a manual deployment:

(See full script here)

Stage 1: Collecting Info

The script needs to know what you want to deploy and where. It asks for your Git repository URL, an authentication token, which branch to use, SSH credentials for your server, and what port your application runs on. Nothing crazy, just simple prompts with sensible defaults where possible.

read -p "Git Repository URL: " GIT_URL
read -sp "Personal Access Token: " PAT && echo
read -p "Branch (default: main): " BRANCH
BRANCH=${BRANCH:-main}
read -p "SSH Username: " SSH_USER
read -p "Server IP: " SERVER_IP
read -p "SSH Key Path (default: ~/.ssh/id_rsa): " SSH_KEY
SSH_KEY=${SSH_KEY:-~/.ssh/id_rsa}
read -p "Application Port: " APP_PORT

Stage 2: Getting Your Code

The script clones your repository using the personal access token you provided. If the directory already exists from a previous run, it just pulls the latest changes from your specified branch instead.

log "Cloning repository..."
if [ -d "$WORKSPACE" ]; then
    log "Repository exists, pulling latest changes..."
    cd "$WORKSPACE" && git pull origin "$BRANCH"
else
    # Strip the https://github.com/ prefix and trailing .git to get user/repo
    REPO_PATH=$(echo "$GIT_URL" | sed 's|https://github.com/||' | sed 's|\.git$||')
    # Rebuild the URL with the token embedded so the clone can authenticate
    git clone -b "$BRANCH" "https://${PAT}@github.com/${REPO_PATH}.git" "$WORKSPACE"
fi
cd "$WORKSPACE"
log "✓ Repository ready"

Stage 3: File Checks

The script looks for either a Dockerfile or a docker-compose.yml file, and bails out immediately if it finds neither.

[ -f "Dockerfile" ] || [ -f "docker-compose.yml" ] || error "No Dockerfile or docker-compose.yml found"
if [ -f "docker-compose.yml" ]; then
    log "✓ Found docker-compose.yml"
else
    log "✓ Found Dockerfile"
fi

Stage 4: SSH Test

The script tests the SSH connection first with a simple echo command. If that fails, we know immediately. No point in trying to push files to a server we can’t even reach.

log "Testing SSH connection..."
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no "$SSH_USER@$SERVER_IP" "echo Connected" || error "SSH failed"
log "✓ SSH connection verified"

Stage 5: Setting Up Server Dependencies

Now we’re on the remote machine, and we need to make sure it has everything required. The script updates package lists, installs Docker and Docker Compose if they’re missing, and sets up Nginx. It also adds your user to the docker group so you don’t need sudo for every Docker command.

log "Installing Docker and Nginx on remote server..."
ssh -i "$SSH_KEY" "$SSH_USER@$SERVER_IP" << 'EOF'
sudo apt update -y
if ! command -v docker &> /dev/null; then
    echo "Installing Docker..."
    curl -fsSL https://get.docker.com | sudo sh
    sudo usermod -aG docker $USER
fi
if ! command -v docker-compose &> /dev/null; then
    echo "Installing Docker Compose..."
    sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
fi
sudo apt install -y nginx
sudo systemctl enable docker nginx
sudo systemctl start docker nginx
EOF
log "✓ Dependencies installed"

Stage 6: Transferring Files to Remote Server

Let’s get your application files onto the server. We use rsync instead of scp because rsync is smarter: it only transfers files that have changed, and it handles directories better. The .git directory is excluded because we don’t need version control history cluttering up the production environment.

log "Transferring files to remote server..."
rsync -avz --exclude='.git' -e "ssh -i $SSH_KEY" ./ "$SSH_USER@$SERVER_IP:/tmp/$APP_NAME/" >> "$LOG_FILE" 2>&1
log "✓ Files transferred"

Stage 7: Deployment!!!

This is where your application actually gets built and run. The script first stops any existing containers with the same name to avoid conflicts. Then it checks whether you’re using docker-compose or a standalone Dockerfile and handles each case appropriately.

log "Deploying application..."
ssh -i "$SSH_KEY" "$SSH_USER@$SERVER_IP" << EOF
cd /tmp/$APP_NAME
echo "Stopping existing containers..."
docker-compose down 2>/dev/null || true
docker stop $APP_NAME 2>/dev/null || true
docker rm $APP_NAME 2>/dev/null || true

if [ -f docker-compose.yml ]; then
    echo "Building and starting with docker-compose..."
    docker-compose up -d --build
else
    echo "Building and starting with docker..."
    docker build -t $APP_NAME .
    docker run -d --name $APP_NAME -p $APP_PORT:$APP_PORT $APP_NAME
fi

echo "Waiting for container to start..."
sleep 5
docker ps | grep -q $APP_NAME || exit 1
echo "Container is running"
EOF
log "✓ Application deployed"

Stage 8: Making the App Accessible

Your application is running, but it’s only listening on localhost at whatever port you specified. Nginx acts as the front door, accepting requests on port 80 and forwarding them to your application. The script generates an Nginx configuration file, moves it into sites-available, links it into sites-enabled, removes the default site, validates the config, then reloads Nginx.

log "Configuring Nginx reverse proxy..."
ssh -i "$SSH_KEY" "$SSH_USER@$SERVER_IP" "cat > /tmp/nginx-$APP_NAME" << EOF
server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://localhost:$APP_PORT;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF

ssh -i "$SSH_KEY" "$SSH_USER@$SERVER_IP" << EOF
sudo mv /tmp/nginx-$APP_NAME /etc/nginx/sites-available/$APP_NAME
sudo ln -sf /etc/nginx/sites-available/$APP_NAME /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
sudo nginx -t && sudo systemctl reload nginx
EOF
log "✓ Nginx configured"

Stage 9: Logs

Throughout this process, every action gets logged with timestamps. If something goes wrong, you can look at the log file and see exactly what happened. The script uses a trap function to catch unexpected errors and log them too. Each stage uses meaningful exit codes, so you know not just that something failed, but where it failed.

set -e

LOG_FILE="deploy_$(date +%Y%m%d_%H%M%S).log"

log() { 
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

error() { 
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ERROR: $1" | tee -a "$LOG_FILE" >&2
    exit 1
}
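
The trap mentioned above isn’t in this snippet, but a minimal version, reusing the error function and assuming the full script does something similar, would be:

# Catch any command that fails (the same condition set -e aborts on)
# and log where it happened before exiting
trap 'error "Unexpected failure at line $LINENO (exit code $?)"' ERR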

Stage 10: The Cleanup™

The script includes a cleanup mode (activated with the --cleanup flag) that tears everything down gracefully. It stops containers, removes them, deletes volumes, removes the Nginx config, and cleans up the transferred files, letting you redeploy from a clean slate when needed.

cleanup() {
    log "=== Cleanup Mode ==="
    read -p "SSH Username: " SSH_USER
    read -p "Server IP: " SERVER_IP
    read -p "SSH Key Path (default: ~/.ssh/id_rsa): " SSH_KEY
    SSH_KEY=${SSH_KEY:-~/.ssh/id_rsa}
    read -p "App Name: " APP_NAME
    
    log "Cleaning up $APP_NAME on $SERVER_IP..."
    
    ssh -i "$SSH_KEY" "$SSH_USER@$SERVER_IP" << EOF
cd /tmp/$APP_NAME 2>/dev/null || true
docker-compose down --volumes --remove-orphans 2>/dev/null || true
docker stop $APP_NAME 2>/dev/null || true
docker rm $APP_NAME 2>/dev/null || true
sudo rm -f /etc/nginx/sites-available/$APP_NAME
sudo rm -f /etc/nginx/sites-enabled/$APP_NAME
sudo nginx -t && sudo systemctl reload nginx 2>/dev/null || true
rm -rf /tmp/$APP_NAME
EOF
    
    log "✅ Cleanup complete!"
    exit 0
}

if [ "${1:-}" = "--cleanup" ]; then
    cleanup
fi
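
Running it is then a one-liner either way (the filename deploy.sh is my assumption; use whatever you saved the script as):

# Full deployment: prompts for everything, then runs all ten stages
./deploy.sh

# Tear the whole deployment down again
./deploy.sh --cleanup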

Done!

Everything in this script mirrors actual production workflows. Companies don’t manually SSH into servers and run commands; they have CI/CD pipelines that do exactly what this script does, just with a little more snazz.
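
If you wanted to drop this script into a pipeline, the main change is that nothing can sit at a prompt waiting for a human. One rough sketch (assuming the script is saved as deploy.sh and your CI exposes the token as GITHUB_TOKEN; the repo, user, IP and port below are placeholders) is to feed the answers on stdin in the order Stage 1 asks for them:

# Hypothetical CI step: answer the prompts non-interactively.
# Blank lines accept the defaults (branch "main", key path ~/.ssh/id_rsa).
./deploy.sh <<ANSWERS
https://github.com/you/your-app.git
${GITHUB_TOKEN}

deploy
203.0.113.10

3000
ANSWERS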

Welcome to DevOps, mate.
