Tech With Htunn

Understanding Containerization

As a developer who's been in the trenches for over a decade, I've lived through the evolution from traditional deployments to virtual machines and now to containers. Let me tell you, discovering containerization was like finding the missing puzzle piece in my cloud-native development workflow. In this post, I'll share my personal journey with containerization and how it transformed the way I build and deploy applications using Docker and GitLab.

Why I Embraced Containerization (and You Should Too)

When I first started building cloud applications, I constantly battled the infamous "works on my machine" syndrome. My team would spend hours troubleshooting environment mismatches between development and production. After one particularly painful release that required three overnight debugging sessions, I knew there had to be a better way.

That's when I discovered containerization. At its core, containerization lets you package your application and everything it needs—dependencies, libraries, and configuration—into a single, portable unit called a container. Think of it as a lightweight, standalone "shipping box" for your code that runs exactly the same way everywhere.
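
In practice, the whole idea boils down to two commands. A minimal sketch, assuming a project with a Dockerfile in the current directory (the image name my-first-app is just an example):

```shell
# Package the application and its dependencies into an image
docker build -t my-first-app:1.0 .

# Run that same image anywhere Docker is installed
docker run -d -p 8080:8080 my-first-app:1.0
```

The image built on your laptop is byte-for-byte the image that runs in production, which is exactly what kills the "works on my machine" problem.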

The Anatomy of My Container Setup

After years of refining my approach, here's what I've found works best for cloud-native applications:

1. The Dockerfile: My Application Blueprint

Every container journey begins with a Dockerfile—essentially the recipe for building your container image. Here's one I used recently for a Node.js microservice:

# Start with the official Node image (I prefer specific versions over 'latest')
FROM node:18.16.0-alpine

# Working directory inside the container
WORKDIR /app

# Install dependencies first (leverages Docker cache)
COPY package*.json ./
RUN npm ci --only=production

# Copy application code
COPY src/ ./src/
COPY config/ ./config/

# Create a non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# Expose the application port
EXPOSE 8080

# Health check ensures the container is truly ready
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -q --spider http://localhost:8080/health || exit 1

# Set environment variables
ENV NODE_ENV=production

# Command to start the app
CMD ["node", "src/server.js"]

I've learned that a good Dockerfile follows these principles:

  • Start with a specific, lightweight base image

  • Use multi-stage builds for complex applications

  • Run containers as non-root users

  • Include health checks

  • Optimize for the Docker cache
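
To illustrate the multi-stage point, here's a hedged sketch of what that looks like when the service needs a compile step (the "build" script and the dist/ output path are assumptions, not taken from the project above):

```dockerfile
# Stage 1: build with dev dependencies available
FROM node:18.16.0-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY src/ ./src/
RUN npm run build   # assumes a "build" script in package.json

# Stage 2: ship only what's needed at runtime
FROM node:18.16.0-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```

Everything in the builder stage (compilers, dev dependencies, source) is discarded; only the second stage becomes the final image.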

2. Docker Compose: My Local Development Environment

While Kubernetes handles my production environment, I use Docker Compose for local development. It lets me run my entire application stack with a single command:

version: '3.8'

services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=postgres
      - DB_USER=apiuser
      - DB_PASSWORD=securepassword # local dev only; never commit real secrets
      - DB_NAME=apidb
    depends_on:
      - postgres
    volumes:
      - ./api/src:/app/src:ro
      
  postgres:
    image: postgres:14-alpine
    environment:
      - POSTGRES_USER=apiuser
      - POSTGRES_PASSWORD=securepassword
      - POSTGRES_DB=apidb
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

volumes:
  postgres-data:
  redis-data:

This setup gives me a consistent development environment that I can start with docker-compose up and tear down with docker-compose down without worrying about cluttering my machine.
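
For reference, the handful of Compose commands I reach for daily (all standard Docker Compose flags):

```shell
docker-compose up -d        # start the whole stack in the background
docker-compose logs -f api  # follow logs for one service
docker-compose down         # stop and remove containers (named volumes survive)
docker-compose down -v      # ...or wipe the named volumes too
```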

How Containerization Changed My Cloud-Native Development

Moving to containerization fundamentally changed how I approach application development:

  1. I stopped worrying about dependencies: Once my application works in a container, it works everywhere—my laptop, my colleague's machine, staging, or production.

  2. I build more modular applications: Containers encouraged me to adopt microservices architecture. Each service lives in its own container, making it easier to develop, test, and scale independently.

  3. I deploy more confidently: When I know that the exact container I tested is what's going to production, I can deploy with much greater confidence.

  4. I scale effortlessly: Need more capacity? Just spin up more container instances. The cloud-native approach means my application can scale horizontally without modification.
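
Even locally, Compose makes point 4 easy to try, assuming the service doesn't publish a fixed host port (a fixed port like 8080:8080 can't be shared across replicas):

```shell
# Run three instances of the api service on the same network
docker-compose up -d --scale api=3
```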

My GitLab CI/CD Pipeline for Container Deployment

The real power of containerization became apparent when I integrated it with GitLab's CI/CD pipeline. Here's the .gitlab-ci.yml file I use for a typical microservice:

stages:
  - test
  - build
  - scan
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest

# Run tests before building the image
test:
  stage: test
  image: node:18.16.0-alpine
  script:
    - npm ci
    - npm run lint
    - npm test
  cache:
    paths:
      - node_modules/

# Build the image and push to GitLab Container Registry
build:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
  only:
    - branches

# Scan the container image for vulnerabilities
scan:
  stage: scan
  image: 
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    # --exit-code 1 makes the job fail when findings exist; without it, trivy
    # reports the vulnerabilities but always exits 0 and the stage passes
    - trivy image --exit-code 1 --severity HIGH,CRITICAL $CONTAINER_TEST_IMAGE
  only:
    - branches

# For main branch, tag as latest and deploy
deploy:
  stage: deploy
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
    # Deploy using kubectl (assuming k8s integration is set up).
    # Deploy an immutable, commit-specific tag: pointing the deployment at a
    # mutable tag like :latest means kubectl sees no change and skips the rollout
    - docker tag $CONTAINER_TEST_IMAGE $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - kubectl set image deployment/my-app container=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  only:
    - main

This pipeline:

  1. Runs my tests to ensure code quality

  2. Builds a Docker image and uploads it to GitLab Container Registry

  3. Scans the image for security vulnerabilities

  4. For the main branch, tags the image as "latest" and deploys it

GitLab Container Registry: My Private Docker Hub

One of the biggest upgrades to my workflow was switching from Docker Hub to GitLab Container Registry. Having my container images stored alongside my code in the same GitLab project provides:

  1. Integrated security: GitLab's container scanning feature can automatically check my images for vulnerabilities

  2. Simplified access control: Team members with access to the repository can pull images

  3. Built-in versioning: Every commit can produce a uniquely tagged image

  4. Reduced context switching: Everything is in one place

To push to the GitLab Container Registry, I rely on GitLab's predefined CI/CD variables ($CI_REGISTRY, $CI_REGISTRY_USER, $CI_REGISTRY_PASSWORD) and use commands like:

# Login to the registry (piping the password avoids exposing it in process listings)
echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"

# Build and tag the image with the commit SHA
docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .

# Push the image to the registry
docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

Real-World Lessons from My Containerization Journey

After containerizing dozens of applications, here are the key lessons I've learned:

1. Start Small and Iterate

My first attempts to containerize a monolithic application were frustrating. Now I recommend starting with a simple service, containerizing it successfully, and then moving on to more complex parts of your application.

2. Optimize Your Container Images

My early containers were huge—over 1GB for a simple Node.js application! I've since learned to:

  • Use smaller base images (Alpine-based images are my go-to)

  • Implement multi-stage builds to separate build and runtime dependencies

  • Clean up unnecessary files in the same Dockerfile layer they're created in
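
One cheap win worth adding to that list: a .dockerignore file, so the build context (and the cache) isn't polluted by local artifacts. A typical sketch for a Node.js project:

```
node_modules
npm-debug.log
.git
.env
dist
coverage
```

Without it, `COPY` instructions can drag hundreds of megabytes of node_modules and Git history into every build.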

3. Make Containers Stateless

One painful lesson was storing data inside containers. Don't do this! Make your containers stateless and use mounted volumes or external services for persistence.
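
Concretely, persistence belongs in named volumes rather than the container's writable layer, the same pattern the Compose file above uses for Postgres (the password here is a throwaway dev value):

```shell
# Data survives even if the container is removed and recreated
docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=devonly \
  -v pgdata:/var/lib/postgresql/data postgres:14-alpine
```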

4. Monitor Your Containers

Containers need proper monitoring. I use Prometheus and Grafana to track container metrics and set up alerts for any issues.
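
A full Prometheus and Grafana setup is beyond a quick snippet, but for a fast local look at resource usage Docker's built-in command goes a long way:

```shell
# Live CPU, memory, network, and I/O per container
docker stats

# One-shot snapshot, useful in scripts
docker stats --no-stream
```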

5. Security Is Non-Negotiable

I always scan my container images for vulnerabilities, run containers as non-root users, and implement proper access controls for my container registry.
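
The same Trivy scan from the pipeline works locally, before anything is even pushed (the image name is an example):

```shell
# Exit non-zero if HIGH or CRITICAL issues are found
trivy image --exit-code 1 --severity HIGH,CRITICAL my-app:latest
```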

Are Containers Always the Answer?

While I'm a huge container advocate, I've learned they're not always the right tool:

  • Legacy applications can be challenging to containerize without significant refactoring

  • Applications with specialized hardware requirements might be better served by VMs

  • Ultra-high performance workloads might need bare metal environments

But for most modern cloud-native applications, containers have become my default choice.

Getting Started on Your Own Containerization Journey

If you're just starting with containers, here's my recommended approach:

  1. Begin with Docker Desktop: It gives you a complete development environment

  2. Containerize a simple application: Create a basic Dockerfile and get it working

  3. Use Docker Compose: Create a multi-container environment for your application

  4. Set up a CI/CD pipeline: Automate your container builds and deployments with GitLab

  5. Learn orchestration: Once comfortable, look into Kubernetes for production deployments
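
When you get to step 5, the conceptual jump is smaller than it looks: a Kubernetes Deployment is essentially a declarative request for N replicas of a container image. A minimal sketch (the names, labels, and registry path are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: container
          image: registry.gitlab.com/my-group/my-app:latest
          ports:
            - containerPort: 8080
```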

The containerization journey transformed how I build and deploy applications. While there was a learning curve, the benefits of consistency, portability, and scalability have been well worth the investment.

In my next post, I'll dive deeper into how I secure my containerized applications using GitLab's container scanning and other security best practices. Stay tuned!


