Pods and Workloads

Introduction

When I first started deploying applications to Kubernetes, I made the mistake of focusing only on pods. I would create individual pod definitions, deploy them, and wonder why Kubernetes wasn't living up to its reputation for resilience and self-healing. A pod would die, and it wouldn't come back. Scale up? I'd have to create more pod definitions manually. Rolling updates? Forget about it.

The breakthrough came when I understood that pods are ephemeral, disposable units—they're not meant to be managed individually in production. Instead, Kubernetes provides higher-level workload resources like Deployments, StatefulSets, DaemonSets, Jobs, and CronJobs that manage pods for you. These controllers implement the patterns you need for real applications: self-healing, scaling, rolling updates, and specialized workload types.

In this article, I'll share what I've learned about pods and the workload resources that manage them, drawing from experience deploying stateless web services, stateful databases, batch processing jobs, and background tasks across development and production environments.

Understanding Pods

A pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in your cluster and can contain one or more containers that share network and storage resources.

Pod Anatomy

[Diagram: pod anatomy (containers sharing a network namespace and volumes)]

Simple Pod Definition
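
Here's a minimal pod manifest; the name, labels, and image are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25        # any container image works here
      ports:
        - containerPort: 80
```

You rarely create bare pods like this in production, but the pod template inside every workload resource below uses exactly this shape.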

Multi-Container Pods

Containers in a pod share the same network namespace (IP address and port space) and can communicate via localhost.
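
For example, a sidecar can reach the main container on localhost without any Service in between; the images and the polling loop here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: probe-sidecar
      image: curlimages/curl:8.5.0
      # Both containers share one network namespace, so localhost:80 is the nginx container
      command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 10; done"]
```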

Init Containers

Init containers run before app containers and must complete successfully before the app containers start. They're useful for setup tasks, waiting for services, or populating volumes.
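
A sketch of the pattern; `db-service` and the application image are placeholders for whatever your pod actually depends on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      # Block until the database Service answers on its port, then exit 0
      command: ["sh", "-c", "until nc -z db-service 5432; do echo waiting for db; sleep 2; done"]
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative application image
```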

Pod Networking

Pods get their own IP address, and all containers in a pod share that IP.
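
You can see this from kubectl; the pod and container names here refer to the sidecar example above:

```bash
# The IP column is the pod's address, shared by every container in it
kubectl get pods -o wide

# From inside the sidecar, the nginx container is just localhost
kubectl exec web-with-sidecar -c probe-sidecar -- curl -s http://localhost:80
```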

Pod Storage
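
Containers in a pod can also share volumes. A minimal example using an emptyDir scratch volume, which lives only as long as the pod does; the writer/reader split is just for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-pod
spec:
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
      volumeMounts:
        - name: scratch
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "touch /data/out.log; tail -f /data/out.log"]
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      emptyDir: {}          # deleted when the pod is removed from the node
```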

Pod Lifecycle and States

Understanding pod lifecycle helps debug issues and design robust applications.

Pod Phase Diagram

[Diagram: pod phases (Pending, Running, Succeeded, Failed, Unknown)]

Container States

Waiting: the container is not running yet; it may still be pulling its image, waiting on a ConfigMap or Secret, or backing off after a crash (this is where CrashLoopBackOff shows up).

Running: the container has started and is executing normally.

Terminated: the container has stopped, either because the process exited successfully or because it failed or was killed; Kubernetes records the exit code and reason.

Pod Conditions
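
The standard conditions are PodScheduled, Initialized, ContainersReady, and Ready, each reported as True, False, or Unknown. A quick way to list them (pod name illustrative):

```bash
kubectl get pod web -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```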

Deployments and ReplicaSets

Deployments are the most common way to run stateless applications. They manage ReplicaSets, which in turn manage pods.

Deployment Architecture

[Diagram: Deployment managing a ReplicaSet, which manages the pods]

Complete Deployment Example
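
A reasonably complete Deployment; the replica count, image, probe path, and resource values are all illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          readinessProbe:
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
```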

Managing Deployments
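
The commands you'll use most often, against the Deployment above:

```bash
kubectl scale deployment web --replicas=5        # change replica count
kubectl set image deployment/web web=nginx:1.26  # trigger a rolling update
kubectl rollout status deployment/web            # watch the rollout progress
kubectl rollout history deployment/web           # list previous revisions
kubectl rollout undo deployment/web              # roll back to the previous revision
```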

Deployment Strategies

Rolling Update (default): replaces pods gradually so the application stays available; maxSurge and maxUnavailable control how many extra or missing pods are tolerated during the rollout (see the snippet below).

Recreate: terminates all existing pods before creating new ones; this causes downtime, but guarantees two versions never run at the same time, which matters when the application can't tolerate mixed versions.
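
The strategy lives in the Deployment spec; these particular numbers (surge one pod, never dip below the desired count) are just one conservative choice:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never go below the desired replica count
```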

Blue-Green Deployment Pattern
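
The usual shape is two Deployments, say web-blue and web-green, both fully scaled, with a Service whose selector points at one of them; editing the selector cuts traffic over in one step. A sketch of the Service side, with illustrative labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    version: blue      # change to "green" to shift all traffic to the new Deployment
  ports:
    - port: 80
      targetPort: 80
```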

Canary Deployment Pattern
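
A simple canary keeps the stable Deployment at full size and adds a tiny second Deployment whose pods carry the same app label, so the Service sends it a proportional slice of traffic. A sketch, with illustrative names and counts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1                # roughly 10% of traffic if the stable Deployment runs 9 replicas
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web             # matches the Service selector, so it receives live traffic
        track: canary
    spec:
      containers:
        - name: web
          image: nginx:1.26  # the candidate version under test
```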

StatefulSets for Stateful Applications

StatefulSets manage stateful applications that require stable network identities, persistent storage, and ordered deployment/scaling.

StatefulSet vs Deployment

[Diagram: StatefulSet vs Deployment comparison]

StatefulSet Example
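
A sketch of a three-replica StatefulSet with its headless Service; the Postgres image is illustrative, and real database settings (credentials, configuration) are omitted:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None            # headless Service: gives each pod a stable DNS name
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16       # illustrative; a real deployment also needs env/Secrets
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```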

StatefulSet Features

Stable Network Identity: pods get predictable names (db-0, db-1, db-2) and, through the headless Service, stable DNS entries such as db-0.db.default.svc.cluster.local that survive rescheduling.

Ordered Deployment and Scaling: pods are created one at a time in order (0, 1, 2, ...) and removed in reverse order; each pod must be Running and Ready before the next one starts.

Persistent Storage: volumeClaimTemplates create one PersistentVolumeClaim per pod, and the same claim is reattached to the same pod identity across restarts and rescheduling.

Managing StatefulSets
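
Management looks much like Deployments, with one important difference called out below; names follow the example above:

```bash
kubectl get statefulset db
kubectl scale statefulset db --replicas=5       # scales up in order: db-3, then db-4
kubectl rollout status statefulset/db

# Deleting the StatefulSet does NOT delete its PersistentVolumeClaims;
# remove those separately if you really want the data gone
kubectl delete statefulset db
```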

DaemonSets for Node-Level Services

DaemonSets ensure a copy of a pod runs on all (or some) nodes. They're perfect for node monitoring, log collection, and network plugins.

DaemonSet Example
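
A log-collection DaemonSet is the classic case; Fluent Bit is used here purely as an example image:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule     # also run on control-plane nodes
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2    # illustrative log collector
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log       # read the node's own log directory
```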

Node Selection for DaemonSets
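
To limit a DaemonSet to a subset of nodes, add a nodeSelector (or node affinity) to the pod template; the disktype label below is an assumed example, and tolerations control whether tainted nodes are included:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd            # only nodes carrying this label run the pod
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule     # opt in to control-plane nodes despite the taint
```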

Jobs for Batch Processing

Jobs create one or more pods and ensure they successfully complete. Perfect for batch processing, data migrations, and one-time tasks.

Simple Job
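
A one-shot migration Job; the image and command are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 3                # retry a failed pod up to three times
  template:
    spec:
      restartPolicy: Never       # Job pods must use Never or OnFailure
      containers:
        - name: migrate
          image: registry.example.com/migrator:1.0   # illustrative
          command: ["./migrate", "--up"]
```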

Parallel Jobs

Fixed Completion Count: set completions to the total number of successful pods required and parallelism to how many may run at once; the Job is done when the completion count is reached (see the example below).

Work Queue Pattern: set only parallelism and leave completions unset; the pods pull work items from a shared queue, and the Job is complete once at least one pod has succeeded and all pods have terminated.
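
A fixed-completion-count example; the worker image is a placeholder:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: image-resize
spec:
  completions: 10        # ten successful pods in total
  parallelism: 3         # at most three running at any time
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: worker
          image: registry.example.com/resizer:1.0   # illustrative
```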

Managing Jobs
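
The useful commands, using the Job names from above:

```bash
kubectl get jobs                      # COMPLETIONS column shows progress
kubectl describe job db-migrate       # events and pod failure reasons
kubectl logs job/db-migrate           # logs from one of the Job's pods
kubectl delete job db-migrate         # also removes the pods it created
```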

CronJobs for Scheduled Tasks

CronJobs create Jobs on a schedule, similar to cron on Unix systems.

CronJob Example
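
A nightly backup CronJob; apart from the schedule, the image and command are placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"              # every day at 02:00
  concurrencyPolicy: Forbid          # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: registry.example.com/backup:1.0   # illustrative
              command: ["/backup.sh"]
```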

Common Cron Schedules
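
The five fields are minute, hour, day of month, month, and day of week. A few schedules that cover most cases:

  • */5 * * * * (every five minutes)

  • 0 * * * * (hourly, on the hour)

  • 0 2 * * * (daily at 02:00)

  • 0 0 * * 0 (weekly, Sunday at midnight)

  • 0 0 1 * * (monthly, on the first at midnight)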

Managing CronJobs
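
Handy commands, including triggering a run manually outside the schedule:

```bash
kubectl get cronjobs
# Run the job template right now, without waiting for the schedule
kubectl create job --from=cronjob/nightly-backup nightly-backup-manual
# Pause and resume scheduling
kubectl patch cronjob nightly-backup -p '{"spec":{"suspend":true}}'
kubectl patch cronjob nightly-backup -p '{"spec":{"suspend":false}}'
```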

Health Checks and Probes

Kubernetes uses probes to determine container health and readiness.

Probe Types

[Diagram: liveness, readiness, and startup probe types]

HTTP Probes
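
A container fragment with HTTP liveness and readiness probes; the endpoints are whatever your application exposes (these paths are assumptions):

```yaml
containers:
  - name: web
    image: nginx:1.25
    livenessProbe:
      httpGet:
        path: /healthz          # restart the container if this starts failing
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready            # remove the pod from Service endpoints while failing
        port: 80
      periodSeconds: 5
```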

TCP Probes
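
TCP probes only check that the port accepts a connection, which suits databases and other non-HTTP services; the port here is illustrative:

```yaml
livenessProbe:
  tcpSocket:
    port: 5432
  initialDelaySeconds: 15
  periodSeconds: 20
```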

Exec Probes
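
Exec probes run a command inside the container and treat exit code 0 as healthy; the pg_isready check is just an example:

```yaml
livenessProbe:
  exec:
    command: ["sh", "-c", "pg_isready -U postgres"]   # illustrative check command
  initialDelaySeconds: 10
  periodSeconds: 15
```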

gRPC Probes (Kubernetes 1.24+)
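
For gRPC services that implement the standard health checking protocol, the kubelet can probe them natively; the port is illustrative:

```yaml
livenessProbe:
  grpc:
    port: 9090          # the server must expose the gRPC health checking service
  initialDelaySeconds: 10
```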

Resource Requests and Limits

Resource management is crucial for stable clusters.

Resource Types
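
Requests are what the scheduler reserves for the container; limits are the ceiling enforced at runtime (CPU gets throttled, memory overuse gets the container OOM-killed). A typical container fragment, with illustrative values:

```yaml
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
```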

Resource Units

CPU:

  • 1 = 1 vCPU/Core

  • 1000m = 1000 millicores = 1 CPU

  • 0.5 = 500m = 0.5 CPU

Memory:

  • 128Mi = 128 mebibytes (1Mi = 1024 KiB)

  • 128M = 128 megabytes (1M = 1000 KB)

  • 1Gi = 1 gibibyte = 1024 Mi

  • 1G = 1 gigabyte = 1000 M

Quality of Service Classes

[Diagram: QoS classes (Guaranteed, Burstable, BestEffort)]

Guaranteed QoS: every container sets both CPU and memory requests and limits, and requests equal limits; these pods are evicted last under node memory pressure.

Burstable QoS: at least one container sets a request or limit, but the pod doesn't meet the Guaranteed criteria; it can use spare capacity, but is evicted before Guaranteed pods.

BestEffort QoS: no container sets any requests or limits; these pods are the first to be evicted and are best reserved for genuinely disposable work.
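
Kubernetes records the assigned class on the pod status, so you can check which class you actually got (pod name illustrative):

```bash
kubectl get pod web -o jsonpath='{.status.qosClass}'
```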

LimitRange for Namespace Defaults
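
A LimitRange fills in requests and limits for containers that don't declare their own; the namespace and values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: dev
spec:
  limits:
    - type: Container
      defaultRequest:        # applied as the request when a container sets none
        cpu: 100m
        memory: 128Mi
      default:               # applied as the limit when a container sets none
        cpu: 500m
        memory: 256Mi
```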

ResourceQuota for Namespace Limits
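
A ResourceQuota caps the total resources a namespace can consume; the numbers here are just an example:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```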

Pod Disruption Budgets

Pod Disruption Budgets ensure minimum availability during voluntary disruptions (node maintenance, updates, etc.).

PodDisruptionBudget Example
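
A PDB that keeps at least two pods of the web Deployment running during voluntary disruptions; the selector matches the Deployment example earlier:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
```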

PDB with Percentage
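
Both minAvailable and maxUnavailable also accept percentages, which scale automatically as the Deployment grows:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  maxUnavailable: "25%"      # never evict more than a quarter of the pods at once
  selector:
    matchLabels:
      app: web
```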

Checking PDB Status
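
To see what a PDB will currently allow:

```bash
kubectl get pdb                     # ALLOWED DISRUPTIONS shows how many pods may be evicted now
kubectl describe pdb web-pdb        # current healthy count versus the required minimum
```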

What I Learned

Understanding pods and workload resources transformed how I deploy and manage applications in Kubernetes:

Pods Are Ephemeral: I learned early that pods are disposable. Don't treat them like VMs or long-lived servers. Design applications to handle pod restarts gracefully, and use higher-level controllers to manage them.

Choose the Right Workload Type: Deployments for stateless apps, StatefulSets for databases, DaemonSets for node services, Jobs for batch work, and CronJobs for scheduled tasks. Each has specific behaviors optimized for its use case.

Health Checks Are Critical: Implementing proper liveness, readiness, and startup probes dramatically improved application reliability. The time invested in creating accurate health check endpoints pays off immediately.

Resource Requests Matter: Setting appropriate resource requests and limits prevents resource contention and ensures predictable scheduling. I learned to profile applications under load to determine realistic values.

StatefulSets Need Care: StatefulSets have different behaviors than Deployments—ordered scaling, persistent storage, stable network identities. Understanding these differences prevents confusion when things don't work as expected.

PDBs Prevent Outages: Pod Disruption Budgets saved us from accidental outages during node maintenance. Always define PDBs for critical services—they're a simple way to codify availability requirements.

Start Simple, Then Optimize: Begin with basic Deployments and add complexity (affinity rules, init containers, sidecars) only when needed. Premature optimization makes debugging harder.

These workload primitives are the foundation of running applications on Kubernetes. Master them, and you'll be able to deploy virtually any workload type reliably and efficiently. In the next articles, we'll explore how to expose these applications through Services and Ingress, configure them with ConfigMaps and Secrets, and add persistent storage.
