Latest Kubernetes Resources

Introduction

Every time I spin up a new cluster, I go through the same ritual: write a Deployment, attach a Service, wrap it in a Namespace, bolt on RBAC, tune the HPA. After years of doing this I have a mental checklist of the resources that actually matter in production. This article is that checklist — covering the modern Kubernetes resource types (Kubernetes v1.28+), with a real Go HTTP microservice as the workload running through every example.

I will not cover every available resource (there are hundreds of CRDs out there). I cover the ones that show up in every serious cluster.


The Go Microservice

All YAML manifests in this article deploy or configure the same Go HTTP service. Here is the source:

goapp/
├── main.go
├── go.mod
└── Dockerfile

main.go:

package main

import (
	"encoding/json"
	"fmt"
	"log/slog"
	"net/http"
	"os"
	"time"
)

type HealthResponse struct {
	Status    string `json:"status"`
	Timestamp string `json:"timestamp"`
	Version   string `json:"version"`
}

type ItemResponse struct {
	ID    int    `json:"id"`
	Name  string `json:"name"`
	Price int    `json:"price"`
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	mux := http.NewServeMux()
	mux.HandleFunc("GET /healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(HealthResponse{
			Status:    "ok",
			Timestamp: time.Now().UTC().Format(time.RFC3339),
			Version:   os.Getenv("APP_VERSION"),
		})
	})

	mux.HandleFunc("GET /items", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode([]ItemResponse{
			{ID: 1, Name: "Widget A", Price: 100},
			{ID: 2, Name: "Widget B", Price: 200},
		})
	})

	mux.HandleFunc("POST /items", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusCreated)
		json.NewEncoder(w).Encode(ItemResponse{ID: 3, Name: "Widget C", Price: 300})
	})

	addr := fmt.Sprintf(":%s", port)
	logger.Info("starting server", "addr", addr, "version", os.Getenv("APP_VERSION"))
	if err := http.ListenAndServe(addr, mux); err != nil {
		logger.Error("server error", "err", err)
		os.Exit(1)
	}
}

go.mod:
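The module file is minimal because the service uses only the standard library; the module path is a placeholder, so substitute your own. Note that the method-prefixed route patterns ("GET /healthz") require Go 1.22 or newer:

```
module example.com/goapp

go 1.22
```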

Dockerfile:
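A typical multi-stage build works well here; the base images are a suggestion, not a requirement. The distroless final stage keeps the image small and runs as a non-root user:

```dockerfile
# Build stage: compile a static binary (CGO disabled so distroless/static works)
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod ./
COPY main.go ./
RUN CGO_ENABLED=0 go build -o /goapp .

# Runtime stage: minimal, non-root image
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /goapp /goapp
EXPOSE 8080
USER nonroot
ENTRYPOINT ["/goapp"]
```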

Build and push:
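The registry host and tag below are placeholders — point them at your own registry:

```shell
docker build -t registry.example.com/goapp:1.0.0 .
docker push registry.example.com/goapp:1.0.0
```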


Workload Resources

Deployment

Deployment is the standard controller for stateless services. apps/v1 has been stable since Kubernetes 1.9.

Key points I always enforce:

  • maxUnavailable: 0 — never go below desired replica count during rollout

  • revisionHistoryLimit: 5 — keeps rollback history manageable

  • readOnlyRootFilesystem: true + emptyDir for /tmp — limits the blast radius of a compromised container by making the root filesystem immutable

  • automountServiceAccountToken: false — principle of least privilege
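Putting those points together, a sketch of the manifest (image, replica count, and resource values are placeholders to tune for your workload):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goapp
  labels:
    app: goapp
spec:
  replicas: 3
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below desired replicas during rollout
      maxSurge: 1
  selector:
    matchLabels:
      app: goapp
  template:
    metadata:
      labels:
        app: goapp
    spec:
      automountServiceAccountToken: false
      containers:
        - name: goapp
          image: registry.example.com/goapp:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: APP_VERSION
              value: "1.0.0"
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 128Mi
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: tmp          # writable /tmp despite read-only root
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}
```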

StatefulSet

Use StatefulSet when pods need stable network identity or persistent storage per-replica.
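goapp itself is stateless, so treat this as an illustration: a hypothetical variant that persists per-replica data under /data. The headless Service goapp-headless is assumed to exist separately — it is what gives each pod its stable DNS name:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: goapp-stateful
spec:
  serviceName: goapp-headless   # headless Service providing per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: goapp-stateful
  template:
    metadata:
      labels:
        app: goapp-stateful
    spec:
      containers:
        - name: goapp
          image: registry.example.com/goapp:1.0.0
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:          # one PVC per replica, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```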

DaemonSet

DaemonSet runs one copy of a pod on every node (or every node matching a node selector) — perfect for log collectors, node exporters, or network agents.
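A sketch for a node-level agent (the image is a placeholder — goapp itself is not a DaemonSet workload). The toleration lets the agent run on control-plane nodes as well:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:               # also schedule on tainted control-plane nodes
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: registry.example.com/node-agent:1.0.0
```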

Job and CronJob

Jobs for batch processing; CronJobs for scheduled tasks.
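A CronJob sketch for a hypothetical nightly report binary shipped alongside goapp (the command and schedule are placeholders). The timeZone field pins the schedule to wall-clock time in one zone instead of the controller's local time:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: goapp-report
spec:
  schedule: "0 6 * * *"
  timeZone: "Europe/Berlin"     # stable field since 1.27; avoids DST ambiguity
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  jobTemplate:
    spec:
      backoffLimit: 3
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: report
              image: registry.example.com/goapp:1.0.0
              command: ["/goapp-report"]   # hypothetical batch entrypoint
```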


Autoscaling Resources

HorizontalPodAutoscaler (v2)

autoscaling/v2 has been stable since Kubernetes 1.23. It supports CPU, memory, and custom metrics simultaneously.
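An HPA sketch targeting the goapp Deployment on both CPU and memory utilization (the thresholds and replica bounds are starting points, not recommendations):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: goapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: goapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```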

VerticalPodAutoscaler (VPA)

VPA is installed separately via the VPA Helm chart or official manifests. It recommends (or automatically updates) resource requests/limits.

I use updateMode: "Off" in production and review VPA recommendations during the weekly infra review. Auto-apply (updateMode: "Auto") is useful in dev environments.
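The recommendation-only setup looks like this (assuming the VPA CRDs are already installed in the cluster); recommendations then show up in the object's status for review:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: goapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: goapp
  updatePolicy:
    updateMode: "Off"   # recommend only; never evict pods to apply changes
```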


Reliability and Policy Resources

PodDisruptionBudget

PDB prevents your service from being unavailable during voluntary disruptions (node drains, rolling upgrades).
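A sketch that keeps at least two goapp pods running through any voluntary disruption (the minAvailable value is a placeholder for your own availability target):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: goapp
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: goapp
```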

Alternative: use maxUnavailable instead of minAvailable.

PDB works with kubectl drain. Without it, a node drain could evict all goapp pods simultaneously.

PriorityClass

PriorityClass controls scheduling order when the cluster is under resource pressure.
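A sketch with a hypothetical class for user-facing services (the name and value are placeholders — only the relative ordering of values across your classes matters):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-critical
value: 100000
globalDefault: false
description: "User-facing services that should be scheduled and preempted last."
```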

Assign to pods:
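In the Deployment's pod template, reference the class by name (business-critical here is the hypothetical class name from above):

```yaml
# Inside spec.template.spec of the Deployment:
spec:
  priorityClassName: business-critical
```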


Resource Management

ResourceQuota

ResourceQuota limits total resource consumption per namespace.
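A sketch for a goapp namespace (all the hard limits are placeholders to size against your node pool):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: goapp-quota
  namespace: goapp
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```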

LimitRange

LimitRange sets default and maximum resource values for containers that do not specify them explicitly.
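A sketch with assumed defaults matching the goapp container sizing; any container created in the namespace without explicit values picks these up:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: goapp-limits
  namespace: goapp
spec:
  limits:
    - type: Container
      default:              # applied when a container sets no limits
        cpu: 500m
        memory: 128Mi
      defaultRequest:       # applied when a container sets no requests
        cpu: 100m
        memory: 64Mi
      max:                  # hard ceiling per container
        cpu: "2"
        memory: 1Gi
```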


Network Security

NetworkPolicy

NetworkPolicy is enforced by the CNI plugin (Cilium, Calico, etc.). The default behavior in most clusters is allow-all — NetworkPolicy flips that.
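A sketch of the default-deny baseline plus one allow rule for goapp. The ingress-nginx namespace label is an assumption — match whatever namespace your ingress controller actually runs in, and remember that denying egress also blocks DNS unless you allow it explicitly:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: goapp
spec:
  podSelector: {}            # selects all pods in the namespace
  policyTypes: ["Ingress", "Egress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-goapp
  namespace: goapp
spec:
  podSelector:
    matchLabels:
      app: goapp
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed ingress ns
      ports:
        - protocol: TCP
          port: 8080
```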


Configuration and ServiceAccount
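A minimal sketch of the app's configuration and identity: a ConfigMap feeding the environment variables main.go reads, and a dedicated ServiceAccount with token automounting disabled to match the Deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: goapp-config
  namespace: goapp
data:
  APP_VERSION: "1.0.0"
  PORT: "8080"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: goapp
  namespace: goapp
automountServiceAccountToken: false
```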


Applying Everything

Apply in order:
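The filenames below are assumptions — one manifest per file, cluster-scoped and namespace-scoped policy first, workload last:

```shell
kubectl apply -f namespace.yaml
kubectl apply -f priorityclass.yaml
kubectl apply -f resourcequota.yaml -f limitrange.yaml
kubectl apply -f serviceaccount.yaml -f configmap.yaml
kubectl apply -f deployment.yaml -f service.yaml
kubectl apply -f hpa.yaml -f pdb.yaml -f networkpolicy.yaml
```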

Check the result:
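A quick smoke test against the running service (the port-forward lets you hit /healthz without a Service or Ingress):

```shell
kubectl -n goapp get deploy,pods,hpa,pdb
kubectl -n goapp rollout status deployment/goapp
kubectl -n goapp port-forward deploy/goapp 8080:8080 &
curl -s localhost:8080/healthz
```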


Resource API Version Reference

Resource                   API Group               Stable Since
Deployment                 apps/v1                 K8s 1.9
StatefulSet                apps/v1                 K8s 1.9
DaemonSet                  apps/v1                 K8s 1.9
Job                        batch/v1                K8s 1.2
CronJob                    batch/v1                K8s 1.21
HorizontalPodAutoscaler    autoscaling/v2          K8s 1.23
PodDisruptionBudget        policy/v1               K8s 1.21
PriorityClass              scheduling.k8s.io/v1    K8s 1.14
ResourceQuota              v1 (core)               stable
LimitRange                 v1 (core)               stable
NetworkPolicy              networking.k8s.io/v1    K8s 1.7
VPA                        autoscaling.k8s.io/v1   separate install


What I Learned

  • Always set both PDB and HPA together — HPA scales you up under load, PDB keeps you safe during node drains.

  • LimitRange prevents unbounded containers — a single container without limits can starve the whole node.

  • NetworkPolicy default-deny-all should be your baseline in any namespace handling user data.

  • VPA in Off mode is a free performance recommendation engine — check it weekly.

  • CronJob timeZone field (stable in 1.27) eliminates the DST ambiguity bugs I used to deal with.

