OPA Architecture and Deployment Modes

📖 Introduction

Before I deployed OPA to a real cluster, I needed to understand how it actually works at runtime. The documentation makes the policy language clear, but the deployment model takes some thought. OPA can run as a server, a sidecar, an embedded library, or a CLI tool, and choosing the right mode shapes how you integrate it.

This article covers the internal architecture of OPA and the four ways to deploy it, so the Kubernetes-specific setup in the next article makes sense.


πŸ—οΈ OPA Internal Architecture

At its core, OPA is an evaluation engine:

  1. Policies (Rego files) are compiled into a query plan

  2. At query time, OPA loads input and data, then executes the plan

  3. The result is returned as JSON

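This flow can be exercised end to end from the command line; a minimal sketch, assuming a policy file `policy.rego` that defines `data.example.allow` and an `input.json` (both names illustrative):

```shell
# Compile policy.rego, load input.json as the input document,
# execute the query plan, and print the result as JSON
opa eval --data policy.rego --input input.json 'data.example.allow'
```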

The Data Store

OPA maintains an in-memory document store for the data document. You populate it by:

  • Loading JSON/YAML files at startup (opa run -d data.json)

  • Pushing data via the REST API (PUT /v1/data/...)

  • Loading a bundle (covered in Article 06)
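The REST push looks like this in practice; a sketch assuming an OPA server already listening on localhost:8181 (the path and payload are illustrative):

```shell
# Write a JSON document into the in-memory store under data.clusters
curl -X PUT http://localhost:8181/v1/data/clusters \
  -H 'Content-Type: application/json' \
  -d '{"prod": {"region": "us-east-1"}}'
```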

Policy Compilation

When OPA loads a .rego file, it compiles it immediately. Compilation validates syntax, resolves imports, and optimizes the policy into an internal representation. This means policy errors are caught at load time, not evaluation time.
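You can run that compile step on its own: `opa check` validates a policy tree without evaluating anything (the directory name is illustrative):

```shell
# Fails fast on syntax errors, unresolved imports, and other
# compile-time problems, before any query is ever run
opa check policies/
```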


🔌 Deployment Modes

OPA can be deployed in four ways. Each suits different integration patterns.

Mode 1: CLI (Local Evaluation)

Use this for:

  • Local development and testing

  • CI/CD pipeline policy gates

  • One-shot policy checks in scripts

No server needed. OPA loads policies and input, evaluates, exits.

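As a CI gate this becomes a single command; a sketch assuming a policy/ directory that defines data.ci.deny and a plan.json input (all names illustrative):

```shell
# --fail-defined makes opa exit non-zero when the query yields
# any result, so a firing deny rule fails the pipeline step
opa eval --data policy/ --input plan.json \
  --fail-defined 'data.ci.deny[msg]'
```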

Mode 2: REPL (Interactive)

Use this for:

  • Exploring and debugging policies

  • Learning Rego interactively

  • Testing edge cases before writing unit tests
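Starting the REPL is just `opa run` without the `--server` flag; the file names below are illustrative:

```shell
# Load a policy and a data file into an interactive session
opa run policy.rego data.json

# At the '> ' prompt, explore the document tree, e.g.:
#   data.example.allow
#   data
```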

Mode 3: OPA Server (REST API)

OPA starts an HTTP server. Any system can query it over REST:

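A sketch of the round trip, assuming a policy that defines data.example.allow (the file name is illustrative):

```shell
# Start the HTTP server (default listen address :8181)
opa run --server policy.rego

# From another shell: POST the input, get the decision back as JSON
curl -s -X POST http://localhost:8181/v1/data/example/allow \
  -H 'Content-Type: application/json' \
  -d '{"input": {"user": "alice"}}'
```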

This is how Kubernetes Gatekeeper and many sidecar deployments work under the hood.

Key API endpoints:

| Endpoint | Method | Purpose |
|----------|--------|---------|
| /v1/data/<path> | POST | Evaluate a query |
| /v1/policies/<id> | PUT | Upload a policy |
| /v1/data/<path> | PUT | Update data |
| /v1/query | POST | Ad-hoc query evaluation |
| /health | GET | Liveness check |

Mode 4: Go Library (Embedded)

For Go applications, OPA can be embedded directly:

The compiled query is safe to cache and share across goroutines; OPA evaluates each call independently from the cached plan.

Use this when:

  • You're writing a Go service that needs policy decisions inline

  • Network latency to an OPA server is unacceptable

  • You want zero external dependencies at runtime


🔄 Sidecar vs Centralized Deployment

When using OPA as a server (Mode 3), you have two topology choices:

| Approach | Pros | Cons |
|----------|------|------|
| Centralized | Single policy update point, easier monitoring | Single point of failure, network latency |
| Sidecar | Low latency, no external dependency | Policy sync complexity, more instances to manage |

For Kubernetes admission control, OPA runs as a Deployment with replicas: not a true sidecar, but centralized within the cluster.


🔒 OPA Server Security Considerations

By default, OPA's REST API has no authentication. For any real deployment:
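At minimum, turn on API authentication and authorization whenever the server is reachable by anything you don't fully trust; a sketch (the policy file name is illustrative):

```shell
# Require a bearer token on every request (--authentication=token)
# and evaluate data.system.authz.allow for each API call before
# serving it (--authorization=basic)
opa run --server \
  --authentication=token \
  --authorization=basic \
  authz.rego
```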

OPA also supports mTLS, OIDC token verification, and custom authentication plugins.


📑 How OPA Gets Policies and Data

Three patterns for keeping OPA's policies and data current:

1. Static Load at Startup

Policies are loaded once. Requires restart to update. Fine for development, limiting for production.

2. REST API Push

Fine-grained control, but you manage the push mechanism.
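Pushing a policy is a single PUT against the management API; a sketch assuming a server on localhost:8181 and a local example.rego file:

```shell
# Create or replace the policy stored under id 'example'
curl -X PUT http://localhost:8181/v1/policies/example \
  -H 'Content-Type: text/plain' \
  --data-binary @example.rego
```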

3. Bundle Polling

OPA periodically pulls a bundle (a compressed archive of policies and data) from a bundle server. This is the standard production pattern.
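A minimal sketch of the OPA configuration file for this pattern; the service name, URL, and bundle path are illustrative:

```yaml
# config.yaml, passed to OPA with: opa run --server --config-file config.yaml
services:
  bundle-server:
    url: https://example.com/opa
bundles:
  authz:
    service: bundle-server
    resource: bundles/authz.tar.gz
    polling:
      min_delay_seconds: 30
      max_delay_seconds: 60
```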

Bundles are covered in depth in Article 06.


📊 OPA Decision Logging

In production, you want to know what decisions OPA is making. OPA supports decision logging, in which every evaluation can be sent to a remote endpoint:
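A sketch of the decision log section of OPA's configuration; the service name and URL are illustrative:

```yaml
services:
  logger:
    url: https://logs.example.com/opa
decision_logs:
  service: logger
  reporting:
    min_delay_seconds: 5   # batch and upload at least every 5 seconds
    max_delay_seconds: 10  # and at most every 10 seconds
```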

Each log entry includes the input, the decision, the policy that matched, and timing information. This is critical for auditing and debugging policy behavior in production.


🧭 What's Next

Now that you understand how OPA runs, the next article puts it in Kubernetes using OPA Gatekeeper, the standard way to use OPA as an admission controller.

Next: Article 04 - OPA Gatekeeper on Kubernetes

