# Ports and Adapters (Hexagonal)

## The Insight That Changed How I Thought About Interfaces

When I was working on the chatbot service in my POS system, I had a problem: the same core logic needed to work in three different contexts. In production, it received messages via HTTP from the API. In background processing, it received work from a Redis queue. In tests, it was called directly from Python functions. Three different entry points, same domain logic.

Without hexagonal architecture, I had three versions of the same wiring code, and each new "way to trigger the chatbot" required touching the business logic. With hexagonal architecture, the core logic became a library that any driver could call through a defined port. Adding a new entry point meant writing a new adapter, not touching the core.

The name "hexagonal" is a diagram choice — Alistair Cockburn drew the core as a hexagon to emphasise that all sides are equal. The real concept is **ports and adapters**: the core defines ports (interfaces), and adapters plug into those ports from the outside.

## Table of Contents

* [The Core Idea](#the-core-idea)
* [Ports vs Adapters](#ports-vs-adapters)
* [Driver Ports and Driven Ports](#driver-ports-and-driven-ports)
* [Practical Example: Chatbot Service](#practical-example-chatbot-service)
* [Multiple Adapters for the Same Port](#multiple-adapters-for-the-same-port)
* [Testing with Test Adapters](#testing-with-test-adapters)
* [When to Choose Hexagonal over Onion](#when-to-choose-hexagonal-over-onion)
* [Lessons Learned](#lessons-learned)

***

## The Core Idea

The hexagonal model divides the application into three zones:

```mermaid
graph LR
    subgraph Left["Left Side: Driver Adapters<br/>(How the app is called)"]
        HTTP[HTTP / REST Adapter]
        QUEUE[Queue Consumer Adapter]
        CLI[CLI Adapter]
        TEST[Test Adapter]
    end

    subgraph Core["Core (Hexagon)"]
        PORT_IN[Driver Port<br/>Application Interface]
        LOGIC[Business Logic<br/>Use Cases<br/>Domain]
        PORT_OUT[Driven Port<br/>Repository Interface<br/>External Service Interface]
    end

    subgraph Right["Right Side: Driven Adapters<br/>(What the app calls)"]
        DB[Database Adapter]
        LLM[LLM Provider Adapter]
        NOTIF[Notification Adapter]
    end

    HTTP --> PORT_IN
    QUEUE --> PORT_IN
    CLI --> PORT_IN
    TEST --> PORT_IN

    PORT_IN --> LOGIC
    LOGIC --> PORT_OUT

    PORT_OUT --> DB
    PORT_OUT --> LLM
    PORT_OUT --> NOTIF

    style Core fill:#ffeaa7
```

* **Left side (Driver Adapters):** How the world calls the application (HTTP, CLI, queue messages)
* **Core:** The application's business logic, completely isolated from delivery mechanisms
* **Right side (Driven Adapters):** What the application calls (databases, external APIs, file systems)

***

## Ports vs Adapters

| Concept            | Definition                                    | Example                                                             |
| ------------------ | --------------------------------------------- | ------------------------------------------------------------------- |
| **Port**           | An interface defined by the core              | `ChatbotPort`, `ConversationRepository`                             |
| **Driver Adapter** | Translates incoming calls to port invocations | `FastAPIAdapter` that calls `ChatbotPort.chat()`                    |
| **Driven Adapter** | Implements a port that the core calls         | `MongoConversationRepository` implementing `ConversationRepository` |

Ports are owned by the core. Adapters are owned by the infrastructure/UI rings. Adapters depend on ports; ports never depend on adapters.
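
That dependency rule shows up directly in the imports: the adapter imports the port, never the other way around. Here is a toy sketch of the shape (the `NotePort` names are invented for illustration; they are not part of the chatbot service):

```python
from abc import ABC, abstractmethod

# Port: owned by the core. It knows nothing about files, databases, or HTTP.
class NotePort(ABC):
    @abstractmethod
    def save(self, note: str) -> None: ...

# Adapter: owned by infrastructure. It imports and implements the port.
class InMemoryNoteAdapter(NotePort):
    def __init__(self):
        self.notes: list[str] = []

    def save(self, note: str) -> None:
        self.notes.append(note)

# The core only ever sees the port type, so any adapter can be substituted.
def record(port: NotePort, note: str) -> None:
    port.save(note)

adapter = InMemoryNoteAdapter()
record(adapter, "hello")
print(adapter.notes)  # ['hello']
```

The core's type hints reference `NotePort` only; swapping `InMemoryNoteAdapter` for anything else requires no change to `record`.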

***

## Driver Ports and Driven Ports

### Driver Ports (Inbound)

Driver ports are the interfaces through which external actors call the application. They are defined in the core and implemented by the application services/use cases.

```python
# core/ports/inbound.py
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ChatRequest:
    tenant_id: str
    session_id: str
    message: str

@dataclass
class ChatResponse:
    reply: str
    session_id: str
    context_used: bool

class ChatbotPort(ABC):
    @abstractmethod
    def chat(self, request: ChatRequest) -> ChatResponse: ...

    @abstractmethod
    def clear_session(self, tenant_id: str, session_id: str) -> None: ...
```

### Driven Ports (Outbound)

Driven ports are the interfaces the core uses when it needs something from the outside world. The core defines what it needs; adapters implement it.

```python
# core/ports/outbound.py
from abc import ABC, abstractmethod

class ConversationRepository(ABC):
    @abstractmethod
    def load_history(self, tenant_id: str, session_id: str) -> list[dict]: ...

    @abstractmethod
    def save_message(self, tenant_id: str, session_id: str, role: str, content: str) -> None: ...

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, messages: list[dict], system_prompt: str) -> str: ...
```

***

## Practical Example: Chatbot Service

The chatbot service in my POS system handles natural language queries about orders, menu items, and restaurant status. The core logic is the same regardless of whether the query comes from HTTP or a queue.

```python
# core/services/chatbot_service.py — implements the driver port
from ..ports.inbound import ChatbotPort, ChatRequest, ChatResponse
from ..ports.outbound import ConversationRepository, LLMProvider

SYSTEM_PROMPT = """
You are a helpful assistant for a restaurant management system.
Answer questions about orders, menu items, and inventory based on the context provided.
Be concise and factual.
"""

class ChatbotService(ChatbotPort):
    def __init__(
        self,
        conversation_repo: ConversationRepository,
        llm: LLMProvider
    ):
        self._repo = conversation_repo
        self._llm = llm

    def chat(self, request: ChatRequest) -> ChatResponse:
        history = self._repo.load_history(
            request.tenant_id,
            request.session_id
        )

        messages = history + [{"role": "user", "content": request.message}]

        reply = self._llm.complete(messages, SYSTEM_PROMPT)

        self._repo.save_message(request.tenant_id, request.session_id, "user", request.message)
        self._repo.save_message(request.tenant_id, request.session_id, "assistant", reply)

        return ChatResponse(
            reply=reply,
            session_id=request.session_id,
            context_used=len(history) > 0
        )

    def clear_session(self, tenant_id: str, session_id: str) -> None:
        # clears session context...
        pass
```

***

## Multiple Adapters for the Same Port

The real power: I can swap adapters without touching the core.

```python
# adapters/driven/llm/openai_adapter.py — production
from openai import OpenAI
from ...core.ports.outbound import LLMProvider

class OpenAIAdapter(LLMProvider):
    def __init__(self, api_key: str, model: str = "gpt-4o-mini"):
        self._client = OpenAI(api_key=api_key)
        self._model = model

    def complete(self, messages: list[dict], system_prompt: str) -> str:
        response = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "system", "content": system_prompt}] + messages
        )
        return response.choices[0].message.content
```

```python
# adapters/driven/llm/local_llm_adapter.py — local dev / offline
import ollama
from ...core.ports.outbound import LLMProvider

class LocalLLMAdapter(LLMProvider):
    def __init__(self, model: str = "tinyllama"):
        self._model = model

    def complete(self, messages: list[dict], system_prompt: str) -> str:
        response = ollama.chat(
            model=self._model,
            messages=[{"role": "system", "content": system_prompt}] + messages
        )
        return response["message"]["content"]
```

In development, I use `LocalLLMAdapter` with Ollama. In production, I use `OpenAIAdapter`. The `ChatbotService` core does not know or care which one is wired in.

```python
# Driver adapters — multiple entry points to the same core
# adapters/driver/http_adapter.py
from fastapi import APIRouter
from ...core.services.chatbot_service import ChatbotService
from ...core.ports.inbound import ChatRequest

router = APIRouter()

def get_chatbot_service() -> ChatbotService:
    # Wiring the service with the right adapters
    from ..driven.mongo_repo import MongoConversationRepository
    from ..driven.llm.openai_adapter import OpenAIAdapter
    from config import settings
    return ChatbotService(
        conversation_repo=MongoConversationRepository(settings.MONGO_URL),
        llm=OpenAIAdapter(settings.OPENAI_API_KEY)
    )

@router.post("/chat")
def chat(body: dict):
    service = get_chatbot_service()
    response = service.chat(ChatRequest(
        tenant_id=body["tenant_id"],
        session_id=body["session_id"],
        message=body["message"]
    ))
    return {"reply": response.reply}
```

```python
# adapters/driver/queue_consumer.py — same core, different entry point
import redis
import json
from ...core.services.chatbot_service import ChatbotService
from ...core.ports.inbound import ChatRequest

class QueueChatAdapter:
    def __init__(self, service: ChatbotService, redis_url: str):
        self._service = service
        self._redis = redis.from_url(redis_url)

    def run(self):
        pubsub = self._redis.pubsub()
        pubsub.subscribe("chatbot.requests")
        for message in pubsub.listen():
            if message["type"] == "message":
                data = json.loads(message["data"])
                request = ChatRequest(**data)
                response = self._service.chat(request)
                # Publish response back
                self._redis.publish(
                    f"chatbot.response.{data['session_id']}",
                    json.dumps({"reply": response.reply})
                )
```

Same `ChatbotService`, two completely different drivers — and not a single line of the core changed to support either one.
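
A CLI adapter (the third driver in the diagram) would follow the same shape. This is a self-contained sketch, not code from the actual project — the stub classes stand in for the real `ChatbotService` and `ChatRequest` so it runs on its own:

```python
# Hypothetical CLI driver adapter, sketched with inline stubs.
import argparse
from dataclasses import dataclass

@dataclass
class ChatRequest:
    tenant_id: str
    session_id: str
    message: str

@dataclass
class ChatResponse:
    reply: str

class StubChatbotService:
    """Stands in for the real core service in this sketch."""
    def chat(self, request: ChatRequest) -> ChatResponse:
        return ChatResponse(reply=f"echo: {request.message}")

def run_cli(service, argv: list[str]) -> str:
    parser = argparse.ArgumentParser(prog="chatbot")
    parser.add_argument("--tenant", required=True)
    parser.add_argument("--session", required=True)
    parser.add_argument("message")
    args = parser.parse_args(argv)
    # The adapter's only job: translate argv into the core's port types.
    response = service.chat(ChatRequest(args.tenant, args.session, args.message))
    return response.reply

print(run_cli(StubChatbotService(), ["--tenant", "t1", "--session", "s1", "Hi"]))
# → echo: Hi
```

Like the HTTP and queue adapters, it contains no business logic: parse, translate, delegate.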

***

## Testing with Test Adapters

```python
# tests/test_chatbot_service.py

from core.services.chatbot_service import ChatbotService
from core.ports.inbound import ChatRequest

class InMemoryConversationRepo:
    def __init__(self):
        self._store: dict[str, list] = {}

    def load_history(self, tenant_id, session_id):
        return self._store.get(f"{tenant_id}:{session_id}", [])

    def save_message(self, tenant_id, session_id, role, content):
        key = f"{tenant_id}:{session_id}"
        self._store.setdefault(key, []).append({"role": role, "content": content})

class ScriptedLLM:
    def __init__(self, responses: list[str]):
        self._responses = iter(responses)

    def complete(self, messages, system_prompt):
        return next(self._responses)

def test_chat_returns_llm_reply():
    service = ChatbotService(
        conversation_repo=InMemoryConversationRepo(),
        llm=ScriptedLLM(["There are 5 seats at table 3."])
    )
    response = service.chat(ChatRequest(
        tenant_id="restaurant_01",
        session_id="sess_001",
        message="How many seats at table 3?"
    ))
    assert response.reply == "There are 5 seats at table 3."
    assert response.context_used is False  # First message, no history

def test_context_used_after_first_message():
    repo = InMemoryConversationRepo()
    service = ChatbotService(
        conversation_repo=repo,
        llm=ScriptedLLM(["Reply 1", "Reply 2"])
    )
    request = ChatRequest(tenant_id="restaurant_01", session_id="sess_001", message="Hello")
    service.chat(request)  # First message
    response = service.chat(request)  # Second message — history exists now
    assert response.context_used is True
```

No HTTP server, no MongoDB, no OpenAI API key needed.

***

## When to Choose Hexagonal over Onion

Both achieve domain isolation. The difference is emphasis:

| Scenario                                              | Better Choice                                                     |
| ----------------------------------------------------- | ----------------------------------------------------------------- |
| Multiple entry points (HTTP + queue + CLI)            | Hexagonal — the multiple driver adapter model makes this explicit |
| Multiple external dependencies that may change        | Hexagonal — driven ports make swapping easy                       |
| Rich domain with complex rules                        | Onion — the ring model expresses domain centrality clearly        |
| Team coming from DDD background                       | Onion — maps naturally to DDD concepts                            |
| Need to test the same use case from multiple triggers | Hexagonal                                                         |

In practice, the two patterns are compatible and often combined.

***

## Lessons Learned

* **The port is the contract; the adapter is the implementation.** Never let the adapter's concerns bleed into the port definition.
* **Driver adapters are thin.** Their only job is to translate the outside world's format into the core's port interface.
* **Driven adapters are where the technical complexity lives.** Database connection pooling, API rate limiting, retries — all of that belongs in the adapter.
* **Name ports after capabilities, not technologies.** `LLMProvider`, not `OpenAIProvider`. `ConversationRepository`, not `MongoRepository`. The port name should survive a technology change.
* **The wiring happens at the composition root.** Dependency injection, configuration, and adapter selection happen in one place — not scattered across the application.
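
A composition root along those lines might look like the sketch below. The stub classes and the `APP_ENV` convention are assumptions for illustration, not the project's actual wiring:

```python
# Hypothetical composition root: the single place adapter selection happens.
import os

class FakeLocalLLM:
    """Stand-in for a local-model adapter (e.g. Ollama-backed)."""
    def complete(self, messages, system_prompt):
        return "local reply"

class FakeOpenAILLM:
    """Stand-in for a hosted-API adapter."""
    def complete(self, messages, system_prompt):
        return "hosted reply"

def build_llm_provider(env: str):
    # Adapter selection lives here, and only here; the core never sees it.
    if env == "production":
        return FakeOpenAILLM()
    return FakeLocalLLM()

llm = build_llm_provider(os.getenv("APP_ENV", "development"))
```

Because the choice is made in one function, switching providers is a one-line config change rather than a hunt through the codebase.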
