Part 2: Giving Agents Tools and Memory

Part of the Multi Agent Orchestration 101 Series

The Tool Problem I Kept Hitting

After getting my first agent loop working, I ran into a pattern that kept repeating: the agent would say it wanted to do something, but there was no clean way to map that decision to actual Python code. I was scattering if "search" in response checks everywhere. It was brittle and hard to extend.

The fix is structured tool definitions. This is the same mechanism that OpenAI's function calling and Anthropic's tool use are built on at the API level. Understanding it from first principles means you will know exactly what those APIs are doing when you start using them in Parts 3 and 4.


What Is a Tool?

A tool is a named, callable unit of work that an agent can invoke. It has three parts:

  1. A schema β€” what the tool is called, what it does, what parameters it accepts

  2. An implementation β€” the actual Python function

  3. A dispatcher β€” code that maps a name + arguments to a function call

This separation is important. The schema is what you give to the LLM (or use in your rule-based _think() logic). The implementation and dispatcher are pure Python that the LLM never sees.


Defining Tools with JSON Schema

I use a Python dataclass plus a plain JSON Schema dict for tool definitions. It is verbose but explicit — you can see exactly what will be sent to an LLM later.

# tools.py
from __future__ import annotations
import asyncio
import json
from dataclasses import dataclass
from typing import Any, Awaitable, Callable


@dataclass
class ToolDefinition:
    name: str
    description: str
    parameters: dict[str, Any]  # JSON Schema object
    fn: Callable[..., Awaitable[str]]

    def to_dict(self) -> dict[str, Any]:
        """Schema representation sent to OpenAI / Anthropic."""
        return {
            "name": self.name,
            "description": self.description,
            "parameters": self.parameters,
        }

Here are two concrete tools I use in my own projects:
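The tools below are illustrative stand-ins (the names, schemas, and behavior are my own examples, not the exact tools from my projects). ToolDefinition is repeated from tools.py above so the block runs on its own. Note the convention: every tool returns a string, and errors come back as data rather than exceptions.

```python
# example_tools.py — hypothetical example tools; names and schemas are mine
from __future__ import annotations
import json
from dataclasses import dataclass
from typing import Any, Awaitable, Callable


@dataclass
class ToolDefinition:  # same shape as in tools.py above
    name: str
    description: str
    parameters: dict[str, Any]
    fn: Callable[..., Awaitable[str]]

    def to_dict(self) -> dict[str, Any]:
        return {"name": self.name, "description": self.description,
                "parameters": self.parameters}


async def add_numbers(a: float, b: float) -> str:
    """Toy arithmetic tool; returns JSON so callers can parse the result."""
    return json.dumps({"result": a + b})


async def read_file(path: str) -> str:
    """Read a local text file, returning an error string instead of raising."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except OSError as exc:
        return f"error: {exc}"


ADD_TOOL = ToolDefinition(
    name="add_numbers",
    description="Add two numbers and return the sum as JSON.",
    parameters={
        "type": "object",
        "properties": {
            "a": {"type": "number"},
            "b": {"type": "number"},
        },
        "required": ["a", "b"],
    },
    fn=add_numbers,
)

READ_FILE_TOOL = ToolDefinition(
    name="read_file",
    description="Read a UTF-8 text file from disk.",
    parameters={
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
    fn=read_file,
)
```

Returning errors as strings matters: a tool failure becomes an observation the agent can react to, instead of a crash in your loop.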


The Tool Dispatcher

The dispatcher maps a tool name to its ToolDefinition and calls it safely:
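A minimal sketch of that dispatcher (the error-handling choices are my own; ToolDefinition is repeated so the block is self-contained):

```python
# dispatcher.py — a sketch; error-handling details are my own choices
from __future__ import annotations
import json
from dataclasses import dataclass
from typing import Any, Awaitable, Callable


@dataclass
class ToolDefinition:  # as defined in tools.py above
    name: str
    description: str
    parameters: dict[str, Any]
    fn: Callable[..., Awaitable[str]]

    def to_dict(self) -> dict[str, Any]:
        return {"name": self.name, "description": self.description,
                "parameters": self.parameters}


class ToolDispatcher:
    def __init__(self, tools: list[ToolDefinition]) -> None:
        self._tools: dict[str, ToolDefinition] = {t.name: t for t in tools}

    def schemas(self) -> list[dict[str, Any]]:
        """The JSON-serializable array you hand to an LLM's tools parameter."""
        return [t.to_dict() for t in self._tools.values()]

    async def dispatch(self, name: str, arguments: dict[str, Any]) -> str:
        """Look up a tool by name and call it; never raise into the agent loop."""
        tool = self._tools.get(name)
        if tool is None:
            return json.dumps({"error": f"unknown tool: {name}"})
        try:
            return await tool.fn(**arguments)
        except Exception as exc:  # tool bugs become data, not crashes
            return json.dumps({"error": str(exc)})
```

The unknown-tool and exception branches are deliberate: LLMs occasionally hallucinate tool names or pass bad arguments, and the agent loop should see that as a result string, not a stack trace.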

Agents now hold a ToolDispatcher instead of a raw dict. The key benefit: dispatcher.schemas() returns exactly the JSON array that OpenAI and Anthropic expect for their tools parameter. No translation needed when you wire in the real API.


Short-Term vs Long-Term Memory

Part 1's memory list is short-term memory β€” it lives in RAM and resets when the process exits. That is fine for a single request, but falls apart when:

  • An agent needs context from a previous session

  • Two agents need to share state without passing messages

  • You want to inspect what an agent was "thinking" after the fact

I use two patterns depending on the use case.

Pattern A: Shared Context Dict (in-process)

For agents in the same process that need a shared scratchpad:
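A sketch of such a scratchpad, assuming all agents run on one event loop (the class name matches the SharedContext used later in this post; the asyncio.Lock is my own addition to keep concurrent writers consistent):

```python
# shared_context.py — a sketch; the Lock guards concurrent agent access
from __future__ import annotations
import asyncio
from typing import Any


class SharedContext:
    """An async-safe scratchpad shared by agents in one process."""

    def __init__(self) -> None:
        self._data: dict[str, Any] = {}
        self._lock = asyncio.Lock()

    async def set(self, key: str, value: Any) -> None:
        async with self._lock:
            self._data[key] = value

    async def get(self, key: str, default: Any = None) -> Any:
        async with self._lock:
            return self._data.get(key, default)

    async def snapshot(self) -> dict[str, Any]:
        """Copy of everything — handy for inspecting agent state after a run."""
        async with self._lock:
            return dict(self._data)
```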

Pattern B: SQLite for Persistence

When I need history to survive restarts, I use SQLite via aiosqlite. It's a single file, zero infrastructure, and fast enough for local development and small production workloads.

To use it, call load_messages at agent startup to restore context, and save_message after every new message.


Wiring It Together: An Agent with Tools and Persistence

Here is an updated Agent base class that incorporates the dispatcher and optional persistence:
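A sketch of that base class. One simplification so the block stands alone: persistence is injected as two callables rather than hard-wiring the SQLite helpers (in the full version those callables would be load_messages and save_message):

```python
# agent.py — a sketch; persistence is injected as callables so the example
# stands alone (in the full code these are the SQLite helpers)
from __future__ import annotations
from typing import Any, Awaitable, Callable

SaveFn = Callable[[str, str, str], Awaitable[None]]        # (agent, role, content)
LoadFn = Callable[[str], Awaitable[list[dict[str, str]]]]  # (agent) -> history


class Agent:
    def __init__(
        self,
        name: str,
        dispatcher: Any,                    # a ToolDispatcher
        save_fn: SaveFn | None = None,
        load_fn: LoadFn | None = None,
    ) -> None:
        self.name = name
        self.dispatcher = dispatcher
        self.save_fn = save_fn
        self.load_fn = load_fn
        self.memory: list[dict[str, str]] = []  # short-term memory

    async def start(self) -> None:
        """Restore long-term memory, if a loader is configured."""
        if self.load_fn is not None:
            self.memory = await self.load_fn(self.name)

    async def remember(self, role: str, content: str) -> None:
        self.memory.append({"role": role, "content": content})
        if self.save_fn is not None:
            await self.save_fn(self.name, role, content)

    def _think(self, task: str) -> tuple[str, dict[str, Any]]:
        # Rule-based stub: always pick the first tool with no arguments.
        # Part 3 replaces this with real LLM function calling.
        return self.dispatcher.schemas()[0]["name"], {}

    async def run(self, task: str) -> str:
        await self.remember("user", task)
        tool_name, args = self._think(task)
        result = await self.dispatcher.dispatch(tool_name, args)
        await self.remember("assistant", result)
        return result
```

Every message passes through remember(), so short-term and long-term memory can never drift apart: the list in RAM and the rows on disk are written in the same call.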


Sharing Context Between Two Agents

Here is a minimal complete example: two agents share a SharedContext. One writes a plan, the other executes it.
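A self-contained sketch of that flow. To keep it runnable in one file, SharedContext is inlined and the two agents are reduced to plain coroutines; the goal string and step wording are my own placeholders:

```python
# demo_shared.py — a minimal, self-contained sketch (SharedContext inlined,
# agents reduced to plain coroutines so the hand-off is easy to follow)
from __future__ import annotations
import asyncio
from typing import Any


class SharedContext:
    def __init__(self) -> None:
        self._data: dict[str, Any] = {}
        self._lock = asyncio.Lock()

    async def set(self, key: str, value: Any) -> None:
        async with self._lock:
            self._data[key] = value

    async def get(self, key: str, default: Any = None) -> Any:
        async with self._lock:
            return self._data.get(key, default)


async def planner(ctx: SharedContext, goal: str) -> None:
    # A real planner would call an LLM; this one hard-codes two steps.
    await ctx.set("plan", [f"research {goal}", f"summarize {goal}"])


async def executor(ctx: SharedContext) -> list[str]:
    # Reads the plan the planner left behind — no message passing needed.
    plan = await ctx.get("plan", [])
    return [f"done: {step}" for step in plan]


async def main() -> list[str]:
    ctx = SharedContext()
    await planner(ctx, "agent memory")
    return await executor(ctx)


if __name__ == "__main__":
    print(asyncio.run(main()))
```

The key point is that the two agents never talk to each other directly: the context dict is the only channel, which is exactly what makes it easy to inspect afterward.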


Key Takeaways

  • Separate the tool schema (what the LLM sees) from the implementation (Python)

  • A ToolDispatcher with .schemas() makes LLM integration trivial later

  • Short-term memory = list; long-term memory = SQLite (or Redis for scale)

  • SharedContext is the simplest way to share state in a single process


Up Next

Part 3: OpenAI Multi-Agent Workflow β€” replacing the stub _think() with real OpenAI function calling, and building a supervisor that delegates to worker agents.
