Part 1: Building Agents from Scratch in Python 3
Part of the Multi-Agent Orchestration 101 Series
The Moment I Stopped Using Frameworks (Temporarily)
I had been using LangChain for a few months. It solved problems quickly, but every time something broke I had no idea where to look. The stack traces pointed at internal abstractions, not my code. I spent more time reading framework source than actually building.
So I opened a blank .py file and asked myself: what is the minimum amount of code needed for two programs to act as agents and coordinate?
Turns out: not much. This part shows you exactly that minimum. No frameworks. No magic. Just Python 3 and the standard library.
What is an Agent, Actually?
Before writing a single line, I want to be precise about vocabulary because "agent" means different things to different people.
For this series, an agent is a program that:
Perceives an input (a goal, a message, a tool result)
Decides what to do next (call a tool, ask another agent, respond)
Acts (executes the decision)
Loops until a stopping condition is met
That's it. An LLM is just the "decide" step. The rest is Python.
A multi-agent system is two or more agents that coordinate: sharing messages, delegating subtasks, or checking each other's work.
The Minimal Agent Loop
Let's build the simplest possible agent. No LLM yet, just the skeleton.
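A sketch of that skeleton, consistent with the pieces referenced later in the post (an Agent class with a memory list and an abstract _think()). The run-loop details and method names other than _think() are illustrative, not the post's exact code:

```python
class Agent:
    """A minimal agent: perceive -> decide -> act, looped to a stop condition."""

    def __init__(self, name: str):
        self.name = name
        self.memory: list[dict] = []  # short-term memory

    def perceive(self, content: str) -> None:
        self.memory.append({"role": "user", "content": content})

    def _think(self) -> str:
        # The "decide" step. Deliberately dumb here: echo the latest input.
        # Parts 3 and 4 swap this placeholder for real LLM calls.
        return f"echo: {self.memory[-1]['content']}"

    def act(self) -> str:
        decision = self._think()
        self.memory.append({"role": "assistant", "content": decision})
        return decision

    def should_stop(self, decision: str) -> bool:
        # Trivial stop condition: one pass. Real agents inspect the decision
        # (e.g. "did the model emit a final answer?").
        return True

    def run(self, goal: str, max_steps: int = 10) -> str:
        observation = goal
        decision = ""
        for _ in range(max_steps):  # hard cap so a bad loop can't spin forever
            self.perceive(observation)
            decision = self.act()
            if self.should_stop(decision):
                return decision
            observation = decision  # feed the decision back in as the next input
        return decision
```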
This is the core loop. _think() is intentionally abstract: Parts 3 and 4 replace it with OpenAI and Claude calls respectively.
Adding a Message Bus
For two agents to coordinate, they need a way to pass messages. The simplest approach is an asyncio.Queue.
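A minimal sketch of such a bus (the MessageBus name and method shapes are illustrative): one queue per registered agent, with send() routing by recipient name.

```python
import asyncio


class MessageBus:
    """Trivial in-memory bus: one asyncio.Queue per registered agent."""

    def __init__(self):
        self._queues: dict[str, asyncio.Queue] = {}

    def register(self, name: str) -> asyncio.Queue:
        """Create an inbox for an agent and return it."""
        queue = asyncio.Queue()
        self._queues[name] = queue
        return queue

    async def send(self, to: str, message: dict) -> None:
        """Route a message to the named agent's inbox."""
        await self._queues[to].put(message)

    async def receive(self, name: str) -> dict:
        """Wait for the next message in the named agent's inbox."""
        return await self._queues[name].get()
```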
This is a deliberately trivial bus. In production (Part 5) you would swap it for Redis Streams or a proper message broker. But for learning, the queue is perfect because you can see every message in memory with a debugger.
A Concrete Example: Two Echo Agents
Let me prove the plumbing works before adding LLM complexity. Here are two agents that pass a counter back and forth.
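A reconstruction of that example along the queue-per-agent lines above. The names, message shape, and stop flag are my assumptions; the behavior matches the described run:

```python
import asyncio


class MessageBus:
    """One asyncio.Queue per agent; send() routes by recipient name."""

    def __init__(self):
        self._queues: dict[str, asyncio.Queue] = {}

    def register(self, name: str) -> asyncio.Queue:
        self._queues[name] = asyncio.Queue()
        return self._queues[name]

    async def send(self, to: str, message: dict) -> None:
        await self._queues[to].put(message)


async def echo_agent(name, peer, bus, inbox, transcript):
    while True:
        msg = await inbox.get()
        if msg.get("stop"):
            break  # peer hit the limit; shut down
        count = msg["count"]
        transcript.append((name, count))
        print(f"{name} received {count}")
        if count >= 5:  # stopping condition: counter hit 5
            await bus.send(peer, {"stop": True})
            break
        await bus.send(peer, {"count": count + 1})


async def main():
    bus = MessageBus()
    alice_inbox = bus.register("alice")
    bob_inbox = bus.register("bob")
    transcript: list[tuple[str, int]] = []
    await bus.send("alice", {"count": 0})  # seed the conversation
    await asyncio.gather(
        echo_agent("alice", "bob", bus, alice_inbox, transcript),
        echo_agent("bob", "alice", bus, bob_inbox, transcript),
    )
    return transcript


transcript = asyncio.run(main())
```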
Run it, and you'll see the handoff: Alice receives 0, sends 1 to Bob. Bob receives 1, sends 2 back. This continues until the counter hits 5. Two agents, coordinating, with explicit message passing. No framework.
Agent Memory: Short-Term and Context Windows
The memory list in Agent is short-term memory: it resets each time you create a new instance. This mirrors an LLM's context window. Things to know:
Keep it bounded. An unbounded list will eventually overflow an LLM's context. I add a max_memory trim in my real projects.
Role matters. The role field maps directly to LLM message roles: user, assistant, tool. Getting this right is what makes LLM integration clean.
Don't store secrets. Memory is often serialised for debugging. Strip API keys, tokens, and PII before they enter an agent's context.
Here's the trimming pattern I use:
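A sketch of that pattern (a reconstruction: the helper name, the exact bound, and the keep-the-first-entry rule are my assumptions):

```python
MAX_MEMORY = 20  # illustrative bound; tune to your model's context window


def remember(memory: list[dict], entry: dict, max_memory: int = MAX_MEMORY) -> list[dict]:
    """Append an entry, then trim to the most recent max_memory items,
    always keeping the first entry (usually the goal/system message)."""
    memory.append(entry)
    if len(memory) > max_memory:
        # keep the first message plus the most recent tail
        memory[:] = [memory[0]] + memory[-(max_memory - 1):]
    return memory
```

Trimming by message count is the crudest workable policy; a token-count budget is the usual next step once an LLM is in the loop.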
What's Missing (On Purpose)
I deliberately left out:
LLM calls (covered in Parts 3 and 4)
Persistent storage (covered in Part 2)
Error handling (covered in Part 5)
Parallel execution (covered in Part 3)
The goal of this part is to have the skeleton clear in your head before adding complexity. If you can explain what _think(), memory, and the message bus do, you are ready for the next part.
Key Takeaways
An agent is a loop: perceive → decide → act → repeat
Multi-agent coordination is message passing; start with a queue
The LLM is just the "decide" step; everything else is Python
Build the skeleton first, add intelligence second
Up Next
Part 2: Tools and Memory adds structured tool definitions, a tool dispatcher, and patterns for sharing context across agents.