Synchronous Communication

Introduction

Synchronous communication is the most intuitive way for services to interact—one service calls another and waits for a response. Through building systems with dozens of interconnected services, I've learned that while synchronous calls are simple to understand, they introduce coupling and cascading failure risks that require careful management.

This article covers HTTP/REST communication patterns, gRPC for high-performance scenarios, service discovery, and load balancing strategies.

Communication Patterns Overview

Pattern       | Use Case                               | Trade-offs
Synchronous   | Need immediate response                | Simple but creates coupling
Asynchronous  | Fire-and-forget, eventual consistency  | Complex but decoupled

HTTP/REST Communication

Basic HTTP Client

import httpx
from dataclasses import dataclass


@dataclass
class ServiceConfig:
    base_url: str
    timeout: float = 30.0
    max_retries: int = 3


class HTTPServiceClient:
    """Base HTTP client for service-to-service communication."""
    
    def __init__(self, config: ServiceConfig):
        self.config = config
        self.client = httpx.AsyncClient(
            base_url=config.base_url,
            timeout=httpx.Timeout(config.timeout),
            # Retries here cover connection failures; application-level retries
            # (e.g. on 5xx responses) are left to the caller.
            transport=httpx.AsyncHTTPTransport(retries=config.max_retries),
        )
    
    async def get(self, path: str, params: dict | None = None) -> dict:
        response = await self.client.get(path, params=params)
        response.raise_for_status()
        return response.json()
    
    async def post(self, path: str, data: dict) -> dict:
        response = await self.client.post(path, json=data)
        response.raise_for_status()
        return response.json()
    
    async def put(self, path: str, data: dict) -> dict:
        response = await self.client.put(path, json=data)
        response.raise_for_status()
        return response.json()
    
    async def delete(self, path: str) -> None:
        response = await self.client.delete(path)
        response.raise_for_status()
    
    async def close(self):
        await self.client.aclose()


# Usage
class UserServiceClient(HTTPServiceClient):
    """Client for User Service."""
    
    async def get_user(self, user_id: str) -> dict:
        return await self.get(f"/users/{user_id}")
    
    async def create_user(self, email: str, name: str) -> dict:
        return await self.post("/users", {"email": email, "name": name})


# In Order Service
user_client = UserServiceClient(
    ServiceConfig(base_url="http://user-service:8000")
)
user = await user_client.get_user("123")

Request/Response Correlation
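
A correlation ID ties together the log lines and traces that a single request produces as it hops through several services. Below is a minimal sketch that extends the HTTPServiceClient above; the X-Correlation-ID header name and the contextvars-based storage are assumptions for illustration, not a fixed standard.

import uuid
from contextvars import ContextVar

# Holds the correlation ID for the current request context (assumed convention).
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="")


def get_or_create_correlation_id() -> str:
    """Return the current correlation ID, creating one if none is set."""
    cid = correlation_id.get()
    if not cid:
        cid = str(uuid.uuid4())
        correlation_id.set(cid)
    return cid


class CorrelatedHTTPClient(HTTPServiceClient):
    """HTTP client that propagates the correlation ID on every outgoing call."""

    async def get(self, path: str, params: dict | None = None) -> dict:
        response = await self.client.get(
            path,
            params=params,
            headers={"X-Correlation-ID": get_or_create_correlation_id()},
        )
        response.raise_for_status()
        return response.json()

    # post/put/delete would add the same header in the same way.

An incoming request handler sets the ContextVar from the request's header (or generates a fresh ID), so every downstream call made while handling that request carries the same value.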

Error Handling in Service Calls
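
Not every failure deserves the same treatment: a 404 means the resource does not exist, a timeout or 5xx may succeed on retry, and a 400 points to a bug in the caller. A minimal sketch of this classification on top of httpx; the exception names ServiceError, NotFoundError, and RetryableError are illustrative, not from any library.

import httpx


class ServiceError(Exception):
    """Base error for failed service calls (illustrative name)."""


class NotFoundError(ServiceError):
    """The requested resource does not exist; retrying will not help."""


class RetryableError(ServiceError):
    """Transient failure (timeouts, 5xx); the call may succeed if retried."""


async def call_user_service(client: httpx.AsyncClient, user_id: str) -> dict:
    """Classify httpx failures into retryable and non-retryable errors."""
    try:
        response = await client.get(f"/users/{user_id}")
        response.raise_for_status()
        return response.json()
    except httpx.TimeoutException as exc:
        raise RetryableError("user service timed out") from exc
    except httpx.HTTPStatusError as exc:
        status = exc.response.status_code
        if status == 404:
            raise NotFoundError(f"user {user_id} not found") from exc
        if status >= 500:
            raise RetryableError(f"user service returned {status}") from exc
        raise ServiceError(f"user service returned {status}") from exc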

gRPC for High-Performance Communication

Protocol Buffers Definition

gRPC Server Implementation
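
A sketch of an asyncio gRPC server using grpcio. It assumes a user.proto defining a UserService with a GetUser RPC, and that user_pb2 / user_pb2_grpc have been generated from it with protoc; both the service shape and the module names are assumptions for illustration.

import asyncio

import grpc

# Assumed modules generated from a hypothetical user.proto, roughly:
#   service UserService { rpc GetUser(GetUserRequest) returns (UserResponse); }
import user_pb2
import user_pb2_grpc

# Stand-in for a real data store, just to keep the sketch self-contained.
USERS = {"123": {"id": "123", "email": "ada@example.com", "name": "Ada"}}


class UserService(user_pb2_grpc.UserServiceServicer):
    async def GetUser(self, request, context):
        user = USERS.get(request.user_id)
        if user is None:
            await context.abort(grpc.StatusCode.NOT_FOUND, "user not found")
        return user_pb2.UserResponse(**user)


async def serve() -> None:
    server = grpc.aio.server()
    user_pb2_grpc.add_UserServiceServicer_to_server(UserService(), server)
    server.add_insecure_port("[::]:50051")
    await server.start()
    await server.wait_for_termination()


if __name__ == "__main__":
    asyncio.run(serve())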

gRPC Client
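
The matching async client; again the generated user_pb2 / user_pb2_grpc modules and the service address are assumed. The per-call timeout is sent to the server as a gRPC deadline, so the server can stop work the caller no longer needs.

import grpc

import user_pb2
import user_pb2_grpc  # assumed modules generated from the hypothetical user.proto


async def get_user(user_id: str) -> "user_pb2.UserResponse":
    # Channel to the user service; plaintext is acceptable inside a trusted network.
    async with grpc.aio.insecure_channel("user-service:50051") as channel:
        stub = user_pb2_grpc.UserServiceStub(channel)
        # The timeout becomes a deadline that propagates to the server.
        return await stub.GetUser(
            user_pb2.GetUserRequest(user_id=user_id), timeout=5.0
        )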

REST vs gRPC Comparison

Aspect          | REST/HTTP                  | gRPC
Protocol        | HTTP/1.1, HTTP/2           | HTTP/2
Payload         | JSON (text)                | Protocol Buffers (binary)
Performance     | Good                       | Excellent
Streaming       | Limited                    | Bidirectional
Browser Support | Native                     | Requires proxy
Tooling         | Abundant                   | Growing
Use Case        | Public APIs                | Internal services

Service Discovery

Client-Side Discovery
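
With client-side discovery the caller asks a service registry for the current instances of a service and picks one itself. The sketch below assumes a registry exposing an HTTP endpoint like GET /v1/instances/{service} that returns a list of host:port strings; that endpoint shape is an assumption for illustration, not any particular registry's API.

import random

import httpx


class RegistryClient:
    """Resolve service names to instance addresses via a registry (assumed API)."""

    def __init__(self, registry_url: str):
        self.client = httpx.AsyncClient(base_url=registry_url)

    async def resolve(self, service_name: str) -> str:
        # Assumed endpoint returning e.g. ["10.0.1.5:8000", "10.0.1.6:8000"].
        response = await self.client.get(f"/v1/instances/{service_name}")
        response.raise_for_status()
        instances: list[str] = response.json()
        if not instances:
            raise RuntimeError(f"no instances registered for {service_name}")
        # Pick one at random; smarter strategies are covered under Load Balancing.
        return random.choice(instances)


# Usage: resolve the name, then call the chosen instance directly.
# address = await RegistryClient("http://registry:8500").resolve("user-service")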

DNS-Based Discovery
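
On container platforms such as Kubernetes, discovery is often just DNS: the service name resolves to one or more instance IPs. A minimal sketch using the standard library resolver; the user-service name and port 8000 are assumptions.

import socket


def resolve_instances(hostname: str, port: int) -> list[str]:
    """Return the addresses DNS currently holds for a service name."""
    results = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP.
    return sorted({sockaddr[0] for _, _, _, _, sockaddr in results})


# e.g. resolve_instances("user-service", 8000) -> ["10.0.1.5", "10.0.1.6"]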

Load Balancing

Client-Side Load Balancing
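
The simplest client-side strategy is round-robin over a known set of instances. A minimal sketch; in practice the instance list would come from service discovery as shown above, and the example addresses are placeholders.

from itertools import cycle


class RoundRobinBalancer:
    """Rotate through instances so traffic spreads evenly across them."""

    def __init__(self, instances: list[str]):
        if not instances:
            raise ValueError("at least one instance is required")
        self._instances = cycle(instances)

    def next_instance(self) -> str:
        return next(self._instances)


balancer = RoundRobinBalancer(["10.0.1.5:8000", "10.0.1.6:8000", "10.0.1.7:8000"])
# Each call returns the next instance in turn.
target = balancer.next_instance()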

Health-Aware Load Balancing
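
Plain round-robin keeps sending traffic to instances that are failing. A health-aware balancer periodically probes each instance and only hands out those that pass; the /health path is an assumption about the services' health endpoint, and the probe interval is a starting point rather than a recommendation.

import asyncio
import random

import httpx


class HealthAwareBalancer:
    """Only route to instances that recently passed a health check."""

    def __init__(self, instances: list[str], check_interval: float = 5.0):
        self.instances = instances
        self.healthy: set[str] = set(instances)  # optimistic start
        self.check_interval = check_interval

    async def run_health_checks(self) -> None:
        """Background task: probe each instance's /health endpoint (assumed path)."""
        async with httpx.AsyncClient(timeout=2.0) as client:
            while True:
                for instance in self.instances:
                    try:
                        response = await client.get(f"http://{instance}/health")
                        ok = response.status_code == 200
                    except httpx.HTTPError:
                        ok = False
                    if ok:
                        self.healthy.add(instance)
                    else:
                        self.healthy.discard(instance)
                await asyncio.sleep(self.check_interval)

    def next_instance(self) -> str:
        if not self.healthy:
            raise RuntimeError("no healthy instances available")
        return random.choice(sorted(self.healthy))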

Practical Patterns

Timeout and Deadline Propagation
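
When service A calls B, which calls C, each hop should only get the time budget the original caller has left; otherwise inner calls keep running after the outer request has already timed out. A sketch that carries the deadline as an X-Deadline header of epoch seconds; the header name is an assumed convention, not a standard.

import time

import httpx


def remaining_budget(deadline: float) -> float:
    """Seconds left before the caller's deadline; raise if it has already passed."""
    remaining = deadline - time.time()
    if remaining <= 0:
        raise TimeoutError("deadline exceeded before making downstream call")
    return remaining


async def call_with_deadline(client: httpx.AsyncClient, path: str, deadline: float) -> dict:
    """Use whatever budget remains as the downstream timeout and forward the deadline."""
    timeout = remaining_budget(deadline)
    response = await client.get(
        path,
        timeout=timeout,
        headers={"X-Deadline": str(deadline)},  # assumed header convention
    )
    response.raise_for_status()
    return response.json()


# The receiving service reads X-Deadline and passes the same value to its own calls,
# so the whole chain shares one budget instead of stacking independent 30s timeouts.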

Request Hedging
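
Hedging trades a little extra load for better tail latency: send the request, and if it has not completed within a short delay, fire a second copy at another instance and take whichever answer arrives first. A sketch using asyncio; the 100 ms hedge delay is an assumed starting point, and only requests that are safe to execute twice (idempotent reads) should be hedged.

import asyncio

import httpx


async def hedged_get(instances: list[str], path: str, hedge_delay: float = 0.1) -> dict:
    """Issue the request to the first instance, then hedge to a second after a delay."""

    async def fetch(instance: str, delay: float) -> dict:
        await asyncio.sleep(delay)
        async with httpx.AsyncClient(timeout=5.0) as client:
            response = await client.get(f"http://{instance}{path}")
            response.raise_for_status()
            return response.json()

    tasks = [
        asyncio.create_task(fetch(instance, i * hedge_delay))
        for i, instance in enumerate(instances[:2])  # primary plus one hedge
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # the slower attempt is no longer needed
    return done.pop().result()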

Practical Exercise

Exercise: Build a Resilient Service Client

Key Takeaways

  1. HTTP/REST for simplicity - Use for most service-to-service communication

  2. gRPC for performance - Consider for high-throughput internal APIs

  3. Propagate context - Trace IDs, deadlines, and correlation IDs

  4. Handle failures gracefully - Proper error classification and handling

  5. Use load balancing - Distribute traffic across instances

What's Next?

Synchronous communication creates tight coupling and the risk of cascading failures. In Article 5: Asynchronous Communication, we'll explore message queues, event-driven patterns, and eventual consistency.


This article is part of the Microservice Architecture 101 series.
