Automatic Instrumentation

The Magic of Auto-Instrumentation

When I first added OpenTelemetry to my production microservices, I was skeptical about auto-instrumentation. How could a library automatically understand my application's behavior without me writing custom code? But after seeing it trace every database query, Redis operation, and HTTP call automatically, I became a believer.

The reality is that 80% of your observability needs can be satisfied with auto-instrumentation alone. It's only the business-specific logic and critical paths that need custom spans. Let me show you how to leverage this superpower.

Available Auto-Instrumentation Libraries

OpenTelemetry provides instrumentation for virtually every popular Node.js library:

HTTP & Web Frameworks

```typescript
'@opentelemetry/instrumentation-express'    // Express.js
'@opentelemetry/instrumentation-fastify'    // Fastify
'@opentelemetry/instrumentation-koa'        // Koa
'@opentelemetry/instrumentation-http'       // Native HTTP/HTTPS
'@opentelemetry/instrumentation-fetch'      // Fetch API (browser builds)
```

Databases

```typescript
'@opentelemetry/instrumentation-pg'         // PostgreSQL
'@opentelemetry/instrumentation-mysql'      // MySQL
'@opentelemetry/instrumentation-mongodb'    // MongoDB
'@opentelemetry/instrumentation-redis-4'    // Redis 4.x
'@opentelemetry/instrumentation-ioredis'    // ioredis
```

Message Queues
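
Two contrib packages I've reached for here (the contrib repo has more):

```typescript
'@opentelemetry/instrumentation-amqplib'    // RabbitMQ (amqplib)
'@opentelemetry/instrumentation-kafkajs'    // Kafka (KafkaJS)
```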

Cloud Services
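
The AWS ones I've used (again, check the registry for the current list):

```typescript
'@opentelemetry/instrumentation-aws-sdk'    // AWS SDK v2/v3 (S3, SQS, DynamoDB, ...)
'@opentelemetry/instrumentation-aws-lambda' // AWS Lambda handlers
```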

Setting Up Auto-Instrumentation

Let's enhance our order service with PostgreSQL and Redis.

Install dependencies:
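
Assuming you already have the SDK packages from Getting Started with TypeScript, you only need the clients and their instrumentations:

```bash
npm install pg redis \
  @opentelemetry/instrumentation-http \
  @opentelemetry/instrumentation-express \
  @opentelemetry/instrumentation-pg \
  @opentelemetry/instrumentation-redis-4
```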

Update src/instrumentation.ts:
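
Here's a sketch of what mine looks like, assuming the NodeSDK + OTLP exporter setup from the previous chapter (the service name is illustrative):

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express';
import { PgInstrumentation } from '@opentelemetry/instrumentation-pg';
import { RedisInstrumentation } from '@opentelemetry/instrumentation-redis-4';

const sdk = new NodeSDK({
  serviceName: 'order-service',
  // Defaults to http://localhost:4318/v1/traces -- point it at your collector
  traceExporter: new OTLPTraceExporter(),
  instrumentations: [
    new HttpInstrumentation(),     // incoming/outgoing HTTP spans
    new ExpressInstrumentation(),  // route + middleware spans (needs HttpInstrumentation)
    new PgInstrumentation(),       // PostgreSQL query spans
    new RedisInstrumentation(),    // Redis command spans
  ],
});

sdk.start();
```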

Building a Real Application with PostgreSQL

Start PostgreSQL with Docker:
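
The credentials and database name below are placeholders; match them to your environment:

```bash
docker run -d --name orders-postgres \
  -e POSTGRES_USER=orders \
  -e POSTGRES_PASSWORD=orders \
  -e POSTGRES_DB=orders \
  -p 5432:5432 \
  postgres:16
```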

Create database schema (src/db.ts):
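
A minimal version; the orders table shape is my assumption based on the queries used later in this chapter:

```typescript
import { Pool } from 'pg';

// Every query through this pool gets a span from instrumentation-pg
export const pool = new Pool({
  host: 'localhost',
  user: 'orders',
  password: 'orders',
  database: 'orders',
  max: 10, // pool size -- see "Connection Pool Exhaustion" below for why you may need more
});

export async function initSchema(): Promise<void> {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS orders (
      id          SERIAL PRIMARY KEY,
      user_id     INTEGER NOT NULL,
      total_cents INTEGER NOT NULL,
      created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
    )
  `);
}
```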

Adding Redis for Caching

Start Redis with Docker:
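
Nothing special needed here:

```bash
docker run -d --name orders-redis -p 6379:6379 redis:7
```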

Create cache layer (src/cache.ts):
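
A thin JSON cache over the redis v4 client; the key naming and default TTL are illustrative choices:

```typescript
import { createClient } from 'redis';

// instrumentation-redis-4 patches this client, so every command becomes a span
const client = createClient({ url: 'redis://localhost:6379' });
client.on('error', (err) => console.error('Redis error', err));

export async function connectCache(): Promise<void> {
  await client.connect();
}

export async function getCached<T>(key: string): Promise<T | null> {
  const raw = await client.get(key);
  return raw === null ? null : (JSON.parse(raw) as T);
}

export async function setCached(key: string, value: unknown, ttlSeconds = 300): Promise<void> {
  await client.setEx(key, ttlSeconds, JSON.stringify(value));
}
```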

Updated Application with Full Auto-Instrumentation

Update src/app.ts:
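
A sketch of the wired-up service (assuming a CommonJS build). The one rule that matters: the instrumentation import must come first, so the require hooks patch pg, redis, and express before anything loads them:

```typescript
import './instrumentation'; // must be the FIRST import

import express from 'express';
import { pool, initSchema } from './db';
import { connectCache, getCached, setCached } from './cache';

const app = express();

app.get('/api/orders/:id', async (req, res) => {
  const cacheKey = `order:${req.params.id}`;

  // Redis GET -- traced automatically
  const cached = await getCached(cacheKey);
  if (cached) {
    res.json(cached);
    return;
  }

  // PostgreSQL SELECT -- traced automatically
  const { rows } = await pool.query('SELECT * FROM orders WHERE id = $1', [req.params.id]);
  if (rows.length === 0) {
    res.status(404).json({ error: 'order not found' });
    return;
  }

  // Redis SETEX -- traced automatically
  await setCached(cacheKey, rows[0]);
  res.json(rows[0]);
});

async function main(): Promise<void> {
  await initSchema();
  await connectCache();
  app.listen(3000, () => console.log('order-service listening on :3000'));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```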

What Gets Traced Automatically

With this setup, every layer of the request path is traced automatically:

HTTP Layer:

  • Incoming HTTP requests (method, URL, status code, duration)

  • Selected request/response headers (opt-in via configuration)

  • User agent information

Database Layer (PostgreSQL):

  • SQL query text (plus parameter values when enhanced reporting is enabled)

  • Query duration

  • Connection pool operations

  • Database name and operation type

Cache Layer (Redis):

  • Redis commands (GET, SET, INCR, DEL)

  • Command arguments

  • Response times

  • Connection details

Example Trace in Jaeger:
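
Here's roughly what the span tree looks like for a cache-miss request (span names and timings are illustrative, not real output):

```
order-service: GET /api/orders/:id ................. 27ms
├── redis-GET order:42 .............................  1ms  (cache miss)
├── pg.query SELECT * FROM orders WHERE id = $1 .... 22ms
└── redis-SETEX order:42 ...........................  1ms
```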

Configuration Options

Ignoring Specific Operations
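
Health checks and metrics scrapes flood Jaeger with useless spans. Most instrumentations accept hooks for this; with the HTTP instrumentation, ignoreIncomingRequestHook skips matching requests entirely (the paths below are examples):

```typescript
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';

// In src/instrumentation.ts, replace the bare HttpInstrumentation with:
const httpInstrumentation = new HttpInstrumentation({
  // Return true to drop the request from tracing entirely
  ignoreIncomingRequestHook: (req) => {
    const url = req.url ?? '';
    return url === '/healthz' || url.startsWith('/metrics');
  },
});
```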

Enhancing Database Reporting
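
The pg instrumentation can attach bound parameter values to spans via enhancedDatabaseReporting. This is what makes the order-lookup query debuggable, but think twice if parameters can contain PII:

```typescript
import { PgInstrumentation } from '@opentelemetry/instrumentation-pg';

const pgInstrumentation = new PgInstrumentation({
  // Adds parameter values (e.g. the $1 in the order lookup) to span attributes
  enhancedDatabaseReporting: true,
});
```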

Sanitizing Sensitive Data
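
The Redis instrumentation lets you control exactly what lands in the db.statement attribute through dbStatementSerializer. Here's a sketch that keeps the command name but masks the arguments:

```typescript
import { RedisInstrumentation } from '@opentelemetry/instrumentation-redis-4';

const redisInstrumentation = new RedisInstrumentation({
  // Record "GET [1 args]" instead of raw keys/values, which may contain user data
  dbStatementSerializer: (cmdName, cmdArgs) => `${cmdName} [${cmdArgs.length} args]`,
});
```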

Real Production Learnings

1. Cache Hit Ratio Visibility

Auto-instrumentation revealed that my Redis cache hit rate was only 23%. I thought I had effective caching, but traces showed most requests hit the database. This led me to increase TTL and pre-warm critical data.

2. Connection Pool Exhaustion

During load testing, I noticed database query spans with 500ms wait times before execution. The auto-instrumented PostgreSQL library showed connection pool saturation. I increased the pool size from 10 to 30 connections.

3. Inefficient Queries

Auto-instrumentation showed that SELECT * FROM orders WHERE user_id = $1 was taking 300ms for users with many orders. I added pagination and saw query times drop to 15ms.

Debugging with Auto-Instrumentation

Scenario: Slow Endpoint

User reports: "Order lookup is slow"

  1. Find slow traces in Jaeger filtered by /api/orders/:id

  2. Analyze span duration:

    • HTTP request: 850ms

    • Redis GET: 2ms (cache miss)

    • PostgreSQL SELECT: 820ms ← Problem!

  3. Check query details in span attributes

  4. Identify missing index on frequently queried column

  5. Add the index (sketch below) and watch queries drop to 12ms
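
For steps 4–5, the fix is a one-liner (the column is hypothetical — index whatever your slow query filters on; CONCURRENTLY avoids blocking writes in production):

```sql
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
```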

All of this without writing a single custom span!

Best Practices

1. Start with Auto-Instrumentation

Don't write custom spans until you've maxed out auto-instrumentation. It covers 80% of your needs.

2. Configure, Don't Disable

Instead of disabling noisy instrumentation, configure it to exclude specific paths or sanitize data.

3. Monitor Overhead

Auto-instrumentation adds minimal overhead (~2-5%), but always measure in your environment.

4. Use Semantic Conventions

Auto-instrumentation follows semantic conventions automatically. When you add custom attributes, follow the same patterns.
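
For example (a sketch using only @opentelemetry/api — the app.order.* namespace is my own convention, not part of the spec):

```typescript
import { trace } from '@opentelemetry/api';

const span = trace.getActiveSpan();
// Standard concept -> standard key, same shape auto-instrumentation emits
span?.setAttribute('db.operation', 'SELECT');
// Business-specific data -> namespaced custom key
span?.setAttribute('app.order.item_count', 3);
```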

5. Test in Staging

Always test auto-instrumentation configuration in staging before production. Some libraries may have compatibility issues.

What's Next

You now have comprehensive automatic tracing for your entire stack. Continue to Manual Instrumentation Deep Dive where you'll learn:

  • When to use custom spans vs auto-instrumentation

  • Creating nested span hierarchies

  • Adding business-specific attributes

  • Span events and annotations

  • Error handling and exception recording


Previous: ← Getting Started with TypeScript | Next: Manual Instrumentation Deep Dive →

Auto-instrumentation gives you the forest view. Manual instrumentation adds the tree details.
