Instrumenting TypeScript Applications: From Zero to Production Metrics
The First Time I Added Metrics
I still remember the first time I instrumented a TypeScript application with Prometheus. I was nervous: would it slow down my API? Would it be complicated? Would I break something in production?
Turns out, adding Prometheus metrics was one of the easiest and most impactful changes I've made. Within an hour, I had basic metrics flowing. Within a day, I had comprehensive monitoring. Within a week, I caught a performance issue I didn't even know existed.
Let me show you exactly how I do it, step by step.
Setting Up prom-client
The prom-client library is the de facto standard Prometheus client for Node.js. It has excellent TypeScript support and a clean API.
Installation
npm install prom-client
# or
yarn add prom-client
TypeScript Types
Good news: prom-client includes TypeScript definitions out of the box. No need for @types/prom-client.
Basic Setup: Exposing the /metrics Endpoint
Every Prometheus-instrumented application needs a /metrics endpoint. Let's start with the minimal setup: a shared registry that also collects the default Node.js runtime metrics.

// src/metrics.ts
import { Registry, collectDefaultMetrics } from 'prom-client';

// Create a registry
export const register = new Registry();

// Collect default Node.js metrics (memory, CPU, event loop, etc.)
// Note: these metrics already carry nodejs_/process_ prefixes,
// so no extra prefix option is needed
collectDefaultMetrics({ register });

Express Example

// src/server.ts
import express from 'express';
import { register } from './metrics';

const app = express();
const PORT = 3000;

// Your regular routes
app.get('/api/users', (req, res) => {
  res.json({ users: [] });
});

// Prometheus metrics endpoint
app.get('/metrics', async (req, res) => {
  res.setHeader('Content-Type', register.contentType);
  res.send(await register.metrics());
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
  console.log(`Metrics available at http://localhost:${PORT}/metrics`);
});

That's it! You now have a working /metrics endpoint.

Test it:

curl http://localhost:3000/metrics

You'll see output like:

# HELP nodejs_heap_size_total_bytes Process heap size from Node.js in bytes.
# TYPE nodejs_heap_size_total_bytes gauge
nodejs_heap_size_total_bytes 18874368
# HELP nodejs_heap_size_used_bytes Process heap size used from Node.js in bytes.
# TYPE nodejs_heap_size_used_bytes gauge
nodejs_heap_size_used_bytes 9876544
Fastify Example
If you use Fastify, here's the equivalent:
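A minimal sketch, reusing the same register from src/metrics.ts:

// src/server.ts (Fastify)
import Fastify from 'fastify';
import { register } from './metrics';

const app = Fastify();

// Prometheus metrics endpoint
app.get('/metrics', async (request, reply) => {
  reply.header('Content-Type', register.contentType);
  return register.metrics();
});

const start = async () => {
  await app.listen({ port: 3000 });
  console.log('Metrics available at http://localhost:3000/metrics');
};
start();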
NestJS Example
For NestJS fans:
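One way to wire it up is a small dedicated controller, again reusing the shared register (a sketch; remember to add the controller to your module's controllers array):

// src/metrics.controller.ts
import { Controller, Get, Header } from '@nestjs/common';
import { register } from './metrics';

@Controller()
export class MetricsController {
  @Get('metrics')
  @Header('Content-Type', register.contentType)
  async getMetrics(): Promise<string> {
    return register.metrics();
  }
}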
Instrumenting HTTP Requests
Now let's track actual HTTP traffic. This is the most important metric for any API.
Complete HTTP Metrics Middleware
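Here's a sketch of such a middleware for Express, built on the shared register (the metric names, bucket boundaries, and file name are illustrative):

// src/http-metrics.ts
import type { Request, Response, NextFunction } from 'express';
import { Counter, Histogram } from 'prom-client';
import { register } from './metrics';

// Total requests, labeled by method, route, and status code
const httpRequestsTotal = new Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'status_code'],
  registers: [register]
});

// Request duration in seconds, with latency-oriented buckets
const httpRequestDuration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request duration in seconds',
  labelNames: ['method', 'route', 'status_code'],
  buckets: [0.005, 0.01, 0.05, 0.1, 0.5, 1, 2, 5],
  registers: [register]
});

export function httpMetrics(req: Request, res: Response, next: NextFunction) {
  const end = httpRequestDuration.startTimer();
  res.on('finish', () => {
    // Label by the matched route pattern (e.g. /api/users/:id),
    // never the raw URL, to keep cardinality bounded
    const labels = {
      method: req.method,
      route: req.route?.path ?? 'unmatched',
      status_code: String(res.statusCode)
    };
    httpRequestsTotal.inc(labels);
    end(labels);
  });
  next();
}

Register it before your routes with app.use(httpMetrics); the histogram then covers every request, including 404s and errors.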
Database Metrics
Tracking database performance is critical. Here's how I instrument database queries.
PostgreSQL with pg Library
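One approach is a thin wrapper around pool.query, plus a gauge that samples pool state at scrape time (a sketch; the operation label and bucket boundaries are illustrative):

// src/db-metrics.ts
import { Pool } from 'pg';
import { Gauge, Histogram } from 'prom-client';
import { register } from './metrics';

const dbQueryDuration = new Histogram({
  name: 'db_query_duration_seconds',
  help: 'Database query duration in seconds',
  labelNames: ['operation'],
  buckets: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1],
  registers: [register]
});

// Connection settings come from the standard PG* environment variables
export const pool = new Pool();

// collect() runs on every scrape, so the gauge always reflects
// the pool's current size
const poolGauge = new Gauge({
  name: 'db_pool_total_connections',
  help: 'Total connections held by the pg pool',
  registers: [register],
  collect() {
    poolGauge.set(pool.totalCount);
  }
});

// Timed query helper: label by a short operation name ('get_user'),
// never by raw SQL text, to keep cardinality bounded
export async function timedQuery(operation: string, text: string, params?: unknown[]) {
  const end = dbQueryDuration.startTimer({ operation });
  try {
    return await pool.query(text, params);
  } finally {
    end();
  }
}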
Prisma ORM
If you use Prisma:
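A sketch using Prisma's middleware hook; newer Prisma versions favor client extensions ($extends) instead, but the timing logic is the same:

// src/prisma-metrics.ts
import { PrismaClient } from '@prisma/client';
import { Histogram } from 'prom-client';
import { register } from './metrics';

const prismaQueryDuration = new Histogram({
  name: 'prisma_query_duration_seconds',
  help: 'Prisma operation duration in seconds',
  labelNames: ['model', 'action'],
  registers: [register]
});

export const prisma = new PrismaClient();

// Time every Prisma operation, labeled by model and action
prisma.$use(async (params, next) => {
  const end = prismaQueryDuration.startTimer({
    model: params.model ?? 'raw',
    action: params.action
  });
  try {
    return await next(params);
  } finally {
    end();
  }
});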
Business Metrics
Beyond technical metrics, track business metrics that matter.
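For example, counters for the events your team actually cares about (the metric names and labels below are illustrative):

// src/business-metrics.ts
import { Counter } from 'prom-client';
import { register } from './metrics';

export const userSignups = new Counter({
  name: 'user_signups_total',
  help: 'Total number of user signups',
  labelNames: ['plan'],
  registers: [register]
});

export const ordersPlaced = new Counter({
  name: 'orders_placed_total',
  help: 'Total number of orders placed',
  labelNames: ['payment_method'],
  registers: [register]
});

Then, in your signup handler, a single line records the event: userSignups.inc({ plan: 'free' });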
Custom Metrics for Specific Scenarios
Queue Processing
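For background workers, three metrics cover most questions: how many jobs ran (and failed), how long they took, and how deep the backlog is. A sketch, with illustrative names:

// src/queue-metrics.ts
import { Counter, Gauge, Histogram } from 'prom-client';
import { register } from './metrics';

export const jobsProcessed = new Counter({
  name: 'queue_jobs_processed_total',
  help: 'Jobs processed, labeled by outcome',
  labelNames: ['queue', 'status'], // status: 'success' | 'failure'
  registers: [register]
});

export const jobDuration = new Histogram({
  name: 'queue_job_duration_seconds',
  help: 'Job processing duration in seconds',
  labelNames: ['queue'],
  buckets: [0.1, 0.5, 1, 5, 15, 60],
  registers: [register]
});

export const queueDepth = new Gauge({
  name: 'queue_depth',
  help: 'Number of jobs waiting in the queue',
  labelNames: ['queue'],
  registers: [register]
});

// Wrap your job handler so every run is timed and counted
export async function runJob(queue: string, processJob: () => Promise<void>) {
  const end = jobDuration.startTimer({ queue });
  try {
    await processJob();
    jobsProcessed.inc({ queue, status: 'success' });
  } catch (err) {
    jobsProcessed.inc({ queue, status: 'failure' });
    throw err;
  } finally {
    end();
  }
}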
Cache Hit/Miss Rates
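A sketch of a cache-aside helper that records hits and misses; the Map here stands in for Redis or whatever cache client you use:

// src/cache-metrics.ts
import { Counter } from 'prom-client';
import { register } from './metrics';

const cacheRequests = new Counter({
  name: 'cache_requests_total',
  help: 'Cache lookups, labeled by result',
  labelNames: ['cache', 'result'], // result: 'hit' | 'miss'
  registers: [register]
});

// Look up the key, falling back to load() on a miss
export async function cachedGet<T>(
  cache: Map<string, T>,
  key: string,
  load: () => Promise<T>
): Promise<T> {
  const cached = cache.get(key);
  if (cached !== undefined) {
    cacheRequests.inc({ cache: 'default', result: 'hit' });
    return cached;
  }
  cacheRequests.inc({ cache: 'default', result: 'miss' });
  const value = await load();
  cache.set(key, value);
  return value;
}

The hit rate then falls out of a single PromQL expression: sum(rate(cache_requests_total{result="hit"}[5m])) / sum(rate(cache_requests_total[5m])).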
Production-Ready Metrics Setup
Here's my complete, production-ready metrics setup:
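In sketch form, assuming the src/metrics.ts and src/http-metrics.ts files from the sections above:

// src/app.ts
import express from 'express';
import { register } from './metrics';
import { httpMetrics } from './http-metrics';

export const app = express();

// Instrument every request before routes are registered
app.use(httpMetrics);

app.get('/api/users', (req, res) => {
  res.json({ users: [] });
});

// Metrics endpoint; the try/catch keeps a failing collector
// from crashing the scrape handler
app.get('/metrics', async (req, res) => {
  try {
    res.setHeader('Content-Type', register.contentType);
    res.send(await register.metrics());
  } catch {
    res.status(500).send('failed to collect metrics');
  }
});

// Only listen when run directly, so tests can import the app
if (require.main === module) {
  app.listen(3000, () => console.log('Server running on port 3000'));
}

Exporting app instead of calling listen() unconditionally is what makes the testing step below possible.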
Testing Your Metrics
Before deploying, test that your metrics work:
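A sketch using jest and supertest, assuming src/app.ts exports the Express app as above:

// test/metrics.test.ts
import request from 'supertest';
import { app } from '../src/app';

test('exposes Prometheus metrics', async () => {
  // Generate a little traffic so the HTTP series exist
  await request(app).get('/api/users');

  const res = await request(app).get('/metrics');
  expect(res.status).toBe(200);
  expect(res.headers['content-type']).toContain('text/plain');
  expect(res.text).toContain('http_requests_total');
  expect(res.text).toContain('nodejs_heap_size_total_bytes');
});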
Performance Considerations
Q: Does instrumentation slow down my app?
In my experience, the performance impact is negligible:
Incrementing a counter: ~0.001ms
Observing a histogram: ~0.01ms
Total overhead: typically <1% of request time
Q: How many metrics is too many?
Keep total time series under control. Remember that cardinality multiplies across label values: 5 methods × 50 routes × 10 status codes is already 2,500 series for a single metric. Rough guidelines:
1,000-10,000 series: perfectly fine
10,000-100,000 series: manageable
100,000+ series: need optimization
Key Takeaways
Start simple - /metrics endpoint with default metrics
Instrument HTTP layer - Request count, duration, status codes
Track database queries - Query duration and connection pool
Add business metrics - Signups, purchases, etc.
Use TypeScript - Type safety prevents label typos
Test your metrics - Ensure they're exposed correctly