Instrumenting TypeScript Applications: From Zero to Production Metrics

The First Time I Added Metrics

I still remember the first time I instrumented a TypeScript application with Prometheus. I was nervous: would it slow down my API? Would it be complicated? Would I break something in production?

Turns out, adding Prometheus metrics was one of the easiest and most impactful changes I've made. Within an hour, I had basic metrics flowing. Within a day, I had comprehensive monitoring. Within a week, I caught a performance issue I didn't even know existed.

Let me show you exactly how I do it, step by step.

Setting Up prom-client

The prom-client library is the official Prometheus client for Node.js. It has excellent TypeScript support and a clean API.

Installation

npm install prom-client
# or
yarn add prom-client

TypeScript Types

Good news: prom-client includes TypeScript definitions out of the box. No need for @types/prom-client.

Basic Setup: Exposing the /metrics Endpoint

Every Prometheus-instrumented application needs a /metrics endpoint. Let's start with the minimal setup.

Express Example

That's it! You now have a working /metrics endpoint.

Test it:
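Assuming the app is running locally on port 3000:

```shell
curl http://localhost:3000/metrics
```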

You'll see output like:
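Trimmed to a few of prom-client's default metrics (your numbers will differ):

```
# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 0.12

# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0.001

# HELP nodejs_heap_size_used_bytes Process heap size used from Node.js in bytes.
# TYPE nodejs_heap_size_used_bytes gauge
nodejs_heap_size_used_bytes 18874368
```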

Fastify Example

If you use Fastify, here's the equivalent:

NestJS Example

For NestJS fans:

Instrumenting HTTP Requests

Now let's track actual HTTP traffic. This is the most important metric for any API.

Complete HTTP Metrics Middleware

Database Metrics

Tracking database performance is critical. Here's how I instrument database queries.

PostgreSQL with pg Library

Prisma ORM

If you use Prisma:

Business Metrics

Beyond technical metrics, track business metrics that matter.

Custom Metrics for Specific Scenarios

Queue Processing

Cache Hit/Miss Rates

Production-Ready Metrics Setup

Here's my complete, production-ready metrics setup:

Testing Your Metrics

Before deploying, test that your metrics work:

Performance Considerations

Q: Does instrumentation slow down my app?

In my experience, the performance impact is negligible:

  • Incrementing a counter: ~0.001ms

  • Observing a histogram: ~0.01ms

  • Total overhead: typically <1% of request time

Q: How many metrics is too many?

Keep total time series under control:

  • 1,000-10,000 series: perfectly fine

  • 10,000-100,000 series: manageable

  • 100,000+ series: need optimization

Key Takeaways

  1. Start simple - /metrics endpoint with default metrics

  2. Instrument HTTP layer - Request count, duration, status codes

  3. Track database queries - Query duration and connection pool

  4. Add business metrics - Signups, purchases, etc.

  5. Use TypeScript - Type safety prevents label typos

  6. Test your metrics - Ensure they're exposed correctly

  7. Keep cardinality low - Avoid high-cardinality labels

In the next article, we'll learn PromQL to query all these metrics effectively.

