Custom Exporters

Why Multiple Backends?

In my production environment, I need telemetry in three places:

  1. Jaeger - For distributed tracing and debugging (short-term retention, 7 days)

  2. Prometheus - For metrics and alerting (medium-term, 30 days)

  3. AWS CloudWatch - For long-term compliance and auditing (365 days)

Each backend has a different purpose. OpenTelemetry lets you send telemetry to all of them simultaneously.

Multi-Exporter Setup

import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
import { BatchSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/sdk-trace-base';

const sdk = new NodeSDK({
  // Multiple trace exporters
  spanProcessors: [
    // Primary: OTLP to Jaeger
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: 'http://localhost:4318/v1/traces',
      })
    ),
    
    // Development: Console for debugging
    new BatchSpanProcessor(new ConsoleSpanExporter()),
  ],
  
  // Multiple metric readers
  // (`metricReaders` is available in recent sdk-node versions; older versions accept only a single `metricReader`)
  metricReaders: [
    // Push metrics via OTLP
    new PeriodicExportingMetricReader({
      exporter: new OTLPMetricExporter({
        url: 'http://localhost:4318/v1/metrics',
      }),
      exportIntervalMillis: 60000, // Export every 60s
    }),

    // Also expose a Prometheus scrape endpoint at http://localhost:9464/metrics
    new PrometheusExporter({ port: 9464 }),
  ],
});

sdk.start();

Cloud Provider Exporters

AWS CloudWatch (X-Ray)

The usual path is to run the AWS Distro for OpenTelemetry (ADOT) Collector next to your service and point your OTLP exporters at it; the Collector then forwards traces to X-Ray and metrics to CloudWatch.
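
A minimal application-side sketch, assuming the ADOT Collector is listening on localhost:4318 and its X-Ray/CloudWatch pipelines are configured separately:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// The app only speaks OTLP; the ADOT Collector (assumed at localhost:4318)
// forwards traces to X-Ray and metrics to CloudWatch.
const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(
      new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' })
    ),
  ],
});

sdk.start();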

Google Cloud Trace
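
Google ships a dedicated span exporter for Node. A minimal sketch, assuming the @google-cloud/opentelemetry-cloud-trace-exporter package is installed and Application Default Credentials are available:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { TraceExporter } from '@google-cloud/opentelemetry-cloud-trace-exporter';

// Project ID and credentials are resolved from the environment
// (GOOGLE_CLOUD_PROJECT / Application Default Credentials).
const sdk = new NodeSDK({
  spanProcessors: [new BatchSpanProcessor(new TraceExporter())],
});

sdk.start();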

Azure Monitor
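
Azure Monitor (Application Insights) also has its own exporter package. A minimal sketch, assuming @azure/monitor-opentelemetry-exporter and a connection string from your Application Insights resource:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { AzureMonitorTraceExporter } from '@azure/monitor-opentelemetry-exporter';

const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(
      new AzureMonitorTraceExporter({
        // Copied from the Application Insights resource
        connectionString: process.env.APPLICATIONINSIGHTS_CONNECTION_STRING,
      })
    ),
  ],
});

sdk.start();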

Building a Custom Exporter

Sometimes you need to send telemetry to a proprietary system. Here's how to build a custom exporter:
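
A span exporter is just a class implementing the SpanExporter interface from @opentelemetry/sdk-trace-base: export() receives finished spans and reports success or failure through a callback, and shutdown() cleans up. The sketch below posts spans as JSON to a hypothetical in-house endpoint (the URL and payload shape are illustrative):

import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode, hrTimeToMilliseconds } from '@opentelemetry/core';

export class InHouseExporter implements SpanExporter {
  constructor(private url: string) {}

  export(spans: ReadableSpan[], resultCallback: (result: ExportResult) => void): void {
    const payload = spans.map(span => ({
      traceId: span.spanContext().traceId,
      spanId: span.spanContext().spanId,
      name: span.name,
      durationMs: hrTimeToMilliseconds(span.duration),
      attributes: span.attributes,
      status: span.status,
    }));

    // Node 18+ global fetch; report the outcome so the span processor can react
    fetch(this.url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    })
      .then(() => resultCallback({ code: ExportResultCode.SUCCESS }))
      .catch(error => resultCallback({ code: ExportResultCode.FAILED, error }));
  }

  shutdown(): Promise<void> {
    return Promise.resolve();
  }
}

// Used like any built-in exporter:
// new BatchSpanProcessor(new InHouseExporter('https://telemetry.internal.example/v1/spans'))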

Filtering Spans Before Export

Sometimes you don't want to export everything:
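
One way is a wrapper exporter that drops unwanted spans before delegating to the real exporter. A sketch, with a health-check filter as an example predicate:

import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';

export class FilteringSpanExporter implements SpanExporter {
  constructor(
    private delegate: SpanExporter,
    private shouldExport: (span: ReadableSpan) => boolean
  ) {}

  export(spans: ReadableSpan[], resultCallback: (result: ExportResult) => void): void {
    const kept = spans.filter(span => this.shouldExport(span));
    if (kept.length === 0) {
      // Nothing left to send; report success so the batch is released
      resultCallback({ code: ExportResultCode.SUCCESS });
      return;
    }
    this.delegate.export(kept, resultCallback);
  }

  shutdown(): Promise<void> {
    return this.delegate.shutdown();
  }
}

// Example: drop health-check spans before they reach the backend
const filteredExporter = new FilteringSpanExporter(
  new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
  span => span.attributes['http.route'] !== '/healthz'
);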

Enriching Spans Before Export

Add extra attributes based on business logic:
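
A custom SpanProcessor can set attributes in onStart(), while the span is still writable. The attribute names and environment variables below are placeholders for whatever your business logic provides:

import { Context } from '@opentelemetry/api';
import { Span, SpanProcessor, ReadableSpan } from '@opentelemetry/sdk-trace-base';

export class EnrichingSpanProcessor implements SpanProcessor {
  onStart(span: Span, _parentContext: Context): void {
    // Attributes must be added before the span ends
    span.setAttribute('deployment.region', process.env.REGION ?? 'unknown');
    span.setAttribute('app.tenant', process.env.TENANT_ID ?? 'unknown');
  }

  onEnd(_span: ReadableSpan): void {}

  forceFlush(): Promise<void> {
    return Promise.resolve();
  }

  shutdown(): Promise<void> {
    return Promise.resolve();
  }
}

// Register it alongside the exporting processors:
// spanProcessors: [new EnrichingSpanProcessor(), new BatchSpanProcessor(...)]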

Batch Configuration

Control how spans are batched:
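
These are the options the BatchSpanProcessor constructor accepts as its second argument; the values shown are the defaults, as a starting point:

import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const processor = new BatchSpanProcessor(
  new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
  {
    maxQueueSize: 2048,         // Spans buffered in memory; new spans are dropped when full
    maxExportBatchSize: 512,    // Spans sent in a single export call
    scheduledDelayMillis: 5000, // How often a batch is flushed
    exportTimeoutMillis: 30000, // Abort an export attempt after this long
  }
);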

Tuning guidance:

  • High volume: Increase maxExportBatchSize to 1024+

  • Low latency: Decrease scheduledDelayMillis to 1000ms

  • Memory constrained: Decrease maxQueueSize

Exporter Error Handling

Handle export failures gracefully:
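
Because export() reports its outcome through a callback, any exporter can be wrapped with retries and logging. This is a sketch with a simple fixed backoff, not a production-grade retry policy (keep the total retry time below the processor's export timeout):

import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';

export class RetryingSpanExporter implements SpanExporter {
  constructor(private delegate: SpanExporter, private maxRetries = 3) {}

  export(spans: ReadableSpan[], resultCallback: (result: ExportResult) => void): void {
    const attempt = (triesLeft: number) => {
      this.delegate.export(spans, result => {
        if (result.code === ExportResultCode.SUCCESS) {
          resultCallback(result);
          return;
        }
        if (triesLeft === 0) {
          console.error('Span export failed after retries', result.error);
          resultCallback(result);
          return;
        }
        // Wait a second, then try again
        setTimeout(() => attempt(triesLeft - 1), 1000);
      });
    };
    attempt(this.maxRetries);
  }

  shutdown(): Promise<void> {
    return this.delegate.shutdown();
  }
}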

Multi-Environment Configuration

Different exporters for different environments:
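
A common pattern is to choose processors from NODE_ENV; the OTLP_ENDPOINT variable below is a project convention here, not a standard OpenTelemetry one:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import {
  BatchSpanProcessor,
  SimpleSpanProcessor,
  ConsoleSpanExporter,
} from '@opentelemetry/sdk-trace-base';

const isProduction = process.env.NODE_ENV === 'production';

const sdk = new NodeSDK({
  spanProcessors: isProduction
    ? [
        // Production: batch and ship to the Collector
        new BatchSpanProcessor(
          new OTLPTraceExporter({
            url: process.env.OTLP_ENDPOINT ?? 'http://localhost:4318/v1/traces',
          })
        ),
      ]
    : [
        // Development: print each span to the console immediately
        new SimpleSpanProcessor(new ConsoleSpanExporter()),
      ],
});

sdk.start();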

Production Multi-Backend Setup

Here's my actual production configuration:
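
In outline it combines the three backends from the start of this chapter: Jaeger for debugging, Prometheus for alerting, and CloudWatch via the ADOT Collector for compliance. The sketch below uses placeholder hostnames, environment variables, and service name rather than the real ones:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';

const sdk = new NodeSDK({
  serviceName: 'checkout-service', // placeholder

  spanProcessors: [
    // Short-term debugging: Jaeger's OTLP endpoint (7-day retention)
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: process.env.JAEGER_OTLP_URL ?? 'http://jaeger:4318/v1/traces',
      })
    ),
    // Long-term compliance: ADOT Collector, which forwards to X-Ray/CloudWatch (365 days)
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: process.env.ADOT_OTLP_URL ?? 'http://adot-collector:4318/v1/traces',
      })
    ),
  ],

  // Requires an SDK version that supports multiple metric readers
  metricReaders: [
    // Medium-term alerting: Prometheus scrapes this endpoint (30-day retention)
    new PrometheusExporter({ port: 9464 }),
    // Compliance metrics also go through the ADOT Collector to CloudWatch
    new PeriodicExportingMetricReader({
      exporter: new OTLPMetricExporter({
        url: process.env.ADOT_OTLP_METRICS_URL ?? 'http://adot-collector:4318/v1/metrics',
      }),
      exportIntervalMillis: 60000,
    }),
  ],
});

sdk.start();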

Monitoring Exporter Health

Track exporter success/failure:
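
One way is to wrap an exporter and count exported versus failed spans with the SDK's own metrics API; the counter names and the backend label below are illustrative:

import { metrics } from '@opentelemetry/api';
import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';

const meter = metrics.getMeter('exporter-health');
const exportedSpans = meter.createCounter('exporter.spans.exported');
const failedSpans = meter.createCounter('exporter.spans.failed');

export class MonitoredSpanExporter implements SpanExporter {
  constructor(private delegate: SpanExporter, private backend: string) {}

  export(spans: ReadableSpan[], resultCallback: (result: ExportResult) => void): void {
    this.delegate.export(spans, result => {
      const counter =
        result.code === ExportResultCode.SUCCESS ? exportedSpans : failedSpans;
      counter.add(spans.length, { backend: this.backend });
      resultCallback(result);
    });
  }

  shutdown(): Promise<void> {
    return this.delegate.shutdown();
  }
}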

In Prometheus, you can then graph or alert on the failure counter's rate (for example with rate() over a few minutes) and page when it stays above zero.

Best Practices

  1. Use multiple exporters for different purposes (debugging, alerting, compliance)

  2. Configure batching to reduce network overhead

  3. Handle failures gracefully with retries and error logging

  4. Monitor exporter health with metrics

  5. Filter unnecessary data before export to reduce costs

  6. Use the Collector for complex routing and transformation

  7. Test exporters in staging before production

What's Next

Continue to OpenTelemetry Collector to learn:

  • Centralized telemetry pipeline

  • Data transformation and filtering

  • Multi-backend routing

  • Scalability and high availability


Previous: ← Resource Detection | Next: OpenTelemetry Collector →

Export once, observe everywhere.
