# Custom Exporters

## Why Multiple Backends?

In my production environment, I need telemetry in three places:

1. **Jaeger** - For distributed tracing and debugging (short-term retention, 7 days)
2. **Prometheus** - For metrics and alerting (medium-term, 30 days)
3. **AWS CloudWatch** - For long-term compliance and auditing (365 days)

Each backend has a different purpose. OpenTelemetry lets you send telemetry to all of them simultaneously.

## Multi-Exporter Setup

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
import { BatchSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/sdk-trace-base';

const sdk = new NodeSDK({
  // Multiple trace exporters
  spanProcessors: [
    // Primary: OTLP to Jaeger
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: 'http://localhost:4318/v1/traces',
      })
    ),
    
    // Development: Console for debugging
    new BatchSpanProcessor(new ConsoleSpanExporter()),
  ],
  
  // Metric exporter (OTLP); recent versions of @opentelemetry/sdk-node
  // also accept a `metricReaders` array to register several readers
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: 'http://localhost:4318/v1/metrics',
    }),
    exportIntervalMillis: 60000, // Export every 60s
  }),
});

// Also expose a Prometheus scrape endpoint. Note that PrometheusExporter is
// itself a MetricReader: it only receives data once registered with the
// SDK's MeterProvider, not merely by being constructed.
const prometheusExporter = new PrometheusExporter({
  port: 9464,
}, () => {
  console.log('Prometheus endpoint: http://localhost:9464/metrics');
});

sdk.start();
```

## Cloud Provider Exporters

### AWS CloudWatch (X-Ray)

```bash
npm install @opentelemetry/exporter-trace-otlp-http
npm install @opentelemetry/propagator-aws-xray @opentelemetry/id-generator-aws-xray
```

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { AWSXRayPropagator } from '@opentelemetry/propagator-aws-xray';
import { AWSXRayIdGenerator } from '@opentelemetry/id-generator-aws-xray';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';

const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: 'order-service',
  }),
  
  textMapPropagator: new AWSXRayPropagator(),
  idGenerator: new AWSXRayIdGenerator(),
  
  traceExporter: new OTLPTraceExporter({
    // Use AWS Distro for OpenTelemetry Collector
    url: 'http://localhost:4318/v1/traces',
  }),
});

sdk.start();
```

**Run the AWS OTel Collector**:

```yaml
# aws-otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 50

exporters:
  awsxray:
    region: us-east-1

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [awsxray]
```

```bash
docker run --rm -p 4318:4318 \
  -v $(pwd)/aws-otel-collector-config.yaml:/etc/otel-collector-config.yaml \
  -e AWS_REGION=us-east-1 \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  public.ecr.aws/aws-observability/aws-otel-collector:latest \
  --config=/etc/otel-collector-config.yaml
```

### Google Cloud Trace

```bash
npm install @google-cloud/opentelemetry-cloud-trace-exporter
```

```typescript
import { TraceExporter } from '@google-cloud/opentelemetry-cloud-trace-exporter';
import { NodeSDK } from '@opentelemetry/sdk-node';

const sdk = new NodeSDK({
  traceExporter: new TraceExporter({
    projectId: 'my-gcp-project',
  }),
});

sdk.start();
```

### Azure Monitor

```bash
npm install @azure/monitor-opentelemetry-exporter
```

```typescript
import { AzureMonitorTraceExporter } from '@azure/monitor-opentelemetry-exporter';
import { NodeSDK } from '@opentelemetry/sdk-node';

const sdk = new NodeSDK({
  traceExporter: new AzureMonitorTraceExporter({
    connectionString: process.env.APPLICATIONINSIGHTS_CONNECTION_STRING,
  }),
});

sdk.start();
```

## Building a Custom Exporter

Sometimes you need to send telemetry to a proprietary system. Here's how to build a custom exporter:

```typescript
import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';
import axios from 'axios';

interface CustomBackendSpan {
  traceId: string;
  spanId: string;
  name: string;
  timestamp: number;
  duration: number;
  attributes: Record<string, any>;
}

export class CustomBackendExporter implements SpanExporter {
  private endpoint: string;
  private apiKey: string;
  
  constructor(config: { endpoint: string; apiKey: string }) {
    this.endpoint = config.endpoint;
    this.apiKey = config.apiKey;
  }
  
  async export(
    spans: ReadableSpan[],
    resultCallback: (result: ExportResult) => void
  ): Promise<void> {
    try {
      // Transform OTel spans to custom format
      const customSpans: CustomBackendSpan[] = spans.map(span => ({
        traceId: span.spanContext().traceId,
        spanId: span.spanContext().spanId,
        name: span.name,
        timestamp: span.startTime[0] * 1000 + span.startTime[1] / 1000000,
        duration: span.duration[0] * 1000 + span.duration[1] / 1000000,
        attributes: span.attributes,
      }));
      
      // Send to custom backend
      await axios.post(
        `${this.endpoint}/api/traces`,
        { spans: customSpans },
        {
          headers: {
            'Authorization': `Bearer ${this.apiKey}`,
            'Content-Type': 'application/json',
          },
          timeout: 5000,
        }
      );
      
      resultCallback({ code: ExportResultCode.SUCCESS });
    } catch (error) {
      console.error('Export failed:', error);
      resultCallback({
        code: ExportResultCode.FAILED,
        error: error as Error,
      });
    }
  }
  
  async shutdown(): Promise<void> {
    // Cleanup if needed
    console.log('Custom exporter shut down');
  }
  
  async forceFlush(): Promise<void> {
    // Force export any buffered spans
    console.log('Force flush');
  }
}

// Usage
const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(
      new CustomBackendExporter({
        endpoint: 'https://my-observability-platform.com',
        apiKey: process.env.CUSTOM_API_KEY!,
      })
    ),
  ],
});
```
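The `[seconds, nanoseconds]` arithmetic used for `timestamp` and `duration` above is easy to get wrong, so it is worth factoring into a helper. The function name here is my own, not part of any OTel package:

```typescript
// An OTel HrTime is a [seconds, nanoseconds] tuple, as found on
// ReadableSpan.startTime and ReadableSpan.duration
type HrTime = [number, number];

// Convert an HrTime tuple to fractional milliseconds
export function hrTimeToMillis([seconds, nanos]: HrTime): number {
  return seconds * 1000 + nanos / 1_000_000;
}
```

With this in place, the mapping above becomes `timestamp: hrTimeToMillis(span.startTime)` and `duration: hrTimeToMillis(span.duration)`.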

## Filtering Spans Before Export

Sometimes you don't want to export everything:

```typescript
import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';

export class FilteringExporter implements SpanExporter {
  private wrapped: SpanExporter;
  
  constructor(exporter: SpanExporter) {
    this.wrapped = exporter;
  }
  
  async export(
    spans: ReadableSpan[],
    resultCallback: (result: ExportResult) => void
  ): Promise<void> {
    // Filter out health check spans
    const filtered = spans.filter(span => {
      // Skip health checks
      if (span.name === 'GET /health') {
        return false;
      }
      
      // Skip very short spans (< 1ms)
      const durationMs = span.duration[0] * 1000 + span.duration[1] / 1000000;
      if (durationMs < 1) {
        return false;
      }
      
      return true;
    });
    
    // Nothing left to send — report success without calling the backend
    if (filtered.length === 0) {
      resultCallback({ code: ExportResultCode.SUCCESS });
      return;
    }
    
    // Export filtered spans
    return this.wrapped.export(filtered, resultCallback);
  }
  
  async shutdown(): Promise<void> {
    return this.wrapped.shutdown();
  }
  
  async forceFlush(): Promise<void> {
    return this.wrapped.forceFlush();
  }
}

// Usage
const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(
      new FilteringExporter(
        new OTLPTraceExporter({
          url: 'http://localhost:4318/v1/traces',
        })
      )
    ),
  ],
});
```
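Pulling the filter rule out into a standalone predicate makes it unit-testable without wiring up an exporter. The `SpanLike` shape below is a hypothetical minimal interface for testing, not an OTel type:

```typescript
// Minimal shape of the span fields the predicate needs (illustrative)
interface SpanLike {
  name: string;
  durationMs: number;
}

// Returns true if the span should be exported, mirroring the rules
// inside FilteringExporter above
export function shouldExport(span: SpanLike): boolean {
  if (span.name === 'GET /health') return false; // drop health checks
  if (span.durationMs < 1) return false;         // drop sub-millisecond noise
  return true;
}
```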

## Enriching Spans Before Export

Add extra attributes based on business logic:

```typescript
import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult } from '@opentelemetry/core';

export class EnrichingExporter implements SpanExporter {
  private wrapped: SpanExporter;
  
  constructor(exporter: SpanExporter) {
    this.wrapped = exporter;
  }
  
  async export(
    spans: ReadableSpan[],
    resultCallback: (result: ExportResult) => void
  ): Promise<void> {
    // Enrich spans
    const enriched = spans.map(span => {
      // Add cost estimation based on duration
      const durationMs = span.duration[0] * 1000 + span.duration[1] / 1000000;
      const costPerMs = 0.00001; // $0.00001 per millisecond
      
      // Clone through the span's prototype so methods like spanContext()
      // survive — a plain object spread would silently drop them
      const clone = Object.create(
        Object.getPrototypeOf(span),
        Object.getOwnPropertyDescriptors(span)
      );
      clone.attributes = {
        ...span.attributes,
        'span.cost_usd': durationMs * costPerMs,
        'span.export_time': Date.now(),
        'span.environment': process.env.NODE_ENV || 'development',
      };
      return clone as ReadableSpan;
    });
    
    return this.wrapped.export(enriched, resultCallback);
  }
  
  async shutdown(): Promise<void> {
    return this.wrapped.shutdown();
  }
  
  async forceFlush(): Promise<void> {
    return this.wrapped.forceFlush();
  }
}
```

## Batch Configuration

Control how spans are batched:

```typescript
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: 'http://localhost:4318/v1/traces',
      }),
      {
        maxQueueSize: 2048,           // Maximum queue size
        maxExportBatchSize: 512,      // Spans per batch
        scheduledDelayMillis: 5000,   // Export every 5 seconds
        exportTimeoutMillis: 30000,   // 30s timeout
      }
    ),
  ],
});
```

**Tuning guidance**:

* **High volume**: Increase `maxExportBatchSize` to 1024+
* **Low latency**: Decrease `scheduledDelayMillis` to 1000ms
* **Memory constrained**: Decrease `maxQueueSize`
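Those three tuning rules can be written down as a small helper so the choice is explicit in code. The profile names and numbers here are my own illustrative picks, not OTel defaults:

```typescript
// Batch-processor settings matching the tuning guidance above
interface BatchConfig {
  maxQueueSize: number;
  maxExportBatchSize: number;
  scheduledDelayMillis: number;
  exportTimeoutMillis: number;
}

export function batchConfigFor(
  profile: 'high-volume' | 'low-latency' | 'low-memory'
): BatchConfig {
  const base: BatchConfig = {
    maxQueueSize: 2048,
    maxExportBatchSize: 512,
    scheduledDelayMillis: 5000,
    exportTimeoutMillis: 30000,
  };
  switch (profile) {
    case 'high-volume':
      // Bigger batches amortize network overhead
      return { ...base, maxExportBatchSize: 1024 };
    case 'low-latency':
      // Export more frequently so spans show up sooner
      return { ...base, scheduledDelayMillis: 1000 };
    case 'low-memory':
      // Smaller queue bounds worst-case memory use
      return { ...base, maxQueueSize: 512 };
  }
}
```

The result is what you would pass as the second argument to `BatchSpanProcessor`.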

## Exporter Error Handling

Handle export failures gracefully:

```typescript
import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';

export class RetryingExporter implements SpanExporter {
  private wrapped: SpanExporter;
  private maxRetries: number;
  
  constructor(exporter: SpanExporter, maxRetries = 3) {
    this.wrapped = exporter;
    this.maxRetries = maxRetries;
  }
  
  async export(
    spans: ReadableSpan[],
    resultCallback: (result: ExportResult) => void
  ): Promise<void> {
    let lastError: Error | null = null;
    
    for (let attempt = 0; attempt < this.maxRetries; attempt++) {
      try {
        await new Promise<void>((resolve, reject) => {
          this.wrapped.export(spans, (result) => {
            if (result.code === ExportResultCode.SUCCESS) {
              resolve();
            } else {
              reject(new Error(`Export failed: ${result.error}`));
            }
          });
        });
        
        // Success!
        resultCallback({ code: ExportResultCode.SUCCESS });
        return;
      } catch (error) {
        lastError = error as Error;
        
        // Exponential backoff (no point waiting after the final attempt)
        if (attempt < this.maxRetries - 1) {
          const delay = Math.pow(2, attempt) * 1000;
          console.warn(`Export attempt ${attempt + 1} failed, retrying in ${delay}ms...`);
          await new Promise(r => setTimeout(r, delay));
        }
      }
    }
    
    // All retries failed
    console.error(`Export failed after ${this.maxRetries} attempts:`, lastError);
    resultCallback({
      code: ExportResultCode.FAILED,
      error: lastError || new Error('Unknown error'),
    });
  }
  
  async shutdown(): Promise<void> {
    return this.wrapped.shutdown();
  }
  
  async forceFlush(): Promise<void> {
    return this.wrapped.forceFlush();
  }
}

// Usage
const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(
      new RetryingExporter(
        new OTLPTraceExporter({
          url: 'http://localhost:4318/v1/traces',
        }),
        3 // max retries
      )
    ),
  ],
});
```
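The backoff schedule inside `RetryingExporter` is simple enough to test in isolation as a pure function. The cap on the maximum delay below is my addition, not part of the class above:

```typescript
// Exponential backoff: 1s, 2s, 4s, ... capped at maxDelayMs
// (the cap is an addition to the schedule used by RetryingExporter)
export function backoffDelayMs(attempt: number, maxDelayMs = 30000): number {
  return Math.min(Math.pow(2, attempt) * 1000, maxDelayMs);
}
```

A cap matters in practice: without one, a long outage makes attempt 10 wait over 17 minutes.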

## Multi-Environment Configuration

Different exporters for different environments:

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-base';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';

function getTraceExporter() {
  const env = process.env.NODE_ENV || 'development';
  
  switch (env) {
    case 'production':
      return new BatchSpanProcessor(
        new OTLPTraceExporter({
          url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://otel-collector:4318/v1/traces',
          headers: {
            'x-api-key': process.env.OTEL_API_KEY!,
          },
        })
      );
    
    case 'staging':
      return new BatchSpanProcessor(
        new OTLPTraceExporter({
          url: 'http://staging-otel-collector:4318/v1/traces',
        })
      );
    
    case 'development':
    default:
      return new BatchSpanProcessor(new ConsoleSpanExporter());
  }
}

const sdk = new NodeSDK({
  spanProcessors: [getTraceExporter()],
});

sdk.start();
```
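The switch above is easier to unit-test if the decision is separated from SDK construction. A sketch, with illustrative names and the same endpoints as the example:

```typescript
type ExporterKind = 'otlp' | 'console';

interface ExporterChoice {
  kind: ExporterKind;
  url?: string;
}

// Pure decision function mirroring the switch above — testable without
// instantiating any SDK classes
export function chooseExporter(env: string, otlpEndpoint?: string): ExporterChoice {
  switch (env) {
    case 'production':
      return {
        kind: 'otlp',
        url: otlpEndpoint || 'http://otel-collector:4318/v1/traces',
      };
    case 'staging':
      return { kind: 'otlp', url: 'http://staging-otel-collector:4318/v1/traces' };
    default:
      return { kind: 'console' };
  }
}
```

`getTraceExporter` then just maps the choice to a `BatchSpanProcessor` around the matching exporter class.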

## Production Multi-Backend Setup

Here's my actual production configuration:

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { Resource } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION } from '@opentelemetry/semantic-conventions';

// Trace exporters
const jaegerExporter = new OTLPTraceExporter({
  url: process.env.JAEGER_ENDPOINT || 'http://jaeger:4318/v1/traces',
});

const cloudwatchExporter = new OTLPTraceExporter({
  url: process.env.CLOUDWATCH_ENDPOINT || 'http://aws-otel-collector:4318/v1/traces',
});

// Metric exporters
const prometheusExporter = new PrometheusExporter({
  port: 9464,
  endpoint: '/metrics',
});

const cloudwatchMetricExporter = new OTLPMetricExporter({
  url: process.env.CLOUDWATCH_METRICS_ENDPOINT || 'http://aws-otel-collector:4318/v1/metrics',
});

const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: process.env.SERVICE_NAME || 'order-service',
    [ATTR_SERVICE_VERSION]: process.env.SERVICE_VERSION || '1.0.0',
  }),
  
  // Send traces to both Jaeger (debugging) and CloudWatch (compliance)
  spanProcessors: [
    new BatchSpanProcessor(jaegerExporter, {
      maxExportBatchSize: 512,
      scheduledDelayMillis: 5000,
    }),
    new BatchSpanProcessor(cloudwatchExporter, {
      maxExportBatchSize: 256,
      scheduledDelayMillis: 10000, // Less frequent for compliance
    }),
  ],
  
  // Metrics to both Prometheus (alerting) and CloudWatch (long-term).
  // PrometheusExporter is itself a MetricReader, so register it alongside
  // the OTLP reader (recent SDK versions accept a `metricReaders` array)
  metricReaders: [
    prometheusExporter,
    new PeriodicExportingMetricReader({
      exporter: cloudwatchMetricExporter,
      exportIntervalMillis: 60000,
    }),
  ],
});

sdk.start();

console.log('OpenTelemetry exporters configured:');
console.log('- Traces: Jaeger + CloudWatch');
console.log('- Metrics: Prometheus (port 9464) + CloudWatch');
```

## Monitoring Exporter Health

Track exporter success/failure:

```typescript
import { metrics } from '@opentelemetry/api';
import { SpanExporter, ReadableSpan } from '@opentelemetry/sdk-trace-base';
import { ExportResult, ExportResultCode } from '@opentelemetry/core';

const meter = metrics.getMeter('exporter-health');

const exportSuccessCounter = meter.createCounter('exporter.export.success', {
  description: 'Successful exports',
});

const exportFailureCounter = meter.createCounter('exporter.export.failure', {
  description: 'Failed exports',
});

export class MonitoredExporter implements SpanExporter {
  private wrapped: SpanExporter;
  private exporterName: string;
  
  constructor(exporter: SpanExporter, name: string) {
    this.wrapped = exporter;
    this.exporterName = name;
  }
  
  async export(
    spans: ReadableSpan[],
    resultCallback: (result: ExportResult) => void
  ): Promise<void> {
    return this.wrapped.export(spans, (result) => {
      if (result.code === ExportResultCode.SUCCESS) {
        exportSuccessCounter.add(1, {
          exporter: this.exporterName,
        });
      } else {
        exportFailureCounter.add(1, {
          exporter: this.exporterName,
          error: result.error?.message || 'unknown',
        });
      }
      
      resultCallback(result);
    });
  }
  
  async shutdown(): Promise<void> {
    return this.wrapped.shutdown();
  }
  
  async forceFlush(): Promise<void> {
    return this.wrapped.forceFlush();
  }
}
```

Query in Prometheus:

```promql
# Export success rate
rate(exporter_export_success_total[5m]) /
(rate(exporter_export_success_total[5m]) + rate(exporter_export_failure_total[5m]))

# Alert on export failures
rate(exporter_export_failure_total{exporter="jaeger"}[5m]) > 0.1
```
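The same success-rate calculation is handy as a plain function, for dashboards or tests. Returning 1 when there is no traffic avoids a divide-by-zero false alarm — that guard is my choice, not implied by the PromQL:

```typescript
// Export success rate, mirroring the PromQL query above; a quiet
// exporter (no traffic at all) counts as healthy
export function exportSuccessRate(successes: number, failures: number): number {
  const total = successes + failures;
  return total === 0 ? 1 : successes / total;
}
```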

## Best Practices

1. **Use multiple exporters** for different purposes (debugging, alerting, compliance)
2. **Configure batching** to reduce network overhead
3. **Handle failures gracefully** with retries and error logging
4. **Monitor exporter health** with metrics
5. **Filter unnecessary data** before export to reduce costs
6. **Use the Collector** for complex routing and transformation
7. **Test exporters in staging** before production

## What's Next

Continue to [OpenTelemetry Collector](https://blog.htunnthuthu.com/devops-and-sre/opentelemetry-101/opentelemetry-101-collector) to learn:

* Centralized telemetry pipeline
* Data transformation and filtering
* Multi-backend routing
* Scalability and high availability

***

**Previous**: [← Resource Detection](https://blog.htunnthuthu.com/devops-and-sre/opentelemetry-101/opentelemetry-101-resource-detection) | **Next**: [OpenTelemetry Collector →](https://blog.htunnthuthu.com/devops-and-sre/opentelemetry-101/opentelemetry-101-collector)

*Export once, observe everywhere.*
