Observability & Telemetry (OTLP)

Beeceptor provides comprehensive observability for your mock servers through OpenTelemetry Protocol (OTLP) support. This enterprise feature exports a distributed trace for every request hitting your mock endpoints to the observability platform of your choice, whether that is a self-hosted collector or a SaaS vendor.

info

This feature is available with the Enterprise plan.

Why Observability?

API mocking is a critical part of testing and integration workflows. Without observability, mock servers become blind spots in your distributed system. Adding telemetry to mock endpoints provides:

  • Distributed Tracing Correlation: Trace IDs propagate through mock responses, allowing you to connect test runs with specific mock behaviors in your APM dashboards
  • Performance Baselines: Measure response times, payload sizes, and latency percentiles to identify slow rules or misconfigured proxies before they impact test pipelines
  • Request Volume Metrics: Track request counts, error rates, and status code distributions per endpoint to understand usage patterns and detect anomalies
  • Failure Diagnosis: Pinpoint which rule matched, whether an upstream callout failed, and the exact error codes returned—all in one trace view

Beeceptor's OTLP integration exports this trace data directly to your observability stack (Jaeger, Grafana Tempo, Datadog, etc.). It is built on OpenTelemetry, the industry standard for distributed tracing. This means:

  • Vendor-agnostic: Works with any OTLP-compatible backend (Jaeger, Tempo, Datadog, Honeycomb, etc.).
  • Standardized format: Uses OpenTelemetry Protocol (OTLP) over HTTP.
  • No lock-in: Switch observability platforms without changing Beeceptor configuration.
  • Safe by design: Only relevant request metadata is exported; no request/response bodies, internal implementation details, or stack traces.

Configuration

Setting up observability requires configuring an OTLP endpoint where Beeceptor will export trace data. This is a one-time setup at the organization level. Once configured, all endpoints within your organization automatically emit traces.

Step 1: Access Observability Settings

Navigate to the observability settings as an organization admin:

  1. Click on Manage Organization from the top navigation bar
  2. Select the Observability tab from the settings menu
  3. Toggle Enable Observability to activate trace exports

Once enabled, Beeceptor will start exporting traces for every request hitting your mock endpoints.

Observability Settings in Beeceptor

Step 2: Configure OTLP Endpoint

The OTLP endpoint URL determines where Beeceptor sends trace data. This should point to an OTLP-compatible collector or directly to your observability platform's ingestion endpoint.

OTLP Endpoint URL

Enter the full URL of your OTLP receiver. Beeceptor uses HTTP transport and automatically appends /v1/traces to the path if not already present.

Example endpoint URLs:

Platform | Endpoint URL
Self-hosted Collector | http://your-server-ip:4318/v1/traces
New Relic | https://otlp.nr-data.net
Honeycomb | https://api.honeycomb.io
Grafana Cloud | https://otlp-gateway-<region>.grafana.net/otlp
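If you are unsure which form of URL to enter, the appending behavior described above can be pictured with a short sketch. This is illustrative only, not Beeceptor's actual implementation:

```python
def normalize_otlp_endpoint(url: str) -> str:
    # Illustrative: append the standard OTLP traces path when it is missing.
    trimmed = url.rstrip("/")
    return trimmed if trimmed.endswith("/v1/traces") else trimmed + "/v1/traces"

# Hypothetical inputs, showing the resulting export URL:
print(normalize_otlp_endpoint("https://api.honeycomb.io"))              # https://api.honeycomb.io/v1/traces
print(normalize_otlp_endpoint("http://your-server-ip:4318/v1/traces"))  # unchanged
```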

Authentication & Custom Headers

Most observability platforms require authentication via HTTP headers. Add the headers your backend expects. These are sent with every trace export request from Beeceptor.

Header | Value | Used By
x-api-key | Your API key | Generic OTLP backends
DD-API-KEY | Your Datadog API key | Datadog
Authorization | Basic <base64-credentials> | Grafana Cloud, basic auth
x-honeycomb-team | Your team API key | Honeycomb
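Before saving, you can confirm that the endpoint URL and headers are accepted by your backend by exporting a throwaway span yourself. Below is a minimal sketch using the OpenTelemetry Python SDK; the endpoint and header values are placeholders, so substitute the ones you plan to enter in Beeceptor:

```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Placeholder endpoint and auth header; use your own values.
exporter = OTLPSpanExporter(
    endpoint="https://api.honeycomb.io/v1/traces",
    headers={"x-honeycomb-team": "YOUR_API_KEY"},
)
provider = TracerProvider(resource=Resource.create({"service.name": "otlp-smoke-test"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Emit one test span; it should show up in your backend under service "otlp-smoke-test".
with trace.get_tracer(__name__).start_as_current_span("beeceptor-config-check"):
    pass
provider.force_flush()
```

If the test span does not arrive, fix the URL or headers before entering them in Beeceptor.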

Step 3: Save & Verify

After entering your endpoint and authentication details:

  1. Click Save to apply the configuration
  2. Send a test request to any endpoint in your organization (e.g., curl https://your-endpoint.proxy.beeceptor.com/test)
  3. Open your observability platform and search for traces with service.name = beeceptor-mock

If traces appear within a few seconds, your configuration is working. If not, verify your endpoint URL and authentication headers.
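If your backend happens to be Jaeger, the same check can be scripted against Jaeger's HTTP query API. A sketch, assuming a default deployment with the query service on port 16686 and a placeholder hostname:

```python
import requests

# Ask Jaeger for recent traces reported by Beeceptor.
resp = requests.get(
    "http://your-jaeger-host:16686/api/traces",
    params={"service": "beeceptor-mock", "limit": 20},
    timeout=10,
)
resp.raise_for_status()
traces = resp.json().get("data", [])
print(f"Found {len(traces)} recent beeceptor-mock traces")
```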

Grafana Explore showing list of traces with beeceptor-mock service

Traced Data

Every HTTP request to your mock server generates a distributed trace. Each trace contains a root span representing the full request lifecycle, along with child spans for specific operations like rule evaluation and upstream calls.

Root Span

The root span is named mock.request and captures the complete request-response cycle. It includes metadata about the incoming request, the matched rule (if any), and the final response.

Request Attributes

Attribute | Description
service.name | Always beeceptor-mock for identification in your APM
endpoint.name | Your Beeceptor endpoint name (e.g., my-api)
http.method | HTTP method (GET, POST, PUT, etc.)
http.target | Request path (e.g., /users/123)
http.scheme | Protocol used (http or https)
http.host | Value of the Host header
http.client_socket.address | Client IP address
request.id | Unique identifier for correlating logs

Response Attributes

Attribute | Description
http.status_code | HTTP status code returned to the client
http.request.body.size | Size of the request payload in bytes
http.response.body.size | Size of the response payload in bytes
request.duration_ms | Total time from request received to response sent, in milliseconds

Rule Matching Attributes

Attribute | Description
rule.matched | true if a mock rule matched, false otherwise
rule.id | ID of the matched rule (useful for debugging)
rule.type | Type of rule: mock, callout, crud, etc.
response.type | How the response was generated: mock, proxy_global, local_tunnel, oas, graphql, etc.
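These attributes are handy in backend queries. For example, traces where rule.matched is false are requests that fell through without a matching rule. A sketch of such a filter against Jaeger's HTTP query API (the hostname is a placeholder):

```python
import json
import requests

# Find recent traces where no mock rule matched; Jaeger's tags filter takes a JSON-encoded map.
resp = requests.get(
    "http://your-jaeger-host:16686/api/traces",
    params={
        "service": "beeceptor-mock",
        "tags": json.dumps({"rule.matched": "false"}),
        "limit": 20,
    },
    timeout=10,
)
resp.raise_for_status()
unmatched = resp.json().get("data", [])
print(f"{len(unmatched)} traces had no matching rule")
```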

Proxy Attributes (when applicable)

These attributes appear on the root span when the matched rule makes an external HTTP callout.

Attribute | Description
proxy.target | Upstream URL (partially obfuscated for security)
proxy.upstream_status | HTTP status code from the upstream service
proxy.latency_ms | Time taken by the upstream call, in milliseconds
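Putting the tables above together, the root span for a simple mocked request carries an attribute set along these lines. All values below are made-up examples for illustration:

```python
# Illustrative attribute set for one mock.request root span; values are examples, not real data.
root_span_attributes = {
    "service.name": "beeceptor-mock",
    "endpoint.name": "my-api",
    "http.method": "GET",
    "http.target": "/users/123",
    "http.scheme": "https",
    "http.host": "my-api.proxy.beeceptor.com",
    "http.client_socket.address": "203.0.113.7",
    "request.id": "example-request-id",
    "http.status_code": 200,
    "http.request.body.size": 0,
    "http.response.body.size": 512,
    "request.duration_ms": 12,
    "rule.matched": True,
    "rule.id": "example-rule-id",
    "rule.type": "mock",
    "response.type": "mock",
}
```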

Child Spans

Child spans provide granular visibility into specific operations within the request lifecycle.

rule.evaluation

This span captures the rule matching phase. Use it to understand how many rules were evaluated and which one (if any) matched the incoming request.

Attribute | Description
rules.count | Total number of rules evaluated
rule.matched | Whether any rule matched
rule.id | ID of the matched rule

proxy.upstream

Created when a proxy or callout rule forwards the request to an external service. This span helps you measure upstream latency and detect failures in your backend dependencies.

Attribute | Description
http.method | HTTP method used for the upstream call
http.status_code | Response status from the upstream service
http.response.body.size | Size of the upstream response
upstream.latency_ms | Time taken by the upstream service
error | true if the upstream call failed

tunnel.upstream

Created when requests are forwarded through Beeceptor's local tunnel to a service running on your machine. This span is useful for debugging connectivity issues between Beeceptor and your local development environment.

Attribute | Description
http.status_code | Response status from your local service
http.response.body.size | Size of the response
upstream.latency_ms | Response time from your local service
upstream.timeout | true if the tunnel request timed out
error.type | Error code if the connection failed (e.g., ECONNREFUSED)

SaaS Platforms (Setup)

Beeceptor exports traces over HTTP using the OTLP protocol. Most SaaS observability platforms accept OTLP data directly. Below are setup instructions for common platforms.

Service | Details
New Relic | Requires api-key header with your New Relic license key. Refer to the New Relic OTLP integration guide for setup.
Datadog | Requires DD-API-KEY header with your API key. Refer to the Datadog OTLP ingestion documentation for endpoint configuration and setup instructions.
Grafana Cloud | Requires Authorization header with base64-encoded instanceID:token. See the Grafana Cloud OTLP documentation for endpoint URLs by region.
Honeycomb | Requires x-honeycomb-team header with your API key and x-honeycomb-dataset header to specify your dataset. See the Honeycomb OTLP documentation for details.
Lightstep | Requires lightstep-access-token header with your access token. See the Lightstep OTLP documentation for configuration details.
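Grafana Cloud is the one entry above that needs an extra encoding step: the Authorization header value is the word Basic followed by the base64 of instanceID:token. A small sketch, with placeholder credentials:

```python
import base64

# Placeholder credentials; use your Grafana Cloud OTLP instance ID and API token.
instance_id = "123456"
token = "glc_example_token"

encoded = base64.b64encode(f"{instance_id}:{token}".encode()).decode()
authorization_header = f"Basic {encoded}"
print(authorization_header)  # Paste this value as the Authorization header in Beeceptor.
```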

FAQ

Q: Does this work with the Free plan?
A: No, observability features are only available with the Enterprise plan.

Q: Can I send traces to multiple backends?
A: Yes, use an OTLP Collector with multiple exporters configured.

Q: Are request/response bodies captured?
A: No. Only metadata (headers, status codes, payload sizes) is captured, for security and performance reasons.

Q: How long are traces retained?
A: Retention is determined by your observability backend (Tempo, Jaeger, etc.), not Beeceptor.

Q: Can I disable telemetry for specific endpoints?
A: No. Telemetry is configured at the organization level and applies to all endpoints in that organization.
