Customize Test Data for OpenAPI Mock Servers

When you create a mock server from an OpenAPI specification, Beeceptor can generate Intelligent Mocks. This mode uses AI to examine the API schema, field names, and descriptions to produce realistic response payloads automatically.

Instead of returning generic placeholders like "string" or 0, Intelligent Mocking produces values that resemble real data. For example:

  • email fields receive realistic email addresses
  • product_name fields receive meaningful product names
  • created_at fields receive properly formatted timestamps

This allows your mock server to respond with data that closely resembles what a real backend system would return.
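As a sketch of what this looks like in practice, consider a hypothetical schema fragment from an OpenAPI specification (the entity and field names below are illustrative, and the generated values are examples of the kind of data produced, not exact output):

```yaml
# Hypothetical component schema in an OpenAPI specification:
User:
  type: object
  properties:
    email:        { type: string }
    product_name: { type: string }
    created_at:   { type: string, format: date-time }

# A naive mock generator might return placeholders:
#   { "email": "string", "product_name": "string", "created_at": "string" }
#
# Intelligent Mocking could instead return something like:
#   { "email": "maria.lopez@example.com",
#     "product_name": "Wireless Ergonomic Keyboard",
#     "created_at": "2024-05-18T09:42:10Z" }
```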

However, automated generation is only a starting point. Even when AI generates sensible values, real systems often require specific data patterns, formats, or constraints that the specification does not fully describe.

Beeceptor provides a dedicated UI that lets you review and control how test data is generated for every field. With this interface, you can replace AI generators, configure value ranges, or define fixed values without modifying the OpenAPI specification.

The result is a mock server that remains schema-driven but gives you precise control over the data returned by each endpoint.

Common use cases

Even when AI generates sensible values, certain fields benefit from tighter control. This interface is often used to refine how data appears in responses.

  • Domain realism: Replace generic generators with domain-specific ones such as product names, addresses, pricing values, or contact details so responses resemble real application data.
  • Business-state fields: Constrain values like status, role, or tier using controlled options so UI states remain predictable.
  • Structured identifiers: Define patterns for invoice IDs, account codes, or internal keys to match real identifier formats.
  • Date correctness: Ensure string fields behave as timestamps or dates when temporal values are expected.
  • Cross-endpoint consistency: Keep field behavior consistent when the same property appears across multiple endpoints.
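Several of the use cases above map to constraints that an OpenAPI schema can already express (enums for business states, patterns for identifiers, formats for dates); the Customize Test Data UI lets you enforce the same behavior even when the specification omits them. The following fragment is an illustrative sketch, with hypothetical entity and field names:

```yaml
# Illustrative schema fragment showing constraints that mirror the use cases above:
Invoice:
  type: object
  properties:
    status:
      type: string
      enum: [draft, sent, paid, overdue]    # business-state field with controlled options
    invoice_id:
      type: string
      pattern: '^INV-[0-9]{4}-[0-9]{6}$'    # structured identifier format
    issued_at:
      type: string
      format: date                          # temporal value, not a free-form string
```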

How This Works

Customize Test Data

After uploading your OpenAPI specification and enabling Intelligent Mocking, Beeceptor generates responses for each endpoint automatically.

To review or change how values are produced:

  1. Navigate to the endpoint's Settings page.
  2. Under the API Specifications section, locate Customize Test Data and open it.

[Screenshot: OpenAPI test data configuration screen]

Editing Data Generators

On the next screen, you'll see a table listing all the schema fields. Each field is configurable, making it easy to review how data is generated for API responses.

  • Each row represents a single property from the OpenAPI schema.
  • The Property column shows the technical field name along with its schema or entity context. This helps you identify where the property belongs within the specification.
  • The Used in APIs column lists the endpoints or operations where the field appears. This is useful when the same property is reused across multiple APIs.

For large specifications, the interface includes search and filtering tools. You can quickly locate a field by property name or narrow results by API path. Pagination helps navigate large schemas without overwhelming the view.

To modify a field, click the Edit icon. A configuration modal opens where you can pick a data generator.

Applying Changes

You can make several edits before applying them. All modifications are staged first, allowing you to review them together.

When ready, click Review and Commit to apply the changes. Until then, staged updates remain inactive and can be discarded individually or cleared entirely.