
Mocking AWS S3 API

Why Mock S3?

Amazon S3 is a widely used object store. But directly integrating S3 into your local or automated test environments has drawbacks:

  • Credentials and IAM permissions slow down development and testing because secure access is hard to set up.
  • Every read/write operation is billed, including test runs. A performance test with high TPS can generate an unexpected bill.
  • Testing edge cases (e.g., 404 responses, timeouts, metadata headers) is impractical.
  • Certain CI/CD workflows can’t run offline or in isolated sandboxes.

A mock server solves this by intercepting S3 calls and returning pre-defined, controllable responses. Beeceptor enables this without writing a single line of backend mocking logic.

[Figure: Simple Storage Service mock server architecture]

Try a Live S3 Mock Server

Test S3 SDK calls without real AWS credentials. Mock responses, inspect requests, and simulate edge cases with Beeceptor.

Compatible with boto3, aws-sdk, and Java SDK
Inspect all S3 API calls in real time
Define custom XML/JSON responses and simulate errors

No AWS account needed. Free and instant to use.

Understanding S3 URL Semantics

S3 exposes two URL styles. The more common one in SDKs is virtual-hosted–style:

https://<bucket-name>.s3.<region>.amazonaws.com/<object-key>

For example:

https://company-bucket-name.s3.us-west-2.amazonaws.com/sample.txt

This means:

  • The bucket name (company-bucket-name) is encoded in the domain.
  • The region (us-west-2) is part of the subdomain.
  • The object key (sample.txt) is the path.

To mock this, we must preserve this structure in our mock endpoint.
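To make the structure concrete, here is a small sketch (not part of the AWS SDK) that splits a virtual-hosted-style URL into its bucket, region, and object-key components:

```python
from urllib.parse import urlparse

def parse_virtual_hosted_url(url: str):
    """Split a virtual-hosted-style S3 URL into (bucket, region, key)."""
    parsed = urlparse(url)
    # Hostname shape: <bucket>.s3.<region>.amazonaws.com
    host_parts = parsed.netloc.split(".")
    bucket = host_parts[0]
    region = host_parts[2]
    key = parsed.path.lstrip("/")
    return bucket, region, key

print(parse_virtual_hosted_url(
    "https://company-bucket-name.s3.us-west-2.amazonaws.com/sample.txt"))
# ('company-bucket-name', 'us-west-2', 'sample.txt')
```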

How Beeceptor Maps Bucket URLs

Beeceptor allows you to register a custom subdomain for mocking. Suppose you register s3-mock-bucket as the bucket, and use mock as the region placeholder:

https://s3-mock-bucket.mock.beeceptor.com/sample.txt

This mirrors S3’s virtual-hosted–style URLs:

  • s3-mock-bucket maps to the bucket name; it is also the name of your Beeceptor endpoint (mock server).
  • The mock acts as the static region placeholder.
  • The rest of the path simulates object keys.

Beeceptor can now intercept any HTTP request to that URL and respond as per your defined rules.

View S3 Mock Server https://app.beeceptor.com/mock-server/s3-mock-bucket
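The mapping is just string substitution. A minimal sketch, using the bucket and region names from above:

```python
bucket = "s3-mock-bucket"   # your Beeceptor endpoint name
region = "mock"             # static region placeholder
object_key = "sample.txt"

# Mirrors S3's virtual-hosted-style URL: <bucket>.<region>.beeceptor.com/<key>
mock_url = f"https://{bucket}.{region}.beeceptor.com/{object_key}"
print(mock_url)  # https://s3-mock-bucket.mock.beeceptor.com/sample.txt
```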

Configuring Your Environment

Set these environment variables before running your SDK code:

export AWS_ACCESS_KEY_ID=test-key
export AWS_SECRET_ACCESS_KEY=test-secret
export AWS_REGION=mock
export AWS_S3_BUCKET=s3-mock-bucket

These variables:

  • Are read by the SDK to avoid hardcoding
  • Use dummy values since authentication is bypassed
  • Use mock as a neutral region

  • Ensure consistent configuration across local environments, CI runners, and containerized tests
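Since the SDK examples below rely on these variables, it can help to fail fast when one is missing. A small sketch (the helper name is our own, not part of any SDK); in real use you would pass os.environ:

```python
REQUIRED_VARS = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_REGION",
    "AWS_S3_BUCKET",
]

def missing_vars(env):
    """Return the required variables that are unset or empty in `env`."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: a fully configured environment reports nothing missing.
configured = {
    "AWS_ACCESS_KEY_ID": "test-key",
    "AWS_SECRET_ACCESS_KEY": "test-secret",
    "AWS_REGION": "mock",
    "AWS_S3_BUCKET": "s3-mock-bucket",
}
print(missing_vars(configured))  # []
print(missing_vars({}))          # all four names
```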


Code Examples

Each example below configures the S3 client to use Beeceptor instead of AWS.

Python (boto3)

import boto3
import os

bucket = os.getenv("AWS_S3_BUCKET")
region = os.getenv("AWS_REGION")
endpoint_url = f"https://{bucket}.{region}.beeceptor.com"

s3 = boto3.client(
    's3',
    endpoint_url=endpoint_url,
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    region_name=region,
)

response = s3.get_object(Bucket=bucket, Key='sample.txt')
print(response['Body'].read().decode('utf-8'))

Expected output (when the mock is configured to return the contents of sample.txt):

Hello from Beeceptor!

Node.js (@aws-sdk/client-s3, v3)

const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const bucket = 's3-mock-bucket';
const region = 'mock';
const endpoint = 'https://s3-mock-bucket.mock.beeceptor.com';

const s3Client = new S3Client({
  region: region,
  endpoint: endpoint,
  credentials: {
    accessKeyId: 'mock-access-key',
    secretAccessKey: 'mock-secret-key',
  },
  forcePathStyle: true, // same as s3ForcePathStyle in v2
});

// Helper to convert a readable stream to a string
const streamToString = (stream) =>
  new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('error', reject);
    stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf-8')));
  });

async function getObjectFromS3() {
  try {
    const command = new GetObjectCommand({
      Bucket: bucket,
      Key: 'sample.txt',
    });

    const response = await s3Client.send(command);

    // Convert the response stream to a string
    const bodyContents = await streamToString(response.Body);
    console.log(bodyContents);
  } catch (err) {
    console.error('S3 Error', err);
  }
}

getObjectFromS3();

Java (AWS SDK v2)

import java.net.URI;
import java.nio.charset.StandardCharsets;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

String bucket = System.getenv("AWS_S3_BUCKET");
String region = System.getenv("AWS_REGION");
String endpoint = String.format("https://%s.%s.beeceptor.com", bucket, region);

S3Client s3 = S3Client.builder()
        .endpointOverride(URI.create(endpoint))
        .credentialsProvider(
                StaticCredentialsProvider.create(AwsBasicCredentials.create("test-key", "test-secret")))
        .region(Region.of(region))
        .build();

GetObjectRequest request = GetObjectRequest.builder()
        .bucket(bucket)
        .key("sample.txt")
        .build();

ResponseInputStream<GetObjectResponse> response = s3.getObject(request);
System.out.println(new String(response.readAllBytes(), StandardCharsets.UTF_8));

Simulating S3 Operations

Beeceptor allows you to simulate the behavior of S3 by intercepting HTTP requests and responding based on customizable rules. While it doesn’t replicate S3's internal implementation, it provides enough control over inputs and outputs to test client behavior, error handling, and integration flows—without accessing AWS.

Supported Operations

  • GET (retrieve an object): Use to return static or dynamic content for a given object key.
  • PUT (upload an object): Simulate successful uploads or quota-related errors.
  • DELETE (delete an object): Simulate 204 No Content or errors like 404 if the object doesn't exist.
  • HEAD (check object metadata): Useful for pre-checks before downloading large files.
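At the wire level, each of these operations is just an HTTP request against the object's URL, so the mock only has to match on method and path. A sketch using Python's standard library (requests are constructed but not sent, so no network access is needed):

```python
from urllib.request import Request

bucket, region, key = "s3-mock-bucket", "mock", "sample.txt"
url = f"https://{bucket}.{region}.beeceptor.com/{key}"

# One Request per S3 operation; the mock rule matches method + path.
operations = {
    "GetObject": Request(url, method="GET"),
    "PutObject": Request(url, data=b"hello world", method="PUT"),
    "DeleteObject": Request(url, method="DELETE"),
    "HeadObject": Request(url, method="HEAD"),
}

for name, req in operations.items():
    print(name, req.get_method(), req.full_url)
```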

Response Content Customization

Beeceptor supports dynamic templating within the response body. This is especially useful for simulating:

  • Metadata like LastModified, ETag, or ContentLength
  • Variable content sizes
  • Expiration and timestamp-based logic
  • Structured XML responses

Example: Mocked GET Response in XML

<?xml version="1.0" encoding="UTF-8"?>
<GetObjectOutput>
  <LastModified>{{faker 'date.recent' 'iso'}}</LastModified>
  <ContentLength>{{faker 'number.int' '{min:10000,max:25000}'}}</ContentLength>
  <ContentType>text/plain</ContentType>
  <ETag>"{{faker 'string.uuid'}}"</ETag>
</GetObjectOutput>

This simulates an object metadata response in the same shape as AWS S3.
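On the client side, a test can parse such a response with the standard library. A sketch using a resolved example of the template (the metadata values below are illustrative stand-ins for what the faker helpers would generate):

```python
import xml.etree.ElementTree as ET

# A resolved example of the templated response; values are illustrative.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<GetObjectOutput>
  <LastModified>2025-01-15T09:30:00.000Z</LastModified>
  <ContentLength>12400</ContentLength>
  <ContentType>text/plain</ContentType>
  <ETag>"0f8e5b9c-2d6a-4f0e-9c1b-7a3d2e5f8b10"</ETag>
</GetObjectOutput>"""

root = ET.fromstring(sample)
metadata = {child.tag: child.text for child in root}
print(metadata["ContentType"], metadata["ContentLength"])
```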

Signature Verification

Beeceptor does not perform any signature verification. It accepts all requests, including those with invalid, expired, or missing AWS signatures. This allows SDKs to work with dummy credentials and makes local testing frictionless.

Beeceptor cannot be used to validate SigV4 correctness, presigned URL behavior, or IAM-based access controls. Use it to simulate API flows, not AWS security policies. For signature validation, test against real S3 or compatible emulators.

Conclusion and Recommendations

Beeceptor provides a lightweight, public S3-compatible endpoint perfect for test environments. You can drop it into any AWS SDK by overriding the endpoint and region via environment config. It reduces operational burden while maintaining behavioral fidelity.

To get started:

  • Set your bucket name (s3-mock-bucket) and region (mock).
  • Point your SDK to https://s3-mock-bucket.mock.beeceptor.com.
  • Use dummy AWS credentials (test/test).
  • Enable forcePathStyle.

This setup accelerates development, CI, and performance testing for any S3-integrated system, without touching AWS and with no code changes beyond configuration.