# Rate Limiting
FractalPack enforces per-organization rate limits on compute-intensive endpoints. CRUD endpoints (items, containers, orders) are not rate limited.
## Rate limit categories

| Category | Endpoints | Description |
|---|---|---|
| `pack` | `POST /api/v1/pack` | Single pack requests |
| `batch` | `POST /api/v1/batch` | Batch pack jobs |
| `rates` | `POST /api/v1/rates`, `/api/v1/ltl-rates`, `/api/v1/table-rates` | Shipping rate lookups |
Limits are enforced per organization, not per API key. All keys belonging to the same org share the same budget.
## Response headers

Every response from a rate-limited endpoint includes these headers:

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the current window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Seconds until the rate limit window resets |
Use these headers to implement client-side throttling before hitting the limit.
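For example, a small helper can turn those headers into a pause decision. This is an illustrative sketch, not SDK code; the `min_remaining` threshold is an arbitrary choice:

```python
def seconds_to_wait(headers, min_remaining=5):
    """Return how long to pause before the next request, based on the
    rate-limit headers of the previous response. min_remaining is an
    illustrative threshold, not an API-defined value."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    reset = int(headers.get("X-RateLimit-Reset", "0"))
    if remaining < min_remaining:
        # Budget is nearly exhausted: wait out the rest of the window.
        return reset
    return 0  # Plenty of budget left; proceed immediately.
```

Calling `time.sleep(seconds_to_wait(resp.headers))` after each response keeps the client from ever reaching the limit.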
## 429 Too Many Requests

When you exceed the limit, the API returns `429` with a `Retry-After` header:

```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/problem+json
Retry-After: 30
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 30

{
  "type": "https://tools.ietf.org/html/rfc6585#section-4",
  "title": "Too Many Requests",
  "status": 429,
  "detail": "Rate limit exceeded for this organization. Try again in 30 seconds."
}
```
## Exponential backoff

When you receive a `429`, wait the `Retry-After` duration before retrying. For repeated failures, use exponential backoff:

**Python**

```python
import time
import requests

def pack_with_backoff(api_key, payload, max_retries=5):
    url = "https://api.fractalpack.com/api/v1/pack"
    headers = {"X-Api-Key": api_key, "Content-Type": "application/json"}
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code != 429:
            return resp
        # Honor the server's hint, but never wait less than 2**attempt seconds.
        retry_after = int(resp.headers.get("Retry-After", 1))
        wait = max(retry_after, 2 ** attempt)
        print(f"Rate limited. Retrying in {wait}s (attempt {attempt + 1}/{max_retries})")
        time.sleep(wait)
    raise Exception("Max retries exceeded")
```
**JavaScript**

```javascript
async function packWithBackoff(apiKey, payload, maxRetries = 5) {
  const url = "https://api.fractalpack.com/api/v1/pack";
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const resp = await fetch(url, {
      method: "POST",
      headers: { "X-Api-Key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });
    if (resp.status !== 429) return resp;
    // Honor the server's hint, but never wait less than 2**attempt seconds.
    const retryAfter = parseInt(resp.headers.get("Retry-After") || "1", 10);
    const wait = Math.max(retryAfter, 2 ** attempt);
    console.log(`Rate limited. Retrying in ${wait}s (attempt ${attempt + 1}/${maxRetries})`);
    await new Promise((resolve) => setTimeout(resolve, wait * 1000));
  }
  throw new Error("Max retries exceeded");
}
```
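If many clients back off on the same schedule, they all retry at the same instant and collide again. Adding random jitter on top of `Retry-After` spreads retries out. This is an optional refinement, not something the API requires; the `cap` parameter and the helper itself are illustrative:

```python
import random

def backoff_delay(attempt, retry_after, cap=60):
    """Wait at least the server's Retry-After hint, plus a random
    exponentially growing jitter term capped at `cap` seconds."""
    jitter = random.uniform(0, min(cap, 2 ** attempt))
    return retry_after + jitter
```

Substituting `backoff_delay(attempt, retry_after)` for `max(retry_after, 2 ** attempt)` in the examples above desynchronizes concurrent clients while still respecting `Retry-After`.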
## Checking your limits

Query `GET /api/v1/entitlements` to see your organization's current rate limits and quotas:

```shell
curl https://api.fractalpack.com/api/v1/entitlements \
  -H "X-Api-Key: fpk_test_..."
```

```json
{
  "enabledProducts": ["packing", "shipping"],
  "maxUsers": 25,
  "rateLimits": {
    "pack": 100,
    "batch": 10,
    "rates": 200
  },
  "monthlyRequestQuota": 50000
}
```
The `rateLimits` object shows per-minute limits for each category. These limits are determined by your plan and can be customized; contact support if you need higher throughput.
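One way to use these values is to derive a steady pacing interval and space requests evenly instead of bursting. A hypothetical helper, assuming the response body shape shown above:

```python
def pacing_interval(entitlements, category):
    """Minimum delay in seconds between requests that keeps a single
    client under the per-minute limit for the given category."""
    per_minute = entitlements["rateLimits"][category]
    return 60.0 / per_minute
```

With the example response above, `pacing_interval(body, "pack")` is 0.6 seconds, i.e. at most 100 requests per minute.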
## Best practices

- **Read the headers.** Check `X-RateLimit-Remaining` before making additional requests. If it is low, slow down proactively.
- **Use batch for bulk work.** `POST /api/v1/batch` processes multiple pack requests in a single API call, which is more efficient than individual requests.
- **Respect `Retry-After`.** Always wait at least the indicated duration before retrying.
- **Distribute load.** If you have periodic bulk workloads, spread them over time rather than sending them all at once.
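The last two points combine naturally: split a large workload into batch-sized chunks and submit them over time. A sketch; the 50-request chunk size is an illustrative choice, not a documented batch limit:

```python
def chunk_requests(pack_requests, batch_size=50):
    """Split a list of pack requests into chunks, each suitable for one
    POST /api/v1/batch call. batch_size is illustrative, not a
    documented API limit."""
    return [
        pack_requests[i:i + batch_size]
        for i in range(0, len(pack_requests), batch_size)
    ]
```

Each chunk then costs one request against the `batch` limit, and the chunks themselves can be paced across the window.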