# Rate Limits
Understand rate limits by plan, response headers, and best practices for staying within your quota.
## Rate Limits by Plan
Rate limits are applied per API key using a sliding window. Each plan has a requests-per-minute (RPM) limit, an hourly cap, and a burst allowance for short spikes.
| Plan | Requests/min | Requests/hour | Burst |
|---|---|---|---|
| Free | 100 | 1,000 | 10 |
| Starter | 300 | 5,000 | 50 |
| Pro | 1,000 | 20,000 | 200 |
| Enterprise | Custom | Custom | Custom |
Enterprise plans include custom rate limits, dedicated infrastructure, and priority support. Contact sales for details.
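Client-side, the server's sliding window can be approximated with a buffer of request timestamps. This is a sketch, not part of any SDK: the class name, the injectable `now` clock, and the Starter-plan default of 300 requests/min are illustrative.

```typescript
// A client-side approximation of the server's sliding window. The
// default limit matches the Starter plan (300 requests/min); the
// `now` parameter exists only to make the logic easy to exercise.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private limit = 300,       // requests allowed per window
    private windowMs = 60_000, // window length: one minute
  ) {}

  // Returns true and records the request if sending at `now` stays
  // within the limit; returns false (recording nothing) otherwise.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have slid out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because the window slides rather than resetting on a fixed boundary, a request is only ever compared against the previous 60 seconds of traffic.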
## Response Headers
Every API response includes rate limit headers so you can track your usage in real time and adjust your request rate proactively.
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per minute for your plan. |
| `X-RateLimit-Remaining` | Number of requests remaining in the current window. |
| `X-RateLimit-Reset` | Unix timestamp (seconds) when the current rate limit window resets. |
| `Retry-After` | Seconds to wait before retrying. Only present on 429 responses. |
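As a sketch, these headers can be folded into a usage summary before deciding whether to throttle. The helper name and return shape here are illustrative, not part of the API:

```typescript
// Turn the rate limit headers into a usage summary. `headers` is the
// standard fetch Response.headers; the return shape is illustrative.
function summarizeRateLimit(headers: Headers) {
  const limit = parseInt(headers.get('X-RateLimit-Limit') ?? '0', 10);
  const remaining = parseInt(headers.get('X-RateLimit-Remaining') ?? '0', 10);
  const resetAt = parseInt(headers.get('X-RateLimit-Reset') ?? '0', 10);
  return {
    used: limit - remaining,
    remaining,
    // Seconds until the current window resets, clamped at zero
    secondsToReset: Math.max(0, resetAt - Math.floor(Date.now() / 1000)),
  };
}
```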
Example successful response headers:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 287
X-RateLimit-Reset: 1741513260
X-Request-Id: req_abc123
```

## Handling 429 Responses
When you exceed your rate limit, the API returns a 429 Too Many Requests response with a Retry-After header indicating how many seconds to wait.
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 12
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1741513260
```

```json
{
  "error": {
    "type": "rate_limit_error",
    "message": "Rate limit exceeded. Try again in 12 seconds.",
    "code": "rate_limit_exceeded",
    "request_id": "req_abc123"
  }
}
```

## Exponential Backoff
Implement exponential backoff with jitter for automatic retry handling. Always prefer the Retry-After header value when present.
```typescript
async function requestWithBackoff(url: string, opts: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, opts);
    if (res.ok) return res.json();

    // Don't retry client errors (except 429)
    if (res.status < 500 && res.status !== 429) {
      throw await res.json();
    }
    if (attempt === maxRetries) {
      throw await res.json();
    }

    // Use the Retry-After header for 429 responses
    if (res.status === 429) {
      const retryAfter = parseInt(res.headers.get('Retry-After') ?? '5', 10);
      await new Promise(r => setTimeout(r, retryAfter * 1000));
      continue;
    }

    // Exponential backoff with jitter for 5xx errors
    const base = Math.min(1000 * 2 ** attempt, 30000);
    const jitter = base * 0.5 * Math.random();
    await new Promise(r => setTimeout(r, base + jitter));
  }
}
```

| Retry | Base Delay | With Jitter |
|---|---|---|
| 1st | 1s | 1 - 1.5s |
| 2nd | 2s | 2 - 3s |
| 3rd | 4s | 4 - 6s |
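The schedule above mirrors the delay math in the retry loop. As a sketch, with the `random` parameter injected only to make the jitter explicit (both names are illustrative):

```typescript
// Delay before the given zero-based retry attempt: the base doubles
// each attempt (capped at 30s), plus up to 50% jitter on top.
function backoffDelayMs(attempt: number, random: number = Math.random()): number {
  const base = Math.min(1000 * 2 ** attempt, 30_000);
  return base + base * 0.5 * random; // attempt 0 ranges from 1s to 1.5s
}
```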
## Monitoring Usage
Track rate limit headers on every response to proactively throttle your requests before hitting the limit.
```typescript
class RateLimitTracker {
  remaining = Infinity;
  resetAt = 0; // Unix timestamp (seconds) of the next window reset

  update(headers: Headers) {
    const remaining = headers.get('X-RateLimit-Remaining');
    const reset = headers.get('X-RateLimit-Reset');
    if (remaining !== null) this.remaining = parseInt(remaining, 10);
    if (reset !== null) this.resetAt = parseInt(reset, 10);
  }

  async waitIfNeeded() {
    if (this.remaining > 10) return;
    // Close to the limit: slow down until the window resets
    const waitMs = Math.max(0, this.resetAt * 1000 - Date.now());
    if (waitMs > 0) {
      console.log(`Rate limit low (${this.remaining}), waiting ${waitMs}ms`);
      await new Promise(r => setTimeout(r, waitMs));
    }
  }
}

const tracker = new RateLimitTracker();

async function apiCall(url: string, opts: RequestInit) {
  await tracker.waitIfNeeded();
  const res = await fetch(url, opts);
  tracker.update(res.headers);
  return res;
}
```

## Best Practices
- **Implement exponential backoff.** Never busy-loop on 429 errors. Always use the `Retry-After` header value and add jitter to avoid thundering-herd effects.
- **Cache responses.** Store product, location, and category data locally. These change infrequently, and caching reduces unnecessary API calls.
- **Use webhooks instead of polling.** Instead of polling for order updates or inventory changes, configure webhooks to receive real-time notifications. See the Webhooks guide.
- **Batch operations where possible.** Use list endpoints with higher `limit` values (up to 100) instead of making many individual GET requests.
- **Monitor rate limit headers.** Track `X-RateLimit-Remaining` on every response and proactively slow down before hitting the limit.
- **Distribute requests evenly.** Spread requests across the full minute window rather than bursting at the start of each window.
- **Use conditional requests.** Include `If-None-Match` with ETags on GET requests. The server returns 304 Not Modified without counting against your rate limit.
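The conditional-request pattern can be sketched with a small ETag cache. `cachedGet`, `conditionalRequestInit`, and the cache shape are illustrative helpers, not part of this API's SDK; the `If-None-Match`/`ETag` exchange itself is standard HTTP.

```typescript
// Per-URL cache of the last ETag and response body we saw.
const etagCache = new Map<string, { etag: string; body: unknown }>();

// Build the RequestInit for a conditional GET: send If-None-Match
// only when we already hold a cached ETag for this URL.
function conditionalRequestInit(cached?: { etag: string }): RequestInit {
  return cached ? { headers: { 'If-None-Match': cached.etag } } : {};
}

async function cachedGet(url: string): Promise<unknown> {
  const cached = etagCache.get(url);
  const res = await fetch(url, conditionalRequestInit(cached));

  // 304 Not Modified: reuse the cached body; per the bullet above,
  // this response does not count against the rate limit
  if (res.status === 304 && cached) return cached.body;

  const body = await res.json();
  const etag = res.headers.get('ETag');
  if (etag) etagCache.set(url, { etag, body });
  return body;
}
```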