API Configuration

Rate Limits & Throttling

Understand how the Divisions API enforces request limits, how to monitor your usage via response headers, and which best practices help you avoid throttling.

Limit Tiers

Rate limits are calculated per API key and reset on a rolling 60-second window. Higher tiers include increased concurrency and dedicated throughput.

Starter — 100 req/min
For prototypes & light usage
  • 1 concurrent connection
  • Rolling 60s window
  • Standard priority
  • Community support

Enterprise — custom, up to 50k req/min
For high-scale workloads
  • Unlimited concurrency
  • Dedicated rate buckets
  • Burst allowance (2x)
  • Dedicated account manager
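Because the window is rolling rather than aligned to clock boundaries, each request counts against your quota for exactly 60 seconds after it is sent. A minimal client-side sketch of that model (illustrative only, not the server's implementation):

```javascript
// Client-side sliding-window tracker mirroring a rolling 60s limit.
class SlidingWindowLimiter {
  constructor(limit, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // send times of recent requests
  }

  // Returns true if a request may be sent now without exceeding the limit.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have aged out of the rolling window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Gating outbound calls through a tracker like this lets you queue requests locally instead of burning quota on calls that would be rejected anyway.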

Response Headers

Every API response includes rate limit metadata. Monitor these headers to track consumption and prevent 429 errors.

Header                | Description                                | Example
----------------------|--------------------------------------------|-----------
X-RateLimit-Limit     | Maximum requests allowed per window        | 1000
X-RateLimit-Remaining | Requests left in the current window        | 842
X-RateLimit-Reset     | Unix timestamp when the window resets      | 1698765432
Retry-After           | Seconds to wait before retrying (429 only) | 45
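To read these headers programmatically, a small helper can normalize them into numbers — a sketch assuming Node 18+ (global `fetch`/`Headers`); the field names in the returned object are our own:

```javascript
// Parse the rate-limit metadata from a fetch Response's headers.
function parseRateLimitHeaders(headers) {
  const num = name => {
    const v = headers.get(name); // Headers.get is case-insensitive
    return v === null ? null : Number(v);
  };
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    resetAt: num('X-RateLimit-Reset'), // Unix timestamp (seconds)
    retryAfter: num('Retry-After'),    // only present on 429 responses
  };
}
```

For example, `parseRateLimitHeaders(res.headers).remaining` gives you the live quota count after any call.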

Rate Limit Response

When you exceed your limit, the API returns a 429 Too Many Requests status code. Always respect the Retry-After header.

⚠️

Continuing to send requests after a 429 response may result in temporary API key suspension.

HTTP/1.1 429 Too Many Requests
Retry-After: 32
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1698765480

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "You have exceeded your API rate limit. Please retry after 32 seconds.",
    "retry_after": 32,
    "documentation": "https://api.divisions.dev/docs/rate-limits"
  }
}
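A hedged sketch of pulling the machine-readable fields out of a response shaped like the example above (the body schema is taken from that example; `readRateLimitError` is an illustrative name, not part of the API):

```javascript
// Extract the structured fields from a 429 error body.
async function readRateLimitError(res) {
  const { error } = await res.json();
  return {
    code: error.code, // e.g. "rate_limit_exceeded"
    // Prefer the body's retry_after; fall back to the Retry-After header.
    retryAfterSec: error.retry_after ?? Number(res.headers.get('Retry-After')),
  };
}
```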

Handling & Best Practices

Implement these patterns to ensure smooth API integration and minimize throttling impact.

⏱️

Exponential Backoff

Start with a 1s delay and double it after each 429 response, up to a maximum of 60s. Add jitter to prevent a thundering herd.

📦

Batch Requests

Use bulk endpoints where available. Processing 100 items in one call uses 1 rate limit tick instead of 100.
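As a sketch of the batching pattern, assuming a bulk endpoint exists for your resource (the `fetchBatch` callback stands in for whatever bulk call your integration actually uses):

```javascript
// Split a list of IDs into groups of up to `size` items.
function chunk(ids, size = 100) {
  const chunks = [];
  for (let i = 0; i < ids.length; i += size) chunks.push(ids.slice(i, i + size));
  return chunks;
}

// Fetch many items via a bulk call per group of 100,
// spending one rate-limit tick per group instead of one per item.
async function fetchItemsBatched(ids, fetchBatch) {
  const results = [];
  for (const group of chunk(ids)) {
    results.push(...await fetchBatch(group));
  }
  return results;
}
```

With 250 IDs, this issues 3 requests instead of 250.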

💾

Cache Aggressively

Cache responses with ETag or Cache-Control headers. Avoid redundant calls for static or slowly-changing data.
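One way to apply ETag revalidation on the client, as a sketch (the cache shape and `cachedFetch` helper are our own; a conditional request may still count against the limit, so honoring Cache-Control lifetimes and skipping the call entirely saves the most):

```javascript
// In-memory cache keyed by URL: { etag, body }.
const cache = new Map();

async function cachedFetch(url, fetchImpl = fetch) {
  const entry = cache.get(url);
  // Replay the stored ETag so the server can answer 304 Not Modified.
  const headers = entry ? { 'If-None-Match': entry.etag } : {};
  const res = await fetchImpl(url, { headers });

  if (res.status === 304 && entry) return entry.body; // served from cache
  const body = await res.json();
  const etag = res.headers.get('ETag');
  if (etag) cache.set(url, { etag, body });
  return body;
}
```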

📊

Monitor Headers

Log X-RateLimit-Remaining in production. Set alerts at 20% and 5% remaining to scale proactively.
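The 20% and 5% thresholds above can be wired into alerting with a tiny classifier, for example:

```javascript
// Classify remaining quota into alert levels at the 20% and 5% thresholds.
function quotaStatus(limit, remaining) {
  const ratio = remaining / limit;
  if (ratio <= 0.05) return 'critical'; // <=5% left: shed load or back off now
  if (ratio <= 0.20) return 'warning';  // <=20% left: scale or slow down soon
  return 'ok';
}
```

Feed `quotaStatus` the parsed header values after each response and emit a metric or alert whenever the level changes.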

Recommended Retry Logic

JavaScript / Node.js
async function fetchWithRetry(url, options = {}, maxRetries = 4) {
  let delay = 1000; // fallback delay when no Retry-After header is present
  for (let i = 0; i <= maxRetries; i++) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res;

    // Prefer the server's Retry-After; otherwise fall back to our own delay.
    const header = res.headers.get('Retry-After');
    let waitMs = header !== null ? parseInt(header, 10) * 1000 : delay;
    waitMs += Math.random() * 250; // jitter to avoid a thundering herd
    console.log(`Rate limited. Retrying in ${Math.round(waitMs / 1000)}s...`);
    await new Promise(r => setTimeout(r, waitMs));
    delay = Math.min(delay * 2, 60_000); // exponential backoff, capped at 60s
  }
  throw new Error('Max retries exceeded');
}

Frequently Asked Questions

Can I get a temporary rate limit increase?
Yes. Enterprise customers can request 24-48 hour burst allowances via the dashboard or by contacting support. Professional tier users can upgrade or request a one-time temporary boost.
Do webhooks count toward my rate limit?
No. Outbound webhooks are processed on separate infrastructure and do not consume your API request quota.
Are limits calculated globally or per endpoint?
Limits are shared across all endpoints per API key. This encourages efficient usage patterns like batching and caching.
What happens if I consistently hit my limit?
You'll receive 429 responses until the window resets. Repeated abuse or ignoring Retry-After headers may trigger a temporary 24-hour key suspension for safety.
💡

Need higher throughput for your production workload? Contact our sales team to discuss Enterprise tier options.