📋 Overview

Rate limits are applied per API key and are designed to protect our infrastructure while providing generous limits for production workloads. All limits use a sliding window algorithm for accurate counting.
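The sliding window approach can be illustrated with a short sketch. This is illustrative only; `SlidingWindowLimiter` is a hypothetical name, not part of our SDK or server implementation:

```javascript
// Minimal sliding-window rate limiter sketch: keep recent request
// timestamps, evict those older than the window, and allow a request
// only while the in-window count is below the limit.
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;         // e.g. 60 requests
    this.windowMs = windowMs;   // e.g. 60,000 ms
    this.timestamps = [];
  }

  allow(now = Date.now()) {
    // Drop timestamps that have fallen out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Unlike fixed windows, a sliding window never lets a client double its rate by straddling a window boundary.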

ℹ️ How Rate Limits Work

When you exceed a rate limit, the API returns a 429 Too Many Requests status code. The response includes headers indicating how many requests remain and when the window resets.

🌐 Global Rate Limits

These limits apply to all API endpoints combined per API key:

| Plan       | Requests / Minute | Requests / Hour | Requests / Day | Burst Allowance |
|------------|-------------------|-----------------|----------------|-----------------|
| Starter    | 60                | 3,600           | 50,000         | 10              |
| Pro        | 600               | 36,000          | 500,000        | 50              |
| Enterprise | 6,000             | 360,000         | Unlimited      | 200             |
⚠️ Burst Allowance

Burst allowance lets you temporarily exceed your per-minute rate. Once the burst pool is exhausted, you revert to the base per-minute rate. The burst pool refills at a rate of 1 request per second.
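The burst pool behaves like a token bucket. A minimal sketch under the rules above (starts full, drains one token per burst request, refills at 1 token per second up to the allowance); the class name is illustrative:

```javascript
// Token-bucket sketch of the burst pool (illustrative, not the server code).
class BurstPool {
  constructor(maxBurst, now = Date.now()) {
    this.maxBurst = maxBurst;   // e.g. 50 on the Pro plan
    this.tokens = maxBurst;     // pool starts full
    this.lastRefill = now;
  }

  refill(now = Date.now()) {
    // Refill 1 token per elapsed second, capped at the burst allowance
    const elapsedSec = Math.floor((now - this.lastRefill) / 1000);
    if (elapsedSec > 0) {
      this.tokens = Math.min(this.maxBurst, this.tokens + elapsedSec);
      this.lastRefill += elapsedSec * 1000;
    }
  }

  tryConsume(now = Date.now()) {
    this.refill(now);
    if (this.tokens === 0) return false; // burst exhausted: base rate only
    this.tokens -= 1;
    return true;
  }
}
```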

🔗 Endpoint-Specific Limits

Certain endpoints have additional rate limits beyond the global limits:

GET /v3/configs

Retrieve configuration entries. Supports filtering, pagination, and environment scoping.

- Per Minute: 120 req/min
- Per Hour: 7,200 req/hr
- Max Page Size: 100 items

GET /v3/configs/:key

Retrieve a single configuration value by key. This is the highest-volume endpoint.

- Per Minute: 300 req/min
- Per Hour: 18,000 req/hr
- Cache TTL: 30s (SDK)

POST /v3/configs

Create new configuration entries. Requires write permissions.

- Per Minute: 30 req/min
- Per Hour: 1,800 req/hr
- Max Body Size: 1 MB

PUT /v3/configs/:key

Update an existing configuration value. Triggers real-time sync to all connected clients.

- Per Minute: 30 req/min
- Per Hour: 1,800 req/hr
- Sync Propagation: ~50ms

DELETE /v3/configs/:key

Delete a configuration entry. This action can be undone via versioning within the retention window.

- Per Minute: 10 req/min
- Per Hour: 600 req/hr
- Retention: 30 days

POST /v3/environments

Create a new environment (dev, staging, production, etc.).

- Per Day: 50
- Max Environments: Tier-based
- Name Length: 64 chars

POST /v3/keys

Generate new API keys. Rotating keys is encouraged for security.

- Per Day: 20
- Max Active Keys: Tier-based
- Key Expiry: Optional

WS wss://ws.appconfig.json/v3/stream

Real-time WebSocket connection for config change notifications. Uses the GET rate limit bucket.

- Concurrent Connections: Tier-based
- Per Key: 5 / 20 / 100 (Starter / Pro / Enterprise)
- Ping Interval: 30s
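Because each connection attempt counts against the GET bucket, reconnect loops should back off rather than hammer the endpoint. A minimal client sketch, assuming a runtime `WebSocket` global; the function names and query-string auth are illustrative:

```javascript
// Capped exponential backoff for reconnect attempts: 1s, 2s, 4s, ... up to 30s
function reconnectDelay(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Illustrative stream client: resets the attempt counter on a successful
// open, and backs off before each reconnect so retries don't burn the
// GET rate limit bucket.
function connectStream(url, apiKey, onUpdate, attempt = 0) {
  const socket = new WebSocket(`${url}?api_key=${apiKey}`);
  socket.onopen = () => { attempt = 0; };
  socket.onmessage = (event) => onUpdate(JSON.parse(event.data));
  socket.onclose = () => {
    setTimeout(() => connectStream(url, apiKey, onUpdate, attempt + 1),
               reconnectDelay(attempt));
  };
  return socket;
}
```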

📨 Rate Limit Response Headers

Every API response includes headers that inform you of your current rate limit status:

| Header                | Description                                    | Example    |
|-----------------------|------------------------------------------------|------------|
| X-RateLimit-Limit     | Maximum requests allowed in the current window | 600        |
| X-RateLimit-Remaining | Requests remaining in the current window       | 587        |
| X-RateLimit-Reset     | Unix timestamp when the rate limit resets      | 1719705600 |
| X-RateLimit-Burst     | Remaining burst requests available             | 45         |
| Retry-After           | Seconds to wait before retrying (only on 429)  | 12         |
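These headers let a client throttle proactively instead of waiting for a 429. A sketch with an illustrative helper name; it accepts anything with a `get` method (such as a fetch `Headers` object):

```javascript
// Returns how long (in ms) to pause before the next request, based on
// the rate limit headers of the previous response.
function throttleDelayMs(headers, nowSec = Math.floor(Date.now() / 1000)) {
  const remaining = parseInt(headers.get('x-ratelimit-remaining') ?? '1', 10);
  const reset = parseInt(headers.get('x-ratelimit-reset') ?? `${nowSec}`, 10);
  if (remaining > 0) return 0;                  // budget left: no delay
  return Math.max(0, reset - nowSec) * 1000;    // wait until the window resets
}
```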

Example Rate-Limited Response

HTTP Response — 429 Too Many Requests
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 600
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1719705660
Retry-After: 42

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 42 seconds.",
    "details": {
      "limit": 600,
      "remaining": 0,
      "reset_at": "2025-06-30T12:01:00Z",
      "retry_after_seconds": 42,
      "plan": "pro"
    }
  }
}

⚡ Rate Limit Error Codes

| Code            | Meaning           | Description                                                         |
|-----------------|-------------------|---------------------------------------------------------------------|
| 429             | Too Many Requests | Rate limit exceeded. Check the Retry-After header and back off.     |
| 400             | Bad Request       | Request payload exceeds the 1 MB maximum body size limit.           |
| 429 (WebSocket) | Connection Closed | Maximum concurrent WebSocket connections exceeded for this API key. |

✅ Best Practices

1. Use SDK Caching

Our official SDKs include built-in caching with a configurable TTL. For read-heavy workloads, enable aggressive caching to minimize API calls:

Node.js SDK — Cached Configuration
// app-config.json SDK with caching
const { AppConfig } = require('@appconfig/sdk-node');

const config = new AppConfig({
  apiKey: process.env.APPCONFIG_API_KEY,
  cache: {
    enabled: true,
    ttl: 30000,       // 30 seconds
    staleWhileRevalidate: true
  },
  retry: {
    maxAttempts: 3,
    backoff: 'exponential',
    respectRetryAfter: true  // Respects 429 Retry-After
  }
});

2. Implement Exponential Backoff

Always implement exponential backoff when receiving 429 responses. This prevents thundering herd issues:

Exponential Backoff Pattern
async function fetchWithBackoff(url, options, attempt = 0) {
  try {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Retry-After is in seconds; convert to ms. Fall back to
      // exponential backoff if the header is missing.
      const retryAfterHeader = response.headers.get('retry-after');
      const retryAfter = retryAfterHeader
        ? parseInt(retryAfterHeader, 10) * 1000
        : Math.min(2 ** attempt * 1000, 30000); // Max 30s

      console.warn(`Rate limited. Retrying in ${retryAfter}ms...`);
      await new Promise(r => setTimeout(r, retryAfter));
      return fetchWithBackoff(url, options, attempt + 1);
    }

    return response.json();
  } catch (err) {
    if (attempt < 3) return fetchWithBackoff(url, options, attempt + 1);
    throw err;
  }
}
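As a refinement (not required by the API), adding random jitter to the computed delay spreads out retries from many clients that were rate-limited at the same moment:

```javascript
// "Equal jitter": sleep between half the exponential delay and the full
// delay, so synchronized clients don't all retry in lockstep.
function jitteredDelay(attempt, baseMs = 1000, capMs = 30000) {
  const exp = Math.min(baseMs * 2 ** attempt, capMs);
  return exp / 2 + Math.random() * (exp / 2);
}
```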

3. Batch Write Operations

Instead of making individual PUT requests for each config change, use the bulk update endpoint:

Bulk Update — /v3/configs/bulk
PUT /v3/configs/bulk

{
  "entries": [
    { "key": "feature_dark_mode", "value": true, "environment": "production" },
    { "key": "api_timeout",       "value": 5000, "environment": "production" },
    { "key": "max_connections",   "value": 100, "environment": "staging" }
  ]
}
✅ Pro Tip

Bulk operations count as a single request against your rate limit, regardless of the number of entries in the batch (max 500 entries per bulk request).
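If you have more than 500 entries to write, split them client-side so each request stays under the cap. A hypothetical helper sketch (the function name is illustrative):

```javascript
// Split a large entry list into batches of at most 500 entries, the
// documented per-request maximum for PUT /v3/configs/bulk. Each batch
// then counts as a single request against the rate limit.
function chunkEntries(entries, maxPerBatch = 500) {
  const batches = [];
  for (let i = 0; i < entries.length; i += maxPerBatch) {
    batches.push(entries.slice(i, i + maxPerBatch));
  }
  return batches;
}
```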

4. Monitor Your Usage

Use our usage dashboard or the /v3/usage endpoint to track your consumption in real-time and set up alerts before hitting limits:

Usage Endpoint — /v3/usage
GET /v3/usage?window=24h

{
  "plan": "pro",
  "window": "24h",
  "total_requests": 387420,
  "daily_limit": 500000,
  "remaining": 112580,
  "utilization_percent": 77.5,
  "breakdown": {
    "get": 342100,
    "post": 21340,
    "put": 18900,
    "delete": 5080
  }
}
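Client-side alerting from this response can be sketched as follows; the 80% threshold is an example, not an API default, and the function name is illustrative:

```javascript
// Compute utilization from a /v3/usage response body and flag whether
// an alert threshold has been crossed.
function usageStatus(usage, thresholdPercent = 80) {
  const pct = (usage.total_requests / usage.daily_limit) * 100;
  return {
    utilizationPercent: Math.round(pct * 10) / 10, // one decimal place
    alert: pct >= thresholdPercent,
  };
}
```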

📈 Increasing Your Limits

If you consistently hit rate limits, here are your options:

| Option             | Description                                                             | Timeframe         | Requirements    |
|--------------------|-------------------------------------------------------------------------|-------------------|-----------------|
| Upgrade Plan       | Move to Pro or Enterprise tier for higher limits                        | Immediate         | Payment         |
| Custom Limits      | Negotiate custom rate limits with our sales team                        | 1–3 business days | Enterprise plan |
| Dedicated Instance | Self-hosted or isolated infrastructure with unlimited requests          | 1–2 weeks         | Enterprise plan |
| Optimize Usage     | Reduce API calls with caching, bulk operations, and WebSocket streaming | Immediate         | Code changes    |
💡 Need More?

If your workload requires limits beyond the Enterprise tier, contact our sales team. We'll work with you to design a plan that fits your needs.

❓ Frequently Asked Questions

What happens when I exceed my rate limit?

You'll receive a 429 Too Many Requests response with a Retry-After header indicating how many seconds to wait. Your request won't be processed until you retry after the specified time. We recommend implementing exponential backoff in your client code.

Are rate limits applied per API key or per account?

Rate limits are applied per API key. This means you can create multiple API keys to distribute load across different services or environments. However, all keys under the same account share the daily limit bucket.

Do cached SDK reads count against my rate limit?

No. When using our SDK with caching enabled, local cache hits do not make API calls and therefore don't count against your rate limit. Only actual API requests to our servers are counted. SDK cache misses trigger API calls that count normally.

Can I get a temporary limit increase for a planned traffic spike?

Yes! For planned traffic spikes (product launches, sales events, etc.), contact our support team at least 48 hours in advance. We can provide temporary limit increases for up to 72 hours at no additional cost for Pro and Enterprise customers.

How are WebSocket connections rate limited?

WebSocket connections are subject to a concurrent connection limit per API key, not a request rate limit. Once connected, real-time config updates stream to you with no per-message limits. However, connection attempts count against the GET rate limit bucket.

When do rate limit windows reset?

We use a sliding window algorithm. Per-minute limits reset every 60 seconds from the first request, per-hour limits reset every 60 minutes, and daily limits reset at midnight UTC. The burst pool refills at a rate of 1 request per second up to your maximum burst allowance.

Can I set up alerts before hitting my limits?

Yes! In the App Config.json Dashboard, navigate to Settings > Notifications to configure alerts. You can set thresholds (e.g., 80%, 90%, 95%) and receive notifications via email, Slack, or webhook. Enterprise customers can also integrate with PagerDuty or Opsgenie.

Last updated: June 15, 2025