Rate Limits

Pictograph API rate limits by plan tier. Learn about request quotas, 429 responses, and retry strategies.

The Pictograph API uses rate limiting to ensure fair usage and maintain service quality. Limits are applied per API key using a sliding window algorithm.
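
The exact server-side implementation isn't exposed, but a sliding-window limiter works roughly like the sketch below: keep timestamps of recent requests and reject a new request once the trailing window already holds the full quota. This is an illustration only, not the actual Pictograph implementation.

Python
import time
from collections import deque

class SlidingWindowLimiter:
    """Illustrative sliding-window limiter (not the server-side code)."""

    def __init__(self, limit: int, window_seconds: int = 3600):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have fallen out of the trailing window
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.limit:
            return False
        self.timestamps.append(now)
        return True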

Rate Limits by Plan

Plan          Requests/Hour
Free          1,000
Starter       5,000
Pro           20,000
Enterprise    100,000

Response Headers

Each API response includes rate limit headers:

Header                  Description
X-RateLimit-Limit       Your rate limit per hour
X-RateLimit-Remaining   Requests remaining in the current window
X-RateLimit-Reset       Unix timestamp when the window resets
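
For example, you can inspect these headers on any response to track how much of your quota remains. The endpoint URL and Authorization header below are placeholders; substitute your own values.

Python
import requests

# Placeholder endpoint and credentials -- replace with your own.
response = requests.get(
    "https://api.pictograph.example/v1/datasets",
    headers={"Authorization": "Bearer pk_live_..."},
)

limit = int(response.headers["X-RateLimit-Limit"])
remaining = int(response.headers["X-RateLimit-Remaining"])
reset_at = int(response.headers["X-RateLimit-Reset"])  # Unix timestamp

print(f"{remaining}/{limit} requests left; window resets at {reset_at}")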

Handling Rate Limits

When you exceed the rate limit, you'll receive a 429 Too Many Requests response:

JSON
{
  "error": "rate_limit_exceeded",
  "message": "Rate limit exceeded. Please retry after 120 seconds.",
  "retry_after": 120
}

SDK Automatic Handling

The Python SDK automatically handles rate limits by waiting and retrying when retry_after is less than 2 minutes:

Python
from pictograph import Client
from pictograph.exceptions import RateLimitError

client = Client(api_key="pk_live_...")

try:
    # SDK automatically waits and retries if rate limited
    datasets = client.datasets.list()
except RateLimitError as e:
    # Only raised if retry_after > 2 minutes
    print(f"Rate limited. Try again in {e.retry_after} seconds")

Manual Handling

For custom implementations:

Python
import time
import requests

def make_request_with_retry(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limited. Waiting {retry_after}s...")
            time.sleep(retry_after)
            continue

        return response

    raise Exception("Max retries exceeded")

Best Practices

  • Batch operations when possible instead of individual requests
  • Cache responses that don't change frequently
  • Use webhooks for real-time updates instead of polling
  • Implement exponential backoff for retries (see the sketch after this list)
  • Monitor your usage via rate limit headers
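
As a rough sketch of exponential backoff with jitter (the retry count and delays here are illustrative, not values prescribed by the API):

Python
import random
import time

import requests

def get_with_backoff(url, headers, max_retries=5):
    """Retry on 429, preferring the server's Retry-After hint."""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response

        retry_after = response.headers.get("Retry-After")
        if retry_after is not None:
            delay = int(retry_after)
        else:
            # Exponential backoff capped at 60s, plus jitter to avoid thundering herds
            delay = min(2 ** attempt, 60) + random.uniform(0, 1)

        time.sleep(delay)

    raise Exception("Max retries exceeded")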

Burst Limits

In addition to hourly limits, there's a burst limit of 100 requests per second to prevent sudden spikes that could affect service quality.
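
If you issue requests in a tight loop, a simple client-side throttle can help you stay under the burst limit. This is a sketch, not an SDK feature:

Python
import time

class Throttle:
    """Space out calls so at most `rate` happen per second (client-side sketch)."""

    def __init__(self, rate: int = 100):
        self.min_interval = 1.0 / rate
        self.last_call = 0.0

    def wait(self):
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

throttle = Throttle(rate=100)
# Call throttle.wait() before each API request to stay under the burst limit.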
