A Python library that automatically manages API rate limits, preventing 429 errors and optimizing API usage without requiring developers to manually track or implement rate limiting logic.
- 🚀 Automatic Detection: Automatically detects rate limits from HTTP response headers
- 🔄 Zero Configuration: Works out of the box with most APIs
- 💾 Persistent State: Supports in-memory, SQLite, and Redis storage
- 🔀 Multi-Process Safe: Share rate limits across multiple processes with Redis
- 🎯 Smart Waiting: Automatically waits when limits are reached
- 📊 Status Monitoring: Check current rate limit status anytime
- 🔌 Easy Integration: Works with `requests`, `httpx`, and `aiohttp`
- 🔄 Advanced Retry: Configurable retry strategies with exponential backoff
- 📊 Metrics: Built-in metrics collection and Prometheus export
- 🛠️ CLI Tools: Command-line interface for monitoring and management
```bash
pip install smartratelimit
```

For async support:

```bash
pip install smartratelimit[httpx]    # For httpx support
pip install smartratelimit[aiohttp]  # For aiohttp support
pip install smartratelimit[all]      # For all optional dependencies
```

```python
from smartratelimit import RateLimiter
# Create a rate limiter (auto-detects limits from headers)
limiter = RateLimiter()
# Make requests - rate limiting is automatic!
response = limiter.request('GET', 'https://api.github.com/users/octocat')
print(response.json())
```

```python
# Persist rate limits across application restarts
limiter = RateLimiter(storage='sqlite:///rate_limits.db')
response = limiter.request('GET', 'https://api.github.com/users')
# Rate limit state is saved to the database
```

```python
# Share rate limits across multiple processes/workers
limiter = RateLimiter(storage='redis://localhost:6379/0')
# Works with Gunicorn, Celery, etc.
response = limiter.request('GET', 'https://api.github.com/users')
```

```python
# Set default limits for APIs that don't provide headers
limiter = RateLimiter(
    default_limits={'requests_per_minute': 60}
)
for user in users:
    response = limiter.request('POST', 'https://api.example.com/notify', json={'user': user})
```

```python
import requests
from smartratelimit import RateLimiter
session = requests.Session()
session.headers.update({'Authorization': 'Bearer token'})
limiter = RateLimiter()
limiter.wrap_session(session)
# Now all session requests are rate-limited
response = session.get('https://api.example.com/data')
```

```python
limiter = RateLimiter()
# Make some requests
limiter.request('GET', 'https://api.github.com/users')
# Check status
status = limiter.get_status('api.github.com')
if status:
print(f"Remaining: {status.remaining}/{status.limit}")
print(f"Resets in: {status.reset_in} seconds")
print(f"Utilization: {status.utilization * 100:.1f}%")limiter = RateLimiter()
# Manually set rate limits
limiter.set_limit('api.example.com', limit=100, window='1h')
limiter.set_limit('api.another.com', limit=60, window='1m')
# Window formats: '1h', '30m', '60s', '1d'
```

```python
limiter = RateLimiter(
    headers_map={
        'limit': 'X-My-API-Limit',
        'remaining': 'X-My-API-Remaining',
        'reset': 'X-My-API-Reset'
    }
)
```

```python
from smartratelimit import RateLimitExceeded

limiter = RateLimiter(raise_on_limit=True)
try:
    response = limiter.request('GET', 'https://api.example.com/data')
except RateLimitExceeded as e:
print(f"Rate limit exceeded: {e}")import httpx
from smartratelimit import AsyncRateLimiter
async def main():
    async with AsyncRateLimiter() as limiter:
        async with httpx.AsyncClient() as client:
            response = await limiter.arequest_httpx(
                client, 'GET', 'https://api.github.com/users'
            )
            print(response.json())

asyncio.run(main())
```

```python
import asyncio

import aiohttp
from smartratelimit import AsyncRateLimiter
async def main():
    async with AsyncRateLimiter() as limiter:
        async with aiohttp.ClientSession() as session:
            response = await limiter.arequest_aiohttp(
                session, 'GET', 'https://api.github.com/users'
            )
            data = await response.json()
            print(data)

asyncio.run(main())
```

```python
from smartratelimit import RateLimiter
from smartratelimit.retry import RetryConfig, RetryHandler, RetryStrategy
# Configure retry with exponential backoff
retry_config = RetryConfig(
    max_retries=3,
    strategy=RetryStrategy.EXPONENTIAL,
    base_delay=1.0,
    backoff_factor=2.0,
)
retry_handler = RetryHandler(retry_config)
limiter = RateLimiter()
def make_request():
    return limiter.request('GET', 'https://api.example.com/data')
# Automatically retry on 429, 503, 504
response = retry_handler.retry_sync(make_request)
```

```python
from smartratelimit import RateLimiter
from smartratelimit.metrics import MetricsCollector
limiter = RateLimiter()
metrics = MetricsCollector()
response = limiter.request('GET', 'https://api.github.com/users')
status = limiter.get_status('api.github.com')
metrics.record_request('api.github.com', response.status_code, status)
# Export Prometheus metrics
prometheus_metrics = metrics.export_prometheus()
print(prometheus_metrics)
```

```bash
# Check rate limit status
smartratelimit status --endpoint api.github.com
# Probe endpoint for rate limits
smartratelimit probe https://api.github.com/users
# Clear stored rate limits
smartratelimit clear --endpoint api.github.com
# Clear all rate limits
smartratelimit clear
```

The library automatically detects rate limits from headers for:
- ✅ GitHub API
- ✅ Stripe API
- ✅ Twitter API
- ✅ OpenAI API
- ✅ Any API using standard `X-RateLimit-*` headers
- ✅ APIs with `Retry-After` headers (429 responses)
`RateLimiter(...)`: Create a new rate limiter.
Parameters:
- `storage` (str): Storage backend. Options:
  - `'memory'` (default): In-memory storage
  - `'sqlite:///path'`: SQLite storage (persistent, single-machine)
  - `'redis://host:port'`: Redis storage (distributed, multi-process)
- `default_limits` (dict): Default limits when headers aren't available. Example: `{'requests_per_minute': 60}`
- `headers_map` (dict): Custom header name mapping
- `raise_on_limit` (bool): If `True`, raise `RateLimitExceeded` instead of waiting
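For example, combining the parameters above, a limiter that persists to SQLite, falls back to 60 requests per minute when an API sends no rate limit headers, and raises instead of waiting could be constructed like this (illustrative sketch):

```python
from smartratelimit import RateLimiter

# Illustrative: combines the constructor parameters documented above
limiter = RateLimiter(
    storage='sqlite:///rate_limits.db',
    default_limits={'requests_per_minute': 60},
    raise_on_limit=True,
)
```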
`request(method, url, **kwargs)`: Make a rate-limited HTTP request.
Parameters:
- `method` (str): HTTP method (GET, POST, PUT, DELETE, PATCH)
- `url` (str): Request URL
- `**kwargs`: Additional arguments passed to `requests.request()`

Returns: `requests.Response` object
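Because extra keyword arguments are forwarded to `requests.request()`, standard options such as `params`, `headers`, and `timeout` can be passed straight through (illustrative sketch reusing the `limiter` above; the URL and token are placeholders):

```python
# params, headers, and timeout are forwarded unchanged to requests.request()
response = limiter.request(
    'GET',
    'https://api.example.com/search',
    params={'q': 'rate limiting'},
    headers={'Authorization': 'Bearer <token>'},
    timeout=10,
)
```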
`wrap_session(session)`: Wrap an existing `requests.Session` with rate limiting.
`get_status(endpoint)`: Get current rate limit status for an endpoint.

Returns: `RateLimitStatus` object, or `None` if no information is available
`set_limit(endpoint, limit, window)`: Manually set the rate limit for an endpoint.
Parameters:
- `endpoint`: Endpoint URL or domain
- `limit`: Maximum number of requests
- `window`: Time window (`'1h'`, `'1m'`, `'30s'`, `'1d'`)
`clear(endpoint=None)`: Clear stored rate limit data.
Parameters:
- `endpoint`: Specific endpoint to clear, or `None` to clear all
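A minimal sketch of both forms, assuming `clear()` is exposed on the limiter as described above:

```python
# Drop stored state for one endpoint (placeholder domain)
limiter.clear('api.example.com')

# Drop all stored rate limit state
limiter.clear()
```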
`RateLimitStatus`: Status information about current rate limits.
Properties:
- `endpoint` (str): Endpoint URL
- `limit` (int): Total rate limit
- `remaining` (int): Remaining requests
- `reset_time` (datetime): When the limit resets
- `window` (timedelta): Time window for the limit
- `reset_in` (float): Seconds until reset (property)
- `is_exceeded` (bool): Whether the limit is exceeded (property)
- `utilization` (float): Utilization as a fraction, 0.0-1.0 (property)
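Putting the properties together, a status check might look like this (sketch; assumes a request to the endpoint has already been made, so status information exists):

```python
status = limiter.get_status('api.github.com')
if status and status.is_exceeded:
    # reset_time is a datetime; reset_in is the remaining wait in seconds
    print(f"Limit of {status.limit} reached, resets at {status.reset_time} "
          f"({status.reset_in:.0f}s from now)")
```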
```python
from smartratelimit import RateLimiter
limiter = RateLimiter()
for url in urls:
    response = limiter.request('GET', url)
    html = response.text
    # Process HTML...
```

```python
from fastapi import FastAPI
from smartratelimit import RateLimiter
app = FastAPI()
limiter = RateLimiter()
@app.get("/notify")
def notify_user(user_id: str):
    response = limiter.request(
        'POST',
        'https://api.sendgrid.com/v3/mail/send',
        json={'to': user_id, 'message': 'Hello!'}
    )
return {"status": "sent"}from smartratelimit import RateLimiter
limiter = RateLimiter(default_limits={'requests_per_minute': 60})
results = []
for item in items:
    response = limiter.request('POST', 'https://api.example.com/process', json=item)
    results.append(response.json())
```

- ✅ Basic rate limiting with token bucket algorithm
- ✅ Automatic header detection
- ✅ In-memory storage
- ✅ `requests` library integration
- ✅ Status monitoring
- ✅ SQLite persistence
- ✅ Redis backend for distributed applications
- ✅ Multi-process support
- ✅ Performance benchmarks
- ✅ Comprehensive test coverage
- ✅ `httpx` and `aiohttp` async support
- ✅ Advanced retry logic with configurable strategies
- ✅ CLI tools (status, clear, probe commands)
- ✅ Monitoring/metrics export (Prometheus format)
Contributions are welcome! Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
This project is licensed under the MIT License.
See the LICENSE file for the full license text.
Comprehensive documentation is available in the docs/ directory:
- 📖 Quick Start Guide - Get started in 5 minutes
- 📚 Complete Tutorial - Step-by-step guide
- 📋 API Reference - Complete API documentation
- 💻 Examples - Real-world examples with free APIs
- 💾 Storage Backends - SQLite and Redis guide
- ⚡ Async Guide - Async/await usage
- 🔄 Retry Strategies - Advanced retry logic
- 📊 Metrics Guide - Collecting and exporting metrics
- 🛠️ CLI Guide - Command-line tools
- 🎯 Advanced Features - Advanced patterns
Inspired by the need for a simple, automatic rate limiting solution that works with any API without configuration.