Rate Limiting
This document describes the rate limiting implementation for the Internet-ID API to protect against abuse, DDoS attacks, and resource exhaustion.
Overview
The API implements tiered rate limiting with different limits based on endpoint categories:
- Strict limits: For expensive operations (uploads, on-chain transactions)
- Moderate limits: For read operations (content queries, verification)
- Relaxed limits: For health checks and status endpoints
Rate Limit Tiers
Strict Limits (10 requests/minute)
Applied to expensive operations that consume significant resources:
- POST /api/upload - IPFS uploads
- POST /api/manifest - Manifest creation and upload
- POST /api/register - On-chain registration
- POST /api/bind - Platform binding
- POST /api/bind-many - Batch platform binding
- POST /api/one-shot - Complete registration flow
- POST /api/verify - Content verification with file upload
- POST /api/proof - Proof generation with file upload
Moderate Limits (100 requests/minute)
Applied to read operations and queries:
- GET /api/resolve - Resolve platform bindings
- GET /api/public-verify - Public verification queries
- GET /api/contents - List content records
- GET /api/contents/:hash - Get content by hash
- GET /api/contents/:hash/verifications - Get verifications
- GET /api/verifications - List verifications
- GET /api/verifications/:id - Get verification by ID
- GET /api/network - Network information
- GET /api/registry - Registry address
Relaxed Limits (1000 requests/minute)
Applied to lightweight status endpoints:
- GET /api/health - Health check
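The three tiers above amount to a lookup from endpoint to limit. The sketch below illustrates that mapping; names are illustrative, and the real implementation lives in scripts/middleware/rate-limit.middleware.ts and may differ in detail:

```typescript
// Illustrative tier lookup for the endpoint categories described above.
type Tier = "strict" | "moderate" | "relaxed";

// Limits are requests per minute, per client.
const TIER_LIMITS: Record<Tier, number> = {
  strict: 10,
  moderate: 100,
  relaxed: 1000,
};

const STRICT_ENDPOINTS = new Set([
  "POST /api/upload",
  "POST /api/manifest",
  "POST /api/register",
  "POST /api/bind",
  "POST /api/bind-many",
  "POST /api/one-shot",
  "POST /api/verify",
  "POST /api/proof",
]);

const RELAXED_ENDPOINTS = new Set(["GET /api/health"]);

function tierFor(method: string, path: string): Tier {
  const key = `${method.toUpperCase()} ${path}`;
  if (STRICT_ENDPOINTS.has(key)) return "strict";
  if (RELAXED_ENDPOINTS.has(key)) return "relaxed";
  return "moderate"; // read endpoints and anything else
}
```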
Configuration
Environment Variables
Configure rate limiting behavior in your .env file:
# Redis URL for distributed rate limiting (optional)
# If not set, uses in-memory store (not suitable for multi-instance deployments)
REDIS_URL=redis://localhost:6379
# API key that exempts from rate limiting (optional)
# Useful for internal services or trusted clients
RATE_LIMIT_EXEMPT_API_KEY=internal_service_key
Redis Store
For production deployments with multiple API instances, Redis is strongly recommended to ensure consistent rate limiting across all instances.
Without Redis: Each API instance maintains its own in-memory rate limit counters. This means:
- A client could make 10 requests/minute to Instance A and 10 requests/minute to Instance B
- Rate limits are reset when the API restarts
- Not suitable for load-balanced deployments
With Redis: All API instances share a centralized rate limit store:
- Rate limits are enforced consistently across all instances
- Limits persist through API restarts
- Suitable for production use with load balancing
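The in-memory problem can be demonstrated with a minimal fixed-window counter (a sketch, not the API's actual store): when two instances behind a load balancer each keep their own counts, one client effectively doubles its quota.

```typescript
// Minimal in-memory fixed-window rate limiter (illustrative only).
class MemoryRateLimiter {
  private hits = new Map<string, { count: number; resetAt: number }>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed within the current window.
  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now >= entry.resetAt) {
      this.hits.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// Two "instances" behind a load balancer, each with a separate store:
const instanceA = new MemoryRateLimiter(10, 60_000);
const instanceB = new MemoryRateLimiter(10, 60_000);

let allowed = 0;
for (let i = 0; i < 10; i++) {
  if (instanceA.allow("1.2.3.4")) allowed++;
  if (instanceB.allow("1.2.3.4")) allowed++;
}
// allowed is 20: the client got double the intended 10 requests/minute.
```

A shared Redis store closes this gap by making every instance increment the same counter.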
Setting up Redis
Using Docker:
docker run -d --name redis -p 6379:6379 redis:7-alpine
Using Docker Compose (add to docker-compose.yml):
services:
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
restart: unless-stopped
volumes:
redis_data:
Then set in .env:
REDIS_URL=redis://localhost:6379
For managed Redis services (AWS ElastiCache, Redis Cloud, etc.), use the connection URL provided by your service:
REDIS_URL=redis://username:password@hostname:port
Response Format
When rate limits are exceeded, the API returns:
Status Code: 429 Too Many Requests
Headers:
- Retry-After: Seconds until the rate limit resets
- RateLimit-Limit: Maximum requests allowed in the window
- RateLimit-Remaining: Requests remaining in the current window
- RateLimit-Reset: Timestamp when the rate limit resets
Response Body:
{
"error": "Too Many Requests",
"message": "Rate limit exceeded. Please try again later.",
"retryAfter": 45
}
Client Implementation
Handling Rate Limits
Clients should:
- Check the RateLimit-Remaining header to track remaining quota
- When receiving a 429, read the Retry-After header
- Implement exponential backoff for retries
- Cache responses where appropriate to reduce API calls
Example (JavaScript/TypeScript)
async function makeRequest(url: string, options: RequestInit = {}, retries = 3): Promise<Response> {
  const response = await fetch(url, options);

  // Check rate limit headers
  const remaining = response.headers.get("RateLimit-Remaining");
  const limit = response.headers.get("RateLimit-Limit");
  console.log(`Rate limit: ${remaining}/${limit} remaining`);

  if (response.status === 429 && retries > 0) {
    // Fall back to 1 second if Retry-After is missing or unparsable
    const retryAfter = Number(response.headers.get("Retry-After")) || 1;
    console.warn(`Rate limited. Retrying in ${retryAfter} seconds`);

    // Wait, then retry a bounded number of times
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    return makeRequest(url, options, retries - 1);
  }

  return response;
}
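The example above waits exactly Retry-After seconds between attempts; for repeated failures, clients typically widen the wait with exponential backoff and jitter, as recommended earlier. A minimal helper (illustrative, not part of the API's client library):

```typescript
// Exponential backoff with full jitter: the delay ceiling doubles each
// attempt, and a random delay below the ceiling spreads retries out so
// many rate-limited clients do not retry in lockstep.
function backoffDelayMs(attempt: number, baseMs = 1000, maxMs = 30_000): number {
  const ceiling = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
}
```

Sleeping for Math.max(retryAfterMs, backoffDelayMs(attempt)) honors the server's Retry-After hint while still spacing out repeated retries.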
Example (curl)
# Make a request and view rate limit headers
curl -i https://api.example.com/api/health
# Example response headers:
# RateLimit-Limit: 1000
# RateLimit-Remaining: 999
# RateLimit-Reset: 1635360000
Authenticated User Exemptions
Trusted clients can be exempted from rate limiting by setting RATE_LIMIT_EXEMPT_API_KEY:
1. Generate a secure API key:
   openssl rand -hex 32
2. Set in .env:
   RATE_LIMIT_EXEMPT_API_KEY=your_secure_key_here
3. Include in requests:
   curl -H "x-api-key: your_secure_key_here" https://api.example.com/api/upload
Security Notes:
- Keep exempt API keys secure and rotate regularly
- Only provide to trusted internal services
- Monitor usage of exempt keys for abuse
- Consider separate keys for different services
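The exemption check itself is worth doing in constant time so the key cannot be probed through response-timing differences. A sketch of such a check (function name illustrative; the actual middleware may differ):

```typescript
import { timingSafeEqual } from "node:crypto";

// Returns true when the request's x-api-key matches the configured
// RATE_LIMIT_EXEMPT_API_KEY. timingSafeEqual compares in constant time,
// so an attacker cannot narrow the key down byte-by-byte via timing.
function isExempt(requestKey: string | undefined, exemptKey: string | undefined): boolean {
  if (!requestKey || !exemptKey) return false;
  const a = Buffer.from(requestKey);
  const b = Buffer.from(exemptKey);
  // timingSafeEqual requires equal-length buffers; a length mismatch
  // means the keys differ anyway.
  return a.length === b.length && timingSafeEqual(a, b);
}
```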
Monitoring
Rate Limit Hits
When rate limits are exceeded, the API logs:
[RATE_LIMIT_HIT] IP: 192.168.1.100, Path: /api/upload, Time: 2024-01-15T10:30:00.000Z
Recommended Monitoring
Monitor these metrics in production:
- Rate limit hit frequency by endpoint
- Top IP addresses hitting rate limits
- Rate limit hit patterns (time of day, specific endpoints)
- Redis connection health (if using Redis)
Example Log Aggregation Query
If using a log aggregation service (e.g., CloudWatch, Datadog, ELK):
[RATE_LIMIT_HIT]
| count by IP, Path
| sort count desc
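Without an aggregation service, the same tally can be computed directly from the log lines; a small sketch, with the regex keyed to the log format shown above (IPv4 as logged; adjust for IPv6 deployments):

```typescript
// Count [RATE_LIMIT_HIT] log entries by "IP Path" pair.
const HIT_LINE = /\[RATE_LIMIT_HIT\] IP: ([\d.]+), Path: (\S+), Time:/;

function tallyHits(lines: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const line of lines) {
    const m = HIT_LINE.exec(line);
    if (!m) continue; // skip unrelated log lines
    const key = `${m[1]} ${m[2]}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

Sorting the resulting entries by count descending reproduces the aggregation query above.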
Testing Rate Limits
Manual Testing
Test rate limits with curl:
# Test strict limit (10 req/min) on upload endpoint
for i in {1..15}; do
echo "Request $i"
curl -X POST \
-H "x-api-key: supersecret" \
-F "file=@test.txt" \
http://localhost:3001/api/upload
sleep 1
done
Expected: First 10 succeed, remaining fail with 429.
Automated Tests
See test/middleware/rate-limit.test.ts for comprehensive test coverage:
- Rate limit enforcement for each tier
- Redis vs in-memory store behavior
- Authenticated exemptions
- Header validation
- Error message format
Run tests:
npm test -- test/middleware/rate-limit.test.ts
Production Recommendations
- Use Redis: Essential for multi-instance deployments
- Monitor logs: Track rate limit hits to identify abuse patterns
- Set exemptions carefully: Only for trusted internal services
- Adjust limits: Based on actual usage patterns and capacity
- Alert on anomalies: High rate limit hit rates may indicate attacks
- Document for users: Include rate limits in API documentation
Troubleshooting
Rate limits not working
- Check the Redis connection if REDIS_URL is set
- Verify the middleware is applied to routes
- Check logs for initialization errors
Too strict / too lenient
- Adjust limits in scripts/middleware/rate-limit.middleware.ts
- Consider user feedback and actual usage patterns
- Monitor API performance under load
Redis connection issues
- Verify Redis is running: redis-cli ping
- Check network connectivity
- Review Redis logs for errors
- API will fall back to in-memory if Redis fails
Rate limits reset unexpectedly
- Using in-memory store without Redis (resets on API restart)
- Redis data eviction policy too aggressive
- Check Redis maxmemory and maxmemory-policy settings