API Rate Limits
To ensure fair usage and system stability, the DanubeData API implements rate limiting on all endpoints.
Rate Limit Tiers
Rate limits vary based on your account tier:
| Account Tier | Requests per Minute | Requests per Hour | Requests per Day |
|---|---|---|---|
| Free | 60 | 1,000 | 10,000 |
| Starter | 120 | 3,000 | 50,000 |
| Pro | 300 | 10,000 | 200,000 |
| Enterprise | Custom | Custom | Custom |
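The tier table above can be mirrored in client code as a simple lookup, which is handy for client-side throttling. This is an illustrative sketch; the names are not part of the API:

```python
# Rate limits per account tier, mirroring the table above.
# Enterprise limits are negotiated per contract, so they are None here.
TIER_LIMITS = {
    "Free":    {"per_minute": 60,  "per_hour": 1_000,  "per_day": 10_000},
    "Starter": {"per_minute": 120, "per_hour": 3_000,  "per_day": 50_000},
    "Pro":     {"per_minute": 300, "per_hour": 10_000, "per_day": 200_000},
    "Enterprise": None,  # custom
}

def minute_limit(tier):
    """Return the per-minute limit for a tier, or None if custom/unknown."""
    limits = TIER_LIMITS.get(tier)
    return limits["per_minute"] if limits else None
```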
Rate Limit Headers
Every API response includes rate limit information in the headers:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1699564800
Header Descriptions
- X-RateLimit-Limit: Maximum requests allowed in the current window
- X-RateLimit-Remaining: Number of requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp when the rate limit resets
Checking Your Rate Limit
In HTTP Responses
curl -I -H "Authorization: Bearer YOUR_TOKEN" \
https://danubedata.com/api/v1/vps
# Response headers:
# X-RateLimit-Limit: 60
# X-RateLimit-Remaining: 59
# X-RateLimit-Reset: 1699564860
Parsing Rate Limit Headers
// JavaScript example
const axios = require('axios');

axios.get('https://danubedata.com/api/v1/vps', {
  headers: { Authorization: 'Bearer YOUR_TOKEN' }
})
  .then(response => {
    const remaining = response.headers['x-ratelimit-remaining'];
    const reset = response.headers['x-ratelimit-reset'];
    console.log(`Requests remaining: ${remaining}`);
    console.log(`Resets at: ${new Date(reset * 1000)}`);
  });
# Python example
import requests
from datetime import datetime

response = requests.get('https://danubedata.com/api/v1/vps', headers=headers)
remaining = response.headers.get('X-RateLimit-Remaining')
reset_time = response.headers.get('X-RateLimit-Reset')
print(f"Requests remaining: {remaining}")
print(f"Resets at: {datetime.fromtimestamp(int(reset_time))}")
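Building on the parsed headers, a client can compute how long to wait before the window resets. A minimal sketch using only the standard library; the helper name is illustrative:

```python
import time

def seconds_until_reset(headers, now=None):
    """Seconds until the rate limit window resets, per X-RateLimit-Reset.

    Returns 0 if the header is missing or the reset time has already passed.
    """
    reset = headers.get("X-RateLimit-Reset")
    if reset is None:
        return 0
    now = time.time() if now is None else now
    return max(0, int(reset) - now)
```

Taking `now` as an optional parameter keeps the function easy to test without mocking the clock.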
Rate Limit Exceeded (429)
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
{
"message": "Too Many Requests",
"retry_after": 45
}
Retry-After Header
The response includes a Retry-After header indicating how many seconds to wait before retrying:
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1699564860
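A client handling a 429 can pick a wait time by preferring Retry-After and falling back to X-RateLimit-Reset. A sketch; the 60-second final fallback is an assumption, not documented behavior:

```python
import time

def backoff_seconds(headers, now=None, default=60):
    """How long to wait after a 429 response.

    Prefers Retry-After, falls back to X-RateLimit-Reset,
    then to `default` seconds (an assumed safety net).
    """
    if "Retry-After" in headers:
        return int(headers["Retry-After"])
    if "X-RateLimit-Reset" in headers:
        now = time.time() if now is None else now
        return max(0, int(headers["X-RateLimit-Reset"]) - now)
    return default
```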
Best Practices
1. Implement Exponential Backoff
When receiving a 429 response, wait before retrying:
async function makeRequestWithRetry(url, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
const response = await axios.get(url);
return response.data;
} catch (error) {
if (error.response?.status === 429) {
const retryAfter = error.response.headers['retry-after'] || 60;
const delay = retryAfter * 1000 * Math.pow(2, i); // Exponential backoff
console.log(`Rate limited. Waiting ${delay}ms before retry...`);
await new Promise(resolve => setTimeout(resolve, delay));
} else {
throw error;
}
}
}
throw new Error('Max retries exceeded');
}
2. Monitor Rate Limit Headers
Check remaining requests before making additional calls:
function shouldMakeRequest(headers) {
const remaining = parseInt(headers['x-ratelimit-remaining']);
const limit = parseInt(headers['x-ratelimit-limit']);
// Stop if less than 10% remaining
return remaining > (limit * 0.1);
}
3. Cache Responses
Reduce API calls by caching responses:
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
async function getCached(key, fetcher) {
const cached = cache.get(key);
if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
return cached.data;
}
const data = await fetcher();
cache.set(key, { data, timestamp: Date.now() });
return data;
}
// Usage
const vps = await getCached('vps-list', () =>
api.get('/vps').then(r => r.data)
);
4. Batch Requests
Instead of multiple individual requests, use list endpoints:
# ❌ Bad: Multiple requests
GET /api/v1/vps/vps-123
GET /api/v1/vps/vps-456
GET /api/v1/vps/vps-789
# ✅ Good: Single request
GET /api/v1/vps
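The same idea in client code: fetch the list once and index it locally instead of issuing one request per instance. A sketch; the fetcher and field names are illustrative:

```python
def index_by_id(fetch_all):
    """Fetch the full VPS list once and build a local id -> record index."""
    return {vps["id"]: vps for vps in fetch_all()}

# One API call replaces three per-instance lookups (sample data shown):
servers = index_by_id(lambda: [
    {"id": "vps-123", "status": "running"},
    {"id": "vps-456", "status": "stopped"},
    {"id": "vps-789", "status": "running"},
])
```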
5. Use Webhooks
Instead of polling for changes, configure webhooks:
# ❌ Bad: Poll every minute
while true; do
curl /api/v1/vps/vps-123/status
sleep 60
done
# ✅ Good: Configure webhook once
curl -X PUT /api/v1/webhooks/config \
-d '{"webhook_url": "https://example.com/webhook"}'
Rate Limit Windows
Rate limits are calculated using sliding windows:
- Per-Minute: Rolling 60-second window
- Per-Hour: Rolling 60-minute window
- Per-Day: Rolling 24-hour window
This means:
- If you make 60 requests at 10:00:00, you can't make more until those requests age out of the window at 10:01:00
- Limits gradually refill as time passes
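The sliding-window behavior can be approximated client-side to throttle requests before the server does. A minimal sketch; the server-side algorithm may differ in detail:

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests in any rolling `window` seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.sent = deque()  # timestamps of recent requests

    def allow(self, now):
        # Drop timestamps that have aged out of the rolling window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False
```

Passing `now` explicitly (e.g. `time.time()`) keeps the limiter deterministic and easy to test; this is how the "limits gradually refill" behavior falls out of the rolling window.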
Exemptions
Certain endpoints have different limits:
Higher Limits
- Health Check (/api/v1/health) - Not rate limited
- Status Endpoints - 2x normal rate limit
Lower Limits
- Bulk Operations - 10 requests per minute
- Snapshot Creation - 5 requests per minute
- Instance Creation - 20 requests per hour
Monitoring Usage
In Your Dashboard
View your API usage:
- Navigate to API Tokens
- Click on a token to view its usage statistics
- See graphs of:
- Requests per hour
- Rate limit hits
- Most used endpoints
Usage Alerts
Set up alerts for:
- Approaching rate limits (80% usage)
- Rate limit exceeded events
- Unusual usage patterns
Upgrading Your Limits
Need higher rate limits?
For Individuals
Upgrade to a higher tier:
- Starter: 2x rate limits
- Pro: 5x rate limits
For Enterprises
Contact sales for custom rate limits:
- Dedicated rate limit tier
- Burst capacity
- Priority API access
- SLA guarantees
Common Scenarios
Monitoring/Polling
If you need to poll for status updates frequently:
- Use webhooks instead - Best option
- Increase polling interval - Check every 5 minutes instead of every minute
- Use efficient status endpoints - /status endpoints use fewer resources
Bulk Operations
When performing bulk operations:
// Add delays between requests
async function processBatch(items) {
for (const item of items) {
await processItem(item);
// Wait 1 second between requests to avoid rate limits
await new Promise(r => setTimeout(r, 1000));
}
}
CI/CD Pipelines
For automated deployments:
- Use a dedicated API token with higher priority
- Implement retry logic with exponential backoff
- Cache static data (images, SSH keys)
- Run deployments sequentially, not in parallel
Troubleshooting
Consistently Hitting Rate Limits
Solutions:
- Implement caching
- Use webhooks instead of polling
- Batch requests where possible
- Upgrade to a higher tier
429 Errors in Production
Immediate Actions:
- Check the Retry-After header
- Implement exponential backoff
- Review recent code changes for inefficient API usage
- Enable caching if not already enabled
Rate Limit Headers Missing
If rate limit headers are missing:
- Check you're using API v1 endpoints (/api/v1/...)
- Verify authentication is working correctly
- Contact support if the issue persists
Next Steps
- Visit /docs/api to see rate limits for specific endpoints
- Configure webhooks to reduce polling
- Implement caching in your application
- Monitor your API usage in the dashboard