Rate limits¶
Per-token budgets, headers, behavior on hit, and when to ask for a higher limit.
The numbers¶
| Token type | Per hour | Per-minute burst | Concurrent in-flight |
|---|---|---|---|
| Installation | 500 | 30 | 5 |
| User | 200 | 20 | 5 |
Webhooks are the strongly recommended path for change notification — polling burns the budget fast.
Headers we send¶
Every API response includes:

- `X-RateLimit-Limit` — the hourly budget for your token type.
- `X-RateLimit-Remaining` — requests left in the current window.
- `X-RateLimit-Reset` — a Unix epoch timestamp: the moment the current hour-window resets.

After that timestamp, `Remaining` returns to the full `Limit`.
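These headers make pre-emptive throttling cheap: you can compute how long the window has left and slow down before the server ever says 429. A minimal sketch of two helpers (the function names are ours, not part of any SDK):

```python
import time

def seconds_until_reset(headers, now=None):
    """Seconds until the hourly window resets, per X-RateLimit-Reset."""
    now = time.time() if now is None else now
    reset = int(headers.get("X-RateLimit-Reset", 0))
    return max(0.0, reset - now)

def should_preempt(headers, reserve=5):
    """True when Remaining has dropped below a safety reserve."""
    return int(headers.get("X-RateLimit-Remaining", "0")) < reserve
```

Pausing once `should_preempt` fires keeps a small reserve for interactive or urgent calls instead of draining the budget to zero.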
When you hit a limit:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 42
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1747000042
Content-Type: application/json

{
  "error": "rate_limit_exceeded",
  "message": "Rate limit hit. Retry after 42 seconds.",
  "request_id": "..."
}
```
Retry-After is in seconds. Wait at least that long before your next request. Repeated 429s in a short window may trigger a temporary suspension — back off, don't thrash.
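Parsing the header defensively avoids thrashing when it's missing or malformed. A hedged sketch (this handles the seconds form only; `retry_after_seconds` is our name, not an SDK function):

```python
def retry_after_seconds(headers, default=60):
    """Parse Retry-After (seconds form) from a 429 response.

    Falls back to a conservative default when the header is
    absent or unparseable, so the client always waits.
    """
    value = headers.get("Retry-After")
    try:
        return max(0, int(value))
    except (TypeError, ValueError):
        return default
```

Treat the result as a floor: waiting longer is always safe, waiting less risks the temporary suspension described above.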
Three independent budgets¶
The three columns above are independent:
- The 500/h is a rolling hour. Burning all 500 in 5 minutes means 0 requests for the next 55 minutes.
- The 30/minute burst caps you at 30 in any 60-second window — even if your hourly budget has plenty of room.
- The 5-concurrent caps simultaneously in-flight requests, regardless of your hourly or per-minute counters.
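The burst budget is the one most easily tripped by accident, and it can be guarded client-side. A minimal sketch of a sliding-window counter for the 30-per-60-seconds cap (the class and method names are ours, not part of the API):

```python
import collections
import time

class BurstWindow:
    """Client-side guard for the 30-requests-per-60-seconds burst cap."""

    def __init__(self, limit=30, window=60.0):
        self.limit, self.window = limit, window
        self.stamps = collections.deque()  # send times, oldest first

    def wait_time(self, now=None):
        """Seconds to wait before the next request fits (0.0 = go now)."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            return 0.0
        return self.window - (now - self.stamps[0])

    def record(self, now=None):
        """Call once per request actually sent."""
        self.stamps.append(time.monotonic() if now is None else now)
```

This only guards the burst column; the hourly and concurrent budgets still need their own handling.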
Hitting the per-minute burst limit returns 429 rate_limit_exceeded (same error code as hourly).
Hitting the concurrent limit returns a different code:
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json

{
  "error": "concurrent_limit_exceeded",
  "message": "More than 5 in-flight requests for this token. Queue or back off.",
  "request_id": "..."
}
```
There's no Retry-After for concurrent_limit_exceeded — wait for one of your in-flight requests to complete, then retry. Typical resolution time is well under a second.
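The cheapest way to never see `concurrent_limit_exceeded` is to queue client-side with a semaphore sized to the cap. A sketch (`call_api` and `do_request` are our names, not SDK functions):

```python
import threading

MAX_IN_FLIGHT = 5  # the per-token concurrent in-flight cap

_slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)

def call_api(do_request):
    """Run one request, blocking until one of the 5 slots is free.

    Excess callers queue here instead of hitting the server and
    getting a 429 back.
    """
    with _slots:
        return do_request()
```

In an async codebase the same idea is an `asyncio.Semaphore(5)` around the request coroutine.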
Why polling burns budget¶
A naive sync that polls GET /events/{id}/participants every minute burns 60 of the 500-request hourly budget on that one poll alone. Add /program, /activities, and /locations, and you're past 200/h before delivering any actual data.
Webhook-driven approaches use ~0 budget at idle. The integration listens, reacts, and only calls the API when something changed. The same budget that'd be drained by polling lets your integration handle hundreds of events with no risk of throttling.
If you're polling, ask yourself: can I subscribe to a webhook for this instead? The answer is almost always yes. See the webhook event catalog.
Best practices¶
Webhook-first¶
If you can react to a webhook, do. Polling should be a fallback for "something might have been missed" scenarios, not your primary data flow.
Cache aggressively¶
Static-ish data (event metadata, program structure) rarely changes. Cache it for at least an hour and refresh on event.published / event.unpublished webhooks rather than refetching on every request.
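One way to wire this up, as a sketch (the class name and fetch signature are ours): a TTL cache whose `invalidate` the webhook handler calls on event.published / event.unpublished.

```python
import time

class EventCache:
    """One-hour TTL cache for static-ish event metadata."""

    def __init__(self, fetch, ttl=3600.0, clock=time.monotonic):
        self.fetch, self.ttl, self.clock = fetch, ttl, clock
        self.store = {}  # event_id -> (expires_at, data)

    def get(self, event_id):
        entry = self.store.get(event_id)
        if entry and entry[0] > self.clock():
            return entry[1]  # fresh: no API call
        data = self.fetch(event_id)  # one call, then cached for an hour
        self.store[event_id] = (self.clock() + self.ttl, data)
        return data

    def invalidate(self, event_id):
        """Call from event.published / event.unpublished handlers."""
        self.store.pop(event_id, None)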
Dedup webhook-driven follow-up fetches¶
The webhook payload itself carries the resource snapshot — most handlers don't need to fetch anything afterwards. If a handler does need to call the API for data NOT in the payload (e.g. you got application.approved and want the parent event's full metadata for context), dedupe those follow-up calls. 5 webhooks → 1 fetch of the parent event's metadata, not 5.
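A sketch of that dedup for a batch of webhooks (the `event_id` payload field and both function names are our assumptions for illustration):

```python
def fetch_parents_once(webhooks, fetch_event):
    """Fetch each referenced parent event at most once per batch.

    Five application.approved webhooks pointing at the same event
    cost one metadata fetch, not five.
    """
    fetched = {}
    for hook in webhooks:
        event_id = hook["event_id"]
        if event_id not in fetched:
            fetched[event_id] = fetch_event(event_id)
    return fetched
```

For webhooks that arrive one at a time rather than in batches, the same effect comes from routing the follow-up fetch through a short-TTL cache.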
Backoff on 429¶
Don't retry immediately. Use exponential backoff with jitter:
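A minimal sketch of full-jitter exponential backoff that also honors Retry-After as a floor (the function name and defaults are ours):

```python
import random

def backoff_delay(attempt, retry_after=0, base=1.0, cap=60.0):
    """Delay before retry `attempt` (0-based): exponential with full
    jitter, never shorter than the server's Retry-After."""
    jittered = random.uniform(0, min(cap, base * (2 ** attempt)))
    return max(retry_after, jittered)
```

The jitter spreads retries from many clients (or many workers in one client) so they don't re-collide on the same second; the cap keeps worst-case waits bounded.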
Request your own X-Revento-Request-Id¶
Supplying your own id makes debugging easier when you ask for help. We log it on our side, and matching IDs turns an investigation from minutes into seconds.
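A sketch of a header-building helper that attaches a fresh id per call (the helper name is ours, and the Bearer auth scheme is an assumption; only the X-Revento-Request-Id header name comes from this page):

```python
import uuid

def request_headers(token):
    """Headers for one API call, with a caller-chosen request id.

    Log the id next to the response so support can match it
    against server-side logs.
    """
    return {
        "Authorization": f"Bearer {token}",  # assumed auth scheme
        "X-Revento-Request-Id": str(uuid.uuid4()),
    }
```

Generate one id per request, not one per session, so each call can be traced individually.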
Asking for higher limits¶
Conservative defaults can be raised on request after launch. Reach out to your account contact with:
- Your integration's `client_id`.
- The current limits you're hitting.
- A description of why — what use case requires more, what's the steady-state traffic profile.
- The number of active customers / events on your integration.
Most "I need 5,000/h" requests turn out to be solvable by switching from polling to webhooks — we'll discuss before bumping the number.
Sandbox limits¶
Sandbox uses the same numbers as production. Your tests should validate that your code respects rate limits before shipping to production.
If your sandbox tests are constantly hitting the limit, the answer is probably "your tests are doing too many calls per token" rather than "the limit is too low."