Refresh tokens

Access tokens are short-lived (1 hour). Refresh tokens let you mint new ones without re-prompting the organizer or participant for consent.

This page covers the parts you'll actually need to implement — the rotation mechanics, the lifetime curve, and the one bug that bites everyone (concurrent-refresh races).

When to refresh

Two valid strategies — pick one and stick with it:

Reactive (most common)

When an API call returns 401 token_expired, refresh and retry once. Simple, works.

def call_api(token, url):
    res = http.get(url, headers={"Authorization": f"Bearer {token.access_token}"})
    if res.status == 401 and res.json().get("error") == "token_expired":
        token = refresh(token)  # must persist the rotated refresh token (see below)
        # retry exactly once; a second 401 is a real error, not expiry
        res = http.get(url, headers={"Authorization": f"Bearer {token.access_token}"})
    return res

Proactive

Refresh before the access token actually expires — say, when now + 5 minutes > token.access_expires_at. Avoids the extra request on every API call but requires you to track expiration.

Pick proactive if you do scheduled batch work where a 401 mid-batch is annoying. Pick reactive if you do scattered request/response work where simplicity wins. Don't mix — concurrent calls hitting both refresh paths fight each other.

Refresh request

POST /oauth/token HTTP/1.1
Host: auth.revento.example
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token
&refresh_token={your refresh token}
&client_id={your client_id}
&client_secret={your client_secret}

No PKCE on refresh (PKCE is only for the initial code exchange). No redirect_uri.

Refresh response

{
  "access_token":       "rev_install_new_xyz...",
  "refresh_token":      "rev_refresh_new_abc...",
  "token_type":         "Bearer",
  "expires_in":         3600,
  "refresh_expires_in": 7776000,
  "scope":              "event.read participants.read program.read",
  "event_id":           "evt_abc123",
  "organization_id":    "org_xyz789"
}

Both expires_in and refresh_expires_in are in seconds. The 7,776,000 above is 90 days — the refresh-token sliding window. Each successful refresh resets refresh_expires_in back to 90 days, up to the 1-year hard cap from initial issue.
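Since both fields are relative, convert them to absolute timestamps the moment the response arrives. A minimal sketch (the `*_expires_at` keys are our own naming, not part of the response):

```python
import time

def absolute_expiries(token_response, now=None):
    """Convert relative expires_in fields to absolute Unix timestamps.

    Capture `now` at (or just before) the moment the response arrives so
    processing delay errs on the early side.
    """
    now = time.time() if now is None else now
    return {
        "access_expires_at": now + token_response["expires_in"],
        "refresh_expires_at": now + token_response["refresh_expires_in"],
    }
```

Store these alongside the tokens so the proactive strategy above has something to compare against.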

Critical: the response contains a new refresh token. Refresh tokens are one-time use — the moment you successfully exchange one, the old refresh token is invalidated. Store the new one immediately, atomically, before doing anything else with the new access token.

If your code does:

# WRONG
new_tokens = refresh(old_token)
do_something_that_might_fail(new_tokens.access_token)
db.save(new_tokens)  # never reached on failure → next refresh uses stale old_token → 400 invalid_grant

You'll occasionally end up in a state where the DB still has the old refresh token but the server has invalidated it. The next refresh attempt fails permanently (per family-revoke, below) and the organizer has to re-consent. Not great.

The robust pattern:

# Right
new_tokens = refresh(old_token)
db.save(new_tokens)              # persist FIRST
do_something(new_tokens.access_token)

Or even better, save inside the same transaction that triggered the refresh:

def refresh_and_store(old_token):
    with db.transaction():
        new_tokens = refresh(old_token)
        db.save(new_tokens)
        return new_tokens

Refresh-token lifetime

Property                        Value
Access token lifetime           1 hour
Refresh-token sliding window    90 days from last successful refresh
Hard cap from initial issue     1 year

The sliding window means: each successful refresh extends the refresh token's life by 90 days from that moment. A connection that refreshes weekly will never hit the 90-day idle expiry.

The 1-year hard cap is absolute. After 1 year from the original consent, the user must re-consent through the full OAuth flow — no amount of refreshing extends past this. Plan UX for it: 11 months in, a banner on your dashboard ("Reconnect required by {date}") gives the organizer a graceful path.
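One way to drive that banner is to compute the deadline from the original consent timestamp. A sketch, assuming you record `consented_at` when the organizer first completes OAuth (the 30-day lead time is our own choice, not a Revento rule):

```python
from datetime import datetime, timedelta, timezone

HARD_CAP = timedelta(days=365)
BANNER_LEAD = timedelta(days=30)  # start nudging a month before the cap

def reconnect_status(consented_at, now=None):
    """Return (deadline, show_banner) for the 1-year hard cap.

    `consented_at` is when the organizer first granted consent; no amount
    of refreshing moves the deadline.
    """
    now = now or datetime.now(timezone.utc)
    deadline = consented_at + HARD_CAP
    return deadline, now >= deadline - BANNER_LEAD
```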

Family-revoke-on-reuse

Refresh tokens carry a family_id, and every refresh chains to a parent_token_hash. If the same refresh token is presented twice — i.e., it was already exchanged successfully, so a descendant exists — Revento concludes the token was leaked (or your stored state is stale) and revokes the entire token family.

In practice:

  • Token A is the initial refresh token.
  • You refresh A successfully → get B.
  • Something goes wrong, your code retries with A again.
  • Revento sees A used twice. A and B and any descendants are all revoked.
  • The organizer's connection now requires re-consent.

This protects against an attacker who steals a refresh token and uses it concurrently with the legitimate holder. It also catches the "I forgot to save the new refresh token" bug above.

The remediation when it happens: full re-consent. There's no "undo." Make sure your code handles this case explicitly:

if response.status == 400 and response.json().get("error") == "invalid_grant":
    mark_installation_revoked()
    notify_operators(f"Connection {event_id} requires re-consent")

Concurrent-refresh races

The most common bug: two parts of your code refresh the same token at the same time.

Imagine your worker fleet has two pods both holding token A. A request comes in to pod 1, returns 401, pod 1 starts refreshing. While pod 1 is mid-refresh, another request hits pod 2, also returns 401, pod 2 starts refreshing.

  • Pod 1's refresh succeeds first → token A is revoked, B is the new refresh token.
  • Pod 2's refresh hits Revento with token A — already used. Revento sees a reuse, revokes the whole family. Now both A and B are revoked.

Avoidance:

Option 1: single-flight refresh

In-process: put a mutex around the refresh so concurrent attempts dedupe into one network call. Every waiting caller then gets the same new tokens.
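A minimal in-process sketch: after acquiring the lock, re-check whether another thread already rotated the token before making a redundant (and family-revoking) second call. Class and parameter names are illustrative:

```python
import threading

class SingleFlightRefresher:
    """Dedupe concurrent in-process refreshes to one network call.

    `refresh_fn` performs the actual token-endpoint POST and returns the
    new token object; persisting it is still the caller's job.
    """
    def __init__(self, tokens, refresh_fn):
        self._lock = threading.Lock()
        self._tokens = tokens
        self._refresh_fn = refresh_fn

    def refresh(self, seen):
        # `seen` is the token object the caller observed a 401 with.
        with self._lock:
            if self._tokens is not seen:
                # Another caller already refreshed while we waited; reuse it.
                return self._tokens
            self._tokens = self._refresh_fn(seen)
            return self._tokens
```

The identity check (`is not seen`) is what prevents the second caller from replaying the already-spent refresh token.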

Option 2: distributed lock

If your refresh might happen across multiple processes / pods, take a Redis (or equivalent) lock keyed on the installation id before refreshing. Other callers wait, reload the token from storage after the lock releases, and continue.
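The cross-process variant follows the same shape. A sketch assuming a redis-py-style client (`set(..., nx=True, ex=...)` / `delete`) and a token store of your own; all names here are illustrative, and a production version should also guard against the lock expiring mid-refresh:

```python
import time

def refresh_with_lock(redis_client, store, installation_id, refresh_fn,
                      lock_ttl=30, wait=0.2, timeout=10.0):
    """Single-flight refresh across processes via a best-effort lock.

    `store` needs load(id) / save(id, tokens); tokens expose
    access_expired(). These interfaces are assumptions, not a real SDK.
    """
    lock_key = f"refresh-lock:{installation_id}"
    deadline = time.monotonic() + timeout
    # SET NX acquires the lock only if no one else holds it; EX caps how
    # long a crashed holder can block everyone else.
    while not redis_client.set(lock_key, "1", nx=True, ex=lock_ttl):
        if time.monotonic() > deadline:
            raise TimeoutError("could not acquire refresh lock")
        time.sleep(wait)
    try:
        tokens = store.load(installation_id)
        if not tokens.access_expired():
            return tokens  # another pod refreshed while we waited
        new_tokens = refresh_fn(tokens)
        store.save(installation_id, new_tokens)  # persist before releasing
        return new_tokens
    finally:
        redis_client.delete(lock_key)
```

Reloading the token after acquiring the lock is the cross-process analogue of the in-process identity check: it is what stops pod 2 from replaying pod 1's spent token.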

Option 3: serialize through a single worker

Route all refresh attempts through a queue to one designated worker. Other workers wait for that worker's result instead of refreshing themselves, so only one process ever holds a live refresh token.

Pick whichever fits your architecture. The cheapest is option 1 if your refresh attempts are within a single process; option 2 once you're horizontally scaled.

When the family is revoked (deliberately, by leak detection, or accidentally via concurrent refresh), the next API call returns 401 token_revoked. Refresh attempts return 400 invalid_grant.

Recovery requires the organizer to go back through the organizer-connect flow. Your integration should:

  1. Mark the installation requires_reconsent in your storage.
  2. Surface this to the organizer with a "Reconnect" CTA in your dashboard / admin UX. The CTA links to your /connect route, which kicks off OAuth again.
  3. Optionally, email or notify based on your customer relationship.

There is no programmatic recovery — re-consent is required.

Common pitfalls (recap)

  • Storing the new refresh token after using the new access token. Persist FIRST.
  • Sharing a refresh token across multiple processes without a lock. Family-revoke is unforgiving.
  • Catching 400 invalid_grant as a transient error and retrying. It's permanent. Mark the installation revoked.
  • Refreshing on every API call "just in case." Hammers the token endpoint, increases the chance of races. Refresh only when the access token is genuinely expired.
  • Treating 401 token_expired and 401 token_revoked the same. The first is recoverable by refreshing; the second requires re-consent. Read the structured error field, not just the status code.