Tesla: intercept 401 and refresh token on demand

Hi all

I am building a small web service that is essentially a nice-looking wrapper around an external API. It reads user details from that external API; if the user details list any orders, it makes more API calls to read those orders’ details and presents it all together on a nice-looking web page.

The API is OAuth-authenticated and I work with it via a server flow (the client credentials grant, I believe, though I’m not sure that’s the correct name). Basically, it’s my server that gets an auth token and fetches data; the user doesn’t need to approve anything.

I use Tesla to access this third-party API and it works very well, except for when the auth token expires. Then the API returns 401, I need to refresh the token (yet another HTTP request), and then retry what I was doing.

It’s not rocket science to handle, yet it’s still somewhat inconvenient to bake this 401-plus-retry readiness into every call.

In the JavaScript axios library, similar situations are handled with “interceptors”. An interceptor can catch the 401, “put the original request on pause”, execute another request to refresh the token, then resume (or actually retry) the original request with the renewed token.

  • Can something like this be done with Tesla as well?
  • Or what would be the proper Elixir way to renew tokens on demand?
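
To make the question concrete, here is a rough sketch of the kind of middleware I have in mind. `MyApp.TokenStore` is entirely made up; it would cache the current token and fetch a fresh one in `refresh/0`:

```elixir
defmodule MyApp.RefreshOn401 do
  @moduledoc """
  Sketch only: retry a request once with a fresh token after a 401.
  MyApp.TokenStore is hypothetical; it caches the current token and
  refresh/0 fetches a new one from the OAuth server.
  """
  @behaviour Tesla.Middleware

  @impl Tesla.Middleware
  def call(env, next, _opts) do
    case env |> put_token(MyApp.TokenStore.get()) |> Tesla.run(next) do
      {:ok, %Tesla.Env{status: 401}} ->
        # token was rejected: refresh it and retry the original request once
        env |> put_token(MyApp.TokenStore.refresh()) |> Tesla.run(next)

      result ->
        result
    end
  end

  defp put_token(env, token) do
    Tesla.put_header(env, "authorization", "Bearer " <> token)
  end
end
```

The idea is that the middleware runs the rest of the stack via `Tesla.run/2`, inspects the result, and re-runs the original request if the token was rejected.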

Does the Retry middleware help?

https://hexdocs.pm/tesla/Tesla.Middleware.Retry.html#module-examples

Though I’m not sure if this will let you change the actual request being made, like hitting /oauth/token to get a new token if a request fails…


In my current work, I renew expired (or soon-to-expire) tokens before I make my external network calls. I added that “soon to expire” logic after observing poor alignment between our Oban jobs, which ran every 60 minutes, and tokens that also expired every 60 minutes … and full pagination captures that ran past the expiration.
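
Roughly, the “renew if expiring soon” check looks like this (a sketch with made-up names: tokens are assumed to be stored as a map with an `:expires_at` DateTime, and `fetch_new_token/0` stands in for the real call to the token endpoint):

```elixir
defmodule MyApp.Token do
  # refresh anything that expires within the next 5 minutes
  @margin_seconds 300

  def ensure_fresh(%{expires_at: expires_at} = token) do
    if DateTime.diff(expires_at, DateTime.utc_now()) < @margin_seconds do
      fetch_new_token()
    else
      token
    end
  end

  defp fetch_new_token do
    # POST to the token endpoint, then derive :expires_at from the
    # expires_in the server returns:
    #   DateTime.add(DateTime.utc_now(), expires_in, :second)
    raise "not implemented"
  end
end
```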

Sorry I don’t have a good solution for you, but wanted to chime in and see what other people say too.


That is interesting; it’s an approach I also wanted to try in the future. Are you refreshing the token with a job, or just on demand when you want to make a network call?

Right now, it is on demand. The module exposes a fetch_user function; to return the full list of users (which may require synchronous paginated requests), we renew the token (if needed) and then start the paginated requests.

There is an Oban job that uses the module to do its work (outside the cycle of normal user activity).

If you want to chat about our setup, I’m happy to do a private demo.


I wanted to write such a Tesla middleware, but didn’t have time.

However, you might be interested in oauth2_token_manager: it’s very beta, but the goal is to handle token renewal among several actors. Don’t use it in prod, but you can probably take inspiration from it.

You’ll also have to authenticate to the OAuth2 server when renewing the token. You can use tanguilp/tesla_oauth2_client_auth (Tesla middlewares for OAuth2 and OpenID Connect client authentication) for this, which handles various client authentication schemes.


That’s what I ended up doing for now. I take the expires_in value and store the token in a Cachex cache with a TTL a little smaller than that (just in case). Next time, the cached token is used if it’s still available.
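
A sketch of that, with `request_token/0` standing in for the real call to the token endpoint (and assuming the Cachex cache is already started in the supervision tree):

```elixir
defmodule MyApp.TokenCache do
  @cache :oauth_tokens
  # knock a little off expires_in, just in case
  @safety_seconds 30

  def get_token do
    case Cachex.get(@cache, :token) do
      {:ok, nil} -> refresh_and_store()
      {:ok, token} -> token
    end
  end

  defp refresh_and_store do
    {token, expires_in} = request_token()
    # TTL slightly shorter than the token’s real lifetime
    ttl = :timer.seconds(expires_in - @safety_seconds)
    Cachex.put(@cache, :token, token, ttl: ttl)
    token
  end

  defp request_token do
    # hits /oauth/token and returns {token, expires_in_seconds}
    raise "not implemented"
  end
end
```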

That said, refetching on demand would still be good to have - after all, a token can suddenly become invalid for various reasons, e.g. the service could reset all tokens if some secret is leaked, or maybe they keep them in storage such that a sudden reboot invalidates them all.

As @JohnnyCurran says, something like Retry might be useful, since Retry presumably retries the whole call somehow. My Elixir skills, experience, and willpower aren’t high enough to dig into it myself right now, but that might be a good path.

I think you could do it by using the :should_retry option.

You would pass a function that checks the result to see if the status was 401.
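
Something like this, adapting the example from the Retry docs (only the 401 clause is specific to this thread; the client module name is made up):

```elixir
defmodule ExternalApi.Client do
  use Tesla

  plug Tesla.Middleware.Retry,
    delay: 500,
    max_retries: 3,
    should_retry: fn
      # retry when the token was rejected
      {:ok, %{status: 401}} -> true
      {:ok, _} -> false
      {:error, _} -> true
    end
end
```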

edit: I wish we could delete posts, because I realised this was not helpful right after clicking submit. As mentioned above, this is no good on its own because you also need to handle actually fetching valid credentials before retrying.