PASETO vs JOSE (JWT - JSON Web Tokens) (protocols/standards for managing user sessions)

Hi everybody,

I’m working on a new API, and while digging (once again, why not?) into how to provide auth capabilities for it, I found an interesting post about why we should avoid using JWTs and the JOSE standards for this:

https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-bad-standard-that-everyone-should-avoid

I remember reading another post by @joepie91 some time ago: http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-for-sessions/ with similar claims.

So, a proposed solution is the PASETO (Platform-Agnostic SEcurity TOkens) standard: https://paseto.io/ which is already implemented in several languages (including Elixir). It claims to offer the benefits of JOSE without the many design deficits:

A Platform-Agnostic SEcurity TOken (PASETO) is a cryptographically secure, compact, and URL-safe representation of claims intended for space-constrained environments such as HTTP Cookies, HTTP Authorization headers, and URI query parameters. A PASETO encodes claims to be transmitted in a JSON [RFC8259] object, and is either encrypted symmetrically or signed using public-key cryptography.

This, combined with a stateful server to keep control over sessions and/or refresh tokens, looks like a good way to go.

My main question is: why this, and not something else like Phoenix tokens or Fernet? Or why not just stay with JWTs, which seem to be used everywhere now and are more battle-tested? I have no background or deep knowledge in cryptography, so any help dispelling my doubts will be appreciated.


I think it mainly depends on what you are going to use these protocols for. JWT and by extension PASETO are good for one-off requests, like “hey, image service, resize me this image, here’re my creds”. They might not have many advantages for anything else (as the second article you linked argues).
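The "one-off request" pattern above can be sketched with any signed-token scheme; here is a minimal, hypothetical example using `Phoenix.Token` (mentioned later in the thread), assuming Phoenix is a dependency. The secret, salt, and claims are all made up for illustration.

```elixir
# A service signs a small, short-lived claim set and hands it to the
# caller; the target service verifies it independently. All names here
# are illustrative.
secret = String.duplicate("a-long-enough-secret-key-base!", 2)

token =
  Phoenix.Token.sign(secret, "image-service", %{user_id: 42, action: :resize})

# The image service verifies the token, rejecting anything older than 60s:
{:ok, claims} =
  Phoenix.Token.verify(secret, "image-service", token, max_age: 60)
```

The same shape works with a JWT or PASETO library; the point is that the token carries everything the receiving service needs for that single request.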


Yeah, this. They are great for server-to-server secure data transfer through a client channel, but way too heavyweight for ongoing communication within a session.


Thanks for your help @idi527 & @OvermindDL1. It seems I’ll go the plain old sessions way then, with the help of Swarm and GenServer workers (one per user, holding multiple sessions in state, stopping themselves after some minutes of inactivity) as a cache, to keep control over sessions without querying the DB on each request. Does this look like a viable way to implement sessions? Any details/concerns I should take care of going this way?

I’d use an ETS table (maybe an ETS table per scheduler if you start experiencing lock contention) instead of processes for session data caching. But I’d also only start doing that once I notice there is actually a need for a cache.
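A minimal sketch of what an ETS-backed session cache could look like; the table name, token, and session shape are all illustrative.

```elixir
# Create a public set-type table tuned for many concurrent readers.
table = :ets.new(:sessions, [:set, :public, read_concurrency: true])

# Cache a session under its token.
:ets.insert(table, {"token-abc", %{user_id: 42, expires_at: 1_700_000_000}})

# Lookups are cheap reads that don't go through a single process.
session =
  case :ets.lookup(table, "token-abc") do
    [{_token, data}] -> {:ok, data}
    [] -> :miss
  end
```

Compared to one GenServer per user, this avoids funnelling every read through a process mailbox; expiry would still need a sweeper or per-read check.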

I use CacheX as a cache; it is backed by ETS. You can have it acquire data from lookup functions when an entry has timed out or doesn’t exist, so all access can be done ‘through’ it. Any time you want to ‘write’ to the database, just clear the cache at the same key on all nodes, or directly inject the new data. This is fantastic for read-very-often data with little writing.

If you do lots of writing, just do database access every time, like really.

Honestly, you should just start with database access every time anyway; it’s super fast as it is, and with a good module setup you can change how it works (such as switching to CacheX) at any time.
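The read-through pattern described above could look roughly like this with the `cachex` hex package, assuming its `Cachex.fetch/3` fallback API; the cache name, key, and DB loader are illustrative stand-ins.

```elixir
# Start a named cache (normally done in a supervision tree).
{:ok, _pid} = Cachex.start_link(:sessions, [])

# Hypothetical stand-in for a real database lookup.
load_session_from_db = fn _token -> %{user_id: 42} end

# On a cache miss, the fallback runs and its result is committed to the
# cache; on a hit, the cached value is returned directly.
{status, session} =
  Cachex.fetch(:sessions, "token-abc", fn token ->
    {:commit, load_session_from_db.(token)}
  end)

# On a write, invalidate the key so the next read falls through to the DB.
Cachex.del(:sessions, "token-abc")
```

The `{:commit, value}` tuple tells the cache to store the loaded value; returning `{:ignore, value}` instead would serve the value without caching it.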


Thanks for your help. Now I have an idea of where to start.
CacheX looks like the way to go for an upgrade when needed. @OvermindDL1, how do you replicate and update CacheX across distributed nodes? Does the library include any helpers, or should it be handled manually?

Related to this: We might be working on slowly migrating a Rails app to a Phoenix one in the near future. What would be the best way to share sessions between these two? Is this a suitable use for one of these methods?

You can parse rails’ cookies in phoenix, and extract the session data from them.


I agree with @idi527, and I’ve been able to share sessions from a Rails app to a Phoenix app without making any changes to the Phoenix app (of course, I needed to copy the secret key/token from Rails over to Phoenix).

Hey, I actually wrote the Paseto library for Elixir, so I’ll explain my reasoning behind it:

1.) Much saner defaults than JOSE (specifically the algorithm-choice issue). We’re removing the decision-making process behind choosing crypto by enforcing sane defaults.
2.) Preferring JWT/Paseto enables language-agnostic backends (beyond just a happy path of Ruby -> Phoenix), rather than relying only on semi-exclusive formats.

Finally, it’s really easy to keep JWTs/Pasetos small if you only put sane information in the token: bitmasks for auth flags, don’t stick the entire user model in the token, &c.
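The bitmask idea mentioned above can be sketched in a few lines; the flag names and values here are made up for illustration.

```elixir
import Bitwise

# Hypothetical permission flags, one bit each.
read  = 0b001
write = 0b010
admin = 0b100

# Grant read + write by OR-ing the bits: the whole claim is one small int.
flags = read ||| write

# A permission is granted iff AND-ing with its mask is non-zero.
can_write? = (flags &&& write) != 0
is_admin?  = (flags &&& admin) != 0
```

Storing `flags` (here `3`) as a single integer claim keeps the token tiny compared to embedding a list of role strings, let alone a user model.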

I wouldn’t use a Paseto unless I were working in a microservice architecture–I’d default to normal sessions.


Your multi-node cache is already your DB; CacheX is for local caches, and if you want to invalidate data in it on all your nodes, just RPC the delete command across all of them. :slight_smile:
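"RPC the delete command across all of them" could look like this, using Erlang's built-in `:rpc.multicall/4`; the cache name and key are illustrative, and this assumes a CacheX-style cache named `:sessions` is running on each node.

```elixir
# Start the local cache (normally supervised; shown here for completeness).
{:ok, _pid} = Cachex.start_link(:sessions, [])

# Invalidate a key on this node and every connected node in one call.
nodes = [node() | Node.list()]

{results, bad_nodes} =
  :rpc.multicall(nodes, Cachex, :del, [:sessions, "token-abc"])
```

`bad_nodes` lists any nodes the call couldn't reach, so a production version would want to handle (or at least log) that case.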


Thanks for the input and clarifying the whys of PASETOs, good work with the library! :+1:
Will keep it in mind for when the time comes.

Makes sense, thanks for clarifying all these issues to me, I don’t have a lot of experience yet.


Sorry to revive an old thread, but I came across it while researching whether to implement PASETO for our external API. Would you mind clarifying why PASETO might be too heavyweight for communication within a session?

Thanks!
