Do refresh tokens provide a false sense of security?

“The server can pull the plug at any moment” might be a requirement, or it might just as well not be. There are certainly contexts where the latter is the case – not everything involves access to data that must not leak – and in the end it’s less a technical decision than one of risk versus benefit taken by the business. But as always, this requires awareness of the risks involved and of the business value gained by taking or not taking them.

Generally I’d still suggest keeping things simple: use DB-driven auth/static tokens, and have HTTP-only sessions where useful. If there’s business value to be had from using less secure (mostly less central-DB-heavy) options, then there’s money for swapping out a previous implementation as well.

2 Likes

Regular, stateful sessions. Just have a table with the fields session_id, data, and expire_at. Generate the session ID via :crypto.strong_rand_bytes(32), encode it however you want, and call it a day. It will be the most secure, most obvious, and easiest-to-manage approach.
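A minimal sketch of that approach, assuming Ecto/Postgres; the module and schema names (MyApp.Sessions, MyApp.Sessions.Session) are placeholders, not a reference implementation:

```elixir
defmodule MyApp.Sessions do
  @moduledoc "Server-side session store: session_id, data, expire_at."
  import Ecto.Query
  alias MyApp.Repo
  alias MyApp.Sessions.Session

  # Create a session: 32 random bytes, URL-safe Base64 encoded.
  def create(data, ttl_seconds \\ 86_400) do
    session_id = Base.url_encode64(:crypto.strong_rand_bytes(32), padding: false)
    expire_at = DateTime.add(DateTime.utc_now(), ttl_seconds, :second)
    Repo.insert!(%Session{session_id: session_id, data: data, expire_at: expire_at})
    session_id
  end

  # Look a session up, rejecting expired ones.
  def fetch(session_id) do
    now = DateTime.utc_now()
    Repo.one(from s in Session, where: s.session_id == ^session_id and s.expire_at > ^now)
  end

  # Revoke immediately – the server pulls the plug.
  def delete(session_id) do
    Repo.delete_all(from s in Session, where: s.session_id == ^session_id)
  end
end
```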

cc @Pistrie

If you want a one-time token then Phoenix.Token will be a much better choice than JWT. It is a simple, versioned, signed token. The lack of configurability is an enormous gain there, as you cannot accidentally use an incorrect set of keys/features, and it is much harder to fall back to insecure algos. If you want something cross-platform then PASETO or similar would probably be the best choice (I am working on BASETO, which will use BARE instead of JSON to encode the data, but that is irrelevant here, the idea is the same).
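For reference, a hedged sketch of typical Phoenix.Token usage; MyAppWeb.Endpoint, the “user auth” salt and the user variable are placeholders:

```elixir
# Sign a token embedding the user id. The first argument can be an endpoint,
# a conn, or the secret key base itself.
token = Phoenix.Token.sign(MyAppWeb.Endpoint, "user auth", user.id)

# Verify it later; :max_age (seconds) enforces expiry without server-side state.
case Phoenix.Token.verify(MyAppWeb.Endpoint, "user auth", token, max_age: 86_400) do
  {:ok, user_id} -> {:ok, user_id}
  {:error, reason} -> {:error, reason} # :expired, :invalid or :missing
end
```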

For browser sessions just use HTTP-only cookies; for an API you can use cookies or the Authorization header, that doesn’t really change much. If you want to be ultra secure, then you can try mTLS for that – rarely used, but a super powerful approach.
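To illustrate the Authorization-header variant, a hedged plug sketch; MyApp.Sessions.fetch/1 refers to the session store sketched above, and everything else is an assumption:

```elixir
defmodule MyAppWeb.Plugs.BearerAuth do
  @moduledoc "Reads `Authorization: Bearer <session_id>` and looks the session up."
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    with ["Bearer " <> session_id] <- get_req_header(conn, "authorization"),
         %{} = session <- MyApp.Sessions.fetch(session_id) do
      assign(conn, :current_session, session)
    else
      _ ->
        conn
        |> send_resp(401, "unauthorized")
        |> halt()
    end
  end
end
```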

8 Likes

Completely agree!

Unless you have a large microservice environment and you’ve measured that your authentication service is causing a problematic amount of latency, you don’t need any of the performance optimisations that JWT and other error-prone tokens may give you.

1 Like

But… I was doing that with PHP some 10+ years ago. Good to know it’s still good practice! :smiley:

Also somewhat disappointing. For all the talk of quantum-resistant encryption and perfect forward secrecy one would expect at least some of that to spill over to application development security…

Reading about mTLS has proven interesting, thank you.

Phoenix.Token is now my new favorite thing.

2 Likes

It does; it’s just that when you want some form of authorisation and revocability, you still need to ping some centralised store. You cannot leap over physics.

PASETO is encoding agnostic.

From the specification:

If you want to propose a non-JSON encoding for PASETO, it might be worthwhile to register it as a suffix in the specification rather than fork as “BASETO”.

2 Likes

A bit off topic but what is BARE encoding? A quick search didn’t reveal much

https://baremessages.org/

Binary encoding, think protobuf, but way simpler and focused on tokens instead of RPC.

2 Likes

I don’t mind at all.

If you are keeping the refresh token in the client then it will be available to be stolen. Refresh tokens must be kept on the server side. When using tokens I personally prefer to refresh them on each API request; in other words an issued token is only valid until it is used for the first time, at which point a new one is issued. But this isn’t enough to prevent an attacker from exploiting and abusing your backend, no matter whether you use cookies or token-based authentication/authorization.
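One way to read that per-request rotation, sketched against the session store from earlier in the thread; the module and function names are placeholders:

```elixir
defmodule MyApp.RotatingTokens do
  @moduledoc """
  Sketch of per-request token rotation: every accepted token is consumed and
  replaced, so a stolen copy stops working after its first use.
  """
  alias MyApp.Sessions

  # Validate the presented token, delete it, and hand back a fresh one.
  def exchange(token) do
    case Sessions.fetch(token) do
      nil ->
        {:error, :invalid_or_expired}

      session ->
        Sessions.delete(token)
        {:ok, session.data, Sessions.create(session.data)}
    end
  end
end
```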

For traditional web app requests (no JavaScript) I prefer to use cookies with the HttpOnly flag set, so that JavaScript cannot access them, and the Secure flag set, so that they are only sent over HTTPS connections. I also harden their usage with other flags, which you can read more about in the Mozilla docs at Restrict access to cookies.
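For completeness, roughly how those flags look when set from Plug; the key names and values are illustrative, and the Plug.Session options are an assumption based on its cookie options:

```elixir
# A hardened cookie-backed session in the endpoint:
plug Plug.Session,
  store: :cookie,
  key: "_my_app_session",
  signing_salt: "signing salt",
  http_only: true,   # not readable from JavaScript
  secure: true,      # only sent over HTTPS
  same_site: "Lax"   # limits cross-site sending

# Or setting an individual cookie by hand:
conn =
  Plug.Conn.put_resp_cookie(conn, "session_id", session_id,
    http_only: true,
    secure: true,
    same_site: "Strict",
    max_age: 60 * 60 * 24
  )
```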

For mobile apps you are better off using some secret, in whatever form you choose, like a JWT or a hashed string. Cookies can be used in mobile apps but they will not give you the same guarantees as in a browser.

Keeping a secret private in the client is where things become very tricky, because in a web app you just hit F12 to open the developer console and then search for and extract it. For a mobile app there are a lot of open source tools and methodologies to help you extract secrets used to access backends.

This morning I replied to a question on Stack Overflow where I go into more detail on the steps to secure a secret in a mobile app:

In my reply I address the following topics:

How to Extract an API key from a Mobile App with Static Binary Analysis:

The range of open source tools available for reverse engineering is huge, and we really can’t scratch the surface of this topic in this article; instead we will focus on using the Mobile Security Framework (MobSF) to demonstrate how to reverse engineer the APK of our mobile app. MobSF is a collection of open source tools that present their results in an attractive dashboard, but the same tools used under the hood within MobSF and elsewhere can be used individually to achieve the same results.

During this article we will use the Android Hide Secrets research repository that is a dummy mobile app with API keys hidden using several different techniques.

Securing HTTPS with Certificate Pinning:

In order to demonstrate how to use certificate pinning for protecting the HTTPS traffic between your mobile app and your API server, we will use the same Currency Converter Demo mobile app that I used in the previous article.

In this article we will learn what certificate pinning is, when to use it, how to implement it in an Android app, and how it can prevent a MitM attack.

In this article you will learn how to use the free Mobile Certificate Pinning Generator tool to easily generate your Android and iOS configurations.

How to Bypass Certificate Pinning with Frida on an Android App:

Today I will show how to use the Frida instrumentation framework to hook into the mobile app at runtime and instrument the code in order to perform a successful MitM attack even when the mobile app has implemented certificate pinning.

Bypassing certificate pinning is not too hard, just a little laborious, and allows an attacker to understand in detail how a mobile app communicates with its API, and then use that same knowledge to automate attacks or build other services around it.

Frida is a very powerful tool, and when used by a skilled attacker it even allows hooking into your app code to extract any secret from it, without the need to disable pinning to perform the MitM attack. The attacker only needs to figure out the name of the function that uses or retrieves the secret in order to hook into it at runtime and extract that secret. To find the name of the function the attacker will statically reverse engineer the mobile app binary and read your source code, even if the code is obfuscated.

Hands-on Mobile App and API Security - Runtime Secrets Protection

In a previous article we saw how to protect API keys by using Mobile App Attestation and delegating the API requests to a Proxy. This blog post will cover the situation where you can’t delegate the API requests to the Proxy, but where you want to remove the API keys (secrets) from being hard-coded in your mobile app to mitigate against the use of static binary analysis and/or runtime instrumentation techniques to extract those secrets.

We will show how to have your secrets dynamically delivered to genuine and unmodified versions of your mobile app, that are not under attack, by using Mobile App Attestation to secure the just-in-time runtime secret delivery. We will demonstrate how to achieve this with the same Astropiks mobile app from the previous article. The app uses NASA’s picture of the day API to retrieve images and descriptions, which requires a registered API key that will be initially hard-coded into the app.

At the end of the day your backend needs to know with a very high degree of confidence WHAT is doing the request, not only WHO is in the request. Think of the who as the user on whose behalf the request is being made, and of the what as the thing issuing the request: is it a genuine and unmodified version of your app, or is the request being made by a bot, a tool like Postman, a cURL request, etc.? For your backend to be protected against being abused and exploited it needs to have a very high degree of confidence that requests come only from trusted clients, aka your app, not from any other origin, otherwise it is like closing your home’s doors and leaving the windows open.

If you want to learn more about API and Mobile security I invite you to read some of my answers on Stackoverflow:

7 Likes

can you expand briefly on that one? I’m curious

So in the case where you want some API to be used by legit mobile clients only, this is a problem. But if the API is open to web requests (like a react or vue app) then what would you use?

For such web apps I generally use cookies. I don’t understand why some people bother with tokens and the like, since having a traditional session is so easy anyway. (Yes, it will not be “pure” REST, but who cares – it’s an app, not just an API. And it’s been a long time, but IIRC there are options to let fetch/XHR requests send session cookies even when they are HTTP-only.) In that case, would you use the cookie in the mobile app as well?

Browsers are the ones in charge of sending the cookies, based on how you configured them, as per the link I shared about Restrict access to cookies, but in a mobile app it would be the developer’s responsibility to manage cookies, or to have a library do so.

From my limited knowledge of browser internals, this would be a violation of how browser security should work. Can you point me to some documentation on this?

Ah, I think I get you now. If the JavaScript is from the same origin then you can configure it to send the cookies, but if it isn’t, the browser cannot send the cookies, otherwise it would break the fundamental security of how this was designed to work.

As I mentioned in my answer you can use whatever you want; the problem is keeping it from being extracted and reused outside the original client, the web or mobile app. Also, if you use cookies in a mobile app, how will you establish trust on the very first API request? In other words, how would you know that what is making the request is indeed the genuine and unmodified client of your backend?

It’s always a problem, no matter whether the backend is only for web, only for mobile, or for both. Backends are blind when it comes to attesting with a high degree of confidence that a request is indeed from what they expect: a genuine and unmodified app that the backend is allowed to serve requests for.

For mobile apps

I recommend you read this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Hardening and Shielding the Mobile App, Securing the API Server and A Possible Better Solution.

For web apps

You can learn some useful techniques to help your API backend to try to respond only to requests coming from what you expect, your genuine web app, and to do so I invite you to read my answer to the question Secure api data from calls out of the app, especially the section dedicated to Defending the API Server.

2 Likes

Same origin, yes, that rings a bell.

Well in that case we do not care since the same API is available for the browser, so it is actually available for anything. In that specific case there is no need to validate the what. But for practical reasons it would be simpler to use the same mechanism as in the browser, so a cookie I guess.

Edit: I did not see the links, I’ll go read those.

Bear in mind that what I will say next is not targeted at you as an individual; it is rather a common situation I experience on a recurring basis due to my profession.

I am used to hearing this type of reasoning, but it falls apart when the backend comes under attack and causes financial and reputational damage to the business, and in some cases huge fines from regulators. In this situation the entire security needs to be revised in a hurry, devs scramble to find a solution, and sometimes they have serious difficulties with the implementation due to how everything was designed. Of course, this isn’t an issue for backends that serve public data, like Wikipedia.

I cannot blame developers for not knowing better, because I was in the same position 4 years ago, and this is due to the lack of security education in our careers.

1 Like

But is that true for all browsers though? What if I use a less secure browser – for example, what if I use the browser inside TikTok?

I’m not sure what the security implications of that are, as it’s only TikTok that can intercept those and not an outside attacker, I hope?

If the users are stupid enough to log in through the “browser” from within Facebook, TikTok or WeChat, there is nothing we can do. On the other hand, I don’t think it is right to sanction clients. Mobile web is being damned.

I think your question is about browsers sending the cookies and respecting or not the flags set on them. I only linked to Mozilla because they have very good docs, but any browser compliant with the spec should do exactly the same.

Trying to properly secure a mobile app that is a wrapper around a web app is condemned to failure. Securing the browser shipped inside a mobile app from being spied on by the app that ships it is outside my knowledge.

TikTok can do a lot of things, they literally monitor each keystroke on your phone and send it to their backend, thus guess what that means for you. You can search the web for tiktok monitor key strokes and take your own judgement. Tiktok ass a track record on the security community of being the privacy nightmare for their users, and this is just being gentle.

About outside attackers: they just need to perform a MitM attack to intercept and manipulate the HTTPS channel between a mobile app and its backend. Learn how to do it yourself in this tutorial I wrote:

Performing a MitM attack against an HTTPS channel requires the attacker to be able to add the proxy server’s Certificate Authority (CA) to the trust store of the device running the mobile app. A popular approach is to manually upload the CA to the device, but this comes with some challenges and may require rooting the device and/or repackaging the mobile app.

An easier way exists, and in this article I will show how to use an Android Emulator with a writable file system that will allow us to install the proxy certificate directly into the system trusted store, without the need to root the emulator or make changes in the mobile app.

That’s officially the most hilarious typo on this forum. :003:

6 Likes

I think I’m still struggling with what we’re solving exactly…

Let’s say we’re trying to protect the user from getting their data stolen or tampered with. In that case we could say: well, you used an insecure browser, so we cannot protect you… We gave you a token, you received it in an insecure environment and now someone stole it.
This scenario is not fun for us or for the client, but it’s only 1 impacted user.

But when we have the scenario as above, but now the user is an admin user who has access to everything, then suddenly the breach is way more impactful.
In this scenario we can’t say to our admin user, you should’ve used a secure browser, because we’re impacted heavily as well.

So if the user has a breached device, or uses insecure browsers, then the cookie is no guarantee to stop bad actors?

I guess my question is, should the above sentence be “they won’t give the same guarantee as a secure browser”? And if that’s the case, is it worth looking at other solutions for browsers as well, because we cannot know which browser a user will use? Or am I overthinking this?

Just as an additional note:

The refresh/access token distinction is not to save you from the tokens getting stolen.

It is for the case where the user loses their account on the authentication server.

Let’s say you have a company and issue your access tokens with a lifetime of a couple of minutes and the refresh token with a lifetime of a month.

A worker leaves after a week, but still has a valid refresh token. Now that the account has been deactivated on the authentication server, they cannot use the refresh token anymore to get a valid access token, despite the fact that the refresh token has not expired.

The same technique is used when you hit “log me out from all devices” in Facebook or similar services. The long-lived refresh token gets revoked by the auth server and is no longer accepted when asking for a new access token.

These dual tokens are necessary to avoid the consumer having to ping the authentication server again and again, for every request, asking whether the authenticated user is still authentic – or, even worse, assuming authenticity for a very long time…
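A hedged sketch of that refresh flow; the MyApp.RefreshTokens and MyApp.Accounts modules are placeholders, and the access token reuses the Phoenix.Token approach from earlier in the thread:

```elixir
defmodule MyApp.Auth do
  @moduledoc """
  Sketch of the refresh/access split: short-lived access tokens are verified
  statelessly, while the long-lived refresh token is only honoured if it has
  not been revoked and the account is still active.
  """

  @access_ttl 5 * 60 # "a couple of minutes"

  # Exchange a refresh token for a fresh access token.
  def refresh(refresh_token) do
    with {:ok, stored} <- MyApp.RefreshTokens.fetch(refresh_token),    # not revoked or expired
         {:ok, user} <- MyApp.Accounts.fetch_active(stored.user_id) do # account not deactivated
      {:ok, Phoenix.Token.sign(MyAppWeb.Endpoint, "access", user.id)}
    else
      _ -> {:error, :unauthorized}
    end
  end

  # "Log me out from all devices": revoke every refresh token for the user.
  def revoke_all(user_id), do: MyApp.RefreshTokens.delete_all(user_id)

  # Consumers verify access tokens locally, without pinging the auth server.
  def verify_access(access_token) do
    Phoenix.Token.verify(MyAppWeb.Endpoint, "access", access_token, max_age: @access_ttl)
  end
end
```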

3 Likes