Best way to handle multiple authentication for live_view app?

I’m looking for recommendations on best practices for handling different levels of authentication within a LiveView app.

I’d like to hit the Postgres DB one time, then store the session in Redis with a time-to-live. But how do you keep all the components authorized in the proper way?

How would you do it for a non-LiveView app? That’s your answer.
Just pass the auth along on all requests to the data layer, and have the data layer do row-level checks (or whatever) based on what your auth says.
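One way to sketch “the data layer does the row-level check”: every context function takes the authenticated user and scopes its own query. The module and schema names here (`MyApp.Blog`, `Post`) are made up for illustration.

```elixir
defmodule MyApp.Blog do
  import Ecto.Query

  # Admins see everything.
  def list_posts(%{role: :admin}), do: MyApp.Repo.all(Post)

  # Everyone else gets rows scoped to their own user id,
  # so authorization lives in the data layer itself.
  def list_posts(%{id: user_id}) do
    from(p in Post, where: p.user_id == ^user_id)
    |> MyApp.Repo.all()
  end
end
```

Callers just pass whatever user struct the auth produced; there's no separate permission check to forget at the call site.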

Given how LiveView works, do you really need Redis or anything except the DB? You hit the DB so much less, because you don’t need to do authorization for every request, unlike a traditional web site/API. I haven’t used Phoenix 1.6 yet, but with its changes around live sessions, I think you need to check authorization even less.


Perhaps I’m confused. I assumed that if you had four parts of the app, each needing a different type of authentication, then each part within the LiveView would need to be checked every time. Or does the system cache these results? It would be impossible to scale this up with millions of users all hitting the DB.

If I use Pow, it has a Mnesia cache that is super fast, so you’re correct, nothing else is needed.

Don’t you store the user/user_profile/user_roles (or whatever you’re pulling from the DB) in the session after you get it the first time, and use that everywhere?


Millions of concurrent (online) users? Wow, I don’t know anything about that scale. What do you mean by four parts? Do you mean you have pages (areas) for normal users, admin pages, etc.?

LiveView works differently from traditional web pages and REST/GraphQL APIs. In LiveView, authorization is only done when the WebSocket connects, and with the changes in Phoenix 1.6, as I understand it, it’s only done again if you move between pages that have different permission levels. So, for example, if you first go to pages meant for normal users, you authorize once; as you move around the user pages, no new authorization is needed, but when you go to an admin page, you are authorized again.
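A minimal sketch of the Phoenix 1.6 behaviour described above, using `live_session` with an `on_mount` hook (module names like `MyAppWeb.UserAuth` are assumptions). Navigating between LiveViews inside the same `live_session` group reuses the connection, so the hook’s DB lookup runs once per group, not once per page:

```elixir
# router.ex: two groups with different permission levels.
live_session :user, on_mount: MyAppWeb.UserAuth do
  live "/dashboard", DashboardLive
  live "/settings", SettingsLive
end

live_session :admin, on_mount: MyAppWeb.AdminAuth do
  live "/admin", AdminLive
end
```

The hook itself might look like this, loading the user once from the session data that the plug pipeline stored at login:

```elixir
defmodule MyAppWeb.UserAuth do
  import Phoenix.LiveView

  # Runs when the socket mounts; one DB hit per connected session group.
  def on_mount(:default, _params, %{"user_id" => user_id}, socket) do
    {:cont, assign(socket, :current_user, MyApp.Accounts.get_user!(user_id))}
  end

  # No user in the session: halt and send them to log in.
  def on_mount(:default, _params, _session, socket) do
    {:halt, redirect(socket, to: "/login")}
  end
end
```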

If you can’t handle the low number of DB calls you need for authorization, then you’ll have other problems, because it’s going to be VERY little compared to the other DB calls you’ll be making.

We’re looking at citrus for the Postgres backend; it’s scalable, with linear scalability.

But anything we can take off the Postgres database just makes sense, if we can do it in a smaller Redis database, and likely faster.

Citrus? You probably mean Citus, right?

But you have the authorization data in the database, not just in Redis, right? The data in Redis could time out like you said, and then you need to read from the DB and put fresh data into Redis. It’s fine if you want to do it, but because of the way LiveView works, it feels to me like extra work for almost nothing, since you have to do it so infrequently. The only negative I can think of is that when you do need to hit the DB for authorization, it’s going to be slower: first you have to check Redis, then you load the data from the DB and put it into Redis. Personally, I would only add Redis or other caching when something actually needs it.
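For concreteness, the read-through pattern being debated looks roughly like this. This is a sketch assuming the Redix client and a hypothetical `MyApp.Accounts.get_user_roles/1`; note how the miss path pays a Redis round-trip *plus* the DB query, which is the extra latency mentioned above:

```elixir
defmodule MyApp.AuthCache do
  @ttl_seconds 300

  def get_roles(user_id) do
    key = "roles:#{user_id}"

    case Redix.command(:redix, ["GET", key]) do
      {:ok, nil} ->
        # Cache miss: hit Postgres, then populate Redis with a TTL.
        roles = MyApp.Accounts.get_user_roles(user_id)

        Redix.command(
          :redix,
          ["SET", key, :erlang.term_to_binary(roles), "EX", @ttl_seconds]
        )

        roles

      {:ok, bin} ->
        # Cache hit: served from Redis, no DB call.
        :erlang.binary_to_term(bin)
    end
  end
end
```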

I don’t think I have anything left to say in this matter. It’s up to you to decide if you are going to use caching now or wait and see if it is an actual performance issue.

Yeah, about the typo: I was driving and using voice-to-text. Safety first.