LiveView unmount/terminate callback

Hi all, I’m wondering how one would implement an unmount/terminate callback in a LiveView?

Right now I am simulating this with Phoenix Presence, but it feels like a smelly solution: it requires me to store data in Presence that is already in each user’s socket.

Instead of checking presence_diff to see if someone has left, and then inspecting their Presence data for something I put there (which is already part of their socket), an unmount or “after leave” callback for the LiveView could just take their socket and broadcast the changes to the other users.
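
For context, the workaround looks roughly like this (a sketch; unlock_cell/2 and the metadata keys are my own names):

```elixir
# Sketch of the current Presence workaround. On presence_diff, inspect the
# metas of each user who left and unlock whatever cell their Presence
# metadata says they were editing. unlock_cell/2 is a hypothetical helper.
def handle_info(%{event: "presence_diff", payload: %{leaves: leaves}}, socket) do
  socket =
    Enum.reduce(leaves, socket, fn {_user_id, %{metas: metas}}, socket ->
      case Enum.find(metas, & &1[:editing_input]) do
        %{input_id: input_id} -> unlock_cell(socket, input_id)
        nil -> socket
      end
    end)

  {:noreply, socket}
end
```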

e.g. What I am building is like a multi-user spreadsheet, where only one user can edit a given cell at a time, so I need to unlock the cell if someone disconnects or leaves (closing the browser tab or disconnecting won’t trigger phx_blur).

I’m sure there are other use cases, but generally I feel LiveView is lacking a simple way to tap into the channel it’s already using. This post has some creative workarounds: Doing JS events in LiveView

But these are workarounds, and fairly convoluted ones. I would love to see Phoenix LiveView support this out of the box.

The best way to handle a LV going away is the same way you handle any process going away: have another process monitor it, then react to the :DOWN message that is received. For your use case, you need to be careful in a multi-node setup, because your monitors will be local-node only; if you are relying on global locks, you’ll need to put this global info somewhere and have the monitors read/write to it. Note that Presence is eventually consistent, so it does not support the global-lock use case. Hope that helps!
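
In sketch form, that could look something like this (module and helper names are illustrative, not a blessed API): the LiveView registers itself on mount, and the monitor reacts to the :DOWN message when the LiveView process dies.

```elixir
defmodule MyApp.CellLockMonitor do
  use GenServer

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  # Called from the LiveView's mount/3; `lv_pid` is the LiveView's self().
  def watch(lv_pid, user_id) do
    GenServer.cast(__MODULE__, {:watch, lv_pid, user_id})
  end

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast({:watch, pid, user_id}, state) do
    # Monitor the LiveView and remember which user it belongs to.
    ref = Process.monitor(pid)
    {:noreply, Map.put(state, ref, user_id)}
  end

  @impl true
  def handle_info({:DOWN, ref, :process, _pid, _reason}, state) do
    case Map.pop(state, ref) do
      {nil, state} ->
        {:noreply, state}

      {user_id, state} ->
        # Release this user's locks and notify the others
        # (MyApp.Spreadsheet.release_locks/1 is a hypothetical helper).
        MyApp.Spreadsheet.release_locks(user_id)
        {:noreply, state}
    end
  end
end
```

From the LiveView’s mount/3 you would then call something like `if connected?(socket), do: MyApp.CellLockMonitor.watch(self(), user.id)`.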


Thanks for the reply Chris. I hope my feedback doesn’t sound too negative; working with LV is a game changer, and I know it’s still early in development.

Do you have any examples of how I could create this monitoring process and react to the :DOWN message? :slight_smile:

Also, why do it this way instead of just being able to add a terminate function to the channel that LV is using?

Update 1

your monitors will be local-node only, so if you are relying on global locks, you’ll need to put this global info somewhere and have the monitors read/write to that. 

Right now I am using the database to store the fact that a cell is locked. I’m not sure this will scale, but it’s quick and easy to implement.
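
One thing worth doing if I stay with the database is making the lock acquisition atomic, so two users racing for the same cell can’t both win. A sketch, assuming a hypothetical Cell schema with a locked_by_id column:

```elixir
import Ecto.Query

# Race-safe lock: the where clause means the UPDATE only matches while the
# cell is unlocked, so a return of {1, _} means "we got the lock".
# MyApp.Spreadsheet.Cell and :locked_by_id are hypothetical names.
def try_lock_cell(cell_id, user_id) do
  {count, _} =
    from(c in MyApp.Spreadsheet.Cell,
      where: c.id == ^cell_id and is_nil(c.locked_by_id)
    )
    |> MyApp.Repo.update_all(set: [locked_by_id: user_id])

  count == 1
end
```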

Note that presence is eventually consistent so it does not support the global lock usecase.

This is interesting; I’m not sure exactly what you mean. With Presence I track each user along with (editing_input: boolean, input_id: integer) metadata to record whether the user is editing a cell. Then when a user leaves, I get the presence_diff and unlock the cell in the other users’ sockets (updating the DB is another matter, though…).
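
For reference, the tracking looks roughly like this (MyAppWeb.Presence being the usual `use Phoenix.Presence` module; the topic and metadata keys are mine):

```elixir
# On mount: track the user with "not editing" metadata.
MyAppWeb.Presence.track(self(), topic, to_string(user.id), %{
  editing_input: false,
  input_id: nil
})

# When the user focuses a cell: replace the metadata.
MyAppWeb.Presence.update(self(), topic, to_string(user.id), %{
  editing_input: true,
  input_id: input_id
})
```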

After spending hours going down a deep rabbit hole learning how OTP works (Agent, Task, Process, Supervisor, etc.), I wrote a GenServer to monitor the LiveView as @chrismccord suggested.

Then I realized something: you can just define a terminate(reason, socket) callback in the LiveView, and it will be called when the LiveView process exits, with the socket still available. It’s essentially like the unmount lifecycle in React (or the terminate callback in Phoenix.Channels… makes sense, huh).
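
For anyone landing here later, it looks like this (a sketch; the assign names and the unlock helper are mine, and note the caveat in the next reply that terminate is not guaranteed to run):

```elixir
# In the LiveView. Runs when the LiveView process shuts down cleanly,
# with the socket (and its assigns) still available.
def terminate(_reason, socket) do
  if input_id = socket.assigns[:editing_input_id] do
    # Hypothetical helper that unlocks the cell and broadcasts to others.
    MyApp.Spreadsheet.unlock_cell(input_id, socket.assigns.current_user.id)
  end

  :ok
end
```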

This is what I was looking for originally; if there’s a good reason to create an additional GenServer to monitor the LiveView instead, let me know. In any case, I might not need the GenServer now, but it was still useful to learn how GenServers work!


The terminate callback doesn’t necessarily get called; check out: There is no unmount callback on Phoenix.LiveView · Issue #123 · phoenixframework/phoenix_live_view · GitHub

It’s a general design principle in Erlang to put fault recovery and application logic in separate processes. It keeps the code focused on the problem, and it means you can minimize the part of your code that needs to be correct (the error kernel). Basically you can rely on the cleanup happening even if you mess up in the channel code :slight_smile:


Nice find! Thanks for sharing this. Marked as new solution.