First of all, sorry if I’m asking a very silly question that may already be answered in the documentation.
I’m planning to evaluate Phoenix in a micro-frontends architecture.
I’ve started checking the free Phoenix LiveView tutorial at Phoenix LiveView Free Course | The Pragmatic Studio.
From what I understood, Phoenix creates a stateful LiveView process on the backend that updates small chunks of the page state over a WebSocket connection.
That sounds like a great idea, but my concern is whether I can use Phoenix with containers that are deployed in a Continuous Delivery fashion. Would that play well with the stateful LiveView processes?
I’m thinking of a deployment pipeline where a new image is built and deployed in a new container, and afterwards the current container is swapped out for the new one. I’m afraid that would effectively lose the state held in the current container. Or is it possible to copy it over to the new container?
Perhaps I’m getting this all wrong and Phoenix is not supposed to be used with containers, or maybe one should simply accept losing the state when swapping.
Or is there some other way to use Phoenix in a containerized CI/CD environment?
I’m not sure how Elixir and Phoenix differ from other languages in this regard. You have state you can keep in memory, just as other languages use variables and other mechanisms to store it, and you can have a persistence layer, for example Postgres.
If you are asking about persisting runtime state between deploys, then yes, that is possible; however, the complexity involved is not worth it unless it is a hard requirement. By default, when you deploy/restart the server, the frontend client reconnects and reloads the page. Any runtime state held there (for example, half-completed forms) is lost, while data persisted in the DB is simply refetched and served to the client again.
@D4no0 I think the kinds of applications I’m planning to develop will persist their data in a database anyway. I understand that persisted data will not be a problem to fetch once the new container is swapped in, but the data in the LiveView process will be lost.
I’m starting to wonder, then, when it could be useful to store data in the LiveView process that is not persisted in a DB.
I think the LiveView process starts to function like a “private cache” for the session. In single-instance applications in other languages this is solved with a cache singleton object, whereas in containerized applications it is solved with a Redis node that all containers talk to, from what I have grasped.
I see one advantage to keeping data in the LiveView process: it avoids re-fetching persisted data from the DB, just like a cache, though a very smart one.
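To make that "assigns as a per-session cache" idea concrete, here is a minimal LiveView sketch. The module name, the MyApp.Catalog context, and list_products/0 are assumptions for illustration, not from the original thread:

```elixir
defmodule MyAppWeb.ProductsLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    # Fetched from the DB once when the LiveView starts; the result
    # lives in the process's assigns and acts like a per-session cache.
    products = MyApp.Catalog.list_products()
    {:ok, assign(socket, products: products, filtered: products)}
  end

  def handle_event("filter", %{"q" => q}, socket) do
    # Later events can work on the in-memory copy instead of
    # hitting the database again.
    filtered = Enum.filter(socket.assigns.products, &String.contains?(&1.name, q))
    {:noreply, assign(socket, filtered: filtered)}
  end
end
```

Of course, this "cache" lives and dies with the process, which is exactly why it disappears when the container is swapped.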
@al2o3cr Yes Matt, I think this is a generic problem with containers. I wonder as well whether Phoenix has out-of-the-box support for it. Does it attempt to reconnect to the new container automatically, or is that something one needs to handle on one’s own?
@arcanemachine Sorry, no, I did not mean that storage layer, but I see it could be an alternative to moving data across containers. From what I know, OTP already ships with the distributed database Mnesia, which can keep data in memory and persist it as well. In that case, why keep data in LiveView processes when I could use Mnesia?
Like I wrote earlier, I’m starting to learn Phoenix and LiveView and I think it’s a great framework. It’s really interesting that it can serve millions of users.
I’m starting to think that maybe containers aren’t even necessary if a Phoenix application scales so well out of the box. The container solution is more about providing elasticity, creating and dropping resources on demand. That might not be necessary, though.
Hmm, I think the previous posts didn’t mention that LiveView has a built-in mechanism to recover form state, which I think is exactly what you are looking for:
TL;DR: give your forms an id and a phx-change binding, and the framework will automatically get things back on track once the “old container” is gone, as the client resends its state to the “new container”.
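As a sketch, the markup side of this looks roughly like the following HEEx fragment. The "signup-form" id, the "validate"/"save" event names, and the @form assign are made-up placeholders; the relevant parts for recovery are just the id and phx-change:

```elixir
# Hypothetical HEEx template fragment (inside a LiveView's render/1).
# Because the form has an id and a phx-change binding, LiveView replays
# the client's latest change event to the new server process after a
# reconnect, restoring the form state.
~H"""
<.form for={@form} id="signup-form" phx-change="validate" phx-submit="save">
  <.input field={@form[:email]} type="email" label="Email" />
  <button>Save</button>
</.form>
"""
```

There is also a phx-auto-recover binding if you want a dedicated recovery event instead of replaying the regular change event.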
I was not aware of this attribute, but I will check the link. I see now that when I shut down the container, a toast pops up saying the client is trying to reconnect to the server. That’s great; I don’t need to worry about reconnecting the client.
The toast is customizable; it lives in your project code in core_components.ex. Look for def flash and def flash_group in that file to see how it is implemented.
You can change both how it looks and how it behaves.
There’s a thread here on this forum where we’re discussing how to delay this “trying to reconnect” message, to avoid it quickly flashing in and out during a redeploy or when a tab wakes up from sleep.
I now recall the main point of this thread, so I wanted to add that while Phoenix and LiveView can fit such an environment, you may end up in a position where some useful things become harder to use.
I understand that by “micro-frontends architecture” you mean that different parts of a single user-facing webpage are composed from multiple frontend projects, possibly using different stacks, not relying on iframes but rather sharing a single DOM. Is that what you mean?
In that case, unless Phoenix is driving the outer shell of the app, a few things off the top of my head become harder to integrate:
Consistent notifications such as those flash messages
Presence and PubSub
Backend controlled SPA-like navigation
Possibly more
That, plus the complexity of managing the multiple frontend projects. But then you probably have some other driving factors for opting into that architecture?
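To illustrate the PubSub point: when Phoenix drives the page, a LiveView can subscribe to a topic and receive broadcasts as ordinary process messages, as in this minimal sketch (MyApp.PubSub is the conventional name of the PubSub server in a generated app; the "orders" topic and message shape are assumptions):

```elixir
# Inside a LiveView module. Subscribe only on the connected mount,
# then handle broadcasts like any other process message.
def mount(_params, _session, socket) do
  if connected?(socket) do
    Phoenix.PubSub.subscribe(MyApp.PubSub, "orders")
  end

  {:ok, assign(socket, orders: [])}
end

def handle_info({:order_created, order}, socket) do
  # A broadcast from elsewhere in the app arrives here and the
  # LiveView pushes the diff to the browser automatically.
  {:noreply, update(socket, :orders, &[order | &1])}
end
```

With a non-Phoenix outer shell, you would have to bridge these server-pushed updates to the other frontends yourself, which is where the integration friction comes from.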