Double render I knew about all along, but the double database reads probably did get lost in the barrage of database reads I do anyway.
Right, you mentioned mount/2 but I was looking at mount/3, more specifically at this excerpt from what I recall to be a largely generated user_settings LiveView, the MyApp.UserSettingsLive module.
def mount(%{"token" => token}, _session, socket) do
  socket =
    case Accounts.update_user_email(socket.assigns.current_user, token) do
      :ok ->
        put_flash(socket, :info, "Email changed successfully.")

      :error ->
        put_flash(socket, :error, "Email change link is invalid or it has expired.")
    end

  {:ok, push_navigate(socket, to: ~p"/users/settings")}
end

def mount(_params, _session, socket) do
  user = socket.assigns.current_user
  email_changeset = Accounts.change_user_email(user)
  extras_changeset = Accounts.change_user_extras(user)
  password_changeset = Accounts.change_user_password(user)

  socket =
    socket
    |> assign(:current_password, nil)
    |> assign(:email_form_current_password, nil)
    |> assign(:extras_form_current_password, nil)
    |> assign(:current_email, user.email)
    |> assign(:email_form, to_form(email_changeset))
    |> assign(:extras_form, to_form(extras_changeset))
    |> assign(:password_form, to_form(password_changeset))
    |> assign(:trigger_submit, false)

  {:ok, socket}
end
Despite the /2 vs /3 mix-up, it seems we are talking about the same function after all, so I’ll assume we are.
Now, I’ve acknowledged from the start that I may well have missed the double database queries in the logs. Plus I’ve read about that life cycle countless times online as well as in the evolving Programming LiveView book I bought long ago. I have no specific issue with the double load at all, and I’ve now added some debug statements into both mount/3 heads, but only managed to get the second one called, twice as expected. I have no idea when the first would match. I was thinking it would be during the web socket setup, which does set that parameter, but I wasn’t able to catch it in the act just yet. That would, however, be a third call to mount/3 if I still count correctly.
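For what it’s worth, the debug statement I’m using to tell the two invocations of the second head apart is along these lines (connected?/1 is LiveView’s own helper; the label is just for my logs):

```elixir
# Debug sketch: connected?/1 should return false during the disconnected
# (plain HTTP) render and true once the WebSocket-backed process mounts.
def mount(_params, _session, socket) do
  IO.inspect(Phoenix.LiveView.connected?(socket), label: "mount/3 connected?")
  # ... rest of the mount head as above ...
end
```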
Still, I’ve not been able to close the gap in my understanding. If the HTTP request that gets upgraded can be served from another pod, cool. If during the upgrade the other pod is involved, cool, cool, but how? If the original process responding to the original request is abandoned/forgotten and the whole LiveView conversation carries on with the process on the pod that answered the second HTTP request, cool, but when and how are the resources the original process held, in anticipation of the web socket landing back there, released and cleaned up? If the original process on the original pod doesn’t keep any resources but just terminates, could I see how it knows to do that?
It’s not in production yet, but in a pipelined version I am doing a lot more “prep” work per user at the start of a new session: expensive database operations I need for the initial render but would do well to avoid repeating straight away for the second render. To make smarter choices about that, though, I would need a much better grasp of what my options are, why the default LiveView behaviour doesn’t do something along the same lines, and consequently what I’m getting myself into if I want the state loaded during the first render to become available to the second render.
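The kind of thing I’m considering is a sketch along these lines, assuming LiveView’s connected?/1; expensive_prep/1 is a stand-in for my own prep work, not real code:

```elixir
# Hypothetical sketch: skip the expensive load on the disconnected (HTTP)
# render and only do it once the WebSocket-backed mount runs, so the work
# happens once instead of twice. Names are illustrative.
def mount(_params, _session, socket) do
  if connected?(socket) do
    {:ok, assign(socket, :prep, expensive_prep(socket.assigns.current_user))}
  else
    # First render: show a lightweight placeholder instead of repeating the work.
    {:ok, assign(socket, :prep, nil)}
  end
end
```

Whether this plays well with two pods, where the second mount may not even run on the machine that served the first render, is exactly the part I don’t yet understand.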
I’m not there yet either; that’s still down the line. Right now I am still a little stuck finding correlation between what I expect to see based on the load balancing rules in haproxy and the combined logs of the pods in a clustered deployment. The most confusing bit is that I’ve yet to see enough consistency in what gets served where, and from what point onwards, to be able to say whether it’s the pod reporting the websocket handshake or the pod serving the LiveView that ends up with all the web socket traffic from that point onwards.
For one, I don’t see (in the logs) two requests to the url (in this case it would be /users/settings); I see only one. And I don’t ever see a request logged (to the Phoenix console) to the /live url from the code and the documents. I’ve assumed that the “[info] CONNECTED TO Phoenix.LiveView.Socket in 24µs” message was logged by the endpoint defined for /live in endpoint.ex in lieu of the normal logging that prints out the route/path/url. That might be at the heart of my confusion, who knows. I’m just trying to make sense of what I’m seeing so I can confirm if and when the rules and behaviour I am setting up are working as planned or not.

Remember that the original reason I went down this rabbit hole was that one of the pods disappeared while a web socket was open. The loss was picked up straight away by the client (which lost its connection) and, by the looks of it, by the server and its process management, but the software that served as load balancer at that point (and still does, actually, until I can roll out a fix) was the last to learn about the demise of the pod, insisted on sending traffic to it, and got forced into sending a bad gateway error to the client trying to reconnect. I’m assuming responsibility for this and not blaming it on a bug or bad behaviour or such. The whole clustered deployment was entirely my own initiative and I’d be surprised if this was the only mess I made in the process.

But I need to sort it out, which unfortunately means having to keep pestering people more clever and better informed than myself, because reading the raw code is not as definitive to me as it should be. I’m not really a programmer, nor a network engineer. I used to program in my youth and spent most of my career as a systems architect, but I’m really an inventor building a system to do something every single person I’ve mentioned it to concluded was impossible to do. So yeah, I’m in way over my head and not ashamed to admit it.
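For reference, my assumption about where /live comes from is the generated socket declaration in endpoint.ex, which in my app looks roughly like this (options may differ per app; I’m quoting from memory):

```elixir
# Typical generated entry in endpoint.ex: mounts the LiveView socket at /live,
# which is what the JS client connects to for the WebSocket upgrade.
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [session: @session_options]]
```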
That’s why I need all the help I can get interpreting what the code actually does and how that matches up with what it sets out to do in support of what I intend to do.
Thanks again.
P.S. I finally managed to get the first mount/3 head to activate. Turns out the token referred to there is not the csrf_token I saw mentioned in the web socket setup call in the JS, but the actual user token generated by the auth system to authenticate the request that arrives via the url sent to the user. I just happened to pick a test function that didn’t follow the usual pattern, which I suppose is to have only one head for the mount/3 function and no mention of the token put in the request by the JS. Sorry about that.