LiveView calls mount two times

Hi, I have a problem: in mount I execute a query to the db and assign the results to the socket.

def mount(_params, session, socket) do
  socket = mount_user(socket, session)
  IO.puts("DB request")
  socket = mount_profile_changeset(socket)
  {:ok, assign(socket, %{menu_states: %{"personal" => "is-active"}, dropdown_states: %{}})}
end

def mount_profile_changeset(socket) do
  user_id = socket.assigns.user_id
  result = App.Profiles.get_profile_by_user_id(user_id)
  assign(socket, :changeset, result)
end

LiveView calls mount twice; in the logs I see two select queries.

I thought mount was invoked once per lifecycle. What could be the problem?

You can guard your code like this.

Inside the mount function, check whether the socket is already connected:

if connected?(socket), do: your code
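Applied to the mount from the original post, that guard could look like this (a sketch: `mount_user/2` and `mount_profile_changeset/1` come from the snippet above, and the `nil` default changeset is an assumption about what the static render can tolerate):

```elixir
def mount(_params, session, socket) do
  socket = mount_user(socket, session)

  # Only hit the database on the second (websocket) mount.
  socket =
    if connected?(socket) do
      mount_profile_changeset(socket)
    else
      assign(socket, :changeset, nil)
    end

  {:ok, assign(socket, %{menu_states: %{"personal" => "is-active"}, dropdown_states: %{}})}
end
```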
  1. Please always post text as text, never screenshots. Your screenshot is barely readable on my screen.

  2. It’s invoked once per cycle, but there are two cycles. The first is the GET request, which does a static render, and the next is the websocket-based request, which does the live render. They’re two entirely separate requests. This does not necessarily mean that using LiveView doubles your request load, though: if you use live_redirect to link between pages, additional page requests all go over the websocket and only need to happen once.
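For example, with the template helpers in that version of LiveView, a `live_redirect/2` link navigates over the already-open websocket instead of issuing a fresh GET, so the target LiveView mounts once, already connected (a sketch; the route and module names are made up):

```elixir
# Inside a LiveView template (~L sigil); MyAppWeb.ProfileLive is hypothetical.
<%= live_redirect "Profile", to: Routes.live_path(@socket, MyAppWeb.ProfileLive) %>
```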

5 Likes

Hello,
here is an excerpt of how I’m handling it (meaning it works for my needs):

  def mount(params, session, socket) do
    case connected?(socket) do
      true -> connected_mount(params, session, socket)
      false -> {:ok, assign(socket, page: "loading")}
    end
  end

  def connected_mount(_params, %{"page" => "dashboard" = page, "user_id" => user_id}, socket) do
    
    <** YOUR BIG SQL HERE THAT YOU WANT EXECUTED ONLY ONCE CONNECTED **>

    {:ok, assign(socket, 
    
       <** YOUR ASSIGNMENTS HERE **>
    
    )}
  end

  def connected_mount(_params, _session, socket) do
    {:ok, assign(socket, page: "error")}
  end


  def render(%{page: "loading"} = assigns) do
    ~L"<div>The page is loading, please wait...</div>"
  end

  def render(%{page: "error"} = assigns) do
    ~L"<div>An error occurred</div>"
  end

  def render(%{page: page} = assigns) do
    Phoenix.View.render(KandeskWeb.LiveView, "page_" <> page <> ".html", assigns)
  end

Please note that the LiveView is called from the router via:
live "/", IndexLive, session: %{"page" => "dashboard"}

Cheers,
Sébastien

7 Likes

Thanks!

thanks for explanations

This definitely works - thanks! But it seems a little bit like a “hack”. Is there a more official way around this issue? I’m using a component on a page that calls live_render inside the standard view, and I’m wondering if I have a subtle mistake in the way those are organized.

connected?/1 is an official function; it is the official way to check whether you are connected and to drive conditional logic from that.

2 Likes

As far as I understand it, this is not an “issue”.

The first render is useful to quickly serve a usable page (or not, depending on your needs, as in my example where I just push a “page loading” text) that will notably be visible to bots and before any JS is executed (SEO, etc.). On the second run, a server process is launched and kept alive for the duration of the LiveView to track updates and push diffs to the client.

The full life-cycle is explained here:

I don’t mean to imply “bug” by saying “issue”. The issue is that you need to know, somehow, whether you’re on the first run or the second in order to do anything differently in mount, and my question is whether this is the intended way to do that (realizing that there may not be an “official” way). In my app (which I think should be a common case) the mount process is expensive, and I only want to do it once, although it works fine if I just naively let it run twice.

If you’d like a walk-through of the process itself I heartily recommend this video, which is part of a liveview course: https://pragmaticstudio.com/tutorials/the-life-cycle-of-a-phoenix-liveview

connected?/1 is indeed the way you’d defer expensive work on the dead render, at the cost of empty containers or loading states for crawlers and friends. In general, most folks will be fine ignoring the distinction between disconnected and connected mount, but doing this check is absolutely fine for the special case.

3 Likes

@chrismccord do you think there will ever be a way to pass content between the two phases so you don’t have to do a double db fetch? Or should we just stash it in an ets cache, ha

If the connected?/1 idea is preferred, would it be easier to have
def mount(params, session, socket) and
def connected_mount(params, session, socket)?

How about applications that do not care about SEO at all (like using LiveView for the admin interface of an app) and that do not need that first quick HTML render (because the page is only really interactive when connected)?

It would be nice to have a way to run code only when connected without systematically checking connected?/1 application-wide. Then one could just put those db calls in mount without having to ensure they are not run twice through a check.

The static render is not primarily for SEO. Proper SEO is just so prominent because SPAs basically made it a problem in the first place.

Much more importantly: a browser cannot connect to a websocket immediately. It needs to make a plain HTTP request first, which returns HTML linking some JS, which can then attempt to connect to a websocket endpoint. So by definition there have to be two connection attempts to your server, and on the HTTP connection you don’t yet know whether the client is even able to make the second, websocket connection.

Given Phoenix cannot know exactly how you want to deal with the websocket connection failing, it just gives users an API that lets them implement it however they need: connected?/1. You can decide what you render in the static render vs. any connected renders, which is not just the second one, but any fresh websocket connection made (think about reconnects).

That’s the part about failure modes.

Now the other fact people often don’t think about is the case of everything going according to plan. I initially talked about the browser not being able to connect to websockets before having the necessary JS. This only affects the very first request a user makes in a given session. After that, the JS is in the browser and navigation can happen completely over websockets (live redirects and patching), thereby completely skipping the static render. The new stuff is directly mounted in connected mode over websockets. This means you might not even need to be so concerned about db requests being made twice: unless the page is commonly hit at the start of a user session, it might not even be rendered statically all that often in the first place.

As for the code concern: if you use the same code over and over to do one thing for the static render and some other custom thing only when connected, nothing prevents you from abstracting that into helpers, maybe even an additional behaviour with different callbacks plus a macro for the boilerplate. There are ways of DRYing this up, if you truly handle each case the same way.
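As a sketch of that DRYing idea (module and function names are invented, not LiveView API), a `__using__` macro could supply the boilerplate `mount/3` and delegate to a per-module `connected_mount/3`:

```elixir
defmodule MyAppWeb.ConnectedMount do
  @moduledoc "Hypothetical helper: `use MyAppWeb.ConnectedMount` in a LiveView."

  defmacro __using__(_opts) do
    quote do
      def mount(params, session, socket) do
        if Phoenix.LiveView.connected?(socket) do
          # Each LiveView defines its own connected_mount/3.
          connected_mount(params, session, socket)
        else
          # Static render: just show a loading page, no expensive queries.
          {:ok, Phoenix.LiveView.assign(socket, page: "loading")}
        end
      end
    end
  end
end
```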

5 Likes

I do something similar, but have taken it one step further: I put the “bare” mount/3 in a macro in a module, and use that from all my LiveViews, providing a set of default bindings. It also does the checks for authentication, so I have both a connected and an authed mount and avoid duplicating that code all the time. The result is that, from the code, those LiveViews look like they connect once wherever I can get away with that. …

No, because you necessarily have data dependencies in the template for almost all cases of disconnected vs. connected mount, so they need a shared code path.

Nope. This is part of the programming model. It’s very likely you don’t even need to consider it. If you are concerned, measure and optimize as needed :)

1 Like

I just realized that part of the problem is that you have no guarantee the websocket connection hits the same Erlang node as the HTTP connection. You could probably finagle something with a token and ets as a cache, but yeah, that should be a library on top of LV, not in LV itself, since not everyone needs something that heavy.
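A single-node sketch of that cache idea (module name, table name, and the consume-once policy are all assumptions; it deliberately ignores the multi-node problem just mentioned, and real code would need a TTL since the websocket mount may never arrive):

```elixir
defmodule MyAppWeb.MountCache do
  @table :mount_cache

  # Call once at application start.
  def init, do: :ets.new(@table, [:set, :public, :named_table])

  # Run `fun` on the first (HTTP) mount and cache the result per user;
  # the second (websocket) mount consumes the cached value instead.
  def fetch(user_id, fun) do
    case :ets.lookup(@table, user_id) do
      [{^user_id, value}] ->
        :ets.delete(@table, user_id)
        value

      [] ->
        value = fun.()
        :ets.insert(@table, {user_id, value})
        value
    end
  end
end
```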

Not only that: you don’t even have a guarantee that a websocket connection happens at all. And if it does, when it’ll do so.