The Top 3 LiveView Form Mistakes (And How to Fix Them)

I’ve been writing LiveView since 2020. In that time, I’ve seen the same three form mistakes at multiple companies. Here’s what they are and how to fix them.

1. Slow, laggy forms with scattered logic, caused by storing form state in socket assigns and using server round-trips for dynamic UI (conditional inputs, toggles) instead of keeping that state in hidden form inputs where it belongs.

2. A brittle system where the UI and database can’t evolve independently, caused by using database schemas directly for forms, which couples persistence logic to presentation.

3. Users stuck with valid data they can’t submit, caused by manually manipulating changesets with Map.put or Map.merge instead of Ecto.Changeset functions, which leaves stale errors behind.

The common thread: don’t fight the framework. Keep form state on the client, create embedded schemas for your forms, and use Ecto.Changeset functions to modify changesets.
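For mistake #2, the embedded-schema approach might be sketched like this (module and field names are illustrative, not from the article):

```elixir
defmodule MyAppWeb.RegistrationForm do
  use Ecto.Schema
  import Ecto.Changeset

  # Backs the form only; the persistence schema (e.g. Accounts.User)
  # can evolve independently of this one.
  embedded_schema do
    field :email, :string
    field :password, :string
  end

  def changeset(form, attrs) do
    form
    |> cast(attrs, [:email, :password])
    |> validate_required([:email, :password])
    |> validate_length(:password, min: 12)
  end
end
```

On submit, you’d apply_action this changeset and only then translate the resulting struct into whatever your context function expects.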


In Mistake #2, where you do:

{:error, changeset} ->
  {:noreply, assign(socket, :form, to_form(changeset))}

How would you map the errors back to the form schema, when the fields from the DB are different?


Good question.

In that specific case branch, nothing needs to be done, because that changeset is the form changeset.

If there were an error in Accounts.register_user and you got a changeset back, you’d do something like (pseudo-ish code):

changeset = FormModule.changeset(%FormModule{}, form_params)

changeset
|> Ecto.Changeset.apply_action(:insert)
|> case do
  {:error, form_changeset} ->
    # Form-level validation failed; re-render with those errors
    {:noreply, assign(socket, :form, to_form(form_changeset))}

  {:ok, form_struct} ->
    form_struct
    |> Map.from_struct()
    |> Accounts.register_user()
    |> case do
      {:ok, _user} ->
        # ... success path

      {:error, _user_changeset} ->
        # Figure out which user changeset field had an error,
        # place the error on the *form* changeset, and re-assign
        socket =
          changeset
          |> Ecto.Changeset.add_error(:field, "There was an error saving to the database!")
          |> Map.put(:action, :validate)
          |> to_form()
          |> then(&assign(socket, :form, &1))

        # ... rest of handler
    end
end

That’s how I’ve done it before.


Thanks!
It’s super valuable to separate the UI schema from the DB schema: for example, when for UI reasons we split a datetime field into separate date and time fields. But there’s no built-in way to map errors back when we get, say, some constraint errors from the DB side. Maybe something like

changeset
|> Ecto.Changeset.remap_error(:datetime, :time)
|> Ecto.Changeset.remap_error(:datetime, :date)

would be a nice helper to have. WDYT?
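Such a helper doesn’t exist in Ecto today, but a sketch of one is fairly short (remap_error is the hypothetical name from the post above):

```elixir
# Copy errors from a DB-side field onto a UI-side field, using only
# public Ecto.Changeset functions. The original error is left in place;
# removing it would mean reaching into changeset.errors directly.
def remap_error(%Ecto.Changeset{} = changeset, from_field, to_field) do
  changeset.errors
  |> Keyword.get_values(from_field)
  |> Enum.reduce(changeset, fn {msg, opts}, cs ->
    Ecto.Changeset.add_error(cs, to_field, msg, opts)
  end)
end
```

Usage would then be `remap_error(changeset, :datetime, :time)` and so on, as in the snippet above.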

To place it here so others can find it: Ecto.Changeset.unsafe_validate_unique/4 is super helpful for checking unique constraints in the form changeset, instead of waiting for the save event when the DB changeset comes back.
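As a sketch of how that reads in a form changeset (the Repo module and :email field are my assumptions):

```elixir
def changeset(form, attrs) do
  form
  |> cast(attrs, [:email])
  |> validate_required([:email])
  # Queries the DB at validation time, so the user sees the uniqueness
  # error on phx-change instead of only after submit. "Unsafe" because
  # it's racy: keep the real unique_constraint on the DB changeset too.
  |> unsafe_validate_unique(:email, MyApp.Repo)
end
```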

I don’t have much professional experience with LiveView, just hobby projects. But RE 1, aren’t things a bit more complex than you suggest? You say “don’t fight the framework,” but naively it would seem that using JS at all is fighting a framework whose express intention is presumably to manage state on the server. Your example is a good one for your argument: it seems like pure FE state that is relatively easy to handle (with the latest JS integration). But in reality, isn’t most state more ambiguous?

Take a table with rows that are selectable. On first glance this seems like something that should be handled on the FE. Certainly requiring a server trip to toggle selection creates lag, and arguably breaks a strong user expectation that checkboxes are highly responsive. But what happens when some BE action needs to know which rows are selected? Or even needs to rerender part of the view (e.g. to display some metadata about selected rows)? Obviously, you can solve this with more JS, but now all the defects you mentioned with a BE implementation apply: the logic is more spread out, there are more places for bugs, and arguably JS bugs are a lot more difficult to test and QA against (at least, that’s why we’re using LV in the first place, no?). Alternatively, one could try to use different tooling to alleviate issues with lag, like debouncing, optimistic UI updates, etc.

To be clear, I don’t think your advice is necessarily bad, in the end I think it is simply one of the challenges of development with LV to find the right balance here. But it is a balance, and a delicate one. The more JS features get added to a LV project the lower the ROI it seems. At a certain point, if you want to add a lot of these features, DX is going to go downhill in comparison with React.


For your table example, you could use the exact same thing. I would use inputs_for if you want to show checkboxes with lists of data, and each item can have a checked/selected parameter. Once you check one of those, your checkbox is instantly shown as checked due to the JS, and your backend phx-change handler would be notified of everything the user has checked, now. Does that make sense?
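Roughly, that inputs_for approach could look like this (schema, param key, and event names are assumptions of mine, not from the article):

```elixir
# Template side (HEEx, shown here as a comment):
#   <.form for={@form} phx-change="rows-changed">
#     <.inputs_for :let={row} field={@form[:rows]}>
#       <input type="checkbox" name={row[:selected].name} value="true"
#              checked={row[:selected].value} />
#     </.inputs_for>
#   </.form>

# Server side: every phx-change event carries the checked state of all
# rows at once, so the @form assign stays the single source of truth.
def handle_event("rows-changed", %{"table_form" => params}, socket) do
  changeset = TableForm.changeset(%TableForm{}, params)
  {:noreply, assign(socket, :form, to_form(changeset))}
end
```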


The problem really manifests when you are manipulating related state (in this case, form inputs) across multiple event handlers. I’ve seen this manifest most often with combinations of phx-change handlers and phx-click handlers that are really modifying the same state (the form), but do so in different places, which is the example I gave in the article.

The fix is, once the form state has changed, to notify the backend through the phx-change event handler. We’re basically saying, “Hey, the user changed the state on their end – here’s the new stuff.”

In this way, you retain the benefits of instant user feedback, by manipulating the form state w/ JS, and also the benefits of LiveView, because now the frontend has re-rendered to mirror what the server state is. (You can verify this by taking the example from the article, toggling the checkbox, and verifying that the patch to show a checked checkbox comes through the websocket. It’s just invisible to the user most times because their checkbox was already checked due to the Phoenix.LiveView.JS helper)
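A minimal sketch of that handler, assuming an embedded-schema form module (names are mine):

```elixir
# The form is the single source of truth. The browser checks the box
# instantly on click; phx-change then carries the whole form state to
# the server, which re-renders to match what the user already sees.
def handle_event("validate", %{"registration_form" => params}, socket) do
  changeset =
    %RegistrationForm{}
    |> RegistrationForm.changeset(params)
    # :validate makes errors render without waiting for a submit
    |> Map.put(:action, :validate)

  {:noreply, assign(socket, :form, to_form(changeset))}
end
```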


Maybe, then, the anti-pattern should be re-named or modified to something more along the lines of “Don’t modify form state outside of the phx-change or phx-submit event handlers” ?

I like the idea, for sure. I’ve generally found “what error(s) occurred on this changeset?” to be a clunky problem to work around, and I haven’t found a good answer yet

For example, I frequently find myself wanting to know if a unique constraint was violated on insert (when I can’t/don’t use unsafe_validate_unique), and have to do a weird Enum.any?(changeset.errors, fn …) and I just don’t love the way it reads/writes
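For reference, the “weird Enum.any?” check described above could at least be wrapped in a named helper (a sketch; unique_violation?/2 is a name I made up):

```elixir
# Did a unique constraint trip on this field? Ecto tags constraint
# errors with a :constraint key in the error opts, so we can pattern
# match on it rather than string-matching the message.
def unique_violation?(%Ecto.Changeset{} = changeset, field) do
  Enum.any?(changeset.errors, fn
    {^field, {_msg, opts}} -> opts[:constraint] == :unique
    _other -> false
  end)
end
```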

Yeah, that makes sense as a step back from “manage all FE state on the FE“ to something more like “BE manages state on a need to know basis.“ I think managing pure FE state (like toggling visibility or whatever) via BE rerenders could be called a legitimate anti-pattern for sure. But I don’t think we can eliminate the problem of laggy UI entirely that way. Unless you’re able to push back on specific design/UX expectations coming from a client/product design team, which I would think of as “fighting the framework.”

To be clear, I wasn’t and am not advocating for managing frontend state on the frontend :slight_smile:

Rather: don’t scatter related state (form state, in this case) across multiple places (regular assigns and the @form assign), and you can use Phoenix.LiveView.JS to give the user instant visual feedback for good UI/UX while the server WebSocket round trip happens :slight_smile:

Thank you for raising the points you did; I think it’s turned into a good discussion! I’ll look it over and see if I can’t make that more clear for future readers.


You are spot on, and in fact this problem has come up on here several times in the past. Particularly when it comes to LiveView’s imperative APIs (stream() and JS). The simpler your app is the easier it is to get away with this. This is why programming in the imperative style is so insidious: when you write code with O(N^2) paths things start off easy (when the app is simple and new) and then you get absolutely destroyed when the curve goes vertical.

The reality is that if you want to do server rendering you need to commit to it and accept the latency. Most of the time this is fine. If in your case the latency is not fine, that is a very good hint that you should not be doing server rendering.

Of course there are always exceptions, edge cases, and compromises that have to be made. This is merely a guiding principle.

The main issue is that in practice the JS is usually written in an imperative style. If you were to write your JS declaratively using a proper frontend framework (e.g. React and others) and glue LiveView to that framework properly (passing LV state into props and so on) you could get away with it.

There have been several attempts to make this integration more natural (see live_vue, live_svelte, etc). You will still have consistency issues, but those can be dealt with.


I wasn’t thinking of this in terms of imperative vs declarative styles, but yes, it seems like for apps that need highly interactive UI features, a proper FE framework would almost inevitably be required to manage the complexity. So I agree. But…

Personally I’ve always found the imperative/declarative dichotomy puzzling and not very useful in practice. When I questioned one of the first React devs I worked with about the rationale behind some code he wrote, his defense was just “this is declarative.” Well, isn’t React itself supposed to be “declarative”? Does that mean it’s just not possible to write bad code in React? From what I gather, declarative code is generally intended to mean either “more abstract” or simply “the right abstraction,” where the word declarative functions as a magical wave of the wand substituting for an actual evaluation of the abstraction itself. And sure, React provides a lot of very useful abstractions for writing JS apps, no doubt.

But turning back to LV, I think it makes sense, in the context of a framework that generally speaking wants to encourage/enable you to write as much of your app code in Elixir as possible, to provide the imperative tools it does. Or maybe they’re just “more imperative”? Less abstract? JS.show is still an abstraction that tells the system what you want rather than how to do it, in a certain respect.

The main issue is that these terms are extremely overloaded in compsci. I think they must have at least 3 or 4 completely distinct meanings in common use. I have gone back and forth several times on terminology. My current preference is to always use the phrasing “declarative style” as you may have noticed.

The best explanation of this, and the one that finally made it click for me, is in this article. The definition of declarative used there is something to the effect of “writing code which rebuilds itself from scratch each time”, as opposed to “writing code that mutates state in pieces” (the author calls this “delta code”). But definitely read the article, because I cannot do such a complex topic justice in one sentence.

This was an important realization for me as I had previously thought of declarative as being a property of a language or runtime as opposed to the structure of the code. And in reality both are probably valid definitions (because the term is so overloaded). That’s why I’ve been going with “declarative style”.

Quite the opposite. React-like engines are useful as tools for making code written in the declarative style more efficient. What React is, actually, is an unstructured incremental engine. It’s a tool for writing flexible incremental code. Incrementalism is the key to making declarative code efficient, as otherwise you have to rebuild the universe from scratch on every invocation of anything.

Unfortunately pretty much nobody criticizing React seems to understand this. Most people advocating for React don’t seem to understand this either, nor unfortunately do most people using React. I’m not throwing shade btw, it’s not like I understood this either until relatively recently lol. Live and learn.

Which is why that “definition” of declarative isn’t really very useful for anything.

Taking the meaning I gave above, the reason JS.show() is imperative and bad is because you have to call it each time you want to show/hide something. Whether something is shown/hidden should be downstream of the state which is re-computed in full on each render.
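To make the contrast concrete, here’s a small sketch (assign and event names are mine):

```elixir
# Imperative ("delta") style: every codepath that affects visibility
# must remember to call JS.show/JS.hide itself.
#   <button phx-click={JS.show(to: "#panel")}>Show</button>

# Declarative style: visibility is recomputed from state on every
# render, so any event that changes @panel_open updates the UI, with
# no per-codepath show/hide calls.
#   <div :if={@panel_open} id="panel">...</div>
#   <button phx-click="toggle-panel">Toggle</button>

def handle_event("toggle-panel", _params, socket) do
  {:noreply, update(socket, :panel_open, &(!&1))}
end
```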

When you write imperative code (“delta code”) you have to duct-tape all of your codepaths together to keep things working. User clicked show? Show! User clicked nav? Hide! Notification appeared after user clicked nav after user clicked show? Uh, idk? And so you get bugs. Endless bugs.

The problem with this, as detailed much more beautifully in that article, is that you are forced to write O(n^2) lines of code for n pieces of functionality, which fundamentally limits your ability to design complex behavior. This is why many apps in modern times are so limited in their functionality. The authors do not know how to manage the complexity.
