Looking for input about how to better approach JS development in Phoenix (not just LiveView)

Thanks for sharing that Electric Clojure link, looks very interesting.

If anyone else has links to articles or projects which are relevant please make sure to share them. It’s very hard to find anything with search anymore in this dystopia we now inhabit :slight_smile:

A couple more things I’ve read recently and found helpful:

The bulk of the article is background rather than actually being about transmitting JSX over the wire; that background is exactly what I was looking for, so I very much enjoyed it.

This article and its follow-ups are notes from the author’s attempts to build a reactive UI framework in Rust, with lots of helpful background and references. I am a big fan of this blog in general, lots of great articles about GPU rendering (particularly 2D vector rendering, which is really hard).

Excellent series about React as an effect system rather than a UI library. I posted this above but I may as well put it here too.

From the same author, describing in a fair amount of detail how to build a React clone without the UI parts.

2 Likes

I see what you’re saying, but when I see CRDT I immediately think of black-box algorithms and things like Y.js and Automerge. They feel very different from server reconciliation, which seems straightforward. I mainly make the distinction so people who are exploring building apps with local-first characteristics don’t assume they must use a “true” CRDT.

Hmm, is it? I haven’t spent a ton of time with CRDTs in practice, but from what I’ve read it seems you have less control over what they converge to. In the text-editing example, the algorithm will converge, but it may mash words together into something that neither user expected.

1 Like

Yeah, that’s fair. I think your post was perfectly reasonable, I was just trying to clarify my intent. My view is probably (or eventually will be) similar to yours.

Tautologically, you have as much control as you have. If you use someone else’s CRDT and it doesn’t converge how you want, of course there is nothing you can do about that. Likewise if you use MongoDB and it loses your writes, that’s just how it goes, I guess.

My point is that how CRDTs merge data is not defined anywhere. The only prescription is that it must be conflict-free. An append-only set is a CRDT, and yet has very intuitive behavior for users. Maybe not if they expect to delete their data, but there are ways around that which are still fairly comprehensible.

Expanding on the above, how would you implement a text CRDT? One easy way would be to have a textbox, timestamp every update, and write them into an append-only set. Your edit clobbers someone else’s? Too bad. At least we stored both! For further convenience, you can track the parent of each update. Maybe hash them together with the author and the commit time. Maybe do directories and files too. Put them into a merkle tree… starting to sound familiar? :slight_smile:

Git’s basic model is a valid CRDT. And note that it does not mash anyone’s words together!
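
To make that concrete, here’s a minimal sketch of the timestamped append-only set idea in Elixir (all names are made up; merge is just set union):

```elixir
defmodule AppendOnlySet do
  # A grow-only set CRDT: the only operation is `add`, and merge is
  # plain set union. Union is commutative, associative, and
  # idempotent, so replicas converge no matter the order of merges.
  defstruct entries: MapSet.new()

  def new, do: %__MODULE__{}

  # Timestamp every update and tag it with its author. Nothing ever
  # clobbers anything; we simply keep both entries.
  def add(%__MODULE__{} = set, author, text) do
    entry = {System.system_time(:millisecond), author, text}
    %{set | entries: MapSet.put(set.entries, entry)}
  end

  def merge(%__MODULE__{} = a, %__MODULE__{} = b) do
    %__MODULE__{entries: MapSet.union(a.entries, b.entries)}
  end
end
```

Two replicas can `add/3` independently while offline, `merge/2` whenever they reconnect, and always end up with the same set. The “current” text is then whatever policy you like, e.g. the entry with the latest timestamp.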

Real-time collaboration is considerably harder. But that has nothing to do with CRDTs! It’s just a more difficult problem because collisions are more likely. Getting the data synchronized as quickly as possible seems to make things easier, which is why having a server in the mix improves results. Servers can also help timestamp operations more accurately, at least for some definition of “more accurately”.

But there is no reason you can’t design a CRDT which detects when a user was offline for a while and offers an explicit merge interface, like Git. That is still a valid CRDT.

Also: not all of the UX solutions here lie in the CRDT. For example, collaborative apps often show other users’ cursors. Why, because it looks cool? No, it reduces conflicts! If you see your friend typing there then you naturally know to avoid that spot until they’re done. Easier to avoid conflicts in the human layer :slight_smile:

2 Likes

This seems to be a bit off topic, but why should MongoDB lose your writes? I have been working with MongoDB since version 3.x and have never experienced the database losing data. I believe this affected some earlier versions. Interestingly, the cliché seems to persist over the years, even though the database is constantly being developed further. I’d estimate that current versions of MongoDB are just as reliable as other databases.

It’s a bit of an old meme, Mongo became quite infamous for losing writes back in the day (because it did, a lot). To their credit, its developers were wise enough to use all of the money they made lying about their guarantees to buy a storage engine written by people who actually knew what they were doing and my understanding is that Mongo has actually been quite reliable since then.

My throwaway analogy was not meant to be taken too seriously :slight_smile:

1 Like

Cool approach, although I honestly don’t fully understand its value, or that of Inertia.js or similar approaches. If they work for you, great. To me though, in terms of making a recommendation to a broad audience, they seem… overly complex?

Our codebase is a mono repo. The Vue app lives inside the /assets folder. Vite is configured to build the app and put the generated index.js file into /priv/static. When the user navigates to mysite.com/app, the AppController renders its index.html.eex, which has a script tag that points to that index.js. Once the app is mounted, from then on it fetches whatever it needs using good ol’ HTTP requests, which return JSON via various controllers.

That’s it.
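
For anyone curious, the Phoenix side of that can be as small as this (a sketch; AppController and index.html.eex come from the description above, everything else is an assumed name):

```elixir
defmodule MyAppWeb.AppController do
  use MyAppWeb, :controller

  # Serves the SPA shell; once the Vue app mounts, all further data
  # comes from the JSON controllers over plain HTTP.
  def index(conn, _params) do
    render(conn, "index.html")
  end
end

# index.html.eex is little more than a mount point plus the
# Vite-built bundle served out of /priv/static:
#
#   <div id="app"></div>
#   <script type="module" src="/index.js"></script>
```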

This approach is:

  1. Simple: you configure Vite (or ESBuild or whatever) and set up your controller with an HTML template and that’s it.
  2. Flexible: if you decide to split the Vue app out to its own repo, it would be trivial to do so. You could then deploy it on a completely separate host like app.mysite.com if you wanted, which is sometimes beneficial.
  3. Does not require any additional build tools or frameworks. Phoenix for the API, Vue for the client. Easy.
  4. Maintains full separation of concerns: the server has no idea what’s going on in the client in terms of UI state. It just renders JSON when asked.

On top of that, the fact that there is a standard API means that other clients (e.g. iOS) will be relatively easy to support if you go down that route in the future, as is the case for us (something we did not anticipate at the beginning).

2 Likes

What you’re doing is a regular SPA, and it’s perfectly fine! I’ve been doing similar things for almost my whole career, e.g. a monorepo with Nuxt / Django REST Framework. (I’m assuming there’s only one index.js for the whole frontend app?)

But then, you have to:

  • manage frontend-side routing
  • write controllers for fetching the data and keeping that state in sync
  • deal with some additional complexity for server-side updates, etc.

My approach combines the power of LiveView (no dedicated API, persistent server-side state, server-side routing) with the power of Vue (great client-side DX and the ability to create rich UIs). I’m just saying it works well for me :slight_smile:, and I have to admit I’m enjoying trying something different.

3 Likes

I was doing some reading on CRDTs and came across this article you might find interesting:

Note that the author went on to (recently) publish a paper with Kleppmann. I haven’t read it in full yet (I still need to work my way up to the SOTA), but it seems to be important research.

1 Like

To be clear, I’m not questioning whether your approach works well for you — in fact, I think it’s quite clever. My question is more about whether there are substantial net benefits over the traditional SPA + API approach, enough to recommend it to a general audience.

Has frontend routing been a major hassle for you? For us, it’s just a router.js file with an array of “path > component” mappings. There’s no real complexity or pain here in practice.

But in your approach, you’re still writing LiveView modules, which is just a different way of doing the same thing: instead of controller actions and views, you have handle_params, handle_event, and handle_info, and you update the socket assigns with the new state. Functionally, it seems equivalent.

I’d argue this complexity exists either way. If User A is viewing a blog post list while User B creates a new post, you still need to propagate the change to User A. In LiveView, this means broadcasting an event and handling it in handle_info. In a traditional SPA, you’d establish a websocket/channel connection and handle the update on the client (where the SPA actually does the heavy lifting). I don’t see a meaningful complexity advantage for LiveView here.
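
To illustrate, the LiveView half of that scenario is something like this (a sketch; MyApp.PubSub, the "posts" topic, and Blog.list_posts/0 are assumed names):

```elixir
# Wherever User B's post gets created:
Phoenix.PubSub.broadcast(MyApp.PubSub, "posts", {:post_created, post})

# In the LiveView that User A has open:
def mount(_params, _session, socket) do
  if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, "posts")
  {:ok, assign(socket, posts: Blog.list_posts())}
end

def handle_info({:post_created, post}, socket) do
  {:noreply, update(socket, :posts, &[post | &1])}
end
```

The SPA version is the mirror image: join a channel, patch the client-side store when the event arrives. Roughly the same number of moving parts either way.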

This one confuses me a bit because I see it as a drawback, not a benefit. If you later decide (or your users demand) support for another client (e.g., iOS or Android), you’ll still need to build the missing API anyway. And before you say you will probably never need a mobile app, let me point out that in the B2B SaaS space, adding a mobile client is common, with some surveys showing that nearly 80% of SaaS companies with more than 50 employees eventually offer a mobile app alongside their web app. In the B2C space, the mobile-first trend is even more pronounced, with users spending 90% of their time inside apps as opposed to websites, and apps being responsible for 71% of all web traffic. So by skipping an API up front, you take a huge risk and accumulate technical debt you’ll need to pay down later, often under time pressure.

But with Vue (or React/Svelte), you also have persistent state, just on the client. With Vue’s Options API, it lives in data; with Composition API, in ref/reactive; and you can centralize it further in Vuex/Pinia. Personally, I’d argue browser state does not belong on the server, and having the server manage UI state violates separation of concerns, which is one of my main gripes with LiveView generally.

So my core feedback on your approach (and also Inertia.js or LiveSvelte etc.) boils down to this:

The main advantage of LiveView, as traditionally implemented, is that it automatically reconciles and renders state changes in the browser through efficient HTML diffs, which eliminates much of the need for client-side JS. But if you’re already using Vue/React/Svelte for rendering, you’re giving up that benefit entirely. At that point, why not just use controllers and serve data in whatever format makes sense (JSON, CSV, HTML, etc.)? This also keeps your API flexible and better separates concerns.
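
Concretely, a single controller action can serve whichever format the client asks for (a sketch; Blog.list_posts/0 and to_csv/1 are hypothetical):

```elixir
def index(conn, _params) do
  posts = Blog.list_posts()

  case get_format(conn) do
    "json" -> json(conn, %{posts: posts})
    "csv" -> send_download(conn, {:binary, to_csv(posts)}, filename: "posts.csv")
    _ -> render(conn, "index.html", posts: posts)
  end
end
```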

Ultimately, I don’t see a substantial benefit to mixing LiveView with a frontend framework, unless the goal is simply experimentation, which is perfectly valid on its own. But as a general recommendation, the traditional SPA + API approach seems simpler, more flexible, more aligned with standard web architecture, and with greater potential to meet the demands of your users.

7 Likes

The advantage of server rendering is that it keeps you close to your server-side state. Client-rendering frameworks like React have long struggled with request waterfalls and similar latency problems caused by interacting with server-side state from the client over repeated round-trips. Server rendering fixes that because the server is close to the database, and therefore close to the data, so it can interact freely.

Recent progress in React-land has been focused on React Server Components, which, after several generations of failed attempts, appear to be a proper “solution” to this problem. The solution, of course, is server-side rendering of React components (i.e. exactly what LiveView is doing). But once the components are rendered they head to the client, which is free to also set up client-side state as it wishes, allowing for high interactivity with low latency - something we cannot do with LiveView. Hybrid client-rendering approaches like LiveVue/LiveSvelte/etc. are attempts to find a middle ground in this space.

Of course you can turn the entire thing inside-out: instead of running your app on the server, you can run your database on the client. This is the realm of local-first, and is the reason we branched out into a brief discussion on CRDTs :slight_smile:

But not every application can be local-first. Like I’ve said before, how do you build a local-first AWS console? A local-first Twitter? A local-first ChatGPT (assuming the model is too large for user hardware)?

What about a local-first search engine? Can you fit a 100 petabyte web index on your phone? No.

And so server-rendering will remain optimal for some problems. But maybe you still want to mix in some client-rendering for the interactive parts. There is a convergence here, with React on one side, LiveView on the other, and RSC/LiveVue/Hologram/etc occupying various points in between. It remains to be seen what lies in the center.

1 Like

I just wanted to add that Hologram is unique in this respect because Hologram components are truly isomorphic - there’s no real distinction between “client” vs “server” components. The same component code can be rendered on either side, but the initialization callback differs depending on where the component is initialized (client or server):

  • When initialized on the server-side, components use init/3 and have access to cookies, session data, database connections, and the full server environment

  • When initialized on the client-side, components use init/2 and work with whatever data is available in the client context
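
If I’m reading that right, a single component can implement both callbacks, along these lines (purely illustrative; beyond the init/3 and init/2 arities described above, names like put_state are my assumption - check the Hologram docs for the real API):

```elixir
defmodule MyApp.CounterComponent do
  use Hologram.Component

  # Server-side init (init/3): the extra argument exposes the server
  # environment (cookies, session, etc., per the description above).
  def init(_props, component, _server) do
    put_state(component, :count, 0)
  end

  # Client-side init (init/2): only the client context is available.
  def init(_props, component) do
    put_state(component, :count, 0)
  end
end
```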

This allows for seamless server-side rendering with full server capabilities, while maintaining the same component logic for client-side interactivity. It’s a true hybrid approach that bridges the gap between server rendering (like traditional LiveView) and client rendering (like React), giving you the best of both worlds without the artificial boundaries that other frameworks impose.

2 Likes

The same is true of React Server Components (in fact it is the entire point), but in their case, as far as I am aware, they lack a stateful connection to the server. The components are server-rendered once and then shipped to the client, and that’s it. Which makes them a degenerate solution; the ongoing collaboration between server- and client-side state is the truly hard part.

I still have no idea how to do that properly. The server and client will never truly be in sync, so if you want a correct solution in which both ends can manipulate state (no hacky prediction trickery), then you probably have to model it as a CRDT. Maybe you would load an initial state from the database into a view of that state, maintained as a CRDT on both the client and the server, with mutations shipped back to the source as needed.

I’m sure there is already work in this area, I just haven’t gotten around to learning of it yet :slight_smile:

1 Like

Aren’t they always evaluated on the server only? I’ve never used them, but I was under the impression that they execute exclusively on the server and send their rendered output to the client, so any operation related to them must go through the server?

1 Like

I have not used them either and I am not an expert (I have only read about them a bit so far), but my understanding is that they are React components rendered on the server and returned to the client. Once they are rendered into the client they are just normal client-side React components.

They seem to have a lot more trickery compared to classic SSR but I don’t know the details yet. But as far as I can tell they are only server-rendered once, there is no ongoing client-server relationship. Or, at least not a stateful one.

You raised many excellent points in your response. I’ve been building SPAs for a number of years, and we’ve usually hit a few complexities along the way.

API design itself is tricky, and if your frontend app is the only consumer it can often be seen as a “necessary evil”. Some examples off the top of my head:

  • Naming things!
  • Nested resources - what if some of your UIs need relationships? Should you extend the existing endpoint (?include=author,comments)? That increases complexity quite a lot, while not doing it might require a separate request, which increases latency.
  • Nested endpoints (/posts/<id>/author) vs flat ones (/authors/<id>)?
  • Authorization & permissions. With LiveView it’s easy - if a user can’t reach a given LiveView, everything inside it is unavailable. With endpoints, each one could be used in multiple places. Maybe some of those places require additional details not available to regular users? That creates extra complexity for API development.
  • Refactorability. It’s much easier to grasp what’s available and needed for a given view when everything is in one place, and you don’t fear making changes. With the API approach, ensuring something can safely be deleted is much harder.

A big selling point of LiveView is that you don’t need an API. From what I’ve seen, people value that a lot.

The point is: most web apps don’t need that separation of concerns. Things that change together become more complex and harder to change when abstracted apart. In my experience, developing apps with the Inertia.js approach is much faster.

I can hear you about mobile apps. In my experience they’re not needed that often, but that might be my local bubble. If you do plan to build a mobile app, then an API might be necessary down the line.

There’s an additional, quite tangible benefit: during the initial render or navigation, new assigns are loaded on the server side and then embedded in the HTML. Even without SSR! No waterfall needed - everything for the initial render is already there.

And each handle_event can change any number of assigns and send the diff in one go.
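
For example (assumed names; the point is that any number of assigns can change in response to one event):

```elixir
def handle_event("select_post", %{"id" => id}, socket) do
  post = Blog.get_post!(id)

  # Three assigns change here; the client receives a single diff.
  {:noreply, assign(socket, post: post, page_title: post.title, sidebar_open: false)}
end
```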

So, I’d say it’s simply a different approach that might be more niche, but it still has some valid business and DX cases.

2 Likes

Poking my nose into this massive thread I’ve been silently following to confirm: yes, I value this a LOT. I really dislike the “what if we need an API” argument because:

  1. What if we don’t?
  2. If we do, I don’t see having a separate customer-only web API as a bad thing. Provided you have a “skinny controller” design, this isn’t a herculean feat, and it lets you draw a firm line between what is internal vs. external. Also, these days we have Ash, which makes this trivial :slight_smile:

7 Likes

This is becoming less relevant now, as SEO is basically dead. I agree with the “you don’t need an API” argument though.

It’s not for SEO, it’s for performance. The article I linked above provides a very thorough overview.

TBH I don’t think we should focus on SSR as the main benefit of LiveView, as React Server Components can now do the same thing (and SSR existed for JS even before that). What makes LiveView special is the stateful connection, which is only tractable because the BEAM is built different :slight_smile:

1 Like

TL;DR: if you need N (N > 2) network round trips to render something meaningful, then I agree it’s bad. Two round trips are still OK: the first for the JavaScript, the second for the data.