How are you building apps with Phoenix in 2022?

I think a lot of people might be interested in this - what is your preferred way to use Phoenix in 2022?

7 Likes

I’ve been digging deep into Liveview lately.
Real-time web applications brought me to Phoenix, and it may remain that way for the foreseeable future.

3 Likes

I use Phoenix for everything, “normal” websites and “apps”. I mostly write internal CMS style tools for clients that want to solve some particular workflow or migration issue. I basically only write LiveViews now. I wrote my first controller last week after a long, long time (not including tweaks to mix phx.gen.auth). It felt very awkward.

LiveView (and Ecto) are amazing tools and I am incredibly grateful that they’ve been gifted to the world. The “free” pubsub change pushing/sharing is amazing; not that Phoenix couldn’t do that before, but LV makes it dead simple. It also feels like I suddenly got “isomorphic” web apps for free without even trying. I just looked up one day and realised I was writing everything in a language I love to use and it all just (mostly) works.
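To show what I mean by “free” change pushing, here’s a rough sketch (module, topic and function names are all made up for the example): subscribe in mount, broadcast after a write, and every connected LiveView re-renders.

defmodule MyAppWeb.OrdersLive do
  use Phoenix.LiveView

  @topic "orders"

  def mount(_params, _session, socket) do
    # only subscribe once the websocket is up, not on the initial static render
    if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, @topic)
    {:ok, assign(socket, orders: MyApp.Orders.list_orders())}
  end

  def handle_info({:order_created, order}, socket) do
    # pushed to every subscribed user, no extra client code needed
    {:noreply, update(socket, :orders, &[order | &1])}
  end

  def render(assigns) do
    ~H"""
    <ul><%= for order <- @orders do %><li><%= order.name %></li><% end %></ul>
    """
  end
end

# elsewhere, after a successful write:
# Phoenix.PubSub.broadcast(MyApp.PubSub, "orders", {:order_created, order})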

I use a pretty normal stack, Phoenix, Tailwind and Alpine if it’s too small/unique for a hook. I think it’s easy to write convoluted Alpine code and hooks are underappreciated but I also think phx-hook could probably use some ergonomic improvements, maybe allowing multiple hooks on one element or hooks without ids.
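For context on the id gripe, this is roughly how a hook gets wired up on the HEEx side (the hook name and data attribute are invented for the example; the matching “Sparkline” object still has to be registered in app.js via the LiveSocket hooks option):

defmodule MyAppWeb.Charts do
  use Phoenix.Component

  # phx-hook needs a unique DOM id on the same element, hence the nit above
  def sparkline(assigns) do
    ~H"""
    <div id={@id} phx-hook="Sparkline" data-points={Jason.encode!(@points)}></div>
    """
  end
end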

I still use NodeJS because I include some PostCSS plugins in my tailwind stack (plug). I saw parcel-css a few days ago but I don’t think it’s quite intended to replace that NodeJS use-case (yet?).

I have always deployed containerised, using a mixture of Docker, with a few servers on Podman. Podman is a new addition which I like, but I’m not 100% sold on it being production ready, as I have had issues with its inter-pod networking, which can be stressful. I much prefer it for running without a daemon, rootless, and basically following a “just a process” model. (I know Docker has rootless stuff now but it seems sort of ad-hoc.) Podman 4.0 comes with a fresh networking stack, expected sometime in the next month or so, which I look forward to over-eagerly deploying to production.

The new mix phx.gen.docker looks like a great starting point for this workflow, though I have only used it once, just to “eyeball diff” the prescribed method against my own Containerfile. Containerisation has some quirks, but it’s worth investing your time in learning if you haven’t, even outside the scope of Phoenix et al.

I put everything behind a Caddy (container) reverse proxy which does all my https work for me.

My services are not high-traffic so I haven’t needed HA infra; the 2 seconds of downtime are fine for out-of-hours deployments (and if you care enough, LV gives you the tools to track usage and know when a container is ok to junk). It would be pretty easy to set up with Caddy if needed though. My clients are geographically localised, so I can deploy regional servers that have good ping; if you’re hitting a global user base, LV probably starts to become cumbersome without something like fly.io or a much more complicated deployment strategy.

Sometimes I still do feel uneasy about LV just “not working” on some barbaric windows vista system or something and never knowing but so far that hasn’t been an issue. I think that’s mostly just hold-over baggage from being a webdev in “the bad old days”.

That was all a bit of a ramble. I have touched a lot of web stacks over the years: Rails, Django, Grails, Angular, Vue (and 100 other JS frameworks like Marionette and Backbone). I can’t really imagine moving on from Elixir. That meme of replacing entire stacks with the BEAM is funny but kind of true, for me at least, and LV covers a lot of the front end stuff. That sounds weird for me to say, as I am – I hope – pretty pragmatic in terms of being willing to drop, adopt or at least try other technology if it makes sense, but Elixir/Erlang/Phoenix/BEAM/etc just provide such a vast, consistent and unified set of services that… Well, like I said, it’s hard to see why I would want to go back to something like Rails and some kind of “fake” process model.

The concurrency model alone is a game changer, really. It’s just so easy to go async in predictable patterns, with recovery in or out of tree, etc. You just get so much for free, for so little effort: things other stacks can certainly do, but just can’t quite do as well.

Actually, where I would reach for something else is low-memory deployments and desktop/mobile apps.

I wrote a tool to convert SMTP mail to HTTP posts the other week. I was initially going to write it in Elixir with all the proper concurrency, retry tracking, etc., but the thought of deploying the ~150mb BEAM felt like overkill for a pretty fire-and-forget service. As a Go binary it’s about 7mb. I don’t think this is “solvable” in terms of reducing the BEAM footprint, which is fine, but it is what it is. Of course, even basic Raspberry Pis come with 2GB of memory nowadays, which could easily host multiple BEAMs, so “low memory” is pretty relative depending on where and what you’re deploying.

I just checked in on elixir-desktop and it seems like iOS support has appeared, so it may even be feasible to use LV for all-platform mobile apps. Generally my use case for this isn’t “the next Twitter” but “control plane for X service”, where it not looking 100% native isn’t a drama to me.

Those are both things I would like to explore this year. I hope to write a LV control interface for a kiln, run either directly via an R-Pi or via R-Pi → Arduino. I haven’t looked into Nerves yet either, which I would like to.

Thanks for coming to my TED talk.

Edit: Oh, I also use Livebook, perhaps against best practices, by booting it in a container attached to my app networks, then accessing it over an SSH tunnel and attaching it to the running node. This is great for introspection or data analysis, or one-off CRUD tweaks where a UI doesn’t totally expose something to the end user. When I close the Livebook, the tunnel and container go down and I fade away into the night. Another great tool.

38 Likes

I’d just write a tool that can generate and edit Phoenix code, and use it, e.g. phx add action --type=edit --entity=users and the like.

Fiddling with all these incantations and putting them in exactly the right file – and sometimes in the right order – is something I am gradually getting sick of, regardless of the framework used. It’s busywork that is best done by computers.
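Nothing like this exists as far as I know; a very rough sketch of where such a tool could start (the task name and options are just the example command from above, everything else is hypothetical):

defmodule Mix.Tasks.Phx.Add do
  use Mix.Task

  @shortdoc "Hypothetical: adds an action to an existing entity"
  def run(args) do
    {opts, [what | _], _} =
      OptionParser.parse(args, strict: [type: :string, entity: :string])

    # the real work would be parsing the router/context/templates and
    # injecting the right code in the right places, in the right order
    Mix.shell().info("would add a #{opts[:type]} #{what} for #{opts[:entity]}")
  end
end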

6 Likes

This is most likely a “how not to do it”, but since you ask:

I only use a very limited subset of Phoenix. All I do is add live("/", FooLive) to the router’s /-scope, and in FooLive I normally only use mount/3 and handle_event/3. render is done by a .heex template which does the “routing” in a case like:

<%= case @some_assign_that_tells_where_i_am do %>
  <% :foo -> %> <Components.foo p1=... />
  <% :bar -> %> ...
<% end %>

All real rendering is done in function components that also hold some presentation-specific helpers. This is finally a web framework where I understand what’s going on. Very powerful, too.
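Roughly, such a Components module looks like this (names and markup are just an example): plain function components with the presentation helpers sitting next to them.

defmodule Components do
  use Phoenix.Component

  def foo(assigns) do
    ~H"""
    <section>
      <h1><%= heading(@p1) %></h1>
    </section>
    """
  end

  def bar(assigns) do
    ~H"""
    <p>bar content</p>
    """
  end

  # presentation-specific helper
  defp heading(p1), do: String.capitalize(to_string(p1))
end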

2 Likes

I have been using a combination of JamStack and Phoenix for my startup https://indiepaper.me.

  • Frontend is NextJS app hosted on Vercel
  • Backend is Phoenix LiveView, hosted on Fly, on a subdomain
  • Any requests that the frontend cannot serve get proxied to the Phoenix app using NextJS rewrites. This means the static site is served from a CDN and is super fast.
  • Once the person lands on the Phoenix app, the rest of the navigation takes place through live_sessions via the WebSocket, without HTTP requests. This means we can serve the app close to users via Fly.io and with no interference from the Vercel proxy.

I use TDD to develop the LiveView app. Since almost all pages are Live, I can use the LiveView test helpers to write end-to-end tests, which used to take a lot of time with browser-based testing. Every LiveView request is inside a single live_session, with authentication done on the LiveView itself. That way there are no page reloads. I use TailwindCSS and AlpineJS for the frontend, including some really sophisticated editor pages.
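For anyone curious, the router side of that looks roughly like this (module names are placeholders, not the actual app code): one live_session wrapping the authenticated pages, with an on_mount hook doing the auth, so navigating between them stays on the existing socket.

scope "/", MyAppWeb do
  pipe_through :browser

  live_session :authenticated,
    on_mount: {MyAppWeb.UserAuth, :ensure_authenticated} do
    # navigation between these routes happens over the websocket,
    # with no full page reloads
    live "/dashboard", DashboardLive
    live "/articles/:id", ArticleLive
  end
end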

Overall the experience both in developing and for people using the app has been really positive.

17 Likes

This would make for an excellent blog post. :slight_smile:

6 Likes

Very interesting project.
I’d love to hear something about the publishing technologies you are using (AsciiDoc? CSS3 paged media? DocBook?)

2 Likes

I have a half done version sitting in my drafts folder, will finish it by tomorrow :smiley:

5 Likes

Post it on the forum! I’d love to check it out.

2 Likes

I use the TipTap editor as the online editor, which gives me a well-formatted JSON schema. The book is edited chapter-wise, so I have multiple schemas.

I have a custom parser that generates valid LaTeX code from the JSON (Pattern matching worked really well here). I substitute that in the order of the chapters into a template LaTeX file, and use it to typeset and generate a beautiful PDF book.
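Not my actual code, but the pattern-matching part looks something like this (node names follow the TipTap/ProseMirror JSON, everything else is simplified; real code would also escape LaTeX special characters):

defmodule Latex do
  # walk the TipTap JSON tree and emit a LaTeX fragment per node type
  def convert(%{"type" => "doc", "content" => nodes}),
    do: Enum.map_join(nodes, "\n\n", &convert/1)

  def convert(%{"type" => "heading", "content" => inline}),
    do: "\\section{" <> Enum.map_join(inline, "", &convert/1) <> "}"

  def convert(%{"type" => "paragraph", "content" => inline}),
    do: Enum.map_join(inline, "", &convert/1)

  def convert(%{"type" => "text", "marks" => [%{"type" => "bold"} | _], "text" => text}),
    do: "\\textbf{" <> text <> "}"

  def convert(%{"type" => "text", "text" => text}), do: text
end

# each chapter's JSON becomes a LaTeX fragment, which then gets substituted
# into the template .tex file in chapter order before typesetting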

5 Likes

I would love to know more about how you do this :slight_smile:

3 Likes

Finished up the blog post, Superfast and Practical Webapps using NextJS, Vercel, LiveView and Fly.io - aswinmohanme.

Do give your suggestions and feedback. :smiley:

8 Likes

Can you create a new thread for that?

1 Like

I don’t know whether I created it in the correct category, Superfast and Practical Webapps using NextJS, Vercel, LiveView and Fly.io

3 Likes

I tried live view for a couple of small apps and agree it is the best thing going for quickly and maintainably building real-time apps. However, in environments with a team that deploys multiple times a day, the users all get spinners all the time, unless you build out a bunch of local storage/rehydrate client code that is custom to your app. It is the only downside I found, but in continuous deployment environments it is pretty much a deal breaker. This problem may just not exist for you (not deploying that often, users not concerned with spinners, etc.) but I just thought I’d ask if you have any thoughts.

4 Likes

If I took down my app and relaunched it, and if I stared at the Phoenix dashboard, I got a spinner for ~10s. How often do you relaunch your app each day?

You will need to save some state, either on the client side or in a database, so that if the connection breaks (more often due to a bad network on the client side, at least in my experience) you can recover at least the state that is important.
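As a sketch of what I mean (the Drafts context and field names are invented): persist the work-in-progress on every change and load it back in mount/3, so a dropped socket only costs the user a reconnect.

defmodule MyAppWeb.DraftLive do
  use Phoenix.LiveView

  def mount(%{"id" => id}, _session, socket) do
    # reload whatever was last saved, fresh visit or reconnect alike
    {:ok, assign(socket, draft: Drafts.get_or_create!(id))}
  end

  def handle_event("change", %{"draft" => params}, socket) do
    # persist on every change so nothing important lives only in the socket
    {:ok, draft} = Drafts.save(socket.assigns.draft, params)
    {:noreply, assign(socket, draft: draft)}
  end

  def render(assigns) do
    ~H"""
    <form phx-change="change">
      <textarea name="draft[body]"><%= @draft.body %></textarea>
    </form>
    """
  end
end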

3 Likes

I build the webserver with Postgres, LiveView/Surface, hooks, Alpine and Tailwind and the desktop clients are built with wx. The clients use PubSub internally to update the UI etc and Channels is used for the client-server comms.
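The server side of the client-server comms is just an ordinary Channel, roughly like this (topic and payload names are made up): internal PubSub broadcasts get relayed out to whichever wx clients have joined.

defmodule MyAppWeb.DeviceChannel do
  use Phoenix.Channel

  def join("device:" <> device_id, _params, socket) do
    # mirror the internal PubSub topic out to this client
    Phoenix.PubSub.subscribe(MyApp.PubSub, "device:" <> device_id)
    {:ok, assign(socket, :device_id, device_id)}
  end

  def handle_info({:update, payload}, socket) do
    # payload is a map, pushed straight down to the wx client
    push(socket, "update", payload)
    {:noreply, socket}
  end
end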

It’s all hosted on customers’ own machines, currently only Windows Server. Not using IIS, and latency is not an issue on a LAN :+1:

I hope to replace as much of the Alpine as I can with JS commands and hooks as I move to 0.17.6.

5 Likes

in environments with a team that deploys multiple times a day, the users all get spinners all the time, unless you build out a bunch of local storage/rehydrate client code that is custom to your app.

Yeah, basically it’s not a problem for me because it’s mostly business-hours B2B services, so there are obvious deployment times, or critical fixes just come through as a stopwork order for the 30 seconds it takes for the container to restart. Critical fixes generally imply an unintentional stopwork anyway :slight_smile:

Even the stopworks are really just to stop someone trying to use a service and going “why no worky?” or if they happen to click “go” between one or two critical state transitions exactly as the deploy goes through. “Critical” in that context is still recoverable anyway so shrug. It hasn’t been worth the extra engineering time to “fix”.

I have been thinking about nicer ways to handle this in a more sustainable way as the client/service count expands. Honestly, for my stuff I can just pop up a “please wait for server update” message on phx:disconnect and then let the state rehydrate once reconnected. You need the rehydration in most cases anyway for network errors. Form/in-page rehydration isn’t fool-proof though, as someone might refresh in that state and hit an actual “no server up” error and lose the page. I would like to explore localStorage for this but haven’t yet put the time into thinking about it.
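For forms specifically, LiveView’s built-in form recovery covers a lot of this, which is roughly what I’d lean on (event, changeset and schema names here are made up, and this assumes the usual Phoenix.HTML imports inside a LiveView): on reconnect the client re-sends the last inputs, either to phx-change or to a dedicated phx-auto-recover event.

def render(assigns) do
  ~H"""
  <.form let={f} for={@changeset} phx-change="validate" phx-auto-recover="recover">
    <%= text_input f, :title %>
    <%= textarea f, :body %>
  </.form>
  """
end

def handle_event("recover", %{"post" => params}, socket) do
  # re-apply whatever the user had typed before the socket dropped
  {:noreply, assign(socket, changeset: Posts.change_post(socket.assigns.post, params))}
end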

2 Likes

Hey @aswinmohanme, thanks for your blog.
For my work, I also use a lot of LaTeX for document conversion into PDF through pandoc.
I am wondering, how do you parse the LaTeX code?
How do you interpolate LaTeX code with Elixir?

2 Likes