Replacing HTTP with WebSockets (or another persistent transport)

A lot of LiveView’s advantages stem from its persistent connection model.

  1. No more manually handling internal HTTP requests (vs REST)
  2. Server is the source of truth: no more manually managing client state (vs REST/GraphQL)
  3. No more authenticating every request, i.e. no DB round trip per request (vs a stateless server)

Obviously the “fragment-level real-time HTML rerendering” (can’t find the appropriate term) is the main innovation here and probably LiveView’s biggest selling point, but it isn’t feasible on mobile today. However, I wonder whether LiveView’s persistent connection model is. If we set the UI aside, the not-so-LiveView-specific concept of a persistent connection handling all requests replaces the conventional stateless request/response model, and can potentially offer all of the pros enumerated above.

So I was wondering: what are the cons of this approach, specifically on mobile?

So far I have come up with the following cons:

  1. Lack of a standard protocol on top of the transport (WebSockets have numerous subprotocols with no clear winner/best practice/idiomatic choice) vs HTTP
  2. Increased server load from maintaining persistent connections/sessions vs a stateless API server
  3. Spotty networks on mobile


Is the question too general/not Elixir-specific enough (i.e. is this the wrong forum to be asking on)?

Too generic. It is better to ask something that can be answered in one or two non-opinionated paragraphs. You could ask something like “Have you done something with LiveView mainly for mobile applications? Does a spotty network connection pose a serious UX problem for you?” I cannot answer those questions either, but I bet there are people who can.

I haven’t seen enough data to support WebSockets being a poor fit for mobile, at least in the context of day-to-day average use. It’s true that spotty connections will suffer, but when simulating 30% packet loss I don’t see WebSocket issues, so it would be better to have real data to discuss, and to know what kinds of edge users you need to support. In any case, you can use LiveView with the longpoll channel transport to go over HTTP if needed:

```javascript
import {Socket, LongPoll} from "phoenix"
import {LiveSocket} from "./phoenix_live_view"

let liveSocket = new LiveSocket("/live", Socket, {transport: LongPoll, ...})
```

Do you actually recommend long polling for spotty networks? I was under the impression that long polling is the worst of both worlds (HTTP and WebSocket) and is only good for compatibility with older browsers and reverse proxies.

I think the problem is not so much spotty networks as networks changing on the fly. On mobile that can happen constantly: WiFi to cellular and back, cellular to cellular at roaming borders (while this may be less of a problem in the US, it can be quite painful in the EU, for example). In such situations we are back to @WestKeys’ point about managing client state. And managing that state isn’t much of a problem; in the end, in most cases it is one simple token that needs to be stored client-side.

Of course WS over QUIC could solve some of those problems, but AFAIK there is no such implementation yet. Additionally, HTTP/2 and HTTP over QUIC (also known as HTTP/3) already address most of HTTP’s pain points.

About original points:

You just need to handle internal requests manually (even more manually) in whatever form you want over WS. So instead of using a stable and tested technology, you are trying to rebuild almost the same thing from the ground up on a different transport?
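To make the “even more manually” concrete, here is a hedged sketch (illustrative names, fake in-process server) of the minimal request/response plumbing you would have to hand-roll on top of a message-based transport: correlation ids matching replies back to pending requests, which plain HTTP gives you for free.

```javascript
// Hand-rolled request/response over a message-based transport:
// every request carries a correlation id, and incoming replies are
// matched back to the promise that is waiting for them.

let nextId = 0;
const pending = new Map();

// Client side: frame the request with a fresh id and remember it.
function request(send, method, params) {
  const id = ++nextId;
  const promise = new Promise((resolve) => pending.set(id, resolve));
  send({ id, method, params });
  return promise;
}

// Client side: route an incoming reply to whoever asked for it.
function handleReply({ id, result }) {
  const resolve = pending.get(id);
  pending.delete(id);
  resolve(result);
}

// Fake in-process "server": replies immediately, echoing the method name.
const fakeSend = (frame) => handleReply({ id: frame.id, result: frame.method });

request(fakeSend, "get_user", { id: 7 }).then((r) => console.log(r)); // prints "get_user"
```

And this sketch still ignores timeouts, errors, backpressure, and tracing metadata, exactly the kind of thing the later replies point out you would keep bolting on.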

As I said earlier, you still need to manage client state in case of reconnections due to a network change (the IP will change, TCP multihoming isn’t that popular, and IPv6 isn’t deployed in numbers big enough to use that feature).

That is not true. Imagine a situation where you have some kind of access control, for example RBAC:

  1. The user creates a connection with access to a limited resource. You save that information in the connection state.
  2. An admin revokes the user’s access to the given resource.
  3. The user can still access the resource, because the connection state holds a stale copy of the permissions (a write-after-read hazard).

So you still need to check every time whether the user can access the given resources. Of course, you can try to listen for events happening in the application and change the internal state of the connection to match, but that makes the flow much harder and much more prone to omissions. So in general you have replaced a small inconvenience with a possible bottleneck, for a solution that requires a much more complex implementation, much more testing, and many more places where it can fail. Doesn’t seem like a good approach to me.
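The staleness problem in that list can be shown with a small simulation (hypothetical names; in a real system the permission store would be a database): the connection snapshots the permission set at connect time, the revocation lands afterwards, and only a per-request check against the live store sees it.

```javascript
// Live permission store (would be a DB in a real system).
const permissions = new Map([["alice", new Set(["reports"])]]);

// Connect-time snapshot: permissions copied into connection state.
function connect(user) {
  return { user, cachedPerms: new Set(permissions.get(user)) };
}

const conn = connect("alice");

// Step 2: an admin revokes access AFTER the connection is established.
permissions.get("alice").delete("reports");

// Step 3 (the bug): the cached copy in connection state still grants access.
const staleAllowed = conn.cachedPerms.has("reports");          // true

// The fix: consult the live store on every request.
const liveAllowed = permissions.get(conn.user).has("reports"); // false
```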


Thanks for the responses

Amongst others, people in places where you need to e.g. extend your arm, go out on the balcony, or go for a walk to get coverage. @hauleth brought up a valid angle, but similarly I don’t have stats on “mobile network switching over time” either.

I understand this is not an empirical discussion without data, but I’m assuming there must be a networking cost to keeping a socket permanently open? Which is not a dealbreaker in and of itself; for affected users, you can just fall back to xyz like you mentioned.

What I’m really trying to understand is why this approach hasn’t been applied (or has it?).

I mean, if you’re referring to REST, you could technically run REST over WebSockets (which sounds dumb), or even GraphQL over WebSockets. Or just use, or start from, one of the many existing WebSocket subprotocols such as WAMP, XMPP, or MQTT.

Could you elaborate on this with a proper scenario please?

I purposefully specified authenticated. There are plenty of cases where data is publicly accessible to all authenticated users.

Longpolling is only HTTP: we run a stateful process on the server that the client polls for bidirectional updates. I would only use it if you had a hard requirement for it, either for lack of WS support or for edge clients where WebSockets are a known issue. So to be clear, anywhere HTTP is a good fit should work just fine with channels + the long poll transport, because it’s purely HTTP.


Which makes the whole journey mostly pointless, as soon enough you will probably need to find a way to pass information like tracing data and other metadata, so in the end you gained nothing except more work. A pinnacle example of modern engineering :wink:

IPv6 and SCTP (and to some extent TCP) allow for multihoming, which is “transparent” (well, handled at L2/L3) communication with devices that have different addresses. It allows the server to know that the device with address X and the device with address Y are the same device, so it can send packets to either of them. This is useful for exactly what we need: handling connections across different networks. QUIC mitigates the problem with session tokens that are used for resuming connections. This approach is useful, as it does not require routers to be aware of new protocols (like SCTP), because QUIC is built on top of UDP, at the expense of a more convoluted implementation on the hosts.

However, as I said before, most of what we are discussing here is already mitigated by Keep-Alive in HTTP/1.1(!!!). It allows us to keep a few connections open and send data over them instead of creating a new connection for each request. HTTP/2 brought another feature (one you would need to build on your own if you wanted to serve content via WebSockets only): interleaving different requests inside one connection. To illustrate the difference:

  • In HTTP/1.1 with keep-alive, when you send a request over the open connection, you block the whole connection until you have received the whole response. This makes the implementation on both sides very easy and straightforward, but a request for a huge block of data blocks that pipe for a long time. After the response has been sent and received, the pipe is reused (if there was a Keep-Alive header) for the next request.
  • In HTTP/2 you can send 2 (or more) requests over the same connection without waiting for a response. Each response then comes back with an identifier for the request it was meant for, so you can fetch a huge resource and smaller ones at the same time, over the same connection. Their data will just be interleaved, so the client needs to separate the streams on its own. This makes connection utilisation much better, at the expense of more complicated request handling.
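The HTTP/2-style interleaving described above can be sketched as a toy demultiplexer (purely illustrative, not real HTTP/2 framing): each response chunk carries a stream identifier, and the receiver reassembles the per-stream payloads itself.

```javascript
// Frames for two responses arriving interleaved on one connection.
// The small response on stream 3 completes while the big one on
// stream 1 is still in flight, so it is not blocked behind it.
const frames = [
  { stream: 1, data: "big-", end: false },
  { stream: 3, data: "ok",   end: true  },
  { stream: 1, data: "file", end: true  },
];

// Reassemble per-stream payloads from the interleaved frame sequence.
function demux(frames) {
  const buffers = new Map();
  const done = {};
  for (const { stream, data, end } of frames) {
    buffers.set(stream, (buffers.get(stream) || "") + data);
    if (end) done[stream] = buffers.get(stream);
  }
  return done;
}

const responses = demux(frames);
// responses is { 1: "big-file", 3: "ok" }
```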

So with your WebSocket-as-a-transport you would need to solve all of these problems on your own, from the ground up. In the end you would probably land on an HTTP/1.1-like solution that is far less battle-tested and requires a custom implementation for each client.

Authentication without authorisation (even a very basic one) is just pointless. So I would say that in 99.9% of cases you will have some kind of authorisation as well (for example, a user account can be suspended due to illicit activity or overuse of the service). And “only authenticated users can access” is already a super-basic form of authorisation.


We’re having a discussion, what’s with the sarcasm my man?

This reads mostly like your personal opinion.

GraphQL over WebSockets is currently being standardized by the GraphQL working group.

Meteor as a framework also runs virtually all requests through WebSockets.

LiveView itself pretty much implements this approach.

Please explain why people with lives would be working on this if “in the end you gained nothing except more work”?

Surely the ~500 contributors to Meteor, all its users and investors aren’t all clueless?

Surely these aren’t the only lot investigating this path.

Again, this reads like your personal opinion, or Kool-Aid. My previous point about data that is publicly accessible to authenticated users stands, and as a matter of fact I will add that there is a whole other class of cases where authorization only needs to happen on connect.

Just update connection state? Where’s the issue here?

I don’t think it was personal :slight_smile:

GraphQL is transport-agnostic; it works over WebSockets too.

I see channels as extended controllers, and LiveView as extended channels, but in the end the work is mostly done in contexts, or the functional core. You should be able to switch easily from one to the other.

In a request/response cycle I don’t see a WebSocket advantage. But I see many when the server interacts with the client.


Oh :sweat_smile: my bad then @hauleth, I misread that, sorry!

Another obvious point not discussed is the potential cognitive savings and overall simplification of using a single transport. I think most apps grow beyond the request/response cycle and require some realtime functionality at some point. When that point is reached, you’re managing both HTTP and WebSockets.

Since WebSockets obviously also support request/response, you now have two places where you can put your request handlers. Why would you pick a controller over a channel if you already have a channel set up?

I would typically use a controller for login, register, or logout. Otherwise it’s fine with WebSocket.
But as mentioned, I would not like to be tied to a transport layer.



Why would you be tied to a transport layer?

I want to be able to use LiveView, or channels, or normal controllers, or all of them. I don’t want to choose upfront.

Yes, just to clarify: I am certainly not suggesting getting rid of HTTP system-wide. I’m exploring the idea that for client-server business-domain interaction, one could choose upfront to route it all through WebSockets (which does not imply banning LiveView or regular controllers from the app per se).

Tangentially (world catching up to LiveView)

Still curious how this will all pan out for mobile, especially considering the whole SPA bonanza was itself motivated by mobile-first-induced client-server decoupling, i.e. “OK, we have a well-documented backend API for our iOS and Android apps, great, let’s reuse that”…

I haven’t seen enough data to support websockets being a poor fit for mobile, at least in the context of day-to-day average use.

An Energy Efficiency Study of Web-Based Communication in Android Phones

There were numerous complaints about battery drain in some Android apps that tried to implement chat using WebSockets. It mostly had to do with the antenna maintaining a persistent connection. The apps that used MQTT performed better, and those using push notifications performed better still. I don’t have the numbers or the implementation details, though.

For long polling, you still have a mostly open HTTP connection maintaining short connections on the server, so in practice it seems very similar to WebSockets, unless you are specifically tuning a mobile client to poll for updates less frequently and to not hold a poll open when no updates are present. Even then, the antenna is active every several seconds, so it would be interesting to see a real breakdown of the approaches.