Code privacy: which data is sent over the wire with Phoenix LiveView

We are considering writing a very large-scale, public-facing web application using Phoenix LiveView. Given the internal customer for which the application is being developed, we know that the application will be used by millions of simultaneous users, and we expect huge spikes of growth (tens of thousands to hundreds of thousands of people at a time).

We initially intended to write the application as a Single Page Application using TypeScript, as we are used to. However, the issue is that this particular application has a number of features and sections that we do not want curious people to learn about by reading or reverse-engineering the client-side source code.

Clarifying our use case

This is not about secret algorithms (such things are safely running on the server). To make a long story short, the operational model of the company leaks into the domain model of the source code, and users and curious competitors must never know what’s going on behind the scenes. This is not adtech, so we’re not doing any spying or shady stuff; it is simply that the business processes and the business model - literally speaking - have a lot of value.

This is akin to having all your family members impressed by the fact that every time they come over for a barbecue, the time spent at your home is exceptionally enjoyable. However, they are not aware that the reason for this consistently exceptional experience is that you prepare their visit in a professional manner, using an operational model similar to that of a hotel resort. Among many other things, you do not want them to know that you refer to them as “clients” internally; to them this is just a family meal, and it has to remain that way.

My point is, there is a huge difference between the operational model perceived by users and the actual operational model of the company, and very often we simply do not want users to know that certain features and processes exist. Therefore, we cannot let funny JSON objects representing our domain model appear in the network tab of developer tools :rofl:.

Phoenix LiveView vs Blazor Server

Because we usually use .NET for our backends, we were attracted by Blazor Server. The benefits are: server-side rendering ensuring that the domain model never leaves our server and that only the DOM is sent to clients, a component-based model allowing us to build the application as a composition of various UI-components, and the ability to use our back-end code without needing to build an API layer first.

The only - but huge - downside I see is the scalability of Blazor Server. Apparently, 3.5 GB of memory is required to serve about 5,000 users and 14 GB to serve about 20,000 users. I find this one or even two orders of magnitude too costly, and it is already having an impact on the business decisions we make. For example, the product was initially supposed to have newspaper-like modules providing a modern and interactive user experience for a niche yet massive audience severely lacking this offering, but the traffic profile of such a module (hundreds of thousands to millions of daily users bringing no income to the business, since we are actually against ads) made it prohibitive at such costs, so we had to reduce our vision.

I recently came across Phoenix and LiveView and realized that this could actually be a great fit, considering that the whole stack was built for scalability, and I have seen inspiring Phoenix benchmarks. The developer productivity story is great, and the .NET backend would not limit us in terms of scaling, since we use a middleware based on a Virtual Actor Framework. The benefits would be:

  • Much better scalability, Phoenix would allow us to accommodate any load in a cost-efficient manner.
  • WAY better handling of disconnects and application updates (the client can reconnect to any server, the client-side viewmodel is sent again upon reconnect, etc.).
  • Generally speaking, in addition to the whole stack looking like it has been built for this kind of use from day 1, Phoenix seems more mature, robust, and easier to reason about. While I love .NET for the backend, Microsoft’s ASP.NET offerings tend to be over-complicated and carry too much baggage for compatibility and historical reasons, so I’d welcome not having to use them for the front-end. (I do not, however, feel convinced by the idea of switching to Elixir on the backend: on the one hand, I feel OOP is better suited to modeling evolving business domain models, and at the same time, using a dynamic language on the backend is a big NO for me - but dynamic languages are fine on the front-end and even tend to simplify things.)

The downsides I can see for switching to Phoenix LiveView are:

  • having to learn a new programming language. However, I believe it is important that we be able to use the right tool for the right job, especially when there is such a great alignment of stars. Beyond the efficient runtime of Elixir and Phoenix, built on the actor model and the BEAM, I am under the impression that UI building and diff-based re-rendering are a natural fit for functional programming.
  • the need to write an API layer to bridge the gap between .NET and Elixir. But we may be able to reduce the pain by using something like GraphQL. I am also pretty sure that we will need to develop specialized alternative clients anyway (desktop, mobile apps, etc.), so having the API early may be a good idea to avoid developing bad habits that would make things more difficult for us down the road.
  • and the one I am here to discuss: domain model leaks. In this YouTube video, the presenter shows that some data loaded by the LiveView component can be seen using the network tab of the developer tools. This one could be the deal breaker for us, so I’d like to understand why JSON is sent over the wire if the DOM is rendered server-side. And generally speaking, to what extent does this happen? I would expect that only HTML or DOM nodes would be sent over the wire.

Besides this, if anyone has any top-of-the-head recommendation on how we could reduce the pain of developing an API layer for consumption in Phoenix, that would be great. I looked into using gRPC and GraphQL, and it looks like Elixir’s GraphQL library is the more mature of the two. Ideally, for the selected solution, Elixir should be able to automatically generate client objects to consume the API (so that we only need to do this work once, on the backend).

Thank you for your help!

1 Like

For some background, I will start by saying that I work as a Developer Advocate for API security, and unfortunately I have to tell you that anything you release that runs on the client side MUST be considered already reverse-engineered, because it is in the public domain, and at some point in time someone will do it.

On the web, all it takes to see what is going on is to hit F12 and poke around the dev tools, and this is true for whatever web app you use, be it LiveView or a React one, and whether the payload is JSON, XML, HTML, etc.

Even with mobile apps, it is easy to reverse-engineer your API, understand how the app communicates with the API server, and see what data is sent back and forth.

Just to be crystal clear: this is no different from opening the dev tools and seeing what HTML/JavaScript your current .NET app is sending to the client.

What is sent by LiveView is controlled by you on the backend; it only leaks your data if you put it in the LiveView assigns.

1 Like

Phoenix is an MVC framework. Blazor, from what you say, seems to be one as well. This means that pages and content updates are indeed server-rendered. However, when a person clicks a button and an event is sent to the server for processing, or when the server sends an update event to the page’s JavaScript (only if you code your app that way), then obviously that is not HTML at all and must be sent in some other format. That will ultimately be visible in the dev tools, yes. The good news is that this is all in your control. If you want 100% opaque data, then you just drop LiveView and stick to the classical controller-template approach, which is SSR only.

A reasonable compromise is to not have any JS and rely on LiveView completely. This way, JSON events may go to the server, but these rarely contain valuable info. HTML will come back already rendered. This is what LiveView is all about.
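To make the mechanics concrete, here is a framework-agnostic sketch of the idea behind diff-based server rendering. This is not LiveView’s actual implementation (all names and the payload shape are made up for illustration): the template’s static HTML is sent once, and afterwards only the dynamic slots whose values changed cross the wire, already rendered as strings.

```python
# Sketch of diff-based server rendering, loosely modeled on the idea
# behind LiveView's static/dynamic split. All names are illustrative.

# A template is a list of static HTML fragments with dynamic slots between them.
STATICS = ["<p>Results for ", ": ", " matches</p>"]

def dynamics(assigns):
    # Slot values, computed from server-side state only.
    return [assigns["query"], str(assigns["count"])]

def first_render(assigns):
    # Initial payload: statics plus all dynamics (sent once per mount).
    return {"s": STATICS, "d": dynamics(assigns)}

def diff(old_assigns, new_assigns):
    # Subsequent payloads: only the slots whose rendered value changed.
    old, new = dynamics(old_assigns), dynamics(new_assigns)
    return {str(i): v for i, (o, v) in enumerate(zip(old, new)) if o != v}

def render(statics, dyn):
    # The client just zips statics and dynamics back into HTML.
    out = statics[0]
    for d, s in zip(dyn, statics[1:]):
        out += d + s
    return out
```

The point of the sketch: only rendered strings ever cross the wire; the domain objects behind the assigns never leave the server.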

Overall, I do suggest taking a week or so to get into it, build one use case you have, and just see what data is visible. You could also commission this example from someone. Or, third option, just hop on a call with someone on this forum, give more specific details about your use case, and get instant feedback on whether it’s a fit (I don’t mind having a chat myself).

Performance-wise - yeah, I have only heard bad things about .NET. I would have thought Blazor was fast, but I guess not.

In terms of cost - have you calculated how much a month of developers learning Elixir will cost vs. the .NET server costs? I love Elixir and decided to specialize in it, but in reality you develop in the language the devs are most comfortable with, UNLESS you have the time and resources to learn another approach. It takes time to get used to, especially if you’re deep into OOP, which I personally never really liked.

Whether a business case calls for an OOP or a functional approach has, I think, more to do with how you’re used to thinking than with whether the business use case fits one or the other. Although I must admit I find it hard to imagine building games in a functional language. Doable, fun, but OOP does have its perks.

1 Like

I’m looking at the dev tools out of curiosity. I made a very simple dev tool for Elixir recently; it just performs a search on the server.

So when the user types something, this is sent to the server:


The only valuable thing here is what the user typed (“redu” for “reduce”).

And the server responded with:

This is just the HTML diff. All the info here is to be displayed to the user anyway, so nothing private leaked. The client doesn’t know where I got this data from or how I manipulated it; all the code is on the server.
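For a rough idea of what such frames look like, a LiveView event and its reply are Phoenix Channels messages of the general shape below. The topics, refs, payload fields, and diff keys here are purely illustrative - the exact wire format is an internal detail of phoenix_live_view and varies between versions:

```json
["4", "12", "lv:phx-Fxyz", "event",
  {"type": "keyup", "event": "search", "value": {"q": "redu"}}]
```

```json
["4", "12", "lv:phx-Fxyz", "phx_reply",
  {"status": "ok", "response": {"diff": {"1": "<li>Enum.reduce/3</li>"}}}]
```

Note that the reply carries rendered HTML fragments keyed by slot, not domain objects.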

You can fiddle with the app yourself:


Thank you so much for the insights in both of your messages; this perfectly answers my questions and the concerns I had.

All valid concerns. However, are the two contradicting each other? Or do you mean to secure the LiveView middleware and the .NET API backend with a private network? Then how do you make alternative clients?

Thank you. Initially, I also considered that .NET server costs weren’t that meaningful from a business perspective and intended to just throw more RAM at the problem and scale up; however, they do add up and have already started to influence planned features in a negative manner.

Especially when added to the fact that there is server affinity (users must always reconnect to the same server to avoid losing state), and that, generally speaking, the experience when the application is down (for example, to deploy updates) is far from ideal (the whole UI is blocked, state is lost, etc.). There are meaningful implications from a devops perspective (more complex infrastructure and deployments). The costs are two orders of magnitude too high.

And the uncertainty of not knowing how and when things will break is not reassuring either. Debugging SignalR (.NET’s wrapper around WebSockets) is not fun at all, so I can’t imagine what it is like when it is bundled in the midst of an ASP.NET processing pipeline (as is the case with Blazor) and you are under pressure because the problem is killing your growth. Blazor Server is basically a black box. Elixir/Phoenix is much more reassuring in this regard.

Anyway, I feel much more confident about going ahead with LiveView thanks to these insights. This was the only showstopper I could see. Thank you a lot!

Indeed, this sounds contradictory; thanks for pointing it out. To clarify: this middleware API will not be accessible from the internet; it will only be accessible by web apps behind our cloud firewall (private network). Then, if some external clients need access to the API as well at some point, we will only expose a very limited subset of endpoints (but external clients will most likely use a WebView).

Thanks for dropping by! I agree if a REST-like API is exposed, but this is precisely what technologies like Blazor Server (and apparently LiveView) allow us to avoid: an API is never exposed, and the client can only see the final HTML, which leaks much less information. I think what you describe would indeed apply to React, Angular, Blazor WebAssembly, and other clients connecting to a remote API; however, it does not apply to technologies directly serving HTML to the client, which is the reason (actually one of the reasons) we are following this approach.

If the connection is secure and private, then I’d skip GraphQL and use a hand-rolled binary protocol in the Erlang External Term Format. It should not be hard to implement a sane subset of it in .NET; maybe something like that already exists.
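To give a feel for how small such a subset can be, here is a toy sketch (in Python, for illustration only - a real .NET implementation would follow the same tag layout) covering integers, UTF-8 binaries, and proper lists, using the tag values documented in the Erlang distribution protocol. It skips atoms, maps, floats, and the optimizations `term_to_binary/1` applies (such as STRING_EXT for lists of small integers), but the terms it emits should decode with `binary_to_term/1`.

```python
import struct

# Minimal subset of the Erlang External Term Format (ETF).
# Tag values per the Erlang distribution protocol documentation.
VERSION, SMALL_INT, INT, BINARY, LIST, NIL = 131, 97, 98, 109, 108, 106

def encode(term):
    # Every external term starts with the version byte 131.
    return bytes([VERSION]) + _enc(term)

def _enc(t):
    if isinstance(t, bool):
        raise TypeError("atoms (incl. booleans) not in this subset")
    if isinstance(t, int):
        if 0 <= t <= 255:
            return bytes([SMALL_INT, t])          # SMALL_INTEGER_EXT
        return bytes([INT]) + struct.pack(">i", t)  # INTEGER_EXT, big-endian
    if isinstance(t, str):
        data = t.encode("utf-8")                   # BINARY_EXT
        return bytes([BINARY]) + struct.pack(">I", len(data)) + data
    if isinstance(t, list):
        if not t:
            return bytes([NIL])                    # NIL_EXT for []
        body = b"".join(_enc(x) for x in t)        # LIST_EXT + NIL tail
        return bytes([LIST]) + struct.pack(">I", len(t)) + body + bytes([NIL])
    raise TypeError(f"unsupported type: {type(t)}")

def decode(buf):
    assert buf[0] == VERSION
    term, _rest = _dec(buf[1:])
    return term

def _dec(b):
    tag = b[0]
    if tag == SMALL_INT:
        return b[1], b[2:]
    if tag == INT:
        return struct.unpack(">i", b[1:5])[0], b[5:]
    if tag == BINARY:
        n = struct.unpack(">I", b[1:5])[0]
        return b[5:5 + n].decode("utf-8"), b[5 + n:]
    if tag == NIL:
        return [], b[1:]
    if tag == LIST:
        n = struct.unpack(">I", b[1:5])[0]
        items, rest = [], b[5:]
        for _ in range(n):
            item, rest = _dec(rest)
            items.append(item)
        assert rest[0] == NIL  # proper list tail
        return items, rest[1:]
    raise ValueError(f"unsupported tag: {tag}")
```

For example, `encode(42)` yields the same three bytes `<<131, 97, 42>>` that Erlang’s `term_to_binary(42)` produces.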


Thanks a lot, I’ll definitely look into it!
Edit: Indeed, it seems to exist.

1 Like

No matter in which format you send the response to the client, you only put there the data you want to put. So, if you are sending HTML, I can still get the data I want out of it, just like I would get it from JSON or XML.

Now, it is true that a response in JSON format is easier to parse than one in HTML.

Also, I agree that more often than not a REST API exposes more data than it should, but that is a design problem. What I mean is that you can have a REST API returning exactly the same data as the web server returns in the HTML.

Oh, I see, now I understand your point! The problem I see with this approach is that you end up having to develop one API endpoint per view/page in your application, which would turn out to be a productivity killer.

Namely, if two pages targeting two different audiences need to show different parts of an entity, instead of just having two different views, you now also have to build two different endpoints, each returning exactly the data its view will consume.

Maybe GraphQL allows one to have a single endpoint and set up fine-grained authorization for who can access which parts of the data? I do not know enough about GraphQL to answer this question, but even if this is the case, the other issue with client-side applications (in our situation) is that the view templates are typically all sent to the client on startup as static assets (which are quite easy to inspect).

Setting up a lazy-loading strategy with fetch authorization is non-trivial; we were going that way initially before settling on server-side rendering, but it is a lot of trouble in practice. Beyond the one-time setup (which is non-trivial), there is also the perpetual question during development of how to split templates based on the authorized audience. This can be done for sure, but it is a lot of trouble and, more importantly, detrimental to productivity (and to user experience to some extent - things like lazy-loading latency, etc.). None of this is an issue with server-side technologies.

Oh, by no means did I want to make you give up on using LiveView. I was just trying to show you that the data sent back and forth can always be extracted on the client side.

I also prefer to use a single point of entry to my backend with LiveView instead of using an API with several endpoints. As a security advocate, I was sold on using LiveView when I realised that I only needed to expose one public entrypoint for my app to retrieve and post data.

So I completely agree with you: with LiveView you avoid the productivity killer of keeping an API clean of exposing more data than it should.

GraphQL allows for only one entrypoint, and then you query only the data that you need, which can span several API endpoints. There are pretty much no limits, provided you code the resolvers in the backend to retrieve the data asked for in the GraphQL query.
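As an illustration of that single entrypoint (with a hypothetical schema - field names here are invented), the two audience-specific pages discussed earlier could each send their own selection to the same endpoint, and the resolvers plus the authorization layer decide which fields each caller may see:

```graphql
# Page A: public listing - selects only public fields
query {
  product(id: "42") {
    name
    price
  }
}

# Page B: internal dashboard - same entrypoint, different selection
query {
  product(id: "42") {
    name
    supplierCost
    marginPercent
  }
}
```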

The problem with GraphQL from a security operations (DevSecOps) perspective is that it becomes a nightmare to protect from abuse. A REST API with an API spec (Swagger, RAML) will allow security tools to introspect each request and deny it when it doesn’t comply with the spec.

1 Like