The future of the web - what does it look like to you?



Or what do you hope it will look like?

Elixir Forum 2019 Update!

I really believe in the document nature of the web, and I hate the clientside bloated mess that’s so popular today. There’s a simplicity to displaying a document, both technically and in design, and I pray for a future that respects this.


As I wrote already, here is how I imagine future advanced applications:

my_app # supervises other apps
  > apps
    > scenic # app which would be compiled to standalone GUI app
      > apps
        > offline
        > online
        > render
    > server # Phoenix WebSocket-based server
    > shared # for scenic and web apps
    > web # app which would be converted to WebAssembly app
      > apps
        > offline
        > online
        > render

I believe that in the future I will be able to deploy the Phoenix WebSocket-based server from the server app, compile the scenic app to a native GUI and upload it, and compile the web app to WebAssembly ready to be fetched from the server app, all in one simple command. The shared app would therefore store all functions used in the scenic and web apps, as well as unique document and partial structs (something like MyApp.Web.build_html5_document(unique_layout_definition)).
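The layout above maps naturally onto a nested Mix umbrella. A minimal sketch of the root mix.exs is below — the app names come from the tree, but the module name and everything else is an assumption, not a real project:

```elixir
# my_app/mix.exs — hypothetical umbrella root for the layout above.
# The scenic, server, shared, and web apps would each live under apps/,
# with scenic and web themselves being nested umbrellas (offline/online/render).
defmodule MyApp.Umbrella.MixProject do
  use Mix.Project

  def project do
    [
      apps_path: "apps",                      # where the child apps live
      start_permanent: Mix.env() == :prod,
      deps: []                                # umbrella-wide deps only
    ]
  end
end
```

The "one simple command" would then presumably be a Mix alias or release task that builds each child with its own target (native GUI, BEAM release, WebAssembly bundle).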


I would like a future where we consider web application programming as a single system that incorporates client and server.

For example stuff like:

  1. A single mix project that when built would spit out a JS bundle and a webserver
  2. Considering client and server as part of one cluster, so I can message a client like any other process. Modelling the client as just one more process makes a lot of sense to me.
  3. Using property testing to check message ordering and discover any race conditions in the combined system.
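Point 2 can be sketched in a few lines of Elixir: if the client really is "just one more process", the server talks to it with ordinary message passing. This is purely illustrative — the module and message names are made up:

```elixir
# Hypothetical sketch: the client as one more BEAM process in the cluster.
defmodule MyApp.Client do
  # Start a client process that renders whatever view the server pushes.
  def start(server_pid) do
    spawn(fn ->
      send(server_pid, {:join, self()})
      loop()
    end)
  end

  defp loop do
    receive do
      {:render, view} ->
        # In the imagined system this would update the browser DOM or a
        # Scenic scene; here we just print.
        IO.puts("rendering: #{view}")
        loop()

      :stop ->
        :ok
    end
  end
end

# A server process could then address a client like any other pid:
#   send(client_pid, {:render, "dashboard"})
```

Property testing the combined system (point 3) would then amount to generating random interleavings of these messages and asserting the rendered state is the same regardless of ordering.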

#5 :wink:

With the BEAM and Elixir compiled to WebAssembly, this maybe becomes possible…


Got this on my shelf: Dynamic HTML: The Definitive Reference

Release Date: July 1998

… so the past hasn’t even respected it.

I hate the clientside bloated mess

Yep, something's gotta give…


Thin clients - I hope we move towards bringing as much work to the server as possible so our clients are just rendering views; focusing on using as little battery/energy as possible so that devices can become smaller and more practical. Maybe in the future we’ll be able to have smartphones that can actually hold a charge for more than a day!



Does that mean end-users would only possess dumb terminals to access all of their data and applications remotely? No more by-default local storage and applications? If so, yours is a strong, dystopian vision. Why wish so much pain to all of us?


My goal doesn’t prevent local storage, it simply wouldn’t use it as often, because you wouldn’t be deploying as much client-executed logic.

I didn’t realize web development today was so much more of a utopia than the 90’s and early 00’s :joy: - I honestly have no idea what point you’re trying to make with this comment.


OK, then all good I guess :slight_smile:

Well, dumb-terminal-only edge computing is the dream of every corporation that wants to mine and sell user data. There are obviously very dark paths ahead of us regarding data privacy if those corporations have it their way, and one cornerstone of the worst-case scenario is there being no local computing resources on the end-user side apart from a dumb remote view layer.

Edit: Not sure I clarified it for you, you realise that a dystopia is the polar opposite of a utopia?


I won’t pretend to understand the way humans use tech for good and bad, but making a distributed application easier to deploy is something I want to see. When clients are only rendering views, they aren’t executing ad-hoc application logic, which makes it easier to create a common protocol for speaking to many types of clients in a uniform way.

Yes, I realize that - maybe you’re misunderstanding my comment. It was intended to point out that we have much heavier clients today than in the 90’s and early 00’s - as such we have historical precedent to suggest that going back to thin clients wouldn’t create a dystopia.


OK. I guess we have a misunderstanding on the meaning of thin client. This random definition found online matches the understanding I have and used in the context of my above comments:

A thin client is a stateless, fanless desktop terminal that has no hard drive. All features typically found on the desktop PC, including applications, sensitive data, memory, etc., are stored back in the data center when using a thin client.


I treat thin and heavy as relative definitions to each other. There’s no such thing as a stateless client (technically speaking) because just presenting some pixels on a screen represents a state of the pixels. Maybe I’m completely butchering a relatively ubiquitous definition of thin client though. My apologies for creating confusion if that’s the case.


I heartily agree. The web was great when it was “web pages”. “Web applications”, while fixing a ton of problems with other application deployment/management models, have created a whole host of new issues.

As an old-school desktop application programmer, I hate seeing problems that were solved 20 years ago get reintroduced in web applications. Trying to shoehorn full blown applications into a document model has been awful.

That said, we’ll never beat the deployment model of the web. Being able to “install” an application just by visiting a URL has simplified life for so many users.

I’m hopeful that a future web will look something more like:
1 - Web pages are just documents with markup, styling, resources. Little-to-no “application”-y bits.
2 - Web applications are distinct creatures. Obviously they’re hosted and run by the web browser, but no markup/HTML at all. Canvas + WebAssembly can kind of achieve this today, but it’s not great. Not sure what the path forward for this looks like though. The major browser vendors all make so much money on advertising and tracking, it will be hard to get them to support some new model that makes it harder for them to track users and use/sell that data for advertising.


It might be just me not being a native speaker, but I would pair thin with thick and light with heavy as far as antonyms are concerned.

But the important part is that you confirmed the misunderstanding, makes more sense now.

This is not what the definition refers to when using the word stateless in that context.
It refers to the fact that no state is persisted across reboots/sessions of a thin client.

I don’t know, after 20+ years in IT, what thin client evokes for me is in line with the link I posted and Wikipedia’s definitions - it’s a whole industry in itself. Maybe other people/circles are overloading that term; that sometimes happens, though I’ve never seen it in the wild myself.


The future of the web - what does it look like to you?

The web-app: Phoenix LiveView

The user-facing API: gRPC or GraphQL

The back-end architecture: CQRS, EventSourcing, Distributed Event Logs

The back-end tooling: NATS, Liftbridge, gRPC, GraphQL, Vault, Kubernetes

I guess that’s the present! Trying to get caught up here.


I see the future of the web going to WebAssembly - maybe even the DOM itself becoming a ‘side-thought’, with programs handling their own interfaces again but the DOM still being used for ‘smaller’ things, along with OSes running WebAssembly as low-level, safe, and fast as possible. I’m unsure how far away that is, but I entirely expect it.

At least in the IT world, a thin client is a minimal no-storage system that connects to some remote server for all work, including display. They usually have nothing but a display, input, and a BOOTP network interface. We use many of them here where a BOOTP server sends over a minimal linux+encrypted-vnc client that auto-connects to an auto-generated session on a server to be used as student kiosks.


Manageable by a single person.


Flutter is an interesting case study of ripping out just about everything (including DOM/browser concerns) and working directly with the GPU. I get the feeling WebAssembly is moving us in a similar direction. We should get multi-threading, and the WebAssembly memory limit raised to 4 GB, in the next year or so, which will allow frameworks like Ember and React to further speed up their internals a fair bit (they’re already doing this). Once we get host bindings, solutions like Blazor will probably start to get very compelling.


Any webpage that eats even 256 MB in a WebAssembly module is almost certainly doing something wrong, unless it’s something like a game or a video editing program. The default memory limit should be 256 MB, and the user should be required to allow any larger memory usage via a very stern warning (one that can be remembered so it’s not shown again for a specific page).