The future of the web - what does it look like to you?

How about a slightly different direction? The web without servers? I remember Joe saying in one of his talks that we now carry more powerful machines in our pockets than the servers they had when they started out with Erlang. So do we really need servers at all? Do I really need anything more than my phone to run my own web services, exchanging only the data I want with other people's phones?

Actually better is that you keep all your data locally and allow programs to access sub-sets of this data subject to your approval. If your data never leaves you then it can’t be saved, copied or abused. Move the programs NOT the data.
https://twitter.com/joeerl/status/1005157822315810816

This has two main benefits if you keep your data at home: 1) You do not reveal your data 2) Programs are often smaller than data so energy used to move the program is less than the energy used to move the data.
https://twitter.com/joeerl/status/1005479857080463360

How about using P2P, crypto and CRDTs to distribute data between apps? I really recommend playing with the Dat Shopping List demo https://blog.datproject.org/2018/05/14/dat-shopping-list/
There is also a talk about the motivations behind Dat and how it now works in the Beaker browser.

There are a lot of cool things happening in that space that could shape the web of the future.
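
To make the CRDT part concrete, here is a minimal sketch of a grow-only set - about the simplest CRDT there is - in Elixir. It assumes nothing about Dat's actual internals; it just shows the property that makes local-first data practical: replicas can be edited offline and merged in any order without conflicts.

defmodule GSet do
  # Grow-only set: any two replicas merge to the same value regardless of
  # order, so peers never need to coordinate before accepting writes.
  def new, do: MapSet.new()
  def add(set, item), do: MapSet.put(set, item)
  def merge(a, b), do: MapSet.union(a, b)
end

# Two offline peers edit a shopping list independently...
alice = GSet.new() |> GSet.add("milk") |> GSet.add("bread")
bob = GSet.new() |> GSet.add("eggs")

# ...and converge to the same list when they sync, whichever way they merge.
GSet.merge(alice, bob) == GSet.merge(bob, alice)
#=> true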

11 Likes

Valid point for something like WebSockets, but that doesn't mean future implementations couldn't expand on this to use the radio even less than client-side implementations do. One big advantage of keeping everything on the server is that you can make the transport opaque. You don't need JSON or a data structure that balances readability with efficiency; you can target binary protocols that send far less data over the wire. Further, you can diff on the server with more confidence and send patches, which essentially prevents developers from accidentally sending too much data over the wire (in practice this likely happens a lot).
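
A minimal sketch of the diff-and-patch idea in Elixir, assuming the server keeps the last state it rendered for each client and only ships the keys that changed (the module and function names here are hypothetical):

defmodule ServerDiff do
  # Compare the previous and the new state and return only the changed keys,
  # so that is all that ever crosses the wire.
  def diff(old_state, new_state) do
    patch =
      new_state
      |> Enum.reject(fn {key, value} -> Map.get(old_state, key) == value end)
      |> Map.new()

    {patch, new_state}
  end

  # The client (or a thin runtime acting on its behalf) just merges the patch in.
  def apply_patch(state, patch), do: Map.merge(state, patch)
end

# Only :count changed, so only :count is sent.
{patch, _new} = ServerDiff.diff(%{count: 1, title: "Hi"}, %{count: 2, title: "Hi"})
# patch == %{count: 2}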

Future persistent connections don't have to send heartbeats or anything that fully wakes the radio (unless of course there is actually data to transmit). I'm speaking at a high level, of course - I have no experience optimizing radio usage in a cell phone, so I may be way off.

2 Likes

No knowledge of radio required; it's much simpler than that. If a scheme requires transmitting data, ANY amount, with every user interaction, it will kill the battery. From a power-management perspective it's far better to send all the needed data at once and let the device manage user interaction with it.

4 Likes

Nothing about SSR precludes you from sending everything up front; in fact, SSR is the implementation that makes the fewest round trips before a final rendering. Technically that holds true even against native apps, because you have to download the app and then the app still has to request its data. An SSR approach could send a lighter-weight version to cache all "pages" (or whatever you want to call them) without having to ship any logic beyond something like an href that points to another fully rendered page. Clicking a button that takes you to a new view doesn't have to render anything, any more than changing a modal's display from hidden to visible counts as "re-rendering" in the application context.

Now, if there's anything dynamic that needs to stay up to date, a diff/patch from the server will be lighter than a client-side ad-hoc request for data assembled at the application level.

In the context of something like a calculator that doesn't hold true, of course, but that isn't the "web" in my mind - the web implies networked interactions, not some binary that could be transported by carrier pigeon on a USB stick and deliver effectively the same user experience.

2 Likes

Service Workers decoupling the “app” from the “page”, allowing web applications to find a happy medium between heavyweight apps and lightweight pages.

This forum, for example, is implemented as a single-page app (the service worker it currently has is a no-op, there just to let Android show the add-to-home button), so it has to reimplement a lot of browser functionality to handle its pseudo-page-changes. But in exchange, it can store things like user settings and the persistent notification channel in local JavaScript variables instead of having to re-fetch and re-set-up all of that every time you click a link. If all of that were stored in Service Workers, it could be shared between browser tabs and the site could use regular page changes for its links.

Also, since all kinds of web workers (service and local) have their own heap, splitting the work between them can help to reduce stop-the-world pauses. They also encourage stateless, message-passing-oriented concurrency and fault-tolerant design that allows subsystems to crash and restart without bad state spreading throughout the app.

It sounds pretty nice, splitting the web app into the “persistent-across-page-changes” part and the part that can be freed without having to perform a GC run. I wonder where they got the idea to do it that way?
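
For anyone less familiar with the BEAM, here is a minimal Elixir sketch of the model that last remark is winking at: isolated processes with their own heaps, communicating only by messages, and supervised so a crash simply restarts with clean state. The module name is made up for illustration.

defmodule PageWorker do
  use GenServer

  # Each GenServer is an isolated process with its own heap; the only way in
  # or out is a message, so a crash here cannot corrupt any other process.
  def start_link(_arg), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  def init(state), do: {:ok, state}

  def handle_call({:put, key, value}, _from, state),
    do: {:reply, :ok, Map.put(state, key, value)}

  def handle_call({:get, key}, _from, state),
    do: {:reply, Map.get(state, key), state}
end

# The supervisor restarts the worker with a fresh, clean state if it crashes.
{:ok, _sup} = Supervisor.start_link([PageWorker], strategy: :one_for_one)

GenServer.call(PageWorker, {:put, :user_settings, %{theme: :dark}})
GenServer.call(PageWorker, {:get, :user_settings})
#=> %{theme: :dark}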

4 Likes

Big question

Basically the web is a gigantic kludge - a complete mess - it’s not what the
early pioneers wanted it to be.

The biggest change would be one to make the web less fragile.

Things that work today should work the same way in a thousand years time.

History should be preserved. Already we cannot find data that was stored as little
as ten years ago - what hope is there for future generations to see what we have been up to today?

The trouble with the web is its brittleness - change a filename on a server and
suddenly you might break thousands of applications that store links to this file.

One day the web might actually implement hypertext as envisaged by Ted Nelson.

The 17 rules of Xanadu might one day be implemented
see https://en.wikipedia.org/wiki/Project_Xanadu

The “404 Not found” message would go away forever.

Moving files would not break programs that pointed to these files.

The web would be totally read/write symmetric.

Changing incorrect data should be simple and foolproof.

Example: If I see a spelling mistake on any page I should be able to immediately
correct it and this change would quickly propagate to any users who were currently
viewing the page (pretty tricky this one - needs some fancy authentication etc. to stop abuse)

The control of the web should be returned to the users and not the “big five”
(Amazon/Google/Apple/Microsoft/Facebook)

If you’re interested in this topic take a look on YouTube

search for

  • “computers for cynics” by Ted Nelson – these are great
  • “the computer revolution hasn’t happened yet” by Alan Kay

Watch these and you’ll get some idea of what the web was imagined to be
before it was messed up.

My talk (on YouTube) “the mess we’re in” takes up some of these problems

Fun topic - I guess in a few hundred years we might have figured out how to
make a decent web. Right now it's early days - so it's still a big mess

Lots for you guys to do then :slight_smile:

23 Likes

Thanks for sharing your thoughts Joe - I hope you don’t mind, I added the videos to your post :003:

1 Like

There is one flaw in your logic. A «working web today» has far greater value than a «working web tomorrow». So you cannot expect many of today's producers of web content to pay a premium to allow a user in 100 years to read the page.

1 Like

(https://softwareengineeringdaily.com/wp-content/uploads/2018/03/SED545-Streamr.pdf)
What do you think about the place of blockchain in a decentralized web?

The link below is an essay by P. Frazee:

https://infocivics.com/

2 Likes

Not necessarily where it all goes, but I think it's possible that it gets more decentralized, self-organizing and immutable - something in this direction: https://ipfs.io/

1 Like

Also see this thread https://twitter.com/pfrazee/status/1044265163350843392

2 Likes

I’m a bit confused here, are we talking about the future of web or future of web development? :upside_down_face:

They are different.

About the "web" itself:
Like every other aspect of the real world, money and politics will always win.

Say what you want, but I'm sorry… Google, Apple, Amazon and Microsoft will continue their domination. When one of them falls, another, greedier corporation will take its place.

So we are going to see more giant corporations and governments controlling our web. Privacy will be completely gone by the end of 2035.

3 Likes

Just another comment.

I realised this morning that the title of this thread could be interpreted in two different ways. I interpreted "future" in the sense of "how could/should things look in a few hundred years" - other replies have interpreted this in terms of the near future - "what's going to happen in the next few years".

I think we are in "the age of confusion" - historians in a few hundred years' time will view this period as one of exploration - where we try to figure out what would happen.

I mentioned history in my earlier comments - my interest in this started a couple of years ago - my wife found an old photo of a long-dead relative from about 1890 - and she asked me "what will happen to all these photos we take that end up in the cloud somewhere - will future generations be able to see them?"

A very good question - which I’ve been asking ever since.

I’m not optimistic - as far as I know my cloud storage goes away when I stop paying the bills - and, to make matters worse, all the data is encrypted.

It's not just my personal photos - what will happen to all the data we store encrypted in the cloud in a few hundred years' time? Will we lose all our collective history?

Already we have lost a great deal of data that was available as little as 10 years ago.

Apps die pretty quickly. A program that works today will probably work tomorrow, but will it work in a few months' time (probably), or in six months' time (probably), or in 3 years (umm), or 10 years?

I won't name any names here, but, for example, Apple sees the need to gratuitously upgrade its OS and apps every few months. That would be fine, were it not for the fact that on several occasions the new program cannot read files produced by previous versions of the program (case in point: Keynote) - I'm not talking about centuries here but time spans of a small number of years.

If programs break within 5 years what hope is there of running them in 100 years time?

So when I talk about the future I’m interested in software that will still work in hundreds of years time - right now we don’t know how to do this reliably.

This is part of a larger question.

What problems ought we to solve? - this seems a far more interesting question than asking "What problems are we solving?"

One of these problems is History Preservation, which is a subset of the "future web" problem.

Another big problem is the "breaking stuff" problem - innocent changes break things that work - this is crazy.

15 Likes

Great post Joe, and I suppose I should add what I had in mind myself when I posted the thread :slight_smile:

The future of the web, to me, seems like it may split - to contain data that machines can consume, and data that humans can consume. To picture that we only need to look at sci-fi for inspiration - though I wouldn’t go quite as far as AI, but AI-like.

Think Gideon :003:

Smart enough to process and interpret data, and intelligent enough to communicate that data as if you were talking to a person who knows you well, and knows how best to get that knowledge to you in a format or language that you understand. So one kind of data would help machines actually answer our queries, and the other kind would be what the machine displays to us - probably on large screens, holograms, or via retinal or cerebral implants - for our consumption.

AI-like because if it were full on AI, then it probably wouldn’t be serving us! :lol: It would be self aware enough to be independently minded and pursue its own aspirations! :044:

Maybe I am looking way too far into the future?

Personally I do not store photos or emails in the cloud. I begrudgingly store browser history and contacts (and music) so that the experience on my phone and computer is more seamless. But your wife’s question is a good one.

One problem I see is that now we have so much more - more photos, more videos, more everything. When we had to pay for photos to be developed, a family could amass a few hundred, or maybe a few thousand? Now most teenagers have thousands upon thousands, and their life has only just begun!

We have computers and web connections at home - so why can't those be used as our personal 'clouds'? Just as seamlessly as your photos get automatically uploaded to Apple, they could be transferred to your personal storage device or machine (again, maybe think something like Gideon?). You could then (or Gideon could ;-)) find a cloud storage service that actually stores them - forever.

But you’re right, we need to build these things. We also need to stop just accepting what giants like Apple give us… because the only thing they really care about is making money.


On a slightly different topic (but related), I have thought about what the future could look like. But I am not going to say anything more, because you will all think I am insane :043:

1 Like

Reviewing what I said I wanted

I have thrown together a prototype that lets you send messages to a specific browser window, as if it were just another process.

The client has to implement an init and a handle_info callback, e.g.

var client = new GenBrowser({
  // Called once with the initial state when the connection is established.
  init: function(state){
    console.log(state)
    return state
  },
  // Called for every message delivered to this browser "process".
  handle_info: function(message, state){
    console.log(message)
    if (message.your_pair) {
      console.log('paired with', message.your_pair)
      state.pair = message.your_pair
      // Assumes an element with id="your_pair" exists on the page.
      your_pair.innerHTML = message.your_pair
    }
    if (message.text) {
      // displayUpdate is a page-specific helper defined elsewhere.
      displayUpdate(message.text)
    }
    return state
  }
})

Then from the browser you can send a message to another client as follows

GenBrowser.send(other_address, {text: 'Hello from another browser'})

Or from the server it looks very similar

GenBrowser.send_message(client_address, %{text: "Hello from the server"})

See the README for a working example.

Proof of concept

So this seems to work, I’ve had some fun playing with it.
It would be cool if I could make it work with ElixirScript.
A redux integration would also be nice. I like the single state container model, looks a lot like a process to me.

However both ElixirScript and Redux are probably a little way off.

4 Likes

Rich Hickey mentions similar sentiments in what he calls “The Space Age of Computing.”

Say we have a public, unique ID for every person/system/agent (let's call it the PUID for short). Then we have a contract that states "any system event that includes a PUID will be delivered to that PUID's event stream/chain, along with the git hash/SHA of the current system version".

Each agent, known by their PUID, has a private log of these events, accessible only by them or by whoever they grant access to. If these events were on some kind of blockchain solution, there would be separate keys for every event participant (system + agents). Now the agent knows what system events about them were logged, and which system did the logging. If we manage to get all the keys for the system events, and the system's source code, we can rebuild the state.
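
A minimal sketch of that contract in Elixir, assuming events are appended to a per-PUID log together with a content hash and the version of the system that produced them (all names here are hypothetical):

defmodule PuidLog do
  # Hypothetical: every system event that mentions a PUID is also appended to
  # that PUID's private log, tagged with the deployed system version.
  @system_version "a1b2c3d"

  def append(log, puid, event) do
    entry = %{
      puid: puid,
      event: event,
      system_version: @system_version,
      hash: hash(puid, event)
    }

    [entry | log]
  end

  # A content hash lets a third party later confirm an entry is authentic
  # without being shown the event behind it.
  defp hash(puid, event) do
    :crypto.hash(:sha256, :erlang.term_to_binary({puid, event}))
    |> Base.encode16(case: :lower)
  end
end

log = PuidLog.append([], "PUID-123", %{type: :order_placed, vendor: "Dominos"})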

Event-sourcing! Blockchains! hype intensifies

Infrastructure like this could potentially solve a variety of hard “large-scale-systems” problems.

Example:

  • Joe (PUID: 123) orders a pizza from Dominos.

  • Events in Domino’s system such as “Order Placed”, “Pizza Delivered”, by contract, are also delivered to Joe’s event-stream.

  • For the transaction to occur, Dominos wants to know that it was Joe and not someone else who’s buying this pizza. So they request validation from Joe.

  • Joe recalls that last week he ordered a pizza from Papa John’s, so he sends Dominos the encrypted system events from his Papa John’s interactions with the public PUID and system-version-hash on those events.

  • Dominos doesn’t have to know that Joe ordered a medium Olive + Pepperoni pizza with red sauce; they only need to ask Papa John’s if those events are authentic. Yes or no.

There's no reason this identity validation process couldn't be automated. Multiple separate authorities could be contacted for further validation. Dominos doesn't even have to run their internal systems on the blockchain; as long as it can publish those events, it could run on a Raspberry Pi.
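
And a sketch of the yes/no check at the end of that flow: the vendor being asked (Papa John's in the example) only confirms whether a hash is one it has published, without revealing anything about the underlying event. Again, purely illustrative names.

defmodule Attestation do
  # The issuing system keeps only the hashes of events it has published;
  # verification is a membership check - yes or no, nothing else leaks.
  def authentic?(published_hashes, entry_hash) do
    MapSet.member?(published_hashes, entry_hash)
  end
end

published = MapSet.new(["9f2c…", "b7a1…"])  # hashes the vendor has published
Attestation.authentic?(published, "9f2c…")   # => true
Attestation.authentic?(published, "0000…")   # => false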

Lots of assumptions, but this is speculation, so who cares!

  • How do we pay for the infrastructure? Blockchains use a lot of energy. Energy ain’t free!
  • Is the event-producer something that can be recreated? Is the git repo accessible?
  • How do we incentivize system owners (e.g. Dominos) to not just keep the events for themselves?

An interesting idea to solve this is to tokenize these event-hashes and sell them in a market. Owners can sell tokens with varying degrees of public accessibility. A market around system events that compensates agents for providing their personal data!

To further the example:

  • Joe’s events are being traded. Because Joe is well-respected his tokens trade for a lot.
  • Regardless, I think Joe's tokens are currently undervalued. I'm going to buy as many of his tokens as I can, because more cores means more BEAM.
  • A few months later and sure enough Joe’s tokens are being traded for more. I sell the tokens. Profit.

Problems a System like this could potentially solve or ameliorate:

  • Identity Validation
  • The Problem of Merit:
    • Market incentives around whose events, and which events, are important could provide an indicator of capability, with a financial incentive to estimate an agent's merit more accurately so as to beat the market.
    • Joe writes a new paper. The price of Joe Armstrong “Computer Science” category event-tokens +10. Nice.
  • Accountability of private data ownership.
    • Dominos leaked that I sometimes get pineapple on my pizza. My reputation is ruined. Shame on you Dominos. Shame. Dominos trust-ability -10.

A scalable solution to the Problem of Merit is really the big win, in my opinion. With it we can see if the journalist who published that article actually knows anything about their claims. We can weight votes by subject-matter expertise! It would be nice if academia weren't so institutionally centralized, so if we have a quantified estimate of a person's credentials, maybe a paper could be published publicly and peer-reviewed by experts anywhere, regardless of whether they pander to academia. Say what you want about meritocracies - we've never been able to scale them effectively and I doubt they've ever been actually fair. I'd like to see what sort of improvements we can make to our collective decision making once we've solved this problem.

So however the future of the web looks, I’d say Identity, the Problem of Merit, and data-accountability are what I’d consider as top priorities. I’ll bet the solution is something to do with more streams, and more control over those data. Who knows? :man_shrugging:

6 Likes

Hope:

  1. Decentralized.
  2. Peer-to-peer encrypted caching.
  3. Peer-to-peer encrypted backup and redundancy.
  4. Content-addressed.
  5. Provides small incentives to become a "super node" – namely, donate [parts of] your bandwidth and storage capacity to help the network have the p2p caching and backup capabilities.
  6. Censorship-resistant. If an IP mask is banned and the requesting peer cannot reach the peer, try to contact a local IP that knows about another one that has the data, and so on, to infinity. This one is pretty complex - I would be willing to work on it!
  7. Able to temporarily be served from a centralized server the old fashioned way, in cases where there is a burst of traffic – sports events for example – or when the requester has very strict data limits (p2p has an inevitable chatting overhead). But this might be abused so probably not a good idea.

Likely reality:

  1. No web. Everything is in apps, you cannot download that beautiful art you liked, sorry. Or if you somehow can, you rot in jail.
  2. Heavily regulated web. Corporations can use it to save expense and not write 50+ apps from scratch but everybody else will have to pay, or be sued. Adblockers are outlawed.
  3. Split web. An “official” heavily regulated web and a “dark net” – IMO most of the current web will flow into the “dark” web eventually, if corporations get their way. Most likely scenario I think.

The larger iPhones (all Plus series and the X* series). I know many people love to hate on Apple but iPhones are more durable than 99% of Androids. Only a few Sony and Xiaomi devices, and maybe the XL Pixels, can hold a candle to the bigger iPhones.

That's what I want to see myself. Libraries like Drab – and the upcoming Phoenix LiveView – are hugely important evolutionary steps IMO. JS is a mess, and the endless supply of young and enthusiastic people willing to put up with its problems is not a fix; it's a band-aid.

Client-side programming should not even be a thing save for GUI apps.

1 Like

IMO that's the main hurdle on the path of the web's evolution today. There are too many vested interests for corporations to allow any radical changes.

EDIT: Also this:

I am sad to report that this has been my experience every time I have read up on these matters – our entire history shows that forced authority wins every time. If you really make a good, compelling case and the public is with you… then your sentiments can outright be outlawed, as a last resort. I see no way for us to be able to push back at all.

2 Likes