Radios use a lot of energy; keeping them on continuously rather than letting them power down is one of the quickest ways to drain the battery.
How about a slightly different direction? The web without servers? I remember Joe saying in one of his talks that we now have more powerful machines in our pockets than the servers they had when Erlang started. So do we really need servers at all? Do I really need anything more than my phone to run my own web services, exchanging only the data I want with other people's phones?
Better still: keep all your data locally and allow programs to access subsets of it, subject to your approval. If your data never leaves you, it can't be saved, copied or abused. Move the programs, NOT the data.
https://twitter.com/joeerl/status/1005157822315810816
This has two main benefits if you keep your data at home: 1) you do not reveal your data; 2) programs are often smaller than data, so the energy used to move the program is less than the energy used to move the data.
https://twitter.com/joeerl/status/1005479857080463360
How about using P2P, crypto and CRDTs to distribute data between apps? I really recommend playing with the Dat Shopping List demo: https://blog.datproject.org/2018/05/14/dat-shopping-list/
There is also a talk about the motivations behind Dat and how it now works in the Beaker browser.
There are a lot of cool things happening in that space that could shape the web of the future.
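To make the CRDT part concrete, here is a minimal grow-only set (G-Set), one of the simplest CRDTs. This is an illustrative sketch only, not the data structure Dat itself uses:

```javascript
// Minimal grow-only set (G-Set) CRDT: replicas can merge in any order
// and still converge to the same state, which is what makes CRDTs
// useful for p2p data distribution. Illustrative sketch only.
class GSet {
  constructor(items = []) { this.items = new Set(items); }
  add(item) { this.items.add(item); return this; }
  // Merge is a set union: commutative, associative and idempotent,
  // so replicas converge regardless of message order or duplication.
  merge(other) {
    for (const item of other.items) this.items.add(item);
    return this;
  }
  values() { return [...this.items].sort(); }
}

// Two replicas of a shopping list, edited independently...
const a = new GSet().add('milk').add('bread');
const b = new GSet().add('eggs').add('milk');

// ...converge after exchanging state, in either merge order.
const merged1 = new GSet().merge(a).merge(b);
const merged2 = new GSet().merge(b).merge(a);
```

Real systems layer deletion and ordering on top of this (e.g. OR-Sets, RGAs), but the merge-is-a-union idea is the core of it.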
Valid point for something like WebSockets, but that doesn't mean future implementations couldn't expand on this to use the radio even less than client-side implementations do. One big advantage of keeping everything on the server is that you can make the transport opaque. This means you don't need JSON or a data structure that optimizes for both readability and efficiency; instead you can target binary protocols that send far less data over the wire. Further, you can more confidently diff on the server and send patches, which essentially prevents developers from accidentally sending too much data over the wire (in practice this likely happens a lot).
Future persistent connections don't have to send heartbeats or anything that fully wakes the radio (unless of course there is actually data to transmit). I'm speaking at a high level, of course; I have no experience optimizing radio usage in a cell phone, so I may be way off.
No knowledge of radio required; it's much simpler than that. If a scheme requires transmitting data, ANY amount, with every user interaction, it will kill the battery. From a power-management perspective it's far better to send the data needed all at once and let the device manage user interaction with it.
Nothing about SSR precludes you from sending everything up front; in fact SSR is the implementation that makes the fewest round trips for a final rendering. Technically that even holds against native apps, because you have to download the app first and then the app requests the data. An SSR app could send a lightweight version to cache all "pages" (or whatever you want to call them) without having to send any logic other than something like an href pointing to another fully rendered page. Clicking a button that takes you to a new view doesn't have to render anything, any more than changing a modal's display from hidden to visible counts as "re-rendering" in the application context.
Now, if there is anything dynamic that needs to be kept up to date, a diff/patch from the server will be lighter than a client-side ad-hoc request for data built at the application level.
In the context of something like a calculator that doesn't hold true, of course, but that isn't the "web" in my mind. The web implies networked interactions, not some binary that could be transported by carrier pigeon via USB and give effectively the same user experience.
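The "diff on the server and send patches" idea above can be sketched in a few lines. The function names and state shapes here are illustrative, not any real framework's API:

```javascript
// Sketch: the server keeps the last state it sent to each client and
// ships only the keys that changed, so the application developer can't
// accidentally resend the whole payload on every interaction.
// (Deleted keys are ignored here to keep the sketch short.)
function diff(prev, next) {
  const patch = {};
  for (const key of Object.keys(next)) {
    if (prev[key] !== next[key]) patch[key] = next[key];
  }
  return patch;
}

// The client applies the patch on top of what it already has.
function applyPatch(state, patch) {
  return { ...state, ...patch };
}

const lastSent = { title: 'Inbox', unread: 3, user: 'joe' };
const current  = { title: 'Inbox', unread: 4, user: 'joe' };

const patch = diff(lastSent, current);        // only { unread: 4 } crosses the wire
const clientState = applyPatch(lastSent, patch);
```

A binary encoding of the patch would shrink this further, since, as noted above, an opaque transport doesn't need to be human-readable.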
Service Workers decouple the "app" from the "page", allowing web applications to find a happy medium between heavyweight apps and lightweight pages.
This forum, for example, is implemented as a single page (the service worker it currently has is a no-op, just enough to let Android show the add-to-home button), so it has to reimplement a lot of browser functionality to handle its pseudo-page-changes. In exchange, it can keep things like user settings and the persistent notification channel in local JavaScript variables instead of re-fetching and re-setting-up all that state every time you click a link. If all that state were stored in Service Workers, it could be shared between browser tabs and regular page changes could be used for links.
Also, since all kinds of web workers (service and dedicated) have their own heap, splitting work between them can help reduce stop-the-world GC pauses. They also encourage stateless, message-passing concurrency and fault-tolerant design that allows subsystems to crash and restart without bad state spreading through the app.
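The crash-and-restart style described above can be sketched without real Web Workers. This toy supervisor (all names hypothetical) restarts a handler with fresh state when it throws, so bad state cannot leak out of the subsystem:

```javascript
// Toy sketch of supervised, message-passing state: a "worker" owns its
// state, and the supervisor resets it to a clean initial state when the
// handler crashes. Real code would use Worker/postMessage; plain
// functions are used here so the idea runs anywhere.
function startSupervised(init, handle) {
  let state = init();
  return {
    send(message) {
      try {
        state = handle(message, state);
      } catch (err) {
        state = init(); // crash: throw away the heap, restart clean
      }
      return state;
    },
  };
}

const counter = startSupervised(
  () => ({ count: 0 }),
  (msg, state) => {
    if (msg === 'boom') throw new Error('simulated crash');
    return { count: state.count + 1 };
  }
);

counter.send('tick');               // count: 1
counter.send('tick');               // count: 2
counter.send('boom');               // crashes, restarts at count: 0
const after = counter.send('tick'); // count: 1 again
```

With real workers the isolation is stronger still, since each worker's heap is separate and a crashed worker can be terminated and respawned by the page.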
It sounds pretty nice, splitting the web app into the "persistent-across-page-changes" part and the part that can be freed without having to perform a GC run. I wonder where they got the idea to do it that way?
Big question
Basically the web is a gigantic kludge, a complete mess; it's not what the early pioneers wanted it to be.
The biggest change would be one to make the web less fragile.
Things that work today should work the same way in a thousand years time.
History should be preserved. Already we cannot find data that was stored as little as ten years ago; what hope is there for future generations to see what we have been up to today?
The trouble with the web is its brittleness: change a filename on a server and suddenly you might break thousands of applications that store links to that file.
One day the web might actually implement hypertext as envisaged by Ted Nelson.
The 17 rules of Xanadu might one day be implemented; see https://en.wikipedia.org/wiki/Project_Xanadu
The "404 Not Found" message would go away forever.
Moving files would not break programs that pointed to these files.
The web would be totally read/write symmetric.
Changing incorrect data should be simple and foolproof.
Example: if I see a spelling mistake on any page, I should be able to correct it immediately, and the change would quickly propagate to any users currently viewing the page (pretty tricky, this one; it needs some fancy authentication etc. to stop abuse).
Control of the web should be returned to the users, not the "big five" (Amazon/Google/Apple/Microsoft/Facebook).
If you're interested in this topic, take a look on YouTube; search for:
- "Computers for Cynics" by Ted Nelson (these are great)
- "The computer revolution hasn't happened yet" by Alan Kay
Watch these and you'll get some idea of what the web was imagined to be before it was messed up.
My talk (on YouTube) "The Mess We're In" takes up some of these problems.
Fun topic. I guess in a few hundred years we might have figured out how to make a decent web; right now it's early days, so it's still a big mess.
Lots for you guys to do then
Thanks for sharing your thoughts, Joe. I hope you don't mind; I added the videos to your post.
There is one flaw in your logic: a «working web today» has much greater value than a «working web tomorrow». So you cannot expect many of today's producers of web content to pay a premium to allow a user in 100 years to read the page.
(https://softwareengineeringdaily.com/wp-content/uploads/2018/03/SED545-Streamr.pdf)
What do you think about the place of blockchain in a decentralized web?
Link below is an essay from P. Frazee
Not necessarily where it all goes, but I think it's possible that the web gets more decentralized, self-organizing and immutable; something in this direction: https://ipfs.io/
I'm a bit confused here: are we talking about the future of the web, or the future of web development?
They are different.
About the "web" itself.
Like every other aspect of the real world, money and politics will always win.
Say what you want, but I'm sorry: Google, Apple, Amazon and Microsoft will continue their domination. When one of them falls, another, greedier corporation will take its place.
So we are going to see more giant corporations and governments controlling our web. Privacy will be completely gone by the end of 2035.
Just another comment.
I realised this morning that the title of this thread could be interpreted in two different ways. I interpreted "future" in the sense of "how could/should things look in a few hundred years"; other replies have interpreted it in terms of the near future: "what's going to happen in the next few years".
I think we are in "the age of confusion". Historians a few hundred years from now will view this period as one of exploration, where we try to figure out what works.
I mentioned history in my earlier comments. My interest in this started a couple of years ago when my wife found an old photo of a long-dead relative from about 1890, and she asked me: "What will happen to all these photos we take that end up in the cloud somewhere? Will future generations be able to see them?"
A very good question, which I've been asking ever since.
I'm not optimistic. As far as I know, my cloud storage goes away when I stop paying the bills, and, to make matters worse, all the data is encrypted.
It's not just my personal photos. What will happen to all the data we store encrypted in the cloud in a few hundred years' time? Will we lose all our collective history?
Already we have lost a great deal of data that was available as little as 10 years ago.
Apps die pretty quickly. A program that works today will probably work tomorrow, but will it work in a few months' time (probably), in six months (probably), in 3 years (umm), or in 10 years?
I won't name any names here, but, for example, Apple sees the need to gratuitously upgrade its OS and apps every few months. That would be fine, were it not for the fact that on several occasions the new program cannot read files produced by previous versions of the program (case in point: Keynote). I'm not talking about centuries here but time spans of a small number of years.
If programs break within 5 years what hope is there of running them in 100 years time?
So when I talk about the future, I'm interested in software that will still work in hundreds of years' time. Right now we don't know how to do this reliably.
This is part of a larger question.
What problems ought we to solve? This seems a far more interesting question than "What problems are we solving?"
One of these problems is history preservation, which is a subset of the "future web" problem.
Another big problem is the "breaking stuff" problem: innocent changes break things that work. This is crazy.
Great post, Joe, and I suppose I should add what I had in mind when I posted the thread.
The future of the web, to me, seems like it may split: into data that machines can consume, and data that humans can consume. To picture that, we only need to look at sci-fi for inspiration, though I wouldn't go quite as far as AI; AI-like, rather.
Think Gideon
Smart enough to process and interpret data, and intelligent enough to communicate that data as if you were talking to a person who knows you well and knows how best to get that knowledge to you in a format or language you understand. So one kind of data would help machines actually answer our queries, and the other kind the machine would display to us, probably on large screens, holograms, or via retinal or cerebral implants, for our consumption.
AI-like because if it were full-on AI, it probably wouldn't be serving us! It would be self-aware enough to be independently minded and pursue its own aspirations!
Maybe I am looking way too far into the future?
Personally I do not store photos or emails in the cloud. I begrudgingly store browser history and contacts (and music) so that the experience on my phone and computer is more seamless. But your wife's question is a good one.
One problem I see is that now we have so much more: more photos, more videos, more everything. When we had to pay for photos to be developed, a family could amass a few hundred, or a few thousand maybe? Now most teenagers have thousands upon thousands, and their lives have only just begun!
We have computers and web connections at home, so why can't those be used as our personal "clouds"? Just as seamlessly as your photos get automatically uploaded to Apple, they could be transferred to your personal storage device or machine (again, maybe think something like Gideon?). You could then (or Gideon could ;-)) find a cloud storage service that actually stores them forever.
But you're right, we need to build these things. We also need to stop just accepting what giants like Apple give us… because the only thing they really care about is making money.
On a slightly different (but related) topic, I have thought about what the future could look like. But I am not going to say anything more, because you will all think I am insane.
Reviewing what I said I wanted: I have thrown together a prototype that lets you send messages to a specific browser window, as if it were just another process. The client has to implement an init and a handle_info callback, e.g.
var client = new GenBrowser({
  // Called once with the initial state received from the server.
  init: function(state){
    console.log(state)
    return state
  },
  // Called for every message sent to this client's address.
  handle_info: function(message, state){
    console.log(message)
    if (message.your_pair) {
      console.log('paired with', message.your_pair)
      state.pair = message.your_pair
      // 'your_pair' is the DOM element with id="your_pair"
      your_pair.innerHTML = message.your_pair
    }
    if (message.text) {
      displayUpdate(message.text)
    }
    return state
  }
})
Then from another browser you can send messages as follows:
GenBrowser.send(other_address, {text: 'Hello from another browser'})
Or from the server it looks very similar
GenBrowser.send_message(client_address, %{text: "Hello from the server"})
See the README for a working example.
Proof of concept
So this seems to work; I've had some fun playing with it.
It would be cool if I could make it work with ElixirScript.
A Redux integration would also be nice. I like the single-state-container model; it looks a lot like a process to me.
However, both ElixirScript and Redux integration are probably a little way off.
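To illustrate why a single state container "looks a lot like a process": both are a fold of messages over state. This toy store (a stand-in, not the real Redux API) makes the parallel with handle_info explicit:

```javascript
// Toy single-state container: dispatch(action) plays the role of
// handle_info(message, state) -> new state in a GenServer-style process.
// Illustrative only; real Redux has the same shape but more machinery.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      // Fold the incoming message into the state, like handle_info.
      state = reducer(state, action);
      listeners.forEach((fn) => fn(state));
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

const store = createStore(
  (state, action) =>
    action.type === 'text' ? { ...state, last: action.text } : state,
  { last: null }
);

store.dispatch({ type: 'text', text: 'Hello from another browser' });
```

Seen this way, wiring GenBrowser messages into a store would mostly be a matter of forwarding each handle_info message into dispatch.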
Rich Hickey mentions similar sentiments in what he calls "The Space Age of Computing."
Say we have a public, unique ID for every person/system/agent (let's call it the PUID for short). Then we have a contract that states: "any system event that includes a PUID will be delivered to that PUID's event stream/chain, along with the git hash/SHA of the current system version".
Each agent, known by their PUID, has a private log of these events, accessible only by them or by whoever they grant access. If these events were on some kind of blockchain, there would be separate keys for every event participant (system + agents). Now the agent knows what system events about them were logged, and which system did the logging. If we can get all the keys for the system events, plus the system's source code, we can rebuild the state.
Event-sourcing! Blockchains! *hype intensifies*
Infrastructure like this could potentially solve a variety of hard âlarge-scale-systemsâ problems.
Example:
- Joe (PUID: 123) orders a pizza from Dominos.
- Events in Dominos' system such as "Order Placed" and "Pizza Delivered" are, by contract, also delivered to Joe's event stream.
- For the transaction to occur, Dominos wants to know that it is Joe, and not someone else, buying this pizza. So they request validation from Joe.
- Joe recalls that last week he ordered a pizza from Papa John's, so he sends Dominos the encrypted system events from his Papa John's interactions, with the public PUID and system-version hash on those events.
- Dominos doesn't have to know that Joe ordered a medium olive + pepperoni pizza with red sauce; they only need to ask Papa John's whether those events are authentic. Yes or no.
There's no reason this identity-validation process couldn't be automated. Multiple separate authorities could be contacted for further validation. Dominos doesn't even have to run their internal systems on a blockchain; as long as it can publish those events, it could run on a Raspberry Pi.
Lots of assumptions, but this is speculation, so who cares!
- How do we pay for the infrastructure? Blockchains use a lot of energy. Energy ain't free!
- Is the event-producer something that can be recreated? Is the git repo accessible?
- How do we incentivize system owners (e.g. Dominos) to not just keep the events for themselves?
An interesting idea to solve this is to tokenize these event hashes and sell them on a market. Owners can sell tokens with varying degrees of public accessibility. A market around system events that compensates agents for providing their personal data!
To further the example:
- Joe's events are being traded. Because Joe is well respected, his tokens trade for a lot.
- Regardless, I think Joe's tokens are currently undervalued. I'm going to buy as many of his tokens as I can, because more cores means more BEAM.
- A few months later, sure enough, Joe's tokens are trading for more. I sell the tokens. Profit.
Problems a system like this could potentially solve or ameliorate:
- Identity validation.
- The problem of merit:
  - Market incentives on which agents and events are important could provide an indicator of capability, with a financial incentive to estimate an agent's merit accurately so as to beat the market.
  - Joe writes a new paper. The price of Joe Armstrong "Computer Science" category event tokens +10. Nice.
- Accountability of private data ownership:
  - Dominos leaked that I sometimes get pineapple on my pizza. My reputation is ruined. Shame on you, Dominos. Shame. Dominos trustability -10.
A scalable solution to the problem of merit is really the big win, in my opinion. With it we could see whether the journalist who published that article knows anything about their claims. We could weight votes by subject-matter expertise! It would be nice if academia weren't so institutionally centralized; if we had a quantified estimate of a person's credentials, a paper could be published publicly and peer-reviewed by experts anywhere, regardless of their academic standing. Say what you want about meritocracies: we've never been able to scale them effectively, and I doubt they've ever been actually fair. I'd like to see what sort of improvements we could make to our collective decision making once we've solved this problem.
So however the future of the web looks, I'd say identity, the problem of merit, and data accountability are what I'd consider top priorities. I'll bet the solution has something to do with more streams, and more control over those data. Who knows?
Hope:
- Decentralized.
- Peer-to-peer encrypted caching.
- Peer-to-peer encrypted backup and redundancy.
- Content-addressed.
- Provides small incentives to become a "super node": donate [parts of] your bandwidth and storage capacity to help the network have p2p caching and backup capabilities.
- Censorship-resistant. If an IP mask is banned and the requesting peer cannot reach the peer, try to contact a local IP that knows about another one that has the data, and so on, to infinity. Pretty complex, this one; I would be willing to work on it!
- Able to be temporarily served from a centralized server the old-fashioned way in cases where there is a burst of traffic (sports events, for example) or when the requester has very strict data limits (p2p has an inevitable chatter overhead). But this might be abused, so probably not a good idea.
Likely reality:
- No web. Everything is in apps, you cannot download that beautiful art you liked, sorry. Or if you somehow can, you rot in jail.
- Heavily regulated web. Corporations can use it to save the expense of writing 50+ apps from scratch, but everybody else will have to pay, or be sued. Ad blockers are outlawed.
- Split web. An "official", heavily regulated web and a "dark net". IMO most of the current web will flow into the "dark" web eventually, if corporations get their way. The most likely scenario, I think.
The larger iPhones (all Plus series and the X* series). I know many people love to hate on Apple but iPhones are more durable than 99% of Androids. Only a few Sony and Xiaomi devices, and maybe the XL Pixels, can hold a candle to the bigger iPhones.
That's what I want to see myself. Libraries like Drab, and the incoming Phoenix LiveView, are hugely important evolutionary steps IMO. JS is a mess, and the endless supply of young and enthusiastic people willing to put up with its problems is not a fix; it's a band-aid.
Client-side programming should not even be a thing, save for GUI apps.