Decentralized web, interesting articles: what do you think?

(sorry for the tags, two were required)
https://venturebeat.com/2017/11/04/the-end-of-the-cloud-is-coming/

4 Likes

You can create tags if no suitable ones exist :023:

2 Likes

I too find these ideas quite intriguing, but then I remember that in a peer-to-peer network someone needs to actually upload the data, and most current ISPs sell plans where downlink throughput far exceeds uplink throughput, which makes any serious uploading very expensive for the user.

So “the end of the cloud” is not there yet.


2 Likes

This is interesting - but whenever I try to wrap my head around how it could work, I hit a lot of roadblocks.

Say we distribute the content across N user nodes. The first problem is, how do we divide it sensibly according to geo-location? Do we assign the responsibility of holding those files to 3 nodes, or 5, or to every node within some radius, so that the content is available at decent speeds in different regions?
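(Just to make that question concrete: here's a minimal sketch of "assign the k nodes closest to the readers", with made-up node coordinates and a hypothetical `place_replicas` helper. It conveniently ignores the hard part, which is knowing where the readers actually are.)

```python
import math

# Hypothetical node records: (node_id, latitude, longitude). Nothing here is a
# real protocol; it's just one way the "which nodes hold this file?" question
# could be answered.
NODES = [
    ("node-a", 38.7, -9.1),   # Lisbon
    ("node-b", 52.5, 13.4),   # Berlin
    ("node-c", 40.7, -74.0),  # New York
    ("node-d", 35.7, 139.7),  # Tokyo
    ("node-e", 48.9, 2.35),   # Paris
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine), good enough for placement decisions."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def place_replicas(reader_lat, reader_lon, replicas=3):
    """Pick the `replicas` nodes closest to where the readers are."""
    ranked = sorted(
        NODES,
        key=lambda n: distance_km(reader_lat, reader_lon, n[1], n[2]),
    )
    return [node_id for node_id, _, _ in ranked[:replicas]]

print(place_replicas(41.1, -8.6))  # readers in Porto -> ['node-a', 'node-e', 'node-b']
```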

This would probably end up duplicating content just as much as we do now, or worse, if we want to guarantee in any meaningful way that the content is always available for retrieval.

How do we deal with spikes? Sure, central cloud servers require a lot of bandwidth, but how do we reproduce that bandwidth allocation on regular users’ devices, some of which have flaky connections? Can we design a self-correcting mesh, so that when a website spikes its content is automatically replicated across more nodes to make it “more available”?

If every node then becomes a server for the file, what happens when that node (cellphone, laptop, whatever) goes offline, or when all the nodes that were assigned that content go offline for some reason? That’s far more likely than with a server, since these devices aren’t meant to be plugged in and online 100% of the time.
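(Both the spike problem and the offline problem boil down to the same control loop: compare the replicas you want with the replicas that are actually alive, and schedule new copies when you fall short. A toy sketch of that loop, assuming a hypothetical registry of heartbeats and per-file request counters - nothing here is taken from a real system. Whether such a loop is run by a coordinator or gossiped between peers is itself a design decision.)

```python
import time

# Hypothetical state a coordinator (or gossiping peers) might track per file.
replicas = {"site.tar": {"node-a", "node-b", "node-c"}}
last_heartbeat = {"node-a": time.time(), "node-b": time.time() - 120, "node-c": time.time()}
requests_last_minute = {"site.tar": 950}

HEARTBEAT_TIMEOUT = 60       # seconds of silence before a node counts as offline
BASE_REPLICAS = 3            # minimum copies we always want
REQUESTS_PER_REPLICA = 300   # rough capacity guess for a residential uplink

def desired_replicas(file_id):
    """More demand -> more copies; never fewer than the base replication factor."""
    demand = requests_last_minute.get(file_id, 0)
    return max(BASE_REPLICAS, -(-demand // REQUESTS_PER_REPLICA))  # ceiling division

def repair(file_id, candidate_nodes):
    """One pass of the self-correcting loop for a single file."""
    now = time.time()
    alive = {n for n in replicas[file_id]
             if now - last_heartbeat.get(n, 0) < HEARTBEAT_TIMEOUT}
    missing = desired_replicas(file_id) - len(alive)
    new_nodes = [n for n in candidate_nodes if n not in alive][:max(missing, 0)]
    replicas[file_id] = alive | set(new_nodes)
    return new_nodes  # in a real network these would now fetch the content

print(repair("site.tar", ["node-d", "node-e", "node-f"]))
# node-b's heartbeat is stale and demand calls for 4 copies,
# so two new nodes get scheduled: ['node-d', 'node-e']
```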

How do we decide which files (websites, content, etc.) should be replicated more consistently? Are they going to have an “author”, and does that author get some sort of ranking? Those questions are relatively easy to answer for torrents - people who have a file want to share it, and people who search for a file want that file - so the content itself dictates whether it gets replicated. Would we have quotas? How would we distinguish between someone putting up a few MB worth of sites and someone putting up GBs worth? How would we split a big site across several nodes and still keep it available with decent response times?

Or is it feasible to do “packing” instead, where a node (user) decides “I want to hold 200 MB of websites” and then holds up to 200 MB of complete websites? Would people even want to make that kind of choice, or would they rather just get to the content immediately without thinking about it?

Because with torrents, I can wait for the content and I expect to. As a seeder I also choose directly what I want to seed, and I usually do it as a reciprocal act - I downloaded X, so now I’ll seed it for a few months since it doesn’t cost me anything directly.
Whereas with websites and other content that is expected to be delivered within a few milliseconds, wouldn’t we end up with people setting up “server” rigs and selling that space to people who want a 100%-uptime guarantee for their content/websites?
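(The “packing” idea above is at least easy to sketch: a node declares a quota and a greedy packer fills it with complete sites. The sizes and names are invented for illustration, and this says nothing about who decides which sites are on offer.)

```python
# A node declares how much space it's willing to donate, and whole sites are
# packed into that quota. Sizes are made up for illustration.
QUOTA_MB = 200

available_sites = [
    ("blog.example", 12),
    ("docs.example", 85),
    ("photos.example", 140),
    ("wiki.example", 60),
    ("tiny.example", 3),
]

def pack_sites(sites, quota_mb):
    """Greedy first-fit: take sites largest-first while they still fit whole."""
    chosen, used = [], 0
    for name, size in sorted(sites, key=lambda s: s[1], reverse=True):
        if used + size <= quota_mb:
            chosen.append(name)
            used += size
    return chosen, used

selected, used = pack_sites(available_sites, QUOTA_MB)
print(selected, f"{used} MB of {QUOTA_MB} MB")
# ['photos.example', 'wiki.example'] 200 MB of 200 MB
```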

Discoverability - hashing content is fine, but we still need to discover that content, otherwise it’s worthless. So we would need to keep gigantic hash tables (probably partitioned smartly so they stay manageable, plus some clever way of figuring out which node holds what, like torrent trackers do), but we would also need to keep the content searchable. It’s not like we can ditch the traditional human-searchable layer of the web - so again, I’m not sure this would end up being a more efficient way of organising it.
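(The “gigantic hash tables, partitioned smartly” part is essentially what a distributed hash table does - BitTorrent’s Mainline DHT and the Kademlia-based DHT in IPFS being the well-known examples. Below is a stripped-down illustration of just the partitioning idea, i.e. who answers for which content hash, leaving out routing, churn and the human-searchable layer entirely.)

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]

def content_id(data: bytes) -> int:
    """Content is addressed by the hash of its bytes."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def responsible_nodes(data: bytes, replicas: int = 2):
    """Partition the hash space: each node owns the keys 'closest' to its own hash.

    Real DHTs (Kademlia, Chord) add routing tables and churn handling; here we
    simply sort nodes by XOR distance to the content's hash, Kademlia-style.
    """
    key = content_id(data)

    def node_distance(node: str) -> int:
        node_hash = int.from_bytes(hashlib.sha256(node.encode()).digest(), "big")
        return key ^ node_hash

    return sorted(NODES, key=node_distance)[:replicas]

page = b"<html><body>hello decentralized web</body></html>"
print(responsible_nodes(page))
# Anyone hashing the same bytes computes the same key and asks the same nodes,
# so lookup needs no central index -- but it only answers "who has hash X?",
# not "which pages talk about X?", which is the search problem left over.
```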

I’m intrigued by it but at the same time it just feels like an impossible task


2 Likes

@amnu3387 I think that rather than an ‘everyone is a seeder’ model we’ll move to an ‘everyone could become a seeder’ model, where the network (the ‘fog’, as the article calls it) is still mostly carried by a smaller group of beefy servers rather than by everyone’s phones and tablets, but anyone is free to join the network with whatever hardware they have lying around.

Of course, consumer devices could be added on top of that as an extra speed improvement as well: Netflix et al. are actively pursuing torrent-based systems to improve their scalability right now, where users share the parts of a movie they have already downloaded with other viewers while watching, to reduce load on the central servers.
As a matter of fact, there is already a lot of content on the BitTorrent network that can be streamed while downloading without significant delays.
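What makes that streaming work is mostly the piece-selection policy: instead of pure rarest-first, the client prioritises pieces just ahead of the playback position. A toy sketch of such a policy (not any real client’s algorithm):

```python
def next_pieces(playback_piece, have, total_pieces, window=8):
    """Prefer pieces just ahead of the playhead so playback never stalls;
    anything outside the window can be fetched rarest-first as usual."""
    urgent = [
        p for p in range(playback_piece, min(playback_piece + window, total_pieces))
        if p not in have
    ]
    return urgent  # request these first, from whichever peers have them

have = {0, 1, 2, 3, 5, 9}
print(next_pieces(playback_piece=4, have=have, total_pieces=100))
# [4, 6, 7, 8, 10, 11] -- the gaps in the next few seconds of video
```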

The same way as what - as what we do now? I think we’d end up duplicating more, and that would actually be a good thing, because it would make it harder (if not virtually impossible) to (maliciously) alter or remove previously published content.


You might like to look at the InterPlanetary File System (IPFS). It is by no means the only kind of system that could be built - there are already a couple of similar projects with different design decisions but similar intentions - but it gives a good example of where we currently stand. A couple of the issues you raise have already been solved there.
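The core idea that answers several of your questions is content addressing: a file’s address is derived from a hash of its bytes, so any node can serve it and anyone can verify they received the right thing. A minimal illustration of the principle (real IPFS CIDs use multihash/CID encoding, not this raw hex):

```python
import hashlib

def address_of(content: bytes) -> str:
    """The 'name' of the content is a hash of the content itself."""
    return hashlib.sha256(content).hexdigest()

original = b"<html><body>my site</body></html>"
tampered = b"<html><body>my site, edited by someone else</body></html>"

addr = address_of(original)
print(addr[:16], "...")               # this is what you would link to
print(address_of(original) == addr)   # True: any honest copy verifies
print(address_of(tampered) == addr)   # False: altered content has a different address
```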

One last thing I’ll reply to is the problem of ‘one person taking up a lot of space’: most of these decentralized file storage systems solve this with monetary incentives, where the parties that want their content hosted pay a tiny amount of some (virtual) currency to the parties that supply the storage space.
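A toy version of that accounting, just to make the idea concrete - the price, the ledger and the `settle_month` helper are all invented here, and the real networks (Filecoin, Sia, Storj, …) each do this differently and additionally demand cryptographic proofs that the data is actually being stored:

```python
PRICE_PER_GB_MONTH = 0.002  # invented price, in whatever token the network uses

balances = {"publisher": 10.0, "host-a": 0.0, "host-b": 0.0}

def settle_month(publisher, hosted):
    """Move tokens from the party storing content to the parties providing disk.

    `hosted` maps host -> gigabytes they actually kept available this month;
    a real network would require a proof of storage before paying out.
    """
    for host, gigabytes in hosted.items():
        fee = gigabytes * PRICE_PER_GB_MONTH
        balances[publisher] -= fee
        balances[host] += fee

settle_month("publisher", {"host-a": 50, "host-b": 120})
print(balances)
# {'publisher': 9.66, 'host-a': 0.1, 'host-b': 0.24}
```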

5 Likes

Hadn’t heard of it, will look into IPFS.

2 Likes