(sorry for the tags, two were required)
https://venturebeat.com/2017/11/04/the-end-of-the-cloud-is-coming/
You can create tags if no suitable ones exist
I too find these ideas quite intriguing, but then I remember that in a peer-to-peer network someone actually has to upload the data, and most current ISPs sell plans where downlink throughput far exceeds uplink throughput, which makes any serious uploading very expensive for the user.
So "the end of the cloud is …" not there yet …
This is interesting - but whenever I try to wrap my head around how it could work, I hit a lot of roadblocks.
Say we distribute the content across N user nodes. The first problem is how to divide it correctly according to geo-location. Do we assign the responsibility of holding those files to 3 nodes, or 5, or to every node within a radius X, so that the content is available at decent speeds in different regions?
This would probably end up duplicating content just as much as today, or worse, if we want to guarantee in even a minimal way that the content is always available for retrieval.
How do we deal with spikes? Sure, central cloud servers require a lot of bandwidth, but how do we replicate that bandwidth allocation across regular users' devices, some of which have flaky connections? Can we design a self-correcting mesh, so that when a website spikes, its content is automatically replicated across nodes to make it "more available"?
If every node then becomes the server for a file, what happens when that node (cellphone, laptop, whatever) goes offline, or when all the nodes assigned that content go offline for some reason? That is even more likely than with a server, given that these devices aren't meant to be plugged in 100% of the time.
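For what it's worth, the "do we assign to 3 nodes, or 5" part of the question has a well-known deterministic answer in rendezvous (highest-random-weight) hashing: every peer can independently compute the same replica set from the content ID alone, with no coordinator. A toy Python sketch - the node names, the content ID, and k are all made up, and a real system would also weight by locality and capacity:

```python
import hashlib

def rendezvous_replicas(content_id: str, nodes: list[str], k: int = 3) -> list[str]:
    """Rank every node by hash(node + content_id); the top k are the
    replica holders. Any peer recomputes the same answer independently."""
    def score(node: str) -> int:
        return int.from_bytes(
            hashlib.sha256((node + content_id).encode()).digest(), "big"
        )
    return sorted(nodes, key=score, reverse=True)[:k]
```

A nice property here is that when one node disappears, only the content it held gets reassigned (to the next-highest-scoring node); the other replica sets don't move.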
How do we decide which files (websites, content, etc.) should be replicated more consistently? Are they going to have an "author", and that author some sort of ranking? These questions are fairly easy to answer for torrents - people who have a file want to share it, and people who search for a file want it - so the content itself dictates whether it gets replicated. Would we have quotas? How would we distinguish between someone putting up a few MB worth of sites and someone putting up GBs worth? How would we split a big site across several nodes and still keep response times decent? Or is "packing" feasible instead, where a node (user) decides "I want to hold 200 MB of websites" and then holds up to 200 MB of complete sites - and would people even want to make these kinds of choices? Wouldn't they rather just have things work immediately?
Because with torrents, I can and do expect to wait for the content. As a seeder I also choose directly what I want to seed, and I usually do it as a reciprocal act - I downloaded X, so now I'll seed it for a few months since it doesn't cost me anything directly.
Whereas with websites and other content that is expected to be delivered in a few milliseconds, wouldn't we end up with people setting up "server" rigs and selling that space to people who want a 100%-uptime guarantee for their content/websites?
Discoverability - hashing content is fine, but we still need to discover that content, otherwise it's worthless. So we would need to keep gigantic hash tables (probably partitioned smartly so they stay manageable, with some clever way of figuring out who holds what, like torrent trackers do), but we would also need to keep the content searchable. It's not like we can ditch the traditional human-searchable part of the web - again, I'm not sure this would end up being a more efficient way of organising it.
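On the "gigantic hash tables" point: distributed hash tables like Kademlia (used by trackerless BitTorrent and IPFS) partition exactly this way - no node holds the whole table; the pointers for a key live only on the few nodes whose IDs are closest to it by XOR distance. A rough sketch of just the assignment rule, with invented peer names and k (real Kademlia layers iterative routing on top of this):

```python
import hashlib

def _id(name: str) -> int:
    # Node names and content keys are mapped into the same 256-bit ID space.
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def responsible_nodes(content_key: str, nodes: list[str], k: int = 2) -> list[str]:
    """Kademlia-style rule: the k nodes whose IDs are XOR-closest to the
    key's ID store the pointer (i.e. who actually holds the content)."""
    key = _id(content_key)
    return sorted(nodes, key=lambda n: _id(n) ^ key)[:k]
```

So the lookup table is spread evenly across the swarm, and finding who holds a hash costs O(log N) hops rather than a global index - which answers the "manageable partitions" half of the problem, though not the human-searchable half.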
I'm intrigued by it but at the same time it just feels like an impossible task…
@amnu3387 I think that rather than "everyone is a seeder" we'll move to "everyone could become a seeder", where the network (the "fog", as one article called it) is still managed by a smaller group of beefy servers rather than everyone's phones and tablets, but anyone is free to join the network with whatever hardware they have lying around.
Of course, that could be added as an extra speed improvement as well: Netflix et al. are actively pursuing torrent-based systems right now to improve their scalability (their users share downloaded parts of a movie with other users while watching, to reduce load on the central servers).
As a matter of fact, there is a lot of content on the BitTorrent network right now that can be streamed while downloading without significant delays.
The same way as when? As we do now? I think we'd end up duplicating more, and that would be a good thing, because it would make it harder (if not virtually impossible) to (maliciously) alter or remove previously published content.
You might like to look at the Interplanetary File System (IPFS). It is by no means the only type of system that could be built - there are already a couple of similar projects (different design decisions but similar usage intentions) - but it might give you an example of where we currently stand. A couple of the issues you raise have already been solved.
One last thing I'll reply to is the problem of "one person taking up a lot of space": most of these decentralized file storage systems solve it by creating monetary incentives, where the parties that want to host content pay a tiny amount of some (virtual) currency to the parties that supply storage space.
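A toy sketch of that incentive loop - everything here is invented for illustration; real systems (e.g. Filecoin, which sits in the same ecosystem as IPFS) settle through a blockchain and storage proofs, not an in-memory dict of balances:

```python
from collections import defaultdict

class ToyStorageMarket:
    """Illustrative only: a central ledger standing in for the on-chain
    settlement a real decentralized storage market would use."""

    def __init__(self, price_per_mb_hour: float = 0.001):
        self.price = price_per_mb_hour          # hypothetical flat rate
        self.balances = defaultdict(float)      # party -> token balance

    def settle(self, publisher: str, host: str,
               megabytes: float, hours: float) -> float:
        # The publisher pays the host in proportion to space * time held.
        fee = megabytes * hours * self.price
        self.balances[publisher] -= fee
        self.balances[host] += fee
        return fee
```

Because hosting is metered per byte-hour, someone publishing a few MB of sites pays almost nothing while someone publishing GBs pays proportionally - which is one answer to the quota question raised earlier in the thread.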
Hadn't heard of it, will look into IPFS.