Use of resilient data storage for serving static data

Joined
Jun 16, 2019
Messages
2
Some projects aim to create a resilient, shared data store. I know of IPFS (https://ipfs.io/) and SWARM (https://swarm.ethereum.org/), but others may exist.
These new protocols could be used to distribute static data (images, JavaScript, ...).

Clients/users could then use them to automatically offload data from the main server, alleviate network bandwidth (sharing it with other clients near them), and reduce possible outages if used without a CDN.

Of course, this modification could/must be added incrementally, as an option, and in parallel with the standard way (HTTPS) of distributing the content (testing whether the browser supports direct ipfs/... URLs, or simply offering an option in the profile).
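As a sketch of that fallback idea: a client could try a public IPFS gateway first and fall back to the canonical HTTPS URL if the gateway fails. The CID, gateway, and origin names below are made-up examples, not anything MangaDex actually exposes.

```python
# Sketch: ordered list of candidate URLs for one static asset.
# "QmExampleCid" and "example.org" are hypothetical placeholders.

def candidate_urls(cid: str, path: str, origin: str,
                   gateway: str = "https://ipfs.io") -> list[str]:
    """Prefer the IPFS gateway copy, fall back to the standard HTTPS origin."""
    return [
        f"{gateway}/ipfs/{cid}",    # decentralized copy, if reachable
        f"https://{origin}{path}",  # canonical HTTPS fallback
    ]

urls = candidate_urls("QmExampleCid", "/covers/123.jpg", "example.org")
# urls[0] is the gateway URL, urls[1] the HTTPS fallback
```

The client would simply fetch each URL in order and stop at the first success, so users whose browsers (or networks) cannot reach IPFS see no difference.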
 
Group Leader
Joined
Jan 29, 2018
Messages
829
One possibly simpler way could be to develop some kind of MangaDex server extension that people can run on their own servers to sync with MangaDex. This would enable scanlation groups and enthusiasts to host their own releases, and if there are any issues, things can quickly be decentralized.
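At its core, such a sync extension could compare an upstream manifest (path → content hash) against the mirror's local state and download only what is missing or stale. A minimal sketch, with hypothetical paths and hash values:

```python
# Sketch: decide which files a mirror must (re)download.
# Paths and hash strings are illustrative placeholders.

def files_to_sync(upstream: dict[str, str], local: dict[str, str]) -> list[str]:
    """Return paths that are missing locally or whose hash differs upstream."""
    return sorted(p for p, h in upstream.items() if local.get(p) != h)

todo = files_to_sync(
    {"ch1/p1.png": "aaa", "ch1/p2.png": "bbb"},  # upstream manifest
    {"ch1/p1.png": "aaa", "ch1/p2.png": "old"},  # mirror's local state
)
# todo == ["ch1/p2.png"]
```

Running this on a timer (plus serving the synced directory over HTTPS) is essentially the whole extension.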
 
Joined
Jun 16, 2019
Messages
2
I was thinking of these projects since they have already (I hope) worked out the fine details.

To be sure I understand your proposal:
- chosen people (uploaders) could run a piece of software (or a server configuration plus a synchronization tool) on their own hardware;
- this software would regularly cache specific data and publish it;
- these people could cache only their own releases/uploads.

If I understood your proposal correctly, then orphaned data, old data, and so on would not be protected.
In addition, redundancy would be poor (at most two copies of the data).

The main principle could be extended to "chosen people" who could donate network/disk/... capacity to cache the "orphaned" data.
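One way to get better redundancy for that orphaned data is deterministic assignment: map each item to k volunteer caches, e.g. via rendezvous hashing, so every volunteer can independently compute what it should hold. A sketch under that assumption (volunteer names and the item label are placeholders):

```python
import hashlib

def assign_replicas(item: str, volunteers: list[str], k: int = 3) -> list[str]:
    """Rendezvous hashing: deterministically pick k volunteers per item."""
    scored = sorted(
        volunteers,
        key=lambda v: hashlib.sha256(f"{item}:{v}".encode()).hexdigest(),
    )
    return scored[:k]

replicas = assign_replicas("orphan-chapter-42", ["m1", "m2", "m3", "m4", "m5"])
```

With k = 3 every item gets three independent copies, and adding or losing a volunteer only reshuffles the items that mapped to it.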
 