Improvements to data saver option, servers and recruitment for a dev to work on...
Holo OP (Administrator)
@leaflady
An opt-in ad system would require a significant number of people opting in to generate any meaningful income. Ads have been considered and remain a last resort, as we have not exhausted all other options.

@RoadMovieToBerlin link me a chapter with this issue and we'll investigate.
Avatar
@leaflady

From what I read in the FF and Chromium bug trackers about JPEG XL, the devs are not hyped at all. There would need to be a major commitment behind that format.

My rough guess: we will not see that image type within the next two years.

WebP is here, and I think nginx can already be built with plugins for recoding on the fly.

80 > 20: the 20% or less on iPhones can still get png/jpg. Nobody is talking about cutting them off.
Avatar
This option doesn't appear to exist for me on mobile. I still see it listed where the account settings are, not the reader settings. In my account settings it's still on, but loads are still painfully slow and the URL still directs to /data/ rather than /data-saver/.
Avatar
Just noticed that after the update, data saver is only on the default reader, not the legacy reader... at least not from the reader settings (it's still available in user settings). You'd probably get more users to use data saver if it were available on both reader versions.

Holo OP (Administrator)
The data saver control on the user settings page controls the legacy reader.

The data saver control in the default reader settings controls the default reader.
Avatar
@Holo It’s happened with every chapter of every single manga I’ve tried it on
Avatar
I'm relatively new to MangaDex, but a friend pointed me to this thread and I believe I may be able to add something useful to this.

I believe that adding something like WebTorrent to the mix could reduce the load on the CDN, if the implementation were primarily client-side.
I'm quite biased, as I've worked on the project in the past (as a contributor), but from what I can see of the data-saver CDN setup, this would slot in quite well with the web-seed functionality.
As the CDN appears to be split per chapter by hash, using something like a .torrent per chapter would work quite well, with the ability to fine-tune pre-loading per gallery using the library.

WebTorrent works by using WebRTC in browsers which support it (mainly Chromium-based, e.g. Chrome, Edge (new version), Opera, plus Firefox), but I'm unaware how much of the current userbase this covers. With the mentioned web-seed implementation, clients which don't support the WebRTC P2P functionality will still use the HTTP web seed as a fallback and function as expected using the CDN. Another key benefit of using WebRTC is the upcoming support in libtorrent, which would allow desktop torrent clients such as qBittorrent and Deluge to also contribute to the distribution of each chapter to browser WebTorrent clients; torrent clients like Vuze can already do this.
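To make the fallback concrete, here is a minimal sketch of how per-chapter web-seed URLs might be built. The `/data-saver/<hash>/<file>` URL scheme and the `cdnBase` parameter are assumptions for illustration, not the site's real layout.

```javascript
// Sketch only: a per-chapter .torrent would embed these plain HTTP URLs
// as web seeds, so browsers without WebRTC support fall back to ordinary
// CDN fetches while WebRTC-capable browsers can also pull from peers.
function webSeedUrls(cdnBase, chapterHash, pageFiles) {
  return pageFiles.map(name => `${cdnBase}/data-saver/${chapterHash}/${name}`)
}
```

With the actual WebTorrent library, a web seed can also be attached to an existing torrent at runtime via `torrent.addWebSeed(url)`.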

The main disadvantages of using something like WebTorrent are the additional processing per chapter (due to the page reload caused by navigating between chapters), the additional delay before the first image of each chapter loads, the extra work needed to generate a .torrent file per chapter, and the performance hit of verifying the hash of each image.

Again, I apologize if this sounds like a sales pitch, but I believe this would be an ideal use case for the project.
I can go into further detail on how I would see this implemented if this sounds like something which piques interest.
Avatar
Such great thinking with the data saver! Since so many people are browsing from home, my ISP has been seriously overworked, and getting on MD is painful at best - think back, you weebs, to the days of the 56K dial-up modem, and you'll feel the same pain. The data saver has completely changed the game here; I'm back to happy leeching. A big thank you, all.
Avatar
Depending on how long it takes to spin up web torrent, you could try to P2P load everything but the first few images.
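A rough sketch of that idea, assuming a hypothetical `pages` list of filenames; the cut-off of three eagerly fetched pages is arbitrary.

```javascript
// Fetch the first few pages straight from the CDN while the torrent
// client spins up, and leave the remainder to P2P distribution.
function splitPages(pages, eagerCount = 3) {
  return [pages.slice(0, eagerCount), pages.slice(eagerCount)]
}
```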
Holo OP (Administrator)
@SilentBot1

We have actually tested webtorrent before like a year ago, and concluded that it wouldn't really work for older chapters, because not many people are reading them!

Webtorrent would be perfect for new chapters, since loads of people read them, but we don't have any issues with new chapters at the moment.
Avatar
@holo

Ah awesome, I didn't know about that, so sorry for suggesting this again!

It is possible, using something like `indexeddb-chunk-store` with WebTorrent, to create storage which persists across sessions, allowing previously visited chapters to keep seeding. The optimal use case for this is an SPA where page reloads are minimized, since re-creating the client, re-importing the torrents and re-creating the peers are all expensive; again, though, this would not really help the less frequently viewed chapters.
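As a hedged sketch of that wiring: `webtorrent` and `indexeddb-chunk-store` are real npm packages, but the usage shown here is illustrative only, not tested against the site.

```javascript
// Passing an IndexedDB-backed chunk store to WebTorrent makes downloaded
// pieces survive page reloads, so revisited chapters can keep seeding.
function persistentTorrentOpts(IdbChunkStore) {
  return { store: IdbChunkStore }
}

// In the browser, roughly:
//   const WebTorrent = require('webtorrent')
//   const IdbChunkStore = require('indexeddb-chunk-store')
//   const client = new WebTorrent()
//   client.add(magnetURI, persistentTorrentOpts(IdbChunkStore), torrent => {
//     /* render pages as pieces verify */
//   })
```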

For dedicated servers, as mentioned, the H@H network has some additional features which are nice and promote running a client. One of the key features I've taken advantage of in the past is the ability to download a copy of a specific gallery to your own H@H client. This doubles as forcing your H@H instance to cache the gallery (making sure the download isn't wasted), while also copying it to a separate location so data hoarders can keep local copies of their favourite galleries.

A re-implementation of H@H or a similar client would be optimal, as from what I can see there is currently a 0.01% cache-miss rate. However, as the server-side code is private, this would require quite a bit of reverse engineering, plus a feature that would incentivise people to host a client if implemented.

The core functionality of an H@H client is quite simple: there is a 110-second check-in to a set address, which returns all the ranges (125 MB collections of files, each 1/65535 of the entire site) that your specific client should cache (as determined by the H@H RPC server). At all times your client is expected to be able to serve all of these files, requesting on the fly any individual files in its ranges that it doesn't yet possess. Using ranges makes it much less intensive for the RPC server to allocate these files, and allows it to quickly and easily redirect requests for each image to a client it knows should have them (based on allocated ranges), while attempting to ensure no single client gets overloaded.

The inner workings of how these ranges are calculated, how they are allocated by the RPC server, and how their weighting is calculated are the key challenges which would need to be solved to re-implement the H@H client and RPC server, beyond the need to encourage users to start using the application.

This is something I've been interested in for the past 6 years that I've run an H@H client, so I'm going to start investigating it a little further and help where I can if other people are interested in re-implementing the H@H client and RPC server.

Edit: Sorry for the pings, I attempted to space them out but it appears not all did!
Holo OP (Administrator)
@SilentBot1 You seem to have a fair amount of insight into how H@H works - I suggest you come on Discord and discuss this with the devs!
Avatar
Turned on my data saver 👍
Avatar
@codetaku Turn on the "Display advanced settings" option and you will see the Data Saver option in the "Other settings" section.
Avatar
I don't get people suggesting JPEG XL when WebP "only" supporting 80% of browsers is already a big problem according to the site admins.

Sure, it might work in the future, but does anything support it right now?
Avatar
JXL isn't even published yet. There is 0% support for an unpublished standard. It's naturally an idea floated for future development.
Avatar
@RoadMovieToBerlin dunno if this is the same, but I've noticed something similar on chapters without data saver if I hit the pre-load button and come back later after everything has loaded. I've noticed that occasionally if you move up a page, the image doesn't change right away, but if you leave it for about 5 seconds it updates.
Avatar
I don't see why there is resistance to WebP; it's pretty much the best widely supported format, and even iOS people can use it if they use Chrome.

It's not like we are asking for the FLIF format.

How about just polyfilling WebP for iOS? Have WebP converted to PNG in real time via canvas. It's just more work than simply serving JPG for iOS only.

Such as this and others:
https://github.com/chase-moskal/webp-hero

PS: I really want to voice a complaint about how petty Apple is. Safari has WebP support built in, they just don't enable it. To be more exact, the VP8 decoder is only enabled for WebRTC chats. In other words, everything is already in the browser; they simply don't use it.
Avatar
Another vote for WebP; it's a great format. As for Safari, what I've done in the past is check whether the format is included in the Accept header, and dynamically serve it to browsers that support it. I believe there are Nginx modules that do this, and it's not too difficult to implement yourself.
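For illustration, a hedged nginx sketch of that Accept-header negotiation; the paths are placeholders, and it assumes a pre-generated `.webp` sibling exists next to each image rather than on-the-fly recoding.

```nginx
# In the http context: pick a ".webp" suffix only when the browser
# advertises WebP support in its Accept header.
map $http_accept $webp_suffix {
    default       "";
    "~image/webp" ".webp";
}

# In the server context: try foo.jpg.webp first for WebP-capable
# browsers, fall back to foo.jpg for everyone else (e.g. Safari).
location ~* \.(jpe?g|png)$ {
    add_header Vary Accept;
    try_files $uri$webp_suffix $uri =404;
}
```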

I'm not saying that this is the case for MD though, I can't imagine what running a website of this size (and in these conditions) is like. Thank you Holo and everyone involved for your hard work.
Avatar
I've turned on data saver (for now) because I got tired of the continuing super lag in loading chapters (I'm in Aus). Though even with it on, I'm still getting some horrendous slowness on occasion (weirdly enough, mainly on pages other than actual chapter pages).