Certainly there's a complexity argument to be made, because you don't actually need compression just to hold a bundle of files. But these days zip just works.
The perf measurement charts also make no sense. What exactly are they measuring?
Edit:
This reddit post seems to go into more depth on performance: old.reddit.com/r/selfhosted/comments/1qi64pr/comment/o0pqaeo/
And what's the point of aligning the files to be "DirectStorage-ready" if they're going to be JPEGs, a format that, as far as I know, DirectStorage doesn't understand?
And the author says it's a problem that "Metadata isn't native to CBZ, you have to use a ComicInfo.xml file.", but... that's not a problem at all?
The whole thing makes no sense.
Note that he doesn't quite say, when asked point-blank how much AI he used in his erroneous microbenchmarking, that he didn't use AI: https://reddit.com/r/selfhosted/comments/1qi64pr/i_got_into_...
Which explains all of it.
Kudos to /u/teraflop, for having infinitely more patience with this than I would.
It used to be a decent resource to learn about what services people were self hosting. But now, many posts are variations of "I've made this huge complicated app in an afternoon, please install it on your server". I've even seen a vibe-coded password manager posted there.
Reputable alternatives to the software posted there exist a huge amount of the time. Not to mention audited alternatives in the case of password managers, or even just actively maintained alternatives.
Every new readme, announcement post, and codebase is tailored to achieve maximum bloviation.
No substance, no credibility, just vibes.
Do the emojis not show for you?
[edit]
If I download the README I can see them in every program on my system except Firefox. I previously had issues with CJK only not displaying in Firefox, so there's probably some workaround specific to it...
I didn't even realize random access is not possible, presumably because readers just fake it by scanning linearly or loading everything into memory at once, and comic sizes are peanuts compared to modern memory.
I suppose this becomes more useful if you have multiple issues/volumes in a single archive.
I don’t understand the point of any of this over a minimal subset of PDF (one image per page).
So, like ZIP?
> Uses XXH3 for integrity checks
I don’t think XXH3 is suitable for that purpose. It’s not cryptographically secure and is designed mostly for stuff like hash tables (i.e. relatively small data).
---
For example they make a big deal about each archive entry being aligned to a 4 KiB boundary "allowing for DirectStorage transfers directly from disk to GPU memory", but the pages within a CBZ are going to be encoded (JPEG/PNG/etc) rather than just being bitmaps. They need to be decoded first, the GPU isn't going to let you create a texture directly from JPEG data.
Furthermore the README says "While folders allow memory mapping, individual images within them are rarely sector-aligned for optimized DirectStorage throughput" which ... what? If an image file needs to be sector-aligned (!?) then a BBF file would also need to be, or else the 4 KiB alignment within the file doesn't work. So what is special about the format that causes the OS to place its files differently on disk?
Also in the official DirectStorage docs (https://github.com/microsoft/DirectStorage/blob/main/Docs/De...) it says this:
> Don't worry about 4-KiB alignment restrictions
> * Win32 has a restriction that asynchronous requests be aligned on a
> 4-KiB boundary and be a multiple of 4-KiB in size.
> * DirectStorage does not have a 4-KiB alignment or size restriction. This
> means you don't need to pad your data which just adds extra size to your
> package and internal buffers.
Where is the supposed 4 KiB alignment restriction even coming from? There are zip-based formats that align files so they can be mmap'd as executable pages, but that's not what's happening here, and I've never heard of a JPEG/PNG/etc image decoder that requires aligned buffers for the input data.
Is the entire 4 KiB alignment requirement fictitious?
---
The README also talks about using xxhash instead of CRC32 for integrity checking (the OP calls it "verification"), claiming this is more performant for large collections, but this is insane:
> ZIP/RAR use CRC32, which is aging, collision-prone, and significantly slower
> to verify than XXH3 for large archival collections.
> [...]
> On multi-core systems, the verifier splits the asset table into chunks and
> validates multiple pages simultaneously. This makes BBF verification up to
> 10x faster than ZIP/RAR CRC checks.
CRC32 is limited by memory bandwidth if you're using a normal (i.e. SIMD) implementation. Assuming 100 GiB/s throughput, a typical comic book page (a few megabytes) will take a few tens of microseconds. And there's no data dependency between file content checksums in the zip format, so for a CBZ you can run the CRC32 calculations in parallel for each page, just like BBF says it does.

But that doesn't matter, because to actually check the integrity of archived files you want to use something like SHA-256, not CRC32 or xxhash. Checksum each archive (not each page), store that checksum as a `.sha256` file (or whatever), and now you can (1) use normal tools to check that your archives are intact, and (2) record those checksums as metadata in the blob storage service you're using.
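For what it's worth, here's a minimal sketch in Python (the CLI and function names are just for illustration) of a parallel per-page CRC check over a plain CBZ, to show there's nothing in the ZIP format stopping you from doing exactly what BBF claims is special about it:

```python
import sys
import zipfile
import zlib
from concurrent.futures import ThreadPoolExecutor


def check_page(archive_path: str, name: str) -> bool:
    """CRC32 one page; each task opens its own handle so reads don't contend."""
    try:
        with zipfile.ZipFile(archive_path) as zf:
            expected = zf.getinfo(name).CRC
            crc = 0
            with zf.open(name) as page:
                while chunk := page.read(1 << 20):
                    crc = zlib.crc32(chunk, crc)
            return crc == expected
    except zipfile.BadZipFile:
        # zipfile also verifies the CRC itself while reading and raises on mismatch
        return False


def verify_cbz(archive_path: str) -> bool:
    with zipfile.ZipFile(archive_path) as zf:
        names = [i.filename for i in zf.infolist() if not i.is_dir()]
    # No data dependency between entries, so the per-page checks can run concurrently
    with ThreadPoolExecutor() as pool:
        return all(pool.map(lambda n: check_page(archive_path, n), names))


if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "OK" if verify_cbz(path) else "CORRUPT")
```

And the per-archive approach needs no code at all: `sha256sum *.cbz > SHA256SUMS` once, then `sha256sum -c SHA256SUMS` whenever you want to verify.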
---
The Reddit thread has more comments from people who have noticed other sorts of discrepancies, and the author is having a really difficult time responding to them in a coherent way. The most charitable interpretation is that this whole project (supposed problems with CBZ, the readme, the code) is the output of an LLM.