By using a Stream, the caller can compose it within the confines of Plug’s request/response model and serve the content of the resultant Zip archive in a streaming fashion. This allows fast delivery of a Zip archive consisting of many disparate parts hosted in different places, without having to first spool all of them to disk. The generated archive uses Zip64 and works with individual files that are larger than 4 GB.
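For example, in a Phoenix or Plug handler, usage looks roughly like this (a sketch adapted from the documented usage; the entry options and the `Packmatic.build_stream/1` / `Packmatic.Conn.send_chunked/3` names follow the README and may differ across versions, and the file names here are made up):

```elixir
# Build a Zip stream from entries hosted in different places, then send it
# chunk by chunk over the connection, so the download starts immediately.
entries = [
  [source: {:file, "/tmp/report.pdf"}, path: "report.pdf"],
  [source: {:url, "https://example.com/video.mp4"}, path: "video.mp4"]
]

entries
|> Packmatic.build_stream()
|> Packmatic.Conn.send_chunked(conn, "download.zip")
```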
The Documentation contains a more detailed explanation of why it was built.
Packmatic 1.0.0 is now released. This release includes ahead-of-time verification enhancements for the Manifest, among other things. Changelog has been updated.
I am just wondering if you were aware of https://github.com/ananthakumaran/zstream when you started the project. It seems like there is a major overlap in features between the two.
Yes. It does not do Unicode names, Zip64, or provide easy Plug integration. Nor does it seem to have support for URL-based Sources or the Custom Source which was recently added. I have seen a few other libraries like these as well.
The reason for building Packmatic was to solve a specific implementation issue outlined in the Rationale section of the README. Nevertheless, the availability of these libraries made implementing Packmatic easier. I noticed that your library is not acknowledged in the README, and I will fix that in the next release.
Thanks for releasing it! I am currently working on a project which requires some Zip file processing. :zip works fine at the moment, but in the near future we will be generating Zip files on the fly from URL-based remote files.
I feel your library is a perfect fit. So glad to know about it, and I will definitely give it a try. Thank you for sharing!
For our specific use case we needed to generate very large Zips of video data without worrying about disk space. The goal was to create a multipart upload in S3 as the Zip was created, avoiding disk writes altogether, and to limit memory use to that of the largest video file. This was an unusual enough use case that I created a new library for it: zap.
Due to the specifics of the use case it doesn’t compress at all (though it could quite easily). Through the use of streams it accumulates inputs and periodically emits chunks of output suitable for S3. Since switching to zap we haven’t had any disk space issues.
Edit: I’m sharing in this thread in case others find themselves in a similar situation. Zap has a specific use case that is much narrower than packmatic, but we probably could have made packmatic work for our needs.
I shall add yours to the list as well. For S3 Multipart Uploads, my preference would be to use a separate component to accumulate and buffer chunks, since S3 caps the number of parts at 10,000 and requires every part except the last to be at least 5 MiB.
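For example, that buffering can be done with plain Stream functions before handing the parts to ExAws (a sketch: `zip_stream`, the bucket, and the key are hypothetical, and it assumes each element of the enumerable passed to `ExAws.S3.upload/4` becomes one uploaded part):

```elixir
# Re-chunk an arbitrary stream of iodata into ~5 MiB binaries, so that every
# part except the last satisfies S3's minimum part size, then run the
# multipart upload.
part_size = 5 * 1024 * 1024

zip_stream
|> Stream.chunk_while(
  <<>>,
  fn chunk, acc ->
    acc = acc <> IO.iodata_to_binary(chunk)
    if byte_size(acc) >= part_size, do: {:cont, acc, <<>>}, else: {:cont, acc}
  end,
  fn
    <<>> -> {:cont, <<>>}
    acc -> {:cont, acc, <<>>}
  end
)
|> ExAws.S3.upload("my-bucket", "archives/download.zip")
|> ExAws.request!()
```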
Further, within Packmatic, the URL Source reads in chunks as well (powered by ibrowse), so it could theoretically process source files that do not fit on the host.
The design rationale was driven by user experience: I wanted the download to start instantaneously, and this could only be achieved by not buffering anything at all before the stream is vended. Once the download starts, the user will wait.
Which versions of Erlang/OTP does the :zip Unicode-filename problem on Windows/macOS apply to?
Up until very recently, :zip just couldn’t handle Unicode filenames at all, but I think the fix has now been released as of Erlang/OTP 22.2. Is there an additional problem that is Windows/macOS-specific?
You can work around it by calling :erlang.binary_to_list(name), which should work if the VM is in Unicode filename mode (based on the docs). But it basically just dumps the bytes as they are, so it works on macOS, because the filesystem uses UTF-8, but it doesn’t on Windows.
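For illustration, a minimal sketch of that workaround (archive name and file contents are made up):

```elixir
# :zip takes names as charlists; binary_to_list/1 dumps the UTF-8 bytes
# as-is, which decodes correctly on a UTF-8 filesystem (macOS) but not
# on Windows.
name = "naïve.txt"
{:ok, _path} = :zip.create(~c"archive.zip", [{:erlang.binary_to_list(name), "file body"}])
```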