Hi,
Thanks for this awesome project!
A little request that might help my situation. I have a 50MB gzip file containing 180MB of text data that I'd like to quickly seek() and tell() within, using less than 128MB of running RAM.
The problem with lz4.dumps() is that it requires reading the full 180MB into memory before producing any lz4-compressed output.
It would be cool if this library supported something like lz4.dump(iter_obj), which would read data iteratively from iter_obj or any file-like object, so the final lz4-compressed data could be built up in memory without having to load the full original input first.
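To illustrate the kind of streaming API I mean, here is a minimal sketch using the stdlib zlib incremental compressor as a stand-in (the proposed lz4.dump name is hypothetical; zlib is used here only because it ships with Python and exposes the same chunk-at-a-time pattern):

```python
import zlib

def compress_stream(chunks):
    """Incrementally compress an iterable of byte chunks.

    Stand-in for the requested lz4 behaviour: only one chunk is
    held in memory at a time, never the full uncompressed input.
    """
    comp = zlib.compressobj()
    out = bytearray()
    for chunk in chunks:
        out += comp.compress(chunk)  # emit compressed bytes as we go
    out += comp.flush()  # finalize the stream
    return bytes(out)

# Feed data in small pieces; peak memory stays near the chunk size,
# not the total uncompressed size.
compressed = compress_stream(b"x" * 1024 for _ in range(100))
```

The same pattern applied to lz4 would let the 180MB of text be fed in small chunks, keeping peak RAM well under the 128MB budget.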
Thanks!