Hi all,
My application has to send data over HTTP to clients.
The data is retrieved from a list of files (essentially time series) accessible via NFS.
A single client request may mean touching hundreds of files.
The result has to be delivered in a consistent order so that the data cannot get mixed up.
Therefore, to mitigate the latency of reading each file sequentially one after another, my idea is to copy all the needed files locally in parallel, and then stream the data to the client sequentially, which guarantees that the time series are delivered in a predictable order.
Basically, I can:
- Copy the files in parallel into a local work directory
- Then stream all the local data with Plug.Conn.chunk/2 (see the sketch below)
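Roughly, the current code does something like this (a simplified sketch; the module name, work_dir, and the concurrency level are placeholders I made up):

```elixir
defmodule CopyThenStream do
  # Copy every NFS file into a local work directory in parallel,
  # then stream the local copies to the client in filename order.
  def copy_and_stream(conn, nfs_paths, work_dir) do
    local_paths =
      nfs_paths
      |> Task.async_stream(
        fn src ->
          dest = Path.join(work_dir, Path.basename(src))
          File.cp!(src, dest)
          dest
        end,
        max_concurrency: 16,   # arbitrary; tuned by hand
        timeout: :infinity
      )
      |> Enum.map(fn {:ok, dest} -> dest end)

    conn = Plug.Conn.send_chunked(conn, 200)

    local_paths
    |> Enum.sort()
    |> Enum.reduce_while(conn, fn path, conn ->
      # Reads each whole local copy into memory before chunking;
      # fine for a sketch, wasteful for very large files.
      case Plug.Conn.chunk(conn, File.read!(path)) do
        {:ok, conn} -> {:cont, conn}
        {:error, :closed} -> {:halt, conn}
      end
    end)
  end
end
```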
But this is still not satisfactory. What I would like to do instead is:
- Load the NFS files into memory in parallel, each from a designated start offset to an end offset
- Stream the data in the order of the filenames, roughly as in the sketch below
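Here is a rough sketch of what I have in mind (the {path, start, stop} tuple shape and the concurrency level are my own assumptions, not fixed yet):

```elixir
defmodule RangeThenStream do
  # Read a byte range from each NFS file in parallel, then emit the
  # chunks to the client sorted by filename.
  def load_and_stream(conn, file_ranges) do
    conn = Plug.Conn.send_chunked(conn, 200)

    file_ranges
    |> Enum.sort_by(fn {path, _start, _stop} -> path end)
    |> Task.async_stream(
      fn {path, start, stop} ->
        # :raw skips the intermediate Erlang I/O process, and
        # :file.pread/3 reads only the requested range instead of
        # pulling the whole file over NFS.
        {:ok, fd} = :file.open(path, [:read, :binary, :raw])
        # Assumes the range is valid; :file.pread returns :eof otherwise.
        {:ok, data} = :file.pread(fd, start, stop - start)
        :ok = :file.close(fd)
        data
      end,
      max_concurrency: 16,   # arbitrary; to be tuned against the NFS server
      ordered: true,         # results come back in input order (the default)
      timeout: :infinity
    )
    |> Enum.reduce_while(conn, fn {:ok, data}, conn ->
      case Plug.Conn.chunk(conn, data) do
        {:ok, conn} -> {:cont, conn}
        {:error, :closed} -> {:halt, conn}
      end
    end)
  end
end
```

Since Task.async_stream preserves input order, the response would be written sorted by filename even though the reads overlap, with up to max_concurrency reads in flight ahead of the chunk currently being sent.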
Do you have any hints on how to achieve this?