This repository was archived by the owner on Sep 20, 2023. It is now read-only.

Add sendfile_max_chunk #8

@tumb1er

Description

We recently had a very nice evening with large corrupted files on a specific hardware setup.

Here is an issue describing a situation like ours:

  • We have a virtual machine running Linux with network-attached disk storage
  • The network is relatively faster than the disks
  • The files being served are ~10-50 GB

Like in that issue, we had curl terminating with an incomplete file read and a "client timed out" message in the nginx logs. The nginx debug logs show 2 GB chunks being sent via sendfile. Because the network is faster than the disks, sendfile does not block, and a single 2 GB sendfile call lasts more than a minute whenever the disk read speed is below ~34 MB/s (2 GB / 60 s). One minute is the default value of nginx's send_timeout.
After adding the sendfile_max_chunk setting, these sendfile calls become smaller and more frequent, so the send_timeout is never hit.
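The threshold above is a back-of-the-envelope calculation; a short sketch of the arithmetic (the 2 GB chunk size and 60 s timeout are the values observed in the debug logs and the nginx default, respectively):

```python
# Minimum sustained disk read speed needed for a single sendfile() call
# pushing one 2 GiB chunk to finish within nginx's default send_timeout.

CHUNK_BYTES = 2 * 1024**3   # 2 GiB sendfile chunk seen in the debug logs
SEND_TIMEOUT_S = 60         # nginx send_timeout default (60 s)

min_mb_per_s = CHUNK_BYTES / SEND_TIMEOUT_S / 1024**2
print(f"required disk throughput: ~{min_mb_per_s:.0f} MB/s")  # ~34 MB/s
```

Any disk slower than this makes the single large sendfile call outlive the timeout, even though data is still flowing.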

We solved our issue by adding `sendfile_max_chunk 1m;` to the nginx configuration, without any other performance tuning. I propose adding some reasonable value to the webdav nginx config.
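A minimal sketch of the proposed change (the surrounding `http` context is assumed; `sendfile_max_chunk` is a real nginx directive, and `1m` is the value we used, not a tuned recommendation):

```nginx
http {
    sendfile           on;
    # Cap each sendfile() syscall at 1 MiB: with slow disks, a single
    # multi-gigabyte sendfile() can run longer than send_timeout
    # (60s by default), causing nginx to abort the response mid-transfer.
    sendfile_max_chunk 1m;
}
```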
