Llamafiler HTTP server stability/reliability fixes #767
Conversation
Previously, when all worker threads were busy, the code would forcibly cancel the oldest active connection to make room for a new one. This approach:

* Prematurely terminates in-flight requests, leading to broken or truncated responses.
* Forces cleanup of file descriptors mid-stream, causing spurious "Illegal seek"/EBADF errors.
* Circumvents the TCP backlog queuing provided by the OS, instead dropping live clients and degrading the user experience.

By deleting this block, we let the kernel's listen backlog naturally queue incoming connections until a worker becomes available. As a result:

* Active requests are no longer killed arbitrarily under load.
* File descriptors aren't closed unexpectedly, eliminating the related "static asset pread" failures.
* The server benefits from standard TCP connection handling without manual interference.

This change improves reliability under high concurrency and avoids unintended side effects from thread cancellation.
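To illustrate what the kernel already provides here, consider a minimal accept loop; the port number and the loop itself are illustrative and not taken from the llamafiler source. Once listen() is given a backlog, completed connections simply wait in the kernel queue until the process calls accept(), so a busy server never has to cancel an in-flight request to make room.

```cpp
// Minimal sketch: with listen(fd, backlog), the kernel queues completed
// TCP handshakes until we accept() them, so a busy server can simply
// delay accepting instead of cancelling an active connection.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);  // illustrative port
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(fd, SOMAXCONN);  // completed connections queue here in the kernel
    for (;;) {
        int client = accept(fd, nullptr, nullptr);  // picked up when we're ready
        if (client == -1) continue;
        // ... hand `client` off to a worker thread; nothing needs cancelling ...
        close(client);
    }
}
```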
The current send_binary() implementation treats any write() that returns less than the requested byte count as a failure, immediately setting close_connection_ = true and returning false.

This is incorrect behavior. Per POSIX, write() may legitimately return fewer bytes than requested when:

- The socket send buffer is nearly full
- Network congestion causes backpressure
- Large write sizes exceed kernel buffers
- The system is under memory pressure

These partial writes are normal, not error conditions. The current code incorrectly drops active connections during normal operation, especially under load, when partial writes become more common.

This commit replaces the single write() call with a proper write loop that:

- Accumulates partial writes until all data is sent
- Retries on EAGAIN/EWOULDBLOCK without closing the connection
- Only treats actual errors (not partial writes) as failures
- Maintains the existing error logging behavior

This fix prevents spurious connection drops during large responses or when the network is congested, significantly improving reliability under production load.

Fixes: Connection drops during static file serving
Fixes: Increased failure rate under high load
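The general shape of such a loop, sketched here with an illustrative name and signature (send_all() is not the actual send_binary() code from this PR), might look like:

```cpp
// Sketch of a write loop that tolerates partial writes and EAGAIN.
// send_all() and its signature are illustrative, not the PR's code.
#include <cerrno>
#include <cstddef>
#include <unistd.h>

bool send_all(int fd, const char* data, size_t len) {
    size_t sent_total = 0;
    while (sent_total < len) {
        ssize_t sent = write(fd, data + sent_total, len - sent_total);
        if (sent == -1) {
            if (errno == EINTR)
                continue;  // interrupted by a signal; just retry
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                continue;  // send buffer full; retry the write
            return false;  // real error: log and let the caller close
        }
        sent_total += static_cast<size_t>(sent);  // accumulate partial writes
    }
    return true;
}
```

Busy-retrying on EAGAIN keeps the connection alive but burns CPU; the review below suggests waiting in poll() instead.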
The 512-byte buffer size results in excessive system calls when serving files. For a 1 MB file, that means 2048 iterations, each issuing a read() and a write(). Using 16 KB reduces the system call overhead by 32x and better matches typical OS page sizes and network buffer defaults. This should improve throughput for static file serving while maintaining reasonable stack usage.
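For context, a file-serving copy loop along these lines might look as follows; copy_file_to_socket() and the buffer size constant are illustrative, and this is not the PR's exact code:

```cpp
// Sketch: streaming a file to a socket with a 16 KiB stack buffer.
// copy_file_to_socket() is an illustrative name; send_all() is the
// partial-write-safe loop from the sketch above.
#include <cstddef>
#include <unistd.h>

bool send_all(int fd, const char* data, size_t len);  // see previous sketch

bool copy_file_to_socket(int file_fd, int sock_fd) {
    char buf[16384];  // 16 KiB: 32x fewer iterations than a 512-byte buffer
    for (;;) {
        ssize_t n = read(file_fd, buf, sizeof(buf));
        if (n == 0) return true;    // end of file
        if (n == -1) return false;  // read error
        if (!send_all(sock_fd, buf, static_cast<size_t>(n)))
            return false;           // partial-write-safe send
    }
}
```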
```cpp
if (sent == -1) {
    if (errno == EAGAIN || errno == EWOULDBLOCK) {
        // no data can be written right now; retry
        continue;
```
The right thing to do here, I think, would be to call poll() and wait until the socket is writable.
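A sketch of that suggestion, with an illustrative helper name (wait_writable() is not llamafiler code):

```cpp
// Sketch of the reviewer's suggestion: instead of spinning on EAGAIN,
// block in poll() until the socket becomes writable, then retry write().
#include <poll.h>

bool wait_writable(int fd) {
    pollfd pfd{};
    pfd.fd = fd;
    pfd.events = POLLOUT;
    return poll(&pfd, 1, -1) == 1;  // block until fd can accept more data
}
```

In the loop above, the EAGAIN/EWOULDBLOCK branch would call wait_writable(fd) before the continue, so the thread sleeps in the kernel until the send buffer drains instead of spinning.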
```cpp
tokens = tokenbucket_acquire(client_.client_ip_);
server_->lock();
dll_remove(&server_->idle_workers, &elem_);
if (dll_is_empty(server_->idle_workers)) {
```
Have you considered increasing the maximum number of threads? One of the design goals with the server is to have it be able to manage its resources. I'd ideally like to see us doing it in a better way than simply giving up on this.
Maybe a good compromise here would be to add a flag that disables this behavior. That would get you unblocked fastest.
- Fix URL prefix normalization to handle double slashes (fixes mozilla-ai#787)
  - Consolidates consecutive slashes (// -> /)
  - Ensures prefix starts with a single slash
  - Removes trailing slash (except for root)
  - Matches old server behavior exactly
- Fix .args file loading order (fixes mozilla-ai#783)
  - Load .args before determining program type
  - Allows --server --v2 flags in .args to work correctly
- Fix connection stability issues (addresses mozilla-ai#767)
  - Remove aggressive client cancellation when workers are busy
  - Let TCP backlog handle connection queuing naturally
  - Fix partial write handling with simple retry logic
  - Increase file transfer buffer from 512B to 16KB

These minimal changes make Server v2 production-ready while maintaining backward compatibility. All fixes follow existing patterns from the old server implementation.
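A rough sketch of the prefix normalization rules listed above (normalize_url_prefix() is an illustrative name, not the identifier used in the PR):

```cpp
// Sketch of URL prefix normalization: collapse consecutive slashes,
// force a single leading slash, and drop a trailing slash except for
// the root prefix.
#include <string>

std::string normalize_url_prefix(const std::string& prefix) {
    std::string out = "/";  // guarantees a single leading slash
    for (char c : prefix) {
        if (c == '/' && out.back() == '/')
            continue;  // consolidate "//" -> "/"
        out.push_back(c);
    }
    if (out.size() > 1 && out.back() == '/')
        out.pop_back();  // strip trailing slash, but keep "/" for root
    return out;
}
```

With these rules, "//v1//chat/" becomes "/v1/chat" and "/" stays "/".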