
Conversation

@vlasky vlasky commented Jun 3, 2025

  1. Remove aggressive "cancel oldest client" logic in Worker::begin()

Previously, when all worker threads were busy, the code would forcibly cancel the oldest active connection to make room for a new one. This approach:

  • Prematurely terminates in-flight requests, leading to broken or truncated responses.
  • Forces cleanup of file descriptors mid-stream, causing spurious "Illegal seek"/EBADF errors.
  • Circumvents the TCP backlog queuing provided by the OS, instead dropping live clients and degrading user experience.

By deleting this block, we let the kernel's listen backlog naturally queue incoming connections until a worker becomes available. As a result:

  • Active requests are no longer killed arbitrarily under load.
  • File descriptors aren't closed unexpectedly, eliminating related "static asset pread" failures.
  • The server benefits from standard TCP connection handling without manual interference.

This change improves reliability under high concurrency and avoids unintended side effects from thread cancellation.
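
For reference, the kernel-side queuing being relied on here is the standard listen() backlog. A minimal sketch of a listener that leans on it (illustrative only, not the server's actual accept path; the function name is made up):

```cpp
// Illustrative listener sketch: with a generous backlog, completed TCP
// connections simply wait in the kernel queue until a worker calls accept(),
// so no live client needs to be cancelled to make room.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int make_listener(int port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd == -1) return -1;
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(port);
  if (bind(fd, (sockaddr *)&addr, sizeof(addr)) == -1 ||
      listen(fd, SOMAXCONN) == -1) {  // the backlog that queues pending clients
    close(fd);
    return -1;
  }
  return fd;  // workers accept() from this fd as they become free
}
```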

  2. Fix partial write handling in send_binary()

The current send_binary() implementation treats any write() that returns
less than the requested byte count as a failure, immediately setting
close_connection_ = true and returning false.

This is incorrect behavior. Per POSIX, write() may legitimately return
fewer bytes than requested when:

  • The socket send buffer is nearly full
  • Network congestion causes backpressure
  • Large write sizes exceed kernel buffers
  • System is under memory pressure

These partial writes are normal, not error conditions. The current code
incorrectly drops active connections during normal operation, especially
under load when partial writes become more common.

This commit replaces the single write() call with a proper write loop
(see the sketch below) that:

  • Accumulates partial writes until all data is sent
  • Retries on EAGAIN/EWOULDBLOCK without closing the connection
  • Only treats actual errors (not partial writes) as failures
  • Maintains the existing error logging behavior

This fix prevents spurious connection drops during large responses or
when the network is congested, significantly improving reliability under
production load.

Fixes: Connection drops during static file serving
Fixes: Increased failure rate under high load
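
The loop described above boils down to the following shape. This is a minimal sketch, with write_all as an illustrative name rather than the actual helper inside send_binary():

```cpp
// Sketch of the partial-write handling described above (illustrative only).
#include <cerrno>
#include <cstddef>
#include <unistd.h>

static bool write_all(int fd, const char *p, size_t n) {
  while (n > 0) {
    ssize_t sent = write(fd, p, n);
    if (sent == -1) {
      if (errno == EAGAIN || errno == EWOULDBLOCK)
        continue;    // transient backpressure: retry, don't drop the connection
      return false;  // a real error; caller logs it and closes
    }
    p += sent;       // partial write: advance past what was sent and keep going
    n -= static_cast<size_t>(sent);
  }
  return true;
}
```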

  3. Increase file transfer buffer from 512 to 16384 bytes

The 512-byte buffer size results in excessive system calls when serving
files. For a 1MB file, this requires 2048 read/write operations.

Using 16KB reduces system call overhead by 32x and better matches typical
OS page sizes and network buffer defaults. This should improve throughput
for static file serving while maintaining reasonable stack usage.
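
A minimal sketch of the copy loop the buffer size affects (illustrative only, not the server's actual transfer code; the function name is made up):

```cpp
// Copy a file descriptor to a socket using a 16 KiB stack buffer.
#include <cerrno>
#include <unistd.h>

bool copy_file_to_socket(int in_fd, int out_fd) {
  char buf[16384];  // was 512; 16 KiB means ~32x fewer read/write syscalls
  for (;;) {
    ssize_t got = read(in_fd, buf, sizeof(buf));
    if (got == 0) return true;    // end of file
    if (got == -1) return false;  // read error
    for (ssize_t off = 0; off < got;) {
      ssize_t sent = write(out_fd, buf + off, static_cast<size_t>(got - off));
      if (sent == -1) {
        if (errno == EAGAIN || errno == EWOULDBLOCK) continue;  // retry
        return false;
      }
      off += sent;  // handle partial writes, as in the previous fix
    }
  }
}
```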

vlasky added 3 commits June 3, 2025 12:51

* Remove aggressive "cancel oldest client" logic in Worker::begin()
* Fix partial write handling in send_binary()
* Increase file transfer buffer from 512 to 16384 bytes

if (sent == -1) {
  if (errno == EAGAIN || errno == EWOULDBLOCK) {
    // no data can be written right now; retry
    continue;

Collaborator

The right thing to do here would be to call poll() I think to wait until it's writable.
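
For illustration, the poll()-based wait the reviewer has in mind could look like the sketch below; the helper name and the fd_ member are assumptions, not the actual code:

```cpp
// Sketch: block until the socket is writable instead of spinning on EAGAIN.
#include <poll.h>

static bool wait_writable(int fd, int timeout_ms) {
  struct pollfd pfd;
  pfd.fd = fd;
  pfd.events = POLLOUT;
  int rc = poll(&pfd, 1, timeout_ms);        // -1 waits indefinitely
  return rc > 0 && (pfd.revents & POLLOUT);
}

// ...and in the write loop:
//   if (errno == EAGAIN || errno == EWOULDBLOCK) {
//     if (!wait_writable(fd_, -1))          // fd_ is an assumed member name
//       return false;
//     continue;
//   }
```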

tokens = tokenbucket_acquire(client_.client_ip_);
server_->lock();
dll_remove(&server_->idle_workers, &elem_);
if (dll_is_empty(server_->idle_workers)) {

Collaborator

Have you considered increasing the maximum number of threads? One of the design goals with the server is to have it be able to manage its resources. I'd ideally like to see us doing it in a better way than simply giving up on this.

Collaborator

Maybe a good compromise here would be this: we can add a flag to disable this. That will get you unblocked fastest.

anivar pushed a commit to anivar/llamafile that referenced this pull request Aug 16, 2025
- Fix URL prefix normalization to handle double slashes (fixes mozilla-ai#787; see the sketch after this commit message)
  - Consolidates consecutive slashes (// -> /)
  - Ensures prefix starts with single slash
  - Removes trailing slash (except for root)
  - Matches old server behavior exactly

- Fix .args file loading order (fixes mozilla-ai#783)
  - Load .args before determining program type
  - Allows --server --v2 flags in .args to work correctly

- Fix connection stability issues (addresses mozilla-ai#767)
  - Remove aggressive client cancellation when workers are busy
  - Let TCP backlog handle connection queuing naturally
  - Fix partial write handling with simple retry logic
  - Increase file transfer buffer from 512B to 16KB

These minimal changes make Server v2 production-ready while maintaining
backward compatibility. All fixes follow existing patterns from the old
server implementation.
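
A minimal sketch of the prefix normalization described in that commit; the function name and signature are illustrative, not the code actually committed:

```cpp
// Normalize a URL prefix: collapse consecutive slashes, ensure a single
// leading slash, and strip any trailing slash except for the root "/".
#include <string>

std::string normalize_url_prefix(const std::string &s) {
  std::string out;
  for (char c : s) {
    if (c == '/' && !out.empty() && out.back() == '/')
      continue;                    // consolidate consecutive slashes (// -> /)
    out += c;
  }
  if (out.empty() || out.front() != '/')
    out.insert(out.begin(), '/');  // ensure the prefix starts with a slash
  if (out.size() > 1 && out.back() == '/')
    out.pop_back();                // remove trailing slash, except for root
  return out;
}
```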