1 change: 1 addition & 0 deletions changelog.d/19062.bugfix
@@ -0,0 +1 @@
Fix a bug introduced in 1.111.0 where failed attempts to download authenticated remote media would not be handled correctly.
2 changes: 1 addition & 1 deletion synapse/http/matrixfederationclient.py
@@ -1719,6 +1719,7 @@ async def federation_get_file(
response, output_stream, boundary, expected_size + 1
Contributor commented:
Do we have tests for this kind of timeout? (just wondering)

It looks like this was previously functional and the only problem was the unexpected logs/error handling, so a high-level integration/end-to-end test wouldn't really have prevented this.

It looks like we already have other tests in tests/http/test_matrixfederationclient.py that ensure we raise a RequestSendFailed in certain scenarios. We could add one of those 🤷
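To illustrate the kind of test being suggested, here is a hypothetical sketch (not Synapse's actual Twisted test harness, and not one of the tests in tests/http/test_matrixfederationclient.py): it uses stdlib asyncio.wait_for as a stand-in for Deferred.addTimeout to show the basic shape of asserting that a stalled body read surfaces as a timeout error.

```python
# Hypothetical sketch only: asyncio.wait_for standing in for
# Deferred.addTimeout(self.default_timeout_seconds, self.reactor).
import asyncio

async def slow_download() -> bytes:
    # Simulates a remote peer that never finishes sending the media body.
    await asyncio.sleep(10)
    return b"media"

async def fetch_with_timeout(timeout: float) -> bytes:
    # addTimeout causes the Deferred to errback with a TimeoutError;
    # asyncio.wait_for raises asyncio.TimeoutError analogously.
    return await asyncio.wait_for(slow_download(), timeout)

def main() -> str:
    try:
        asyncio.run(fetch_with_timeout(0.05))
        return "completed"
    except asyncio.TimeoutError:
        return "timed out"

print(main())  # → timed out
```

A real Synapse test would instead drive a fake reactor forward and assert that the client raises RequestSendFailed, as the existing tests in that file do.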

)
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
multipart_response = await make_deferred_yieldable(deferred)
@MadLittleMods (Contributor) commented on Oct 15, 2025:

Should this be inside the async with self.remote_download_linearizer.queue(ip_address):?

Maybe. We do it in get_file, which is probably what federation_get_file was based on:

async with self.remote_download_linearizer.queue(ip_address):
    # add a byte of headroom to max size as function errs at >=
    d = read_body_with_max_size(response, output_stream, expected_size + 1)
    d.addTimeout(self.default_timeout_seconds, self.reactor)
    length = await make_deferred_yieldable(d)

And although this isn't explained anywhere, I guess the point of self.remote_download_linearizer is to ensure we only download 6 pieces of remote media at a time, for some reason (maybe to avoid resource exhaustion; just a bunch of assumptions). So it makes sense for the download part to be under the lock.
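The concurrency cap described above can be sketched with a stdlib analogy. This is illustrative only: Synapse's Linearizer is its own class, not an asyncio.Semaphore, and the limit of 6 is the assumption stated in the comment. The sketch shows why the read (and its timeout) belong inside the lock: a stalled download then occupies one of the limited slots only until the timeout fires.

```python
# Hypothetical sketch: a semaphore standing in for remote_download_linearizer.
import asyncio

MAX_CONCURRENT_DOWNLOADS = 6  # assumed cap, per the comment above

peak = 0    # highest number of simultaneous "downloads" observed
active = 0  # current number of simultaneous "downloads"

async def download_one(limiter: asyncio.Semaphore) -> None:
    global peak, active
    # analogous to: async with self.remote_download_linearizer.queue(ip_address):
    async with limiter:
        active += 1
        peak = max(peak, active)
        # The body read plus addTimeout would run here, under the lock, so a
        # stalled peer holds a slot only until the timeout errbacks.
        await asyncio.sleep(0.01)
        active -= 1

async def main() -> int:
    limiter = asyncio.Semaphore(MAX_CONCURRENT_DOWNLOADS)
    await asyncio.gather(*(download_one(limiter) for _ in range(20)))
    return peak

print(asyncio.run(main()))  # → 6
```

Even with 20 concurrent requests, at most 6 ever run the download section at once, which matches the resource-exhaustion guess above.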

except BodyExceededMaxSize:
msg = "Requested file is too large > %r bytes" % (expected_size,)
logger.warning(
@@ -1755,7 +1756,6 @@ async def federation_get_file(
)
raise

multipart_response = await make_deferred_yieldable(deferred)
if not multipart_response.url:
assert multipart_response.length is not None
length = multipart_response.length