Update foliage (curl retry backoff + download concurrency limit)#1250
Conversation
I couldn't find an accurate way to test this locally, but my impression is that the Shake-based rate-limiting mechanism may already have been sufficient to avoid getting the 502s in the first place. This test shows that nothing is fundamentally broken by the changes, so I'm in favour of making the upgrade and seeing how things go. I think we should keep an eye on overall run time for the repo builds and then tweak the Shake resource limit if necessary. When we get the Cabal-file rewriting issue sorted out, and can safely delete the cache, we should try a non-cached run to see how long that takes with this change.
Looking at the run times before and after we re-enabled the cache, I don't think caching provides any improvement. It takes as long to fetch the cache as it does to fetch the tarballs from upstream.
Force-pushed fccf0a1 to ef98a62 — updates the foliage flake input to include input-output-hk/foliage#116:
- `curl --retry 3 --retry-connrefused` for transient HTTP errors (502, etc.) with exponential backoff
- Download concurrency capped at 20 via a Shake `Resource`, to prevent overwhelming GitHub under `-j 0`
- CI actions upgraded (nix-installer v21, magic-nix-cache v13, cachix v16)
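The retry logic itself is delegated to curl via `--retry`, but the retry-with-doubling-delay pattern it applies is easy to sketch in Haskell. Everything below is illustrative (the `retryWithBackoff` and `fakeDownload` names are not foliage's actual code); it simulates a download that returns 502 twice before succeeding:

```haskell
import Control.Concurrent (threadDelay)
import Data.IORef

-- Retry an action up to `retries` extra times, doubling the delay each
-- time, mirroring curl's --retry backoff (1s, 2s, 4s, ...).
retryWithBackoff :: Int -> Int -> IO (Either String a) -> IO (Either String a)
retryWithBackoff retries delayMicros act = do
  r <- act
  case r of
    Right _ -> pure r
    Left _
      | retries <= 0 -> pure r
      | otherwise -> do
          threadDelay delayMicros
          retryWithBackoff (retries - 1) (delayMicros * 2) act

main :: IO ()
main = do
  -- Hypothetical download that fails with 502 on the first two attempts.
  attempts <- newIORef (0 :: Int)
  let fakeDownload = do
        n <- atomicModifyIORef' attempts (\n -> (n + 1, n + 1))
        pure (if n < 3 then Left "HTTP 502" else Right ("ok after " ++ show n))
  -- 1 ms base delay here so the demo runs instantly; curl uses 1 s.
  result <- retryWithBackoff 3 1000 fakeDownload
  print result
```

With 3 retries allowed and two transient failures, the third attempt succeeds, so the run prints `Right "ok after 3"`.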
This is a chicken-and-egg: the CI baseline build always uses …
Summary
Updates the foliage flake input to include input-output-hk/foliage#116:
- `--retry 3 --retry-connrefused` — retries transient HTTP errors (408, 429, 500, 502, 503, 504) with exponential backoff (1s, 2s, 4s)
- Download concurrency capped at 20 via a Shake `Resource` — prevents hundreds of simultaneous curl processes from overwhelming GitHub when running with `-j 0`
- CI actions upgraded: `nix-installer-action` v9 → v21, `magic-nix-cache-action` v2 → v13, `cachix-action` v14 → v16

This addresses the repeated transient HTTP 502 failures seen in #1248 (failed 4 times before succeeding on the 5th manual re-run, each time on a different package).
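The PR implements the concurrency cap with Shake's `newResource`/`withResource`, which behaves like a counting semaphore over the build's `-j 0` thread pool. The underlying idea can be sketched with `QSem` from base — all names below are illustrative, the cap is lowered from 20 to 2, and downloads are simulated with a sleep:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar
import Control.Concurrent.QSem
import Control.Concurrent.QSemN
import Control.Monad (forM_)

main :: IO ()
main = do
  sem  <- newQSem 2              -- cap: at most 2 simulated downloads at once
  live <- newMVar (0 :: Int)     -- downloads currently in flight
  peak <- newMVar (0 :: Int)     -- highest concurrency observed
  done <- newQSemN 0             -- completion counter
  forM_ [1 .. 8 :: Int] $ \_ -> forkIO $ do
    waitQSem sem                 -- acquire a download slot
    modifyMVar_ live $ \n -> do
      modifyMVar_ peak (pure . max (n + 1))
      pure (n + 1)
    threadDelay 20000            -- pretend to download for 20 ms
    modifyMVar_ live (pure . subtract 1)
    signalQSem sem               -- release the slot before signalling done
    signalQSemN done 1
  waitQSemN done 8               -- wait for all 8 downloads to finish
  p <- readMVar peak
  putStrLn ("capped at 2: " ++ show (p <= 2))
```

Because the counter is only incremented while a slot is held, observed concurrency can never exceed the cap, even though 8 workers are forked at once — the same guarantee Shake's `Resource` gives the real downloads at 20.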