
fix: multi-perspective code review batch 1#132

Merged
nao1215 merged 20 commits into main from fix/multi-perspective-review-batch1
Mar 13, 2026
Conversation


@nao1215 nao1215 commented Mar 13, 2026

Summary

Addresses 16 issues identified during the multi-perspective code review (#116–#131, except #114 and #115). Changes span security hardening, performance improvements, test coverage, documentation, and CI.

Security & correctness

Performance

Test coverage

Refactoring & API hygiene

Docs & CI

Closes

Closes #116, closes #117, closes #118, closes #119, closes #120, closes #121, closes #122, closes #123, closes #124, closes #125, closes #126, closes #127, closes #128, closes #129, closes #130, closes #131

Test plan

  • cargo fmt — no formatting changes
  • cargo clippy -- -D warnings — zero warnings
  • cargo test — 899 tests pass (up from 845)
  • All 37 doc tests pass
  • Integration test suites pass

Summary by CodeRabbit

  • New Features

    • Trusted proxy support to resolve real client IPs behind reverse proxies.
    • Standardized pull request template added.
  • Improvements

    • Metrics endpoint: optional Bearer auth and a disable toggle.
    • TIFF supported as input and output format.
    • Rate limiting moved to shard-based locks and uses resolved client IPs.
    • Too-many-requests responses include Retry-After header.
    • HEAD responses ensure empty bodies.
    • SVG processing enforces element and nesting limits.
    • Cache eviction throttled to run at most once per minute.
  • Tests / Tooling

    • CLI inspection emits pretty JSON; test suite reorganized and serialized where needed.

nao1215 added 17 commits March 13, 2026 09:09

coderabbitai bot commented Mar 13, 2026

📝 Walkthrough

Throttled cache eviction and sharded per‑IP rate limiting; trusted‑proxy parsing and client IP resolution; IPv6 SSRF hardening; SVG sanitizer complexity limits and CSS fix; CLI inspect now emits serde JSON; CI/issue/pr templates added; consolidated test helpers; new HEAD integration tests and multiple unit tests.

Changes

Cohort / File(s) Summary
GitHub config & CI
.github/ISSUE_TEMPLATE/bug_report.yml, .github/PULL_REQUEST_TEMPLATE.md, .github/workflows/rust.yml
Updated bug template placeholder to 0.7.2; added PR template; removed push-only guard so security-audit runs on all events.
Documentation
doc/pipeline.md, doc/prometheus.md
Added crop & sharpen stages and TIFF support to pipeline; documented TRUSS_METRICS_TOKEN, TRUSS_DISABLE_METRICS, added truss_watermark_transforms_total and tiff in metrics/docs.
CLI inspect
src/adapters/cli/inspect.rs
Replaced manual JSON formatting with serde Serialize struct and pretty JSON; added unit tests for output shape and newline.
Cache eviction
src/adapters/server/cache.rs
Throttle eviction scans to at most once per 60s via an AtomicU64 timestamp; removed explicit file.sync_all() calls in favor of dropping file before rename; tests adjusted.
Trusted proxies & config
src/adapters/server/config.rs
Added pub enum TrustedProxy (Addr/Cidr), parsing/containment logic; ServerConfig gained trusted_proxies loaded from TRUSS_TRUSTED_PROXIES; tests added.
Client IP resolution
src/adapters/server/mod.rs, src/adapters/server/routing.rs
Added resolve_client_ip(peer_ip, headers, trusted_proxies) using X-Forwarded-For/X-Real-IP (right-to-left, skip trusted proxies) and integrated into rate-limiting; tests extended.
Sharded rate limiter
src/adapters/server/rate_limit.rs
Replaced single Mutex<HashMap> with [Mutex<HashMap>; NUM_SHARDS] sharded buckets, added shard_index, per-shard lazy cleanup constants, and concurrency/shard tests.
Rate-limit response
src/adapters/server/response.rs
too_many_requests_response now appends Retry-After: 1 header; test added.
IPv6 SSRF protections
src/adapters/server/remote.rs
Enhanced is_disallowed_ipv6 to block IPv4-compatible (::x.x.x.x), 6to4 (2002::/16 with embedded IPv4), and Teredo (2001::/32); added tests.
SVG sanitizer
src/codecs/svg.rs
Added MAX_SVG_ELEMENTS and MAX_SVG_NESTING_DEPTH, enforce limits during parse; refactored sanitize_css_urls to lowercase once and process with offsets to avoid quadratic behavior.
Core types & validation
src/core.rs
Added #[non_exhaustive] to several public enums and #[must_use] to select accessors/TransformResult; Rgba8::from_hex now rejects non‑ASCII input; docs updated.
Signing tests
src/adapters/server/signing.rs
Added unit tests for URL signing behavior; no API changes.
Test consolidation & helpers
tests/common/mod.rs, tests/cli_*.rs
Consolidated duplicated test helpers into tests/common; replaced local helpers in CLI tests with mod common; reduced fixture wait in common and perform write-half shutdown.
Cloud integration serialization
tests/{azure,gcs,s3}_integration.rs
Imported serial_test::serial and annotated cloud integration tests with #[serial] to avoid env var races.
HEAD request tests
tests/server_head.rs
Added integration tests verifying HEAD responses for health/metrics and unknown route (status + empty body).

Sequence Diagram

sequenceDiagram
    participant Client as Client
    participant Proxy as Reverse Proxy
    participant Server as Truss Server
    participant RateLimiter as Rate Limiter (Sharded)
    participant Cache as Transform Cache

    Client->>Proxy: GET /images/by-path (203.0.113.5)
    Proxy->>Server: Forward (peer: 10.0.0.1)\nX-Forwarded-For: 203.0.113.5

    Server->>Server: resolve_client_ip(peer=10.0.0.1, headers, trusted_proxies)
    Server-->>Server: returns client_ip 203.0.113.5

    Server->>RateLimiter: check(client_ip: 203.0.113.5)
    RateLimiter->>RateLimiter: shard_index(...) -> shard N
    RateLimiter->>RateLimiter: Acquire shard N lock
    RateLimiter->>RateLimiter: Per-shard lazy cleanup
    RateLimiter-->>Server: allow/deny

    alt denied
        Server->>Server: build 429 response + Retry-After: 1
        Server-->>Proxy: 429 Too Many Requests
        Proxy-->>Client: 429 Too Many Requests
    else allowed
        Server->>Cache: get/put(transform key)
        Cache->>Cache: maybe_evict() (throttled, 60s)
        Cache-->>Server: transform result
        Server-->>Proxy: 200 OK + image
        Proxy-->>Client: 200 OK + image
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰
I hopped through headers, forks, and shards,
I sniffed the proxy trails and guarded yards,
I counted SVG leaves and clipped their claws,
I paced eviction clocks with patient paws,
A tiny rabbit cheered: "safe paths and fewer wars!"

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'fix: multi-perspective code review batch 1' accurately describes the main purpose: implementing fixes from a batch of code review issues (#116-#131).
Linked Issues check ✅ Passed The PR implements all coding requirements from linked issues #116-#131: serial tests, proxy IP resolution, IPv6 SSRF blocking, SVG limits, cache throttling, test consolidation, API hygiene, docs, CI, and security hardening.
Out of Scope Changes check ✅ Passed All file changes are directly scoped to requirements in linked issues #116-#131. No unrelated refactoring, dependency updates, or feature additions were detected.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/adapters/server/cache.rs (1)

430-445: ⚠️ Potential issue | 🟠 Major

Origin cache writes still bypass the size cap.

TransformCache::put() now participates in bounded eviction, but OriginCache::put() still has no max_bytes/maybe_evict() path. A workload with many unique remote URLs can therefore grow <cache_root>/origin without bound even when cache limiting is configured, which leaves the original disk-exhaustion risk in place.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/cache.rs` around lines 430 - 445, OriginCache::put
currently writes origin entries without updating the bounded-cache accounting or
invoking eviction, allowing the origin cache to grow unbounded; update
OriginCache::put so that after successfully persisting (after fs::rename of
tmp_path to path) it updates the origin cache byte accounting (respecting the
same max_bytes field used by TransformCache) and calls the existing
maybe_evict() path for the origin namespace to trigger bounded eviction. Locate
OriginCache::put and add the same bookkeeping/eviction call pattern used by
TransformCache::put (use the same max_bytes and maybe_evict() helpers), and
ensure errors from eviction/accounting are logged but do not leave temp files
behind.
src/adapters/server/mod.rs (1)

1347-1380: ⚠️ Potential issue | 🟠 Major

HEAD normalization happens too late.

The body is cleared only on the normal route_request() path. Every earlier return above — notably the rate-limit branch and the early /metrics 401/404 path — bypasses this and will still write a body for HEAD requests, which breaks HTTP semantics and the new HEAD contract.

Also applies to: 1434-1489, 1541-1543

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/mod.rs` around lines 1347 - 1380, Early HEAD
normalization is missing for early-return branches (rate limiting and the
/metrics 401/404 paths), so those branches still send bodies for HEAD requests;
update the logic that constructs early responses (e.g., where
too_many_requests_response is created and the /metrics 401/404 responses) to
detect partial.method == "HEAD" and strip the response body and any body-related
headers (Content-Length/Transfer-Encoding) before calling write_response(&mut
stream, response, ...). Alternatively factor a small helper (e.g.,
normalize_head_response(response, &partial.method)) and call it in the
rate_limiter branch and the /metrics early-return locations so all early exits
follow the same HEAD contract as route_request().
🧹 Nitpick comments (2)
src/adapters/server/response.rs (1)

272-278: Add a regression test for Retry-After on 429 responses.

Line 273 introduces new protocol behavior; a focused test will prevent accidental header regressions.

Proposed test addition
 #[test]
 fn test_service_unavailable_response() {
     let resp = service_unavailable_response("overloaded");
     assert_eq!(resp.status, "503 Service Unavailable");
@@
     assert_eq!(v["title"], "Service Unavailable");
 }
 
+#[test]
+fn test_too_many_requests_response_includes_retry_after_header() {
+    let resp = too_many_requests_response("rate limit exceeded");
+    assert_eq!(resp.status, "429 Too Many Requests");
+    let retry_after = resp
+        .headers
+        .iter()
+        .find(|(name, _)| name.eq_ignore_ascii_case("Retry-After"));
+    assert_eq!(retry_after.map(|(_, value)| value.as_str()), Some("1"));
+}
+
 #[test]
 fn test_too_many_redirects_response() {
     let resp = too_many_redirects_response("redirect loop");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/response.rs` around lines 272 - 278, Add a regression
test that asserts too_many_requests_response(message) sets the "Retry-After"
header to "1" on the returned HttpResponse; locate the function
too_many_requests_response and call it (or exercise the endpoint that uses it),
then inspect the HttpResponse.headers (the Vec of (String,String) on the
HttpResponse type used with problem_response) to ensure a header tuple
("Retry-After","1") is present and that the status code is 429 and body/message
matches expectations to prevent future regressions.
src/adapters/server/config.rs (1)

107-114: Consider making TrustedProxy non-exhaustive.

The parsing/matching logic looks good, but this is a new public enum and proxy spec variants are likely to grow. Freezing the variant set now makes future expansion a semver break.

Suggested fix
+#[non_exhaustive]
 #[derive(Debug, Clone, PartialEq, Eq)]
 pub enum TrustedProxy {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/config.rs` around lines 107 - 114, The public enum
TrustedProxy should be made non-exhaustive to allow adding variants later; add
the #[non_exhaustive] attribute to the TrustedProxy definition and then update
any exhaustive pattern matches on TrustedProxy (e.g., match statements and if
let chains that match Addr(...) or Cidr(...)) to include a fallback arm
(wildcard `_` or a catch-all branch) so code continues to compile when new
variants are added.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@doc/pipeline.md`:
- Around line 29-31: The docs under "SVG path" are incomplete: update the text
that currently says SVG rejects `blur`, `sharpen`, or `watermark` to also
include `crop`, so it matches the runtime behavior in transform_svg() which
returns InvalidOptions for `crop` as implemented in the svg handling logic (see
the code that checks and rejects crop, blur, sharpen, watermark). Ensure the
sentence lists all four unsupported options (`crop`, `blur`, `sharpen`,
`watermark`) and keeps the reference to transform_svg() and InvalidOptions.

In `@doc/prometheus.md`:
- Around line 7-15: Update the `/metrics` endpoint documentation to list both
GET and HEAD as supported methods (replace the table entry currently showing
only `GET` with `GET, HEAD`) and clarify that HEAD follows the same
authentication rules as GET (i.e., requires `TRUSS_METRICS_TOKEN` when set, open
when unset) and the same behavior when `TRUSS_DISABLE_METRICS=true` (returns
404); ensure the security note and surrounding text remain accurate for both
methods and that any examples or references to `/metrics` mention that HEAD is
accepted.

In `@src/adapters/server/mod.rs`:
- Around line 1263-1275: The current code uses headers.iter().find(...) to read
only the first X-Forwarded-For header value (the xff variable), allowing later
XFF header lines to override it; change to consider all header instances named
"x-forwarded-for" by iterating headers.iter().filter(|(name,_)|
name.eq_ignore_ascii_case("x-forwarded-for")) and concatenating or otherwise
processing every returned value so you examine the full sequence of forwarded
addresses (preserving their original order) before rsplit/trim/parse; ensure the
logic that checks !is_trusted_proxy(trusted_proxies, ip) and returns the first
non-proxy IP is applied across the combined stream of segments from all XFF
header lines rather than just the first header occurrence.

In `@src/adapters/server/remote.rs`:
- Around line 623-633: The 6to4 handling in remote.rs currently extracts an
embedded IPv4 and calls is_disallowed_ipv4(embedded), which still allows public
6to4 addresses; instead enforce the policy that the entire 2002::/16 is denied
by making the segments[0] == 0x2002 branch unconditionally disallow (e.g.,
replace the is_disallowed_ipv4(embedded) return with an unconditional deny
return) so any address matching 2002::/16 is blocked.

In `@src/adapters/server/routing.rs`:
- Around line 101-124: The current logic uses headers.iter().find() for
"x-forwarded-for" and "x-real-ip", trusting the first header occurrence instead
of scanning all occurrences from the end; update the code around the
X-Forwarded-For and X-Real-IP handling so that you collect or iterate all header
entries whose name equals (case-insensitive) "x-forwarded-for" or "x-real-ip",
then examine values from the rightmost/latest occurrence (for X-Forwarded-For
split by commas and for X-Real-IP by taking the last header value) parsing each
segment with parse::<IpAddr>() and returning the first parsed IP that is not
is_trusted_proxy(trusted_proxies, ip); replace usages of .find(...) with a
fold/collect + reverse iteration or a reverse iterator over filtered headers to
ensure proxies appended entries cannot be overridden by client-supplied earlier
headers.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6320e0af-2805-44cb-ae2c-74a00cf46dc3

📥 Commits

Reviewing files that changed from the base of the PR and between 70cd0ab and 04ce6e2.

📒 Files selected for processing (23)
  • .github/ISSUE_TEMPLATE/bug_report.yml
  • .github/PULL_REQUEST_TEMPLATE.md
  • .github/workflows/rust.yml
  • doc/pipeline.md
  • doc/prometheus.md
  • src/adapters/cli/inspect.rs
  • src/adapters/server/cache.rs
  • src/adapters/server/config.rs
  • src/adapters/server/mod.rs
  • src/adapters/server/rate_limit.rs
  • src/adapters/server/remote.rs
  • src/adapters/server/response.rs
  • src/adapters/server/routing.rs
  • src/adapters/server/signing.rs
  • src/codecs/svg.rs
  • src/core.rs
  • tests/azure_integration.rs
  • tests/cli_serve.rs
  • tests/cli_sign.rs
  • tests/cli_url.rs
  • tests/gcs_integration.rs
  • tests/s3_integration.rs
  • tests/server_head.rs

Comment on lines 7 to +15
 | Path | Method | Authentication |
 |------------|--------|----------------|
-| `/metrics` | GET | **None** |
+| `/metrics` | GET | Optional (`TRUSS_METRICS_TOKEN`) |

-The endpoint does not require Bearer-token authentication so that Prometheus scrapers can collect metrics without additional configuration.
+When `TRUSS_METRICS_TOKEN` is set, the endpoint requires a `Bearer` token matching the configured value. When unset, the endpoint is open so that Prometheus scrapers can collect metrics without additional configuration.
+Set `TRUSS_DISABLE_METRICS=true` to disable the `/metrics` endpoint entirely (returns 404).

-> **Security note:** The metrics endpoint exposes operational information such as request counts, error rates, and latency distributions. In production, restrict access to `/metrics` at the network level (e.g., Kubernetes NetworkPolicy, firewall rules, or reverse-proxy path restrictions) rather than exposing it to the public internet.
+> **Security note:** The metrics endpoint exposes operational information such as request counts, error rates, and latency distributions. In production, restrict access to `/metrics` at the network level (e.g., Kubernetes NetworkPolicy, firewall rules, or reverse-proxy path restrictions) or set `TRUSS_METRICS_TOKEN` to require authentication.

⚠️ Potential issue | 🟡 Minor

Document HEAD /metrics as supported.

The server accepts HEAD for /metrics, so listing GET only here is now stale and can mislead clients or operators reading the endpoint contract.

📝 Possible fix
-| `/metrics` | GET    | Optional (`TRUSS_METRICS_TOKEN`) |
+| `/metrics` | GET, HEAD | Optional (`TRUSS_METRICS_TOKEN`) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/prometheus.md` around lines 7 - 15, Update the `/metrics` endpoint
documentation to list both GET and HEAD as supported methods (replace the table
entry currently showing only `GET` with `GET, HEAD`) and clarify that HEAD
follows the same authentication rules as GET (i.e., requires
`TRUSS_METRICS_TOKEN` when set, open when unset) and the same behavior when
`TRUSS_DISABLE_METRICS=true` (returns 404); ensure the security note and
surrounding text remain accurate for both methods and that any examples or
references to `/metrics` mention that HEAD is accepted.

Comment on lines +623 to +633
// Block 6to4 addresses (2002::/16) which embed an IPv4 address in bits 16-48.
// An attacker can encode private IPs like 127.0.0.1 as 2002:7f00:0001::.
if segments[0] == 0x2002 {
let embedded = Ipv4Addr::new(
(segments[1] >> 8) as u8,
segments[1] as u8,
(segments[2] >> 8) as u8,
segments[2] as u8,
);
return is_disallowed_ipv4(embedded);
}

⚠️ Potential issue | 🟠 Major

The 6to4 branch still permits part of 2002::/16.

This branch says 6to4 is blocked, but return is_disallowed_ipv4(embedded); still allows public 6to4 addresses through. That leaves the new deny-list short of the stated 2002::/16 policy from #118.

Suggested fix
-    if segments[0] == 0x2002 {
-        let embedded = Ipv4Addr::new(
-            (segments[1] >> 8) as u8,
-            segments[1] as u8,
-            (segments[2] >> 8) as u8,
-            segments[2] as u8,
-        );
-        return is_disallowed_ipv4(embedded);
-    }
+    if segments[0] == 0x2002 {
+        return true;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/remote.rs` around lines 623 - 633, The 6to4 handling in
remote.rs currently extracts an embedded IPv4 and calls
is_disallowed_ipv4(embedded), which still allows public 6to4 addresses; instead
enforce the policy that the entire 2002::/16 is denied by making the segments[0]
== 0x2002 branch unconditionally disallow (e.g., replace the
is_disallowed_ipv4(embedded) return with an unconditional deny return) so any
address matching 2002::/16 is blocked.


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
src/adapters/server/mod.rs (1)

1263-1275: ⚠️ Potential issue | 🟠 Major

Handle all X-Forwarded-For header instances, not just the first one.

Line 1263 uses .find(...), so only one X-Forwarded-For field line is parsed. If multiple XFF lines are present, this can mis-resolve the client IP and weaken per-client rate limiting behind trusted proxies. Please iterate all XFF header lines (in reverse header order) before fallback to X-Real-IP, and add a regression test for multi-line XFF.

🔧 Proposed fix
-    if let Some(xff) = headers
-        .iter()
-        .find(|(name, _)| name.eq_ignore_ascii_case("x-forwarded-for"))
-        .map(|(_, v)| v.as_str())
-    {
-        for segment in xff.rsplit(',') {
-            if let Ok(ip) = segment.trim().parse::<IpAddr>()
-                && !is_trusted_proxy(trusted_proxies, ip)
-            {
-                return ip;
-            }
-        }
-    }
+    for segment in headers
+        .iter()
+        .rev()
+        .filter(|(name, _)| name.eq_ignore_ascii_case("x-forwarded-for"))
+        .flat_map(|(_, value)| value.rsplit(','))
+    {
+        if let Ok(ip) = segment.trim().parse::<IpAddr>()
+            && !is_trusted_proxy(trusted_proxies, ip)
+        {
+            return ip;
+        }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/mod.rs` around lines 1263 - 1275, The current code uses
headers.iter().find(...) so only the first X-Forwarded-For header is considered;
change it to iterate all X-Forwarded-For header lines in reverse header order
and for each header value split on commas and scan segments from right-to-left,
parsing each segment as an IpAddr and returning the first non-trusted IP via
is_trusted_proxy(trusted_proxies, ip); update the logic in the same function
(where headers is inspected and the current xff handling occurs) to use
headers.iter().filter(|(name,_)|
name.eq_ignore_ascii_case("x-forwarded-for")).rev() (or collect then reverse)
and process every header line before falling back to X-Real-IP, and add a
regression test that supplies multiple X-Forwarded-For header lines to ensure
the rightmost non-proxy IP is chosen.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1c4e4a05-8748-49f9-9e65-f8aa3fdd07db

📥 Commits

Reviewing files that changed from the base of the PR and between 04ce6e2 and 500c61b.

📒 Files selected for processing (1)
  • src/adapters/server/mod.rs

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (2)
src/adapters/server/mod.rs (1)

6426-6531: Add repeated-header regression cases for resolve_client_ip.

These tests cover trusted-peer and single-header behavior, but the spoofing fix here depends on combining repeated X-Forwarded-For field lines and using the last X-Real-IP occurrence. A pair of multi-header cases would lock that in.

➕ Suggested regression cases
+    #[test]
+    fn resolve_client_ip_uses_all_xff_field_lines() {
+        let peer: IpAddr = "10.0.0.1".parse().unwrap();
+        let trusted = vec![config::TrustedProxy::Addr("10.0.0.1".parse().unwrap())];
+        let headers = vec![
+            h("x-forwarded-for", "198.51.100.10"),
+            h("x-forwarded-for", "203.0.113.7"),
+        ];
+        let expected: IpAddr = "203.0.113.7".parse().unwrap();
+        assert_eq!(resolve_client_ip(peer, &headers, &trusted), expected);
+    }
+
+    #[test]
+    fn resolve_client_ip_uses_last_x_real_ip_field_line() {
+        let peer: IpAddr = "10.0.0.1".parse().unwrap();
+        let trusted = vec![config::TrustedProxy::Addr("10.0.0.1".parse().unwrap())];
+        let headers = vec![
+            h("x-real-ip", "198.51.100.10"),
+            h("x-real-ip", "203.0.113.7"),
+        ];
+        let expected: IpAddr = "203.0.113.7".parse().unwrap();
+        assert_eq!(resolve_client_ip(peer, &headers, &trusted), expected);
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/mod.rs` around lines 6426 - 6531, Add two regression
tests for resolve_client_ip to ensure repeated headers are combined correctly:
(1) a case where multiple X-Forwarded-For header entries are present across
separate header lines and trusted proxies are present—verify the function
concatenates them in order and returns the rightmost non-trusted IP (name test
resolve_client_ip_multiple_xff_lines_combined); (2) a case with multiple
X-Real-IP header lines where earlier X-Forwarded-For is absent or invalid and
trusted peer exists—verify resolve_client_ip uses the last X-Real-IP occurrence
(name test resolve_client_ip_last_xri_wins). Reference resolve_client_ip and the
helper h(name, value) for building headers and use config::TrustedProxy entries
as in the existing tests to simulate trusted peers.
src/adapters/server/routing.rs (1)

79-134: Keep only one routing implementation.

This proxy-resolution path now exists both here and in src/adapters/server/mod.rs, but mod.rs still drives the compiled server directly and never declares mod routing;. The copies have already started to drift—mod.rs strips bodies on early HEAD exits, this file does not—so future fixes can easily land in one place and miss the other.

Also applies to: 189-198

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/adapters/server/routing.rs` around lines 79 - 134, There are two copies
of the proxy-resolution logic (resolve_client_ip) that have diverged; remove the
duplicate and ensure the server uses a single canonical implementation. Pick one
location (preferably src/adapters/server/routing.rs), keep the resolve_client_ip
function there (including its use of is_trusted_proxy and handling of
X-Forwarded-For/X-Real-IP), delete the duplicate implementation in
src/adapters/server/mod.rs, and update mod.rs to declare mod routing; and call
routing::resolve_client_ip wherever the old function was used so behavior is
centralized and future fixes won’t drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@src/adapters/server/mod.rs`:
- Around line 6426-6531: Add two regression tests for resolve_client_ip to
ensure repeated headers are combined correctly: (1) a case where multiple
X-Forwarded-For header entries are present across separate header lines and
trusted proxies are present—verify the function concatenates them in order and
returns the rightmost non-trusted IP (name test
resolve_client_ip_multiple_xff_lines_combined); (2) a case with multiple
X-Real-IP header lines where earlier X-Forwarded-For is absent or invalid and
trusted peer exists—verify resolve_client_ip uses the last X-Real-IP occurrence
(name test resolve_client_ip_last_xri_wins). Reference resolve_client_ip and the
helper h(name, value) for building headers and use config::TrustedProxy entries
as in the existing tests to simulate trusted peers.

In `@src/adapters/server/routing.rs`:
- Around line 79-134: There are two copies of the proxy-resolution logic
(resolve_client_ip) that have diverged; remove the duplicate and ensure the
server uses a single canonical implementation. Pick one location (preferably
src/adapters/server/routing.rs), keep the resolve_client_ip function there
(including its use of is_trusted_proxy and handling of
X-Forwarded-For/X-Real-IP), delete the duplicate implementation in
src/adapters/server/mod.rs, and update mod.rs to declare mod routing; and call
routing::resolve_client_ip wherever the old function was used so behavior is
centralized and future fixes won’t drift.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2bdb36b8-4669-4a17-8c0d-777ba4be9ce7

📥 Commits

Reviewing files that changed from the base of the PR and between 500c61b and 0b510c0.

📒 Files selected for processing (7)
  • doc/pipeline.md
  • doc/prometheus.md
  • src/adapters/server/config.rs
  • src/adapters/server/mod.rs
  • src/adapters/server/response.rs
  • src/adapters/server/routing.rs
  • tests/common/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/adapters/server/response.rs

@nao1215 nao1215 merged commit 9418ec6 into main Mar 13, 2026
17 checks passed
@nao1215 nao1215 deleted the fix/multi-perspective-review-batch1 branch March 13, 2026 02:08