
Conversation

@TobiTenno (Member) commented Oct 17, 2025

What did you fix?

Removes the need for a proxy.


Reproduction steps


Evidence/screenshot/link to line

Considerations

  • Does this contain a new dependency? [No]
  • Does this introduce opinionated data formatting or manual data entry? [No]
  • Does this pr include updated data files in a separate commit that can be reverted for a clean code-only pr? [Yes]
  • Have I run the linter? [Yes]
  • Is this a bug fix, feature request, or enhancement? [Maintenance]

Summary by CodeRabbit

  • Chores
    • Build pipelines now use WireGuard for networking during builds; proxy-based build step and related environment configuration removed.
  • Refactor
    • Content retrieval now performs direct network requests instead of routing through the previous proxy layer, simplifying fetch logic and reducing dependency on external proxy configuration.

@TobiTenno TobiTenno requested a review from a team as a code owner October 17, 2025 05:07
coderabbitai bot commented Oct 17, 2025

Walkthrough

Replaces Tailscale with a WireGuard setup in the CI workflows and removes proxy-based fetching from the scraper; the scraper now uses direct global fetch() calls, and CI no longer exposes PROXY_URL.

Changes

Cohort / File(s) — Summary

CI workflows — .github/workflows/build.yaml, .github/workflows/static.yaml
Removed the Tailscale action and auth steps; added a wenoa/setup-wireguard step configured from the WG_CONFIG secret; removed the PROXY_URL env and Tailscale-specific setup; retained the cache, npm, test, and commit steps. (A hedged sketch of the new step follows this table.)

Scraper: fetch migration — build/scraper.js
Removed proxy handling, the CI/local timeout checks, and the runtime exit-on-missing-proxy behavior; deleted the internal #fetch wrapper and replaced its call sites with direct global fetch() in getPageNumbers(), scrape(), and #scrapePost().
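As illustration only, a minimal sketch of what the swapped-in WireGuard step could look like. The action's input names are not visible in this diff, so the config input below is an assumption, not confirmed against wenoa/setup-wireguard's documentation:

# Hypothetical sketch — the `config` input name is assumed, not confirmed.
- name: Set up WireGuard
  uses: wenoa/setup-wireguard@v1.0.0
  with:
    config: ${{ secrets.WG_CONFIG }}  # WireGuard client config stored as a repo secret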

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Actions as GitHub Actions Runner
    participant WG as wenoa/setup-wireguard
    participant Node as Node/npm

    rect rgb(220,245,235)
    Note right of Actions: Build job sequence (new)
    Actions->>Node: npm ci
    Actions->>WG: setup-wireguard (WG_CONFIG)
    WG-->>Actions: network ready
    Actions->>Node: npm run build
    Actions->>Node: npm test
    Actions->>Actions: commit artifacts
    end

sequenceDiagram
    autonumber
    participant Scraper as build/scraper.js
    participant Target as Remote Site

    rect rgb(245,245,220)
    Note right of Scraper: Fetch flow (simplified)
    Scraper->>Target: fetch(url)
    Target-->>Scraper: response
    Scraper->>Scraper: parse response
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • AyAyEm
  • SlayerOrnstein

Poem

🐰 I hopped through CI with keys in paw,
Tailscale steps gone — WireGuard I saw.
No proxy detours, straight fetches anew,
Pages fetched clean, the logs peek through.
🥕 Hooray for hops and builds that flew!

Pre-merge checks

✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the pull request title "fix: remove proxy" directly aligns with the primary objective of the changeset. The changes across all three modified files focus on removing proxy-based functionality: the workflow files replace Tailscale with WireGuard and remove PROXY_URL environment variables, and the scraper removes proxy-wrapped fetch calls in favor of direct global fetch. The title is concise, specific, and uses conventional commit format, making it immediately clear to someone reviewing the history that this PR removes proxy dependencies.
  • Docstring Coverage — ✅ Passed: no functions found in the changes; docstring coverage check skipped.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
build/scraper.js (1)

45-68: Remove unused #fetch method.

This method is no longer called after replacing proxy-based fetches with direct fetch() calls. Additionally, it references an undefined proxyUrl variable, which would cause a runtime error if the method were invoked.

Apply this diff to remove the dead code:

-  async #fetch(url = baseUrl, session = 'fetch-warframe') {
-    try {
-      const res = await fetch(`${proxyUrl}/v1`, {
-        method: 'POST',
-        headers: { 'Content-Type': 'application/json' },
-        body: JSON.stringify({
-          cmd: 'request.get',
-          url,
-          session,
-          maxTimeout: isCI ? ciTimeout : localTimeout,
-          returnOnlyCookies: false,
-          returnPageContent: true,
-        }),
-      });
-      const { solution } = await res.json();
-      if (!solution?.response) {
-        throw solution;
-      }
-      return solution.response;
-    } catch (error) {
-      console.error(`Failed to fetch from proxy ${url}:`, error);
-      throw error;
-    }
-  }
-
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 58065e8 and 2a3ddfe.

📒 Files selected for processing (3)
  • .github/workflows/build.yaml (1 hunks)
  • .github/workflows/static.yaml (1 hunks)
  • build/scraper.js (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
build/scraper.js (1)
build/update.js (1)
  • baseUrl (8-8)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build
🔇 Additional comments (1)
.github/workflows/build.yaml (1)

21-23: Pin wenoa/setup-wireguard to a specific commit SHA or release tag instead of @main.

Using @main is a stability risk because the branch pointer moves with each commit, introducing the potential for breaking changes. This aligns with GitHub Actions security best practices.

However, the wenoa/setup-wireguard action does not appear to have a public repository. Before pinning, verify that:

  • The action repository exists and is accessible (it may be private or internal to your organization)
  • Stable versions or commit SHAs are available to pin to

Once confirmed, update the action reference to a specific version or commit SHA. For example:

- uses: wenoa/setup-wireguard@<commit-sha>
  # or
- uses: wenoa/setup-wireguard@<tag-or-version>

coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between af95715 and 5893102.

📒 Files selected for processing (3)
  • .github/workflows/build.yaml (1 hunks)
  • .github/workflows/static.yaml (1 hunks)
  • build/scraper.js (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/build.yaml
🧰 Additional context used
🧬 Code graph analysis (1)
build/scraper.js (1)
build/update.js (1)
  • baseUrl (8-8)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build
🔇 Additional comments (2)
.github/workflows/static.yaml (2)

48-50: WireGuard action is properly pinned to v1.0.0 with consistent pins across workflows.

Verification confirms that both static.yaml (line 48) and build.yaml (line 21) use identical wenoa/setup-wireguard@v1.0.0 pins. This addresses the prior review feedback about consistency and avoids floating/unstable references. The implementation is correct.


48-51: PROXY_URL removal is complete and verified.

The scraper uses direct fetch() API calls with no proxy configuration. Zero references to PROXY_URL exist in the codebase. The build process (npm run build) executes successfully without proxy environment variables, and the test passes.

Comment on lines +47 to 48
const html = await fetch(baseUrl).then((r) => r.text());
const $ = load(html);

⚠️ Potential issue | 🟠 Major

Add error handling and response validation to fetch call.

The fetch call lacks error handling for network failures and doesn't validate the response status. Network errors or HTTP error responses (4xx, 5xx) will cause unhandled promise rejections or may result in parsing error pages as valid HTML.

Apply this diff to add proper error handling and response validation:

-    const html = await fetch(baseUrl).then((r) => r.text());
+    const res = await fetch(baseUrl);
+    if (!res.ok) {
+      throw new Error(`HTTP error! status: ${res.status}`);
+    }
+    const html = await res.text();

Additionally, consider adding timeout handling since the proxy-based timeout configuration was removed:

+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout
+    try {
-      const res = await fetch(baseUrl);
+      const res = await fetch(baseUrl, { signal: controller.signal });
       if (!res.ok) {
         throw new Error(`HTTP error! status: ${res.status}`);
       }
       const html = await res.text();
+      clearTimeout(timeoutId);
+    } catch (error) {
+      clearTimeout(timeoutId);
+      throw error;
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-const html = await fetch(baseUrl).then((r) => r.text());
-const $ = load(html);
+const res = await fetch(baseUrl);
+if (!res.ok) {
+  throw new Error(`HTTP error! status: ${res.status}`);
+}
+const html = await res.text();
+const $ = load(html);
🤖 Prompt for AI Agents
In build/scraper.js around lines 47-48, the fetch call directly reads
response.text() without validating the HTTP status or handling network errors
and timeouts; wrap the fetch in a try/catch, validate response.ok (throw or
return a controlled error for 4xx/5xx), and only call response.text() for
successful responses; implement request timeout using AbortController (create a
controller, pass controller.signal to fetch, and clear the timeout on success)
so network hangs are aborted; ensure errors are logged/propagated so calling
code can handle failures instead of parsing error pages as HTML.

Comment on lines +65 to 66
const html = await fetch(url).then((r) => r.text());
const $ = load(html);

⚠️ Potential issue | 🟠 Major

Add error handling and response validation to fetch call.

Same issue as in getPageNumbers: the fetch call lacks error handling and response status validation, which can lead to crashes or incorrect parsing of error pages.

Apply this diff to add proper error handling and response validation:

-    const html = await fetch(url).then((r) => r.text());
+    const res = await fetch(url);
+    if (!res.ok) {
+      throw new Error(`HTTP error! status: ${res.status}`);
+    }
+    const html = await res.text();

Consider adding timeout handling for consistency:

+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 30000);
+    try {
-      const res = await fetch(url);
+      const res = await fetch(url, { signal: controller.signal });
       if (!res.ok) {
         throw new Error(`HTTP error! status: ${res.status}`);
       }
       const html = await res.text();
+      clearTimeout(timeoutId);
+    } catch (error) {
+      clearTimeout(timeoutId);
+      throw error;
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-const html = await fetch(url).then((r) => r.text());
-const $ = load(html);
+const res = await fetch(url);
+if (!res.ok) {
+  throw new Error(`HTTP error! status: ${res.status}`);
+}
+const html = await res.text();
+const $ = load(html);
🤖 Prompt for AI Agents
In build/scraper.js around lines 65-66, the fetch call directly converts to text
without validating the response or handling errors; wrap the fetch in a
try/catch, check response.ok and status before calling response.text(), and
throw or return a controlled error/null when non-2xx status is returned so you
don’t parse error pages; additionally implement a timeout using AbortController
(create a controller, pass signal to fetch, set a setTimeout to controller.abort
after a configured ms and clear the timer on success) and log or surface the
fetch/network/timeout error so callers can handle it.

Comment on lines +160 to 161
const html = await fetch(url).then((r) => r.text());
const $ = load(html);

⚠️ Potential issue | 🟠 Major

Add error handling and response validation to fetch call.

Same issue as in the other methods: the fetch call lacks error handling and response status validation.

Apply this diff to add proper error handling and response validation:

-    const html = await fetch(url).then((r) => r.text());
+    const res = await fetch(url);
+    if (!res.ok) {
+      throw new Error(`HTTP error! status: ${res.status}`);
+    }
+    const html = await res.text();

Consider adding timeout handling for consistency:

+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 30000);
+    try {
-      const res = await fetch(url);
+      const res = await fetch(url, { signal: controller.signal });
       if (!res.ok) {
         throw new Error(`HTTP error! status: ${res.status}`);
       }
       const html = await res.text();
+      clearTimeout(timeoutId);
+    } catch (error) {
+      clearTimeout(timeoutId);
+      throw error;
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-const html = await fetch(url).then((r) => r.text());
-const $ = load(html);
+const res = await fetch(url);
+if (!res.ok) {
+  throw new Error(`HTTP error! status: ${res.status}`);
+}
+const html = await res.text();
+const $ = load(html);
🤖 Prompt for AI Agents
In build/scraper.js around lines 160-161, the fetch call directly converts the
response to text without validating the response or catching network/timeouts;
wrap the network call in a try/catch, use an AbortController to implement a
timeout, await fetch(url, { signal }), check response.ok (and handle non-2xx
responses by throwing or returning a clear error containing status and
statusText), then call response.text() only on a valid response; ensure the
catch block logs or rethrows a descriptive error so callers can handle fetch
failures.
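All three comments above suggest the same status-check-plus-timeout pattern at three call sites. As a sketch of one way to avoid repeating it, the logic could live in a single helper; the fetchHtml name and the 30-second default below are illustrative, not part of this PR, and require Node 18+ for global fetch and AbortSignal.timeout:

// Hypothetical shared helper — name and timeout default are illustrative.
async function fetchHtml(url, timeoutMs = 30000) {
  // AbortSignal.timeout() aborts the request if it hangs past the deadline.
  const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  if (!res.ok) {
    throw new Error(`HTTP error! status: ${res.status} (${url})`);
  }
  return res.text();
}

// Usage at each call site:
//   const html = await fetchHtml(url);
//   const $ = load(html);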

@TobiTenno (Member, Author) commented:

this probably won't work... still need something to solve flare

@TobiTenno TobiTenno closed this Oct 17, 2025