fix: remove proxy #192
Conversation
Walkthrough

Replaces Tailscale with a WireGuard setup in the CI workflows and removes proxy-based fetching from the scraper; the scraper now uses direct global fetch calls, and CI no longer exposes PROXY_URL.
Sequence Diagram(s)

sequenceDiagram
autonumber
participant Actions as GitHub Actions Runner
participant WG as wenoa/setup-wireguard
participant Node as Node/npm
rect rgb(220,245,235)
Note right of Actions: Build job sequence (new)
Actions->>Node: npm ci
Actions->>WG: setup-wireguard (WG_CONFIG)
WG-->>Actions: network ready
Actions->>Node: npm run build
Actions->>Node: npm test
Actions->>Actions: commit artifacts
end
sequenceDiagram
autonumber
participant Scraper as build/scraper.js
participant Target as Remote Site
rect rgb(245,245,220)
Note right of Scraper: Fetch flow (simplified)
Scraper->>Target: fetch(url)
Target-->>Scraper: response
Scraper->>Scraper: parse response
end
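
The second diagram maps to very little code. A minimal sketch of the direct fetch flow, assuming cheerio's load as used in build/scraper.js (the scrape wrapper name is illustrative, not from the PR):

// Sketch of the simplified fetch flow: request the page with the
// global fetch (no proxy), then parse the HTML body with cheerio.
import { load } from 'cheerio';

async function scrape(url) {
  const html = await fetch(url).then((r) => r.text());
  return load(html); // cheerio root for querying the page
}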
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
build/scraper.js (1)
45-68: Remove unused #fetch method.

This method is no longer called after replacing proxy-based fetches with direct fetch() calls. Additionally, it references an undefined proxyUrl variable, which would cause a runtime error if the method were invoked. Apply this diff to remove the dead code:
- async #fetch(url = baseUrl, session = 'fetch-warframe') {
-   try {
-     const res = await fetch(`${proxyUrl}/v1`, {
-       method: 'POST',
-       headers: { 'Content-Type': 'application/json' },
-       body: JSON.stringify({
-         cmd: 'request.get',
-         url,
-         session,
-         maxTimeout: isCI ? ciTimeout : localTimeout,
-         returnOnlyCookies: false,
-         returnPageContent: true,
-       }),
-     });
-     const { solution } = await res.json();
-     if (!solution?.response) {
-       throw solution;
-     }
-     return solution.response;
-   } catch (error) {
-     console.error(`Failed to fetch from proxy ${url}:`, error);
-     throw error;
-   }
- }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- .github/workflows/build.yaml (1 hunks)
- .github/workflows/static.yaml (1 hunks)
- build/scraper.js (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
build/scraper.js (1)
build/update.js (1)
baseUrl (8-8)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build
🔇 Additional comments (1)
.github/workflows/build.yaml (1)
21-23: Pin wenoa/setup-wireguard to a specific commit SHA or release tag instead of @main.

Using @main is a stability risk because the branch pointer moves with each commit, introducing the potential for breaking changes. Pinning aligns with GitHub Actions security best practices. However, the wenoa/setup-wireguard action does not appear to have a public repository. Before pinning, verify that:
- The action repository exists and is accessible (it may be private or internal to your organization)
- Stable versions or commit SHAs are available to pin to
Once confirmed, update the action reference to a specific version or commit SHA. For example:
- uses: wenoa/setup-wireguard@<commit-sha>
# or
- uses: wenoa/setup-wireguard@<tag-or-version>
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- .github/workflows/build.yaml (1 hunks)
- .github/workflows/static.yaml (1 hunks)
- build/scraper.js (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/build.yaml
🧰 Additional context used
🧬 Code graph analysis (1)
build/scraper.js (1)
build/update.js (1)
baseUrl (8-8)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build
🔇 Additional comments (2)
.github/workflows/static.yaml (2)
48-50: WireGuard action is properly pinned to v1.0.0 with consistent pins across workflows.

Verification confirms that both static.yaml (line 48) and build.yaml (line 21) use identical wenoa/[email protected] pins. This addresses the prior review feedback about consistency and avoids floating/unstable references. The implementation is correct.
48-51: PROXY_URL removal is complete and verified.

The scraper uses direct fetch() API calls with no proxy configuration. Zero references to PROXY_URL exist in the codebase. The build process (npm run build) executes successfully without proxy environment variables, and the test passes.
const html = await fetch(baseUrl).then((r) => r.text());
const $ = load(html);
Add error handling and response validation to fetch call.
The fetch call lacks error handling for network failures and doesn't validate the response status. Network errors or HTTP error responses (4xx, 5xx) will cause unhandled promise rejections or may result in parsing error pages as valid HTML.
Apply this diff to add proper error handling and response validation:
- const html = await fetch(baseUrl).then((r) => r.text());
+ const res = await fetch(baseUrl);
+ if (!res.ok) {
+   throw new Error(`HTTP error! status: ${res.status}`);
+ }
+ const html = await res.text();

Additionally, consider adding timeout handling since the proxy-based timeout configuration was removed:
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout
+ try {
-   const res = await fetch(baseUrl);
+   const res = await fetch(baseUrl, { signal: controller.signal });
    if (!res.ok) {
      throw new Error(`HTTP error! status: ${res.status}`);
    }
    const html = await res.text();
+   clearTimeout(timeoutId);
+ } catch (error) {
+   clearTimeout(timeoutId);
+   throw error;
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const res = await fetch(baseUrl);
if (!res.ok) {
  throw new Error(`HTTP error! status: ${res.status}`);
}
const html = await res.text();
const $ = load(html);
🤖 Prompt for AI Agents
In build/scraper.js around lines 47-48, the fetch call directly reads
response.text() without validating the HTTP status or handling network errors
and timeouts; wrap the fetch in a try/catch, validate response.ok (throw or
return a controlled error for 4xx/5xx), and only call response.text() for
successful responses; implement request timeout using AbortController (create a
controller, pass controller.signal to fetch, and clear the timeout on success)
so network hangs are aborted; ensure errors are logged/propagated so calling
code can handle failures instead of parsing error pages as HTML.
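
Since the same fix is suggested at all three call sites below and above, a single shared helper could capture it once. A minimal sketch combining the status check and AbortController timeout from these suggestions (the fetchHtml name and 30-second default are illustrative, not from the PR):

// Sketch: reusable fetch helper with status validation and a timeout,
// consolidating the pattern suggested at each call site in this review.
async function fetchHtml(url, timeoutMs = 30000) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) {
      throw new Error(`HTTP error! status: ${res.status}`);
    }
    return await res.text();
  } finally {
    // Runs on success, HTTP error, and abort alike.
    clearTimeout(timeoutId);
  }
}

Each call site would then reduce to const html = await fetchHtml(url); const $ = load(html);.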
const html = await fetch(url).then((r) => r.text());
const $ = load(html);
Add error handling and response validation to fetch call.
Same issue as in getPageNumbers: the fetch call lacks error handling and response status validation, which can lead to crashes or incorrect parsing of error pages.
Apply this diff to add proper error handling and response validation:
- const html = await fetch(url).then((r) => r.text());
+ const res = await fetch(url);
+ if (!res.ok) {
+   throw new Error(`HTTP error! status: ${res.status}`);
+ }
+ const html = await res.text();

Consider adding timeout handling for consistency:
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), 30000);
+ try {
-   const res = await fetch(url);
+   const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) {
      throw new Error(`HTTP error! status: ${res.status}`);
    }
    const html = await res.text();
+   clearTimeout(timeoutId);
+ } catch (error) {
+   clearTimeout(timeoutId);
+   throw error;
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const res = await fetch(url);
if (!res.ok) {
  throw new Error(`HTTP error! status: ${res.status}`);
}
const html = await res.text();
const $ = load(html);
🤖 Prompt for AI Agents
In build/scraper.js around lines 65-66, the fetch call directly converts to text
without validating the response or handling errors; wrap the fetch in a
try/catch, check response.ok and status before calling response.text(), and
throw or return a controlled error/null when non-2xx status is returned so you
don’t parse error pages; additionally implement a timeout using AbortController
(create a controller, pass signal to fetch, set a setTimeout to controller.abort
after a configured ms and clear the timer on success) and log or surface the
fetch/network/timeout error so callers can handle it.
const html = await fetch(url).then((r) => r.text());
const $ = load(html);
Add error handling and response validation to fetch call.
Same issue as in the other methods: the fetch call lacks error handling and response status validation.
Apply this diff to add proper error handling and response validation:
- const html = await fetch(url).then((r) => r.text());
+ const res = await fetch(url);
+ if (!res.ok) {
+   throw new Error(`HTTP error! status: ${res.status}`);
+ }
+ const html = await res.text();

Consider adding timeout handling for consistency:
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), 30000);
+ try {
-   const res = await fetch(url);
+   const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) {
      throw new Error(`HTTP error! status: ${res.status}`);
    }
    const html = await res.text();
+   clearTimeout(timeoutId);
+ } catch (error) {
+   clearTimeout(timeoutId);
+   throw error;
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const res = await fetch(url);
if (!res.ok) {
  throw new Error(`HTTP error! status: ${res.status}`);
}
const html = await res.text();
const $ = load(html);
🤖 Prompt for AI Agents
In build/scraper.js around lines 160-161, the fetch call directly converts the
response to text without validating the response or catching network/timeouts;
wrap the network call in a try/catch, use an AbortController to implement a
timeout, await fetch(url, { signal }), check response.ok (and handle non-2xx
responses by throwing or returning a clear error containing status and
statusText), then call response.text() only on a valid response; ensure the
catch block logs or rethrows a descriptive error so callers can handle fetch
failures.
this probably won't work... still need something to solve flare |
What did you fix?
Removes the need for a proxy.
Reproduction steps
Evidence/screenshot/link to line
Considerations
Summary by CodeRabbit