Conversation
Walkthrough

Adds Last-Modified equality checks to AH and commodity fetches to short-circuit unchanged uploads, propagates skip signals up callers (early returns), removes a legacy …

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Requester
    participant Timers as UploadTimers
    participant Blizzard as Blizzard API
    participant Processor as AH Processor
    Client->>Processor: Request realm auction fetch
    Processor->>Timers: Read lastUploadTimeRaw (realm)
    Processor->>Blizzard: Send API request
    Blizzard-->>Processor: Respond with Last-Modified + body
    alt Last-Modified == stored value
        Processor->>Processor: Return {"auctions": [], "skipped": True} (skip)
    else Last-Modified != stored value
        Processor->>Timers: Update lastUploadTimeRaw
        Processor->>Processor: Process auction data -> produce auctions
        Processor->>Client: Return auctions
    end
```
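The skip contract in the diagram above can be sketched in Python as follows. This is a minimal illustration only: the function and parameter names are hypothetical stand-ins, not the PR's actual signatures.

```python
def fetch_realm_auctions(headers, body, stored_last_modified, update_timer):
    """Sketch of the Last-Modified short-circuit shown in the diagram.

    `headers` and `body` stand in for the Blizzard API response;
    `update_timer` stands in for storing the new lastUploadTimeRaw.
    All names here are illustrative, not code from the PR.
    """
    last_modified = headers.get("Last-Modified")
    if last_modified and last_modified == stored_last_modified:
        # Unchanged upload: skip processing and tell the caller why.
        return {"auctions": [], "skipped": True}
    if last_modified:
        update_timer(last_modified)  # remember the new Last-Modified
    return {"auctions": body.get("auctions", [])}
```

The `skipped` flag is what lets callers distinguish "no data changed" from "no auctions found".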
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
utils/mega_data_setup.py (1)
1-3: ⚠️ Potential issue | 🟡 Minor: Black formatting check failed

Pipeline reports Black would reformat this file. Please run `black .` (or the repo’s formatting target) before merging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utils/mega_data_setup.py` around lines 1 - 3, Black would reformat the top of the file (the shebang and the combined import line in mega_data_setup.py); run the repository formatting command (e.g., black .) and re-run CI, or manually apply Black’s changes to the shebang and import statement (split imports onto separate lines, proper spacing/newlines) then commit the reformatted mega_data_setup.py so the Black check passes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node-ui/mega-alerts.js`:
- Around line 797-803: The current Last-Modified check (comparing lastMod to
this.upload_timers[connectedRealmId]?.lastUploadTimeRaw) must also validate that
the Last-Modified timestamp falls within the current UTC hour before returning
cached/empty data; parse lastMod (e.g., new Date(lastMod)), compute its UTC
year/month/day/hour and compare to the current UTC year/month/day/hour, and only
treat it as fresh if both the raw string matches the cached lastUploadTimeRaw
and the Last-Modified hour equals the current UTC hour; apply the identical
change to the other occurrence around lines handling lastMod (the block at
885-891) so both branches use the same UTC-hour validation logic.
In `@utils/mega_data_setup.py`:
- Around line 675-681: The Last-Modified header handling currently only skips
when it exactly equals the stored value; instead parse the Last-Modified header
(lastUploadTimeRaw) into a UTC datetime, compare its year/month/day/hour to the
current UTC year/month/day/hour and only proceed (and call
self.update_local_timers(connectedRealmId, lastUploadTimeRaw) and process
auctions) when the header's UTC hour matches the current UTC hour; if it does
not match, return {"auctions": []} and do not update timers. Update the same
logic in both places referenced (the block using self.upload_timers +
self.update_local_timers around lastUploadTimeRaw and the repeated block at
lines ~745-751) so both validate Last-Modified freshness by UTC hour before
proceeding.
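The UTC-hour freshness check that both prompts describe could look like the following minimal Python sketch. The helper name and its wiring are assumptions for illustration; only the parsing calls (`email.utils.parsedate_to_datetime`) are standard library.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime


def is_current_utc_hour(last_modified, now=None):
    """Return True only if a Last-Modified header falls in the current UTC hour.

    Hypothetical helper: the fix described above would call something like
    this before accepting auction data or updating upload timers.
    """
    try:
        stamp = parsedate_to_datetime(last_modified).astimezone(timezone.utc)
    except (TypeError, ValueError):
        # Unparsable header: do not treat as fresh, do not update timers.
        return False
    now = now or datetime.now(timezone.utc)
    return (stamp.year, stamp.month, stamp.day, stamp.hour) == (
        now.year,
        now.month,
        now.day,
        now.hour,
    )
```

A caller would then proceed (and update `lastUploadTimeRaw`) only when this returns True, and return `{"auctions": []}` otherwise, as the prompt outlines.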
---
Outside diff comments:
In `@utils/mega_data_setup.py`:
- Around line 1-3: Black would reformat the top of the file (the shebang and the
combined import line in mega_data_setup.py); run the repository formatting
command (e.g., black .) and re-run CI, or manually apply Black’s changes to the
shebang and import statement (split imports onto separate lines, proper
spacing/newlines) then commit the reformatted mega_data_setup.py so the Black
check passes.
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- node-ui/mega-alerts.js
- utils/api_requests.py
- utils/mega_data_setup.py
💤 Files with no reviewable changes (1)
- utils/api_requests.py
♻️ Duplicate comments (1)
node-ui/mega-alerts.js (1)
797-806: ⚠️ Potential issue | 🟠 Major: UTC-hour validation for Last-Modified is still absent

Both guards only compare the raw Last-Modified string against the cached value. This correctly deduplicates re-scans within the same session, but on the first run (or after a restart) upload_timers is empty, so data from a previous hour is still accepted without validation. The stated requirement in issue #239 is to verify the Last-Modified timestamp falls within the current UTC hour before processing.

Also applies to: 889-897
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node-ui/mega-alerts.js` around lines 797 - 806, The current check only compares the raw Last-Modified header against upload_timers[connectedRealmId]?.lastUploadTimeRaw, which misses the requirement to ensure the Last-Modified timestamp falls within the current UTC hour on first run or after restart; update the logic around lastMod (the value from res.headers.get("last-modified")) to parse it into a Date, convert to UTC, and verify its year/month/day/hour matches the current UTC year/month/day/hour before proceeding (if it does not match, log and return { auctions: [] }); apply the same UTC-hour validation in both places where lastMod is used (the block referencing upload_timers[connectedRealmId]?.lastUploadTimeRaw and the similar check around lines 889-897) and still preserve the existing raw-string deduplication against upload_timers[connectedRealmId].lastUploadTimeRaw.
🧹 Nitpick comments (2)
node-ui/mega-alerts.js (2)
873-898: Same body-before-guard ordering applies to makeCommodityRequest

The commodity endpoint deserializes the full response body before checking Last-Modified. Apply the same hoist pattern as in makeAhRequest.

♻️ Proposed refactor – hoist guard before body read in makeCommodityRequest

```diff
  if (res.status !== 200) throw new Error(`${res.status}`)
+ const lastMod = res.headers.get("last-modified")
+ if (
+   lastMod &&
+   this.upload_timers[connectedId]?.lastUploadTimeRaw === lastMod
+ ) {
+   log(
+     `Skip ${this.REGION} commodities: data has not updated yet (Last-Modified unchanged: ${lastMod})`
+   )
+   return { auctions: [] }
+ }
  const text = await res.text()
  let data
  try {
    data = JSON.parse(text)
  } catch (parseErr) {
    // ... existing error handling ...
  }
- const lastMod = res.headers.get("last-modified")
- if (
-   lastMod &&
-   this.upload_timers[connectedId]?.lastUploadTimeRaw === lastMod
- ) {
-   log(
-     `Skip ${this.REGION} commodities: data has not updated yet (Last-Modified unchanged: ${lastMod})`
-   )
-   return { auctions: [] }
- }
  if (lastMod) this.update_local_timers(connectedId, lastMod)
  return data
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node-ui/mega-alerts.js` around lines 873 - 898, In makeCommodityRequest you currently parse the response body before checking the Last-Modified guard; hoist the “last-modified” check before calling res.text()/JSON.parse to avoid unnecessary body reads/parsing when data hasn’t changed: get lastMod via res.headers.get("last-modified"), compare it to this.upload_timers[connectedId]?.lastUploadTimeRaw and return { auctions: [] } (and log) if unchanged, call this.update_local_timers(connectedId, lastMod) when appropriate, and only then read/parse the body (same pattern as makeAhRequest); keep references to this.REGION, this.upload_timers, update_local_timers and ensure parse/error handling remains after the guard.
781-809: Move the Last-Modified guard before body consumption to avoid wasting bandwidth and CPU

Currently the full response body is downloaded via res.text() and then parsed with JSON.parse before the cache-hit check is evaluated. When the guard fires (same Last-Modified), the entire body — potentially tens to hundreds of MB for an active realm — has already been transferred and parsed for nothing.

Headers are available on the res object immediately in undici's fetch without consuming the body, so the guard can be hoisted before res.text() at no correctness cost.

♻️ Proposed refactor – hoist guard before body read in makeAhRequest

```diff
  if (res.status !== 200) throw new Error(`${res.status}`)
+ const lastMod = res.headers.get("last-modified")
+ if (
+   lastMod &&
+   this.upload_timers[connectedRealmId]?.lastUploadTimeRaw === lastMod
+ ) {
+   log(
+     `Skip realm ${connectedRealmId} (${this.REGION}): data has not updated yet (Last-Modified unchanged: ${lastMod})`
+   )
+   return { auctions: [] }
+ }
  const text = await res.text()
  let data
  try {
    data = JSON.parse(text)
  } catch (parseErr) {
    // ... existing error handling ...
  }
- const lastMod = res.headers.get("last-modified")
- if (
-   lastMod &&
-   this.upload_timers[connectedRealmId]?.lastUploadTimeRaw === lastMod
- ) {
-   log(
-     `Skip realm ${connectedRealmId} (${this.REGION}): data has not updated yet (Last-Modified unchanged: ${lastMod})`
-   )
-   return { auctions: [] }
- }
  if (lastMod) {
    this.update_local_timers(connectedRealmId, lastMod)
  }
  return data
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node-ui/mega-alerts.js` around lines 781 - 809, The Last-Modified guard in makeAhRequest runs after consuming and parsing the response body (res.text() / JSON.parse), wasting bandwidth/CPU for cache hits; change the order so you read the header first (const lastMod = res.headers.get("last-modified")), perform the cache-hit check using this.upload_timers[connectedRealmId]?.lastUploadTimeRaw === lastMod and return { auctions: [] } before calling res.text() or JSON.parse, and only call res.text() / JSON.parse and this.update_local_timers(connectedRealmId, lastMod) if the guard does not trigger—preserve the non-200 check (if (res.status !== 200) throw new Error(`${res.status}`)) and keep the same log messages.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@node-ui/mega-alerts.js`:
- Around line 797-806: The current check only compares the raw Last-Modified
header against upload_timers[connectedRealmId]?.lastUploadTimeRaw, which misses
the requirement to ensure the Last-Modified timestamp falls within the current
UTC hour on first run or after restart; update the logic around lastMod (the
value from res.headers.get("last-modified")) to parse it into a Date, convert to
UTC, and verify its year/month/day/hour matches the current UTC
year/month/day/hour before proceeding (if it does not match, log and return {
auctions: [] }); apply the same UTC-hour validation in both places where lastMod
is used (the block referencing
upload_timers[connectedRealmId]?.lastUploadTimeRaw and the similar check around
lines 889-897) and still preserve the existing raw-string deduplication against
upload_timers[connectedRealmId].lastUploadTimeRaw.
---
Nitpick comments:
In `@node-ui/mega-alerts.js`:
- Around line 873-898: In makeCommodityRequest you currently parse the response
body before checking the Last-Modified guard; hoist the “last-modified” check
before calling res.text()/JSON.parse to avoid unnecessary body reads/parsing
when data hasn’t changed: get lastMod via res.headers.get("last-modified"),
compare it to this.upload_timers[connectedId]?.lastUploadTimeRaw and return {
auctions: [] } (and log) if unchanged, call
this.update_local_timers(connectedId, lastMod) when appropriate, and only then
read/parse the body (same pattern as makeAhRequest); keep references to
this.REGION, this.upload_timers, update_local_timers and ensure parse/error
handling remains after the guard.
- Around line 781-809: The Last-Modified guard in makeAhRequest runs after
consuming and parsing the response body (res.text() / JSON.parse), wasting
bandwidth/CPU for cache hits; change the order so you read the header first
(const lastMod = res.headers.get("last-modified")), perform the cache-hit check
using this.upload_timers[connectedRealmId]?.lastUploadTimeRaw === lastMod and
return { auctions: [] } before calling res.text() or JSON.parse, and only call
res.text() / JSON.parse and this.update_local_timers(connectedRealmId, lastMod)
if the guard does not trigger—preserve the non-200 check (if (res.status !==
200) throw new Error(`${res.status}`)) and keep the same log messages.
📒 Files selected for processing (2)
- node-ui/mega-alerts.js
- utils/mega_data_setup.py
🚧 Files skipped from review as they are similar to previous changes (1)
- utils/mega_data_setup.py
🧹 Nitpick comments (1)
mega_alerts.py (1)
783-784: Comment phrasing is misleading without cross-PR context

"skipping processing speeds things up but may lead to more 429s" reads as though the sleep might cause more 429s, which is backwards. The intended meaning — that the Last-Modified-based skip introduced elsewhere in this PR shortens cycles and therefore risks higher request rates — isn't clear here.

✏️ Suggested comment reword

```diff
- # Short sleep between cycles; skipping processing speeds things up but may lead to more 429s
- time.sleep(5)
+ # Throttle between cycles: Last-Modified skipping (added in this PR) completes cycles
+ # faster, which increases per-minute API call rate and the risk of 429s.
+ time.sleep(5)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mega_alerts.py` around lines 783 - 784, The inline comment next to time.sleep(5) is misleading; update the comment at the sleep call in mega_alerts.py to state that the sleep adds a fixed pause between cycles and that a separate "Last-Modified"-based skip mechanism (introduced elsewhere) can shorten cycles and thus increase request rate and risk 429s — e.g., explain that skipping processing shortens cycle intervals (not the sleep itself) and that the sleep simply enforces a minimum pause between cycles. Refer to the sleep call (time.sleep(5)) and the Last-Modified-based skip logic when editing the comment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@mega_alerts.py`:
- Around line 783-784: The inline comment next to time.sleep(5) is misleading;
update the comment at the sleep call in mega_alerts.py to state that the sleep
adds a fixed pause between cycles and that a separate "Last-Modified"-based skip
mechanism (introduced elsewhere) can shorten cycles and thus increase request
rate and risk 429s — e.g., explain that skipping processing shortens cycle
intervals (not the sleep itself) and that the sleep simply enforces a minimum
pause between cycles. Refer to the sleep call (time.sleep(5)) and the
Last-Modified-based skip logic when editing the comment.
📒 Files selected for processing (2)
- mega_alerts.py
- node-ui/mega-alerts.js
🚧 Files skipped from review as they are similar to previous changes (1)
- node-ui/mega-alerts.js
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
utils/mega_data_setup.py (1)
624-638: ⚠️ Potential issue | 🟠 Major: Classic "all" faction realms will never generate alerts after a data update

When FACTION == "all", the loop iterates over ["/2", "/6", "/7"] for the same connectedRealmId. After the first endpoint (e.g., /2) is processed, make_ah_api_request calls self.update_local_timers(connectedRealmId, T1) immediately (line 693), storing the new Last-Modified. When endpoint /6 is then fetched — also returning Last-Modified: T1 — the stored timer now equals T1, so make_ah_api_request returns {"auctions": [], "skipped": True}. The return None at line 634 discards all data already collected from /2, and pull_single_realm_data exits without sending any alerts. This repeats on every update cycle.

Root cause: the timer is keyed by connectedRealmId, shared across all faction endpoints. The first endpoint's fetch updates the shared timer, making all subsequent same-cycle fetches appear "unchanged."

🐛 Minimal fix — change `return None` to `continue` and guard at loop end

```diff
+ any_skipped = False
  for endpoint in endpoints:
      url = self.construct_api_url(connectedRealmId, endpoint)
      auction_info = self.make_ah_api_request(url, connectedRealmId)
      if auction_info is None or "auctions" not in auction_info:
          print(
              f"{self.REGION} {str(connectedRealmId)} realm data, no auctions found"
          )
          continue
      if auction_info.get("skipped"):
-         return None
+         any_skipped = True
+         continue
      # merge all the auctions
      all_auctions.extend(auction_info["auctions"])
+ if not all_auctions and any_skipped:
+     return None
  return all_auctions
```

Note: even with this fix, horde/booty bay endpoints are still "skipped" within the same cycle (their Last-Modified matches the timer just updated by alliance). A complete fix requires keying the timer by (connectedRealmId, endpoint) so each faction endpoint tracks independently. The same pattern must be fixed in node-ui/mega-alerts.js get_listings_single.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utils/mega_data_setup.py` around lines 624 - 638, In pull_single_realm_data, the loop over endpoints discards already-merged auctions when make_ah_api_request returns {"auctions": [], "skipped": True} by using return None; change that return None to continue so other endpoints in the same connectedRealmId iteration are not lost, and after the loop, ensure you return an empty list or existing all_auctions appropriately. For a robust fix also modify the timer keying used by make_ah_api_request / update_local_timers so timers are tracked per (connectedRealmId, endpoint) rather than only connectedRealmId (update the timer storage and lookup in update_local_timers and wherever timers are checked), and mirror the same change in node-ui/mega-alerts.js get_listings_single to avoid cross-faction skipping.

node-ui/mega-alerts.js (1)
993-1011: ⚠️ Potential issue | 🟠 Major: Same Classic "all" faction alert-suppression bug as in the Python path

makeAhRequest calls this.update_local_timers(connectedRealmId, lastMod) (line 808) after processing the first endpoint. When the loop proceeds to /6, this.upload_timers[connectedRealmId].lastUploadTimeRaw now equals lastMod, so makeAhRequest returns { auctions: [], skipped: true }. The return null at line 998 immediately discards the auctions already pushed from /2, and pull_single_realm_data exits without sending alerts — on every update cycle.

🐛 Minimal fix — `continue` instead of `return null`, guard at loop end

```diff
+ let anySkipped = false
  for (const ep of endpoints) {
    try {
      const url = this.construct_api_url(connectedRealmId, ep)
      const data = await this.makeAhRequest(url, connectedRealmId)
-     if (data?.skipped) return null
+     if (data?.skipped) { anySkipped = true; continue }
      if (data?.auctions && Array.isArray(data.auctions)) {
        for (const auction of data.auctions) {
          all.push(auction)
        }
      }
    } catch (err) {
      logError("AH request failed", err)
    }
  }
+ if (!all.length && anySkipped) return null
  return all
```

The complete fix (preventing horde/booty bay data loss within the same cycle) requires keying upload_timers by (connectedRealmId, endpoint) rather than connectedRealmId alone, mirroring the same change needed in utils/mega_data_setup.py.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node-ui/mega-alerts.js` around lines 993 - 1011, The loop in pull_single_realm_data discards previously collected auctions when makeAhRequest returns a skipped result (it currently does "return null"); change that behavior to "continue" so a skipped endpoint doesn't abort the whole collection (update the block around construct_api_url and makeAhRequest), and after the loop return either the accumulated all array or null only if all endpoints were skipped; additionally, for the full fix adjust upload_timers usage in makeAhRequest/update_local_timers to key by both connectedRealmId and endpoint (or include endpoint in the timer key) so each endpoint's lastUploadTimeRaw is tracked separately.
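The per-endpoint keying suggested in both notes can be sketched as follows. The class and method names here are illustrative, not the project's actual update_local_timers API:

```python
class UploadTimers:
    """Track Last-Modified per (connected realm, faction endpoint).

    Illustrative sketch of the suggested fix: keying by the tuple means
    alliance (/2), horde (/6), and booty bay (/7) fetches for the same
    realm no longer clobber each other's freshness state.
    """

    def __init__(self):
        self._timers = {}

    def is_unchanged(self, connected_realm_id, endpoint, last_modified):
        # A skip is only valid when THIS endpoint already saw this timestamp.
        return self._timers.get((connected_realm_id, endpoint)) == last_modified

    def update(self, connected_realm_id, endpoint, last_modified):
        self._timers[(connected_realm_id, endpoint)] = last_modified
```

With this keying, processing /2 first no longer makes the subsequent /6 and /7 fetches of the same cycle look "unchanged".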
♻️ Duplicate comments (4)
node-ui/mega-alerts.js (2)
888-899: ⚠️ Potential issue | 🟠 Major: Same missing UTC-hour guard in the commodity path

makeCommodityRequest has the identical gap — no validation that the parsed lastMod hour matches the current UTC hour — mirroring the issue flagged for makeAhRequest in the previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node-ui/mega-alerts.js` around lines 888 - 899, The commodity request path in makeCommodityRequest repeats the same bug as makeAhRequest: it only compares last-modified string against upload_timers[connectedId].lastUploadTimeRaw and misses the UTC-hour guard, allowing stale data from a different UTC hour to be skipped; fix by parsing the lastMod value in makeCommodityRequest, compute its UTC hour (e.g., via new Date(lastMod).getUTCHours()), compare against current UTC hour (new Date().getUTCHours()), and only treat the response as unchanged/skip and call update_local_timers(connectedId, lastMod) when the hours match in addition to the existing lastUploadTimeRaw equality check—use the same logic and references used in makeAhRequest, update_local_timers, upload_timers, lastUploadTimeRaw, and REGION.
797-810: ⚠️ Potential issue | 🟠 Major: No UTC-hour validation before accepting AH data

The skip check compares lastMod only against the cached lastUploadTimeRaw. On first run or when a realm updates later than expected (e.g., after 65 min), stale data from the previous hour is silently consumed. This was flagged in the prior review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node-ui/mega-alerts.js` around lines 797 - 810, Summary: missing UTC-hour validation allows stale AH data to be accepted when Last-Modified matches an older hour; fix by validating the UTC hour before returning data. Modify the logic around lastMod in the same block (use this.upload_timers, lastUploadTimeRaw, and this.update_local_timers) to parse lastMod as a Date and compare its UTC hour (or full UTC hour timestamp) against the cached timer value (or add a new lastUploadUtcHour field) and if the UTC hour differs in a way that indicates stale data (e.g., lastMod UTC hour is older than the cached hour or older than now-allowed-window) then treat as skipped; only call this.update_local_timers and return data when the Last-Modified passes the UTC-hour validation so stale hourly files are not consumed.

utils/mega_data_setup.py (2)
679-695: ⚠️ Potential issue | 🟠 Major: No current-hour (UTC) validation on Last-Modified

The skip guard only compares lastUploadTimeRaw against the cached value. On first run (empty upload_timers) or when a realm updates later than the expected window (e.g., after 65 min as described in issue #239), there is no check that the Last-Modified hour matches the current UTC hour, so stale data from the previous hour is consumed. This was flagged in the prior review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utils/mega_data_setup.py` around lines 679 - 695, The current guard only compares lastUploadTimeRaw against self.upload_timers and can accept stale previous-hour data; modify the block that reads "Last-Modified" (the code using dict(req.headers)["Last-Modified"], connectedRealmId, self.upload_timers, and self.update_local_timers) to additionally parse lastUploadTimeRaw into a UTC datetime and verify that its hour equals datetime.utcnow().hour (or reject if the timestamp is older than the current UTC hour window); if the hour does not match, log/print a skip message and return {"auctions": [], "skipped": True} without updating self.upload_timers, ensuring the first-run case still updates timers only when the Last-Modified is for the current UTC hour.
757-773: ⚠️ Potential issue | 🟠 Major: Same missing UTC-hour check in the commodity path

make_commodity_ah_api_request has the identical gap: no validation that lastUploadTimeRaw's UTC hour equals datetime.utcnow().hour before accepting commodity data. First-run and late-update scenarios are still not guarded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utils/mega_data_setup.py` around lines 757 - 773, In make_commodity_ah_api_request the Last-Modified header is accepted without the UTC-hour guard; parse lastUploadTimeRaw into a UTC datetime (e.g., email.utils.parsedate_to_datetime or equivalent) and compare its .hour to datetime.utcnow().hour before calling update_local_timers or processing data—if the hours differ, log/print a skip message and return {"auctions": [], "skipped": True}; only update upload_timers/update_local_timers when the parsed UTC hour matches current UTC hour, and handle parsing exceptions gracefully (do not update timers on parse failure). Use the existing symbols connectedRealmId, lastUploadTimeRaw, upload_timers, update_local_timers, and make_commodity_ah_api_request to locate and change the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mega_alerts.py`:
- Around line 784-786: The startup scan via main_fast() currently submits all
realms concurrently using ThreadPoolExecutor and returns immediately into the
hourly main() loop without the 5-second throttle that main() uses; to prevent
startup-driven 429s, add a brief sleep (e.g., time.sleep a few seconds) after
main_fast() completes before launching main(), or explicitly document that
main_fast() intentionally omits throttling for UX—modify the bootstrap flow
where main_fast() is invoked to either insert the sleep or add a clear comment
explaining the asymmetry so maintainers know the rate-limit tradeoff.
---
Outside diff comments:
In `@node-ui/mega-alerts.js`:
- Around line 993-1011: The loop in pull_single_realm_data discards previously
collected auctions when makeAhRequest returns a skipped result (it currently
does "return null"); change that behavior to "continue" so a skipped endpoint
doesn't abort the whole collection (update the block around construct_api_url
and makeAhRequest), and after the loop return either the accumulated all array
or null only if all endpoints were skipped; additionally, for the full fix
adjust upload_timers usage in makeAhRequest/update_local_timers to key by both
connectedRealmId and endpoint (or include endpoint in the timer key) so each
endpoint's lastUploadTimeRaw is tracked separately.
In `@utils/mega_data_setup.py`:
- Around line 624-638: In pull_single_realm_data, the loop over endpoints
discards already-merged auctions when make_ah_api_request returns {"auctions":
[], "skipped": True} by using return None; change that return None to continue
so other endpoints in the same connectedRealmId iteration are not lost, and
after the loop, ensure you return an empty list or existing all_auctions
appropriately. For a robust fix also modify the timer keying used by
make_ah_api_request / update_local_timers so timers are tracked per
(connectedRealmId, endpoint) rather than only connectedRealmId (update the timer
storage and lookup in update_local_timers and wherever timers are checked), and
mirror the same change in node-ui/mega-alerts.js get_listings_single to avoid
cross-faction skipping.
---
Duplicate comments:
In `@node-ui/mega-alerts.js`:
- Around line 888-899: The commodity request path in makeCommodityRequest
repeats the same bug as makeAhRequest: it only compares last-modified string
against upload_timers[connectedId].lastUploadTimeRaw and misses the UTC-hour
guard, allowing stale data from a different UTC hour to be skipped; fix by
parsing the lastMod value in makeCommodityRequest, compute its UTC hour (e.g.,
via new Date(lastMod).getUTCHours()), compare against current UTC hour (new
Date().getUTCHours()), and only treat the response as unchanged/skip and call
update_local_timers(connectedId, lastMod) when the hours match in addition to
the existing lastUploadTimeRaw equality check—use the same logic and references
used in makeAhRequest, update_local_timers, upload_timers, lastUploadTimeRaw,
and REGION.
- Around line 797-810: Summary: missing UTC-hour validation allows stale AH data
to be accepted when Last-Modified matches an older hour; fix by validating the
UTC hour before returning data. Modify the logic around lastMod in the same
block (use this.upload_timers, lastUploadTimeRaw, and this.update_local_timers)
to parse lastMod as a Date and compare its UTC hour (or full UTC hour timestamp)
against the cached timer value (or add a new lastUploadUtcHour field) and if the
UTC hour differs in a way that indicates stale data (e.g., lastMod UTC hour is
older than the cached hour or older than now-allowed-window) then treat as
skipped; only call this.update_local_timers and return data when the
Last-Modified passes the UTC-hour validation so stale hourly files are not
consumed.
In `@utils/mega_data_setup.py`:
- Around line 679-695: The current guard only compares lastUploadTimeRaw against
self.upload_timers and can accept stale previous-hour data; modify the block
that reads "Last-Modified" (the code using dict(req.headers)["Last-Modified"],
connectedRealmId, self.upload_timers, and self.update_local_timers) to
additionally parse lastUploadTimeRaw into a UTC datetime and verify that its
hour equals datetime.utcnow().hour (or reject if the timestamp is older than the
current UTC hour window); if the hour does not match, log/print a skip message
and return {"auctions": [], "skipped": True} without updating
self.upload_timers, ensuring the first-run case still updates timers only when
the Last-Modified is for the current UTC hour.
- Around line 757-773: In make_commodity_ah_api_request the Last-Modified header
is accepted without the UTC-hour guard; parse lastUploadTimeRaw into a UTC
datetime (e.g., email.utils.parsedate_to_datetime or equivalent) and compare its
.hour to datetime.utcnow().hour before calling update_local_timers or processing
data—if the hours differ, log/print a skip message and return {"auctions": [],
"skipped": True}; only update upload_timers/update_local_timers when the parsed
UTC hour matches current UTC hour, and handle parsing exceptions gracefully (do
not update timers on parse failure). Use the existing symbols connectedRealmId,
lastUploadTimeRaw, upload_timers, update_local_timers, and
make_commodity_ah_api_request to locate and change the logic.
```python
pool.shutdown(wait=True)
# Short sleep between cycles; skipping processing speeds things up but may lead to more 429s
time.sleep(5)
```
🧩 Analysis chain
🏁 Script executed:
```shell
fd -name "mega_alerts.py" -type f
```

Repository: ff14-advanced-market-search/AzerothAuctionAssassin
Length of output: 324
🏁 Script executed:
```shell
fd "mega_alerts.py"
```

Repository: ff14-advanced-market-search/AzerothAuctionAssassin
Length of output: 108
🏁 Script executed:
```shell
cat -n mega_alerts.py | sed -n '780,820p'
```

Repository: ff14-advanced-market-search/AzerothAuctionAssassin
Length of output: 2259
🏁 Script executed:
```shell
rg "main_fast\(\)|main_single\(\)" mega_alerts.py -A 2 -B 2
```

Repository: ff14-advanced-market-search/AzerothAuctionAssassin
Length of output: 738
🏁 Script executed:
cat -n mega_alerts.py | sed -n '750,830p'Repository: ff14-advanced-market-search/AzerothAuctionAssassin
Length of output: 4083
🏁 Script executed:
cat -n mega_alerts.py | sed -n '830,880p'Repository: ff14-advanced-market-search/AzerothAuctionAssassin
Length of output: 2569
Startup scan lacks throttle between main_fast() and subsequent cycles
The 5-second sleep between cycles in the main loop (lines 785-786) prevents rapid re-requests when realms skip. However, main_fast() (lines 806-812), which runs once on startup to immediately show current data, completes without any sleep before main() begins its hourly polling. Since both functions submit all realms concurrently via ThreadPoolExecutor, the startup scan could trigger rate limits under high thread counts. Consider adding a brief sleep after main_fast() completes if startup-driven 429s are observed, or document this asymmetry as intentional for UX reasons.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mega_alerts.py` around lines 784 - 786, The startup scan via main_fast()
currently submits all realms concurrently using ThreadPoolExecutor and returns
immediately into the hourly main() loop without the 5-second throttle that
main() uses; to prevent startup-driven 429s, add a brief sleep (e.g., time.sleep
a few seconds) after main_fast() completes before launching main(), or
explicitly document that main_fast() intentionally omits throttling for
UX—modify the bootstrap flow where main_fast() is invoked to either insert the
sleep or add a clear comment explaining the asymmetry so maintainers know the
rate-limit tradeoff.
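The change the prompt suggests amounts to a small bootstrap wrapper: run the one-off startup scan, pause for the same interval main() already sleeps between cycles, then enter the hourly loop. A sketch under that assumption (function names are illustrative stand-ins for main_fast() and main()):

```python
import time


def run_cycles(startup_scan, hourly_loop, startup_delay=5):
    """Run the one-off startup scan, pause briefly, then enter the main loop.

    startup_delay mirrors the 5-second sleep the main loop already uses
    between cycles, so the burst of per-realm requests from the startup
    scan cannot immediately collide with the first hourly cycle.
    """
    startup_scan()             # e.g. main_fast(): submits all realms at once
    time.sleep(startup_delay)  # throttle before hourly polling begins
    hourly_loop()              # e.g. main(): sleeps 5s between its own cycles
```

The alternative the prompt allows, a comment documenting the asymmetry as intentional for UX, avoids any code change at the cost of leaving the startup burst unthrottled.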
🧹 Nitpick comments (1)
.github/workflows/electron-exe-build.yml (1)
4-4: LGTM — version matches branch name.
"2.0.7" satisfies the guideline requiring this field to match the versioned branch name. As per coding guidelines, "Version numbers in .github/workflows/electron-exe-build.yml must match the versioned branch name."
Consider externalizing AAA_VERSION into a dedicated file (e.g., version.txt or .version) and reading it in the workflow with a cat step; this avoids touching the YAML on every release cycle and was previously suggested for windows-exe-build.yml as well.
Based on learnings from PR #154: "When updating the AAA_VERSION variable in GitHub Actions workflows, externalize it into a separate file (e.g., version.txt) to avoid frequent changes to the workflow file."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/electron-exe-build.yml at line 4, Replace the hard-coded AAA_VERSION value with an externalized version source: create a simple file (e.g., version.txt) containing the version string and modify the workflow to read that file into the AAA_VERSION variable using a shell step (e.g., run: echo "AAA_VERSION=$(cat version.txt)" >> $GITHUB_ENV or similar), update any uses of AAA_VERSION in the electron-exe-build.yml to reference the injected environment variable, and mirror the same change where applicable (e.g., windows-exe-build.yml) so future releases only require updating the version file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.github/workflows/electron-exe-build.yml:
- Line 4: Replace the hard-coded AAA_VERSION value with an externalized version
source: create a simple file (e.g., version.txt) containing the version string
and modify the workflow to read that file into the AAA_VERSION variable using a
shell step (e.g., run: echo "AAA_VERSION=$(cat version.txt)" >> $GITHUB_ENV or
similar), update any uses of AAA_VERSION in the electron-exe-build.yml to
reference the injected environment variable, and mirror the same change where
applicable (e.g., windows-exe-build.yml) so future releases only require
updating the version file.
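A sketch of the suggested workflow change, with the step name and version.txt path being assumptions; the runner's $GITHUB_ENV file is simulated here with a plain local file so the behavior can be checked outside CI:

```shell
# In the workflow, the step would look roughly like:
#   - name: Read version
#     run: echo "AAA_VERSION=$(cat version.txt)" >> "$GITHUB_ENV"
# Local simulation of what that step writes:
echo "2.0.7" > version.txt
: > gh_env.txt                                    # stand-in for $GITHUB_ENV
echo "AAA_VERSION=$(cat version.txt)" >> gh_env.txt
cat gh_env.txt
```

Command substitution strips the trailing newline from version.txt, so the environment file receives exactly one `AAA_VERSION=<value>` line.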
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (5)
- .github/workflows/electron-exe-build.yml
- .github/workflows/windows-exe-build.yml
- AzerothAuctionAssassin.py
- node-ui/index.html
- package.json
✅ Files skipped from review due to trivial changes (3)
- .github/workflows/windows-exe-build.yml
- package.json
- node-ui/index.html


Closes: #239
Summary by CodeRabbit
Performance Improvements
Refactor
Stability
Chores