2.0.7 #240

Merged
cohenaj194 merged 7 commits into main from 2.0.7 on Feb 23, 2026
Conversation

@cohenaj194 (Contributor) commented Feb 23, 2026

Closes: #239

Summary by CodeRabbit

  • Performance Improvements

    • Skip reprocessing when upstream auction data is unchanged, reducing unnecessary work and improving responsiveness.
    • Add short delays between alert cycles to throttle processing and smooth polling cadence.
  • Refactor

    • Simplified auction-listing fetch flow and scan-window control to make skipping and wraparound handling more consistent.
  • Stability

    • Improved early-exit handling to avoid wasted work when data is skipped; public interfaces unchanged.
  • Chores

    • Bumped application and build versions displayed in the UI and release artifacts.

@coderabbitai bot (Contributor) commented Feb 23, 2026

Walkthrough

Adds Last-Modified equality checks to AH and commodity fetches to short-circuit unchanged uploads, propagates skip signals up callers (early returns), removes a legacy get_listings_single from api utilities, adjusts scan-window wraparound handling, and inserts a 5s throttle after dispatching batch alert tasks.
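The skip guard described in the walkthrough can be sketched as follows. This is a minimal illustration with a fake response object; `process_fetch` and `FakeResponse` are hypothetical names for this sketch, not the PR's actual functions:

```python
class FakeResponse:
    """Stand-in for an HTTP response exposing headers and a JSON body."""

    def __init__(self, last_modified, auctions):
        self.headers = {"Last-Modified": last_modified}
        self._auctions = auctions

    def json(self):
        return {"auctions": self._auctions}


def process_fetch(resp, realm_id, upload_timers):
    """Return auction data, or a skip marker when Last-Modified is unchanged."""
    last_mod = resp.headers.get("Last-Modified")
    # Same timestamp as the stored one: upstream has not uploaded new data.
    if last_mod and upload_timers.get(realm_id) == last_mod:
        return {"auctions": [], "skipped": True}
    if last_mod:
        upload_timers[realm_id] = last_mod  # remember the new snapshot time
    return {"auctions": resp.json()["auctions"], "skipped": False}


timers = {}
first = process_fetch(FakeResponse("Mon, 23 Feb 2026 10:00:00 GMT", [1, 2]), 3678, timers)
second = process_fetch(FakeResponse("Mon, 23 Feb 2026 10:00:00 GMT", [1, 2]), 3678, timers)
```

Callers then treat `skipped: True` as a signal to return early, which is the early-exit propagation the walkthrough mentions.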

Changes

Cohort / File(s) Summary
Last-Modified guards & skip propagation
utils/mega_data_setup.py, node-ui/mega-alerts.js, mega_alerts.py
Add a Last-Modified == stored-timestamp check that returns {"auctions": [], "skipped": True}; callers now return None / exit early when skipped. Move scan-window wraparound handling and add a 5s sleep after dispatching batch alert tasks.
API utility removal
utils/api_requests.py
Remove public get_listings_single() and its NA/EU region-specific retry/URL logic.
Versioning / metadata
AzerothAuctionAssassin.py, .github/workflows/...yml, node-ui/index.html, package.json, manifest_file
Bump AAA_VERSION and package/node-ui displayed version; small manifest/package edits.

Sequence Diagram

sequenceDiagram
    participant Client as Requester
    participant Timers as UploadTimers
    participant Blizzard as Blizzard API
    participant Processor as AH Processor

    Client->>Processor: Request realm auction fetch
    Processor->>Timers: Read lastUploadTimeRaw (realm)
    Processor->>Blizzard: Send API request
    Blizzard-->>Processor: Respond with Last-Modified + body
    alt Last-Modified == stored value
        Processor->>Processor: Return {"auctions": [], "skipped": True} (skip)
    else Last-Modified != stored value
        Processor->>Timers: Update lastUploadTimeRaw
        Processor->>Processor: Process auction data -> produce auctions
        Processor->>Client: Return auctions
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related issues

  • Issue #239: Implements Last-Modified validation to avoid using stale auction data (direct match to objective).
  • Issue ff14-advanced-market-search/AzerothAuctionAssassin#171: Related — changes Last-Modified handling (replaces age-check with equality-based skip), affecting staleness semantics.

Possibly related PRs

  • PR ff14-advanced-market-search/AzerothAuctionAssassin#235: Modifies scan-window/wraparound logic — overlaps the scan-window adjustments here.
  • PR ff14-advanced-market-search/AzerothAuctionAssassin#104: Changes API request functions in utils/mega_data_setup.py — related to Last-Modified/skip logic.
  • PR ff14-advanced-market-search/AzerothAuctionAssassin#170: Alters auction-fetch logic in utils/mega_data_setup.py — closely connected to these changes.

Poem

🐰 I sniffed headers in the moonlight bright,
If Last-Modified matched, I hugged it tight.
Empty lists returned, no needless race,
Timers updated, then a gentle 5s grace.
Hoppity-hop — alerts rest between their chase. 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Out of Scope Changes check ⚠️ Warning — The PR includes multiple out-of-scope changes: version bumps (1.6.3→1.6.4, 2.0.6→2.0.7) and the removal of get_listings_single from api_requests.py, which are unrelated to the Last-Modified header issue. Resolution: separate the version bumps into a dedicated PR and clarify whether the get_listings_single removal is intentional for this scope or should be addressed separately.
✅ Passed checks (2 passed)
  • Linked Issues check ✅ Passed — The PR implements Last-Modified header validation across all AH and commodity request paths, adds skip signals to prevent stale-data processing, and ensures realms with unchanged data are bypassed, addressing the core issue of respecting Last-Modified headers.
  • Version Bump Check ✅ Passed — All required node-based app files have been updated to version 2.0.7, matching the branch name and verification requirements.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch 2.0.7

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot left a comment
Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
utils/mega_data_setup.py (1)

1-3: ⚠️ Potential issue | 🟡 Minor

Black formatting check failed

Pipeline reports Black would reformat this file. Please run black . (or the repo’s formatting target) before merging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utils/mega_data_setup.py` around lines 1 - 3, Black would reformat the top of
the file (the shebang and the combined import line in mega_data_setup.py); run
the repository formatting command (e.g., black .) and re-run CI, or manually
apply Black’s changes to the shebang and import statement (split imports onto
separate lines, proper spacing/newlines) then commit the reformatted
mega_data_setup.py so the Black check passes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@node-ui/mega-alerts.js`:
- Around line 797-803: The current Last-Modified check (comparing lastMod to
this.upload_timers[connectedRealmId]?.lastUploadTimeRaw) must also validate that
the Last-Modified timestamp falls within the current UTC hour before returning
cached/empty data; parse lastMod (e.g., new Date(lastMod)), compute its UTC
year/month/day/hour and compare to the current UTC year/month/day/hour, and only
treat it as fresh if both the raw string matches the cached lastUploadTimeRaw
and the Last-Modified hour equals the current UTC hour; apply the identical
change to the other occurrence around lines handling lastMod (the block at
885-891) so both branches use the same UTC-hour validation logic.

In `@utils/mega_data_setup.py`:
- Around line 675-681: The Last-Modified header handling currently only skips
when it exactly equals the stored value; instead parse the Last-Modified header
(lastUploadTimeRaw) into a UTC datetime, compare its year/month/day/hour to the
current UTC year/month/day/hour and only proceed (and call
self.update_local_timers(connectedRealmId, lastUploadTimeRaw) and process
auctions) when the header's UTC hour matches the current UTC hour; if it does
not match, return {"auctions": []} and do not update timers. Update the same
logic in both places referenced (the block using self.upload_timers +
self.update_local_timers around lastUploadTimeRaw and the repeated block at
lines ~745-751) so both validate Last-Modified freshness by UTC hour before
proceeding.

---

Outside diff comments:
In `@utils/mega_data_setup.py`:
- Around line 1-3: Black would reformat the top of the file (the shebang and the
combined import line in mega_data_setup.py); run the repository formatting
command (e.g., black .) and re-run CI, or manually apply Black’s changes to the
shebang and import statement (split imports onto separate lines, proper
spacing/newlines) then commit the reformatted mega_data_setup.py so the Black
check passes.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5ad0bde and 49b88fb.

📒 Files selected for processing (3)
  • node-ui/mega-alerts.js
  • utils/api_requests.py
  • utils/mega_data_setup.py
💤 Files with no reviewable changes (1)
  • utils/api_requests.py

@coderabbitai bot left a comment

♻️ Duplicate comments (1)
node-ui/mega-alerts.js (1)

797-806: ⚠️ Potential issue | 🟠 Major

UTC-hour validation for Last-Modified is still absent

Both guards only compare the raw Last-Modified string against the cached value. This correctly deduplicates re-scans within the same session, but on the first run (or after a restart) upload_timers is empty, so data from a previous hour is still accepted without validation. The stated requirement in issue #239 is to verify the Last-Modified timestamp falls within the current UTC hour before processing.

Also applies to: 889-897
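The UTC-hour validation the review asks for could look like this. A sketch only: `is_current_utc_hour` is a hypothetical helper name, and the PR as reviewed does not implement this check:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime


def is_current_utc_hour(last_modified):
    """True when an HTTP Last-Modified header falls within the current UTC hour."""
    try:
        stamp = parsedate_to_datetime(last_modified).astimezone(timezone.utc)
    except (TypeError, ValueError):
        return False  # missing or unparseable header: treat as not fresh
    now = datetime.now(timezone.utc)
    return (stamp.year, stamp.month, stamp.day, stamp.hour) == (
        now.year, now.month, now.day, now.hour
    )


# A header generated right now passes; an old snapshot does not.
fresh_header = format_datetime(datetime.now(timezone.utc), usegmt=True)
```

Combining this with the existing raw-string comparison covers both deduplication within a session and the first-run / post-restart case flagged above.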

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@node-ui/mega-alerts.js` around lines 797 - 806, The current check only
compares the raw Last-Modified header against
upload_timers[connectedRealmId]?.lastUploadTimeRaw, which misses the requirement
to ensure the Last-Modified timestamp falls within the current UTC hour on first
run or after restart; update the logic around lastMod (the value from
res.headers.get("last-modified")) to parse it into a Date, convert to UTC, and
verify its year/month/day/hour matches the current UTC year/month/day/hour
before proceeding (if it does not match, log and return { auctions: [] }); apply
the same UTC-hour validation in both places where lastMod is used (the block
referencing upload_timers[connectedRealmId]?.lastUploadTimeRaw and the similar
check around lines 889-897) and still preserve the existing raw-string
deduplication against upload_timers[connectedRealmId].lastUploadTimeRaw.
🧹 Nitpick comments (2)
node-ui/mega-alerts.js (2)

873-898: Same body-before-guard ordering applies to makeCommodityRequest

The commodity endpoint deserializes the full response body before checking Last-Modified. Apply the same hoist pattern as in makeAhRequest.

♻️ Proposed refactor – hoist guard before body read in makeCommodityRequest
       if (res.status !== 200) throw new Error(`${res.status}`)
+      const lastMod = res.headers.get("last-modified")
+      if (
+        lastMod &&
+        this.upload_timers[connectedId]?.lastUploadTimeRaw === lastMod
+      ) {
+        log(
+          `Skip ${this.REGION} commodities: data has not updated yet (Last-Modified unchanged: ${lastMod})`
+        )
+        return { auctions: [] }
+      }
       const text = await res.text()
       let data
       try {
         data = JSON.parse(text)
       } catch (parseErr) {
         // ... existing error handling ...
       }
-      const lastMod = res.headers.get("last-modified")
-      if (
-        lastMod &&
-        this.upload_timers[connectedId]?.lastUploadTimeRaw === lastMod
-      ) {
-        log(
-          `Skip ${this.REGION} commodities: data has not updated yet (Last-Modified unchanged: ${lastMod})`
-        )
-        return { auctions: [] }
-      }
       if (lastMod) this.update_local_timers(connectedId, lastMod)
       return data
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@node-ui/mega-alerts.js` around lines 873 - 898, In makeCommodityRequest you
currently parse the response body before checking the Last-Modified guard; hoist
the “last-modified” check before calling res.text()/JSON.parse to avoid
unnecessary body reads/parsing when data hasn’t changed: get lastMod via
res.headers.get("last-modified"), compare it to
this.upload_timers[connectedId]?.lastUploadTimeRaw and return { auctions: [] }
(and log) if unchanged, call this.update_local_timers(connectedId, lastMod) when
appropriate, and only then read/parse the body (same pattern as makeAhRequest);
keep references to this.REGION, this.upload_timers, update_local_timers and
ensure parse/error handling remains after the guard.

781-809: Move the Last-Modified guard before body consumption to avoid wasting bandwidth and CPU

Currently the full response body is downloaded via res.text() and then parsed with JSON.parse before the cache-hit check is evaluated. When the guard fires (same Last-Modified), the entire body — potentially tens to hundreds of MB for an active realm — has already been transferred and parsed for nothing.

Headers are available on the res object immediately in undici's fetch without consuming the body, so the guard can be hoisted before res.text() at no correctness cost.

♻️ Proposed refactor – hoist guard before body read in makeAhRequest
       if (res.status !== 200) throw new Error(`${res.status}`)
+      const lastMod = res.headers.get("last-modified")
+      if (
+        lastMod &&
+        this.upload_timers[connectedRealmId]?.lastUploadTimeRaw === lastMod
+      ) {
+        log(
+          `Skip realm ${connectedRealmId} (${this.REGION}): data has not updated yet (Last-Modified unchanged: ${lastMod})`
+        )
+        return { auctions: [] }
+      }
       const text = await res.text()
       let data
       try {
         data = JSON.parse(text)
       } catch (parseErr) {
         // ... existing error handling ...
       }

-      const lastMod = res.headers.get("last-modified")
-      if (
-        lastMod &&
-        this.upload_timers[connectedRealmId]?.lastUploadTimeRaw === lastMod
-      ) {
-        log(
-          `Skip realm ${connectedRealmId} (${this.REGION}): data has not updated yet (Last-Modified unchanged: ${lastMod})`
-        )
-        return { auctions: [] }
-      }
       if (lastMod) {
         this.update_local_timers(connectedRealmId, lastMod)
       }
       return data
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@node-ui/mega-alerts.js` around lines 781 - 809, The Last-Modified guard in
makeAhRequest runs after consuming and parsing the response body (res.text() /
JSON.parse), wasting bandwidth/CPU for cache hits; change the order so you read
the header first (const lastMod = res.headers.get("last-modified")), perform the
cache-hit check using this.upload_timers[connectedRealmId]?.lastUploadTimeRaw
=== lastMod and return { auctions: [] } before calling res.text() or JSON.parse,
and only call res.text() / JSON.parse and
this.update_local_timers(connectedRealmId, lastMod) if the guard does not
trigger—preserve the non-200 check (if (res.status !== 200) throw new
Error(`${res.status}`)) and keep the same log messages.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@node-ui/mega-alerts.js`:
- Around line 797-806: The current check only compares the raw Last-Modified
header against upload_timers[connectedRealmId]?.lastUploadTimeRaw, which misses
the requirement to ensure the Last-Modified timestamp falls within the current
UTC hour on first run or after restart; update the logic around lastMod (the
value from res.headers.get("last-modified")) to parse it into a Date, convert to
UTC, and verify its year/month/day/hour matches the current UTC
year/month/day/hour before proceeding (if it does not match, log and return {
auctions: [] }); apply the same UTC-hour validation in both places where lastMod
is used (the block referencing
upload_timers[connectedRealmId]?.lastUploadTimeRaw and the similar check around
lines 889-897) and still preserve the existing raw-string deduplication against
upload_timers[connectedRealmId].lastUploadTimeRaw.

---

Nitpick comments:
In `@node-ui/mega-alerts.js`:
- Around line 873-898: In makeCommodityRequest you currently parse the response
body before checking the Last-Modified guard; hoist the “last-modified” check
before calling res.text()/JSON.parse to avoid unnecessary body reads/parsing
when data hasn’t changed: get lastMod via res.headers.get("last-modified"),
compare it to this.upload_timers[connectedId]?.lastUploadTimeRaw and return {
auctions: [] } (and log) if unchanged, call
this.update_local_timers(connectedId, lastMod) when appropriate, and only then
read/parse the body (same pattern as makeAhRequest); keep references to
this.REGION, this.upload_timers, update_local_timers and ensure parse/error
handling remains after the guard.
- Around line 781-809: The Last-Modified guard in makeAhRequest runs after
consuming and parsing the response body (res.text() / JSON.parse), wasting
bandwidth/CPU for cache hits; change the order so you read the header first
(const lastMod = res.headers.get("last-modified")), perform the cache-hit check
using this.upload_timers[connectedRealmId]?.lastUploadTimeRaw === lastMod and
return { auctions: [] } before calling res.text() or JSON.parse, and only call
res.text() / JSON.parse and this.update_local_timers(connectedRealmId, lastMod)
if the guard does not trigger—preserve the non-200 check (if (res.status !==
200) throw new Error(`${res.status}`)) and keep the same log messages.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 49b88fb and c226101.

📒 Files selected for processing (2)
  • node-ui/mega-alerts.js
  • utils/mega_data_setup.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • utils/mega_data_setup.py

@coderabbitai bot left a comment

🧹 Nitpick comments (1)
mega_alerts.py (1)

783-784: Comment phrasing is misleading without cross-PR context.

"skipping processing speeds things up but may lead to more 429s" reads as though the sleep might cause more 429s, which is backwards. The intended meaning — that the Last-Modified-based skip introduced elsewhere in this PR shortens cycles and therefore risks higher request rates — isn't clear here.

✏️ Suggested comment reword
-                    # Short sleep between cycles; skipping processing speeds things up but may lead to more 429s
-                    time.sleep(5)
+                    # Throttle between cycles: Last-Modified skipping (added in this PR) completes cycles
+                    # faster, which increases per-minute API call rate and the risk of 429s.
+                    time.sleep(5)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mega_alerts.py` around lines 783 - 784, The inline comment next to
time.sleep(5) is misleading; update the comment at the sleep call in
mega_alerts.py to state that the sleep adds a fixed pause between cycles and
that a separate "Last-Modified"-based skip mechanism (introduced elsewhere) can
shorten cycles and thus increase request rate and risk 429s — e.g., explain that
skipping processing shortens cycle intervals (not the sleep itself) and that the
sleep simply enforces a minimum pause between cycles. Refer to the sleep call
(time.sleep(5)) and the Last-Modified-based skip logic when editing the comment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@mega_alerts.py`:
- Around line 783-784: The inline comment next to time.sleep(5) is misleading;
update the comment at the sleep call in mega_alerts.py to state that the sleep
adds a fixed pause between cycles and that a separate "Last-Modified"-based skip
mechanism (introduced elsewhere) can shorten cycles and thus increase request
rate and risk 429s — e.g., explain that skipping processing shortens cycle
intervals (not the sleep itself) and that the sleep simply enforces a minimum
pause between cycles. Refer to the sleep call (time.sleep(5)) and the
Last-Modified-based skip logic when editing the comment.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c226101 and 287f6c3.

📒 Files selected for processing (2)
  • mega_alerts.py
  • node-ui/mega-alerts.js
🚧 Files skipped from review as they are similar to previous changes (1)
  • node-ui/mega-alerts.js

@sonarqubecloud

Quality Gate failed

Failed conditions
29.5% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

@coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
utils/mega_data_setup.py (1)

624-638: ⚠️ Potential issue | 🟠 Major

Classic "all" faction realms will never generate alerts after a data update

When FACTION == "all", the loop iterates over ["/2", "/6", "/7"] for the same connectedRealmId. After the first endpoint (e.g., /2) is processed, make_ah_api_request calls self.update_local_timers(connectedRealmId, T1) immediately (line 693), storing the new Last-Modified. When endpoint /6 is then fetched — also returning Last-Modified: T1 — the stored timer now equals T1, so make_ah_api_request returns {"auctions": [], "skipped": True}. The return None at line 634 discards all data already collected from /2, and pull_single_realm_data exits without sending any alerts. This repeats on every update cycle.

Root cause: the timer is keyed by connectedRealmId, shared across all faction endpoints. The first endpoint's fetch updates the shared timer, making all subsequent same-cycle fetches appear "unchanged."

🐛 Minimal fix — change `return None` to `continue` and guard at loop end
+        any_skipped = False
         for endpoint in endpoints:
             url = self.construct_api_url(connectedRealmId, endpoint)

             auction_info = self.make_ah_api_request(url, connectedRealmId)
             if auction_info is None or "auctions" not in auction_info:
                 print(
                     f"{self.REGION} {str(connectedRealmId)} realm data, no auctions found"
                 )
                 continue
             if auction_info.get("skipped"):
-                return None
+                any_skipped = True
+                continue
             # merge all the auctions
             all_auctions.extend(auction_info["auctions"])

+        if not all_auctions and any_skipped:
+            return None
         return all_auctions

Note: even with this fix, horde/booty bay endpoints are still "skipped" within the same cycle (their Last-Modified matches the timer just updated by alliance). A complete fix requires keying the timer by (connectedRealmId, endpoint) so each faction endpoint tracks independently. The same pattern must be fixed in node-ui/mega-alerts.js get_listings_single.
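The per-endpoint keying suggested above can be sketched like this (hypothetical helper names; the real fix would live in update_local_timers and its callers):

```python
upload_timers = {}


def should_skip(connected_realm_id, endpoint, last_modified):
    """Skip only when THIS endpoint's stored timestamp matches, so one
    faction's fetch cannot mark the other factions' data as already seen."""
    key = (connected_realm_id, endpoint)  # keyed per (realm, endpoint)
    if last_modified and upload_timers.get(key) == last_modified:
        return True
    if last_modified:
        upload_timers[key] = last_modified
    return False
```

With this keying, the /2, /6, and /7 endpoints each track their own Last-Modified, so a shared upload timestamp no longer suppresses the later endpoints within the same cycle.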

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utils/mega_data_setup.py` around lines 624 - 638, In pull_single_realm_data,
the loop over endpoints discards already-merged auctions when
make_ah_api_request returns {"auctions": [], "skipped": True} by using return
None; change that return None to continue so other endpoints in the same
connectedRealmId iteration are not lost, and after the loop, ensure you return
an empty list or existing all_auctions appropriately. For a robust fix also
modify the timer keying used by make_ah_api_request / update_local_timers so
timers are tracked per (connectedRealmId, endpoint) rather than only
connectedRealmId (update the timer storage and lookup in update_local_timers and
wherever timers are checked), and mirror the same change in
node-ui/mega-alerts.js get_listings_single to avoid cross-faction skipping.
node-ui/mega-alerts.js (1)

993-1011: ⚠️ Potential issue | 🟠 Major

Same Classic "all" faction alert-suppression bug as in the Python path

makeAhRequest calls this.update_local_timers(connectedRealmId, lastMod) (line 808) after processing the first endpoint. When the loop proceeds to /6, this.upload_timers[connectedRealmId].lastUploadTimeRaw now equals lastMod, so makeAhRequest returns { auctions: [], skipped: true }. The return null at line 998 immediately discards the auctions already pushed from /2, and pull_single_realm_data exits without sending alerts — on every update cycle.

🐛 Minimal fix — `continue` instead of `return null`, guard at loop end
+    let anySkipped = false
     for (const ep of endpoints) {
       try {
         const url = this.construct_api_url(connectedRealmId, ep)
         const data = await this.makeAhRequest(url, connectedRealmId)
-        if (data?.skipped) return null
+        if (data?.skipped) { anySkipped = true; continue }
         if (data?.auctions && Array.isArray(data.auctions)) {
           for (const auction of data.auctions) {
             all.push(auction)
           }
         }
       } catch (err) {
         logError("AH request failed", err)
       }
     }
+    if (!all.length && anySkipped) return null
     return all

The complete fix (preventing horde/booty bay data loss within the same cycle) requires keying upload_timers by (connectedRealmId, endpoint) rather than connectedRealmId alone, mirroring the same change needed in utils/mega_data_setup.py.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@node-ui/mega-alerts.js` around lines 993 - 1011, The loop in
pull_single_realm_data discards previously collected auctions when makeAhRequest
returns a skipped result (it currently does "return null"); change that behavior
to "continue" so a skipped endpoint doesn't abort the whole collection (update
the block around construct_api_url and makeAhRequest), and after the loop return
either the accumulated all array or null only if all endpoints were skipped;
additionally, for the full fix adjust upload_timers usage in
makeAhRequest/update_local_timers to key by both connectedRealmId and endpoint
(or include endpoint in the timer key) so each endpoint's lastUploadTimeRaw is
tracked separately.
♻️ Duplicate comments (4)
node-ui/mega-alerts.js (2)

888-899: ⚠️ Potential issue | 🟠 Major

Same missing UTC-hour guard in the commodity path

makeCommodityRequest has the identical gap — no validation that the parsed lastMod hour matches the current UTC hour — mirroring the issue flagged for makeAhRequest in the previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@node-ui/mega-alerts.js` around lines 888 - 899, The commodity request path in
makeCommodityRequest repeats the same bug as makeAhRequest: it only compares
last-modified string against upload_timers[connectedId].lastUploadTimeRaw and
misses the UTC-hour guard, allowing stale data from a different UTC hour to be
skipped; fix by parsing the lastMod value in makeCommodityRequest, compute its
UTC hour (e.g., via new Date(lastMod).getUTCHours()), compare against current
UTC hour (new Date().getUTCHours()), and only treat the response as
unchanged/skip and call update_local_timers(connectedId, lastMod) when the hours
match in addition to the existing lastUploadTimeRaw equality check—use the same
logic and references used in makeAhRequest, update_local_timers, upload_timers,
lastUploadTimeRaw, and REGION.

797-810: ⚠️ Potential issue | 🟠 Major

No UTC-hour validation before accepting AH data

The skip check compares lastMod only against the cached lastUploadTimeRaw. On first run or when a realm updates later than expected (e.g., after 65 min), stale data from the previous hour is silently consumed. This was flagged in the prior review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@node-ui/mega-alerts.js` around lines 797 - 810, Summary: missing UTC-hour
validation allows stale AH data to be accepted when Last-Modified matches an
older hour; fix by validating the UTC hour before returning data. Modify the
logic around lastMod in the same block (use this.upload_timers,
lastUploadTimeRaw, and this.update_local_timers) to parse lastMod as a Date and
compare its UTC hour (or full UTC hour timestamp) against the cached timer value
(or add a new lastUploadUtcHour field) and if the UTC hour differs in a way that
indicates stale data (e.g., lastMod UTC hour is older than the cached hour or
older than now-allowed-window) then treat as skipped; only call
this.update_local_timers and return data when the Last-Modified passes the
UTC-hour validation so stale hourly files are not consumed.
utils/mega_data_setup.py (2)

679-695: ⚠️ Potential issue | 🟠 Major

No current-hour (UTC) validation on Last-Modified

The skip guard only compares lastUploadTimeRaw against the cached value. On first run (empty upload_timers) or when a realm updates later than the expected window (e.g., after 65 min as described in issue #239), there is no check that the Last-Modified hour matches the current UTC hour, so stale data from the previous hour is consumed. This was flagged in the prior review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utils/mega_data_setup.py` around lines 679 - 695, The current guard only
compares lastUploadTimeRaw against self.upload_timers and can accept stale
previous-hour data; modify the block that reads "Last-Modified" (the code using
dict(req.headers)["Last-Modified"], connectedRealmId, self.upload_timers, and
self.update_local_timers) to additionally parse lastUploadTimeRaw into a UTC
datetime and verify that its hour equals datetime.utcnow().hour (or reject if
the timestamp is older than the current UTC hour window); if the hour does not
match, log/print a skip message and return {"auctions": [], "skipped": True}
without updating self.upload_timers, ensuring the first-run case still updates
timers only when the Last-Modified is for the current UTC hour.

757-773: ⚠️ Potential issue | 🟠 Major

Same missing UTC-hour check in the commodity path

make_commodity_ah_api_request has the identical gap: no validation that lastUploadTimeRaw's UTC hour equals datetime.utcnow().hour before accepting commodity data. First-run and late-update scenarios are still not guarded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utils/mega_data_setup.py` around lines 757 - 773, In
make_commodity_ah_api_request the Last-Modified header is accepted without the
UTC-hour guard; parse lastUploadTimeRaw into a UTC datetime (e.g.,
email.utils.parsedate_to_datetime or equivalent) and compare its .hour to
datetime.utcnow().hour before calling update_local_timers or processing data—if
the hours differ, log/print a skip message and return {"auctions": [],
"skipped": True}; only update upload_timers/update_local_timers when the parsed
UTC hour matches current UTC hour, and handle parsing exceptions gracefully (do
not update timers on parse failure). Use the existing symbols connectedRealmId,
lastUploadTimeRaw, upload_timers, update_local_timers, and
make_commodity_ah_api_request to locate and change the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mega_alerts.py`:
- Around line 784-786: The startup scan via main_fast() currently submits all
realms concurrently using ThreadPoolExecutor and returns immediately into the
hourly main() loop without the 5-second throttle that main() uses; to prevent
startup-driven 429s, add a brief sleep (e.g., time.sleep a few seconds) after
main_fast() completes before launching main(), or explicitly document that
main_fast() intentionally omits throttling for UX—modify the bootstrap flow
where main_fast() is invoked to either insert the sleep or add a clear comment
explaining the asymmetry so maintainers know the rate-limit tradeoff.

---

Outside diff comments:
In `@node-ui/mega-alerts.js`:
- Around line 993-1011: The loop in pull_single_realm_data discards previously
collected auctions when makeAhRequest returns a skipped result (it currently
does "return null"); change that behavior to "continue" so a skipped endpoint
doesn't abort the whole collection (update the block around construct_api_url
and makeAhRequest), and after the loop return either the accumulated all array
or null only if all endpoints were skipped; additionally, for the full fix
adjust upload_timers usage in makeAhRequest/update_local_timers to key by both
connectedRealmId and endpoint (or include endpoint in the timer key) so each
endpoint's lastUploadTimeRaw is tracked separately.

In `@utils/mega_data_setup.py`:
- Around line 624-638: In pull_single_realm_data, the loop over endpoints
discards already-merged auctions when make_ah_api_request returns {"auctions":
[], "skipped": True} by using return None; change that return None to continue
so other endpoints in the same connectedRealmId iteration are not lost, and
after the loop, ensure you return an empty list or existing all_auctions
appropriately. For a robust fix also modify the timer keying used by
make_ah_api_request / update_local_timers so timers are tracked per
(connectedRealmId, endpoint) rather than only connectedRealmId (update the timer
storage and lookup in update_local_timers and wherever timers are checked), and
mirror the same change in node-ui/mega-alerts.js get_listings_single to avoid
cross-faction skipping.
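The `continue`-instead-of-`return` fix above can be sketched as below. The function signature and the `make_ah_api_request` callable are hypothetical stand-ins for the real methods; the point is the control flow, where one skipped endpoint no longer discards auctions already collected from the other endpoints:

```python
def pull_single_realm_data(connected_realm_id, endpoints, make_ah_api_request):
    """Collect auctions across endpoints, skipping (not aborting on) unchanged ones."""
    all_auctions = []
    any_fresh = False
    for endpoint in endpoints:
        result = make_ah_api_request(connected_realm_id, endpoint)
        if result.get("skipped"):
            # Skip only this endpoint; for the full fix, timers would also be
            # keyed by (realm, endpoint) so one faction's skip cannot mask
            # the other faction's fresh data.
            continue
        any_fresh = True
        all_auctions.extend(result["auctions"])
    return all_auctions if any_fresh else None
```

Returning `None` only when every endpoint was skipped preserves the early-exit behavior for fully unchanged realms.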

---

Duplicate comments:
In `@node-ui/mega-alerts.js`:
- Around line 888-899: The commodity request path in makeCommodityRequest
repeats the same bug as makeAhRequest: it only compares last-modified string
against upload_timers[connectedId].lastUploadTimeRaw and misses the UTC-hour
guard, allowing stale data from a different UTC hour to be skipped; fix by
parsing the lastMod value in makeCommodityRequest, compute its UTC hour (e.g.,
via new Date(lastMod).getUTCHours()), compare against current UTC hour (new
Date().getUTCHours()), and only treat the response as unchanged/skip and call
update_local_timers(connectedId, lastMod) when the hours match in addition to
the existing lastUploadTimeRaw equality check—use the same logic and references
used in makeAhRequest, update_local_timers, upload_timers, lastUploadTimeRaw,
and REGION.
- Around line 797-810: Summary: missing UTC-hour validation allows stale AH data
to be accepted when Last-Modified matches an older hour; fix by validating the
UTC hour before returning data. Modify the logic around lastMod in the same
block (use this.upload_timers, lastUploadTimeRaw, and this.update_local_timers)
to parse lastMod as a Date and compare its UTC hour (or full UTC hour timestamp)
against the cached timer value (or add a new lastUploadUtcHour field) and if the
UTC hour differs in a way that indicates stale data (e.g., lastMod UTC hour is
older than the cached hour or older than now-allowed-window) then treat as
skipped; only call this.update_local_timers and return data when the
Last-Modified passes the UTC-hour validation so stale hourly files are not
consumed.


ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 287f6c3 and c6a8924.

📒 Files selected for processing (3)
  • mega_alerts.py
  • node-ui/mega-alerts.js
  • utils/mega_data_setup.py

Comment on lines 784 to +786
pool.shutdown(wait=True)
# Short sleep between cycles; skipping processing speeds things up but may lead to more 429s
time.sleep(5)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

fd -name "mega_alerts.py" -type f

Repository: ff14-advanced-market-search/AzerothAuctionAssassin

Length of output: 324


🏁 Script executed:

fd "mega_alerts.py"

Repository: ff14-advanced-market-search/AzerothAuctionAssassin

Length of output: 108


🏁 Script executed:

cat -n mega_alerts.py | sed -n '780,820p'

Repository: ff14-advanced-market-search/AzerothAuctionAssassin

Length of output: 2259


🏁 Script executed:

rg "main_fast\(\)|main_single\(\)" mega_alerts.py -A 2 -B 2

Repository: ff14-advanced-market-search/AzerothAuctionAssassin

Length of output: 738


🏁 Script executed:

cat -n mega_alerts.py | sed -n '750,830p'

Repository: ff14-advanced-market-search/AzerothAuctionAssassin

Length of output: 4083


🏁 Script executed:

cat -n mega_alerts.py | sed -n '830,880p'

Repository: ff14-advanced-market-search/AzerothAuctionAssassin

Length of output: 2569


Startup scan lacks throttle between main_fast() and subsequent cycles

The 5-second sleep between cycles in the main loop (lines 785-786) prevents rapid re-requests when realms skip. However, main_fast() (lines 806-812), which runs once on startup to immediately show current data, completes without any sleep before main() begins its hourly polling. Since both functions submit all realms concurrently via ThreadPoolExecutor, the startup scan could trigger rate limits under high thread counts. Consider adding a brief sleep after main_fast() completes if startup-driven 429s are observed, or document this asymmetry as intentional for UX reasons.

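One way to resolve the asymmetry, sketched with hypothetical names for the bootstrap flow (the real entry point wires `main_fast` and `main` together differently):

```python
import time

def bootstrap(main_fast, main, startup_pause_seconds=5):
    """Run the fast startup scan, then pause before entering the hourly loop.

    The pause mirrors the per-cycle throttle inside main(), so the burst of
    concurrent realm requests from main_fast() is not immediately followed
    by another full round of requests (reducing 429 risk on startup).
    """
    main_fast()
    time.sleep(startup_pause_seconds)
    main()
```

Alternatively, a one-line comment at the `main_fast()` call site stating that the startup scan intentionally skips throttling for faster first results would document the tradeoff without changing behavior.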

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
.github/workflows/electron-exe-build.yml (1)

4-4: LGTM — version matches branch name.

"2.0.7" satisfies the guideline requiring this field to match the versioned branch name. As per coding guidelines, "Version numbers in .github/workflows/electron-exe-build.yml must match the versioned branch name."

Consider externalizing AAA_VERSION into a dedicated file (e.g., version.txt or .version) and reading it in the workflow with a cat step — this avoids touching the YAML on every release cycle and was previously suggested for windows-exe-build.yml as well.

Based on learnings from PR #154: "When updating the AAA_VERSION variable in GitHub Actions workflows, externalize it into a separate file (e.g., version.txt) to avoid frequent changes to the workflow file."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/electron-exe-build.yml at line 4, Replace the hard-coded
AAA_VERSION value with an externalized version source: create a simple file
(e.g., version.txt) containing the version string and modify the workflow to
read that file into the AAA_VERSION variable using a shell step (e.g., run: echo
"AAA_VERSION=$(cat version.txt)" >> $GITHUB_ENV or similar), update any uses of
AAA_VERSION in the electron-exe-build.yml to reference the injected environment
variable, and mirror the same change where applicable (e.g.,
windows-exe-build.yml) so future releases only require updating the version
file.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c6a8924 and 244d5ce.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (5)
  • .github/workflows/electron-exe-build.yml
  • .github/workflows/windows-exe-build.yml
  • AzerothAuctionAssassin.py
  • node-ui/index.html
  • package.json
✅ Files skipped from review due to trivial changes (3)
  • .github/workflows/windows-exe-build.yml
  • package.json
  • node-ui/index.html

@cohenaj194 cohenaj194 merged commit 4cb156c into main Feb 23, 2026
9 of 10 checks passed


Development

Successfully merging this pull request may close these issues.

LastModified header not respected