chore(metrics): properly detect rate limits #3499
Conversation
✅ Deploy Preview for testcontainers-go ready!
Summary by CodeRabbit
Walkthrough: Adds a 120-minute job timeout to the usage-metrics workflow, inserts a 2-second delay between successive successful version queries when more remain, and broadens GitHub rate-limit detection to include HTTP 429.
Sequence Diagram

```mermaid
sequenceDiagram
    participant WF as Metrics Workflow
    participant CM as collect-metrics
    participant GH as GitHub API
    WF->>CM: start collect-metrics job
    CM->>GH: GET /repos/.../releases?per_page=100
    alt Response = 429 or rate-limited (403 / rate limit text)
        GH-->>CM: 429 / rate-limit error
        Note right of CM: Retry with backoff
        CM->>CM: backoff wait, retry
    else 200 OK
        GH-->>CM: 200 OK (version data)
        Note right of CM: Process result
        alt More metrics remain
            CM->>CM: wait 2s (inter-request delay)
        end
    end
    CM-->>WF: metrics processed / job continues
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 0
🧹 Nitpick comments (1)
usage-metrics/collect-metrics.go (1)
81-86: Consider using the loop index to avoid an unnecessary delay. The condition `len(metrics) < len(versions)` attempts to skip the delay after the last version, but it counts only successful queries. If any earlier query fails, the condition remains true even when processing the final version, causing an unnecessary 2-second delay at the end. Apply this diff to use the loop index instead:
```diff
-	for _, version := range versions {
+	for i, version := range versions {
 		version = strings.TrimSpace(version)
 		if version == "" {
 			continue
 		}
 		count, err := queryGitHubUsageWithRetry(version)
 		if err != nil {
 			log.Printf("Warning: Failed to query version %s after retries: %v", version, err)
 			continue
 		}
 		metric := usageMetric{
 			Date:    date,
 			Version: version,
 			Count:   count,
 		}
 		metrics = append(metrics, metric)
 		fmt.Printf("Successfully queried: %s has %d usages on %s\n", version, count, metric.Date)

 		// Add delay to avoid rate limiting (30 requests/minute = 2 seconds between requests)
-		if len(metrics) < len(versions) {
+		if i < len(versions)-1 {
 			log.Printf("Waiting 2 seconds before next query to avoid rate limiting...")
 			time.Sleep(2 * time.Second)
 		}
 	}
```
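The off-by-one can be seen in isolation. In this minimal sketch, `sleepByLen` and `sleepByIndex` are hypothetical names (not part of the PR) standing in for the original length-based guard and the suggested index-based guard, evaluated after the last of three versions when one query has failed:

```go
package main

import "fmt"

// sleepByLen reproduces the original guard: sleep while the number of
// collected metrics is still below the number of versions.
func sleepByLen(metricsLen, totalVersions int) bool {
	return metricsLen < totalVersions
}

// sleepByIndex is the suggested guard: sleep unless this is the last index.
func sleepByIndex(i, totalVersions int) bool {
	return i < totalVersions-1
}

func main() {
	// Hypothetical run over 3 versions where the 2nd query failed,
	// so only 2 metrics were collected by the final (index 2) iteration.
	fmt.Println(sleepByLen(2, 3))   // true: extra 2s sleep after the final version
	fmt.Println(sleepByIndex(2, 3)) // false: no sleep after the final version
}
```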
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `.github/workflows/usage-metrics.yml` (1 hunks)
- `usage-metrics/collect-metrics.go` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
usage-metrics/collect-metrics.go (1)
log/logger.go (1)
`Printf` (47-49)
🔇 Additional comments (2)
usage-metrics/collect-metrics.go (1)
136-138: Good addition of HTTP 429 detection. Adding the check for "429" properly detects the standard HTTP rate-limit status code, improving the retry logic's ability to recognize and handle rate limits.
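A minimal sketch of broadened detection along these lines; `isRateLimited` is a hypothetical name, and the exact strings matched are assumptions rather than the PR's code:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// isRateLimited treats a response as rate-limited on HTTP 429, or on
// HTTP 403 whose body mentions a rate limit (GitHub historically returns
// 403 for primary rate limits).
func isRateLimited(statusCode int, body string) bool {
	if statusCode == http.StatusTooManyRequests { // 429
		return true
	}
	return statusCode == http.StatusForbidden &&
		strings.Contains(strings.ToLower(body), "rate limit")
}

func main() {
	fmt.Println(isRateLimited(429, ""))                        // true
	fmt.Println(isRateLimited(403, "API rate limit exceeded")) // true
	fmt.Println(isRateLimited(403, "forbidden"))               // false
	fmt.Println(isRateLimited(200, "ok"))                      // false
}
```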
.github/workflows/usage-metrics.yml (1)
23-23: Appropriate timeout for the enhanced metrics collection. The 120-minute timeout aligns with the new 2-second inter-request delays and retry backoffs introduced in the Go code, providing adequate time while preventing indefinite execution.
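For context, a job-level timeout in a GitHub Actions workflow looks like the fragment below; the job and step contents here are illustrative, not copied from `usage-metrics.yml` (only `timeout-minutes: 120` is stated by the review):

```yaml
jobs:
  collect-metrics:
    runs-on: ubuntu-latest
    timeout-minutes: 120  # fail the job instead of hanging indefinitely
    steps:
      - uses: actions/checkout@v4
      - run: go run ./usage-metrics/collect-metrics.go
```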
Actionable comments posted: 0
🧹 Nitpick comments (1)
scripts/changed-modules.sh (1)
6-65: Consider adding a test case for usage-metrics module changes. The documentation includes comprehensive test cases for modulegen (case #4) and other modules, but no explicit test case for usage-metrics. Adding one would improve clarity of the expected behavior when usage-metrics/* files are modified, mirroring test #4 for consistency.
Example addition to documentation:
```diff
 # 4. A Go file from the modulegen dir is modified:
 #    ALL_CHANGED_FILES="modulegen/a.go" ./scripts/changed-modules.sh
 #    The output should be: just the modulegen module.
 #
+# 4b. A file from the usage-metrics dir is modified:
+#    ALL_CHANGED_FILES="usage-metrics/metrics.go" ./scripts/changed-modules.sh
+#    The output should be: just the usage-metrics module.
+#
 # 5. A non-Go file from the core dir is modified:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `scripts/changed-modules.sh` (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: lint (modules/nebulagraph) / lint: modules/nebulagraph
- GitHub Check: lint (modules/compose) / lint: modules/compose
- GitHub Check: lint (modules/k3s) / lint: modules/k3s
- GitHub Check: Analyze (go)
🔇 Additional comments (1)
scripts/changed-modules.sh (1)
173-174: Correctly adds usage-metrics module detection in line with existing patterns. The implementation properly detects changes in the usage-metrics/* directory and appends the module to the modified_modules list. It follows the same approach as the existing modulegen handler and integrates cleanly with the already-initialized usageMetricsModule variable.
What does this PR do?
Why is it important?
Testing the workflow fails, and this is something we have to test live.
Related issues