f(ILO): improve perf price-oracle #47
Conversation
📝 Walkthrough

Reduced response timeouts and retry limits; processSetPriceFeed and commitSetPrices now accept and respect context cancellation, include per-attempt retry loops with short delays, and stop timers via defer to avoid leaks. Batch submission flushes on cancellation and logs cancellation events. go.mod dependency versions updated.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant Service as OracleService
    participant Feed as processSetPriceFeed
    participant Retry as Per-Attempt Retry
    participant Channel as dataC
    participant Commit as commitSetPrices
    rect rgb(220,235,255)
        Note over Caller,Service: Start(ctx) with cancellable context
    end
    Caller->>Service: Start(ctx)
    Service->>Feed: processSetPriceFeed(ctx)
    loop per-interval
        Feed->>Retry: attempt PullPrice (maxRetriesPerInterval=3)
        alt success
            Retry->>Channel: send PriceData
        else all attempts failed
            Retry->>Feed: log error, report metrics, reset interval
        end
    end
    par Commit flow
        Channel-->>Commit: PriceData arrives
        Commit->>Commit: collect into batch
        alt ctx.Done()
            Commit->>Commit: flush & submit batch, stop timers, log cancellation
        else timeout/normal
            Commit->>Commit: submit batch normally
        end
    end
```
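For readers unfamiliar with the commit-side flow in the diagram, here is a minimal sketch of how such a cancellation-aware batching loop could be structured. The names (dataC, commitLoopSketch, the 5s batching window, the submit callback) are assumptions for illustration, not the actual implementation in service.go:

```go
package oracle

import (
	"context"
	"time"
)

// PriceData stands in for the type defined in models.go.
type PriceData struct{}

// commitLoopSketch is a minimal sketch of the commit-side flow shown above;
// names, the batching window, and the submit callback are assumptions.
func commitLoopSketch(ctx context.Context, dataC <-chan *PriceData, submitBatch func([]*PriceData)) {
	const batchExpiration = 5 * time.Second // assumed batching window

	expirationTimer := time.NewTimer(batchExpiration)
	defer expirationTimer.Stop() // release timer resources when the loop exits

	var batch []*PriceData
	for {
		select {
		case <-ctx.Done():
			// flush whatever was collected, then stop
			submitBatch(batch)
			return
		case pd := <-dataC:
			batch = append(batch, pd)
		case <-expirationTimer.C:
			submitBatch(batch)
			batch = nil
			expirationTimer.Reset(batchExpiration)
		}
	}
}
```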
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
✨ Finishing touches
Actionable comments posted: 1
Fix all issues with AI Agents 🤖
In @internal/service/oracle/service.go:
- Around line 413-417: When handling ctx.Done() inside commitSetPrices, avoid
calling submitBatch with the cancelled ctx; instead create a short-lived
detached context (e.g., context.WithTimeout(context.Background(),
chainMaxTimeLimit)) and use that for the final flush so broadcastToClient
doesn't inherit cancellation; update the call in the ctx.Done() branch to use
this shutdown ctx (or adjust submitBatch/broadcastToClient to accept an explicit
ctx parameter) and ensure the shutdown context is canceled/deferred after use.
🧹 Nitpick comments (1)
internal/service/oracle/service.go (1)
227-237: `time.Sleep` doesn't respect context cancellation.

If the context is cancelled during the retry loop, `time.Sleep(100 * time.Millisecond)` will block unnecessarily. Consider using a select with a timer or `time.After` alongside `ctx.Done()`.

🔎 Suggested improvement

```diff
 			if err == nil {
 				break
 			}
-			time.Sleep(100 * time.Millisecond)
+			select {
+			case <-ctx.Done():
+				return
+			case <-time.After(100 * time.Millisecond):
+			}
 		}
```
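For context, this is roughly how one interval's retry loop could look once the cancellation-aware delay is in place. The constants mirror the values discussed in this review, and the pull callback stands in for pricePuller.PullPrice; treat it as a sketch rather than the code under review:

```go
package oracle

import (
	"context"
	"time"
)

// Values taken from the review discussion; the real configuration may differ.
const (
	maxRetriesPerInterval = 3
	maxRespTime           = 3 * time.Second
	retryDelay            = 100 * time.Millisecond
)

// pullWithRetries sketches one interval's worth of attempts; pull stands in
// for pricePuller.PullPrice and error handling is simplified.
func pullWithRetries(ctx context.Context, pull func(context.Context) error) error {
	var err error
	for attempt := 0; attempt < maxRetriesPerInterval; attempt++ {
		attemptCtx, cancel := context.WithTimeout(ctx, maxRespTime)
		err = pull(attemptCtx)
		cancel()
		if err == nil {
			break
		}
		// cancellation-aware delay instead of time.Sleep
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(retryDelay):
		}
	}
	return err
}
```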
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/service/oracle/service.go
🧰 Additional context used
🧬 Code graph analysis (1)
internal/service/oracle/service.go (1)
internal/service/oracle/models.go (1)
PriceData(13-33)
🔇 Additional comments (6)
internal/service/oracle/service.go (6)
60-64: Verify timeout reduction in production conditions.

The timeout reduction from 15s to 3s is aggressive. With 3 retries at 3s each plus 100ms delays, the max time per interval is ~9.2s. Ensure external price providers respond reliably within this window, especially under network latency or provider degradation.
215-216: Good: Timer leak prevention.

Adding `defer t.Stop()` ensures the timer is properly cleaned up when the goroutine exits, preventing resource leaks.
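As a standalone illustration of the pattern (the timer name and durations here are arbitrary, not taken from service.go):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Stop the timer when this function returns so its resources are
	// released even if it never fires.
	t := time.NewTimer(2 * time.Second)
	defer t.Stop()

	select {
	case <-t.C:
		fmt.Println("timer fired")
	case <-time.After(500 * time.Millisecond):
		fmt.Println("returning early; defer t.Stop() cleans up")
	}
}
```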
220-222: LGTM! Context cancellation is properly handled with appropriate logging before returning.
259-264: Verify reduced chain timeout and retries are sufficient.

`chainMaxTimeLimit` reduced to 3s and `maxRetries` to 3. Ensure chain broadcasts complete reliably under these constraints, especially during network congestion.
363-364: Good: Timer leak prevention.

The `defer expirationTimer.Stop()` ensures proper cleanup of the batch expiration timer.
456-504: LGTM! The broadcast logic correctly uses a timeout context and handles errors appropriately. The metrics reporting and logging are comprehensive.
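A minimal sketch of the timeout-context pattern this comment refers to, assuming a 3s `chainMaxTimeLimit` as discussed in this review; the broadcast callback stands in for the real chain client call and is not the actual broadcastToClient implementation:

```go
package oracle

import (
	"context"
	"errors"
	"time"
)

const chainMaxTimeLimit = 3 * time.Second // value assumed from the review

// broadcastWithTimeout derives a bounded context so a slow chain broadcast
// cannot hang the commit loop; the callback stands in for the chain client.
func broadcastWithTimeout(ctx context.Context, broadcast func(context.Context) error) error {
	broadcastCtx, cancel := context.WithTimeout(ctx, chainMaxTimeLimit)
	defer cancel()

	err := broadcast(broadcastCtx)
	if errors.Is(err, context.DeadlineExceeded) {
		// the broadcast timed out: report a timeout metric and log before returning
		return err
	}
	return err
}
```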
Review the following changes in direct dependencies. Learn more about Socket for GitHub.
Actionable comments posted: 1
🤖 Fix all issues with AI Agents
In @internal/service/oracle/service.go:
- Around line 413-417: The final batch submission is using the cancelled ctx so
broadcastToClient creates a timeout from an already cancelled context and
immediately fails; change submitBatch to accept a context parameter (e.g.,
submitBatch(ctx context.Context, batch, meta, persist bool)), update its
internals and all callers to pass an explicit context, and in the ctx.Done()
branch create a short-lived shutdown context with
context.WithTimeout(context.Background(), chainMaxTimeLimit) (and defer
cancel()) and call submitBatch(shutdownCtx, prevBatch, prevMeta, false) so the
final flush uses a fresh, non-cancelled context.
🧹 Nitpick comments (1)
internal/service/oracle/service.go (1)
227-237: Good addition: Per-interval retry logic improves reliability.

The retry loop with per-attempt timeouts and cleanup improves resilience. The 100ms delay between retries is reasonable to avoid overwhelming the price feed source.
💡 Optional: Consider making retry count and delay configurable
If different price feeds have varying reliability characteristics, consider making `maxRetriesPerInterval` and the 100ms delay configurable per feed:

```diff
+type PricePuller interface {
+	Provider() FeedProvider
+	ProviderName() string
+	Symbol() string
+	Interval() time.Duration
+	MaxRetries() int
+	RetryDelay() time.Duration
+	PullPrice(ctx context.Context) (priceData *PriceData, err error)
+	OracleType() oracletypes.OracleType
+}
```

Then use `pricePuller.MaxRetries()` and `pricePuller.RetryDelay()` in the loop.
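A sketch of how the loop could consume those per-feed settings; the interface here is deliberately narrowed to just the retry-related methods, and the loop body is an assumption rather than the code in this PR:

```go
package oracle

import (
	"context"
	"time"
)

// PriceData stands in for the type defined in models.go.
type PriceData struct{}

// retrySettings is just the slice of the proposed interface the loop needs;
// the full PricePuller interface is sketched in the diff above.
type retrySettings interface {
	MaxRetries() int
	RetryDelay() time.Duration
	PullPrice(ctx context.Context) (*PriceData, error)
}

// pullWithFeedConfig lets each feed drive its own retry count and delay.
func pullWithFeedConfig(ctx context.Context, p retrySettings) (pd *PriceData, err error) {
	for attempt := 0; attempt < p.MaxRetries(); attempt++ {
		if pd, err = p.PullPrice(ctx); err == nil {
			return pd, nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(p.RetryDelay()):
		}
	}
	return nil, err
}
```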
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (2)
go.mod
internal/service/oracle/service.go
🧰 Additional context used
🧬 Code graph analysis (1)
internal/service/oracle/service.go (1)
internal/service/oracle/models.go (1)
PriceData(13-33)
🔇 Additional comments (7)
internal/service/oracle/service.go (5)
216-216: Good addition: Timer cleanup prevents resource leak.

Properly stopping the timer via `defer` ensures resources are released when the goroutine exits.
220-222: Good addition: Context cancellation enables graceful shutdown.

The context-aware design allows price feed goroutines to exit cleanly when the parent context is cancelled.
364-364: Good addition: Timer cleanup prevents resource leak.

Properly stopping the expiration timer via `defer` ensures resources are released when commitSetPrices exits.
494-503: Good improvement: Enhanced timing metrics and consistency.

Storing the duration in `diff` before use improves code clarity. The addition of `metrics.Timer("price_oracle.execution_time", ...)` provides valuable observability for transaction submission performance.
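A minimal sketch of the measure-then-report pattern being described; reportTimer is a hypothetical stand-in for the metrics.Timer call quoted above, since the exact API of the InjectiveLabs/metrics library is not shown in this review:

```go
package oracle

import (
	"log"
	"time"
)

// reportTimer is a hypothetical stand-in for the metrics.Timer call quoted
// above; the real metrics API may differ.
func reportTimer(name string, d time.Duration) {
	log.Printf("%s: %s", name, d)
}

// submitWithTiming stores the elapsed duration in diff before using it for
// both logging and the timing metric, mirroring the pattern praised above.
func submitWithTiming(broadcast func() error) error {
	start := time.Now()
	err := broadcast()
	diff := time.Since(start)

	reportTimer("price_oracle.execution_time", diff)
	return err
}
```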
61-61: Aggressive timeout and retry limits may cause unnecessary failures.

The configured timeouts are tight at 3 seconds each:

- `maxRespTime`: 3s per price pull attempt
- `chainMaxTimeLimit`: 3s per chain broadcast
- `maxRetries`: 3 attempts per operation

With `maxRetriesPerInterval` also set to 3, this allows up to 9 seconds total per price pull interval but provides limited margin for network latency or chain congestion. Production logs show timeouts are already occurring (`timeout=true` in actual execution). Verify these values work reliably under realistic load and network conditions, particularly for slower or congested networks.

go.mod (2)
59-59: No known security vulnerabilities found for the indirect dependencies.

All five new indirect dependencies (gostackparse v0.7.0, pprof v0.0.0-20240424215950-a892ee059fd6, mixpanel-go v1.2.1, molecule v1.0.1-0.20221107223329-32cfee06a052, and murmur3 v1.1.0) are free from published CVEs and GitHub Security Advisories as of January 2026.
7-7: Verify compatibility with metrics library v0.0.21 and document the upgrade.

The metrics library was upgraded from v0.0.10 to v0.0.21. While the `metrics.Timer()` call at line 503 appears compatible, verify there are no breaking changes in the updated version by consulting the InjectiveLabs/metrics repository changelog, and consider documenting the upgrade reason and any API changes in the commit message.
```go
		case <-ctx.Done():
			s.logger.Infoln("context cancelled, stopping commitSetPrices")
			prevBatch, prevMeta := resetBatch()
			submitBatch(prevBatch, prevMeta, false)
			return
```
Context cancellation issue persists: Final batch submission will fail.
When ctx.Done() fires, submitBatch is called with the cancelled context. This causes broadcastToClient (line 466) to create context.WithTimeout(ctx, chainMaxTimeLimit) which immediately inherits the cancellation and fails.
Use context.Background() or a detached context with a fresh timeout for the final flush:
```go
case <-ctx.Done():
	s.logger.Infoln("context cancelled, stopping commitSetPrices")
	prevBatch, prevMeta := resetBatch()
	// Create a short-lived context for final submission
	shutdownCtx, cancel := context.WithTimeout(context.Background(), chainMaxTimeLimit)
	defer cancel()
	// Pass shutdownCtx to submitBatch (requires signature change)
	submitBatch(shutdownCtx, prevBatch, prevMeta, false)
	return
```

This requires modifying submitBatch to accept a context parameter.
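A sketch of what that signature change could look like; the batch and metadata types are placeholders and the stub bodies are assumptions, shown only to illustrate how the detached shutdown context would flow through:

```go
package oracle

import (
	"context"
	"time"
)

// Placeholder types standing in for the real batch and metadata structures.
type (
	priceBatch struct{}
	batchMeta  struct{}
)

const chainMaxTimeLimit = 3 * time.Second // value assumed from the review

// submitBatch takes an explicit context so callers decide which context the
// broadcast inherits; the body is a stub, not the real implementation.
func submitBatch(ctx context.Context, batch *priceBatch, meta *batchMeta, persist bool) {
	_, _, _, _ = ctx, batch, meta, persist
}

// flushOnShutdown shows the ctx.Done() branch using a detached, short-lived
// context so the final flush does not inherit the cancellation.
func flushOnShutdown(prevBatch *priceBatch, prevMeta *batchMeta) {
	shutdownCtx, cancel := context.WithTimeout(context.Background(), chainMaxTimeLimit)
	defer cancel()
	submitBatch(shutdownCtx, prevBatch, prevMeta, false)
}
```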
🤖 Prompt for AI Agents
In @internal/service/oracle/service.go around lines 413 - 417, The final batch
submission is using the cancelled ctx so broadcastToClient creates a timeout
from an already cancelled context and immediately fails; change submitBatch to
accept a context parameter (e.g., submitBatch(ctx context.Context, batch, meta,
persist bool)), update its internals and all callers to pass an explicit
context, and in the ctx.Done() branch create a short-lived shutdown context with
context.WithTimeout(context.Background(), chainMaxTimeLimit) (and defer
cancel()) and call submitBatch(shutdownCtx, prevBatch, prevMeta, false) so the
final flush uses a fresh, non-cancelled context.
Summary by CodeRabbit
Performance
Reliability
✏️ Tip: You can customize this high-level summary in your review settings.