staging fixes, core-compat layer, and service/build fixes #24
odilitime wants to merge 1 commit into v2-spartan-dev from
Conversation
Merge resolution:
- Resolved conflicts in package.json, src/index.ts, int_accounts, srv_dataprovider, srv_strategy, multiwallet (swap/swap_all/xfer, index), strategy_llm.
Core compatibility (core-compat.ts):
- getInitPromise(runtime), getServiceLoadPromise(runtime, name) for older/newer @elizaos/core.
- hasGenerateObject(runtime), generateObject(runtime, ...args) for runtimes with/without generateObject.
- setCache(runtime, key, value, ttl?) for optional TTL; logCompatError/logCompatDebug for pino vs legacy loggers.
- Re-export HandlerOptions from types.ts for single import.
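A minimal sketch of what these compat helpers could look like; the `RuntimeLike` shape and the fallback behavior are assumptions about the real `@elizaos/core` surface, not a copy of it:

```typescript
// Hypothetical runtime shape; the real @elizaos/core runtime has many more members.
interface RuntimeLike {
  initPromise?: Promise<void>;
  generateObject?: (...args: unknown[]) => Promise<unknown>;
}

// Older cores have no initPromise: resolve immediately so callers can always await.
function getInitPromise(runtime: RuntimeLike): Promise<void> {
  return runtime.initPromise ?? Promise.resolve();
}

// Feature-detect generateObject so call sites can branch on support.
function hasGenerateObject(runtime: RuntimeLike): boolean {
  return typeof runtime.generateObject === 'function';
}
```

The point of the layer is that call sites never touch `runtime.initPromise` or `runtime.generateObject` directly, so a missing member on an older core degrades gracefully instead of throwing.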
Service names:
- Use INTEL_DATAPROVIDER and INTEL_CHAIN everywhere (replaced TRADER_DATAPROVIDER/TRADER_CHAIN) to match degenIntel plugin registration.
- getServiceLoadPromise() call sites use string names matching plugin serviceType (discord, chain_solana, SPARTAN_NEWS_SERVICE, etc.).
Build and types:
- HandlerOptions in src/types.ts; actions use it instead of @elizaos/core where core lacks it.
- Logger calls use pino-style logger.error({ err }, 'msg') where required.
- tsk_discord_post: generateObject via core-compat; PostIdeaContent/PostContent types; discord service typed inline.
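The pino-vs-legacy logging difference mentioned above comes down to argument order. A hedged sketch — the `pinoStyle` flag stands in for whatever detection the real `logCompatError` uses, which is not shown in this PR:

```typescript
type ErrorLogger = (...args: unknown[]) => void;

// pino expects (mergeObject, message); legacy loggers expect (message, err).
// How the real logCompatError detects the logger kind is an open question here.
function logCompatError(log: ErrorLogger, err: unknown, msg: string, pinoStyle: boolean): void {
  if (pinoStyle) {
    log({ err }, msg); // pino merges { err } into the log record
  } else {
    log(msg, err); // legacy loggers take the message first
  }
}
```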
New/added:
- src/core-compat.ts, src/types.ts.
- act_wallet_assess_positions.ts (DATA_PROVIDER_SERVICE = INTEL_DATAPROVIDER).
Co-authored-by: Cursor <cursoragent@cursor.com>
Pull request overview
This PR implements a comprehensive set of staging fixes, introduces a core compatibility layer for handling different @elizaos/core versions, and implements significant improvements to the trading strategy system including position tracking, price divergence handling, and wallet management.
Changes:
- Adds core-compat.ts layer to handle API differences between older and newer @elizaos/core versions (initPromise, getServiceLoadPromise, generateObject, pino-style logging)
- Refactors service names from TRADER_* to INTEL_* throughout the codebase for consistency with plugin registration
- Enhances LLM trading strategy with hybrid portfolio context (OPEN/CLOSED/REJECTED states), price divergence rescaling, SOL reserve calculations, and watch-only wallet filtering
- Improves wallet operations (transfer, sweep, swap) with better rent-exemption handling, ATA closing for rent recovery, batched swap execution, and position sync via notifyWalletWrite
- Adds act_wallet_assess_positions action to manually trigger position sync
Reviewed changes
Copilot reviewed 53 out of 53 changed files in this pull request and generated 24 comments.
| File | Description |
|---|---|
| src/core-compat.ts | New compatibility layer with helpers for cross-version core API usage |
| src/types.ts | New HandlerOptions type definition for action handlers |
| src/tasks/tsk_discord_post.ts | Discord post generation using compat layer and typed interfaces |
| src/plugins/trading/strategies/strategy_llm.ts | Major enhancements: position tracking, price divergence handling, SOL reserves, wallet filtering |
| src/plugins/trading/services/srv_positions.ts | Uses getServiceLoadPromise compat helper |
| src/plugins/trading/actions/*.ts | HandlerOptions imports and service name updates |
| src/plugins/multiwallet/actions/act_wallet_xfer.ts | Improved SOL transfer with rent-exemption logic and ATA closing |
| src/plugins/multiwallet/actions/act_wallet_sweep.ts | Enhanced sweep with ATA closing for rent recovery |
| src/plugins/multiwallet/actions/act_wallet_swap_all.ts | Parallel swap building with batched execution |
| src/plugins/multiwallet/actions/act_wallet_assess_positions.ts | New action for manual position sync |
| src/plugins/degenIntel/services/*.ts | Service name changes, compat layer usage, position validation improvements |
| src/plugins/autonomous-trader/utils.ts | New notifyWalletWrite function, accountMockComponent fix |
| src/plugins/autonomous-trader/actions/act_holder_verify.ts | Account upsert logic (contains bug - see comments) |
| package.json | Dependency updates to workspace:* and LOG_FILE=1 in start script |
| src/init.ts, src/index.ts | Compat layer integration |
```diff
 if (indexLookup[token.address] !== undefined) {
-  console.warn('TOKEN', token.address, 'already mapped to', indexLookup[token.address])
+  stratWarn(`TOKEN ${token.address} already mapped to index`, indexLookup[token.address])
   continue
 }
```
At line 268, the `continue` inside the duplicate-address check means tokens already present in `indexLookup` are skipped entirely and never added to `indexToToken`. This is likely intentional deduplication, but the "already mapped" warning describes a condition that is then silently skipped. Verify that skipping duplicate tokens is the intended behavior and that downstream code handles missing indexes correctly.
```diff
@@ -300,66 +458,125 @@ async function generateBuyPrompt(runtime) {
       if (pubkeys.length) {
         pubkey = pubkeys[0]
       } else {
-        console.log('no pubkeys?', pubkeys)
+        stratWarn('no pubkeys in pick memory', c.text?.substring(0, 60))
       }

       const index = indexLookup[pubkey]
       const date = new Date(m.createdAt * 1000)
       //console.log(date, 'pubkey', pubkey, 'index', index)

-      // get token #5 from thought
-      // they don't always mention it, we just asked the llm nicely not to
-      // it's expensive if it gets into the pick
-      const cleanThought = c.thought.replace(/Token\s+(?:#|at index\s+)\d+/, '');
-
-      // get index
-      if (index) {
-        //at ?
-        // m.createdAt is in secs I think
-        picksStr += "trending token #" + index + " at " + m.createdAt + " because of " + cleanThought
+      // Strip token index references the LLM may have embedded in its own reasoning
+      const cleanThought = c.thought?.replace(/Token\s+(?:#|at index\s+)\d+/, '') || ''
+      const chainStr = c.metadata?.chain || 'solana'
+
+      // Check if this pick's token has an open position (from listPositions)
+      const posAgg = pubkey ? positionsByToken.get(pubkey) : undefined
+
+      // Check if this pick ever passed validation (has a matching "positions" memory)
+      // Only count signals that were created around the same time as this pick (within 5min)
+      const signalTimestamps = pubkey ? signalledTokenTimestamps.get(pubkey) : undefined
+      const hadSignal = signalTimestamps?.some(st => Math.abs(st - m.createdAt) < 300) ?? false
+
+      if (posAgg) {
+        // OPEN: this pick has a live open position
+        coveredTokens.add(pubkey)
+
+        const avgEntry = posAgg.entryPrices.length
+          ? posAgg.entryPrices.reduce((a, b) => a + b, 0) / posAgg.entryPrices.length
+          : 0
+        const curPrice = currentPrices[pubkey] || 0
+        const pnlPct = avgEntry > 0 ? ((curPrice - avgEntry) / avgEntry * 100) : 0
+        const avgTimestamp = posAgg.timestamps.length
+          ? posAgg.timestamps.reduce((a, b) => a + b, 0) / posAgg.timestamps.length
+          : Date.now()
+        const holdHours = (Date.now() - avgTimestamp) / (1000 * 60 * 60)
+
+        const tokenLabel = index
+          ? `trending token #${index} on ${chainStr}`
+          : `a non-trending token on ${chainStr}`
+
+        picksStr += `OPEN: ${tokenLabel}, entry $${avgEntry.toFixed(8)}, now $${curPrice.toFixed(8)} (${pnlPct >= 0 ? '+' : ''}${pnlPct.toFixed(1)}%), $${posAgg.totalUsdInvested.toFixed(2)} invested across ${posAgg.wallets.size} wallet(s), held ${holdHours.toFixed(1)}h. Picked because: ${cleanThought}`
+      } else if (hadSignal) {
+        // CLOSED: pick passed validation, was traded, position no longer open
+        const tokenLabel = index
+          ? `trending token #${index} on ${chainStr}`
+          : `a token no longer trending on ${chainStr}`
+
+        picksStr += `CLOSED: ${tokenLabel}, picked at ${m.createdAt} because: ${cleanThought}`
       } else {
-        // if not in index, what do we say, "a non trending token"
-        picksStr += "a previous trending token no longer on the list at " + m.createdAt + ", was picked because of " + cleanThought
+        // REJECTED: pick never passed validation (price divergence, rug check, liquidity, etc.)
+        const tokenLabel = index
+          ? `trending token #${index} on ${chainStr}`
+          : `a token no longer trending on ${chainStr}`
+
+        picksStr += `REJECTED (never bought - failed validation): ${tokenLabel}, picked at ${m.createdAt} because: ${cleanThought}`
       }

       // knowing the 24h vol, mcap and liquid would be good here
-      if (c.metadata.increaseReason) {
-        picksStr += ". I would only advise choosing this token again if " + c.metadata.increaseReason
+      if (c.metadata?.increaseReason) {
+        picksStr += '. I would only advise choosing this token again if ' + c.metadata.increaseReason
       }
-      picksStr += ".\n"
+      picksStr += '.\n'
     }
     //console.log('picksStr', picksStr)

+    // Uncovered positions: open positions that don't have a corresponding pick memory
+    for (const [addr, agg] of positionsByToken) {
+      if (coveredTokens.has(addr)) continue
```
The variable 'coveredTokens' is created to track which position tokens have corresponding pick memories, but after the position loop at line 523, there's an uncovered positions section starting at line 525. The comment at line 525 says "Uncovered positions: open positions that don't have a corresponding pick memory", but this section iterates through positionsByToken and only processes tokens NOT in coveredTokens. This is correct logic, but the variable naming could be clearer. Consider renaming 'coveredTokens' to 'tokensWithPickMemory' or adding a comment explaining that tokens in this set have been handled in the pick memory loop.
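The OPEN/CLOSED/REJECTED branching in the diff above can be condensed into a small pure function. Names and types here are illustrative, not the real ones from strategy_llm.ts:

```typescript
// Minimal aggregate; the real positionsByToken entries carry prices, wallets, etc.
interface PositionAgg { totalUsdInvested: number }

function classifyPick(
  token: string,
  createdAtSec: number,
  openPositions: Map<string, PositionAgg>,
  signalTimestamps: Map<string, number[]>,
): 'OPEN' | 'CLOSED' | 'REJECTED' {
  // OPEN: the pick's token has a live open position right now
  if (openPositions.has(token)) return 'OPEN';
  // CLOSED: a validation signal landed within 5 minutes of the pick,
  // so it was traded at some point but the position is gone
  const stamps = signalTimestamps.get(token) ?? [];
  const hadSignal = stamps.some((st) => Math.abs(st - createdAtSec) < 300);
  // REJECTED: no signal ever matched, so the pick never passed validation
  return hadSignal ? 'CLOSED' : 'REJECTED';
}
```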
```js
if (priceDivergencePct > 0.10 && estCurPrice > 0) {
  const scaleFactor = token.priceUsd / estCurPrice
  const origLow = lowPrice
  const origHigh = highPrice

  // Rescale absolute exit prices
  if (lowPrice > 0) {
    lowPrice = lowPrice * scaleFactor
  } else {
    // Negative means delta -- scale the delta too
    lowPrice = lowPrice * scaleFactor
  }
  highPrice = highPrice * scaleFactor

  console.log('price moved', (priceDivergencePct * 100).toFixed(1) + '%, rescaling exits by', scaleFactor.toFixed(2) + 'x:',
    'low', origLow, '->', lowPrice.toFixed(6),
    'high', origHigh, '->', highPrice.toFixed(6))

  // Update response so downstream position records use the rescaled values
  response.exit_price_drop_threshold = lowPrice
  response.exit_target_price = highPrice
  response.current_price = token.priceUsd
}
```
The price divergence rescaling logic (lines 915-937) rescales both lowPrice and highPrice exit targets when the live price has moved >10% from what the LLM saw. However, the logic handles negative lowPrice values (which represent deltas) by scaling the delta: lowPrice = lowPrice * scaleFactor. This is mathematically correct if lowPrice represents an absolute delta from entry price, but it's unclear from context whether lowPrice is meant to be (a) an absolute price, (b) a delta from current price, or (c) a delta from entry price. The code comment says "Negative means delta -- scale the delta too" but doesn't clarify what the delta is relative to. Verify that the rescaling logic correctly handles all three cases and document the expected format of exit_price_drop_threshold.
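Condensed into a pure function, the rescale step looks roughly like this. The wrapper and parameter names are illustrative; the field semantics follow the diff, not verified source:

```typescript
// Rescale LLM-chosen exit levels when the live price has diverged >10%
// from the price the LLM was shown. Linear scaling works the same for
// absolute prices and for (negative) deltas, which is why both branches
// in the original diff are identical.
function rescaleExits(
  livePrice: number,
  llmSeenPrice: number,
  low: number,
  high: number,
): { low: number; high: number } {
  if (llmSeenPrice <= 0) return { low, high };
  const divergence = Math.abs(livePrice - llmSeenPrice) / llmSeenPrice;
  if (divergence <= 0.10) return { low, high }; // within tolerance, keep as-is
  const scale = livePrice / llmSeenPrice;
  return { low: low * scale, high: high * scale };
}
```

The reviewer's question still stands: scaling is only correct if the delta form of `exit_price_drop_threshold` is relative to a price that also scales with the market (entry or current), not a fixed dollar amount.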
```diff
@@ -945,7 +1204,14 @@ async function generateBuySignal(runtime, strategyService, hndl, retries = gene
   console.log('not enough SOL balance in', w.publicKey, 'bal', bal)
   continue
 }
-const amt = await scaleAmount(w, bal, response) // uiAmount
+const openCount = openPositionCountByWallet.get(w.publicKey) ?? 0;
+const requiredReserve = (openCount + 1) * SOL_RESERVE_PER_EXIT;
+const availableForTrading = bal - requiredReserve;
+if (availableForTrading <= 0) {
+  console.log('not enough SOL after reserve in', w.publicKey, 'bal', bal, 'reserve', requiredReserve.toFixed(4), 'openPositions', openCount);
+  continue;
+}
+const amt = await scaleAmount(w, availableForTrading, response) // uiAmount
```
The SOL reserve calculation uses SOL_RESERVE_PER_EXIT = 0.003 SOL per open position. This reserves 0.003 SOL for each position that exists, including the new position being opened (+1). However, if a wallet has 10 open positions, this reserves 0.033 SOL total (11 * 0.003), which may be excessive since not all positions will be closed simultaneously. Consider whether the reserve should scale linearly with all positions or use a smaller per-position reserve for positions beyond the first few. The current logic may prevent trading on wallets with many small positions even if they have adequate SOL for new trades.
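The reserve arithmetic under discussion is small enough to isolate. The `SOL_RESERVE_PER_EXIT = 0.003` value is the one the review quotes; the function wrapper itself is hypothetical:

```typescript
const SOL_RESERVE_PER_EXIT = 0.003; // SOL reserved per exit, per the review

// Reserve covers every existing open position plus the one being opened now,
// so a wallet with 10 open positions reserves 11 * 0.003 = 0.033 SOL.
function availableForTrading(balanceSol: number, openPositionCount: number): number {
  const requiredReserve = (openPositionCount + 1) * SOL_RESERVE_PER_EXIT;
  return balanceSol - requiredReserve;
}
```

This makes the reviewer's concern concrete: the reserve grows linearly with position count, so wallets holding many small positions can be locked out of new trades even with a healthy balance.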
```js
let lamports = Math.floor(Number(content.amount) * 1e9);

// Check balance and cap to avoid rent-exemption failure
const balance = await connection.getBalance(senderKeypair.publicKey);
const txFee = 5000; // ~5000 lamports per signature
const rentExemptMin = 890880; // minimum balance for rent exemption

// Two valid outcomes on Solana:
// 1. Partial send: remaining balance must stay >= rentExemptMin
// 2. Full drain: send everything (balance - txFee), account closes to 0 lamports
// An amount that leaves a non-zero balance below rent-exempt is invalid.
const maxPartialSend = balance - txFee - rentExemptMin;
const maxFullDrain = balance - txFee;

if (maxFullDrain <= 0) {
  runtime.logger.info(`Insufficient SOL balance. Have ${balance / 1e9} SOL, not enough to cover tx fee.`);
  callback?.(takeItPrivate(runtime, message, `Insufficient SOL balance to send. Your wallet has ${(balance / 1e9).toFixed(6)} SOL which doesn't cover the transaction fee.`));
  return { success: false, text: 'Insufficient SOL balance', error: 'INSUFFICIENT_BALANCE' };
}

if (lamports > maxPartialSend) {
  // Can't do a partial send at this amount -- drain the full account instead
  console.log(`MULTIWALLET_TRANSFER full drain: requested ${lamports}, partial max ${maxPartialSend}, draining ${maxFullDrain} (balance: ${balance}, fee: ${txFee})`);
  lamports = maxFullDrain;
}
```
The SOL transfer logic calculates maxPartialSend and maxFullDrain to handle rent exemption, but the txFee is hardcoded to 5000 lamports (line 386). Solana transaction fees can vary, especially with priority fees or complex transactions. This may cause transfers to fail if the actual fee exceeds 5000 lamports. Consider fetching the recent fee estimate from the connection or adding a buffer (e.g., 10000 lamports) to account for variability. Alternatively, document that this assumes a base fee without priority fees.
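The clamp decision can be expressed as a pure function, which also makes the reviewer's fee concern easy to test in isolation. The defaults mirror the hardcoded values from the diff; the function itself is a sketch:

```typescript
// Decide how many lamports to actually send:
// - null: balance cannot even cover the fee
// - requested amount: a partial send that leaves >= rentExemptMin behind
// - full drain: requested amount would strand a sub-rent-exempt remainder
function clampTransferLamports(
  requested: number,
  balance: number,
  txFee = 5000,        // hardcoded base fee from the diff; priority fees not modeled
  rentExemptMin = 890880,
): number | null {
  const maxFullDrain = balance - txFee;
  if (maxFullDrain <= 0) return null;
  const maxPartialSend = balance - txFee - rentExemptMin;
  return requested > maxPartialSend ? maxFullDrain : requested;
}
```

Raising `txFee` (or fetching a live estimate) changes only the defaults here, which is one way to address the hardcoded-fee concern without touching the branching logic.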
```js
const mw = ud.mw
const kp = mw.keypairs[p.chain]
if (!kp?.privateKey || (typeof kp.privateKey === 'string' && kp.privateKey.trim() === '')) {
  console.warn('closePosition skipped - wallet', publicKey, 'has no private key (watch-only or placeholder)');
```
The closePosition function now checks if the wallet has a private key before attempting to close (lines 305-308), which prevents errors for watch-only wallets. However, it returns false without logging which position or wallet was skipped. Consider adding the position ID to the console.warn message to help track which positions are being skipped: console.warn('closePosition skipped - wallet', publicKey, 'position', p.id, 'has no private key...').
```diff
-console.warn('closePosition skipped - wallet', publicKey, 'has no private key (watch-only or placeholder)');
+console.warn('closePosition skipped - wallet', publicKey, 'position', p.id, 'has no private key (watch-only or placeholder)');
```
```json
"@elizaos/plugin-twitter": "workspace:*",
"@elizaos/plugin-video": "workspace:*",
"@elizaos/plugin-video-understanding": "^1.0.2",
"@elizaos/plugin-wrapped": "workspace:*",
```
The package.json changes update plugin-coinmarketcap and plugin-tee to use "workspace:*" (lines 40, 56), which is correct for a monorepo setup. However, line 62 adds "@elizaos/plugin-wrapped": "workspace:*", and line 122 removes "@elizaos/plugin-video-understanding" from trustedDependencies. Verify that plugin-wrapped is a valid workspace package and that removing plugin-video-understanding from trustedDependencies doesn't break the build (it may still be a direct dependency on line 61 but no longer needs special trust handling).
```diff
-"@elizaos/plugin-wrapped": "workspace:*",
+"@elizaos/plugin-wrapped": "latest",
```
```js
getInitPromise(runtime).then(async () => {
  await registerTasks(runtime);
  runtime.logger.log('intel init - tasks registered');
});

const plugins = runtime.plugins.map((p) => p.name);
// twitter for sentiment
if (plugins.indexOf('twitter') !== -1) {
  runtime.registerProvider(sentimentProvider);
}
```
The init function removes the taskReadyPromise wrapper and directly calls getInitPromise(runtime).then(...) on line 45. However, the plugins check and registerProvider call (lines 50-54) are now outside the .then() block, meaning they execute immediately without waiting for init. This is probably intentional (registering the sentiment provider doesn't need to wait for tasks), but verify that sentimentProvider registration doesn't depend on any initialization that happens in getInitPromise.
```js
// Use upsert: creates the account component if it doesn't exist yet,
// or updates if it does (component.id present).
// Service method signature: interface_account_upsert(message, componentData)
await intAcountService.interface_account_upsert(message, component)
```
The interface_account_upsert function is being called with (message, component), but based on the interface_account_upsert implementation in int_accounts.ts (lines 252-263), it expects the second parameter to be an account object. The accountMockComponent() function wraps the account data in a component structure { id, entityId, data }, but the upsert function checks for account.id and account.entityId at the top level, not within a nested data property. This may cause the account update logic to incorrectly treat this as a create operation rather than update. The correct call should be: await intAcountService.interface_account_upsert(message, componentData) without wrapping in accountMockComponent, or the accountMockComponent function should be adjusted to match the expected structure.
```diff
-const entityId = account.entityId
+const entityId = account.entityId || account.accountEntityId
 delete account.componentId
 delete account.entityId
```
The accountMockComponent function now retrieves entityId from two possible sources (account.entityId OR account.accountEntityId), but only deletes account.entityId. This leaves account.accountEntityId in the data when it should also be removed to avoid polluting the component data. Add: delete account.accountEntityId after line 508.
```diff
 delete account.entityId
+delete account.accountEntityId
```
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```diff
 private async setCachedData(key: string, data: any, ttlSeconds: number): Promise<void> {
   try {
-    await this.runtime.setCache(key, data, ttlSeconds);
+    await this.runtime.setCache(key, data);
```
Cache TTL parameter silently dropped in setter
Medium Severity
The setCachedData method accepts a ttlSeconds parameter but no longer passes it to runtime.setCache. The old call was await this.runtime.setCache(key, data, ttlSeconds) and the new one is await this.runtime.setCache(key, data). All callers pass specific TTL values (60s for price data, 300s for historical/analytics), but these are now silently ignored. This means all Birdeye cache entries effectively never expire, leading to stale price and market data being served indefinitely.
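A tiny TTL cache with an injectable clock shows the behavioral difference between passing and dropping the TTL. This is an illustration of the bug's effect, not the runtime's actual cache implementation:

```typescript
// Minimal in-memory cache: entries expire only if a TTL was provided.
// The clock is injectable so expiry can be tested without real waiting.
class TtlCache {
  private store = new Map<string, { data: unknown; expiresAt: number }>();
  constructor(private now: () => number = Date.now) {}

  set(key: string, data: unknown, ttlSeconds?: number): void {
    const expiresAt = ttlSeconds ? this.now() + ttlSeconds * 1000 : Infinity;
    this.store.set(key, { data, expiresAt });
  }

  get(key: string): unknown | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined; // expired; a dropped TTL would keep serving this forever
    }
    return entry.data;
  }
}
```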
```js
// can we link this
await this.walletIntService.notifyWallet(pubKey, result.signature)

await this.checkPositions({ walletAddress: String(kp.publicKey) })
```
Recursive checkPositions call causes quadratic API overhead
Medium Severity
The newly added await this.checkPositions({ walletAddress: ... }) inside closePosition creates recursion. checkPositions calls closePosition for price-triggered exits, which now re-enters checkPositions, potentially closing another position, and so on. For a wallet with N closeable positions, this produces O(N²) API calls (market data, balances, token info per level), significantly degrading performance and potentially hitting rate limits.
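One way to break the `checkPositions` → `closePosition` → `checkPositions` cycle is a per-wallet reentrancy guard. Method names mirror the review; the class is a sketch, not the real service:

```typescript
// Guards re-entrant position checks per wallet: an inner call that re-enters
// for the same wallet becomes a no-op instead of recursing.
class PositionChecker {
  private checking = new Set<string>();
  public closeCalls = 0;

  async checkPositions(wallet: string, closeable: number): Promise<void> {
    if (this.checking.has(wallet)) return; // already inside a check for this wallet
    this.checking.add(wallet);
    try {
      for (let i = 0; i < closeable; i++) await this.closePosition(wallet);
    } finally {
      this.checking.delete(wallet);
    }
  }

  private async closePosition(wallet: string): Promise<void> {
    this.closeCalls++;
    // The diff re-enters here; the guard turns the inner call into a no-op,
    // so N closeable positions produce N closes instead of O(N^2) work.
    await this.checkPositions(wallet, 0);
  }
}
```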
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/plugins/degenIntel/routes/charting.routes.ts (1)

38-49: ⚠️ Potential issue | 🔴 Critical

`getOHLCVData` doesn't exist on `BirdeyeProvider` — endpoint always returns empty data.

The `BirdeyeProvider` class has `getHistoricalData()`, not `getOHLCVData()`. The `as any` cast and optional chaining mask this, silently returning `[]` every time.

Proposed fix: use getHistoricalData

```diff
 // Get OHLCV data from Birdeye
-const historicalData = await (birdeyeProvider as any).getOHLCVData?.(token_address, 'solana', interval) ?? [];
+const historicalData = await birdeyeProvider.getHistoricalData(token_address, 'solana', interval);
```

src/plugins/analytics/providers/birdeyeProvider.ts (1)

369-375: ⚠️ Potential issue | 🟠 Major

Pass the `ttlSeconds` parameter to `runtime.setCache()`.

The method accepts `ttlSeconds` but doesn't pass it to the cache call. Callers specify 60s for price data and 300s for market data, but all entries now use the runtime's default TTL. Fix by passing it as the third argument, as CodexProvider does in the same directory.

Proposed fix

```diff
 private async setCachedData(key: string, data: any, ttlSeconds: number): Promise<void> {
   try {
-    await this.runtime.setCache(key, data);
+    await this.runtime.setCache(key, data, ttlSeconds);
   } catch (error) {
     console.error('Failed to cache data:', error);
   }
 }
```

src/plugins/multiwallet/actions/act_wallet_sweep.ts (1)

562-567: ⚠️ Potential issue | 🟠 Major

`closedAccounts` is 0 when calculating `rentRecoveryFromClosures` at line 563, making the early estimate useless.

The variable is initialized to 0 at line 560 and only incremented at lines 638 and 652 (after token processing). This means the `projectedBalance` at line 564, which is used to decide whether to create ATAs at line 596, incorrectly estimates available SOL.

While a corrected calculation happens later at lines 670–671 for the final transfer, the intermediate affordability decisions use the wrong balance estimate, potentially skipping tokens that could have been transferred or vice versa.

Remove the dead calculation at lines 563–564, or restructure to count closable accounts first.
🧹 Nitpick comments (13)
src/plugins/degenIntel/tasks.ts (1)
145-224: Consider removing or using a feature flag instead of block-commenting.

Disabling INTEL_SYNC_RAW_TWEETS and INTEL_PARSE_TWEETS via block comments creates dead code (the `TwitterParser` imports at lines 6-7 are now unused) and leaves a silent no-op when the Twitter plugin is detected. The `if (plugins.indexOf('twitter') !== -1)` branch now executes an empty block with no logging, which could confuse future debugging.

Options:
- Remove the commented code entirely and the unused imports
- Use a feature flag (e.g., `INTEL_TASKS_TWITTER`) similar to `INTEL_TASKS_WALLET` on line 110
- At minimum, add a log statement inside the if-block explaining why tasks aren't registered

💡 Option 2: Feature flag approach

```diff
+  const needTwitterTasks = parseBooleanFromText(runtime.getSetting('INTEL_TASKS_TWITTER') || 'false');
   // Only create the Twitter sync task if the Twitter service exists
   const plugins = runtime.plugins.map((p) => p.name);
   //const twitterService = runtime.getService('twitter');
-  if (plugins.indexOf('twitter') !== -1) {
-    /*
-    runtime.registerTaskWorker({
+  if (plugins.indexOf('twitter') !== -1 && needTwitterTasks) {
+    runtime.registerTaskWorker({
       name: 'INTEL_SYNC_RAW_TWEETS',
       // ... rest of implementation
-    */
+  } else if (plugins.indexOf('twitter') !== -1) {
+    runtime.logger.debug('Twitter plugin found but INTEL_TASKS_TWITTER disabled, skipping Twitter tasks');
   } else {
```

src/plugins/autonomous-trader/utils.ts (1)
506-517: Defensive fallback for entityId.

Handles both `entityId` and `accountEntityId` properties. Note: `delete` mutates the original account object - this may be intentional but could cause issues if the caller reuses the object.

Consider avoiding mutation if original object needs preservation

```diff
 export function accountMockComponent(account: any): any {
   const id = account.componentId
   const entityId = account.entityId || account.accountEntityId
-  delete account.componentId
-  delete account.entityId
+  const { componentId, entityId: _, accountEntityId, ...data } = account
   return {
     id,
     entityId,
-    data: account
+    data
   }
 }
```

src/plugins/degenIntel/services/srv_liquiditypooling.ts (1)
73-74: Consider logging the full error object for stack traces.

The structured logging format is correct, but extracting only `error.message` loses stack trace information. For better debugging, consider:

```diff
-      logger.error({ err: error instanceof Error ? error.message : String(error) }, 'Error starting trading LP service:');
+      logger.error({ err: error }, 'Error starting trading LP service:');
```

Same applies to line 90.

src/plugins/autonomous-trader/services/spartanNewsService.ts (1)
1-1: Consider removing `@ts-nocheck`.

This directive suppresses all TypeScript errors in the file, which may hide legitimate type issues introduced by the core-compat changes or future modifications.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/autonomous-trader/services/spartanNewsService.ts` at line 1, Remove the top-level "// `@ts-nocheck`" and reintroduce proper TypeScript typing for the symbols in this module (e.g., exported classes/functions from spartanNewsService such as any exported service class or functions that fetch/parse news). Replace the blanket suppression by adding explicit parameter/return types and interfaces for objects used by the service, fix any resulting compiler errors reported by tsc (or your IDE), and ensure imports/exports match their declared types; run the TypeScript checker and address each error rather than re-adding a suppression.src/plugins/degenIntel/services/srv_chain.ts (2)
1308-1308: Same issue: preserve full error object.

Proposed fix

```diff
-      logger.error({ err: error instanceof Error ? error.message : String(error) }, 'Error stopping trading service:');
+      logger.error({ err: error }, 'Error stopping trading service:');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/degenIntel/services/srv_chain.ts` at line 1308, The current error logging in the stop routine only records error.message (logger.error({ err: error instanceof Error ? error.message : String(error) }, 'Error stopping trading service:')), losing stack and metadata; update the logging in the relevant stop/teardown function (where 'Error stopping trading service:' is logged) to pass the full error object (or the Error instance when available) into logger.error so the stack and other properties are preserved (e.g., supply err: error or err: error instanceof Error ? error : new Error(String(error))).
1291-1291: Preserve full error object for stack traces.

Passing `error.message` loses the stack trace. Pass the error object directly for better debugging.

Proposed fix

```diff
-      logger.error({ err: error instanceof Error ? error.message : String(error) }, 'Error starting trading chain service:');
+      logger.error({ err: error }, 'Error starting trading chain service:');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/degenIntel/services/srv_chain.ts` at line 1291, The current logger.error call in the trading chain startup only logs error.message which drops the stack; change the call to pass the full error object so the stack is preserved (i.e., log the error instance instead of error.message) — update the logger.error invocation around the "Error starting trading chain service:" message (the logger.error call in srv_chain.ts) to include the error object (or error under a key like err) rather than error.message.src/plugins/multiwallet/actions/act_wallet_assess_positions.ts (2)
80-84: Sequential wallet processing may be slow.

`checkPositions` is called sequentially for each wallet. Consider parallelizing if independent.

Parallel execution

```diff
-    for (const walletAddress of solanaWallets) {
-      await dataProvider.checkPositions({ walletAddress });
-    }
+    await Promise.all(
+      solanaWallets.map(walletAddress => dataProvider.checkPositions({ walletAddress }))
+    );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/multiwallet/actions/act_wallet_assess_positions.ts` around lines 80 - 84, The loop calling dataProvider.checkPositions for each walletAddress (solanaWallets) runs sequentially and can be slow; change it to run checks in parallel by mapping solanaWallets to an array of promises (e.g., solanaWallets.map(addr => dataProvider.checkPositions({ walletAddress: addr }))) and awaiting Promise.all on that array, or use a concurrency limiter (like p-limit) if you must bound parallelism; ensure you wrap each promise in a try/catch or handle Promise.allSettled results so failures for one wallet don’t abort all processing and preserve appropriate logging (refer to checkPositions, solanaWallets, and dataProvider).
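The prompt above suggests `Promise.allSettled` so one failing wallet does not abort the rest. A minimal sketch of that variant (`checkPositions` here is an injected stand-in, not the real data-provider method):

```typescript
// Run position checks for all wallets in parallel and tally outcomes.
// allSettled never rejects, so a single RPC failure cannot abort the batch.
async function assessAll(
  wallets: string[],
  checkPositions: (args: { walletAddress: string }) => Promise<string>,
) {
  const results = await Promise.allSettled(
    wallets.map((walletAddress) => checkPositions({ walletAddress })),
  );
  const ok = results.filter((r) => r.status === 'fulfilled').length;
  const failed = results.length - ok;
  return { ok, failed };
}
```

For large wallet sets a concurrency limiter (e.g. p-limit) bounds parallel RPC load; the unbounded version is fine for a handful of wallets.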
31-37: Handler signature uses inline type instead of `HandlerOptions`.

Other actions in this PR import `HandlerOptions` from `'../../../types'`. Consider using it here for consistency.

Proposed fix

```diff
+import type { HandlerOptions } from '../../../types';
+
 // ... in handler signature:
   handler: async (
     runtime: IAgentRuntime,
     message: Memory,
     _state: State,
-    _options: { [key: string]: unknown },
+    _options?: HandlerOptions,
     callback?: HandlerCallback
   ): Promise<ActionResult | void | undefined> => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/multiwallet/actions/act_wallet_assess_positions.ts` around lines 31 - 37, The handler currently types its options parameter inline as _options: { [key: string]: unknown }; update the signature to use the shared HandlerOptions type (import HandlerOptions from '../../../types') and replace the parameter type with _options: HandlerOptions so the function (handler: async (runtime: IAgentRuntime, message: Memory, _state: State, _options: HandlerOptions, callback?: HandlerCallback) => Promise<ActionResult | void | undefined>) matches other actions and removes the inline ad-hoc type; ensure you add the HandlerOptions import and adjust any references if needed.src/plugins/multiwallet/actions/act_wallet_sweep.ts (1)
803-803: Same issue: preserve full error object for stack traces.

Proposed fix

```diff
-    logger.error({ err: error instanceof Error ? error.message : String(error) }, 'Error during sweep');
+    logger.error({ err: error }, 'Error during sweep');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/multiwallet/actions/act_wallet_sweep.ts` at line 803, The current logger call in act_wallet_sweep.ts is logging only error.message (losing stack/metadata); update the logger.error invocation in the sweep action to pass the full error object (e.g., use { err: error } or the logger's error signature that accepts the Error directly) instead of { err: error instanceof Error ? error.message : String(error) } so stack traces and full error properties are preserved.src/plugins/degenIntel/index.ts (1)
45-48: Consider adding error handling for task registration.

The `.then()` pattern is fire-and-forget. If `registerTasks` throws, the error is silently swallowed. Consider adding `.catch()` for observability.

Add error handler

```diff
     getInitPromise(runtime).then(async () => {
       await registerTasks(runtime);
       runtime.logger.log('intel init - tasks registered');
-    });
+    }).catch((err) => {
+      runtime.logger.error({ err }, 'intel init - failed to register tasks');
+    });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/degenIntel/index.ts` around lines 45 - 48, The current fire-and-forget call to getInitPromise(...).then(...) swallows errors from registerTasks; wrap the async work in error handling so failures are observable — either add a .catch(...) to the promise chain or convert the .then callback to an async IIFE that uses try/catch and calls runtime.logger.error(...) with the thrown error and context (e.g., "intel init - registerTasks failed") referencing getInitPromise and registerTasks to ensure task registration errors are logged.src/plugins/multiwallet/actions/act_wallet_xfer.ts (2)
387-387: Hardcoded rent exemption minimum.

`rentExemptMin = 890880` is hardcoded. This value can change with Solana upgrades. Consider querying via `connection.getMinimumBalanceForRentExemption(0)` for accuracy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/multiwallet/actions/act_wallet_xfer.ts` at line 387, The hardcoded rentExemptMin constant should be replaced by querying the RPC for the current value: call and await connection.getMinimumBalanceForRentExemption(0) and assign its result to rentExemptMin (replace the literal 890880). Ensure this call is made inside the async context of the act_wallet_xfer handler (or wherever rentExemptMin is declared), handle potential errors or absence of connection (try/catch or fallback) and keep the variable name rentExemptMin so callers remain unchanged.
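A hedged sketch of that suggestion: resolve the minimum at runtime and fall back to the old constant when the RPC is unavailable. `getMinimumBalanceForRentExemption` is the standard `@solana/web3.js` `Connection` method; the connection is injected here as a narrow interface so the helper stays testable offline:

```typescript
// Historical rent-exempt minimum for a 0-byte account, kept only as a fallback.
const FALLBACK_RENT_EXEMPT_MIN = 890_880; // lamports

// Query the cluster for the current value; on any RPC failure (or when no
// connection is available) use the fallback so callers keep working.
async function resolveRentExemptMin(connection?: {
  getMinimumBalanceForRentExemption(bytes: number): Promise<number>;
}): Promise<number> {
  if (!connection) return FALLBACK_RENT_EXEMPT_MIN;
  try {
    return await connection.getMinimumBalanceForRentExemption(0);
  } catch {
    return FALLBACK_RENT_EXEMPT_MIN;
  }
}
```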
378-378: Unused variable declaration.

`closedATA` at line 378 is declared but never used in the SOL branch. It's re-declared at line 472 for SPL transfers.

Remove unused declaration

```diff
-    let closedATA = false;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/multiwallet/actions/act_wallet_xfer.ts` at line 378, In act_wallet_xfer remove the unused variable declaration closedATA that was added in the SOL transfer branch (it's unused there and re-declared later for SPL transfers); delete the redundant let closedATA = false; from the SOL branch so only the intended declaration in the SPL transfer logic (the re-declaration at the SPL handling code) remains.src/plugins/degenIntel/services/srv_strategy.ts (1)
94-108: Potential indefinite block in `acquireService` fallback.

If `walletIntService` isn't cached and `acquireService` is called, it polls with 1s delays (default 10 retries per the snippet). If the service never loads, this could delay `listActiveStrategies` significantly. Consider adding a timeout or reducing retries for this non-critical path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugins/degenIntel/services/srv_strategy.ts` around lines 94 - 108, The call path in listActiveStrategies awaits this.pIntWallets then calls acquireService to populate walletIntService; because acquireService polls with long defaults it can stall this non-critical path indefinitely—modify the logic around acquireService/walletIntService in listActiveStrategies to use a bounded fallback (e.g., pass a reduced retry/timeout option or wrap acquireService in a Promise.race with a timeout), so if the service cannot be acquired within the short window you skip premium checks and continue; ensure errors still get caught by the existing catch and that walletIntService is only set when acquireService resolves successfully.
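One way to bound the wait, as the prompt suggests: race the acquire against a timer and treat `null` as "service unavailable, skip premium checks" (a generic sketch, not the actual `acquireService` signature):

```typescript
// Race a promise against a deadline. Resolves with the value if it arrives
// in time, otherwise with null so the caller can degrade gracefully instead
// of blocking a non-critical path.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T | null> {
  return Promise.race([
    p,
    new Promise<null>((resolve) => setTimeout(() => resolve(null), ms)),
  ]);
}
```

Usage would look like `const svc = await withTimeout(acquireService(...), 3_000); if (!svc) { /* skip premium checks */ }`. Note the losing timer still fires; clear it via the returned timeout id if that matters in your runtime.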
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/plugins/account/interfaces/int_accounts.ts`:
- Around line 270-272: The code computes entityId from account.entityId ||
account.data?.accountEntityId || account.accountEntityId but never validates it;
add a guard before using entityId (e.g., before calling createComponent) to
check if entityId is undefined/falsey and fail fast: throw or return a clear
error that includes the account identifier/context (use account, account.data,
or accountMockComponent info) so callers know which account is bad; ensure you
reference the existing variables entityId, data, and the createComponent call
when adding this validation.
In `@src/plugins/multiwallet/actions/act_wallet_swap_all.ts`:
- Around line 73-113: Extract the WSOL mint string
('So11111111111111111111111111111111111111112') into a top-level constant and
use it in buildSwapTx instead of a local variable; then update the
token-candidates filtering logic (the filter that currently only excludes native
SOL) to exclude both the native SOL mint and the new WSOL_MINT constant so wSOL
tokens are not considered for swaps and buildSwapTx cannot attempt a self-swap.
In `@src/plugins/multiwallet/actions/act_wallet_xfer.ts`:
- Line 443: Replace the floating-point conversion used to produce adjustedAmount
(currently BigInt(Number(content.amount) * 10 ** decimals)) with a string-safe
BigNumber-based calculation: import BigNumber from 'bignumber.js', construct a
BigNumber from content.amount, multiply by 10**decimals using
BigNumber.pow/multipliedBy to avoid floating math, convert to an integer string
(e.g. via toFixed or integerValue) and finally pass that string into BigInt to
set adjustedAmount; update any references to adjustedAmount in
act_wallet_xfer.ts accordingly.
In `@src/tasks/tsk_discord_post.ts`:
- Around line 356-370: The current postText variable (built from gc.post plus
the IDEA/THOUGHT/GOAL block using rc and gc) leaks internal planning to Discord;
instead keep the public message as just the trimmed gc.post and move the
appended metadata into a separate metadata string (e.g., metadataText) for
logs/memory only. Update the code that constructs postText to use only (gc.post
?? '').trim(), build the detailed metadata block from rc and gc into a new
variable (referencing rc, gc, PostIdeaContent, PostContent), and ensure only
postText is sent to Discord while metadataText is written to logs or stored in
memory. Also validate/truncate postText to respect Discord’s 2000-char limit
before sending.
- Line 4: The calls to generateObject(...) in tsk_discord_post.ts can be
undefined on older cores and may trigger infinite recursion when responseContent
is falsy; wrap each generateObject call (the ones currently used around the
responseContent handling — previously at the blocks that call generateObject for
constructing the reply payload) with a guard using hasGenerateObject() and only
call generateObject(...) when hasGenerateObject() returns true, otherwise fall
back to the existing behavior (skip the generateObject step or return the raw
responseContent); specifically update the generateObject usages referenced by
the variables handling responseContent and the later reply construction to first
check hasGenerateObject() before invoking generateObject().
---
Outside diff comments:
In `@src/plugins/analytics/providers/birdeyeProvider.ts`:
- Around line 369-375: The setCachedData method is not passing ttlSeconds into
the cache call; update setCachedData (in BirdeyeProvider) to forward the
ttlSeconds as the third argument to this.runtime.setCache(key, data, ttlSeconds)
so cached entries use the intended TTL (matching how CodexProvider uses
runtime.setCache). Ensure the change is made inside the private async
setCachedData(key: string, data: any, ttlSeconds: number): Promise<void> method
and retain the existing try/catch error handling.
In `@src/plugins/degenIntel/routes/charting.routes.ts`:
- Around line 38-49: The code is calling a non-existent getOHLCVData on
birdeyeProvider (masked by "as any" and optional chaining) which causes
historicalData to always be []. Replace the call to (birdeyeProvider as
any).getOHLCVData?.(token_address, 'solana', interval) with the correct method
birdeyeProvider.getHistoricalData(token_address, 'solana', interval) (remove the
"as any" cast and optional chaining), then ensure the returned array assigned to
historicalData contains the expected fields (timestamp, open, high, low, close,
volume) before mapping to candles so the OHLCV mapping (candles) uses the actual
provider data.
In `@src/plugins/multiwallet/actions/act_wallet_sweep.ts`:
- Around line 562-567: The early computation using closedAccounts
(rentRecoveryFromClosures, projectedBalance, ataCreationCost) is stale because
closedAccounts is still 0; remove that dead calculation or move it after you
finish counting closable accounts so the ATA affordability check uses the real
recovered-rent amount. Concretely, update the logic in act_wallet_sweep.ts so
that any check that decides to create an ATA (currently using
projectedBalance/ataCreationCost) is performed only after closedAccounts has
been incremented (the increments in the token-processing loop that update
closedAccounts), or recompute rentRecoveryFromClosures on demand using the
finalized closedAccounts value before deciding ATA creation for each token.
---
Nitpick comments:
In `@src/plugins/autonomous-trader/services/spartanNewsService.ts`:
- Line 1: Remove the top-level "// `@ts-nocheck`" and reintroduce proper
TypeScript typing for the symbols in this module (e.g., exported
classes/functions from spartanNewsService such as any exported service class or
functions that fetch/parse news). Replace the blanket suppression by adding
explicit parameter/return types and interfaces for objects used by the service,
fix any resulting compiler errors reported by tsc (or your IDE), and ensure
imports/exports match their declared types; run the TypeScript checker and
address each error rather than re-adding a suppression.
In `@src/plugins/autonomous-trader/utils.ts`:
- Around line 506-517: The accountMockComponent function currently mutates the
input by using delete on account.componentId and account.entityId which can
break callers that reuse the object; change the implementation to treat the
input as immutable: create a shallow copy of account (e.g., newAccount = {
...account }), extract id and entityId from the original fields
(account.componentId, account.entityId || account.accountEntityId), and return
the object with data: newAccount without calling delete, ensuring
accountMockComponent preserves the original input while still removing those
fields from the returned data object.
In `@src/plugins/degenIntel/index.ts`:
- Around line 45-48: The current fire-and-forget call to
getInitPromise(...).then(...) swallows errors from registerTasks; wrap the async
work in error handling so failures are observable — either add a .catch(...) to
the promise chain or convert the .then callback to an async IIFE that uses
try/catch and calls runtime.logger.error(...) with the thrown error and context
(e.g., "intel init - registerTasks failed") referencing getInitPromise and
registerTasks to ensure task registration errors are logged.
In `@src/plugins/degenIntel/services/srv_chain.ts`:
- Line 1308: The current error logging in the stop routine only records
error.message (logger.error({ err: error instanceof Error ? error.message :
String(error) }, 'Error stopping trading service:')), losing stack and metadata;
update the logging in the relevant stop/teardown function (where 'Error stopping
trading service:' is logged) to pass the full error object (or the Error
instance when available) into logger.error so the stack and other properties are
preserved (e.g., supply err: error or err: error instanceof Error ? error : new
Error(String(error))).
- Line 1291: The current logger.error call in the trading chain startup only
logs error.message which drops the stack; change the call to pass the full error
object so the stack is preserved (i.e., log the error instance instead of
error.message) — update the logger.error invocation around the "Error starting
trading chain service:" message (the logger.error call in srv_chain.ts) to
include the error object (or error under a key like err) rather than
error.message.
In `@src/plugins/degenIntel/services/srv_liquiditypooling.ts`:
- Around line 73-74: The current logger.error calls in srv_liquiditypooling.ts
(the ones emitting 'Error starting trading LP service:' and the similar call
around line 90) only log error.message and thus lose stack traces; update both
logger.error invocations to include the full error object (e.g., pass the
original Error as a field such as err or error and keep the descriptive message)
or normalize non-Error values into an Error before logging so the stack is
preserved for debugging while retaining the existing log message.
In `@src/plugins/degenIntel/services/srv_strategy.ts`:
- Around line 94-108: The call path in listActiveStrategies awaits
this.pIntWallets then calls acquireService to populate walletIntService; because
acquireService polls with long defaults it can stall this non-critical path
indefinitely—modify the logic around acquireService/walletIntService in
listActiveStrategies to use a bounded fallback (e.g., pass a reduced
retry/timeout option or wrap acquireService in a Promise.race with a timeout),
so if the service cannot be acquired within the short window you skip premium
checks and continue; ensure errors still get caught by the existing catch and
that walletIntService is only set when acquireService resolves successfully.
In `@src/plugins/degenIntel/tasks.ts`:
- Around line 145-224: The branch checking plugins.indexOf('twitter') !== -1
currently contains a large block-comment that leaves unused imports (Twitter,
TwitterParser) and no runtime logging; replace the commented-out tasks with a
proper feature flag similar to INTEL_TASKS_WALLET (e.g., INTEL_TASKS_TWITTER):
wrap the INTEL_SYNC_RAW_TWEETS and INTEL_PARSE_TWEETS
registerTaskWorker/createTask logic behind that flag so the Twitter and
TwitterParser imports are actually used when enabled, and when the flag is false
remove the commented code and imports or, if you must keep it, add a
runtime.logger.debug message inside the if-block explaining tasks are disabled;
ensure the symbols INTEL_SYNC_RAW_TWEETS, INTEL_PARSE_TWEETS, Twitter,
TwitterParser and the plugins check are updated accordingly.
In `@src/plugins/multiwallet/actions/act_wallet_assess_positions.ts`:
- Around line 80-84: The loop calling dataProvider.checkPositions for each
walletAddress (solanaWallets) runs sequentially and can be slow; change it to
run checks in parallel by mapping solanaWallets to an array of promises (e.g.,
solanaWallets.map(addr => dataProvider.checkPositions({ walletAddress: addr })))
and awaiting Promise.all on that array, or use a concurrency limiter (like
p-limit) if you must bound parallelism; ensure you wrap each promise in a
try/catch or handle Promise.allSettled results so failures for one wallet don’t
abort all processing and preserve appropriate logging (refer to checkPositions,
solanaWallets, and dataProvider).
- Around line 31-37: The handler currently types its options parameter inline as
_options: { [key: string]: unknown }; update the signature to use the shared
HandlerOptions type (import HandlerOptions from '../../../types') and replace
the parameter type with _options: HandlerOptions so the function (handler: async
(runtime: IAgentRuntime, message: Memory, _state: State, _options:
HandlerOptions, callback?: HandlerCallback) => Promise<ActionResult | void |
undefined>) matches other actions and removes the inline ad-hoc type; ensure you
add the HandlerOptions import and adjust any references if needed.
In `@src/plugins/multiwallet/actions/act_wallet_sweep.ts`:
- Line 803: The current logger call in act_wallet_sweep.ts is logging only
error.message (losing stack/metadata); update the logger.error invocation in the
sweep action to pass the full error object (e.g., use { err: error } or the
logger's error signature that accepts the Error directly) instead of { err:
error instanceof Error ? error.message : String(error) } so stack traces and
full error properties are preserved.
In `@src/plugins/multiwallet/actions/act_wallet_xfer.ts`:
- Line 387: The hardcoded rentExemptMin constant should be replaced by querying
the RPC for the current value: call and await
connection.getMinimumBalanceForRentExemption(0) and assign its result to
rentExemptMin (replace the literal 890880). Ensure this call is made inside the
async context of the act_wallet_xfer handler (or wherever rentExemptMin is
declared), handle potential errors or absence of connection (try/catch or
fallback) and keep the variable name rentExemptMin so callers remain unchanged.
- Line 378: In act_wallet_xfer remove the unused variable declaration closedATA
that was added in the SOL transfer branch (it's unused there and re-declared
later for SPL transfers); delete the redundant let closedATA = false; from the
SOL branch so only the intended declaration in the SPL transfer logic (the
re-declaration at the SPL handling code) remains.
```ts
    // account may be a mock component { id, entityId, data } from accountMockComponent, or a raw account object with accountEntityId
    const entityId: UUID = account.entityId || account.data?.accountEntityId || account.accountEntityId;
    const data = account.data || account;
```
Missing validation for undefined entityId.
If all three sources are falsy, entityId will be undefined, which could cause createComponent to fail silently or throw. Add validation.
Proposed fix
```diff
   // account may be a mock component { id, entityId, data } from accountMockComponent, or a raw account object with accountEntityId
   const entityId: UUID = account.entityId || account.data?.accountEntityId || account.accountEntityId;
+  if (!entityId) {
+    throw new Error('Account must have entityId, data.accountEntityId, or accountEntityId');
+  }
   const data = account.data || account;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugins/account/interfaces/int_accounts.ts` around lines 270 - 272, The
code computes entityId from account.entityId || account.data?.accountEntityId ||
account.accountEntityId but never validates it; add a guard before using
entityId (e.g., before calling createComponent) to check if entityId is
undefined/falsey and fail fast: throw or return a clear error that includes the
account identifier/context (use account, account.data, or accountMockComponent
info) so callers know which account is bad; ensure you reference the existing
variables entityId, data, and the createComponent call when adding this
validation.
```diff
 /** Max number of swap transactions to send in one batch (sweep-style batching). */
 const SEND_BATCH_SIZE = 4;

 /**
- * Swaps tokens using Jupiter API.
+ * Builds a single swap transaction (quote + build) for token -> SOL. Used for parallel build phase.
  */
-async function swapToken(
+async function buildSwapTx(
   connection: Connection,
   walletPublicKey: PublicKey,
   inputTokenCA: string,
-  outputTokenCA: string,
   amount: number,
   runtime: IAgentRuntime
 ): Promise<{ swapTransaction: string }> {
-  try {
-    const decimals =
-      inputTokenCA === 'So11111111111111111111111111111111111111111'
-        ? new BigNumber(9)
-        : new BigNumber(await getTokenDecimals(connection, inputTokenCA));
-
-    logger.log('Decimals:', decimals.toString());
-
-    const amountBN = new BigNumber(amount);
-    const adjustedAmount = amountBN.multipliedBy(new BigNumber(10).pow(decimals));
-
-    logger.log('Fetching quote with params:', JSON.stringify({
-      inputMint: inputTokenCA,
-      outputMint: outputTokenCA,
-      amount: adjustedAmount.toString(),
-    }));
-
-    const jupiterService = runtime.getService('JUPITER_SERVICE') as any;
-
-    const quoteData = await jupiterService.getQuote({
-      inputMint: inputTokenCA,
-      outputMint: outputTokenCA,
-      amount: adjustedAmount,
-      slippageBps: 200,
-    });
-
-    const swapRequestBody = {
-      quoteResponse: quoteData,
-      userPublicKey: walletPublicKey.toBase58(),
-      dynamicComputeUnitLimit: true,
-      dynamicSlippage: true,
-      priorityLevelWithMaxLamports: {
-        maxLamports: 4000000,
-        priorityLevel: 'veryHigh',
-      },
-    };
-
-    const swapResponse = await fetch('https://quote-api.jup.ag/v6/swap', {
-      method: 'POST',
-      headers: { 'Content-Type': 'application/json' },
-      body: JSON.stringify(swapRequestBody),
-    });
-
-    const swapData = await swapResponse.json();
-
-    if (!swapData || !swapData.swapTransaction) {
-      logger.error('Swap error:', swapData);
-      throw new Error(
-        `Failed to get swap transaction: ${swapData?.error || 'No swap transaction returned'}`
-      );
-    }
-
-    return swapData;
-  } catch (error) {
-    logger.error('Error in swapToken:', error instanceof Error ? error.message : String(error));
-    throw error;
+  const WSOL_MINT = 'So11111111111111111111111111111111111111112';
+  const decimals =
+    inputTokenCA === 'So11111111111111111111111111111111111111111'
+      ? new BigNumber(9)
+      : new BigNumber(await getTokenDecimals(connection, inputTokenCA));
+  const amountBN = new BigNumber(amount);
+  const adjustedAmount = amountBN.multipliedBy(new BigNumber(10).pow(decimals));
+
+  const jupiterService = runtime.getService('JUPITER_SERVICE') as any;
+  const quoteData = await jupiterService.getQuote({
+    inputMint: inputTokenCA,
+    outputMint: WSOL_MINT,
+    amount: adjustedAmount,
+    slippageBps: 200,
+  });
+
+  const swapData = await jupiterService.executeSwap({
+    quoteResponse: quoteData,
+    userPublicKey: walletPublicKey.toBase58(),
+    slippageBps: 200,
+  });
+
+  if (!swapData || !(swapData as any).swapTransaction) {
+    throw new Error(
+      `Failed to get swap transaction: ${(swapData as any)?.error || 'No swap transaction returned'}`
+    );
+  }
+  return swapData as { swapTransaction: string };
```
🧩 Analysis chain: verification scripts (fd/rg/sed over act_wallet_swap_all.ts) were run against the elizaOS/spartan repository; output elided.
Exclude wSOL alongside native SOL from swap candidates.
The filter at line 291 only excludes native SOL (So11111111111111111111111111111111111111111), but if the wallet holds wSOL tokens (So11111111111111111111111111111111111111112), they will pass through and buildSwapTx will attempt to swap wSOL to wSOL—a self-swap that fails. Extract WSOL_MINT as a top-level constant and exclude both mints from the token list:
🔧 Fix: exclude wSOL from swap candidates

```diff
 const SEND_BATCH_SIZE = 4;
+const WSOL_MINT = 'So11111111111111111111111111111111111111112';

 async function buildSwapTx(
   connection: Connection,
   walletPublicKey: PublicKey,
   inputTokenCA: string,
   amount: number,
   runtime: IAgentRuntime
 ): Promise<{ swapTransaction: string }> {
-  const WSOL_MINT = 'So11111111111111111111111111111111111111112';
   const decimals =
-    inputTokenCA === 'So11111111111111111111111111111111111111111'
+    inputTokenCA === WSOL_MINT
```

And at line 291:

```diff
-  const SOL_MINT = 'So11111111111111111111111111111111111111111';
+  const SOL_MINT = WSOL_MINT;
   const tokenAccountsWithBalances = heldTokens.filter(
-    (t) => t.account.data.parsed?.info?.mint !== SOL_MINT
+    (t) => t.account.data.parsed?.info?.mint !== WSOL_MINT && t.account.data.parsed?.info?.mint !== 'So11111111111111111111111111111111111111111'
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugins/multiwallet/actions/act_wallet_swap_all.ts` around lines 73 -
113, Extract the WSOL mint string
('So11111111111111111111111111111111111111112') into a top-level constant and
use it in buildSwapTx instead of a local variable; then update the
token-candidates filtering logic (the filter that currently only excludes native
SOL) to exclude both the native SOL mint and the new WSOL_MINT constant so wSOL
tokens are not considered for swaps and buildSwapTx cannot attempt a self-swap.
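The exclusion the bot proposes can be sketched as a small standalone filter. The `ParsedTokenAccount` interface below is a simplified stand-in for the parsed token-account shape returned by Solana RPC, not the project's actual types:

```typescript
// Mint addresses involved: the native-SOL pseudo-mint and the canonical wrapped-SOL mint.
const NATIVE_SOL_MINT = 'So11111111111111111111111111111111111111111';
const WSOL_MINT = 'So11111111111111111111111111111111111111112';

// Simplified stand-in for the parsed token-account data used in the filter.
interface ParsedTokenAccount {
  account: { data: { parsed?: { info?: { mint?: string } } } };
}

// Keep only accounts whose mint is neither native SOL nor wSOL,
// so buildSwapTx can never be asked to swap wSOL into wSOL.
function swapCandidates(heldTokens: ParsedTokenAccount[]): ParsedTokenAccount[] {
  return heldTokens.filter((t) => {
    const mint = t.account.data.parsed?.info?.mint;
    return mint !== undefined && mint !== NATIVE_SOL_MINT && mint !== WSOL_MINT;
  });
}
```

Hoisting both mints to module scope, as the suggestion does, also keeps the decimals check and the candidate filter from drifting apart when either address is edited.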
const decimals =
  (mintInfo.value?.data as { parsed: { info: { decimals: number } } })?.parsed?.info
    ?.decimals ?? 9;
const adjustedAmount = BigInt(Number(content.amount) * 10 ** decimals);
Precision loss risk with BigInt conversion from float.
BigInt(Number(content.amount) * 10 ** decimals) performs floating-point multiplication before BigInt conversion. For tokens with high decimals or large amounts, this can lose precision.
Use BigNumber or string-based arithmetic
- const adjustedAmount = BigInt(Number(content.amount) * 10 ** decimals);
+ // Use string multiplication to avoid float precision loss
+ const amountStr = String(content.amount);
+ const [intPart, decPart = ''] = amountStr.split('.');
+ const paddedDec = decPart.padEnd(decimals, '0').slice(0, decimals);
+ const adjustedAmount = BigInt(intPart + paddedDec);

Or import BigNumber (already used elsewhere in the codebase):
import BigNumber from 'bignumber.js';
const adjustedAmount = BigInt(new BigNumber(content.amount).times(10 ** decimals).toFixed(0));

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugins/multiwallet/actions/act_wallet_xfer.ts` at line 443, Replace the
floating-point conversion used to produce adjustedAmount (currently
BigInt(Number(content.amount) * 10 ** decimals)) with a string-safe
BigNumber-based calculation: import BigNumber from 'bignumber.js', construct a
BigNumber from content.amount, multiply by 10**decimals using
BigNumber.pow/multipliedBy to avoid floating math, convert to an integer string
(e.g. via toFixed or integerValue) and finally pass that string into BigInt to
set adjustedAmount; update any references to adjustedAmount in
act_wallet_xfer.ts accordingly.
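The drift is easy to reproduce in isolation. A minimal sketch of the string-based path (the helper name is illustrative, not taken from the PR):

```typescript
// Convert a decimal amount string to base units without going through a float.
function toBaseUnits(amount: string, decimals: number): bigint {
  const [intPart, decPart = ''] = amount.split('.');
  const paddedDec = decPart.padEnd(decimals, '0').slice(0, decimals);
  return BigInt(intPart + paddedDec);
}

// Float path: 1.1 is not exactly representable as a double, so the product
// drifts before BigInt ever sees it.
const viaFloat = BigInt(1.1 * 10 ** 18);

// String path: exact.
const viaString = toBaseUnits('1.1', 18); // 1100000000000000000n
```

At 18 decimals the two results differ by roughly a hundred base units per token; worse, for some inputs the float product is not an integer at all and `BigInt()` throws a `RangeError`.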
import { asUUID, createUniqueUuid, ModelType, composePromptFromState } from '@elizaos/core';
import type { IAgentRuntime, UUID, Memory, Content } from '@elizaos/core';
import { DISCORD_SERVICE_NAME, DiscordService, type IDiscordService } from '@elizaos/plugin-discord';
import { generateObject } from '../core-compat';
🧩 Analysis chain
🏁 Script executed:
# First, find the actual generateObject usage and retry patterns
rg -n "generateObject\(" src -g'*.ts' -A 5 -B 2

🏁 Script executed:
# Check the core-compat module to understand generateObject behavior
fd "core-compat" src -type f

🏁 Script executed:
# Check package version info
cat package.json | grep -A 5 -B 5 "@elizaos/core"

🏁 Script executed:
# Look at the specific file mentioned
wc -l src/tasks/tsk_discord_post.ts

🏁 Script executed:
# Read the critical sections of the file
sed -n '1,50p' src/tasks/tsk_discord_post.ts

🏁 Script executed:
# Check the retry logic at the mentioned line ranges
sed -n '130,150p' src/tasks/tsk_discord_post.ts

🏁 Script executed:
# Check lines 331-345 for second generateObject call
sed -n '325,345p' src/tasks/tsk_discord_post.ts

🏁 Script executed:
# Check if hasGenerateObject exists in core-compat
rg -n "hasGenerateObject" src -A 3 -B 1

🏁 Script executed:
# Read full core-compat.ts to understand the complete implementation
cat -n src/core-compat.ts | head -100

🏁 Script executed:
# Look for the retry recursion pattern specifically
sed -n '140,150p' src/tasks/tsk_discord_post.ts

🏁 Script executed:
# Confirm both calls lack guard - search for generateObject calls in tsk_discord_post
sed -n '137,145p' src/tasks/tsk_discord_post.ts && echo "---" && sed -n '331,340p' src/tasks/tsk_discord_post.ts

🏁 Script executed:
# Verify strategy_llm pattern to confirm guard usage elsewhere
sed -n '611,620p' src/plugins/trading/strategies/strategy_llm.ts
Add guard before generateObject calls to prevent infinite recursion on older core versions.
generateObject() returns undefined on cores that lack this method (documented in core-compat.ts line 64). The unguarded calls at lines 137-145 and 331-340 will trigger infinite recursion when responseContent is falsy. Use hasGenerateObject() check (already imported and used elsewhere in strategy_llm.ts) before each call:
+ if (!hasGenerateObject(runtime)) {
+   runtime.logger.error('generateObject not supported by this core version');
+   return false;
+ }
  const responseContent = await generateObject(runtime, ModelType.TEXT_LARGE, {

Same pattern needed at line 331 for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tasks/tsk_discord_post.ts` at line 4, The calls to generateObject(...) in
tsk_discord_post.ts can be undefined on older cores and may trigger infinite
recursion when responseContent is falsy; wrap each generateObject call (the ones
currently used around the responseContent handling — previously at the blocks
that call generateObject for constructing the reply payload) with a guard using
hasGenerateObject() and only call generateObject(...) when hasGenerateObject()
returns true, otherwise fall back to the existing behavior (skip the
generateObject step or return the raw responseContent); specifically update the
generateObject usages referenced by the variables handling responseContent and
the later reply construction to first check hasGenerateObject() before invoking
generateObject().
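The guard pattern reduces to a feature check before the call. The shapes below are illustrative stand-ins for the core-compat helpers and the runtime, not the project's actual signatures:

```typescript
// Minimal runtime shape: generateObject may or may not exist depending on core version.
interface RuntimeLike {
  generateObject?: (...args: unknown[]) => Promise<unknown>;
}

function hasGenerateObject(runtime: RuntimeLike): boolean {
  return typeof runtime.generateObject === 'function';
}

// Guarded call: resolve to undefined instead of retrying forever
// when the core does not provide generateObject.
async function guardedGenerate(runtime: RuntimeLike, ...args: unknown[]): Promise<unknown> {
  if (!hasGenerateObject(runtime)) return undefined;
  return runtime.generateObject!(...args);
}
```

The key point is that the caller must treat `undefined` as "unsupported" and bail out, rather than as "empty result, retry", which is what makes the current unguarded calls recursive.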
 const rc = responseContent as PostIdeaContent;
 const gc = generatedContent as PostContent;
 const postText = (gc.post ?? '').trim() + `
 > IDEA: ${rc?.idea ?? ''}

-> THOUGHT: ${responseContent.thought}
+> THOUGHT: ${rc?.thought ?? ''}

-> GOAL: ${responseContent.goal}
+> GOAL: ${rc?.goal ?? ''}

-> INTERACTIVE: ${responseContent.interactive}
+> INTERACTIVE: ${rc?.interactive ?? ''}

-> PROVIDERS: ${responseContent.providers}
+> PROVIDERS: ${Array.isArray(rc?.providers) ? rc.providers.join(', ') : rc?.providers ?? ''}

-> POST_THOUGHT: ${generatedContent.thought}
+> POST_THOUGHT: ${gc?.thought ?? ''}
 `;
Don’t append planning metadata to the public post body.
The IDEA/THOUGHT/GOAL block is currently sent to Discord, which leaks internal planning and risks hitting the 2000‑char limit. Keep metadata for logs/memory only.
🔧 Suggested split between post text and metadata
- const postText = (gc.post ?? '').trim() + `
-> IDEA: ${rc?.idea ?? ''}
-
-> THOUGHT: ${rc?.thought ?? ''}
-
-> GOAL: ${rc?.goal ?? ''}
-
-> INTERACTIVE: ${rc?.interactive ?? ''}
-
-> PROVIDERS: ${Array.isArray(rc?.providers) ? rc.providers.join(', ') : rc?.providers ?? ''}
-
-> POST_THOUGHT: ${gc?.thought ?? ''}
- `;
+ const postText = (gc.post ?? '').trim();
+ const postMeta = {
+ idea: rc?.idea ?? '',
+ thought: rc?.thought ?? '',
+ goal: rc?.goal ?? '',
+ interactive: rc?.interactive ?? '',
+ providers: Array.isArray(rc?.providers) ? rc.providers.join(', ') : rc?.providers ?? '',
+ postThought: gc?.thought ?? '',
+ };
+ runtime.logger.debug({ postMeta }, 'Post plan metadata');

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const rc = responseContent as PostIdeaContent;
const gc = generatedContent as PostContent;
const postText = (gc.post ?? '').trim();
const postMeta = {
  idea: rc?.idea ?? '',
  thought: rc?.thought ?? '',
  goal: rc?.goal ?? '',
  interactive: rc?.interactive ?? '',
  providers: Array.isArray(rc?.providers) ? rc.providers.join(', ') : rc?.providers ?? '',
  postThought: gc?.thought ?? '',
};
runtime.logger.debug({ postMeta }, 'Post plan metadata');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tasks/tsk_discord_post.ts` around lines 356 - 370, The current postText
variable (built from gc.post plus the IDEA/THOUGHT/GOAL block using rc and gc)
leaks internal planning to Discord; instead keep the public message as just the
trimmed gc.post and move the appended metadata into a separate metadata string
(e.g., metadataText) for logs/memory only. Update the code that constructs
postText to use only (gc.post ?? '').trim(), build the detailed metadata block
from rc and gc into a new variable (referencing rc, gc, PostIdeaContent,
PostContent), and ensure only postText is sent to Discord while metadataText is
written to logs or stored in memory. Also validate/truncate postText to respect
Discord’s 2000-char limit before sending.
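The 2000-character cap referenced above is Discord's hard per-message limit. A simple clamp (the helper name is illustrative, not from the PR) keeps the send from failing even after the metadata block is removed:

```typescript
// Discord rejects messages longer than 2000 characters.
const DISCORD_MAX_MESSAGE_LENGTH = 2000;

// Trim the post and hard-cap its length, appending an ellipsis when truncating.
function clampForDiscord(text: string, max = DISCORD_MAX_MESSAGE_LENGTH): string {
  const trimmed = text.trim();
  return trimmed.length <= max ? trimmed : trimmed.slice(0, max - 1) + '…';
}
```

Applied as `sendMessage(clampForDiscord(postText))`, this guards the happy path where `gc.post` alone already exceeds the limit.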


Note: Medium Risk
Touches core initialization/service readiness paths and modifies on-chain wallet operations (swap/sweep/transfer) plus position auto-closing logic, which could affect trading behavior and user funds if regressions occur.

Overview
Introduces `src/core-compat.ts` (+ `src/types.ts`) to smooth over older/newer `@elizaos/core` differences (init/service readiness, optional `generateObject`, cache TTL support, and pino-vs-legacy logging), then updates init/services/actions to use these helpers and avoid direct `initPromise`/`getServiceLoadPromise` assumptions.

Standardizes service wiring around `INTEL_DATAPROVIDER`/`INTEL_CHAIN` and hardens degenIntel data sync: `checkPositions` now supports per-wallet checks, validates that wallets still hold tokens (closing stale/zero-amount positions), skips watch-only wallets for closes, and triggers re-checks after closes.

Improves multiwallet and trading wallet-write flows by syncing positions after swaps/transfers/sweeps/open-position actions (`notifyWalletWrite`), adds a new `MULTIWALLET_ASSESS_POSITIONS` action for manual reconciliation, refactors `swap_all` to use Solana service token discovery and batch/parallelized Jupiter swaps, and fixes account component persistence/upsert edge cases (mock component shapes, safer `entityId` resolution).

Written by Cursor Bugbot for commit c24e30c. This will update automatically on new commits. Configure here.
Greptile Summary

This PR introduces a compatibility layer (`core-compat.ts`) to support multiple versions of `@elizaos/core`, standardizes service names from `TRADER_*` to `INTEL_*` across the codebase, resolves merge conflicts in wallet actions, and adds position sync improvements.

Key changes:
- `core-compat.ts` provides graceful fallbacks for `initPromise`, `getServiceLoadPromise`, `generateObject`, and cache operations with optional TTL
- `INTEL_DATAPROVIDER` and `INTEL_CHAIN` now used everywhere to match plugin registration
- Merge conflicts resolved in `act_wallet_swap`, `act_wallet_swap_all`, `act_wallet_xfer`, and `strategy_llm` with improved error handling
- New `act_wallet_assess_positions` action allows manual position sync
- `notifyWalletWrite` integration ensures position data stays synchronized after wallet modifications
- Pino-style logging (`logger.error({ err }, 'msg')`) adopted throughout
- `strategy_llm.ts` enhanced with portfolio tracking and SOL balance filtering to prevent counting immobile positions

Confidence Score: 4/5
- Recommend focusing on `src/core-compat.ts`, `src/plugins/degenIntel/services/srv_dataprovider.ts`, `src/plugins/trading/strategies/strategy_llm.ts`, and the wallet action files during testing to ensure the compatibility layer works across core versions and position sync logic functions correctly.

Important Files Changed
- `core-compat.ts`: compatibility helpers for multiple `@elizaos/core` versions with graceful fallbacks
- `types.ts`: `HandlerOptions` type for cross-version compatibility
- `tsk_discord_post.ts`: `generateObject` from core-compat with proper typing
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
  A[PR: Staging Fixes & Core-Compat] --> B[Core Compatibility Layer]
  A --> C[Service Name Standardization]
  A --> D[Merge Conflict Resolutions]
  A --> E[Build & Type Fixes]
  B --> B1[core-compat.ts]
  B1 --> B2[getInitPromise]
  B1 --> B3[getServiceLoadPromise]
  B1 --> B4[generateObject with hasGenerateObject]
  B1 --> B5[setCache with optional TTL]
  B1 --> B6[logCompatError/logCompatDebug]
  C --> C1[TRADER_DATAPROVIDER → INTEL_DATAPROVIDER]
  C --> C2[TRADER_CHAIN → INTEL_CHAIN]
  C --> C3[Service registration alignment]
  D --> D1[act_wallet_swap]
  D --> D2[act_wallet_swap_all]
  D --> D3[act_wallet_xfer]
  D --> D4[strategy_llm]
  D1 --> D5[notifyWalletWrite integration]
  D2 --> D5
  D3 --> D5
  D4 --> D5
  E --> E1[HandlerOptions in types.ts]
  E --> E2[Pino-style logging throughout]
  E --> E3[TypeScript type fixes]
  E --> E4[tsk_discord_post generateObject types]
  D5 --> F[Position Sync Enhancement]
  F --> F1[ensurePositionsMatchWalletHoldings]
  F --> F2[act_wallet_assess_positions action]
  F --> F3[Prevent stale position data]
```

Last reviewed commit: c24e30c