Caution: Review failed. The pull request is closed.

📝 Walkthrough

Introduces TTFT (time to first token) tracking, a RequestTracker, and two-phase graceful shutdown; migrates the billing unit from microUSD to nanoUSD using big.Int; adds a cost-recalculation API with progress broadcasting (HTTP + WebSocket); extends the repositories with batch and progress-reporting recalculation; adds pricing, cost-breakdown, and recalculation interactions to the frontend; includes a DB migration that multiplies cost by 1000.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as Frontend UI
    participant Transport as HttpTransport
    participant AdminHTTP as AdminHandler (HTTP)
    participant AdminSvc as AdminService
    participant Repo as Repository/DB
    participant Broad as WebSocketHub/Broadcaster
    UI->>Transport: POST /usage-stats/recalculate-costs
    Transport->>AdminHTTP: HTTP POST
    AdminHTTP->>AdminSvc: RecalculateCosts()
    activate AdminSvc
    AdminSvc->>Repo: RecalculateCostsFromAttemptsWithProgress(progressChan)
    loop per batch
        Repo-->>AdminSvc: progress event (current/total)
        AdminSvc->>Broad: Broadcast RecalculateCostsProgress
        Broad->>UI: WS: recalculate_costs_progress
    end
    Repo-->>AdminSvc: final result
    AdminSvc->>Broad: Broadcast completed result
    AdminSvc-->>AdminHTTP: result
    AdminHTTP-->>Transport: HTTP 200 with result
    Transport-->>UI: response
    deactivate AdminSvc
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cmd/maxx/main.go (1)
214-232: Unify the broadcaster instance used to initialize AdminService to avoid forked event routing

cmd/maxx/main.go and internal/core/database.go each initialize an AdminService with a different broadcaster instance:

- main.go passes the raw wsHub (*handler.WebSocketHub)
- database.go passes the wrapped wailsBroadcaster (*event.WailsBroadcaster)

Although both implement the event.Broadcaster interface, events end up broadcast through two separate instances, so their state can drift out of sync. Use the wailsBroadcaster instance in both places to keep event routing consistent.

internal/repository/sqlite/proxy_request.go (1)

77-106: Cursor pagination is inconsistent with the sort key and can skip or duplicate rows.

The before/after cursor is still based on id, but the ordering changed to end_time + id. Page boundaries no longer line up with the sort order, so records can be repeated or skipped across pages. Switch to a composite cursor on (end_time, id), or keep sorting by id and apply the secondary ordering on the frontend.
🤖 Fix all issues with AI agents
In `@internal/repository/sqlite/proxy_request.go`:
- Around line 232-241: The AddCost method can write a negative value into the
unsigned cost column when delta is negative and its absolute value exceeds
current cost; update AddCost on ProxyRequestRepository to guarantee non-negative
results by replacing gorm.Expr("cost + ?", delta) with a guarded SQL expression
(e.g. gorm.Expr("MAX(cost + ?, 0)", delta) or gorm.Expr("CASE WHEN cost + ? < 0
THEN 0 ELSE cost + ? END", delta, delta)) or by loading the current cost and
performing the arithmetic/validation in a transaction before writing; ensure the
method clamps the resulting cost to >= 0 (and returns an error on overflow if
desired) so the uint64 column cannot receive negative data.
In `@web/src/components/provider-details-dialog.tsx`:
- Around line 112-123: The formatCost function is treating its input as nanoUSD
but ProviderStats.totalCost is microUSD, causing values to be 1000x too small;
update formatCost (and its parameter name if helpful) to accept microUSD and
convert by dividing by 1_000_000 (i.e., const usd = Math.floor(microUsd) /
1_000_000 or Math.floor(microUsd / 1) / 1_000_000) instead of dividing by 1000
first, preserve the existing toFixed logic, and update any callers or the
parameter name to reflect microUSD to avoid future confusion (reference:
function formatCost and ProviderStats.totalCost).
In `@web/src/pages/overview.tsx`:
- Around line 70-80: The formatCost function is treating values as nanoUSD but
the type/comments and other code use microUSD; fix by making representation
consistent and protecting against overflows: either update the type/comments to
nanoUSD everywhere or change formatCost to accept and operate on BigInt microUSD
values (matching RequestDetailPanel.tsx token math) — if using BigInt, update
callers to pass BigInt, perform integer scaling/divisions with BigInt constants
(e.g., convert microUSD→USD with proper BigInt divisors), then convert to Number
only for final formatting when safe or format from BigInt to string to avoid
precision loss for totals > Number.MAX_SAFE_INTEGER; ensure function name
formatCost and any aggregation sites are updated accordingly.
In `@web/src/pages/requests/index.tsx`:
- Around line 44-52: The local re-declaration of ProviderTypeKey and related
constants can drift from the canonical definition; import and reuse the existing
ProviderTypeKey type from web/src/pages/providers/types.ts instead of
re-declaring it, update PROVIDER_TYPE_ORDER to be typed with the imported
ProviderTypeKey, and replace the hard-coded "Custom" label in
PROVIDER_TYPE_LABELS with an i18n string (use your i18n hook/translate function)
while keeping the other labels consistent.
In `@web/src/pages/stats/index.tsx`:
- Around line 847-853: The conditional computing the success ratio inside the
iconClassName prop can produce NaN when summary.totalRequests is 0; update the
logic in the iconClassName expression (the ternary that compares
summary.successfulRequests / summary.totalRequests) to first check
summary.totalRequests > 0 and only compute the division when true, otherwise
treat the ratio as 0 (or an explicit fallback) so the ternary branches are
deterministic; modify the expression around iconClassName to use a guarded ratio
variable or inline checks like summary.totalRequests > 0 &&
(summary.successfulRequests / summary.totalRequests) >= 0.95 etc.
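The guard described above can be sketched in Go (the helper name is ours, not from the codebase): define the ratio as 0 when there are no requests, so downstream threshold comparisons never see NaN.

```go
package main

import "fmt"

// successRatio guards the division: with zero total requests the ratio is
// defined as 0 instead of NaN, so threshold checks such as >= 0.95 stay
// deterministic.
func successRatio(successful, total int) float64 {
	if total == 0 {
		return 0
	}
	return float64(successful) / float64(total)
}

func main() {
	fmt.Println(successRatio(95, 100)) // 0.95
	fmt.Println(successRatio(0, 0))    // 0, not NaN
}
```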
🧹 Nitpick comments (15)
internal/executor/executor.go (1)
484-489: Logic is correct, but the code is duplicated.

The pricing-model selection here is identical to the success path (lines 407-412). Consider extracting a helper function for maintainability.

♻️ Optional refactor: extract a helper

```diff
+// getPricingModel returns the model to use for pricing calculation.
+// Prefers ResponseModel (actual model from API), falls back to MappedModel.
+func getPricingModel(attempt *domain.ProxyUpstreamAttempt) string {
+	if attempt.ResponseModel != "" {
+		return attempt.ResponseModel
+	}
+	return attempt.MappedModel
+}
```

Then call it in both places:

```diff
-	// Use ResponseModel for pricing (actual model from API response), fallback to MappedModel
-	pricingModel := attemptRecord.ResponseModel
-	if pricingModel == "" {
-		pricingModel = attemptRecord.MappedModel
-	}
-	attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics)
+	attemptRecord.Cost = pricing.GlobalCalculator().Calculate(getPricingModel(attemptRecord), metrics)
```

web/src/pages/client-routes/components/provider-row.tsx (1)
- // Use ResponseModel for pricing (actual model from API response), fallback to MappedModel - pricingModel := attemptRecord.ResponseModel - if pricingModel == "" { - pricingModel = attemptRecord.MappedModel - } - attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics) + attemptRecord.Cost = pricing.GlobalCalculator().Calculate(getPricingModel(attemptRecord), metrics)web/src/pages/client-routes/components/provider-row.tsx (1)
28-39: 格式化逻辑正确,实现清晰。转换逻辑验证:
Math.floor(nanoUsd / 1000)将纳美元转换为微美元(向下取整)- 再除以
1_000_000转换为美元,保留 6 位小数精度正则表达式
/\.?0+$/正确处理尾随零的移除(如0.001500→0.0015)。💡 可选优化:考虑提取魔法数字为常量
+const NANO_TO_MICRO = 1000; +const MICRO_TO_USD = 1_000_000; + // 格式化成本 (纳美元 → 美元,向下取整到 6 位) function formatCost(nanoUsd: number): string { // 向下取整到 6 位小数 (microUSD 精度) - const usd = Math.floor(nanoUsd / 1000) / 1_000_000; + const usd = Math.floor(nanoUsd / NANO_TO_MICRO) / MICRO_TO_USD;internal/repository/sqlite/migrations.go (1)
25-49: UP/DOWN WHERE conditions are inconsistent, but no data corruption results

On inspection, the cost field in the ProxyRequest, ProxyUpstreamAttempt, and UsageStats tables is defined as uint64 (unsigned), so it can never be negative. The original review's assumption that "negative cost / refund records" could corrupt data therefore does not hold.

Still, the design of Up using WHERE cost > 0 while Down has no WHERE clause is inconsistent. For rows with cost = 0, Up skips them while Down still executes 0 / 1000 = 0; the result is the same but the logic is asymmetric. Make the Up/Down conditions match (either both add WHERE cost > 0 or both drop it) to keep the migration symmetric and clear.

internal/repository/sqlite/proxy_upstream_attempt.go (2)
47-53: Potential memory issue: ListAll() loads every record into memory

For large datasets, ListAll() can cause memory pressure. Since StreamForCostCalc already implements a batched streaming pattern, prefer the streaming method for cost recalculation. If ListAll() is only intended for tests or small datasets, add a comment documenting that limitation.

99-116: Variable shadowing: loop variable r shadows the receiver

The loop variable r on line 101 shadows the method receiver r. The code works, but this hurts readability and can introduce bugs during maintenance.

♻️ Suggested change

```diff
-	for i, r := range results {
+	for i, row := range results {
 		batch[i] = &domain.AttemptCostData{
-			ID:                r.ID,
-			ProxyRequestID:    r.ProxyRequestID,
-			ResponseModel:     r.ResponseModel,
-			MappedModel:       r.MappedModel,
-			RequestModel:      r.RequestModel,
-			InputTokenCount:   r.InputTokenCount,
-			OutputTokenCount:  r.OutputTokenCount,
-			CacheReadCount:    r.CacheReadCount,
-			CacheWriteCount:   r.CacheWriteCount,
-			Cache5mWriteCount: r.Cache5mWriteCount,
-			Cache1hWriteCount: r.Cache1hWriteCount,
-			Cost:              r.Cost,
+			ID:                row.ID,
+			ProxyRequestID:    row.ProxyRequestID,
+			ResponseModel:     row.ResponseModel,
+			MappedModel:       row.MappedModel,
+			RequestModel:      row.RequestModel,
+			InputTokenCount:   row.InputTokenCount,
+			OutputTokenCount:  row.OutputTokenCount,
+			CacheReadCount:    row.CacheReadCount,
+			CacheWriteCount:   row.CacheWriteCount,
+			Cache5mWriteCount: row.Cache5mWriteCount,
+			Cache1hWriteCount: row.Cache1hWriteCount,
+			Cost:              row.Cost,
 		}
 	}
```

internal/repository/sqlite/usage_stats.go (2)
521-528: Use sort.Slice instead of a hand-written bubble sort

The current manual bubble sort is O(n²) and inefficient for larger datasets. Use sort.Slice from the Go standard library.

♻️ Suggested change

```diff
+import "sort"

 	// Sort by TimeBucket, descending
-	for i := 0; i < len(merged)-1; i++ {
-		for j := i + 1; j < len(merged); j++ {
-			if merged[j].TimeBucket.After(merged[i].TimeBucket) {
-				merged[i], merged[j] = merged[j], merged[i]
-			}
-		}
-	}
+	sort.Slice(merged, func(i, j int) bool {
+		return merged[i].TimeBucket.After(merged[j].TimeBucket)
+	})
```

617-619: Consider logging scan errors

Silently continuing when a scan fails can hide data problems. Log at least a warning to aid debugging.

♻️ Suggested change

```diff
 	if err != nil {
+		log.Printf("[queryRecentMinutesStats] Scan error: %v", err)
 		continue
 	}
```

web/src/pages/routes/form.tsx (1)
1-16: Reuse the existing ProviderTypeKey to avoid definition drift

This file re-declares ProviderTypeKey and the label constants, but a type of the same name already exists in web/src/pages/providers/types.ts. Move it to a shared module and reuse it here, so the definitions stay in sync when the type is extended later.

web/src/pages/requests/detail/RequestHeader.tsx (1)
9-15: Extract formatCost into a shared utility.

This function is nearly identical to the one in provider-row.tsx; the only difference is zero handling (this one returns '-', provider-row returns '$0'). Extract it into @/lib/utils (or a similar shared location) and control the zero behavior via a parameter, to avoid duplication and potential inconsistency.

♻️ Suggested refactor

```ts
// In `@/lib/utils.ts`
export function formatCostNanoUSD(nanoUSD: number, options?: { showZero?: boolean }): string {
  if (nanoUSD === 0) return options?.showZero ? '$0' : '-';
  const usd = Math.floor(nanoUSD / 1000) / 1_000_000;
  if (usd >= 1) return `$${usd.toFixed(2)}`;
  if (usd >= 0.01) return `$${usd.toFixed(3)}`;
  return `$${usd.toFixed(6).replace(/\.?0+$/, '')}`;
}
```

web/src/components/routes/ClientTypeRoutesContent.tsx (1)
50-58: Reuse the ProviderTypeKey type definition.

ProviderTypeKey is already defined in web/src/pages/providers/types.ts as 'custom' | 'antigravity' | 'kiro'. Import the type from that file instead of re-declaring it, to keep types consistent and reduce maintenance burden.

♻️ Suggested refactor

```diff
+import { ProviderTypeKey } from '@/pages/providers/types';

-type ProviderTypeKey = 'antigravity' | 'kiro' | 'custom';
-
 const PROVIDER_TYPE_ORDER: ProviderTypeKey[] = ['antigravity', 'kiro', 'custom'];
```

web/src/pages/requests/detail/RequestDetailView.tsx (1)
56-56: Cache the costBreakdown lookups to reduce repeated find calls.

Each row currently does multiple find calls plus non-null assertions; extract them into local variables first to improve readability.

♻️ Suggested change

```diff
 export function RequestDetailView({
   request,
   activeTab,
   setActiveTab,
   formatJSON,
   formatCost,
   projectName,
   sessionInfo,
   projectMap,
   tokenName,
   costBreakdown,
 }: RequestDetailViewProps) {
   const { t } = useTranslation();
+  const inputItem = costBreakdown?.items.find((i) => i.label === 'Input');
+  const outputItem = costBreakdown?.items.find((i) => i.label === 'Output');
+  const cacheReadItem = costBreakdown?.items.find((i) => i.label === 'Cache Read');
+  const cache5mItem = costBreakdown?.items.find((i) => i.label === 'Cache Write (5m)');
+  const cache1hItem = costBreakdown?.items.find((i) => i.label === 'Cache Write (1h)');
+  const cacheWriteItem = costBreakdown?.items.find((i) => i.label === 'Cache Write');
+  const cacheWriteDisplayItem = cache5mItem || cache1hItem || cacheWriteItem;
   return (
@@
-  {costBreakdown?.items.find((i) => i.label === 'Input') && (
+  {inputItem && (
     <span className="text-xs text-muted-foreground">
-      × {formatPricePerM(costBreakdown.items.find((i) => i.label === 'Input')!.pricePerM)} ={' '}
+      × {formatPricePerM(inputItem.pricePerM)} ={' '}
       <span className="text-blue-400">
-        {formatCost(costBreakdown.items.find((i) => i.label === 'Input')!.cost)}
+        {formatCost(inputItem.cost)}
       </span>
     </span>
   )}
@@
-  {costBreakdown?.items.find((i) => i.label === 'Output') && (
+  {outputItem && (
     <span className="text-xs text-muted-foreground">
-      × {formatPricePerM(costBreakdown.items.find((i) => i.label === 'Output')!.pricePerM)} ={' '}
+      × {formatPricePerM(outputItem.pricePerM)} ={' '}
       <span className="text-blue-400">
-        {formatCost(costBreakdown.items.find((i) => i.label === 'Output')!.cost)}
+        {formatCost(outputItem.cost)}
       </span>
     </span>
   )}
@@
-  {costBreakdown?.items.find((i) => i.label === 'Cache Read') && (
+  {cacheReadItem && (
     <span className="text-xs text-muted-foreground">
-      × {formatPricePerM(costBreakdown.items.find((i) => i.label === 'Cache Read')!.pricePerM)} ={' '}
+      × {formatPricePerM(cacheReadItem.pricePerM)} ={' '}
       <span className="text-blue-400">
-        {formatCost(costBreakdown.items.find((i) => i.label === 'Cache Read')!.cost)}
+        {formatCost(cacheReadItem.cost)}
       </span>
     </span>
   )}
@@
-  const cache5m = costBreakdown?.items.find((i) => i.label === 'Cache Write (5m)');
-  const cache1h = costBreakdown?.items.find((i) => i.label === 'Cache Write (1h)');
-  const cacheWrite = costBreakdown?.items.find((i) => i.label === 'Cache Write');
-  const item = cache5m || cache1h || cacheWrite;
+  const item = cacheWriteDisplayItem;
   if (!item) return null;
   return (
     <span className="text-xs text-muted-foreground">
       × {formatPricePerM(item.pricePerM)} ={' '}
       <span className="text-blue-400">{formatCost(item.cost)}</span>
     </span>
   );
 })()}
@@
-  const cache5m = costBreakdown?.items.find((i) => i.label === 'Cache Write (5m)');
-  const cache1h = costBreakdown?.items.find((i) => i.label === 'Cache Write (1h)');
+  const cache5m = cache5mItem;
+  const cache1h = cache1hItem;
   const parts: string[] = [];
   if (cache5m) parts.push(`5m: ${formatCost(cache5m.cost)}`);
   if (cache1h) parts.push(`1h: ${formatCost(cache1h.cost)}`);
   return parts.length > 0 ? parts.join(' | ') : null;
 })()}
```

Also applies to: 364-446
internal/handler/admin.go (1)
703-714: providerId parse failures are silently ignored, inconsistent with the count endpoint

In the cursor endpoint, a providerId that fails to parse is silently ignored (no filter is created), while handleProxyRequestsCount (lines 736-740) returns a 400 error. This inconsistency can confuse users. Unify the behavior: either return 400 in both places or silently ignore in both.

♻️ Suggestion: return 400 in both places

```diff
 	// Build filter conditions
 	var filter *repository.ProxyRequestFilter
 	if p := r.URL.Query().Get("providerId"); p != "" {
-		if providerID, err := strconv.ParseUint(p, 10, 64); err == nil {
-			filter = &repository.ProxyRequestFilter{
-				ProviderID: &providerID,
-			}
+		providerID, err := strconv.ParseUint(p, 10, 64)
+		if err != nil {
+			writeJSON(w, http.StatusBadRequest, map[string]string{"error": "invalid providerId"})
+			return
 		}
+		filter = &repository.ProxyRequestFilter{
+			ProviderID: &providerID,
+		}
 	}
```

web/src/pages/requests/detail/RequestDetailPanel.tsx (1)
23-29: formatCost may display very small amounts inaccurately

When nanoUSD is very small (e.g. 1-999), Math.floor(nanoUSD / 1000) yields 0, so the value renders as $0 rather than something more precise. Also, .replace(/\.?0+$/, '') turns $0.000000 into $0, which may lead users to believe there was no cost at all. If higher precision is needed for tiny amounts, keep at least one significant digit or use scientific notation.
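A quick Go rendering of the flooring behavior shows the issue; the helper mirrors the TS logic described above (names and structure are ours, for illustration only):

```go
package main

import (
	"fmt"
	"strings"
)

// formatCost mirrors the frontend helper: floor nanoUSD to microUSD
// precision, convert to USD, then trim trailing zeros.
func formatCost(nanoUSD uint64) string {
	usd := float64(nanoUSD/1000) / 1_000_000
	s := strings.TrimRight(strings.TrimRight(fmt.Sprintf("%.6f", usd), "0"), ".")
	if s == "" {
		s = "0"
	}
	return "$" + s
}

func main() {
	fmt.Println(formatCost(1_500_000)) // $0.0015 — fine above 1000 nanoUSD
	fmt.Println(formatCost(999))       // $0 — sub-microUSD amounts vanish
}
```

Anything below 1000 nanoUSD is floored to zero before the USD conversion, which is exactly the "looks free" symptom the comment describes.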
internal/service/admin.go (2)
774-774: Unused affectedRequestIDs variable

The affectedRequestIDs map is populated but never read. If it is not needed, remove it to avoid the unnecessary allocation.

♻️ Suggestion: remove the unused variable

```diff
 	processedCount := 0
 	const batchSize = 100
-	affectedRequestIDs := make(map[uint64]struct{})

 	// 2. Stream through attempts, process and update each batch immediately
 	err = s.attemptRepo.StreamForCostCalc(batchSize, func(batch []*domain.AttemptCostData) error {
 		attemptUpdates := make(map[uint64]uint64, len(batch))
 		for _, attempt := range batch {
 			// ... calculation logic ...
-			// Track affected request IDs
-			affectedRequestIDs[attempt.ProxyRequestID] = struct{}{}
-
 			// Track if attempt needs update
```
827-828: The 50ms delay may hurt performance on large datasets

Waiting 50ms after every batch of 100 records means roughly 8 minutes of pure waiting for 1 million records (10,000 batches × 50ms ≈ 500s).

Consider:

- reducing the delay to 10-20ms
- using a non-blocking WebSocket send
- or sending a progress update every N batches instead of every batch
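The non-blocking variant can be sketched with a `select`/`default` send (a generic sketch, not the project's actual broadcaster API): a slow consumer simply drops the update instead of stalling the batch loop, removing the need for a fixed per-batch sleep.

```go
package main

import "fmt"

// reportProgress does a non-blocking send on the progress channel: if the
// consumer has not drained the buffer yet, the update is dropped rather
// than blocking the batch loop.
func reportProgress(ch chan int, current int) bool {
	select {
	case ch <- current:
		return true
	default:
		return false // consumer busy; skip this update
	}
}

func main() {
	ch := make(chan int, 1) // buffer of 1: keep only the latest update
	fmt.Println(reportProgress(ch, 100)) // buffered slot free → sent
	fmt.Println(reportProgress(ch, 200)) // buffer full → dropped
}
```

Dropping intermediate progress events is usually acceptable for a UI progress bar, since only the latest value matters.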
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (40)

- cmd/maxx/main.go
- internal/core/database.go
- internal/domain/model.go
- internal/executor/executor.go
- internal/handler/admin.go
- internal/pricing/calculator.go
- internal/pricing/tiered.go
- internal/repository/interfaces.go
- internal/repository/sqlite/migrations.go
- internal/repository/sqlite/models.go
- internal/repository/sqlite/proxy_request.go
- internal/repository/sqlite/proxy_upstream_attempt.go
- internal/repository/sqlite/usage_stats.go
- internal/service/admin.go
- web/src/components/provider-details-dialog.tsx
- web/src/components/routes/ClientTypeRoutesContent.tsx
- web/src/components/ui/index.ts
- web/src/components/ui/progress.tsx
- web/src/hooks/queries/index.ts
- web/src/hooks/queries/use-pricing.ts
- web/src/hooks/queries/use-requests.ts
- web/src/hooks/queries/use-usage-stats.ts
- web/src/index.css
- web/src/lib/transport/http-transport.ts
- web/src/lib/transport/index.ts
- web/src/lib/transport/interface.ts
- web/src/lib/transport/types.ts
- web/src/locales/en.json
- web/src/locales/zh.json
- web/src/pages/client-routes/components/provider-row.tsx
- web/src/pages/overview.tsx
- web/src/pages/providers/components/provider-row.tsx
- web/src/pages/providers/index.tsx
- web/src/pages/requests/detail.tsx
- web/src/pages/requests/detail/RequestDetailPanel.tsx
- web/src/pages/requests/detail/RequestDetailView.tsx
- web/src/pages/requests/detail/RequestHeader.tsx
- web/src/pages/requests/index.tsx
- web/src/pages/routes/form.tsx
- web/src/pages/stats/index.tsx
🧰 Additional context used
🧬 Code graph analysis (22)
internal/pricing/calculator.go (1)
internal/pricing/tiered.go (2)
CalculateTieredCost(28-50)CalculateLinearCost(56-58)
web/src/components/ui/progress.tsx (2)
web/src/components/ui/index.ts (1)
Progress(63-63)web/src/lib/utils.ts (1)
cn(8-10)
web/src/pages/providers/index.tsx (1)
web/src/pages/providers/types.ts (1)
ProviderTypeKey(15-15)
internal/executor/executor.go (4)
internal/domain/model.go (1)
ResponseModel(613-625)internal/repository/sqlite/models.go (2)
ResponseModel(302-308)ResponseModel(310-310)web/src/lib/transport/types.ts (1)
ResponseModel(615-621)internal/pricing/calculator.go (1)
GlobalCalculator(23-28)
internal/repository/sqlite/proxy_request.go (3)
internal/repository/interfaces.go (2)
ProxyRequestRepository(67-101)ProxyRequestFilter(63-65)internal/domain/model.go (2)
ProxyRequest(185-247)Progress(780-786)internal/repository/sqlite/models.go (2)
ProxyRequest(182-212)ProxyRequest(214-214)
web/src/pages/requests/detail.tsx (4)
web/src/lib/transport/index.ts (1)
useTransport(103-103)web/src/lib/query-client.ts (1)
queryClient(3-14)web/src/hooks/queries/use-requests.ts (1)
requestKeys(16-23)web/src/pages/requests/detail/RequestHeader.tsx (1)
RequestHeader(35-161)
web/src/hooks/queries/use-usage-stats.ts (3)
web/src/hooks/queries/index.ts (2)
useRecalculateCosts(115-115)usageStatsKeys(111-111)web/src/lib/query-client.ts (1)
queryClient(3-14)web/src/lib/transport/index.ts (1)
getTransport(95-95)
internal/repository/sqlite/migrations.go (2)
internal/version/version.go (1)
Version(6-6)internal/repository/sqlite/db.go (1)
DB(17-20)
web/src/pages/routes/form.tsx (3)
web/src/pages/providers/types.ts (1)
ProviderTypeKey(15-15)web/src/lib/transport/index.ts (1)
Provider(9-9)web/src/lib/transport/types.ts (1)
Provider(43-53)
web/src/pages/requests/detail/RequestDetailView.tsx (2)
web/src/lib/transport/types.ts (1)
ProxyRequest(171-205)web/src/pages/requests/detail/RequestDetailPanel.tsx (1)
CostBreakdown(40-45)
web/src/hooks/queries/use-requests.ts (3)
web/src/hooks/queries/index.ts (1)
useProxyRequestsCount(75-75)web/src/lib/transport/index.ts (4)
getTransport(95-95)CursorPaginationParams(35-35)CursorPaginationResult(36-36)ProxyRequest(26-26)web/src/lib/transport/types.ts (3)
CursorPaginationParams(251-259)CursorPaginationResult(262-269)ProxyRequest(171-205)
web/src/pages/stats/index.tsx (3)
web/src/lib/transport/types.ts (3)
StatsGranularity(530-530)RecalculateCostsProgress(588-594)RecalculateStatsProgress(597-603)web/src/hooks/queries/use-usage-stats.ts (2)
useRecalculateUsageStats(145-155)useRecalculateCosts(160-170)web/src/lib/utils.ts (1)
cn(8-10)
web/src/lib/transport/http-transport.ts (1)
web/src/lib/transport/types.ts (3)
RecalculateCostsResult(580-585)RecalculateRequestCostResult(606-612)PriceTable(809-812)
web/src/pages/requests/detail/RequestDetailPanel.tsx (2)
web/src/lib/transport/types.ts (1)
ModelPricing(793-806)web/src/hooks/queries/use-pricing.ts (1)
usePricing(16-22)
internal/service/admin.go (5)
internal/event/broadcaster.go (1)
Broadcaster(7-12)internal/repository/interfaces.go (1)
ProxyRequestFilter(63-65)web/src/lib/transport/types.ts (4)
RecalculateCostsResult(580-585)RecalculateCostsProgress(588-594)ResponseModel(615-621)RecalculateRequestCostResult(606-612)internal/domain/model.go (3)
Progress(780-786)AttemptCostData(299-312)ResponseModel(613-625)internal/repository/sqlite/models.go (2)
ResponseModel(302-308)ResponseModel(310-310)
internal/repository/sqlite/proxy_upstream_attempt.go (4)
internal/repository/interfaces.go (1)
ProxyUpstreamAttemptRepository(103-118)internal/domain/model.go (3)
ProxyUpstreamAttempt(249-296)AttemptCostData(299-312)ResponseModel(613-625)internal/repository/sqlite/models.go (4)
ProxyUpstreamAttempt(217-239)ProxyUpstreamAttempt(241-241)ResponseModel(302-308)ResponseModel(310-310)internal/repository/sqlite/db.go (1)
DB(17-20)
web/src/pages/requests/detail/RequestHeader.tsx (1)
web/src/components/ui/tooltip.tsx (3)
Tooltip(56-56)TooltipTrigger(56-56)TooltipContent(56-56)
internal/handler/admin.go (2)
internal/repository/interfaces.go (1)
ProxyRequestFilter(63-65)internal/pricing/default_prices.go (1)
DefaultPriceTable(11-16)
internal/domain/model.go (2)
internal/repository/sqlite/models.go (2)
ResponseModel(302-308)ResponseModel(310-310)web/src/lib/transport/types.ts (1)
ResponseModel(615-621)
web/src/lib/transport/types.ts (2)
web/src/lib/transport/index.ts (7)
StatsGranularity(68-68)RecalculateCostsResult(70-70)RecalculateCostsProgress(71-71)RecalculateStatsProgress(72-72)RecalculateRequestCostResult(69-69)ModelPricing(82-82)PriceTable(83-83)internal/service/admin.go (3)
RecalculateCostsResult(716-721)RecalculateCostsProgress(724-730)RecalculateRequestCostResult(867-873)
web/src/pages/requests/index.tsx (4)
web/src/pages/providers/types.ts (1)
ProviderTypeKey(15-15)web/src/hooks/queries/use-requests.ts (2)
useProxyRequests(26-31)useProxyRequestsCount(34-39)internal/repository/sqlite/models.go (2)
Provider(63-70)Provider(72-72)web/src/lib/transport/types.ts (1)
Provider(43-53)
web/src/components/routes/ClientTypeRoutesContent.tsx (4)
web/src/pages/providers/types.ts (1)
ProviderTypeKey(15-15)internal/repository/sqlite/models.go (2)
Provider(63-70)Provider(72-72)web/src/lib/transport/types.ts (1)
Provider(43-53)web/src/lib/theme.ts (2)
getProviderColor(153-155)ProviderType(9-20)
🔇 Additional comments (87)
internal/executor/executor.go (1)
407-412: LGTM! Billing against the actual response model is more accurate.

This change prefers the model name actually returned by the API (ResponseModel) for cost calculation, falling back to the mapped model (MappedModel) only when it is empty. Billing is thus based on the model actually used rather than the one requested, improving accuracy.

web/src/index.css (1)
32-32: Formatting cleanup, looks good!

Adding a blank line after the chart color variable group (--chart-5) separates the color definitions from the variables that follow (--radius, --sidebar, etc.), improving readability. Both theme blocks (:root and .dark) keep consistent formatting.

Also applies to: 113-113
internal/repository/sqlite/migrations.go (1)
27-35: Confirm the overflow risk of cost × 1000 in SQLite

The migration multiplies by 1000 directly in SQL to convert units. SQLite stores integers as signed 64-bit values (max ≈ 9.2 × 10^18), so a pre-migration cost would need to exceed ~9.2 × 10^15 microUSD (≈ $9.2 billion) before × 1000 overflows. In practice, single-request costs are in the hundreds of thousands of microUSD ($0.50-$1.00), so individual requests are very unlikely to overflow.

Still, consider adding:

- confirmation of the safe maximum for aggregated cost tables (usage_stats), which can accumulate much larger values
- or an explicit defensive check in the migration for extreme cases
web/src/locales/en.json (2)
24-24: New common/request copy looks OK.

Wording is concise and consistent with the style of existing keys.

Also applies to: 112-112

548-556: New stats filter and recalculation copy looks OK.

Time-range and action copy is complete and clear.

Also applies to: 563-563, 601-603
web/src/locales/zh.json (2)
24-24: New common and stats copy looks OK.

Chinese wording is consistent with the existing style.

Also applies to: 547-555, 562-562, 600-602

112-112: New "All Providers" copy for the request list looks OK.

web/src/hooks/queries/use-requests.ts (3)
34-38: Count query now supports filtering by provider — OK.

queryKey and the query function are updated in sync; semantics are clear.

78-123: List-cache update logic with provider filtering is clear.

Handling each query individually and avoiding inserting new items in paginated scenarios is sound.

125-138: Count increment handling for new requests — OK.

Accumulates correctly per provider.

internal/domain/model.go (4)
242-243: Comment updates standardizing the cost unit to nanoUSD — OK.

Effectively avoids unit confusion.

Also applies to: 451-452, 496-497

298-312: The AttemptCostData struct addition is reasonable.

Its fields cover the information needed to recalculate costs.

464-464: New year granularity constant — OK.

Matches the All Time / yearly statistics requirements.

776-786: The Progress struct definition is clear.

Convenient for progress reporting and display of long-running tasks.

web/src/components/ui/progress.tsx (1)
1-78: The Progress component is cleanly encapsulated and easy to use.

Slot and class-name conventions are consistent, making it easy to reuse and extend.

web/src/components/ui/index.ts (2)

26-34: Completed Select exports — OK.

62-75: New Progress/Sheet exports — OK.
136-186: LGTM!

BatchUpdateCosts is well implemented:

- uses a transaction to ensure atomicity
- batches updates with CASE WHEN, outperforming row-by-row updates
- parameterized queries prevent SQL injection
- a batch size of 500 is reasonable
internal/repository/sqlite/usage_stats.go (2)
62-63: LGTM! Year granularity support added correctly

GranularityYear support is handled in both the UTC and timezone-aware versions of the function.

Also applies to: 88-89

1533-1585: LGTM! Progress reporting implementation

ClearAndRecalculateWithProgress is well implemented:

- handles a nil channel correctly
- the progress report structure is clear
- progress feedback covers every stage
web/src/lib/transport/index.ts (1)
69-83: LGTM!

The new type exports are well organized and match the needs of the cost-recalculation and pricing features.

internal/repository/sqlite/models.go (1)

191-191: LGTM!

Adding an index on the EndTime field is a sensible optimization, supporting the end_time-based filtering and sorting in ListCursor.

web/src/pages/providers/index.tsx (1)

56-59: LGTM!

Using localeCompare for alphabetical sorting is the right choice and handles internationalized strings well. This change makes it easier for users to find a provider by name.

web/src/hooks/queries/use-usage-stats.ts (1)
142-170: Nice! The new useRecalculateCosts hook is implemented correctly:

- follows the same pattern as useRecalculateUsageStats
- correctly invalidates queries on success
- comments clearly explain the difference between the two hooks
internal/core/database.go (1)
262-279: New WailsBroadcaster parameter passed consistently

The wailsBroadcaster passed here matches the instance created earlier; the new dependency wiring is clear.

web/src/hooks/queries/index.ts (2)
109-119: New useRecalculateCosts export — OK

The public entry points for the usage-stats hooks are now complete, improving barrel consistency.

147-148: Pricing hook exports are reasonable

The new centralized exports of pricingKeys/usePricing are clear.
web/src/hooks/queries/use-pricing.ts (1)
8-21: The usePricing hook is cleanly implemented

Query keys and caching strategy are clear and match the expectation that the price table changes infrequently.
web/src/pages/routes/form.tsx (2)
40-63: Grouping and sorting logic is clear

Falling back to the custom group and sorting alphabetically is readable and stable.

134-146: Grouped rendering reads well

The optgroup structure is clear; users can quickly pick by type.

web/src/pages/providers/components/provider-row.tsx (1)
23-34: LGTM! The nanoUSD conversion logic is correct.

The conversion is clear: first round down to microUSD precision (Math.floor(nanoUsd / 1000)), then convert to USD (/ 1_000_000). The regex \.?0+$ correctly strips trailing zeros.

internal/pricing/calculator.go (2)
37-59: LGTM! Documentation and return-unit updates are correct.

The docs clearly state that the return unit is nanoUSD, consistent with the CalculateTieredCost and CalculateLinearCost implementations in tiered.go.

121-127: The fallback logic is reasonable.

When there is no 5m/1h breakdown but a total cache-write count exists, defaulting to the 5m price is a sensible conservative choice. The condition ensures the fallback only triggers when necessary.
web/src/pages/requests/detail/RequestHeader.tsx (1)
143-154: LGTM! The tooltip and recalculate button are implemented correctly.

The button is properly disabled while isRecalculating and shows a spinner. Conditional rendering ensures the control only appears when onRecalculateCost is provided.

web/src/components/routes/ClientTypeRoutesContent.tsx (2)
174-182: Defensive handling of the type assertion is correct.

Falling back to the custom group when p.type doesn't match a known type is sound defensive programming. Consider adding a comment explaining this fallback so future maintainers understand the intent.

460-530: LGTM! Grouped UI rendering is implemented correctly.

The rendering logic is clear: iterate PROVIDER_TYPE_ORDER, skip empty groups, and render a labeled card grid per group. The key props are used correctly and the conditional rendering is sound.

web/src/lib/transport/interface.ts (3)
44-51: LGTM! New type imports are correct.

Imports the types needed by the pricing and cost-recalculation features, consistent with the definitions in types.ts.

106-106: LGTM! The signature change is reasonable.

Making providerId optional supports counting all requests or filtering by provider, adding flexibility to the API.

170-171: LGTM! The new API method definitions are complete.

The new recalculateCosts(), recalculateRequestCost(), and getPricing() signatures match the HTTP transport implementation, with explicit return types.

Also applies to: 183-185
web/src/lib/transport/http-transport.ts (4)
279-286: LGTM! getProxyRequestsCount is implemented correctly.

The providerId query parameter is added conditionally, matching the optional-parameter behavior of the interface definition. The String() conversion ensures the parameter type is correct.

531-531: LGTM! New model filter parameter.

Consistent with the implementation pattern of the other filter parameters.

543-555: LGTM! The cost-recalculation API is implemented correctly.

Both methods use POST, appropriate for mutating operations. The endpoint paths match the features described in the PR.

589-594: LGTM! The pricing API is implemented correctly.

Uses a GET request for read-only pricing data; the implementation is concise.
web/src/pages/requests/detail.tsx (3)
1-16: No comments on the import changes.

28-53: The cost-recalculation mutation flow is clear.

The detail query is invalidated immediately after triggering recalculation, so the UI picks up the latest cost promptly.

140-145: Passing the recalculate action through the header is reasonable.

The onRecalculateCost / isRecalculating data flow is clean and explicit.
web/src/pages/requests/index.tsx (9)
1-40: No comments on the import updates.

74-83: Filter state is correctly linked to the query parameters.

138-143: Pagination reset on filter change is correct.

153-160: Conditional rendering in the header is reasonable.

379-383: Token abbreviation formatting stays consistent.

389-407: The nanoUSD display conversion is clearly implemented.

524-530: The rule of preferring the end time for display is clear.

617-620: The duration tooltip is more informative.

666-742: Provider selector grouping and sorting are implemented soundly.

web/src/pages/requests/detail/RequestDetailView.tsx (2)
18-29: The price/unit formatting helpers are clear.
36-55: Props extension and nanoUSD adaptation — OK.

internal/pricing/tiered.go (3)
3-20: Constant and big.Int cache initialization is clear.
22-71: The big.Int calculation path is correct.
73-86: Keeping the compatibility layer and NanoToUSD is reasonable.

internal/repository/sqlite/proxy_request.go (4)
3-12: No comments on the import changes.
126-143: The filtered count implementation is sound.
243-293: Transaction handling for the batch cost update is clear.
295-377: The progress-reporting cost recalculation flow is complete.

internal/repository/interfaces.go (4)
62-65: The filter condition struct definition is clear.
75-101: The ProxyRequestRepository interface extension is reasonable.
107-117: The new ProxyUpstreamAttemptRepository methods are complete.
173-174: Adding the progress variant to the UsageStats interface is reasonable.

internal/handler/admin.go (3)
92-93: LGTM!

The new /admin/pricing endpoint is implemented correctly, follows the existing handler pattern, checks the HTTP method, and returns the price table.

Also applies to: 1429-1439
676-680: LGTM!

The single-request cost-recalculation endpoint /admin/requests/{id}/recalculate-cost is implemented correctly; routing and handler logic are clear.

Also applies to: 782-795
1217-1221: LGTM!

The batch cost-recalculation endpoint /admin/usage-stats/recalculate-costs is implemented correctly and matches the pattern of the existing recalculate endpoint.

Also applies to: 1313-1327
web/src/pages/requests/detail/RequestDetailPanel.tsx (3)
50-89: LGTM!

Using BigInt for cost calculation is the right approach and prevents integer overflow with very large token counts. The calculateLinearCost and calculateTieredCost implementations stay consistent with the backend logic.
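The same overflow-safe shape can be sketched on the Go side with math/big, which is what the backend's big.Int path is about. This is an illustrative sketch, not the project's actual tiered.go code; it assumes prices expressed in nanoUSD per million tokens:

```go
package main

import (
	"fmt"
	"math/big"
)

// linearCostNano computes tokens × pricePerM / 1e6 without intermediate
// overflow, returning nanoUSD. tokens × price can exceed uint64 for very
// large token counts, so the multiply happens in big.Int.
func linearCostNano(tokens uint64, pricePerMNano uint64) *big.Int {
	cost := new(big.Int).SetUint64(tokens)
	cost.Mul(cost, new(big.Int).SetUint64(pricePerMNano))
	return cost.Div(cost, big.NewInt(1_000_000)) // price is per million tokens
}

func main() {
	// 200k input tokens at $3 per M tokens (= 3e9 nanoUSD per M):
	fmt.Println(linearCostNano(200_000, 3_000_000_000)) // 600000000 nanoUSD = $0.60
}
```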
105-120: LGTM!

The model price-matching logic is sound: exact match first, then longest-prefix match on failure. This works well with common model naming conventions (e.g. claude-3-sonnet-20240229).
202-212: The cache-write fallback logic is clear

When there is no 5m/1h breakdown but cacheWriteTokens exists, the 5m price is used as a fallback. The logic is correct and backward compatible.

web/src/lib/transport/types.ts (4)
257-258: LGTM!

Adding the providerId filter parameter to CursorPaginationParams is a reasonable extension, aligned with the backend ProxyRequestFilter struct.

530-530: LGTM!

StatsGranularity gains a 'year' granularity to support All Time queries, consistent with the backend and Stats page changes.

579-612: LGTM!

The new recalculation result and progress type definitions stay consistent with the structs in internal/service/admin.go; type safety is good.

789-812: LGTM!

The ModelPricing and PriceTable interfaces are complete, with field comments clearly documenting units (microUSD/M tokens) and default-value logic, which helps frontend developers.

web/src/pages/stats/index.tsx (6)
104-163: LGTM!

Time-range boundary handling is correct; using -1ms ensures lastWeek/lastMonth etc. exclude the first day of the current period. This fixes the double-counted boundary dates mentioned in the PR description.

164-169: Using year granularity for All Time is reasonable

A 5-year range with year granularity effectively reduces the number of data points and keeps the chart from getting crowded.

412-442: LGTM!

The WebSocket subscription correctly returns an unsubscribe function for cleanup, and the state is cleared 3 seconds after progress completes so the user can see the final message.

285-286: LGTM!

The nanoUSD→USD conversion using a 1_000_000_000 divisor is correct (1 USD = 10^9 nanoUSD).

Also applies to: 839-839

531-554: LGTM!

The two recalculate buttons are independent; all buttons are disabled while either operation is running or progress is showing, preventing concurrent-operation conflicts.

1053-1089: LGTM!

The FilterSection component is well encapsulated; the clear button's show/hide uses opacity and pointer-events-none to avoid layout shift, matching the fix goal in the PR description.

internal/service/admin.go (3)
676-713: LGTM!

RecalculateUsageStats correctly uses a goroutine to listen on the progress channel and broadcast via WebSocket; the channel is properly closed on completion.

875-941: LGTM!

The single-request cost-recalculation logic is clear:

- fetches the request and all associated attempts
- uses the same model-selection logic as the batch recalculation
- sums all attempt costs into the request update
- continues with remaining attempts on failure, allowing partial success

Logging is thorough, which helps troubleshooting.

370-371: LGTM!

GetProxyRequestsCursor and GetProxyRequestsCountWithFilter correctly support the filter parameter and integrate well with the handler-layer filtering.

Also applies to: 398-400
```go
// UpdateCost updates only the cost field of a request
func (r *ProxyRequestRepository) UpdateCost(id uint64, cost uint64) error {
	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Update("cost", cost).Error
}

// AddCost adds a delta to the cost field of a request (can be negative)
func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error {
	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).
		Update("cost", gorm.Expr("cost + ?", delta)).Error
}
```
🧩 Analysis chain
🏁 Scripts executed (repository: awsl-project/maxx):

```shell
# Find ProxyRequest struct definition and cost field type
fd "proxy_request.go" --type f
```
Length of output: 104

```shell
# Search for ProxyRequest struct definition and cost field
rg -A 20 "type ProxyRequest struct" --type go
```
Length of output: 2852

```shell
# Look for tests related to AddCost
fd "proxy_request" --type f | xargs rg "AddCost" -l
```
Length of output: 104

```shell
# Check if there are any validations for delta before calling AddCost
rg "AddCost" --type go -B 3 -A 3
```
Length of output: 1529

```shell
# Get full ProxyRequest struct from models.go to find cost field
cat -n internal/repository/sqlite/models.go | grep -A 50 "type ProxyRequest struct" | head -60
```
Length of output: 2314

```shell
# Search specifically for cost field in the ProxyRequest struct
rg "^\s+[Cc]ost" internal/repository/sqlite/models.go
```
Length of output: 149

```shell
# Check where AddCost is being called to understand usage patterns
rg "\.AddCost\(" --type go -B 2 -A 2
```
Length of output: 43

```shell
# Search more broadly for AddCost - might be called through interface
rg "AddCost" --type go
```
Length of output: 475

```shell
# Check if there are any test files that might use AddCost
fd "test" --type f | xargs rg "AddCost" 2>/dev/null || echo "No test files found with AddCost"
```
Length of output: 93

```shell
# Look at the interface definition again to see the full context
rg -B 5 -A 15 "AddCost" internal/repository/interfaces.go
```
Length of output: 1279

```shell
# Check if there are any usages of the interface through reflection or dynamic calls
rg "ProxyRequestRepository" --type go | grep -i "interface\|reflect\|cast"
```
Length of output: 134

```shell
# Look for any comment or TODO that might indicate why this method exists
rg -B 10 "AddCost" internal/repository/sqlite/proxy_request.go | head -20
```
Length of output: 477

```shell
# Check git history to see when this was added (if available)
git log -p --all -- "*proxy_request.go" | grep -A 5 -B 5 "AddCost" | head -30
```
Length of output: 43
AddCost accepts a negative delta while cost is uint64, creating a data-corruption risk.

Although this method is currently not called anywhere in the codebase, it is a public interface method. When delta is negative and its absolute value exceeds the current cost, the SQL expression `cost + ?` produces a negative value. SQLite will write that negative value into the uint64 column, corrupting the value on read. Either fix the method or remove it.

Suggested fix (if the method is kept)

```diff
 func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error {
 	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).
-		Update("cost", gorm.Expr("cost + ?", delta)).Error
+		Update("cost", gorm.Expr("CASE WHEN cost + ? < 0 THEN 0 ELSE cost + ? END", delta, delta)).Error
 }
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// UpdateCost updates only the cost field of a request
func (r *ProxyRequestRepository) UpdateCost(id uint64, cost uint64) error {
	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Update("cost", cost).Error
}

// AddCost adds a delta to the cost field of a request (can be negative)
func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error {
	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).
		Update("cost", gorm.Expr("CASE WHEN cost + ? < 0 THEN 0 ELSE cost + ? END", delta, delta)).Error
}
```
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/proxy_request.go` around lines 232 - 241, The
AddCost method can write a negative value into the unsigned cost column when
delta is negative and its absolute value exceeds current cost; update AddCost on
ProxyRequestRepository to guarantee non-negative results by replacing
gorm.Expr("cost + ?", delta) with a guarded SQL expression (e.g.
gorm.Expr("MAX(cost + ?, 0)", delta) or gorm.Expr("CASE WHEN cost + ? < 0 THEN 0
ELSE cost + ? END", delta, delta)) or by loading the current cost and performing
the arithmetic/validation in a transaction before writing; ensure the method
clamps the resulting cost to >= 0 (and returns an error on overflow if desired)
so the uint64 column cannot receive negative data.
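To make the intent of the guarded expression concrete, here is a small standalone sketch in plain Go. The function name `addCostClamped` is illustrative (not from the codebase); it mirrors the `CASE WHEN cost + ? < 0 THEN 0 ELSE cost + ? END` semantics suggested above.

```go
package main

import "fmt"

// addCostClamped mirrors the guarded SQL expression suggested above:
// CASE WHEN cost + delta < 0 THEN 0 ELSE cost + delta END.
// Note: delta = math.MinInt64 would overflow the negation; a production
// version should guard that edge case as well.
func addCostClamped(cost uint64, delta int64) uint64 {
	if delta < 0 {
		dec := uint64(-delta)
		if dec >= cost {
			return 0
		}
		return cost - dec
	}
	return cost + uint64(delta)
}

func main() {
	fmt.Println(addCostClamped(500, -1000)) // 0 — clamped instead of wrapping
	fmt.Println(addCostClamped(500, 250))   // 750
}
```

Doing the clamp inside the SQL expression keeps the operation a single atomic UPDATE, whereas a read-modify-write in Go would need a transaction to avoid races.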
```diff
 // 格式化成本 (纳美元 → 美元,向下取整到 6 位)
 function formatCost(nanoUsd: number): string {
   // 向下取整到 6 位小数 (microUSD 精度)
   const usd = Math.floor(nanoUsd / 1000) / 1_000_000;
   if (usd >= 1) {
     return `$${usd.toFixed(2)}`;
   }
   if (usd >= 0.01) {
     return `$${usd.toFixed(3)}`;
   }
-  return `$${usd.toFixed(4)}`;
+  return `$${usd.toFixed(6).replace(/\.?0+$/, '')}`;
 }
```
🧩 Analysis chain (repository: awsl-project/maxx)

🏁 Script executed:

```shell
# Search for ProviderStats interface definition and totalCost type
rg -n --type=ts --type=tsx 'interface ProviderStats|totalCost' | head -30
```

Length of output: 88

🏁 Script executed:

```shell
# Look for formatCost function calls to see what value is passed
rg -n --type=ts --type=tsx 'formatCost\(' -A2 -B2
```

Length of output: 88

🏁 Script executed:

```shell
# Search for any type definitions related to cost fields
rg -n --type=ts --type=tsx '(nanoUsd|microUsd|totalCost):' -B1 -A1 | head -40
```

Length of output: 88

🏁 Script executed:

```shell
# Check the file to see how formatCost is being imported and used
cat -n web/src/components/provider-details-dialog.tsx | head -150
```

Length of output: 5479

🏁 Script executed:

```shell
# Find ProviderStats interface definition
fd -t f "types" --search-path lib/transport | head -20
```

Length of output: 161

🏁 Script executed:

```shell
# Search in the file for where formatCost is called
grep -n "formatCost" web/src/components/provider-details-dialog.tsx
```

Length of output: 171

🏁 Script executed:

```shell
# Get the rest of the provider-details-dialog file to see totalCost usage
cat -n web/src/components/provider-details-dialog.tsx | tail -n +150 | head -200
```

Length of output: 10105

🏁 Script executed:

```shell
# Find the transport types file
fd -t f "types" | grep transport
```

Length of output: 91

🏁 Script executed:

```shell
# Alternative: search for ProviderStats interface definition
rg -n "interface ProviderStats" --type=ts
```

Length of output: 128

🏁 Script executed:

```shell
# Also search for any type that contains totalCost
rg -n "totalCost" --type=ts --type=tsx -B2 -A2
```

Length of output: 88

🏁 Script executed:

```shell
# Read the ProviderStats interface definition
cat -n web/src/lib/transport/types.ts | sed -n '310,360p'
```

Length of output: 1528

🏁 Script executed:

```shell
# Also search for totalCost in the types file to see how it's defined
grep -n "totalCost" web/src/lib/transport/types.ts -B2 -A2
```

Length of output: 271
formatCost and totalCost disagree on units, so displayed costs are 1000× too small.

ProviderStats.totalCost is documented as microUSD, but the formatCost function expects nanoUSD. The existing code divides the input by 1000 (line 115), which treats microUSD as if it were nanoUSD, shrinking the displayed amount by a factor of 1000.

For example: totalCost = 1,000,000 (microUSD, i.e. $1.00) is displayed as $0.001.

Change the formatCost parameter to match the microUSD unit (divide by 1_000_000 directly instead of dividing by 1000 and then by 1_000_000), or convert totalCost to nanoUSD before the call.
🤖 Prompt for AI Agents
In `@web/src/components/provider-details-dialog.tsx` around lines 112 - 123, The
formatCost function is treating its input as nanoUSD but ProviderStats.totalCost
is microUSD, causing values to be 1000x too small; update formatCost (and its
parameter name if helpful) to accept microUSD and convert by dividing by
1_000_000 (i.e., const usd = Math.floor(microUsd) / 1_000_000 or
Math.floor(microUsd / 1) / 1_000_000) instead of dividing by 1000 first,
preserve the existing toFixed logic, and update any callers or the parameter
name to reflect microUSD to avoid future confusion (reference: function
formatCost and ProviderStats.totalCost).
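The 1000× shrink described above is easy to reproduce outside the component. A minimal standalone sketch (porting the TypeScript helper from the diff to Go for illustration; the values are made up):

```go
package main

import (
	"fmt"
	"math"
)

// formatCost mirrors the TypeScript helper from the diff above: the input is
// nanoUSD, floored to microUSD precision before display.
func formatCost(nanoUsd float64) string {
	usd := math.Floor(nanoUsd/1000) / 1_000_000
	switch {
	case usd >= 1:
		return fmt.Sprintf("$%.2f", usd)
	case usd >= 0.01:
		return fmt.Sprintf("$%.3f", usd)
	default:
		return fmt.Sprintf("$%.4f", usd)
	}
}

func main() {
	// $1.00 expressed in nanoUSD renders correctly:
	fmt.Println(formatCost(1_000_000_000)) // $1.00
	// The same $1.00 passed as microUSD (as ProviderStats.totalCost would be)
	// shrinks by a factor of 1000:
	fmt.Println(formatCost(1_000_000)) // $0.0010
}
```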
```typescript
type ProviderTypeKey = 'antigravity' | 'kiro' | 'custom';

const PROVIDER_TYPE_ORDER: ProviderTypeKey[] = ['antigravity', 'kiro', 'custom'];

const PROVIDER_TYPE_LABELS: Record<ProviderTypeKey, string> = {
  antigravity: 'Antigravity',
  kiro: 'Kiro',
  custom: 'Custom',
};
```
ProviderTypeKey: reuse the shared definition and localize the label.

The project already defines ProviderTypeKey (web/src/pages/providers/types.ts); redeclaring it locally invites drift. In addition, the "Custom" label in PROVIDER_TYPE_LABELS should go through i18n.

♻️ Suggested change

```diff
 import type { ProxyRequest, ProxyRequestStatus, Provider } from '@/lib/transport';
+import type { ProviderTypeKey } from '@/pages/providers/types';

-type ProviderTypeKey = 'antigravity' | 'kiro' | 'custom';
```

🤖 Prompt for AI Agents
In `@web/src/pages/requests/index.tsx` around lines 44 - 52, The local
re-declaration of ProviderTypeKey and related constants can drift from the
canonical definition; import and reuse the existing ProviderTypeKey type from
web/src/pages/providers/types.ts instead of re-declaring it, update
PROVIDER_TYPE_ORDER to be typed with the imported ProviderTypeKey, and replace
the hard-coded "Custom" label in PROVIDER_TYPE_LABELS with an i18n string (use
your i18n hook/translate function) while keeping the other labels consistent.
- Add year granularity support for all-time-range queries
- Fix time-range boundary issues so lastWeek/lastMonth no longer double-count boundary days
- Use the system-configured timezone instead of UTC for time-bucket calculation
- Show a filter-condition summary (precise to the time)
- Fix the model parameter not being passed to the API
- Remove the sidebar collapse feature; keep it always visible
- Chart colors use theme variables; Cost is shown first in the legend and tooltip
- Fix the layout shift caused by the Filter clear button
- Provider grouping shows type-name labels
- Cost accounting upgraded from microUSD to nanoUSD, a 1000× precision increase
- Use big.Int for calculation to prevent integer overflow with large token counts
- Add CacheCreationCount fallback support
- Keep the old functions as a compatibility layer (marked Deprecated)
Force-pushed from b8b91a5 to 28b61c4 (Compare)
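The commit message above claims big.Int prevents overflow with large token counts. A standalone sketch of that claim, assuming the pricing convention that prices are microUSD per 1M tokens (the function name and values are illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"math/big"
)

// linearCostNano sketches the assumed formula: with price in microUSD per
// 1M tokens, cost(nanoUSD) = tokens * priceMicro * 1000 / 1_000_000.
// The intermediate product can exceed uint64, which is why big.Int matters.
func linearCostNano(tokens, priceMicro uint64) *big.Int {
	n := new(big.Int).SetUint64(tokens)
	n.Mul(n, new(big.Int).SetUint64(priceMicro)) // may exceed uint64 here
	n.Mul(n, big.NewInt(1000))                   // microUSD -> nanoUSD scale
	n.Div(n, big.NewInt(1_000_000))              // price is per 1M tokens
	return n
}

func main() {
	// Made-up extreme inputs: the raw product 2e11 * 1.5e8 = 3e19 already
	// exceeds the uint64 maximum (~1.84e19), so uint64 arithmetic would wrap.
	cost := linearCostNano(200_000_000_000, 150_000_000)
	fmt.Println(cost.String(), cost.IsUint64()) // 30000000000000000 true
}
```

The final result still fits in uint64 (`IsUint64` reports true); only the intermediate product overflows, which is exactly the case big.Int guards against.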
In the defer block, when an attempt still in IN_PROGRESS state needed to be marked failed/cancelled, EndTime and Duration were previously not set, leading to inaccurate statistics.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/repository/sqlite/proxy_request.go (1)
83-105: Cursor pagination still filters by id, but ordering now prefers end_time, which can skip or repeat rows.

The cursor semantics of `id < before` / `id > after` are inconsistent with `end_time` as the primary sort key, so pagination may be unstable. Either keep ordering by `id`, or upgrade to a composite `(end_time, id)` cursor and update callers accordingly.

🔧 Minimal fix (keeps the id-cursor semantics)

```diff
-	if err := query.Order("CASE WHEN end_time = 0 THEN 0 ELSE 1 END, end_time DESC, id DESC").Limit(limit).Find(&models).Error; err != nil {
+	if err := query.Order("id DESC").Limit(limit).Find(&models).Error; err != nil {
 		return nil, err
 	}
```
🤖 Fix all issues with AI agents
In `@internal/handler/admin.go`:
- Around line 703-714: The providerId parsing currently swallows
strconv.ParseUint errors in the block that builds a
*repository.ProxyRequestFilter before calling h.svc.GetProxyRequestsCursor;
update this to mirror handleProxyRequestsCount's behavior by validating the
providerId query param and returning a 400 Bad Request when strconv.ParseUint
fails (include a clear error message), otherwise set the ProviderID on
repository.ProxyRequestFilter and proceed to call h.svc.GetProxyRequestsCursor;
reference the same symbols providerId, strconv.ParseUint,
repository.ProxyRequestFilter, and h.svc.GetProxyRequestsCursor when making the
change.
In `@internal/pricing/calculator.go`:
- Around line 71-79: The code currently falls back to treating
CacheCreationCount as 5m when Cache5mCreationCount and Cache1hCreationCount are
zero, which can bias pricing; inspect and confirm the semantic contract of
CacheCreationCount (should it represent Cache5mCreationCount +
Cache1hCreationCount) and modify the logic around
CalculateTieredCost/CalculateLinearCost to validate the three counters
(CacheCreationCount, Cache5mCreationCount, Cache1hCreationCount) before choosing
a price path: if the fine-grained fields are missing or inconsistent, emit a
warning/log (using the existing logger), apply an explicit default policy (e.g.,
treat missing fine-grained data as unknown and use conservative pricing or split
proportionally), and add handling to disambiguate mixed-ttl data rather than
silently defaulting to 5m pricing (update the branch that calls
CalculateTieredCost/CalculateLinearCost and any helpers that compute
CacheCreationCount to implement this validation/default strategy).
In `@internal/pricing/tiered.go`:
- Around line 28-50: CalculateTieredCost currently converts the big.Int result t
to uint64 and adds it to baseCostNano which can silently truncate or overflow;
fix by doing the final addition at big.Int level and validating the result fits
in uint64: create a big.Int sum initialized with baseCostNano, sum.Add(sum, t),
then use sum.IsUint64() to detect overflow and handle it (e.g., return
math.MaxUint64 or another defined cap) instead of directly using t.Uint64();
also avoid any direct t.Uint64() use without checking IsUint64() (references:
function CalculateTieredCost, variables baseCostNano and t, and the big.Int
intermediates).
In `@internal/repository/sqlite/migrations.go`:
- Around line 40-52: The Down migration's UPDATEs in the migration's Down func
(which runs "UPDATE proxy_requests SET cost = cost / 1000", "UPDATE
proxy_upstream_attempts SET cost = cost / 1000", and "UPDATE usage_stats SET
cost = cost / 1000") will zero-out values <1000 due to SQLite integer division;
change each UPDATE to only touch rows with cost >= 1000 (e.g., add a WHERE cost
>= 1000 clause) so small-cost rows are left untouched and unnecessary writes are
avoided while keeping the Down migration symmetric to Up.
In `@internal/repository/sqlite/proxy_upstream_attempt.go`:
- Around line 142-179: The batchSize of 500 in BatchUpdateCosts causes SQLite to
exceed its default 999 bound because each id uses three parameters; reduce
batchSize to 300 (or compute batchSize = floor((sqliteParamLimit-1)/3)) in the
function where batchSize is declared so the constructed args length stays within
limits; apply the same change to the homologous method in proxy_request.go (look
for BatchUpdateCosts or the code building the CASE WHEN and WHERE IN clauses) to
ensure both implementations use the safe batch size.
In `@internal/repository/sqlite/usage_stats.go`:
- Around line 1409-1413: The RollUpAllWithProgress logic in usage_stats.go sets
loc only for day/week/month but omits year; update the conditional that assigns
loc (the var loc *time.Location block) to include domain.GranularityYear so the
repository method RollUpAllWithProgress uses r.getConfiguredTimezone() for year
granularity as well, ensuring consistent timezone handling across
GranularityDay/Week/Month/Year.
In `@internal/service/admin.go`:
- Line 774: Remove the unused variable affectedRequestIDs and any code that
builds or populates it (the map created as affectedRequestIDs :=
make(map[uint64]struct{}) and subsequent inserts) inside the same function in
internal/service/admin.go; if the intent was to track request IDs, either
replace uses with the existing logic that operates on request IDs or delete the
map and its population code to eliminate the dead variable and related
operations (search for the symbol affectedRequestIDs to locate all occurrences
to remove).
In `@web/src/index.css`:
- Around line 27-31: You removed semantic chart tokens and replaced them with
new --chart-N oklch variables, but the high L values in --chart-1, --chart-4,
and --chart-5 can produce insufficient contrast when used as text in
overview.tsx and stats/index.tsx; locate usages of these CSS variables in
overview.tsx and stats/index.tsx and either map those components to a darker
semantic token (or fallback to an existing --color-text/--color-foreground
variable) or adjust the --chart-1/4/5 values to lower L (increase contrast) for
light mode; ensure the change keeps the new simplified token set while verifying
text contrast in stats/index.tsx and updating any CSS mapping logic that
previously relied on chart-success/chart-error/etc. to use the new variables
safely.
In `@web/src/lib/transport/types.ts`:
- Around line 588-594: The front-end RecalculateCostsProgress interface's phase
union (in web/src/lib/transport/types.ts) is missing the 'aggregating_stats'
value present in the backend; update the phase type in the
RecalculateCostsProgress interface to include 'aggregating_stats' (i.e., change
the union to 'calculating' | 'aggregating_stats' | 'updating_attempts' |
'updating_requests' | 'completed') so it matches the backend representation and
prevents mismatches when that phase is emitted.
In `@web/src/pages/requests/detail/RequestDetailView.tsx`:
- Around line 412-427: The displayed Cache Write detail currently picks only the
first matching item (cache5m/cache1h/cacheWrite) causing incorrect per-item
display when multiple intervals exist; update the JSX logic in RequestDetailView
(where request.cacheWriteCount, cache5m/cache1h/cacheWrite and
formatPricePerM/formatCost are referenced) to instead filter costBreakdown.items
for all entries whose label startsWith 'Cache Write', sum their cost into a
totalCost and compute a combined pricePerM (e.g., sum of pricePerM or a suitable
aggregated rate), then render the count × aggregated pricePerM = formatted
totalCost (and optionally show per-interval breakdown) so the displayed cost
reflects the aggregate; apply the same fix to the analogous block mentioned
(lines ~429-445).
♻️ Duplicate comments (3)
web/src/pages/overview.tsx (1)
81-91: The large-cost precision issue was raised in an earlier review. That comment noted possible precision loss once the accumulated cost exceeds JavaScript's safe-integer limit (about $9M). The current implementation is consistent with provider-row.tsx and should be safe for everyday values such as todayCost, but allTimeStats.totalCost may need BigInt handling after long-term accumulation.

web/src/pages/requests/index.tsx (1)

44-52: ProviderTypeKey reuse and label localization: still recommended, as described earlier.

internal/repository/sqlite/proxy_request.go (1)

237-240: The risk of AddCost writing a negative delta into a uint64 column is the same as noted in the earlier comment.
🧹 Nitpick comments (8)

web/src/pages/requests/detail/RequestHeader.tsx (1)

141-154: Add an accessibility label to the icon-only trigger. The refresh button is a bare icon; add an aria-label (or use a Button as the trigger) to improve screen-reader and click semantics.

♿ Optional fix

```diff
-  {onRecalculateCost && (
-    <Tooltip>
-      <TooltipTrigger
-        className="inline-flex items-center justify-center h-5 w-5 rounded-md text-muted-foreground hover:text-foreground hover:bg-accent disabled:opacity-50"
-        onClick={onRecalculateCost}
-        disabled={isRecalculating}
-      >
+  {onRecalculateCost && (
+    <Tooltip>
+      <TooltipTrigger
+        aria-label="Recalculate cost"
+        type="button"
+        className="inline-flex items-center justify-center h-5 w-5 rounded-md text-muted-foreground hover:text-foreground hover:bg-accent disabled:opacity-50"
+        onClick={onRecalculateCost}
+        disabled={isRecalculating}
+      >
```

web/src/pages/requests/detail.tsx (1)

1-5: Optional: feedback on recalculation failure and protection against an invalid id. Currently the code calls recalculateRequestCost(Number(id)) directly and only refreshes the detail in onSuccess; short-circuit when the id is invalid, and notify/log in onError so the user is not left without feedback.

🧯 Optional fix

```diff
-  const recalculateMutation = useMutation({
-    mutationFn: () => transport.recalculateRequestCost(Number(id)),
+  const requestId = Number(id);
+  const recalculateMutation = useMutation({
+    mutationFn: () => {
+      if (!Number.isFinite(requestId) || requestId <= 0) {
+        return Promise.reject(new Error('Invalid request id'));
+      }
+      return transport.recalculateRequestCost(requestId);
+    },
     onSuccess: () => {
-      queryClient.invalidateQueries({ queryKey: requestKeys.detail(Number(id)) });
+      queryClient.invalidateQueries({ queryKey: requestKeys.detail(requestId) });
     },
+    onError: (err) => {
+      console.error('Recalculate cost failed:', err);
+    },
   });
```

Also applies to: 28-52

internal/repository/sqlite/proxy_upstream_attempt.go (2)

63-69: Validate batchSize to avoid an accidental full-table scan.

♻️ Suggested fix

```diff
 func (r *ProxyUpstreamAttemptRepository) StreamForCostCalc(batchSize int, callback func(batch []*domain.AttemptCostData) error) error {
+	if batchSize <= 0 {
+		return fmt.Errorf("batchSize must be > 0")
+	}
 	var lastID uint64 = 0
```

132-134: UpdateCost could also update updated_at for consistency.

♻️ Suggested fix

```diff
 func (r *ProxyUpstreamAttemptRepository) UpdateCost(id uint64, cost uint64) error {
-	return r.db.gorm.Model(&ProxyUpstreamAttempt{}).Where("id = ?", id).Update("cost", cost).Error
+	return r.db.gorm.Model(&ProxyUpstreamAttempt{}).Where("id = ?", id).
+		Updates(map[string]any{
+			"cost":       cost,
+			"updated_at": time.Now().UnixMilli(),
+		}).Error
 }
```

web/src/pages/requests/detail/RequestDetailPanel.tsx (2)

105-120: The model-prefix matching logic is sound. Exact match first, then longest-prefix fallback, is a reasonable strategy. Consider documenting the business scenario for this strategy in a comment (e.g. claude-3-sonnet matching claude-3).

52-89: BigInt is used correctly and effectively prevents integer overflow in intermediate calculations. The calculateLinearCost and calculateTieredCost implementations match the backend logic. The precision risk of Number(result) is negligible in practice: the safe-integer range is only exceeded when tokens * priceMicro > 9×10^18, which would require an unrealistically large combination of token count and price. A typical request (say 100M tokens × 10M microUSD) is far below that threshold, yielding roughly 10^12 nanoUSD, well within the safe range. To harden the code, consider adding input validation in calculateCostBreakdown or at its call sites to make the supported token-count and price ranges explicit.

internal/service/admin.go (1)

827-828: The 50ms delay may significantly slow recalculation over large datasets. Sleeping 50ms after every batch of 100 records means roughly 500 seconds (8+ minutes) of pure waiting for 1M records. Consider a more efficient progress-update strategy, e.g. based on a time interval rather than per batch.

Suggested optimization

```diff
+	lastProgressTime := time.Now()
+	const progressInterval = 200 * time.Millisecond
+
 	err = s.attemptRepo.StreamForCostCalc(batchSize, func(batch []*domain.AttemptCostData) error {
 		// ... processing ...
 		// Broadcast progress
-		broadcastProgress("calculating", processedCount, int(totalCount),
-			fmt.Sprintf("Processed %d/%d attempts", processedCount, totalCount))
-
-		// Small delay to allow UI to update (WebSocket messages need time to be processed)
-		time.Sleep(50 * time.Millisecond)
+		if time.Since(lastProgressTime) >= progressInterval {
+			broadcastProgress("calculating", processedCount, int(totalCount),
+				fmt.Sprintf("Processed %d/%d attempts", processedCount, totalCount))
+			lastProgressTime = time.Now()
+		}
 		return nil
 	})
```

internal/repository/sqlite/usage_stats.go (1)

522-528: Bubble sort is inefficient here. mergeRealtimeMinuteStats sorts the merged data with an O(n²) bubble sort, which may hurt performance with many minute-level entries. Use the standard library's sort.Slice.

Suggested fix

```diff
+import "sort"
+
 // 合并实时数据和历史数据,按时间倒序排列
 merged := append(realtimeStats, filtered...)
-// 按 TimeBucket 倒序排列
-for i := 0; i < len(merged)-1; i++ {
-	for j := i + 1; j < len(merged); j++ {
-		if merged[j].TimeBucket.After(merged[i].TimeBucket) {
-			merged[i], merged[j] = merged[j], merged[i]
-		}
-	}
-}
+// 按 TimeBucket 倒序排列
+sort.Slice(merged, func(i, j int) bool {
+	return merged[i].TimeBucket.After(merged[j].TimeBucket)
+})
 return merged
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
- cmd/maxx/main.go
- internal/core/database.go
- internal/domain/model.go
- internal/executor/executor.go
- internal/handler/admin.go
- internal/pricing/calculator.go
- internal/pricing/tiered.go
- internal/repository/interfaces.go
- internal/repository/sqlite/migrations.go
- internal/repository/sqlite/models.go
- internal/repository/sqlite/proxy_request.go
- internal/repository/sqlite/proxy_upstream_attempt.go
- internal/repository/sqlite/usage_stats.go
- internal/service/admin.go
- web/src/components/provider-details-dialog.tsx
- web/src/components/ui/index.ts
- web/src/components/ui/progress.tsx
- web/src/hooks/queries/index.ts
- web/src/hooks/queries/use-pricing.ts
- web/src/hooks/queries/use-requests.ts
- web/src/hooks/queries/use-usage-stats.ts
- web/src/index.css
- web/src/lib/transport/http-transport.ts
- web/src/lib/transport/index.ts
- web/src/lib/transport/interface.ts
- web/src/lib/transport/types.ts
- web/src/locales/en.json
- web/src/locales/zh.json
- web/src/pages/client-routes/components/provider-row.tsx
- web/src/pages/overview.tsx
- web/src/pages/providers/components/provider-row.tsx
- web/src/pages/providers/index.tsx
- web/src/pages/requests/detail.tsx
- web/src/pages/requests/detail/RequestDetailPanel.tsx
- web/src/pages/requests/detail/RequestDetailView.tsx
- web/src/pages/requests/detail/RequestHeader.tsx
- web/src/pages/requests/index.tsx
- web/src/pages/routes/form.tsx
🚧 Files skipped from review as they are similar to previous changes (12)
- web/src/hooks/queries/use-pricing.ts
- web/src/lib/transport/index.ts
- web/src/components/provider-details-dialog.tsx
- web/src/pages/routes/form.tsx
- web/src/locales/en.json
- cmd/maxx/main.go
- internal/repository/sqlite/models.go
- internal/executor/executor.go
- web/src/components/ui/progress.tsx
- web/src/pages/providers/index.tsx
- web/src/pages/providers/components/provider-row.tsx
- web/src/hooks/queries/index.ts
🧰 Additional context used
🧬 Code graph analysis (13)
web/src/hooks/queries/use-usage-stats.ts (3)
web/src/hooks/queries/index.ts (2)
useRecalculateCosts(115-115)usageStatsKeys(111-111)web/src/lib/query-client.ts (1)
queryClient(3-14)web/src/lib/transport/index.ts (1)
getTransport(95-95)
internal/domain/model.go (2)
internal/repository/sqlite/models.go (2)
ResponseModel(302-308)ResponseModel(310-310)web/src/lib/transport/types.ts (1)
ResponseModel(615-621)
web/src/pages/requests/detail.tsx (3)
web/src/lib/transport/index.ts (1)
useTransport(103-103)web/src/hooks/queries/use-requests.ts (1)
requestKeys(16-23)web/src/pages/requests/detail/RequestHeader.tsx (1)
RequestHeader(35-161)
web/src/lib/transport/interface.ts (2)
internal/service/admin.go (2)
RecalculateCostsResult(716-721)RecalculateRequestCostResult(867-873)web/src/lib/transport/types.ts (3)
RecalculateCostsResult(580-585)RecalculateRequestCostResult(606-612)PriceTable(809-812)
web/src/pages/requests/detail/RequestDetailView.tsx (2)
web/src/lib/transport/types.ts (1)
ProxyRequest(171-205)web/src/pages/requests/detail/RequestDetailPanel.tsx (1)
CostBreakdown(40-45)
internal/repository/sqlite/proxy_request.go (3)
internal/repository/interfaces.go (2)
ProxyRequestRepository(67-101)ProxyRequestFilter(63-65)internal/domain/model.go (2)
ProxyRequest(185-247)Progress(780-786)internal/repository/sqlite/models.go (2)
ProxyRequest(182-212)ProxyRequest(214-214)
web/src/lib/transport/types.ts (2)
web/src/lib/transport/index.ts (7)
StatsGranularity(68-68)RecalculateCostsResult(70-70)RecalculateCostsProgress(71-71)RecalculateStatsProgress(72-72)RecalculateRequestCostResult(69-69)ModelPricing(82-82)PriceTable(83-83)internal/service/admin.go (3)
RecalculateCostsResult(716-721)RecalculateCostsProgress(724-730)RecalculateRequestCostResult(867-873)
web/src/lib/transport/http-transport.ts (3)
internal/service/admin.go (2)
RecalculateCostsResult(716-721)RecalculateRequestCostResult(867-873)web/src/lib/transport/index.ts (3)
RecalculateCostsResult(70-70)RecalculateRequestCostResult(69-69)PriceTable(83-83)web/src/lib/transport/types.ts (3)
RecalculateCostsResult(580-585)RecalculateRequestCostResult(606-612)PriceTable(809-812)
internal/repository/sqlite/proxy_upstream_attempt.go (3)
internal/repository/interfaces.go (1)
ProxyUpstreamAttemptRepository(103-118)internal/domain/model.go (3)
ProxyUpstreamAttempt(249-296)AttemptCostData(299-312)ResponseModel(613-625)internal/repository/sqlite/models.go (4)
ProxyUpstreamAttempt(217-239)ProxyUpstreamAttempt(241-241)ResponseModel(302-308)ResponseModel(310-310)
internal/handler/admin.go (2)
internal/repository/interfaces.go (1)
ProxyRequestFilter(63-65)internal/pricing/default_prices.go (1)
DefaultPriceTable(11-16)
internal/repository/interfaces.go (3)
internal/domain/model.go (4)
ProxyRequest(185-247)Progress(780-786)ProxyUpstreamAttempt(249-296)AttemptCostData(299-312)internal/repository/sqlite/models.go (4)
ProxyRequest(182-212)ProxyRequest(214-214)ProxyUpstreamAttempt(217-239)ProxyUpstreamAttempt(241-241)web/src/lib/transport/types.ts (2)
ProxyRequest(171-205)ProxyUpstreamAttempt(216-241)
internal/repository/sqlite/usage_stats.go (2)
internal/domain/model.go (10)
GranularityYear(464-464)Granularity(456-456)GranularityMonth(463-463)GranularityWeek(462-462)GranularityMinute(459-459)UsageStats(468-498)ClientType(6-6)GranularityDay(461-461)Progress(780-786)GranularityHour(460-460)internal/repository/sqlite/models.go (2)
UsageStats(277-297)UsageStats(299-299)
internal/pricing/calculator.go (1)
internal/pricing/tiered.go (2)
CalculateTieredCost(28-50)CalculateLinearCost(56-58)
🔇 Additional comments (39)
web/src/components/ui/index.ts (2)
26-34: LGTM! The Select export extensions follow the existing pattern; the newly added SelectGroup and SelectLabel exports are consistent with the shadcn/ui Select component API.
61-75: LGTM! The new Progress and Sheet component exports follow the file's existing organization (comment heading + export statements). The Sheet component's full API (including Trigger, Close, Content, Header, Footer, Title, Description) is exported, making it convenient to use on the Stats page and elsewhere. The progress.tsx and sheet.tsx files are correctly created in this PR.
web/src/hooks/queries/use-usage-stats.ts (1)
156-170: LGTM! The new useRecalculateCosts hook follows the same pattern as useRecalculateUsageStats, correctly using useMutation and query invalidation. The code structure is clean.
web/src/pages/client-routes/components/provider-row.tsx (1)
28-38: LGTM!
The formatCost function correctly converts nanoUSD to USD:
- first floors to microUSD precision (Math.floor(nanoUsd / 1000))
- then converts to USD (/ 1_000_000)
- and picks a suitable number of decimal places based on the amount
The logic is clear and matches the PR objective.
internal/repository/sqlite/migrations.go (1)
576-631: LGTM! The new localization keys support the Stats page filters and the cost-recalculation feature; the translations are accurate.
- 仅更新
cost > 0的记录,避免不必要的写操作- 覆盖了所有三个涉及成本的表
- 乘法操作在整数范围内是安全的
internal/core/database.go (1)
278-279: LGTM!将
wailsBroadcaster注入到AdminService的改动符合现有的依赖注入模式,与projectWaiter、executor等组件的用法一致,支持成本重算等操作的进度广播功能。internal/pricing/calculator.go (1)
37-60: 返回单位文档更新清晰。注释明确 nanoUSD,方便调用方对齐显示/存储。
web/src/pages/requests/detail/RequestHeader.tsx (2)
1-3: 组件 API 扩展合理。新增 onRecalculateCost / isRecalculating 且保持可选,向后兼容清晰。
Also applies to: 31-40
9-14: 操作方向误解 - 代码执行的是除法而非乘法。代码
Math.floor(nanoUSD / 1000) / 1_000_000是除以 1000(降低溢出风险),而非乘以 1000(增加溢出风险)。实际的精度问题在实际场景中不成立:
- 单次请求成本通常为 $0.01-$10 = 10K-10M nanoUSD
- JavaScript MAX_SAFE_INTEGER 阈值约为 $9 百万 USD
- 复杂计算已在 RequestDetailPanel.tsx 中使用 BigInt 防护
- 后端计算使用 uint64 + big.Int 完全避免溢出
无需修改。
Likely an incorrect or invalid review comment.
web/src/hooks/queries/use-requests.ts (3)
34-38: providerId 已纳入 count 查询键。按 provider 分桶缓存,避免计数互相覆盖。
125-137: count 乐观更新按 provider 过滤 OK。仅对匹配的 providerId 递增,避免跨 provider 误增。
78-123: firstId 不需要在移除项目时更新。
firstId是分页响应中的信息性字段,表示当前页面第一条记录的 ID。然而,根据代码库的实际使用情况,firstId从未被读取或用于分页逻辑。真实的游标分页依赖于before和after参数(如 CursorPaginationParams 中定义),而不是firstId。即使在移除项目时不更新firstId,也不会影响分页功能,因为下次从服务器获取数据时,firstId会根据新的items[0].ID重新计算。Likely an incorrect or invalid review comment.
web/src/lib/transport/http-transport.ts (4)
279-285: count 查询参数透传 OK。仅在 providerId 有值时追加参数,避免多余 query。
535-535: model 过滤参数已透传。与筛选条件保持一致,便于后端按模型统计。
593-598: 新增 Pricing API 暴露清晰。返回 PriceTable,后续 UI 使用方便。
47-54: 该评审意见不准确,后端 JSON tag 并非存在拼写错误。检查确认:
- 后端
internal/service/admin.go(第871行):json:"updatedAttempts"✓ 正确拼写- 前端
web/src/lib/transport/types.ts(第610行):updatedAttempts: number✓ 一致后端与前端的字段名称完全匹配,不存在
updatedAttemps的拼写错误。现有代码无需修改或字段映射。Likely an incorrect or invalid review comment.
web/src/lib/transport/interface.ts (1)
44-51: Transport 接口扩展一致。新增 providerId 计数与重算成本、pricing 能力,接口定义齐全。
Also applies to: 106-106, 170-172, 183-185
web/src/pages/requests/detail.tsx (1)
140-145: Header 与重算状态联动良好。按钮与加载态透传清晰,UI 交互一致。
internal/domain/model.go (4)
242-243: 成本单位注释更新清晰。
nanoUSD 口径统一后更易理解。Also applies to: 451-452, 496-497
298-312: AttemptCostData 作为轻量传输结构合理。
字段覆盖成本重算所需信息,便于批处理。
458-465: 新增 year 粒度与全量统计需求匹配。
777-786: Progress 结构体清晰易序列化。web/src/pages/requests/detail/RequestDetailView.tsx (2)
18-42: 成本展示辅助与 Props 调整清晰。
nanoUSD 入参与 costBreakdown 接入明确。
360-406: 输入/输出/缓存读取成本行展示逻辑清楚。web/src/pages/requests/index.tsx (2)
74-160: Provider 过滤器接入与分页重置逻辑很稳健。
分组与排序也清晰可读。Also applies to: 666-742
379-384: 数值/时间展示优化符合预期。Also applies to: 389-406, 524-531, 617-620
internal/repository/sqlite/proxy_upstream_attempt.go (1)
47-61: The new ListAll/CountAll interfaces are straightforward to use.
internal/repository/sqlite/proxy_request.go (1)
243-288: This report does not apply to the project's SQLite version. The project uses modernc.org/sqlite v1.23.1, which bundles SQLite 3.41.2 (released 2023-03-22). Since SQLite 3.32.0 (and therefore in 3.41.2), the default SQLITE_MAX_VARIABLE_NUMBER is 32766, not 999. Each batch currently uses 3 × 500 + 1 = 1501 placeholders, well below the 32766 limit. batchSize=500 does not hit the parameter cap; no adjustment needed.
Likely an incorrect or invalid review comment.
web/src/pages/requests/detail/RequestDetailPanel.tsx (2)
23-29: formatCost 实现正确,精度处理合理。使用
Math.floor向下取整到 microUSD 精度(6 位小数),并移除尾部零,符合 nanoUSD 到 USD 的转换逻辑。
171-179: 缓存读取价格的默认值计算使用了整数除法。
Math.floor(pricing.inputPriceMicro / 10)对于inputPriceMicro < 10的情况会返回 0,可能导致免费缓存读取。请确认这是预期行为。internal/handler/admin.go (1)
1429-1439: handlePricing 实现简洁正确。直接返回
pricing.DefaultPriceTable()单例,线程安全且无性能问题。internal/repository/interfaces.go (2)
91-100: 接口设计合理,进度报告模式统一。
AddCost使用int64作为 delta 类型(支持负数),而其他成本相关方法使用uint64,这是合理的设计。进度报告通过 channel 实现,与ClearAndRecalculateWithProgress保持一致。
107-117: StreamForCostCalc 的回调设计适合批量处理。使用 callback 模式处理批量数据可以有效控制内存使用,避免一次性加载所有数据。
web/src/lib/transport/types.ts (1)
789-812: ModelPricing 和 PriceTable 类型定义完整。类型定义与后端定价结构对齐,包含了 1M context 的分层定价支持字段。
internal/service/admin.go (2)
815-821: Batch-update failures are only logged and do not abort the flow. When BatchUpdateCosts fails, the code just logs and keeps going, so some records may fail to update without the user knowing. Consider reporting the failure count in the final result, or deciding whether the flow should abort. Please confirm this fault-tolerance behavior matches business expectations.
688-710: The progress channel's goroutine may keep running after the function returns. close(progressChan) may not execute promptly after ClearAndRecalculateWithProgress returns an error, leaving the goroutine blocked waiting. The current implementation is safe because the channel is buffered (size 10), but a slow broadcaster could still cause problems.
internal/repository/sqlite/usage_stats.go (2)
62-63: GranularityYear support is correctly added. Both TruncateToGranularity and TruncateToGranularityInTimezone handle year truncation correctly.
Also applies to: 88-89
1533-1585: ClearAndRecalculateWithProgress 实现完整,进度报告清晰。分阶段报告进度(clearing → aggregating → rollup → completed),并正确处理 nil progress channel 的情况。
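The nanoUSD-to-USD display conversion reviewed above (floor to microUSD precision, strip trailing zeros) can be sketched in isolation. This is a minimal Go illustration of the same behavior; the name formatCost mirrors the frontend helper but the implementation here is an assumption for illustration, not the project's code.

```go
package main

import (
	"fmt"
	"strings"
)

// formatCost renders a nanoUSD amount as a dollar string, flooring to
// microUSD precision (6 decimal places) and stripping trailing zeros,
// mirroring the behavior described for the frontend helper.
func formatCost(nanoUSD uint64) string {
	micro := nanoUSD / 1000 // floor to microUSD
	s := fmt.Sprintf("%d.%06d", micro/1_000_000, micro%1_000_000)
	s = strings.TrimRight(s, "0")
	s = strings.TrimSuffix(s, ".")
	return "$" + s
}

func main() {
	fmt.Println(formatCost(1_234_567_890)) // floors 1,234,567,890 nanoUSD to $1.234567
	fmt.Println(formatCost(500))           // anything below 1 microUSD floors to $0
}
```

Note the sub-microUSD case: amounts under 1000 nanoUSD render as $0, which is the same truncation class the integer-division comment above warns about.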
// 构建过滤条件
var filter *repository.ProxyRequestFilter
if p := r.URL.Query().Get("providerId"); p != "" {
	if providerID, err := strconv.ParseUint(p, 10, 64); err == nil {
		filter = &repository.ProxyRequestFilter{
			ProviderID: &providerID,
		}
	}
}

result, err := h.svc.GetProxyRequestsCursor(limit, before, after, filter)
providerId parse-error handling is inconsistent.
Here a strconv.ParseUint failure is silently ignored, while handleProxyRequestsCount (lines 735-742) returns a 400 error. Consider unifying the error-handling strategy.
Suggested fix
// 构建过滤条件
var filter *repository.ProxyRequestFilter
if p := r.URL.Query().Get("providerId"); p != "" {
- if providerID, err := strconv.ParseUint(p, 10, 64); err == nil {
+ providerID, err := strconv.ParseUint(p, 10, 64)
+ if err != nil {
+ writeJSON(w, http.StatusBadRequest, map[string]string{"error": "invalid providerId"})
+ return
+ }
filter = &repository.ProxyRequestFilter{
ProviderID: &providerID,
}
- }
}
🤖 Prompt for AI Agents
In `@internal/handler/admin.go` around lines 703 - 714, The providerId parsing
currently swallows strconv.ParseUint errors in the block that builds a
*repository.ProxyRequestFilter before calling h.svc.GetProxyRequestsCursor;
update this to mirror handleProxyRequestsCount's behavior by validating the
providerId query param and returning a 400 Bad Request when strconv.ParseUint
fails (include a clear error message), otherwise set the ProviderID on
repository.ProxyRequestFilter and proceed to call h.svc.GetProxyRequestsCursor;
reference the same symbols providerId, strconv.ParseUint,
repository.ProxyRequestFilter, and h.svc.GetProxyRequestsCursor when making the
change.
	totalCost += CalculateTieredCost(
		metrics.InputTokens,
		pricing.InputPriceMicro,
		inputNum, inputDenom,
		pricing.GetContext1MThreshold(),
	)
} else {
-	totalCost += CalculateLinearCostMicro(metrics.InputTokens, pricing.InputPriceMicro)
+	totalCost += CalculateLinearCost(metrics.InputTokens, pricing.InputPriceMicro)
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for CacheCreationCount, Cache5mCreationCount, Cache1hCreationCount definitions and usage
rg -n "CacheCreationCount|Cache5mCreationCount|Cache1hCreationCount" -B 2 -A 2
Repository: awsl-project/maxx
Length of output: 12793
🏁 Script executed:
#!/bin/bash
# Check the pricing calculator file to understand the fallback logic
head -150 internal/pricing/calculator.go | tail -100
Repository: awsl-project/maxx
Length of output: 2495
🏁 Script executed:
#!/bin/bash
# Search for usage.Metrics definition and structure
rg -n "type.*Metrics.*struct" -A 15
Repository: awsl-project/maxx
Length of output: 2448
🏁 Script executed:
#!/bin/bash
# Look for data collection/writing paths for cache metrics
rg -n "Cache.*CreationCount\s*=" -B 3 -A 3
Repository: awsl-project/maxx
Length of output: 2147
Confirm the semantics of CacheCreationCount to avoid fallback-pricing bias.
The code has switched to the nanoUSD paths in CalculateTieredCost/CalculateLinearCost. However, the fallback bills CacheCreationCount entirely at the 5m rate whenever both Cache5mCreationCount and Cache1hCreationCount are 0. Since the three fields are extracted independently at the collection layer (from cache_creation_input_tokens, cache_creation_5m_input_tokens, and cache_creation_1h_input_tokens respectively), historical data or sources that populate only CacheCreationCount without the fine-grained fields, or data that mixes both cache-write TTLs, will be mispriced by this default (125% instead of 200%). Suggestions:
- Verify that CacheCreationCount actually means what the doc comment says (the sum of the two fine-grained counts).
- Add data-layer validation or an explicit default policy (e.g. a warning log, graceful degradation).
- If mixed-TTL data really exists, introduce clearer disambiguation logic here.
🤖 Prompt for AI Agents
In `@internal/pricing/calculator.go` around lines 71 - 79, The code currently
falls back to treating CacheCreationCount as 5m when Cache5mCreationCount and
Cache1hCreationCount are zero, which can bias pricing; inspect and confirm the
semantic contract of CacheCreationCount (should it represent
Cache5mCreationCount + Cache1hCreationCount) and modify the logic around
CalculateTieredCost/CalculateLinearCost to validate the three counters
(CacheCreationCount, Cache5mCreationCount, Cache1hCreationCount) before choosing
a price path: if the fine-grained fields are missing or inconsistent, emit a
warning/log (using the existing logger), apply an explicit default policy (e.g.,
treat missing fine-grained data as unknown and use conservative pricing or split
proportionally), and add handling to disambiguate mixed-ttl data rather than
silently defaulting to 5m pricing (update the branch that calls
CalculateTieredCost/CalculateLinearCost and any helpers that compute
CacheCreationCount to implement this validation/default strategy).
func CalculateTieredCost(tokens uint64, basePriceMicro uint64, premiumNum, premiumDenom, threshold uint64) uint64 {
	if tokens <= threshold {
-		return tokens * basePriceMicro / TokensPerMillion
+		return calculateLinearCostBig(tokens, basePriceMicro)
	}
-	baseCost := threshold * basePriceMicro / TokensPerMillion
+	baseCostNano := calculateLinearCostBig(threshold, basePriceMicro)
	premiumTokens := tokens - threshold
-	// premiumCost = premiumTokens * basePriceMicro * (premiumNum/premiumDenom) / TokensPerMillion
-	// 重排以避免溢出: (premiumTokens * basePriceMicro / TokensPerMillion) * premiumNum / premiumDenom
-	premiumCost := premiumTokens * basePriceMicro / TokensPerMillion * premiumNum / premiumDenom
-	return baseCost + premiumCost
+	// premiumCost = premiumTokens * basePriceMicro * MicroToNano / TokensPerMillion * premiumNum / premiumDenom
+	t := big.NewInt(0).SetUint64(premiumTokens)
+	p := big.NewInt(0).SetUint64(basePriceMicro)
+	num := big.NewInt(0).SetUint64(premiumNum)
+	denom := big.NewInt(0).SetUint64(premiumDenom)
+
+	// t * p * MicroToNano * num / TokensPerMillion / denom
+	t.Mul(t, p)
+	t.Mul(t, bigMicroToNano)
+	t.Mul(t, num)
+	t.Div(t, bigTokensPerMillion)
+	t.Div(t, denom)
+
+	return baseCostNano + t.Uint64()
}
🧩 Analysis chain
🏁 Script executed:
find . -name "tiered.go" -type f
Repository: awsl-project/maxx
Length of output: 89
🏁 Script executed:
cat -n ./internal/pricing/tiered.go | head -80
Repository: awsl-project/maxx
Length of output: 3107
🏁 Script executed:
# Search for test files related to pricing
find . -name "*pricing*test*.go" -o -name "*test*pricing*.go" 2>/dev/null
Repository: awsl-project/maxx
Length of output: 43
🏁 Script executed:
rg "CalculateTieredCost|calculateLinearCostBig" --type go -B 2 -A 2
Repository: awsl-project/maxx
Length of output: 3507
🏁 Script executed:
cat -n ./internal/pricing/calculator_test.go | head -60
Repository: awsl-project/maxx
Length of output: 1822
🏁 Script executed:
cat -n ./internal/pricing/calculator_test.go
Repository: awsl-project/maxx
Length of output: 9175
🏁 Script executed:
cat -n ./internal/pricing/calculator.go | head -100
Repository: awsl-project/maxx
Length of output: 2941
🏁 Script executed:
rg "GetInputPremiumFraction|GetOutputPremiumFraction" --type go -B 2 -A 5
Repository: awsl-project/maxx
Length of output: 2148
🏁 Script executed:
cat -n ./internal/pricing/pricing.go | grep -A 12 "GetInputPremiumFraction\|GetOutputPremiumFraction"
Repository: awsl-project/maxx
Length of output: 749
Drop the unnecessary divide-by-zero check, but keep the uint64 overflow guard.
premiumDenom is already validated as non-zero in GetInputPremiumFraction() and GetOutputPremiumFraction(), so CalculateTieredCost needs no explicit divide-by-zero protection.
However, the uint64 addition on line 49 can overflow: baseCostNano + t.Uint64() should be summed at the big.Int level and the result checked against the uint64 range. Likewise, t.Uint64() on line 70 silently truncates out-of-range values and needs an IsUint64() check.
🔧 Suggested fix
func CalculateTieredCost(tokens uint64, basePriceMicro uint64, premiumNum, premiumDenom, threshold uint64) uint64 {
if tokens <= threshold {
return calculateLinearCostBig(tokens, basePriceMicro)
}
- baseCostNano := calculateLinearCostBig(threshold, basePriceMicro)
+ baseCostNano := calculateLinearCostBigInt(threshold, basePriceMicro)
premiumTokens := tokens - threshold
// premiumCost = premiumTokens * basePriceMicro * MicroToNano / TokensPerMillion * premiumNum / premiumDenom
t := big.NewInt(0).SetUint64(premiumTokens)
p := big.NewInt(0).SetUint64(basePriceMicro)
num := big.NewInt(0).SetUint64(premiumNum)
denom := big.NewInt(0).SetUint64(premiumDenom)
@@
t.Mul(t, num)
t.Div(t, bigTokensPerMillion)
t.Div(t, denom)
- return baseCostNano + t.Uint64()
+ t.Add(t, baseCostNano)
+ if !t.IsUint64() {
+ return ^uint64(0)
+ }
+ return t.Uint64()
}
-// calculateLinearCostBig 使用 big.Int 计算线性成本
-func calculateLinearCostBig(tokens, priceMicro uint64) uint64 {
+// calculateLinearCostBigInt 使用 big.Int 计算线性成本
+func calculateLinearCostBigInt(tokens, priceMicro uint64) *big.Int {
// cost = tokens * priceMicro * MicroToNano / TokensPerMillion
t := big.NewInt(0).SetUint64(tokens)
p := big.NewInt(0).SetUint64(priceMicro)
t.Mul(t, p)
t.Mul(t, bigMicroToNano)
t.Div(t, bigTokensPerMillion)
- return t.Uint64()
+ return t
}
+
+func calculateLinearCostBig(tokens, priceMicro uint64) uint64 {
+ t := calculateLinearCostBigInt(tokens, priceMicro)
+ if !t.IsUint64() {
+ return ^uint64(0)
+ }
+ return t.Uint64()
Also applies to: lines 60-71
🤖 Prompt for AI Agents
In `@internal/pricing/tiered.go` around lines 28 - 50, CalculateTieredCost
currently converts the big.Int result t to uint64 and adds it to baseCostNano
which can silently truncate or overflow; fix by doing the final addition at
big.Int level and validating the result fits in uint64: create a big.Int sum
initialized with baseCostNano, sum.Add(sum, t), then use sum.IsUint64() to
detect overflow and handle it (e.g., return math.MaxUint64 or another defined
cap) instead of directly using t.Uint64(); also avoid any direct t.Uint64() use
without checking IsUint64() (references: function CalculateTieredCost, variables
baseCostNano and t, and the big.Int intermediates).
Down: func(db *gorm.DB) error {
	// Rollback: divide by 1000
	if err := db.Exec("UPDATE proxy_requests SET cost = cost / 1000").Error; err != nil {
		return err
	}
	if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost / 1000").Error; err != nil {
		return err
	}
	if err := db.Exec("UPDATE usage_stats SET cost = cost / 1000").Error; err != nil {
		return err
	}
	return nil
},
The Down migration can lose small cost values
When cost < 1000 nanoUSD, SQLite's integer division cost / 1000 yields 0, so data is lost on rollback. Consider mirroring the Up migration by adding a condition:
🔧 Suggested fix
Down: func(db *gorm.DB) error {
// Rollback: divide by 1000
- if err := db.Exec("UPDATE proxy_requests SET cost = cost / 1000").Error; err != nil {
+ if err := db.Exec("UPDATE proxy_requests SET cost = cost / 1000 WHERE cost > 0").Error; err != nil {
return err
}
- if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost / 1000").Error; err != nil {
+ if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost / 1000 WHERE cost > 0").Error; err != nil {
return err
}
- if err := db.Exec("UPDATE usage_stats SET cost = cost / 1000").Error; err != nil {
+ if err := db.Exec("UPDATE usage_stats SET cost = cost / 1000 WHERE cost > 0").Error; err != nil {
return err
}
return nil
},
Note: this does not fully solve the precision loss (values under 1000 still become 0), but it at least avoids unnecessary write operations. A fully reversible migration might require backing up the original values before migrating, or storing costs as floating point.
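The truncation driving this caveat is easy to demonstrate. A tiny Go sketch with hypothetical values, mimicking SQLite's integer division in the Down migration:

```go
package main

import "fmt"

// rollback mimics the Down migration's integer division: any cost below
// 1000 nanoUSD collapses to 0, so Up followed by Down is not lossless
// for values that are not already multiples of 1000.
func rollback(costNano int64) int64 { return costNano / 1000 }

func main() {
	for _, c := range []int64{999, 1000, 1500} {
		fmt.Printf("nano=%d -> micro=%d\n", c, rollback(c))
	}
}
```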
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/migrations.go` around lines 40 - 52, The Down
migration's UPDATEs in the migration's Down func (which runs "UPDATE
proxy_requests SET cost = cost / 1000", "UPDATE proxy_upstream_attempts SET cost
= cost / 1000", and "UPDATE usage_stats SET cost = cost / 1000") will zero-out
values <1000 due to SQLite integer division; change each UPDATE to only touch
rows with cost >= 1000 (e.g., add a WHERE cost >= 1000 clause) so small-cost
rows are left untouched and unnecessary writes are avoided while keeping the
Down migration symmetric to Up.
return r.db.gorm.Transaction(func(tx *gorm.DB) error {
	// Use CASE WHEN for batch update
	const batchSize = 500
	ids := make([]uint64, 0, len(updates))
	for id := range updates {
		ids = append(ids, id)
	}

	for i := 0; i < len(ids); i += batchSize {
		end := i + batchSize
		if end > len(ids) {
			end = len(ids)
		}
		batchIDs := ids[i:end]

		// Build CASE WHEN statement
		var cases strings.Builder
		cases.WriteString("CASE id ")
		args := make([]interface{}, 0, len(batchIDs)*3+1)

		// First: CASE WHEN pairs (id, cost)
		for _, id := range batchIDs {
			cases.WriteString("WHEN ? THEN ? ")
			args = append(args, id, updates[id])
		}
		cases.WriteString("END")

		// Second: timestamp for updated_at
		args = append(args, time.Now().UnixMilli())

		// Third: WHERE IN ids
		for _, id := range batchIDs {
			args = append(args, id)
		}

		sql := fmt.Sprintf("UPDATE proxy_upstream_attempts SET cost = %s, updated_at = ? WHERE id IN (?%s)",
			cases.String(), strings.Repeat(",?", len(batchIDs)-1))
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Read the full context around the mentioned lines
cat -n internal/repository/sqlite/proxy_upstream_attempt.go | sed -n '130,190p'
Repository: awsl-project/maxx
Length of output: 178
🏁 Script executed:
#!/bin/bash
# Search for proxy_request reference mentioned in the comment
rg "proxy_request" internal/repository/sqlite/ -A 5 -B 5
Repository: awsl-project/maxx
Length of output: 12656
🏁 Script executed:
#!/bin/bash
# Check SQLite configuration or version requirements in the project
find . -name "*.go" -o -name "go.mod" -o -name "*.md" | xargs rg -l "SQLITE_MAX_VARIABLE" 2>/dev/null || echo "No explicit SQLITE_MAX_VARIABLE found"
rg "sqlite" go.mod 2>/dev/null | head -5
Repository: awsl-project/maxx
Length of output: 225
🏁 Script executed:
#!/bin/bash
# Read the exact file and lines mentioned in the review comment
cat -n internal/repository/sqlite/proxy_upstream_attempt.go | sed -n '130,190p'
Repository: awsl-project/maxx
Length of output: 2184
🏁 Script executed:
#!/bin/bash
# Search for BatchUpdateCosts function to see the complete implementation
rg "func.*BatchUpdateCosts" internal/repository/sqlite/ -A 60
Repository: awsl-project/maxx
Length of output: 9472
🏁 Script executed:
#!/bin/bash
# Search for similar batchSize usage in proxy_request.go for comparison
rg "batchSize" internal/repository/sqlite/proxy_request.go -B 2 -A 20
Repository: awsl-project/maxx
Length of output: 1478
BatchUpdateCosts' batchSize=500 exceeds the SQLite parameter cap.
The current implementation consumes 3 parameters per row (the id and cost in the CASE WHEN, plus the id in WHERE IN), for a total of 500 × 3 + 1 = 1501 parameters, above SQLite's default limit of 999. Lowering batchSize to 300 brings the count to 901, safely within the limit. The method of the same name in proxy_request.go has the same problem.
🔧 Suggested fix
- const batchSize = 500
+ const batchSize = 300 // 3 params/row, stays under SQLite's default 999 cap
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/proxy_upstream_attempt.go` around lines 142 - 179,
The batchSize of 500 in BatchUpdateCosts causes SQLite to exceed its default 999
bound because each id uses three parameters; reduce batchSize to 300 (or compute
batchSize = floor((sqliteParamLimit-1)/3)) in the function where batchSize is
declared so the constructed args length stays within limits; apply the same
change to the homologous method in proxy_request.go (look for BatchUpdateCosts
or the code building the CASE WHEN and WHERE IN clauses) to ensure both
implementations use the safe batch size.
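The parameter-budget arithmetic behind the competing claims above can be made explicit. A small Go sketch, assuming the 3-parameters-per-row statement shape shown in the diff and treating the limit (999 legacy default vs 32766 in newer SQLite builds) as an input:

```go
package main

import "fmt"

// paramsPerBatch counts bound parameters for the CASE WHEN update:
// 2 per row (WHEN ? THEN ?), 1 for updated_at, and 1 per row for WHERE IN.
func paramsPerBatch(rows int) int { return 3*rows + 1 }

// maxBatch returns the largest batch whose parameter count fits the limit.
func maxBatch(limit int) int { return (limit - 1) / 3 }

func main() {
	fmt.Println(paramsPerBatch(500)) // 1501: over a 999 limit, fine for 32766
	fmt.Println(maxBatch(999))       // largest batch under the legacy default
}
```

Whether a change is actually needed therefore depends on which SQLITE_MAX_VARIABLE_NUMBER the bundled SQLite build uses, as the verification above notes.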
// 对于 day 及以上粒度,使用配置的时区
var loc *time.Location
if to == domain.GranularityDay || to == domain.GranularityWeek || to == domain.GranularityMonth {
	loc = r.getConfiguredTimezone()
}
RollUpAllWithProgress does not handle the timezone for GranularityYear.
RollUpAllWithProgress applies the configured timezone only for day/week/month granularities and omits year. Year granularity is usually less timezone-sensitive, but adding it keeps the behavior consistent.
Suggested fix
// 对于 day 及以上粒度,使用配置的时区
var loc *time.Location
-if to == domain.GranularityDay || to == domain.GranularityWeek || to == domain.GranularityMonth {
+if to == domain.GranularityDay || to == domain.GranularityWeek || to == domain.GranularityMonth || to == domain.GranularityYear {
loc = r.getConfiguredTimezone()
}
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/usage_stats.go` around lines 1409 - 1413, The
RollUpAllWithProgress logic in usage_stats.go sets loc only for day/week/month
but omits year; update the conditional that assigns loc (the var loc
*time.Location block) to include domain.GranularityYear so the repository method
RollUpAllWithProgress uses r.getConfiguredTimezone() for year granularity as
well, ensuring consistent timezone handling across
GranularityDay/Week/Month/Year.
calculator := pricing.GlobalCalculator()
processedCount := 0
const batchSize = 100
affectedRequestIDs := make(map[uint64]struct{})
Unused variable affectedRequestIDs.
The affectedRequestIDs map is created on line 774 and populated on line 804, but never used afterwards. This looks like leftover code from a refactor.
Suggested fix
calculator := pricing.GlobalCalculator()
processedCount := 0
const batchSize = 100
- affectedRequestIDs := make(map[uint64]struct{})
// 2. Stream through attempts, process and update each batch immediately
err = s.attemptRepo.StreamForCostCalc(batchSize, func(batch []*domain.AttemptCostData) error {
attemptUpdates := make(map[uint64]uint64, len(batch))
for _, attempt := range batch {
// ... (intermediate code omitted)
- // Track affected request IDs
- affectedRequestIDs[attempt.ProxyRequestID] = struct{}{}
-
// Track if attempt needs update
🤖 Prompt for AI Agents
In `@internal/service/admin.go` at line 774, Remove the unused variable
affectedRequestIDs and any code that builds or populates it (the map created as
affectedRequestIDs := make(map[uint64]struct{}) and subsequent inserts) inside
the same function in internal/service/admin.go; if the intent was to track
request IDs, either replace uses with the existing logic that operates on
request IDs or delete the map and its population code to eliminate the dead
variable and related operations (search for the symbol affectedRequestIDs to
locate all occurrences to remove).
--chart-1: oklch(0.8241 0.1251 84.4866);
--chart-2: oklch(0.8006 0.1116 203.6044);
--chart-3: oklch(0.4198 0.1693 266.7798);
--chart-4: oklch(0.9214 0.0762 125.5777);
--chart-5: oklch(0.9151 0.1032 116.1913);
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# 搜索代码库中对已移除语义图表 token 的引用
echo "=== 搜索 chart-success, chart-error, chart-warning, chart-info, chart-primary 引用 ==="
rg -n --type=ts --type=tsx --type=css -e 'chart-success|chart-error|chart-warning|chart-info|chart-primary' -g '!node_modules/**'
echo ""
echo "=== 搜索 color-chart- 语义映射引用 ==="
rg -n -e 'color-chart-' -g '!node_modules/**'Repository: awsl-project/maxx
Length of output: 1147
🏁 Script executed:
# 查看 web/src/index.css 文件,特别是涉及图表颜色的部分
echo "=== 查看 web/src/index.css 文件内容(行1-50)==="
head -50 web/src/index.css
echo ""
echo "=== 查看 web/src/index.css 文件内容(行80-130)==="
sed -n '80,130p' web/src/index.css
Repository: awsl-project/maxx
Length of output: 4187
🏁 Script executed:
# 搜索是否有其他文件引用 --chart-1 到 --chart-5
echo "=== 搜索 --chart 变量引用 ==="
rg -n -e '\-\-chart-[0-9]' -g '!node_modules/**'Repository: awsl-project/maxx
Length of output: 1018
The chart colors are updated to new oklch values, in line with the theme-simplification goal.
- No breakage from removing the semantic chart tokens: the search shows no references in the codebase to chart-success, chart-error, chart-warning, chart-info, or chart-primary, so removing them does not affect existing components.
- Contrast needs attention: --chart-1 (L: 0.8241), --chart-4 (L: 0.9214), and --chart-5 (L: 0.9151) are quite light. In light mode these colors are used via the --color-chart-* mappings in overview.tsx and stats/index.tsx; in stats/index.tsx they serve as text colors, which may lack sufficient contrast on light backgrounds. Verify readability or adjust the usage.
🤖 Prompt for AI Agents
In `@web/src/index.css` around lines 27 - 31, You removed semantic chart tokens
and replaced them with new --chart-N oklch variables, but the high L values in
--chart-1, --chart-4, and --chart-5 can produce insufficient contrast when used
as text in overview.tsx and stats/index.tsx; locate usages of these CSS
variables in overview.tsx and stats/index.tsx and either map those components to
a darker semantic token (or fallback to an existing
--color-text/--color-foreground variable) or adjust the --chart-1/4/5 values to
lower L (increase contrast) for light mode; ensure the change keeps the new
simplified token set while verifying text contrast in stats/index.tsx and
updating any CSS mapping logic that previously relied on
chart-success/chart-error/etc. to use the new variables safely.
export interface RecalculateCostsProgress {
  phase: 'calculating' | 'updating_attempts' | 'updating_requests' | 'completed';
  current: number;
  total: number;
  percentage: number;
  message: string;
}
RecalculateCostsProgress's phase type does not fully match the backend.
The frontend defines 4 phases: 'calculating' | 'updating_attempts' | 'updating_requests' | 'completed', but the backend RecalculateCostsProgress (see internal/service/admin.go lines 723-729) also includes an 'aggregating_stats' phase.
Suggested fix
/** RecalculateCostsProgress - 成本重算进度更新 */
export interface RecalculateCostsProgress {
- phase: 'calculating' | 'updating_attempts' | 'updating_requests' | 'completed';
+ phase: 'calculating' | 'updating_attempts' | 'updating_requests' | 'aggregating_stats' | 'completed';
current: number;
total: number;
percentage: number;
message: string;
}
🤖 Prompt for AI Agents
In `@web/src/lib/transport/types.ts` around lines 588 - 594, The front-end
RecalculateCostsProgress interface's phase union (in
web/src/lib/transport/types.ts) is missing the 'aggregating_stats' value present
in the backend; update the phase type in the RecalculateCostsProgress interface
to include 'aggregating_stats' (i.e., change the union to 'calculating' |
'aggregating_stats' | 'updating_attempts' | 'updating_requests' | 'completed')
so it matches the backend representation and prevents mismatches when that phase
is emitted.
<dd className="text-sm text-amber-400 font-mono font-medium flex items-center gap-2">
  <span>{request.cacheWriteCount.toLocaleString()}</span>
  {(() => {
    const cache5m = costBreakdown?.items.find((i) => i.label === 'Cache Write (5m)');
    const cache1h = costBreakdown?.items.find((i) => i.label === 'Cache Write (1h)');
    const cacheWrite = costBreakdown?.items.find((i) => i.label === 'Cache Write');
    const item = cache5m || cache1h || cacheWrite;
    if (!item) return null;
    return (
      <span className="text-xs text-muted-foreground">
        × {formatPricePerM(item.pricePerM)} ={' '}
        <span className="text-blue-400">{formatCost(item.cost)}</span>
      </span>
    );
  })()}
</dd>
The Cache Write summary row's cost can be misleading when both 5m and 1h entries exist.
The row shows the total count, but the cost/unit price comes from only the first matching entry; when both 5m and 1h entries are present this misleads. Prefer the aggregate item or a summed cost.
🔧 Suggested fix
- {(() => {
- const cache5m = costBreakdown?.items.find((i) => i.label === 'Cache Write (5m)');
- const cache1h = costBreakdown?.items.find((i) => i.label === 'Cache Write (1h)');
- const cacheWrite = costBreakdown?.items.find((i) => i.label === 'Cache Write');
- const item = cache5m || cache1h || cacheWrite;
- if (!item) return null;
- return (
- <span className="text-xs text-muted-foreground">
- × {formatPricePerM(item.pricePerM)} ={' '}
- <span className="text-blue-400">{formatCost(item.cost)}</span>
- </span>
- );
- })()}
+ {(() => {
+ const cache5m = costBreakdown?.items.find((i) => i.label === 'Cache Write (5m)');
+ const cache1h = costBreakdown?.items.find((i) => i.label === 'Cache Write (1h)');
+ const cacheWrite = costBreakdown?.items.find((i) => i.label === 'Cache Write');
+ const totalCost =
+ cacheWrite?.cost ?? (cache5m?.cost ?? 0) + (cache1h?.cost ?? 0);
+ const priceItem =
+ cacheWrite ??
+ (cache5m && !cache1h ? cache5m : null) ??
+ (cache1h && !cache5m ? cache1h : null);
+ if (!priceItem && totalCost === 0) return null;
+ return (
+ <span className="text-xs text-muted-foreground">
+ {priceItem && <>× {formatPricePerM(priceItem.pricePerM)} = </>}
+ <span className="text-blue-400">{formatCost(totalCost)}</span>
+ </span>
+ );
+ })()}Also applies to: 429-445
🤖 Prompt for AI Agents
In `@web/src/pages/requests/detail/RequestDetailView.tsx` around lines 412 - 427,
The displayed Cache Write detail currently picks only the first matching item
(cache5m/cache1h/cacheWrite) causing incorrect per-item display when multiple
intervals exist; update the JSX logic in RequestDetailView (where
request.cacheWriteCount, cache5m/cache1h/cacheWrite and
formatPricePerM/formatCost are referenced) to instead filter costBreakdown.items
for all entries whose label startsWith 'Cache Write', sum their cost into a
totalCost and compute a combined pricePerM (e.g., sum of pricePerM or a suitable
aggregated rate), then render the count × aggregated pricePerM = formatted
totalCost (and optionally show per-interval breakdown) so the displayed cost
reflects the aggregate; apply the same fix to the analogous block mentioned
(lines ~429-445).
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
web/src/lib/transport/types.ts (1)
326-326: The totalCost comment may not match the actual unit. The comment still labels it "微美元" (microUSD), but the PR objectives state costs have been upgraded to nanoUSD. If the backend is already updated, sync this comment for documentation consistency.
📝 Suggested comment update
- totalCost: number; // 微美元
+ totalCost: number; // 纳美元 (nanoUSD)
🤖 Fix all issues with AI agents
In `@internal/executor/executor.go`:
- Around line 409-414: The current pricing logic uses
attemptRecord.ResponseModel unconditionally which yields 0 if that model lacks
pricing; update the logic (both in the shown block and the similar block around
the other occurrence) to attempt pricing with ResponseModel first and if
pricing.GlobalCalculator().Calculate(pricingModel, metrics) returns 0 (or
indicates missing price) then fall back to attemptRecord.MappedModel and
recalculate so Cost is not underestimated; modify the code that sets
pricingModel and attemptRecord.Cost to perform this check and fallback using the
same symbols (attemptRecord.ResponseModel, attemptRecord.MappedModel,
pricing.GlobalCalculator().Calculate, attemptRecord.Cost).
In `@internal/handler/admin.go`:
- Around line 703-714: The cursor handler currently parses the providerId query
and silently ignores parse errors while building a
*repository.ProxyRequestFilter before calling h.svc.GetProxyRequestsCursor; this
is inconsistent with the count endpoint which returns a 400 on parse failure.
Update the providerId parsing logic in the cursor handler (where filter is built
before calling GetProxyRequestsCursor) to mirror the count endpoint: if
strconv.ParseUint(p, 10, 64) returns an error, return an HTTP 400 response (bad
request) with an appropriate error message instead of silently ignoring the
parse error, ensuring consistent behavior across endpoints.
In `@internal/repository/sqlite/migrations.go`:
- Around line 27-49: The migration's Up and Down handlers are asymmetric: Up
multiplies only rows matching "cost > 0" while Down divides all rows, which
mis-handles negative costs; update the Up and Down DB updates in the Up and Down
functions (the db.Exec calls for proxy_requests, proxy_upstream_attempts,
usage_stats) to use a symmetric condition (e.g., "WHERE cost != 0") or otherwise
handle negative values explicitly, and update the comment to state the chosen
intent (e.g., "Convert all non-zero costs" or "Scale positive and negative costs
consistently") so both migration directions apply the same filter.
In `@internal/repository/sqlite/proxy_upstream_attempt.go`:
- Around line 132-134: UpdateCost 只更新了 cost 导致 BaseModel 的 UpdatedAt(int64)不会被
GORM 自动刷新;修改 UpdateCost 在更新 cost 的同时显式设置 updated_at 为当前 unix 时间戳(与
BatchUpdateCosts 保持一致),例如在 ProxyUpstreamAttemptRepository 的 UpdateCost 方法中对
ProxyUpstreamAttempt 使用 Updates/UpdateColumn 将 cost 和 updated_at 一并写入(设置为
time.Now().Unix()),确保 UpdatedAt 字段类型为 int64 并被正确赋值以保持数据审计一致性。
In `@internal/repository/sqlite/usage_stats.go`:
- Around line 1567-1581: ClearAndRecalculateWithProgress currently discards
errors from the four RollUpAllWithProgress calls (minute->hour, hour->day,
day->week, day->month) by using "_, _"; capture the returned error from each
RollUpAllWithProgress invocation, log the error (including context like the
source and target granularities and current operation message) and propagate the
error back from ClearAndRecalculateWithProgress instead of silently proceeding;
update the calls around RollUpAllWithProgress (and keep sendProgress usage) so
any non-nil error is either logged via the repository/logger and the function
returns the error immediately or aggregated and returned, ensuring failures
don’t get silently ignored.
- Around line 617-619: The code currently swallows Scan errors (rows.Scan) by
just doing continue; change this so the error is logged at debug (or warn) level
before continuing: capture the err returned by rows.Scan, call the repository's
logger (e.g., r.logger.Debugf or similar logger in this package) with a clear
message like "rows.Scan failed" and include the err plus identifying context
(e.g., the query/table or any available column values/row index), then continue;
ensure you reference rows.Scan and the existing err variable when adding the log
call.
In `@internal/service/admin.go`:
- Around line 827-828: The unconditional time.Sleep(50 * time.Millisecond)
inside the processing loop significantly increases total runtime for large
datasets; remove this per-batch sleep and instead implement one of: move the
short sleep into broadcastProgress so throttling happens only when sending
progress updates, or throttle sleeps to run once every N batches (e.g., every 10
batches) or use a time.Ticker to rate-limit progress broadcasts; update the loop
that calls time.Sleep and the broadcastProgress function accordingly so progress
emission is throttled without sleeping on every batch.
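As a rough sketch of the "every N batches" throttling option suggested above (function and parameter names are illustrative, not the service's actual API):

```go
package main

import "fmt"

// broadcastEveryN returns the batch numbers at which a progress update
// would be emitted: once every n batches, plus the final batch, instead
// of sleeping 50ms on every iteration.
func broadcastEveryN(totalBatches, n int) []int {
	var emitted []int
	for batch := 1; batch <= totalBatches; batch++ {
		if batch%n == 0 || batch == totalBatches {
			emitted = append(emitted, batch)
		}
	}
	return emitted
}

func main() {
	// 35 batches throttled to every 10th: emits at 10, 20, 30 and the final 35.
	fmt.Println(broadcastEveryN(35, 10))
}
```

A `time.Ticker` achieves the same effect when batches vary widely in duration; the count-based variant shown here is simply deterministic and easy to test.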
- Around line 688-713: The goroutine that ranges over progressChan in
RecalculateUsageStats can race with closing progressChan; change the flow so the
sender closes the channel and the goroutine exits before the function
returns—for example, create a sync.WaitGroup, increment it before starting the
goroutine that reads from progressChan and calls s.broadcaster.BroadcastMessage,
then have the caller (the goroutine that invokes
s.usageStatsRepo.ClearAndRecalculateWithProgress) be the only one to close
progressChan after that repository call returns and wait for the WaitGroup to
finish; specifically update RecalculateUsageStats to use waitGroup.Add(1) before
the anonymous goroutine, defer waitGroup.Done() inside the goroutine,
close(progressChan) only after s.usageStatsRepo.ClearAndRecalculateWithProgress
returns, and call waitGroup.Wait() before returning the error.
In `@web/src/pages/client-routes/components/provider-row.tsx`:
- Around line 28-39: Update the comment for ProviderStats.totalCost in types.ts
to match the backend: change the unit annotation from "微美元" to "纳美元" wherever
ProviderStats.totalCost (and any other occurrences referring to TotalCost unit)
is documented in web/src/lib/transport/types.ts so frontend types match
internal/domain/model.go; keep the field name ProviderStats.totalCost and only
adjust the comment text.
In `@web/src/pages/requests/detail.tsx`:
- Around line 42-52: The recalculate cost mutation (recalculateMutation)
currently only invalidates requestKeys.detail(Number(id)) on success, but the
backend also updates attempt costs so you must also invalidate the attempts
cache; update the onSuccess handler inside the useMutation for
recalculateMutation to call queryClient.invalidateQueries({ queryKey:
requestKeys.attempts(Number(id)) }) in addition to invalidating
requestKeys.detail(Number(id)) so the sidebar attempt costs refresh; verify
handleRecalculateCost continues to call recalculateMutation.mutate().
In `@web/src/pages/requests/detail/RequestHeader.tsx`:
- Around line 9-14: formatCost currently takes a number which can lose precision
for very large nanoUSD values; change formatCost to accept a string or BigInt
(e.g., nanoUSD: string | bigint), convert string input to BigInt, use BigInt
arithmetic to compute microUSD and fractional parts (divide by 1_000n for micro,
then divide by 1_000_000n to get whole USD and use remainders to build up to 6
decimal places), handle zero by returning '-' and strip trailing zeros from the
fractional part before joining, and update any callers to pass the backend
string value (or BigInt) instead of a JS number.
♻️ Duplicate comments (3)
web/src/components/provider-details-dialog.tsx (1)
112-123: formatCost 函数与 provider-row.tsx 重复,且存在相同的单位匹配问题。此 formatCost 函数与 provider-row.tsx 中的实现完全相同,建议提取为共享工具函数以避免代码重复。此外,之前的审查已指出 ProviderStats.totalCost 与 formatCost 的单位不匹配问题,请确认后端是否已将 totalCost 升级为纳美元。

♻️ 建议提取共享函数

在 @/lib/utils 或新建 @/lib/format 中创建共享的成本格式化函数:

```ts
// lib/format.ts
export function formatNanoUsdToCost(nanoUsd: number): string {
  const usd = Math.floor(nanoUsd / 1000) / 1_000_000;
  if (usd >= 1) return `$${usd.toFixed(2)}`;
  if (usd >= 0.01) return `$${usd.toFixed(3)}`;
  return `$${usd.toFixed(6).replace(/\.?0+$/, '')}`;
}
```

web/src/pages/requests/index.tsx (1)
44-52: ProviderTypeKey 建议复用统一定义。项目中已有
ProviderTypeKey 定义(web/src/pages/providers/types.ts),本地重复声明容易导致类型漂移。同时 PROVIDER_TYPE_LABELS 中的硬编码字符串建议走 i18n。

♻️ 建议修改

```diff
-type ProviderTypeKey = 'antigravity' | 'kiro' | 'custom';
+import type { ProviderTypeKey } from '@/pages/providers/types';
```

internal/repository/sqlite/proxy_request.go (1)
237-241: AddCost 负增量下溢风险已在之前的审查中标记。此问题已在过去的审查评论中详细说明,建议使用
CASE WHEN cost + ? < 0 THEN 0 ELSE cost + ? END或MAX(cost + ?, 0)来防止负值写入。
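A Go-side analogue of the suggested `MAX(cost + ?, 0)` SQL guard can be sketched as follows. `clampAddCost` is an illustrative name, not code from the repository; it shows how a signed delta is applied to an unsigned cost without wrapping below zero:

```go
package main

import "fmt"

// clampAddCost applies a signed delta to an unsigned cost, clamping the
// result at 0 instead of letting a large negative delta underflow uint64.
func clampAddCost(cost uint64, delta int64) uint64 {
	if delta >= 0 {
		return cost + uint64(delta)
	}
	dec := uint64(-delta)
	if dec >= cost {
		return 0 // would go negative: clamp, mirroring MAX(cost + ?, 0)
	}
	return cost - dec
}

func main() {
	fmt.Println(clampAddCost(100, -250)) // clamps to 0, not a wrapped huge value
	fmt.Println(clampAddCost(100, 50))   // normal addition
}
```

Doing the clamp in SQL (as the review suggests) keeps concurrent updates safe; the Go version above is only for illustrating the arithmetic.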
🧹 Nitpick comments (8)
web/src/locales/zh.json (1)
112-112: 术语一致性建议
requests.allProviders使用了"全部供应商",但已有的stats.allProviders(第 593 行)使用的是"所有提供商"。建议统一术语以保持用户体验一致性。可以考虑统一使用"所有提供商"或"全部供应商"。
💡 可选修复
```diff
- "allProviders": "全部供应商",
+ "allProviders": "所有提供商",
```

web/src/hooks/queries/use-usage-stats.ts (1)
157-169: 确认重算成本后的缓存失效范围目前只失效
usageStats。若成本变化还影响请求列表/详情或 overview 等视图,可能仍显示旧值。请确认调用方是否额外失效相关 query key,或在此补齐。

web/src/pages/requests/detail/RequestHeader.tsx (1)
141-154: 图标按钮建议补充 aria-label 以提升可访问性。

♿ 建议补充可访问性标注

```diff
- <TooltipTrigger
+ <TooltipTrigger
+   aria-label="Recalculate cost"
    className="inline-flex items-center justify-center h-5 w-5 rounded-md text-muted-foreground hover:text-foreground hover:bg-accent disabled:opacity-50"
    onClick={onRecalculateCost}
    disabled={isRecalculating}
  >
```

web/src/pages/requests/detail/RequestDetailView.tsx (1)
360-459: 可考虑减少重复的items.find扫描。
同一渲染中多次find+ 非空断言会重复扫描数组。建议先构建 label→item 的临时映射或局部变量,减少重复查找并提升可读性。internal/repository/sqlite/proxy_upstream_attempt.go (1)
63-130: 建议对 batchSize 做下限保护以避免意外全表加载。
当 batchSize <= 0 时,GORM 的 Limit(0) 可能变成不限制,易引发内存压力。建议在方法入口处设定最小默认值。

♻️ 建议补充保护

```diff
 func (r *ProxyUpstreamAttemptRepository) StreamForCostCalc(batchSize int, callback func(batch []*domain.AttemptCostData) error) error {
+	if batchSize <= 0 {
+		batchSize = 500
+	}
 	var lastID uint64 = 0
```

internal/repository/sqlite/proxy_request.go (1)
264-289: BatchUpdateCosts SQL 构建存在改进空间。当前实现正确,但 SQL 字符串拼接方式可读性较差。建议考虑使用
strings.Builder 的 fmt.Fprintf 或预分配参数切片来提升代码清晰度。此外,第 280-282 行的 WHERE IN 参数追加可以与前面的参数合并,减少重复循环。

♻️ 可选的优化建议

```diff
-	// Third: WHERE IN ids
-	for _, id := range batchIDs {
-		args = append(args, id)
-	}
-
-	sql := fmt.Sprintf("UPDATE proxy_requests SET cost = %s, updated_at = ? WHERE id IN (?%s)",
-		cases.String(), strings.Repeat(",?", len(batchIDs)-1))
+	// Build WHERE IN placeholders
+	placeholders := strings.Repeat(",?", len(batchIDs)-1)
+	sql := fmt.Sprintf("UPDATE proxy_requests SET cost = %s, updated_at = ? WHERE id IN (?%s)",
+		cases.String(), placeholders)
+
+	// Append WHERE IN ids
+	for _, id := range batchIDs {
+		args = append(args, id)
+	}
```

internal/service/admin.go (1)
854-860: hardcoded 进度值可能与实际不符。第 854 行使用
result.UpdatedRequests作为 current 和 total,第 860 行使用 100/100,这些与实际处理的数量可能不一致。建议使用实际的len(affectedRequestIDs)或保持与前面阶段一致的进度计算方式。internal/repository/sqlite/usage_stats.go (1)
522-528: O(n²) 冒泡排序在大数据集下性能较差。
mergeRealtimeMinuteStats 使用双重循环进行排序,时间复杂度为 O(n²)。虽然最近 2 分钟的数据量通常较小,但建议使用标准库的 sort.Slice 实现 O(n log n) 的排序。

♻️ 建议使用 sort.Slice

```diff
+import "sort"

 // 按 TimeBucket 倒序排列
-for i := 0; i < len(merged)-1; i++ {
-	for j := i + 1; j < len(merged); j++ {
-		if merged[j].TimeBucket.After(merged[i].TimeBucket) {
-			merged[i], merged[j] = merged[j], merged[i]
-		}
-	}
-}
+sort.Slice(merged, func(i, j int) bool {
+	return merged[i].TimeBucket.After(merged[j].TimeBucket)
+})
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
cmd/maxx/main.gointernal/core/database.gointernal/domain/model.gointernal/executor/executor.gointernal/handler/admin.gointernal/pricing/calculator.gointernal/pricing/tiered.gointernal/repository/interfaces.gointernal/repository/sqlite/migrations.gointernal/repository/sqlite/models.gointernal/repository/sqlite/proxy_request.gointernal/repository/sqlite/proxy_upstream_attempt.gointernal/repository/sqlite/usage_stats.gointernal/service/admin.goweb/src/components/provider-details-dialog.tsxweb/src/components/ui/index.tsweb/src/components/ui/progress.tsxweb/src/hooks/queries/index.tsweb/src/hooks/queries/use-pricing.tsweb/src/hooks/queries/use-requests.tsweb/src/hooks/queries/use-usage-stats.tsweb/src/index.cssweb/src/lib/transport/http-transport.tsweb/src/lib/transport/index.tsweb/src/lib/transport/interface.tsweb/src/lib/transport/types.tsweb/src/locales/en.jsonweb/src/locales/zh.jsonweb/src/pages/client-routes/components/provider-row.tsxweb/src/pages/overview.tsxweb/src/pages/providers/components/provider-row.tsxweb/src/pages/providers/index.tsxweb/src/pages/requests/detail.tsxweb/src/pages/requests/detail/RequestDetailPanel.tsxweb/src/pages/requests/detail/RequestDetailView.tsxweb/src/pages/requests/detail/RequestHeader.tsxweb/src/pages/requests/index.tsxweb/src/pages/routes/form.tsx
🚧 Files skipped from review as they are similar to previous changes (9)
- web/src/pages/providers/components/provider-row.tsx
- cmd/maxx/main.go
- web/src/pages/overview.tsx
- web/src/components/ui/progress.tsx
- web/src/pages/routes/form.tsx
- web/src/hooks/queries/index.ts
- web/src/components/ui/index.ts
- web/src/pages/providers/index.tsx
- web/src/lib/transport/http-transport.ts
🧰 Additional context used
🧬 Code graph analysis (15)
internal/executor/executor.go (4)
internal/domain/model.go (1)
ResponseModel(613-625)internal/repository/sqlite/models.go (2)
ResponseModel(302-308)ResponseModel(310-310)web/src/lib/transport/types.ts (1)
ResponseModel(615-621)internal/pricing/calculator.go (1)
GlobalCalculator(23-28)
web/src/lib/transport/interface.ts (2)
internal/service/admin.go (2)
RecalculateCostsResult(716-721)RecalculateRequestCostResult(867-873)web/src/lib/transport/types.ts (3)
RecalculateCostsResult(580-585)RecalculateRequestCostResult(606-612)PriceTable(809-812)
internal/pricing/calculator.go (1)
internal/pricing/tiered.go (2)
CalculateTieredCost(28-50)CalculateLinearCost(56-58)
internal/repository/sqlite/proxy_upstream_attempt.go (4)
internal/repository/interfaces.go (1)
ProxyUpstreamAttemptRepository(103-118)internal/domain/model.go (3)
ProxyUpstreamAttempt(249-296)AttemptCostData(299-312)ResponseModel(613-625)internal/repository/sqlite/models.go (4)
ProxyUpstreamAttempt(217-239)ProxyUpstreamAttempt(241-241)ResponseModel(302-308)ResponseModel(310-310)internal/repository/sqlite/db.go (1)
DB(17-20)
internal/repository/interfaces.go (2)
internal/domain/model.go (4)
ProxyRequest(185-247)Progress(780-786)ProxyUpstreamAttempt(249-296)AttemptCostData(299-312)internal/repository/sqlite/models.go (4)
ProxyRequest(182-212)ProxyRequest(214-214)ProxyUpstreamAttempt(217-239)ProxyUpstreamAttempt(241-241)
internal/handler/admin.go (2)
internal/repository/interfaces.go (1)
ProxyRequestFilter(63-65)internal/pricing/default_prices.go (1)
DefaultPriceTable(11-16)
web/src/hooks/queries/use-requests.ts (2)
web/src/lib/transport/types.ts (3)
CursorPaginationParams(251-259)CursorPaginationResult(262-269)ProxyRequest(171-205)internal/domain/model.go (1)
ProxyRequest(185-247)
web/src/hooks/queries/use-usage-stats.ts (3)
web/src/hooks/queries/index.ts (2)
useRecalculateCosts(115-115)usageStatsKeys(111-111)web/src/lib/query-client.ts (1)
queryClient(3-14)web/src/lib/transport/index.ts (1)
getTransport(95-95)
internal/repository/sqlite/migrations.go (1)
internal/repository/sqlite/db.go (1)
DB(17-20)
internal/service/admin.go (6)
internal/event/broadcaster.go (1)
Broadcaster(7-12)internal/repository/interfaces.go (1)
ProxyRequestFilter(63-65)web/src/lib/transport/index.ts (4)
CursorPaginationResult(36-36)RecalculateCostsResult(70-70)RecalculateCostsProgress(71-71)RecalculateRequestCostResult(69-69)internal/domain/model.go (3)
Progress(780-786)AttemptCostData(299-312)ResponseModel(613-625)internal/pricing/calculator.go (1)
GlobalCalculator(23-28)internal/usage/extractor.go (1)
Metrics(13-22)
web/src/lib/transport/types.ts (2)
web/src/lib/transport/index.ts (7)
StatsGranularity(68-68)RecalculateCostsResult(70-70)RecalculateCostsProgress(71-71)RecalculateStatsProgress(72-72)RecalculateRequestCostResult(69-69)ModelPricing(82-82)PriceTable(83-83)internal/service/admin.go (3)
RecalculateCostsResult(716-721)RecalculateCostsProgress(724-730)RecalculateRequestCostResult(867-873)
web/src/pages/requests/index.tsx (6)
web/src/pages/providers/types.ts (1)
ProviderTypeKey(15-15)web/src/hooks/queries/use-requests.ts (2)
useProxyRequests(26-31)useProxyRequestsCount(34-39)web/src/components/ui/index.ts (5)
TableCell(15-15)Select(27-27)SelectTrigger(32-32)SelectValue(33-33)SelectContent(28-28)internal/domain/model.go (1)
Provider(76-104)web/src/lib/transport/types.ts (1)
Provider(43-53)web/src/components/ui/select.tsx (4)
Select(177-177)SelectTrigger(185-185)SelectValue(186-186)SelectContent(178-178)
web/src/pages/requests/detail/RequestDetailPanel.tsx (3)
web/src/lib/transport/types.ts (1)
ModelPricing(793-806)web/src/hooks/queries/index.ts (1)
usePricing(148-148)web/src/hooks/queries/use-pricing.ts (1)
usePricing(16-22)
internal/domain/model.go (2)
internal/repository/sqlite/models.go (2)
ResponseModel(302-308)ResponseModel(310-310)web/src/lib/transport/types.ts (1)
ResponseModel(615-621)
web/src/hooks/queries/use-pricing.ts (1)
web/src/lib/transport/index.ts (1)
getTransport(95-95)
🔇 Additional comments (50)
internal/repository/sqlite/models.go (1)
191-191: 增加 EndTime 索引合理。 有助于时间范围筛选/排序的查询性能,未见副作用。web/src/index.css (1)
27-31: 浅色模式下 chart-4 和 chart-5 的亮度值过高,可能导致对比度不足
--chart-4(L=0.9214) 和--chart-5(L=0.9151) 的 OKLCh 亮度值接近 1(白色),在浅色背景上作为图表元素使用时可能存在可见性问题。建议验证这些颜色在实际图表中的显示效果,确保满足数据可视化的对比度需求。相比之下,深色模式的图表颜色(第 108-112 行)亮度值在 0.52-0.72 之间,对比度更为合适。
web/src/locales/en.json (1)
24-24: LGTM!新增的本地化键结构正确,命名语义清晰。
common.apply、requests.allProviders、stats中的筛选和重新计算相关键都已正确添加。Also applies to: 112-112, 577-592, 630-632
web/src/locales/zh.json (1)
576-591: LGTM!筛选相关键(
filter、filterConditions、时间范围选项、filterSummary)和重新计算相关键(recalculate、recalculateCosts、recalculateStats)的中文翻译准确自然,与 en.json 结构一致。Also applies to: 629-631
web/src/hooks/queries/use-requests.ts (2)
78-123: LGTM! 乐观更新逻辑正确处理了 Provider 过滤。新的实现正确地:
- 遍历所有列表查询并提取各自的过滤参数
- 当请求的
providerID不再匹配过滤条件时,从列表中移除- 仅在首页(无
before参数)时添加新请求实现逻辑清晰,边界情况处理得当。
125-138: 计数更新逻辑正确。遍历所有
requestsCount缓存并根据providerId过滤条件决定是否更新计数,逻辑正确。web/src/pages/requests/index.tsx (3)
389-416: LGTM! 成本格式化逻辑正确。
nanoToUSD转换逻辑与其他文件一致,CostCell的颜色分级阈值合理:
- ≥ $0.10: 红色高亮
- ≥ $0.01: 琥珀色
- < $0.01: 默认色
666-742: LGTM! ProviderFilter 组件实现良好。组件正确地:
- 使用
useMemo按类型分组 Provider- 各组内按名称字母排序
- 未知类型回退到
custom组- 正确处理 "all" 选项与数值 ID 的转换
524-531: LGTM! 时间显示逻辑改进合理。优先显示
endTime(已完成请求的结束时间),若无有效结束时间则回退显示startTime或createdAt并使用较浅样式区分,用户体验良好。internal/handler/admin.go (3)
1429-1439: LGTM! 定价端点实现简洁。直接返回
pricing.DefaultPriceTable(),符合现有 handler 模式。
782-795: LGTM! 单条请求成本重算端点实现正确。POST 方法限制、错误处理和响应格式均符合项目约定。
1313-1327: LGTM! 批量成本重算端点实现正确。注释清晰说明了功能:使用当前价格表重算所有 attempts 的成本。
web/src/lib/transport/types.ts (2)
579-612: LGTM! 成本重算相关类型定义完整。新增的
RecalculateCostsResult、RecalculateCostsProgress、RecalculateStatsProgress和RecalculateRequestCostResult接口与后端定义保持一致,phase 枚举值准确。
789-812: LGTM! 定价类型定义清晰。
ModelPricing使用 microUSD/百万 tokens 作为价格单位,与计费逻辑分离(计费使用 nanoUSD),设计合理。可选字段的默认值注释有助于理解。internal/core/database.go (1)
262-279: AdminService 注入 broadcaster 合理与上方 wsHub / wailsBroadcaster 的创建保持一致,后续广播能力可用。
web/src/hooks/queries/use-usage-stats.ts (1)
142-155: 注释更新清晰“仅重新聚合,不重算成本”的语义说明更明确,逻辑保持一致。
internal/executor/executor.go (1)
235-238: 中断 attempt 补齐结束时间补齐 EndTime/Duration 有助于一致性,OK。
internal/pricing/calculator.go (2)
37-60: 纳美元语义说明到位注释与返回单位保持一致,易于理解。
67-127: 成本计算路径与缓存 fallback OK线性/分段计算与 cache fallback 逻辑清晰,覆盖无 5m/1h 细分的场景。
web/src/lib/transport/index.ts (1)
66-83: 类型导出补齐新增 pricing/重算相关类型供上层复用,OK。
web/src/hooks/queries/use-pricing.ts (1)
1-22: 新增 pricing 查询 Hook 清晰键名与职责明确,易于复用。
web/src/lib/transport/interface.ts (4)
44-51: 新增类型导出清晰,OK。
106-106: getProxyRequestsCount 增加 providerId 参数,接口定义一致。
170-171: 新增成本重算接口定义清晰。
183-184: Pricing API 暴露 OK。web/src/pages/requests/detail/RequestHeader.tsx (2)
1-3: 新增 Tooltip/刷新图标导入 OK。
31-40: 新增重算回调/状态 props 设计清晰。web/src/pages/requests/detail.tsx (3)
1-16: 新增 hooks/keys 引入 OK。
28-29: transport/queryClient 接入用于重算流程,OK。
140-145: Header 透传重算回调与状态,接入顺畅。web/src/pages/requests/detail/RequestDetailView.tsx (2)
18-29: 价格格式化辅助函数引入清晰,OK。
31-55: 新增 nanoUSD 格式化与 costBreakdown props OK。internal/pricing/tiered.go (3)
3-20: 常量与 big.Int 预置清晰,OK。
22-71: 请确认 nano 成本不会超过 uint64 上限,否则会静默截断。
big.Int 防止了中间溢出,但t.Uint64()/baseCostNano + ...在超出2^64-1时会截断。若存在极端 token/价格输入,建议增加BitLen()保护或上限处理。
73-86: 保留 micro 兼容层与 NanoToUSD OK。internal/repository/sqlite/proxy_upstream_attempt.go (4)
3-10: 新增依赖导入 OK。
47-53: ListAll 实现清晰。
55-61: CountAll 实现清晰。
136-186: SQLite 变量上限问题需要验证代码参数计算是正确的(3×batchSize + 1 = 3×500+1 = 1501),但该问题的严重程度取决于使用的 SQLite 版本:
- SQLite < 3.32.0:默认上限 999,当前 1501 参数会超限导致失败
- SQLite ≥ 3.32.0:默认上限已提高到 32766,1501 参数无问题
需要确认项目使用的具体 SQLite 版本(当前依赖 modernc.org/sqlite v1.23.1),以判断这是否为实际问题。若使用较旧版本,建议按建议修改 batchSize:
```diff
-	// Use CASE WHEN for batch update
-	const batchSize = 500
+	// Use CASE WHEN for batch update
+	// SQLite 默认变量上限 999;单批参数量 = 3*batchSize + 1
+	const maxSQLiteVars = 999
+	const paramsPerRow = 3
+	batchSize := (maxSQLiteVars - 1) / paramsPerRow // 332
```

internal/domain/model.go (2)
298-312: 新增 AttemptCostData 结构设计合理。该结构包含了成本重算所需的所有字段,避免了加载完整的
ProxyUpstreamAttempt对象,有助于减少内存占用和提升批量处理性能。
776-786: Progress 结构清晰,适合 WebSocket 进度广播。字段设计完整,包含阶段、当前/总数、百分比和消息,满足前端进度展示需求。
internal/repository/interfaces.go (2)
62-65: ProxyRequestFilter 结构简洁,支持扩展。当前仅包含 ProviderID 过滤,后续可按需扩展其他过滤条件(如 ProjectID、ClientType 等)。
111-113: StreamForCostCalc 设计良好,适合大数据量处理。使用回调模式进行流式批处理,避免一次性加载所有数据到内存,对于大量 attempts 的场景非常有效。
internal/repository/sqlite/proxy_request.go (1)
355-374: RecalculateCostsFromAttemptsWithProgress 子查询更新策略高效。使用
SELECT COALESCE(SUM(cost), 0) FROM proxy_upstream_attempts WHERE proxy_request_id = proxy_requests.id作为子查询,在数据库层面完成聚合,减少了应用层的数据传输和处理开销。web/src/pages/requests/detail/RequestDetailPanel.tsx (3)
50-61: BigInt 使用正确,有效防止大 token 数量下的整数溢出。JavaScript 的 Number 类型在超过
Number.MAX_SAFE_INTEGER(约 9×10¹⁵) 时会丢失精度。使用 BigInt 进行中间计算后转换为 Number 是正确的做法。
105-120: 模型定价的前缀匹配逻辑设计合理。先尝试精确匹配,失败后寻找最长前缀匹配,这种降级策略能够很好地处理模型版本变化(如
claude-3-sonnet-20240229匹配到claude-3-sonnet)。
202-212: Cache Write 回退逻辑完善。当没有 5m/1h 分解数据但存在
cacheWriteCount时,使用 5m 缓存价格作为默认值,这与后端的兼容性处理一致。internal/service/admin.go (1)
734-864: RecalculateCosts 整体实现清晰,批处理策略合理。使用
StreamForCostCalc流式处理避免内存问题,每批次立即更新数据库减少事务大小,affectedRequestIDs跟踪受影响的请求用于后续汇总更新。整体设计符合大数据量处理的最佳实践。internal/repository/sqlite/usage_stats.go (2)
62-64: GranularityYear 截断逻辑正确。UTC 和时区感知版本都正确将时间截断到年初(1月1日 00:00:00),与其他粒度的实现保持一致。
Also applies to: 88-89
353-362: 分钟粒度的实时数据合并策略清晰。对于分钟粒度使用
mergeRealtimeMinuteStats保留各分钟的独立数据,对于其他粒度聚合到当前时间桶,这种区分处理确保了分钟级查询的精确性。
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```go
// Use ResponseModel for pricing (actual model from API response), fallback to MappedModel
pricingModel := attemptRecord.ResponseModel
if pricingModel == "" {
	pricingModel = attemptRecord.MappedModel
}
attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics)
```
ResponseModel 不在价表时会导致成本为 0
当前只要 ResponseModel 非空就用于计价;若该模型未配置价格,Calculate 会返回 0,导致成本低估。建议在价格表缺失时回退到 MappedModel。
🔧 建议修复(成功与失败分支同样适用)
```diff
- // Use ResponseModel for pricing (actual model from API response), fallback to MappedModel
- pricingModel := attemptRecord.ResponseModel
- if pricingModel == "" {
- 	pricingModel = attemptRecord.MappedModel
- }
- attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics)
+ // Use ResponseModel for pricing; fallback to MappedModel if pricing missing
+ pricingModel := attemptRecord.ResponseModel
+ if pricingModel == "" || pricing.GlobalCalculator().GetPricing(pricingModel) == nil {
+ 	pricingModel = attemptRecord.MappedModel
+ }
+ attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics)
```

Also applies to: 486-491
🤖 Prompt for AI Agents
In `@internal/executor/executor.go` around lines 409 - 414, The current pricing
logic uses attemptRecord.ResponseModel unconditionally which yields 0 if that
model lacks pricing; update the logic (both in the shown block and the similar
block around the other occurrence) to attempt pricing with ResponseModel first
and if pricing.GlobalCalculator().Calculate(pricingModel, metrics) returns 0 (or
indicates missing price) then fall back to attemptRecord.MappedModel and
recalculate so Cost is not underestimated; modify the code that sets
pricingModel and attemptRecord.Cost to perform this check and fallback using the
same symbols (attemptRecord.ResponseModel, attemptRecord.MappedModel,
pricing.GlobalCalculator().Calculate, attemptRecord.Cost).
```go
// 构建过滤条件
var filter *repository.ProxyRequestFilter
if p := r.URL.Query().Get("providerId"); p != "" {
	if providerID, err := strconv.ParseUint(p, 10, 64); err == nil {
		filter = &repository.ProxyRequestFilter{
			ProviderID: &providerID,
		}
	}
}

result, err := h.svc.GetProxyRequestsCursor(limit, before, after, filter)
```
providerId 解析错误处理与 count 端点不一致。
在 cursor 端点(此处),providerId 解析失败时会静默忽略;但在 count 端点(第 735-740 行)则返回 400 错误。建议保持一致的错误处理策略。
🔧 建议统一错误处理
```diff
 // 构建过滤条件
 var filter *repository.ProxyRequestFilter
 if p := r.URL.Query().Get("providerId"); p != "" {
-	if providerID, err := strconv.ParseUint(p, 10, 64); err == nil {
+	providerID, err := strconv.ParseUint(p, 10, 64)
+	if err != nil {
+		writeJSON(w, http.StatusBadRequest, map[string]string{"error": "invalid providerId"})
+		return
+	}
 	filter = &repository.ProxyRequestFilter{
 		ProviderID: &providerID,
 	}
-	}
 }
```

🤖 Prompt for AI Agents
In `@internal/handler/admin.go` around lines 703 - 714, The cursor handler
currently parses the providerId query and silently ignores parse errors while
building a *repository.ProxyRequestFilter before calling
h.svc.GetProxyRequestsCursor; this is inconsistent with the count endpoint which
returns a 400 on parse failure. Update the providerId parsing logic in the
cursor handler (where filter is built before calling GetProxyRequestsCursor) to
mirror the count endpoint: if strconv.ParseUint(p, 10, 64) returns an error,
return an HTTP 400 response (bad request) with an appropriate error message
instead of silently ignoring the parse error, ensuring consistent behavior
across endpoints.
```go
if err := db.Exec("UPDATE proxy_requests SET cost = cost * 1000 WHERE cost > 0").Error; err != nil {
	return err
}
// Convert cost in proxy_upstream_attempts table
if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost * 1000 WHERE cost > 0").Error; err != nil {
	return err
}
// Convert cost in usage_stats table
if err := db.Exec("UPDATE usage_stats SET cost = cost * 1000 WHERE cost > 0").Error; err != nil {
	return err
}
return nil
},
Down: func(db *gorm.DB) error {
	// Rollback: divide by 1000
	if err := db.Exec("UPDATE proxy_requests SET cost = cost / 1000").Error; err != nil {
		return err
	}
	if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost / 1000").Error; err != nil {
		return err
	}
	if err := db.Exec("UPDATE usage_stats SET cost = cost / 1000").Error; err != nil {
		return err
```
回滚条件与升级条件不一致,可能破坏负值成本
Up 只处理 cost > 0,如果历史中存在负成本(退款/冲销),Up 不会放大但 Down 会除以 1000,回滚后负值被错误缩小。建议 Up/Down 使用一致条件(如 cost != 0)或显式处理负值,并在注释中说明意图。
🔧 建议修复(条件对称)
```diff
- if err := db.Exec("UPDATE proxy_requests SET cost = cost * 1000 WHERE cost > 0").Error; err != nil {
+ if err := db.Exec("UPDATE proxy_requests SET cost = cost * 1000 WHERE cost != 0").Error; err != nil {
      return err
  }
- if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost * 1000 WHERE cost > 0").Error; err != nil {
+ if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost * 1000 WHERE cost != 0").Error; err != nil {
      return err
  }
- if err := db.Exec("UPDATE usage_stats SET cost = cost * 1000 WHERE cost > 0").Error; err != nil {
+ if err := db.Exec("UPDATE usage_stats SET cost = cost * 1000 WHERE cost != 0").Error; err != nil {
      return err
  }
@@
- if err := db.Exec("UPDATE proxy_requests SET cost = cost / 1000").Error; err != nil {
+ if err := db.Exec("UPDATE proxy_requests SET cost = cost / 1000 WHERE cost != 0").Error; err != nil {
      return err
  }
- if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost / 1000").Error; err != nil {
+ if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost / 1000 WHERE cost != 0").Error; err != nil {
      return err
  }
- if err := db.Exec("UPDATE usage_stats SET cost = cost / 1000").Error; err != nil {
+ if err := db.Exec("UPDATE usage_stats SET cost = cost / 1000 WHERE cost != 0").Error; err != nil {
      return err
  }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
if err := db.Exec("UPDATE proxy_requests SET cost = cost * 1000 WHERE cost != 0").Error; err != nil {
	return err
}
// Convert cost in proxy_upstream_attempts table
if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost * 1000 WHERE cost != 0").Error; err != nil {
	return err
}
// Convert cost in usage_stats table
if err := db.Exec("UPDATE usage_stats SET cost = cost * 1000 WHERE cost != 0").Error; err != nil {
	return err
}
return nil
},
Down: func(db *gorm.DB) error {
	// Rollback: divide by 1000
	if err := db.Exec("UPDATE proxy_requests SET cost = cost / 1000 WHERE cost != 0").Error; err != nil {
		return err
	}
	if err := db.Exec("UPDATE proxy_upstream_attempts SET cost = cost / 1000 WHERE cost != 0").Error; err != nil {
		return err
	}
	if err := db.Exec("UPDATE usage_stats SET cost = cost / 1000 WHERE cost != 0").Error; err != nil {
		return err
	}
```
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/migrations.go` around lines 27 - 49, The
migration's Up and Down handlers are asymmetric: Up multiplies only rows
matching "cost > 0" while Down divides all rows, which mis-handles negative
costs; update the Up and Down DB updates in the Up and Down functions (the
db.Exec calls for proxy_requests, proxy_upstream_attempts, usage_stats) to use a
symmetric condition (e.g., "WHERE cost != 0") or otherwise handle negative
values explicitly, and update the comment to state the chosen intent (e.g.,
"Convert all non-zero costs" or "Scale positive and negative costs
consistently") so both migration directions apply the same filter.
```go
func (r *ProxyUpstreamAttemptRepository) UpdateCost(id uint64, cost uint64) error {
	return r.db.gorm.Model(&ProxyUpstreamAttempt{}).Where("id = ?", id).Update("cost", cost).Error
}
```
🧩 Analysis chain
🏁 Script executed:
# Find the ProxyUpstreamAttempt struct definition
fd -t f proxy_upstream_attempt.go
Repository: awsl-project/maxx
Length of output: 178
🏁 Script executed:
# Search for the struct definition and UpdatedAt field
rg -A 30 "type ProxyUpstreamAttempt struct" internal/repository/sqlite/
Repository: awsl-project/maxx
Length of output: 2179
🏁 Script executed:
# Check other Update methods in the repository to see timestamp handling patterns
rg -B 2 -A 2 "func.*Update" internal/repository/sqlite/proxy_upstream_attempt.go
Repository: awsl-project/maxx
Length of output: 599
🏁 Script executed:
# Check the BaseModel definition to see UpdatedAt type
rg -A 10 "type BaseModel struct" internal/repository/sqlite/
Repository: awsl-project/maxx
Length of output: 699
🏁 Script executed:
# Check if there are other Update methods and how they handle timestamps
rg -B 2 -A 5 "Updates\|Update" internal/repository/sqlite/proxy_upstream_attempt.go | head -60Repository: awsl-project/maxx
Length of output: 43
🏁 Script executed:
# Check how the repository handles domain model vs database model
rg -B 5 -A 10 "func.*toModel" internal/repository/sqlite/proxy_upstream_attempt.goRepository: awsl-project/maxx
Length of output: 520
🏁 Script executed:
# Check the Update method implementation to see the full pattern
rg -B 2 -A 8 "func \(r \*ProxyUpstreamAttemptRepository\) Update\(a" internal/repository/sqlite/proxy_upstream_attempt.goRepository: awsl-project/maxx
Length of output: 514
🏁 Script executed:
# Check if there are other similar UpdateXxx methods and how they handle timestamps
rg -B 1 -A 3 "Update.*error" internal/repository/sqlite/proxy_upstream_attempt.goRepository: awsl-project/maxx
Length of output: 634
🏁 Script executed:
# Check the toTimestamp function to understand the conversion
rg -B 2 -A 5 "func toTimestamp" internal/repository/sqlite/Repository: awsl-project/maxx
Length of output: 880
🏁 Script executed:
# Check if time.Now().UnixMilli() is used elsewhere or if there's a standard conversion
rg "UnixMilli\|toTimestamp" internal/repository/sqlite/proxy_upstream_attempt.goRepository: awsl-project/maxx
Length of output: 43
🏁 Script executed:
# Look at the complete UpdateCost usage context
rg -B 5 -A 10 "BatchUpdateCosts" internal/repository/sqlite/proxy_upstream_attempt.goRepository: awsl-project/maxx
Length of output: 668
🏁 Script executed:
# Check if BatchUpdateCosts also updates updated_at
rg -A 20 "func \(r \*ProxyUpstreamAttemptRepository\) BatchUpdateCosts" internal/repository/sqlite/proxy_upstream_attempt.goRepository: awsl-project/maxx
Length of output: 584
🏁 Script executed:
# Check other repositories to see the pattern for similar Update methods
find internal/repository/sqlite -name "*.go" -type f -exec grep -l "func.*Update.*error" {} \; | head -5Repository: awsl-project/maxx
Length of output: 503
🏁 Script executed:
# Check one of those repositories for comparison
rg -B 2 -A 5 "\.Update\(" internal/repository/sqlite/ | head -40Repository: awsl-project/maxx
Length of output: 1650
🏁 Script executed:
# Check if BatchUpdateCosts handles updated_at field update
rg -A 30 "func \(r \*ProxyUpstreamAttemptRepository\) BatchUpdateCosts" internal/repository/sqlite/proxy_upstream_attempt.go | tail -25Repository: awsl-project/maxx
Length of output: 697
🏁 Script executed:
# Get more lines from BatchUpdateCosts to see the complete update statement
rg -A 50 "func \(r \*ProxyUpstreamAttemptRepository\) BatchUpdateCosts" internal/repository/sqlite/proxy_upstream_attempt.go | grep -A 30 "Build CASE WHEN"Repository: awsl-project/maxx
Length of output: 839
🏁 Script executed:
# Search for the actual Update/Updates call in BatchUpdateCosts
rg -A 60 "func \(r \*ProxyUpstreamAttemptRepository\) BatchUpdateCosts" internal/repository/sqlite/proxy_upstream_attempt.go | grep -E "Update|SET"Repository: awsl-project/maxx
Length of output: 297
UpdateCost should also refresh the updated_at field.
Because BaseModel's UpdatedAt is an int64, GORM does not maintain it automatically. The current UpdateCost() writes only cost, leaving updated_at stale and inconsistent with BatchUpdateCosts(), which can break data auditing and synchronization.
🔧 Suggested fix
func (r *ProxyUpstreamAttemptRepository) UpdateCost(id uint64, cost uint64) error {
- return r.db.gorm.Model(&ProxyUpstreamAttempt{}).Where("id = ?", id).Update("cost", cost).Error
+ return r.db.gorm.Model(&ProxyUpstreamAttempt{}).Where("id = ?", id).Updates(map[string]interface{}{
+ "cost": cost,
+ "updated_at": time.Now().UnixMilli(),
+ }).Error
}🤖 Prompt for AI Agents
In `@internal/repository/sqlite/proxy_upstream_attempt.go` around lines 132 - 134,
UpdateCost only updates cost, so BaseModel's UpdatedAt (int64) is never refreshed by
GORM; change UpdateCost to also set updated_at to the current unix timestamp
(consistent with BatchUpdateCosts), e.g. use Updates/UpdateColumn in
ProxyUpstreamAttemptRepository.UpdateCost to write both cost and updated_at (set to
time.Now().Unix()), keeping the UpdatedAt field typed as int64 and assigned correctly
to preserve audit consistency.
 if err != nil {
-	return nil, err
+	continue
 }
Scan errors are silently ignored.
When rows.Scan fails, the row is skipped with a bare continue and nothing is recorded, which can hide data-format problems or type mismatches. Add at least a debug-level log.
🔧 Suggested fix: add a log line
err := rows.Scan(
&endTime, &routeID, &providerID, &projectID, &apiTokenID, &clientType,
&model, &status, &durationMs,
&inputTokens, &outputTokens, &cacheRead, &cacheWrite, &cost,
)
if err != nil {
+ log.Printf("[queryRecentMinutesStats] Scan error: %v", err)
continue
 }
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/usage_stats.go` around lines 617 - 619, The code
currently swallows Scan errors (rows.Scan) by just doing continue; change this
so the error is logged at debug (or warn) level before continuing: capture the
err returned by rows.Scan, call the repository's logger (e.g., r.logger.Debugf
or similar logger in this package) with a clear message like "rows.Scan failed"
and include the err plus identifying context (e.g., the query/table or any
available column values/row index), then continue; ensure you reference
rows.Scan and the existing err variable when adding the log call.
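The log-and-continue pattern the fix describes can be exercised without a database by abstracting the scanner. The `rowScanner` interface and `fakeRows` type below are illustrative assumptions (the real code uses `*sql.Rows` and the full usage_stats column list); the point is that a failed Scan is logged and skipped rather than silently dropped or aborting the whole query.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// rowScanner abstracts the parts of *sql.Rows the loop uses (assumption for
// the sketch), so the pattern can be demonstrated without a driver.
type rowScanner interface {
	Next() bool
	Scan(dest ...any) error
	Err() error
}

// scanAll decodes one int per row; on a Scan failure it logs and skips the
// row instead of silently continuing.
func scanAll(rows rowScanner) ([]int, error) {
	var out []int
	for rows.Next() {
		var v int
		if err := rows.Scan(&v); err != nil {
			log.Printf("[scanAll] Scan error, skipping row: %v", err)
			continue
		}
		out = append(out, v)
	}
	return out, rows.Err()
}

// fakeRows yields row 1, then a Scan error, then row 3.
type fakeRows struct{ i int }

func (f *fakeRows) Next() bool { f.i++; return f.i <= 3 }
func (f *fakeRows) Scan(dest ...any) error {
	if f.i == 2 {
		return errors.New("type mismatch")
	}
	*(dest[0].(*int)) = f.i
	return nil
}
func (f *fakeRows) Err() error { return nil }

func main() {
	got, _ := scanAll(&fakeRows{})
	fmt.Println(got) // the bad row is skipped, but a log line records why
}
```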
	// Create progress channel
	progressChan := make(chan domain.Progress, 10)

	// Start goroutine to listen to progress and broadcast via WebSocket
	go func() {
		for progress := range progressChan {
			if s.broadcaster != nil {
				s.broadcaster.BroadcastMessage("recalculate_stats_progress", RecalculateUsageStatsProgress{
					Phase:      progress.Phase,
					Current:    progress.Current,
					Total:      progress.Total,
					Percentage: progress.Percentage,
					Message:    progress.Message,
				})
			}
		}
	}()

	// Call repository method with progress channel
	err := s.usageStatsRepo.ClearAndRecalculateWithProgress(progressChan)

	// Close channel when done
	close(progressChan)

	return err
}
RecalculateUsageStats has an ordering problem between the goroutine and the channel close.
The implementation closes progressChan (line 710) just before the function returns, while the goroutine (lines 692-704) may still be draining buffered messages. Closing the channel itself is safe in Go (range simply exits once the channel is drained), but the function can return before the goroutine has broadcast the last progress events, so late messages may never reach WebSocket clients.
Use a sync.WaitGroup and wait for the goroutine to finish after closing the channel (keep the close on the producer side), rather than relying on the buffered channel alone.
🔧 Suggested fix
func (s *AdminService) RecalculateUsageStats() error {
// Create progress channel
progressChan := make(chan domain.Progress, 10)
+ var wg sync.WaitGroup
+ wg.Add(1)
// Start goroutine to listen to progress and broadcast via WebSocket
go func() {
+ defer wg.Done()
for progress := range progressChan {
if s.broadcaster != nil {
s.broadcaster.BroadcastMessage("recalculate_stats_progress", RecalculateUsageStatsProgress{
Phase: progress.Phase,
Current: progress.Current,
Total: progress.Total,
Percentage: progress.Percentage,
Message: progress.Message,
})
}
}
}()
// Call repository method with progress channel
err := s.usageStatsRepo.ClearAndRecalculateWithProgress(progressChan)
// Close channel when done
close(progressChan)
+ wg.Wait() // ensure every progress message has been broadcast
return err
}
🤖 Prompt for AI Agents
In `@internal/service/admin.go` around lines 688 - 713, The goroutine that ranges
over progressChan in RecalculateUsageStats can race with closing progressChan;
change the flow so the sender closes the channel and the goroutine exits before
the function returns—for example, create a sync.WaitGroup, increment it before
starting the goroutine that reads from progressChan and calls
s.broadcaster.BroadcastMessage, then have the caller (the goroutine that invokes
s.usageStatsRepo.ClearAndRecalculateWithProgress) be the only one to close
progressChan after that repository call returns and wait for the WaitGroup to
finish; specifically update RecalculateUsageStats to use waitGroup.Add(1) before
the anonymous goroutine, defer waitGroup.Done() inside the goroutine,
close(progressChan) only after s.usageStatsRepo.ClearAndRecalculateWithProgress
returns, and call waitGroup.Wait() before returning the error.
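The producer-closes, consumer-waits choreography described above can be boiled down to a runnable sketch. `Progress` and `recalcWithProgress` are simplified stand-ins for `domain.Progress` and the service method (the field set and the three-event loop are assumptions), but the synchronization shape is the one the fix proposes: the producer is the only closer, and `wg.Wait()` guarantees every message was handed to `broadcast` before the function returns.

```go
package main

import (
	"fmt"
	"sync"
)

// Progress mirrors the shape of domain.Progress in spirit (assumed fields).
type Progress struct {
	Current, Total int
}

// recalcWithProgress: the consumer goroutine is guarded by a WaitGroup, the
// producer closes the channel only after it has finished sending, and the
// caller waits for the consumer before returning, so no broadcast is lost.
func recalcWithProgress(broadcast func(Progress)) error {
	progressChan := make(chan Progress, 10)

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for p := range progressChan {
			broadcast(p)
		}
	}()

	// Stand-in for usageStatsRepo.ClearAndRecalculateWithProgress.
	for i := 1; i <= 3; i++ {
		progressChan <- Progress{Current: i, Total: 3}
	}

	close(progressChan) // producer side closes…
	wg.Wait()           // …and we wait until every message was broadcast
	return nil
}

func main() {
	var seen []Progress
	_ = recalcWithProgress(func(p Progress) { seen = append(seen, p) })
	fmt.Println(len(seen))
}
```

Because `wg.Wait()` establishes a happens-before edge, reading `seen` after the call is race-free.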
// Small delay to allow UI to update (WebSocket messages need time to be processed)
time.Sleep(50 * time.Millisecond)
The 50 ms sleep significantly slows processing on large datasets.
Waiting 50 ms after every 100-row batch adds roughly 50 seconds for 100,000 attempts. Move the sleep into broadcastProgress, lower its frequency (e.g., sleep once every 10 batches), or remove it entirely and rely on the WebSocket's natural latency.
♻️ Optional optimization
// Broadcast progress
broadcastProgress("calculating", processedCount, int(totalCount),
fmt.Sprintf("Processed %d/%d attempts", processedCount, totalCount))
- // Small delay to allow UI to update (WebSocket messages need time to be processed)
- time.Sleep(50 * time.Millisecond)
+ // Sleep only after every 10th batch to cut total latency
+ if (processedCount / batchSize) % 10 == 0 {
+ time.Sleep(50 * time.Millisecond)
+ }
return nil
})
🤖 Prompt for AI Agents
In `@internal/service/admin.go` around lines 827 - 828, The unconditional
time.Sleep(50 * time.Millisecond) inside the processing loop significantly
increases total runtime for large datasets; remove this per-batch sleep and
instead implement one of: move the short sleep into broadcastProgress so
throttling happens only when sending progress updates, or throttle sleeps to run
once every N batches (e.g., every 10 batches) or use a time.Ticker to rate-limit
progress broadcasts; update the loop that calls time.Sleep and the
broadcastProgress function accordingly so progress emission is throttled without
sleeping on every batch.
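The every-Nth-batch throttle suggested above is easy to check with arithmetic. `shouldThrottle` is a hypothetical helper (not in the PR); with 100-row batches over 100,000 attempts it fires 100 times instead of 1000, cutting the added latency from about 50 s to about 5 s.

```go
package main

import "fmt"

// shouldThrottle reports whether a progress batch should pause for the UI:
// with per-batch sleeps removed, we only sleep once every nth batch.
func shouldThrottle(processedCount, batchSize, n int) bool {
	batch := processedCount / batchSize
	return batch > 0 && batch%n == 0
}

func main() {
	sleeps := 0
	const batchSize = 100
	for processed := batchSize; processed <= 100_000; processed += batchSize {
		if shouldThrottle(processed, batchSize, 10) {
			sleeps++ // time.Sleep(50 * time.Millisecond) would go here
		}
	}
	// 1000 batches, throttled to every 10th batch.
	fmt.Println(sleeps)
}
```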
 // 格式化成本 (纳美元 → 美元,向下取整到 6 位)
 function formatCost(nanoUsd: number): string {
   // 向下取整到 6 位小数 (microUSD 精度)
   const usd = Math.floor(nanoUsd / 1000) / 1_000_000;
   if (usd >= 1) {
     return `$${usd.toFixed(2)}`;
   }
   if (usd >= 0.01) {
     return `$${usd.toFixed(3)}`;
   }
-  return `$${usd.toFixed(4)}`;
+  return `$${usd.toFixed(6).replace(/\.?0+$/, '')}`;
 }
🧩 Analysis chain (verification scripts elided): located the ProviderStats definitions in internal/domain/model.go and web/src/lib/transport/types.ts, traced formatCost call sites, and confirmed the backend comments TotalCost as 纳美元 (nanoUSD) while the frontend still says 微美元 (microUSD).
Update the unit comment on ProviderStats.totalCost in types.ts.
The backend annotates TotalCost as // 成本 (纳美元) (nanoUSD) in internal/domain/model.go, but lines 326 and 564 of web/src/lib/transport/types.ts still label it // 微美元 (microUSD).
Change both comments to // 纳美元 so frontend and backend documentation agree. The frontend formatCost conversion logic already handles nanoUSD correctly.
🤖 Prompt for AI Agents
In `@web/src/pages/client-routes/components/provider-row.tsx` around lines 28 -
39, Update the comment for ProviderStats.totalCost in types.ts to match the
backend: change the unit annotation from "微美元" to "纳美元" wherever
ProviderStats.totalCost (and any other occurrences referring to TotalCost unit)
is documented in web/src/lib/transport/types.ts so frontend types match
internal/domain/model.go; keep the field name ProviderStats.totalCost and only
adjust the comment text.
  // Recalculate cost mutation
  const recalculateMutation = useMutation({
    mutationFn: () => transport.recalculateRequestCost(Number(id)),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: requestKeys.detail(Number(id)) });
    },
  });

  const handleRecalculateCost = useCallback(() => {
    recalculateMutation.mutate();
  }, [recalculateMutation]);
🧩 Analysis chain (verification scripts elided): examined detail.tsx, the requestKeys/attemptsKeys query-key definitions, the recalculateRequestCost transport method and its backend handler, RequestSidebar.tsx, and existing invalidateQueries usage to confirm the sidebar reads attempt costs from a separate attempts query.
After recalculating costs, also invalidate the attempts query cache so the sidebar shows up-to-date per-attempt costs.
The current code only invalidates requestKeys.detail(id), but the backend RecalculateRequestCost endpoint updates the cost of every attempt, so the requestKeys.attempts(id) cache keeps stale data and the attempt costs in the sidebar never refresh.
Add to onSuccess:
queryClient.invalidateQueries({ queryKey: requestKeys.attempts(Number(id)) });
🤖 Prompt for AI Agents
In `@web/src/pages/requests/detail.tsx` around lines 42 - 52, The recalculate cost
mutation (recalculateMutation) currently only invalidates
requestKeys.detail(Number(id)) on success, but the backend also updates attempt
costs so you must also invalidate the attempts cache; update the onSuccess
handler inside the useMutation for recalculateMutation to call
queryClient.invalidateQueries({ queryKey: requestKeys.attempts(Number(id)) }) in
addition to invalidating requestKeys.detail(Number(id)) so the sidebar attempt
costs refresh; verify handleRecalculateCost continues to call
recalculateMutation.mutate().
function formatCost(nanoUSD: number): string {
  if (nanoUSD === 0) return '-';
  // 向下取整到 6 位小数 (microUSD 精度)
  const usd = Math.floor(nanoUSD / 1000) / 1_000_000;
  const formatted = usd.toFixed(6).replace(/\.?0+$/, '');
  return `$${formatted}`;
🧩 Analysis chain (verification scripts elided): traced formatCost usage, the ProxyRequest cost field (uint64 in Go, number in TypeScript), and any cost limits or validation, then computed the precision boundary: Number.MAX_SAFE_INTEGER (2^53 − 1) in nanoUSD is roughly $9,007,199, beyond which integer cost values lose precision.
Pass cost values as strings or BigInt to preserve precision.
formatCost() currently takes nanoUSD as a number; any cost above Number.MAX_SAFE_INTEGER (about $9,007,199 in nanoUSD) loses precision. A single request rarely gets near that, but accumulated or high-cost totals are at risk. Either have the backend send the value as a string or switch the frontend to BigInt.
🤖 Prompt for AI Agents
In `@web/src/pages/requests/detail/RequestHeader.tsx` around lines 9 - 14,
formatCost currently takes a number which can lose precision for very large
nanoUSD values; change formatCost to accept a string or BigInt (e.g., nanoUSD:
string | bigint), convert string input to BigInt, use BigInt arithmetic to
compute microUSD and fractional parts (divide by 1_000n for micro, then divide
by 1_000_000n to get whole USD and use remainders to build up to 6 decimal
places), handle zero by returning '-' and strip trailing zeros from the
fractional part before joining, and update any callers to pass the backend
string value (or BigInt) instead of a JS number.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
web/src/pages/stats/index.tsx (1)
56-173: Fix the semantics and data truncation of the "all time" range

The 'all' time range is hard-coded to the last 5 years, contradicting the TimeRangeConfig interface (whose comment explicitly states that start: null means "all time"). This causes:

- Data truncation: rows older than 5 years are silently filtered out (the API sends the start parameter to the backend, which limits the query range accordingly)
- Wrong UI: after choosing "All", the summary shows a concrete date range (e.g., "January 1, 2020 - January 2025") instead of an "all time" label
- Design inconsistency: the interface allows start: null, but the implementation never uses it

Change the 'all' branch to:

case 'all':
  // All time; let the backend/data range decide
  start = null;
  granularity = 'year';
  durationMinutes = 0;
  break;

Note: durationMinutes is unused in the code (RPM/TPM is computed from stats.totalDurationMs), so setting it to 0 has no effect.

internal/repository/sqlite/proxy_request.go (1)
83-86: List queries do not select ttft_ms, so TTFT is always 0

The current Select column list omits ttft_ms, so toDomain cannot restore the TTFT value. Add the column to the field lists of both ListCursor and ListActive.

🔧 Suggested fix

- Select("id, created_at, updated_at, instance_id, request_id, session_id, client_type, request_model, response_model, start_time, end_time, duration_ms, is_stream, status, status_code, error, proxy_upstream_attempt_count, final_proxy_upstream_attempt_id, route_id, provider_id, project_id, input_token_count, output_token_count, cache_read_count, cache_write_count, cache_5m_write_count, cache_1h_write_count, cost, api_token_id")
+ Select("id, created_at, updated_at, instance_id, request_id, session_id, client_type, request_model, response_model, start_time, end_time, duration_ms, ttft_ms, is_stream, status, status_code, error, proxy_upstream_attempt_count, final_proxy_upstream_attempt_id, route_id, provider_id, project_id, input_token_count, output_token_count, cache_read_count, cache_write_count, cache_5m_write_count, cache_1h_write_count, cost, api_token_id")

- Select("id, created_at, updated_at, instance_id, request_id, session_id, client_type, request_model, response_model, start_time, end_time, duration_ms, is_stream, status, status_code, error, proxy_upstream_attempt_count, final_proxy_upstream_attempt_id, route_id, provider_id, project_id, input_token_count, output_token_count, cache_read_count, cache_write_count, cache_5m_write_count, cache_1h_write_count, cost, api_token_id").
+ Select("id, created_at, updated_at, instance_id, request_id, session_id, client_type, request_model, response_model, start_time, end_time, duration_ms, ttft_ms, is_stream, status, status_code, error, proxy_upstream_attempt_count, final_proxy_upstream_attempt_id, route_id, provider_id, project_id, input_token_count, output_token_count, cache_read_count, cache_write_count, cache_5m_write_count, cache_1h_write_count, cost, api_token_id").

Also applies to: 112-114
🤖 Fix all issues with AI agents
In `@internal/repository/sqlite/proxy_request.go`:
- Around line 232-235: UpdateCost and AddCost currently modify only the cost field
without touching updated_at; change them to also update updated_at, consistent with
BatchUpdateCosts: in ProxyRequestRepository.UpdateCost, use Updates(...) or
UpdateColumns(...) to write {"cost": cost, "updated_at": time.Now()} in one call; in
AddCost, use Updates together with gorm.Expr("cost + ?", delta) (or an equivalent) to
increment cost while setting updated_at = time.Now(); remember to import the time
package and use the same time.Now() semantics for audit consistency (see
BatchUpdateCosts, UpdateCost, and AddCost to locate the changes).
In `@web/src/components/routes/ClientTypeRoutesContent.tsx`:
- Around line 478-483: The Button usage sets variant={null}, which violates the
Button CVA prop type; change the prop to an allowed value (e.g.,
variant="ghost") to minimize default styling, or add a new no-style variant in
the Button component's variant union (e.g., "unstyled"/"custom") and update its
CVA config and type before using variant="unstyled" here; locate the instance
where Button is rendered (the element with onClick={() =>
handleAddRoute(provider, isNative)} and disabled={createRoute.isPending}) and
update the variant accordingly.
♻️ Duplicate comments (7)
web/src/pages/stats/index.tsx (1)
821-854: The success-rate color computation can still produce NaN (totalRequests = 0).

When totalRequests is 0, the direct division yields NaN and the color branch becomes unpredictable. Guard the denominator first.

🐛 Suggested fix

 iconClassName={cn(
-  (summary.successfulRequests / summary.totalRequests) >= 0.95
+  summary.totalRequests > 0 && (summary.successfulRequests / summary.totalRequests) >= 0.95
     ? 'text-emerald-600 dark:text-emerald-400'
-    : (summary.successfulRequests / summary.totalRequests) >= 0.8
+    : summary.totalRequests > 0 && (summary.successfulRequests / summary.totalRequests) >= 0.8
       ? 'text-amber-600 dark:text-amber-400'
       : 'text-red-600 dark:text-red-400'
 )}

internal/executor/executor.go (1)
409-414: Cost is still 0 when ResponseModel has no pricing table.

The code currently falls back only when ResponseModel is empty; if the model simply has no configured price, Calculate still returns 0 and the cost is underestimated. Fall back to MappedModel whenever the pricing table is missing (handle the success and failure branches identically).

🔧 Suggested fix (same change in both places)

-  pricingModel := attemptRecord.ResponseModel
-  if pricingModel == "" {
-    pricingModel = attemptRecord.MappedModel
-  }
-  attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics)
+  pricingModel := attemptRecord.ResponseModel
+  if pricingModel == "" || pricing.GlobalCalculator().GetPricing(pricingModel) == nil {
+    pricingModel = attemptRecord.MappedModel
+  }
+  attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics)

Also applies to: 487-492
web/src/lib/transport/types.ts (1)
581-614: RecalculateCostsProgress's phase union is missing aggregating_stats.

The backend emits aggregating_stats; omitting it from the frontend union causes type mismatches and missed branches.

🔧 Suggested fix

 export interface RecalculateCostsProgress {
-  phase: 'calculating' | 'updating_attempts' | 'updating_requests' | 'completed';
+  phase: 'calculating' | 'updating_attempts' | 'updating_requests' | 'aggregating_stats' | 'completed';
   current: number;
   total: number;
   percentage: number;
   message: string;
 }

internal/repository/sqlite/proxy_request.go (2)
237-241: AddCost accepts negative deltas that can write negative values.

SQLite will happily store a negative number in the unsigned column, corrupting data on read. Clamp at the SQL level.

🔧 Suggested fix

 func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error {
   return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).
-    Update("cost", gorm.Expr("cost + ?", delta)).Error
+    Update("cost", gorm.Expr("CASE WHEN cost + ? < 0 THEN 0 ELSE cost + ? END", delta, delta)).Error
 }
249-286: BatchUpdateCosts can exceed SQLite's parameter limit.

Each record consumes 3 parameters, so batchSize=500 yields about 1501 parameters, above SQLite's default limit of 999.

🔧 Suggested fix

- const batchSize = 500
+ const batchSize = 300 // 3 params/row; stays under SQLite's default 999-parameter limit

internal/repository/sqlite/proxy_upstream_attempt.go (2)
132-134: UpdateCost does not refresh updated_at.

With BaseModel using an int64 timestamp, GORM does not necessarily update it automatically, so the timestamp can go stale.

🔧 Suggested fix

 func (r *ProxyUpstreamAttemptRepository) UpdateCost(id uint64, cost uint64) error {
-  return r.db.gorm.Model(&ProxyUpstreamAttempt{}).Where("id = ?", id).Update("cost", cost).Error
+  return r.db.gorm.Model(&ProxyUpstreamAttempt{}).Where("id = ?", id).Updates(map[string]interface{}{
+    "cost": cost,
+    "updated_at": time.Now().UnixMilli(),
+  }).Error
 }
142-179: BatchUpdateCosts parameter-limit risk.

batchSize=500 produces about 1501 parameters, exceeding SQLite's default limit of 999.

🔧 Suggested fix

- const batchSize = 500
+ const batchSize = 300 // 3 params/row; stays under SQLite's default 999-parameter limit
🧹 Nitpick comments (4)
web/src/components/routes/ClientTypeRoutesContent.tsx (2)
50-58: Duplicate definition of the ProviderTypeKey type.

Per the provided snippets, ProviderTypeKey is already defined in web/src/pages/providers/types.ts; import it from the existing module to avoid maintenance burden and potential type drift. Also, the labels in PROVIDER_TYPE_LABELS are hard-coded English strings; since the file already uses useTranslation elsewhere, consider running them through i18n.

♻️ Suggested refactor

-type ProviderTypeKey = 'antigravity' | 'kiro' | 'custom';
+import { ProviderTypeKey } from '@/pages/providers/types';

For label i18n, add the corresponding keys to the translation files, then:

// inside the component
const getProviderTypeLabel = (type: ProviderTypeKey) => t(`providers.types.${type}`);
// 在组件内部使用 const getProviderTypeLabel = (type: ProviderTypeKey) => t(`providers.types.${type}`);
460-530: 分组 UI 渲染结构清晰。分组渲染逻辑实现良好:
- 正确跳过空的 provider 组
- 响应式网格布局适配不同屏幕尺寸
- 每个组有清晰的标题分隔
可选的无障碍改进:考虑为分组区域添加
role="group"和aria-labelledby属性,以提升屏幕阅读器的体验。web/src/components/ui/chart.tsx (1)
3-6: Prefer Recharts' public type exports (if v3 provides them). The types are currently imported from internal paths; a minor-version reshuffle of the package layout could break type resolution. If v3.6.0 already re-exports these types at the top level, import them from the public entry point to reduce upgrade risk.

♻️ Possible alternative (if the public exports are available)

-import type { NameType, ValueType } from 'recharts/types/component/DefaultTooltipContent'; -import type { LegendPayload, Props as DefaultLegendContentProps } from 'recharts/types/component/DefaultLegendContent'; +import type { NameType, ValueType, LegendPayload, DefaultLegendContentProps } from 'recharts';

web/src/pages/stats/index.tsx (1)
391-441: Clear the progress bar's completion setTimeout on unmount.
The timer can still fire after the component unmounts and call setState; clear it in the effect cleanup to avoid a potential leak or warning. 🧹 Suggested fix (example)

useEffect(() => { const transport = getTransport(); + let clearId: number | null = null; const unsubscribe = transport.subscribe<RecalculateCostsProgress>( 'recalculate_costs_progress', (data) => { setCostsProgress(data); // Clear progress after completion (with a delay to show final message) if (data.phase === 'completed') { - setTimeout(() => setCostsProgress(null), 3000); + if (clearId) window.clearTimeout(clearId); + clearId = window.setTimeout(() => setCostsProgress(null), 3000); } }, ); - return unsubscribe; + return () => { + if (clearId) window.clearTimeout(clearId); + unsubscribe(); + }; }, []); useEffect(() => { const transport = getTransport(); + let clearId: number | null = null; const unsubscribe = transport.subscribe<RecalculateStatsProgress>( 'recalculate_stats_progress', (data) => { setStatsProgress(data); // Clear progress after completion (with a delay to show final message) if (data.phase === 'completed') { - setTimeout(() => setStatsProgress(null), 3000); + if (clearId) window.clearTimeout(clearId); + clearId = window.setTimeout(() => setStatsProgress(null), 3000); } }, ); - return unsubscribe; + return () => { + if (clearId) window.clearTimeout(clearId); + unsubscribe(); + }; }, []);
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
web/pnpm-lock.yamlis excluded by!**/pnpm-lock.yaml
📒 Files selected for processing (15)
internal/adapter/provider/antigravity/adapter.gointernal/adapter/provider/custom/adapter.gointernal/adapter/provider/kiro/adapter.gointernal/adapter/provider/kiro/streaming.gointernal/domain/adapter_event.gointernal/domain/model.gointernal/executor/executor.gointernal/repository/sqlite/models.gointernal/repository/sqlite/proxy_request.gointernal/repository/sqlite/proxy_upstream_attempt.goweb/package.jsonweb/src/components/routes/ClientTypeRoutesContent.tsxweb/src/components/ui/chart.tsxweb/src/lib/transport/types.tsweb/src/pages/stats/index.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/repository/sqlite/models.go
🧰 Additional context used
🧬 Code graph analysis (8)
internal/adapter/provider/kiro/adapter.go (2)
internal/domain/errors.go (1)
NewProxyErrorWithMessage(61-63)internal/context/context.go (1)
GetEventChan(203-208)
internal/domain/adapter_event.go (3)
internal/domain/model.go (3)
RequestInfo(172-177)ResponseInfo(178-182)ResponseModel(619-631)web/src/lib/transport/types.ts (3)
RequestInfo(150-155)ResponseInfo(157-161)ResponseModel(617-623)internal/repository/sqlite/models.go (2)
ResponseModel(304-310)ResponseModel(312-312)
internal/executor/executor.go (5)
internal/domain/model.go (1)
ResponseModel(619-631)internal/repository/sqlite/models.go (2)
ResponseModel(304-310)ResponseModel(312-312)web/src/lib/transport/types.ts (1)
ResponseModel(617-623)internal/pricing/calculator.go (1)
GlobalCalculator(23-28)internal/domain/adapter_event.go (1)
EventFirstToken(16-16)
internal/repository/sqlite/proxy_upstream_attempt.go (4)
internal/repository/interfaces.go (1)
ProxyUpstreamAttemptRepository(103-118)internal/domain/model.go (3)
ProxyUpstreamAttempt(252-302)AttemptCostData(305-318)ResponseModel(619-631)internal/repository/sqlite/models.go (4)
ProxyUpstreamAttempt(218-241)ProxyUpstreamAttempt(243-243)ResponseModel(304-310)ResponseModel(312-312)internal/repository/sqlite/db.go (1)
DB(17-20)
web/src/pages/stats/index.tsx (3)
web/src/lib/transport/types.ts (3)
StatsGranularity(532-532)RecalculateCostsProgress(590-596)RecalculateStatsProgress(599-605)web/src/lib/transport/index.ts (4)
StatsGranularity(68-68)RecalculateCostsProgress(71-71)RecalculateStatsProgress(72-72)getTransport(95-95)web/src/hooks/queries/use-usage-stats.ts (2)
useRecalculateUsageStats(145-155)useRecalculateCosts(160-170)
web/src/components/routes/ClientTypeRoutesContent.tsx (4)
web/src/pages/providers/types.ts (1)
ProviderTypeKey(15-15)internal/repository/sqlite/models.go (2)
Provider(63-70)Provider(72-72)web/src/lib/transport/types.ts (1)
Provider(43-53)web/src/lib/theme.ts (2)
getProviderColor(553-555)ProviderType(9-20)
internal/domain/model.go (2)
internal/repository/sqlite/models.go (2)
ResponseModel(304-310)ResponseModel(312-312)web/src/lib/transport/types.ts (1)
ResponseModel(617-623)
web/src/lib/transport/types.ts (1)
internal/service/admin.go (3)
RecalculateCostsResult(716-721)RecalculateCostsProgress(724-730)RecalculateRequestCostResult(867-873)
🔇 Additional comments (34)
web/src/components/routes/ClientTypeRoutesContent.tsx (2)
154-190: Grouping logic is well implemented. The provider grouping handles unknown types correctly (falling back to the

custom group) and sorts each group alphabetically. Good defensive programming practice.

192-195: LGTM! Using

some() for short-circuit evaluation is an efficient implementation.

web/package.json (1)

38-38: Verify compatibility of the Recharts 3 major-version upgrade. This is a major bump; check Recharts 3's breaking changes and peer-dependency requirements, then run type checks and regression tests on the chart pages to confirm API and styling behavior stay consistent.
web/src/components/ui/chart.tsx (2)
100-121: Narrowing the tooltip types to TooltipContentProps is reasonable. It tracks Recharts 3's type definitions more closely and reduces generic drift.

238-289: Aligning the legend types with DefaultLegendContentProps/LegendPayload is a plus. The strongly typed payload helps keep later rendering logic stable.

web/src/pages/stats/index.tsx (8)

1-54: The import adjustments are clean; the new chart/progress dependencies match this change set.

191-287: The year-granularity axis padding and nanoUSD→USD conversion logic are consistent and quite readable.

294-371: Aggregation keys and label formatting support year cleanly, making the display consistent.

472-596: The disable logic for the recalculation mutations matches the progress-bar presentation; good user experience.

598-780: The left filter sidebar refactor is clear, and grouping/sorting providers by type makes it easier to use.

857-999: Switching the chart between requests/tokens dimensions with the cost line overlaid is a sensible combination.

1009-1051: The StatCard component abstraction is concise, easy to reuse, and keeps styling uniform.

1053-1117: FilterSection / FilterChip componentization is clean, with consistent state and interaction handling.
235-238: Attempt end time and duration are now recorded 👍 Backfilling EndTime/Duration in the defer prevents missing records on abnormal exit.

453-454: TTFT now flows through the request/attempt and event chains. Both the success and failure branches, plus the event handling, land TTFT; the chain is complete.
Also applies to: 524-525, 817-821, 872-878
web/src/lib/transport/types.ts (3)
183-185: TTFT fields added, with types aligned to the backend. Representing them in nanoseconds simplifies consistent display and transfer.

Also applies to: 223-225

259-260: New provider filter, WS events, and year granularity 👍 The interface extensions are clear and the type declarations consistent.

Also applies to: 284-285, 532-532

792-814: The price-table type declarations are clear. Field coverage is comprehensive, which eases later pricing display and validation.
internal/domain/model.go (4)
204-205: TTFT fields added to the request/attempt models. The semantics are clear and support full-chain latency tracing.

Also applies to: 262-263

245-246: Cost-unit comments unified to nanoUSD, consistent with the billing-precision upgrade.

Also applies to: 457-458, 502-503

304-318: AttemptCostData minimizes the recalculation payload 👍 The fields focus on what cost calculation needs, which eases batching and transfer.

470-471: The new year granularity and Progress type make stats easy to extend. A generic progress structure helps future background tasks reuse it.

Also applies to: 783-792

internal/repository/sqlite/proxy_request.go (3)

126-143: CountWithFilter is consistent with the caching logic; OK. Filtered queries hit the database, and unfiltered counts reuse the cache.

295-376: The batched updates and progress reporting for cost recalculation are robust. The batching plus progress-channel design makes long-running tasks observable.

395-396: TTFT is now mapped into the persistence model, aligned with the TTFTMs field for consistent serialization.

Also applies to: 432-433

internal/repository/sqlite/proxy_upstream_attempt.go (2)

47-130: Streaming reads of cost data suit large-table recalculation. The field trimming plus batched-callback design is practical.

198-199: The TTFT ms mapping is consistent between the persistence and domain model conversions.

Also applies to: 227-227
internal/adapter/provider/custom/adapter.go (1)
319-370: The FirstToken event fires after the first chunk is written, so TTFT is recorded accurately. Keying off the first successful write gives clear semantics.

internal/adapter/provider/antigravity/adapter.go (1)

651-709: TTFT is sent only after the first successful write; the logic is clear. Using the first successful write as the trigger and avoiding duplicate sends matches expectations.

internal/adapter/provider/kiro/streaming.go (3)

28-28: The new firstTokenTimeMs field for TTFT tracking is well placed.

173-177: TTFT is recorded after the event sends successfully; the trigger timing is right.

332-335: The getter makes it easy to pass TTFT through downstream.

internal/domain/adapter_event.go (1)

15-16: The new EventFirstToken event and its non-blocking send are in place. Timestamp handling and send style match the existing events and avoid blocking the main flow.

Also applies to: 31-37, 92-101

internal/adapter/provider/kiro/adapter.go (1)

365-366: TTFT is passed through uniformly in sendFinalEvents, with complete call-site coverage. The GetFirstTokenTimeMs propagation and event ordering are clear and as expected.
Also applies to: 373-374, 379-380, 385-386, 390-391, 395-405
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
// UpdateCost updates only the cost field of a request
func (r *ProxyRequestRepository) UpdateCost(id uint64, cost uint64) error {
	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Update("cost", cost).Error
}
🧩 Analysis chain
🏁 Script executed:
# First, let's find and examine the ProxyRequest model and related methods
cd /tmp && find . -type f -name "proxy_request.go" 2>/dev/null | head -5Repository: awsl-project/maxx
Length of output: 43
🏁 Script executed:
# Search for the file in a broader way
find . -type f -name "*proxy_request*" 2>/dev/nullRepository: awsl-project/maxx
Length of output: 106
🏁 Script executed:
# Let's check the repository structure first
git ls-files | grep -E "proxy_request|repository" | head -20Repository: awsl-project/maxx
Length of output: 896
🏁 Script executed:
# Read the proxy_request.go file
wc -l internal/repository/sqlite/proxy_request.goRepository: awsl-project/maxx
Length of output: 108
🏁 Script executed:
# Read the entire proxy_request.go file
cat -n internal/repository/sqlite/proxy_request.goRepository: awsl-project/maxx
Length of output: 18091
🏁 Script executed:
# Also check the ProxyRequest model definition
grep -n "type ProxyRequest" internal/repository/sqlite/models.go -A 30Repository: awsl-project/maxx
Length of output: 1494
🏁 Script executed:
# Check AddCost method (around line 237-241) to see if it has similar issue
sed -n '237,241p' internal/repository/sqlite/proxy_request.goRepository: awsl-project/maxx
Length of output: 323
UpdateCost and AddCost should both refresh updated_at, consistent with BatchUpdateCosts

These two methods update only the cost field without touching updated_at, while BatchUpdateCosts sets updated_at explicitly. For audit consistency, both methods should update updated_at as well.

🔧 Suggested fix

-func (r *ProxyRequestRepository) UpdateCost(id uint64, cost uint64) error {
-	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Update("cost", cost).Error
-}
+func (r *ProxyRequestRepository) UpdateCost(id uint64, cost uint64) error {
+	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Updates(map[string]interface{}{
+		"cost":       cost,
+		"updated_at": time.Now().UnixMilli(),
+	}).Error
+}

-func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error {
-	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).
-		Update("cost", gorm.Expr("cost + ?", delta)).Error
-}
+func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error {
+	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Updates(map[string]interface{}{
+		"cost":       gorm.Expr("cost + ?", delta),
+		"updated_at": time.Now().UnixMilli(),
+	}).Error
+}
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// UpdateCost updates only the cost field of a request
func (r *ProxyRequestRepository) UpdateCost(id uint64, cost uint64) error {
	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Updates(map[string]interface{}{
		"cost":       cost,
		"updated_at": time.Now().UnixMilli(),
	}).Error
}

// AddCost adds a delta to the cost field of a request
func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error {
	return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Updates(map[string]interface{}{
		"cost":       gorm.Expr("cost + ?", delta),
		"updated_at": time.Now().UnixMilli(),
	}).Error
}
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/proxy_request.go` around lines 232 - 235,
UpdateCost and AddCost currently change only the cost field without touching
updated_at; update both to also set updated_at, consistent with
BatchUpdateCosts: in ProxyRequestRepository.UpdateCost use Updates(...) or
UpdateColumns(...) to write {"cost": cost, "updated_at": time.Now()} in one
call, and in AddCost use Updates(...) with gorm.Expr("cost + ?", delta) (or an
equivalent) to increment cost while setting updated_at = time.Now(); remember
to import the time package and use the same time.Now() semantics to keep the
audit trail consistent (see BatchUpdateCosts, UpdateCost, AddCost to locate the
change sites).
<Button
  key={provider.id}
  variant={null}
  onClick={() => handleAddRoute(provider, isNative)}
  disabled={createRoute.isPending}
  className="h-auto group relative flex items-center justify-between gap-4 p-4 rounded-xl border border-border/40 bg-background hover:bg-secondary/50 hover:border-border shadow-sm hover:shadow transition-all duration-300 text-left disabled:opacity-50 disabled:cursor-not-allowed overflow-hidden"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the Button component definition
fd -e tsx -e ts -e jsx -e js | xargs grep -l "export.*Button" | grep -E "button|ui" | head -20Repository: awsl-project/maxx
Length of output: 124
🏁 Script executed:
#!/bin/bash
# Search for Button component definition with variant type
rg -n "variant" web/src/components/ui/button.tsx -A 3 -B 3Repository: awsl-project/maxx
Length of output: 1484
🏁 Script executed:
#!/bin/bash
# Check the type definition for Button variant prop
rg -n "interface ButtonProps|type ButtonProps" web/src/components -A 10Repository: awsl-project/maxx
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Look for other Button usages with variant={null}
rg -n 'variant=\{null\}' web/srcRepository: awsl-project/maxx
Length of output: 244
🏁 Script executed:
#!/bin/bash
# Get the full ButtonProps type with variant
rg -n "ButtonPrimitive.Props|VariantProps" web/src/components/ui/button.tsx -B 5 -A 5Repository: awsl-project/maxx
Length of output: 1174
🏁 Script executed:
#!/bin/bash
# Check the cva variant definitions more completely
cat -n web/src/components/ui/button.tsx | head -50Repository: awsl-project/maxx
Length of output: 3413
🏁 Script executed:
#!/bin/bash
# Look at how variant={null} is used in the context and surrounding code
cat -n web/src/components/routes/ClientTypeRoutesContent.tsx | sed -n '475,490p'Repository: awsl-project/maxx
Length of output: 1509
variant={null} violates the Button component's type constraint.

The Button component manages styles via CVA (class-variance-authority), and its variant prop is restricted to specific string values ('default' | 'outline' | 'secondary' | 'ghost' | 'destructive' | 'link'). Passing null is not in that union and will cause a TypeScript type error.

For fully custom styling, use variant="ghost" to minimize the default styles, or add a dedicated unstyled variant option to the Button component (such as variant="unstyled" or variant="custom").
🤖 Prompt for AI Agents
In `@web/src/components/routes/ClientTypeRoutesContent.tsx` around lines 478 -
483, The Button usage sets variant={null}, which violates the Button CVA prop
type; change the prop to an allowed value (e.g., variant="ghost") to minimize
default styling, or add a new no-style variant in the Button component's variant
union (e.g., "unstyled"/"custom") and update its CVA config and type before
using variant="unstyled" here; locate the instance where Button is rendered (the
element with onClick={() => handleAddRoute(provider, isNative)} and
disabled={createRoute.isPending}) and update the variant accordingly.
Add TTFT (Time To First Token) fields to ProxyRequest and ProxyUpstreamAttempt, recording the interval between a streaming API request starting and the first token arriving. Main changes: - domain: add a TTFT time.Duration field - sqlite: add the TTFTMs millisecond storage field and conversion logic - adapters: custom/antigravity/kiro adapters track first-token time - executor: handle the EventFirstToken event and sync it to proxyReq - frontend: add the matching ttft type field

- Add a status filter dropdown supporting COMPLETED/FAILED/IN_PROGRESS and other statuses - Backend API supports a status query parameter for filtering - Real-time update logic honors the status filter, so new data shows or hides correctly under the current filter - Add zebra striping to the table (slightly transparent odd rows) for readability - Failed rows use red backgrounds with different opacity depending on odd/even row - Hover effect uses a pseudo-element overlay so it does not cover the base background color - Fix TTFT data loss in list queries (add ttft_ms to the SELECT) - Cost formatting keeps the full 6 decimal places

- Remove the TableRow pseudo-element to avoid column misalignment - Add explicit widths and padding to all cells - Fix the thin line at the top of Failed rows (remove border-l) - Add border-collapse to the Table to eliminate row gaps - Remove the TableRow default border - Strengthen the hover effect on normal rows - Aggregate the TTFT field into UsageStats - Fix several lint warnings

- Token stats include all tokens (input + output + cacheRead + cacheWrite) - Show cache hit rate (cacheRead / (input + cacheRead)) - Show average time-to-first-token (TTFT) after RPM - Implement graceful shutdown for the Proxy Server, waiting for all requests to finish before closing - Handle SIGTERM/SIGINT so docker down shuts down gracefully - Shutdown times out after 2 minutes, checking active requests every 5 seconds

- Replace periodic polling with channel-based signaling, logging in real time as requests complete - Set end_time and compute duration_ms when marking requests failed - Handle the failure status in the upstream_attempt table in the same pass
Force-pushed aba8483 to 1fe3117
Set a correct end_time for requests/attempts in FAILED state with end_time=0, so they stop pinning to the top of the request list
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
internal/converter/claude_to_openai.go (1)
310-316: OpenAIToolCallWithIndex should be removed; it is a redundant, unused type. Codebase search confirms:

- OpenAIToolCallWithIndex is defined only at internal/converter/claude_to_openai.go:311, with no other uses anywhere in the codebase
- OpenAIToolCall (defined at internal/converter/types_openai.go:54) already includes an Index field, annotated "Used in streaming"
- The streaming code (lines 260-265) uses OpenAIToolCall directly

OpenAIToolCallWithIndex is dead code; delete it.
web/src/locales/zh.json (1)
112-123: Align “全部供应商” with the term “提供商” used elsewhere.

The rest of the UI mostly uses “提供商”; changing this to “全部提供商” keeps the terminology consistent. 💬 Suggested adjustment - "allProviders": "全部供应商", + "allProviders": "全部提供商",
- "allProviders": "全部供应商", + "allProviders": "全部提供商",web/src/locales/en.json (1)
579-635: The granularity_year localization key is missing, affecting code consistency. Both en.json and zh.json lack the

granularity_year key. Although the StatsGranularity type includes 'year' and the code uses that granularity for the "All Time" option (stats/index.tsx:167), this does not currently break the UI, because granularity labels are rendered via date.toLocaleString() rather than i18n translation keys. Add the key to both files for consistency and easier future maintenance:

Suggested keys

In en.json:

"granularity_month": "Month", + "granularity_year": "Year",

In zh.json:

"granularity_month": "月", + "granularity_year": "年",

web/src/pages/stats/index.tsx (1)

262-287: Accumulating cost as a number risks precision loss

The cost field is summed in aggregateForChart() using the JavaScript number type (line 268); at larger data scales it can exceed Number.MAX_SAFE_INTEGER (≈9×10^15) and lose precision. Unify the documented cost unit (the code comments mention nanoUSD while UsageStatsSummary is annotated microUSD), accumulate with bigint or a Decimal type before summing, and convert to USD only in the presentation layer.
🤖 Fix all issues with AI agents
In `@internal/core/request_tracker.go`:
- Around line 29-38: The RequestTracker has races: RequestTracker.Add currently
checks isShutdown then calls wg.Add(1) which can panic if a concurrent
WaitWithTimeout/WaitWithContext/GracefulShutdown is calling wg.Wait(), and
multiple shutdown methods directly close t.shutdownCh causing close panics; fix
by introducing a sync.Mutex (e.g., trackMu) to serialize Add() and all
Wait/Shutdown paths (lock in Add before checking isShutdown and calling wg.Add,
unlock after increment) and use a sync.Once (e.g., shutdownOnce) to perform a
single close(t.shutdownCh) from WaitWithTimeout, WaitWithContext, and
GracefulShutdown while also setting the isShutdown flag via atomic.Store so Add
sees shutdown consistently; update RequestTracker.Add, WaitWithTimeout,
WaitWithContext, and GracefulShutdown to use these new primitives (trackMu,
shutdownOnce, isShutdown) to prevent WaitGroup misuse and double-close panics.
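The mutex-plus-sync.Once pattern this prompt describes can be sketched as follows. The names are illustrative, not the project's actual RequestTracker API:

```go
package main

import (
	"fmt"
	"sync"
)

// Tracker sketches the suggested fix: trackMu serializes Add against
// the shutdown path so wg.Add can never race with wg.Wait, and
// shutdownOnce guarantees shutdownCh is closed exactly once.
type Tracker struct {
	trackMu      sync.Mutex
	wg           sync.WaitGroup
	isShutdown   bool
	shutdownCh   chan struct{}
	shutdownOnce sync.Once
}

func NewTracker() *Tracker {
	return &Tracker{shutdownCh: make(chan struct{})}
}

// Add registers an in-flight request; it refuses once shutdown began.
func (t *Tracker) Add() bool {
	t.trackMu.Lock()
	defer t.trackMu.Unlock()
	if t.isShutdown {
		return false
	}
	t.wg.Add(1)
	return true
}

func (t *Tracker) Done() { t.wg.Done() }

// Shutdown flips the flag under the lock, closes the channel exactly
// once, then waits for all in-flight requests. Safe to call twice.
func (t *Tracker) Shutdown() {
	t.trackMu.Lock()
	t.isShutdown = true
	t.trackMu.Unlock()
	t.shutdownOnce.Do(func() { close(t.shutdownCh) })
	t.wg.Wait()
}

func main() {
	tr := NewTracker()
	ok := tr.Add()
	go tr.Done()
	tr.Shutdown()
	tr.Shutdown() // no double-close panic thanks to sync.Once
	fmt.Println(ok, tr.Add())
}
```

Because Add takes the same lock that flips isShutdown, a successful Add always happens-before the shutdown flag is observed, which removes the wg.Add/wg.Wait race the review describes.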
In `@internal/repository/sqlite/proxy_request.go`:
- Around line 82-107: ListCursor currently uses id as the cursor while ordering
by end_time (and id), which causes unstable pagination; change to a composite
cursor on (end_time, id) in ProxyRequestRepository.ListCursor: update the
function signature to accept composite cursor values (e.g., afterEndTime,
afterID and beforeEndTime, beforeID or a single struct), change the Where
clauses from "id > ?" / "id < ?" to tuple comparisons like "(end_time, id) > (?,
?)" and "(end_time, id) < (?, ?)", pass the corresponding end_time and id
values, and ensure the query.Order remains "CASE WHEN end_time = 0 THEN 0 ELSE 1
END, end_time DESC, id DESC" so comparisons match the ordering; alternatively,
if you prefer a simpler change, switch ordering to a single id-based order
(e.g., "id DESC") and keep the existing id-only cursor logic.
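The composite-cursor predicate can be exercised in memory. This sketch (not the repository's code) uses the expanded form of the tuple comparison `(end_time, id) < (?, ?)` and shows that pagination stays stable across rows sharing the same end_time:

```go
package main

import "fmt"

type row struct{ endTime, id int }

// pageAfter returns up to limit rows that sort strictly after the
// cursor under ORDER BY end_time DESC, id DESC. The condition is the
// expanded form of the SQL tuple comparison (end_time, id) < (?, ?).
func pageAfter(rows []row, curEnd, curID, limit int) []row {
	var out []row
	for _, r := range rows {
		if r.endTime < curEnd || (r.endTime == curEnd && r.id < curID) {
			out = append(out, r)
			if len(out) == limit {
				break
			}
		}
	}
	return out
}

func main() {
	// Already sorted by (end_time DESC, id DESC); three rows tie on end_time=5.
	sorted := []row{{5, 3}, {5, 2}, {5, 1}, {4, 5}, {4, 4}}
	p1 := pageAfter(sorted, 1<<30, 1<<30, 2) // first page
	last := p1[len(p1)-1]
	p2 := pageAfter(sorted, last.endTime, last.id, 2) // resume at (5, 2)
	fmt.Println(p1, p2) // no row duplicated or skipped despite the ties
}
```

An id-only cursor against this ordering would skip or repeat the tied rows; carrying both columns in the cursor is what makes the resume point unambiguous.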
In `@internal/stats/pure_test.go`:
- Around line 58-60: The tests call time.LoadLocation and discard the error
(e.g., the shanghai and tokyo location lookups), which can return an error in
minimal CI images lacking tzdata; update each LoadLocation call (references:
variables named shanghai, tokyo and any other LoadLocation usages in
pure_test.go) to check the returned error and call t.Skipf("skipping test:
failed to load timezone %q: %v", "<ZoneName>", err) when err != nil so the test
is skipped instead of panicking on a nil location. Ensure you handle all
occurrences (the other LoadLocation pairs mentioned) the same way.
In `@internal/stats/pure.go`:
- Around line 33-49: TruncateToGranularity currently uses
t.Truncate(time.Minute/time.Hour) which aligns to absolute UTC moments and
breaks buckets for non-integer hour timezones; update TruncateToGranularity
(function name) to build minute- and hour-granularity buckets using time.Date on
t.In(loc) (like the existing Day/Month cases) by zeroing out smaller components
(seconds/nanoseconds for Minute, minutes/seconds/nanos for Hour) so truncation
is done in local wall-clock time; keep the loc parameter required and
nil-checked before using t.In(loc).
In `@web/src/hooks/queries/use-requests.ts`:
- Around line 34-38: The requestsCount cache (used by useProxyRequestsCount with
queryKey ['requestsCount', providerId, status]) is only incremented on new
requests and not adjusted when an existing request's providerId or status
changes; update your mutation handlers to, for each cached requestsCount entry,
compute whether the request matched the cached filter both before and after the
change and then increment or decrement that specific cache by the difference
(±1) accordingly: locate places that update requests (the codepath that receives
updated request objects) and for each cache entry built from
useProxyRequestsCount compare oldRequest.status/providerId vs
newRequest.status/providerId and call the queryClient.setQueryData for
['requestsCount', providerId, status] to add or subtract 1 as appropriate so
counts remain consistent with filtered lists.
In `@web/src/lib/transport/interface.ts`:
- Around line 170-171: The RecalculateRequestCostResult's oldCost and newCost
are defined as number which will lose precision for backend uint64 nanoUSD
values; update the transport/types and interface so recalculateRequestCost() and
RecalculateRequestCostResult use string for oldCost/newCost (or a
BigInt-serializable representation) and adjust callers to parse into BigInt or a
decimal library, ensuring any other references to oldCost/newCost and
recalculateRequestCost are updated to handle the string->BigInt/decimal
conversion and validation of potential overflow.
In `@web/src/lib/transport/types.ts`:
- Line 534: The frontend type StatsGranularity includes 'week' and 'year' but
the backend (internal/domain/model.go constants and TruncateToGranularity() in
internal/stats/pure.go) only supports minute/hour/day/month and silently falls
back to hour; remove 'week' and 'year' from the StatsGranularity union to match
backend behavior and update any callers/usage sites (serialization, UI
selectors, API payload builders) to stop sending those values; alternatively, if
you prefer backend support instead, add GranularityWeek and GranularityYear
constants in internal/domain/model.go and implement proper handling in
TruncateToGranularity() (and add validation) so those values are supported —
choose one approach and make sure all affected places reference the same set of
granularity symbols.
In `@web/src/pages/stats/index.tsx`:
- Around line 1091-1104: The clear button currently remains in the DOM and can
receive keyboard focus when hidden (button with props onClear and className
using showClear), causing an a11y issue; update the JSX so that when showClear
is false the button is not focusable — either don't render the button at all
(wrap rendering in showClear && <button ...>) or, if you must keep it, set
tabIndex={-1} and aria-hidden="true" on the button (the element using onClear
and className/cn) so it is removed from the tab order and accessibility tree
when hidden.
- Around line 403-410: handleResetFilters currently resets providerId,
projectId, clientType, apiTokenId and model but omits timeRange; update the
handleResetFilters function to also reset timeRange (use the app's default e.g.
'24h' or 'all' depending on intent) by calling setTimeRange with that default,
and update the comment or any UI text that says "Reset all filters" to reflect
the included timeRange reset; refer to the handleResetFilters function and the
setTimeRange state setter when making the change.
- Around line 164-169: In the switch branch for case 'all' (where variables
start, granularity, durationMinutes are set), replace the hard-coded 5-year
window with a true "all time" sentinel by setting start = null (compatible with
the Date | null contract) instead of new Date(...); keep granularity as 'year'
if needed, and adjust or clear durationMinutes if your caller treats it as
optional for all-time queries so the request doesn't get artificially truncated.
♻️ Duplicate comments (11)
web/src/pages/stats/index.tsx (1)
871-879: The success-rate icon color can still be NaN (repeat issue). This still divides when totalRequests = 0, which can yield NaN and make the style branch unstable. This was flagged in an earlier review; please confirm it has been fixed consistently.

web/src/components/routes/ClientTypeRoutesContent.tsx (1)

478-483: variant={null} violates the Button component's type constraint. Flagged in a previous review. Button's variant prop must be one of the predefined string values; passing null causes a TypeScript type error. Use

variant="ghost" or add a custom unstyled variant.

internal/executor/executor.go (1)

409-414: Cost can come out as 0 when ResponseModel has no price table

When ResponseModel is set but has no configured pricing, Calculate returns 0, undercounting the cost. Fall back to MappedModel and recalculate when the price table is missing (handle the success and failure branches identically).

🔧 Suggested fix (applies to both the success and failure branches)

- // Use ResponseModel for pricing (actual model from API response), fallback to MappedModel - pricingModel := attemptRecord.ResponseModel - if pricingModel == "" { - pricingModel = attemptRecord.MappedModel - } - attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics) + // Use ResponseModel for pricing; fallback to MappedModel if pricing missing + pricingModel := attemptRecord.ResponseModel + if pricingModel == "" || pricing.GlobalCalculator().GetPricing(pricingModel) == nil { + pricingModel = attemptRecord.MappedModel + } + attemptRecord.Cost = pricing.GlobalCalculator().Calculate(pricingModel, metrics)

Also applies to: 487-492

internal/handler/admin.go (1)

703-719: providerId parse failures are still silently ignored

Inconsistent with the count endpoint's 400 handling (same issue flagged in the previous review).

web/src/pages/requests/index.tsx (1)

44-52: Reuse the shared ProviderTypeKey definition and localize the labels. The page currently redefines ProviderTypeKey, and PROVIDER_TYPE_LABELS hard-codes English strings ("Custom"). Reuse the definition from providers/types.ts and generate labels with t(...) inside ProviderFilter to avoid type drift and missing localization.

internal/repository/sqlite/proxy_request.go (1)

245-254: AddCost can write negative values and skips updated_at. When delta is negative and its magnitude exceeds the current cost,

cost + ? can write a negative value (SQLite will accept it), polluting the uint64 column; meanwhile UpdateCost/AddCost do not update updated_at, inconsistent with BatchUpdateCosts. 🔧 Suggested fix (guard against negatives + sync updated_at)

func (r *ProxyRequestRepository) UpdateCost(id uint64, cost uint64) error { - return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Update("cost", cost).Error + return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Updates(map[string]any{ + "cost": cost, + "updated_at": time.Now().UnixMilli(), + }).Error } func (r *ProxyRequestRepository) AddCost(id uint64, delta int64) error { - return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id). - Update("cost", gorm.Expr("cost + ?", delta)).Error + return r.db.gorm.Model(&ProxyRequest{}).Where("id = ?", id).Updates(map[string]any{ + "cost": gorm.Expr("CASE WHEN cost + ? < 0 THEN 0 ELSE cost + ? END", delta, delta), + "updated_at": time.Now().UnixMilli(), + }).Error }

internal/repository/sqlite/proxy_upstream_attempt.go (2)

132-134: UpdateCost should also refresh updated_at. Flagged in a previous review. Because BaseModel.UpdatedAt is int64, GORM will not update it automatically; update updated_at in the same call.

170-214: BatchUpdateCosts with batchSize=500 exceeds SQLite's parameter limit. Flagged in a previous review. Each row consumes 3 parameters (the id and cost in the CASE WHEN, plus the id in the WHERE IN), totaling 500×2 + 1 + 500 = 1501 parameters, over SQLite's default limit of 999.

web/src/lib/transport/types.ts (1)

592-599: RecalculateCostsProgress's phase type does not fully match the backend. Flagged in a previous review. The backend

RecalculateCostsProgress (see internal/service/admin.go lines 720-726) includes an 'aggregating_stats' phase that the frontend definition is missing.

internal/repository/sqlite/usage_stats.go (2)

522-524: Scan errors are silently swallowed. Flagged in a previous review. When

rows.Scan fails, the row is merely skipped with continue without logging, which can hide data-format problems.

1308-1318: RollUp errors are silently discarded. Flagged in a previous review.

All four RollUpAllWithProgress calls in ClearAndRecalculateWithProgress discard both return value and error with _, _. If a rollup fails, the stats are incomplete but the function still reports success.
🧹 Nitpick comments (9)
internal/adapter/provider/antigravity/transform_tools.go (1)
97-100: 可选:统一 nil 检查风格以保持一致性。
buildTools已简化为len(claudeReq.Tools) == 0,但此处仍使用显式nil检查。虽然功能正确,但为了代码风格一致性,可以考虑统一写法。♻️ 建议修改
func hasWebSearchTool(claudeReq *ClaudeRequest) bool { - if claudeReq.Tools == nil { - return false - } - - for _, tool := range claudeReq.Tools { + for _, tool := range claudeReq.Tools { if isWebSearchTool(tool) { return true }在 Go 中,对
nil切片进行range迭代会直接跳过循环体,因此可以安全地移除显式nil检查。web/src/pages/stats/index.tsx (1)
412-442: 完成后的 setTimeout 未清理,卸载后可能触发 setState进度完成后会
setTimeout清理状态,但未在卸载/重复触发时清理定时器,可能导致内存泄漏或卸载后的状态更新。建议保存 timer 并在 cleanup 中清除(两处 effect 都同样处理)。♻️ 建议处理定时器清理
+ const costsTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null); + const statsTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null); useEffect(() => { const transport = getTransport(); const unsubscribe = transport.subscribe<RecalculateCostsProgress>( 'recalculate_costs_progress', (data) => { setCostsProgress(data); if (data.phase === 'completed') { - setTimeout(() => setCostsProgress(null), 3000); + if (costsTimerRef.current) clearTimeout(costsTimerRef.current); + costsTimerRef.current = setTimeout(() => setCostsProgress(null), 3000); } }, ); - return unsubscribe; + return () => { + if (costsTimerRef.current) clearTimeout(costsTimerRef.current); + unsubscribe(); + }; }, []); useEffect(() => { const transport = getTransport(); const unsubscribe = transport.subscribe<RecalculateStatsProgress>( 'recalculate_stats_progress', (data) => { setStatsProgress(data); if (data.phase === 'completed') { - setTimeout(() => setStatsProgress(null), 3000); + if (statsTimerRef.current) clearTimeout(statsTimerRef.current); + statsTimerRef.current = setTimeout(() => setStatsProgress(null), 3000); } }, ); - return unsubscribe; + return () => { + if (statsTimerRef.current) clearTimeout(statsTimerRef.current); + unsubscribe(); + }; }, []);internal/adapter/provider/kiro/compliant_event_stream_parser.go (2)
32-35: 建议记录被忽略的错误以便调试。当前实现静默忽略
ParseStream返回的错误。虽然继续处理部分消息是合理的容错策略,但完全丢弃错误会影响问题排查。建议至少将错误添加到
errs切片中,或记录日志:♻️ 建议的修改
messages, err := cesp.robustParser.ParseStream(streamData) if err != nil { - _ = err // Continue with partial messages. + errs = append(errs, fmt.Errorf("parse stream: %w", err)) }注意:需要将
errs的声明移到此处之前。
63-67: 同上,建议保留错误信息。
ParseStream方法同样静默忽略错误。由于此方法返回([]SSEEvent, error),可以考虑返回累积的错误或记录日志,而不是完全丢弃。♻️ 建议的修改
messages, err := cesp.robustParser.ParseStream(data) if err != nil { - _ = err // Continue with partial messages. + // 可选:记录错误但继续处理 + // log.Printf("ParseStream partial error: %v", err) }internal/stats/aggregator.go (1)
22-25: 考虑在 RunPeriodicSync 中处理 AggregateEvent 中的错误信息。AggregateEvent 包含 Error 字段,当前实现完全忽略了聚合过程中可能发生的错误。虽然
for range模式能正确等待聚合完成,但未检查错误会导致问题无法被察觉。建议至少记录错误信息以便监控。♻️ 可选的改进
func (sa *StatsAggregator) RunPeriodicSync() { - for range sa.usageStatsRepo.AggregateAndRollUp() { - // drain the channel to wait for completion - } + for event := range sa.usageStatsRepo.AggregateAndRollUp() { + if event.Error != nil { + log.Printf("aggregation error in %s: %v", event.Phase, event.Error) + } + } }internal/repository/sqlite/proxy_upstream_attempt.go (1)
63-130: 循环变量r遮蔽了方法接收者。Line 101 的
for i, r := range results中的r会遮蔽方法接收者r *ProxyUpstreamAttemptRepository,虽然在当前代码中不会导致错误,但可能造成混淆。建议重命名循环变量。建议修改
- for i, r := range results { + for i, row := range results { batch[i] = &domain.AttemptCostData{ - ID: r.ID, - ProxyRequestID: r.ProxyRequestID, - ResponseModel: r.ResponseModel, - MappedModel: r.MappedModel, - RequestModel: r.RequestModel, - InputTokenCount: r.InputTokenCount, - OutputTokenCount: r.OutputTokenCount, - CacheReadCount: r.CacheReadCount, - CacheWriteCount: r.CacheWriteCount, - Cache5mWriteCount: r.Cache5mWriteCount, - Cache1hWriteCount: r.Cache1hWriteCount, - Cost: r.Cost, + ID: row.ID, + ProxyRequestID: row.ProxyRequestID, + ResponseModel: row.ResponseModel, + MappedModel: row.MappedModel, + RequestModel: row.RequestModel, + InputTokenCount: row.InputTokenCount, + OutputTokenCount: row.OutputTokenCount, + CacheReadCount: row.CacheReadCount, + CacheWriteCount: row.CacheWriteCount, + Cache5mWriteCount: row.Cache5mWriteCount, + Cache1hWriteCount: row.Cache1hWriteCount, + Cost: row.Cost, } }internal/domain/model.go (1)
464-469: Consider adding a GranularityYear constant.

The frontend `StatsGranularity` already includes `'year'`, but the backend `Granularity` constants lack a matching `GranularityYear`. If the All Time query uses year granularity, add this constant to keep frontend and backend consistent.

Suggested change

```diff
 const (
 	GranularityMinute Granularity = "minute"
 	GranularityHour   Granularity = "hour"
 	GranularityDay    Granularity = "day"
 	GranularityMonth  Granularity = "month"
+	GranularityYear   Granularity = "year"
 )
```

internal/repository/sqlite/usage_stats.go (2)
407-445: Use sort.Slice instead of bubble sort.

The sort in `mergeRealtimeMinuteStats` is an O(n²) bubble sort, which can hurt performance on large datasets. Use the standard library's `sort.Slice`.

Suggested change

```diff
+import "sort"

 // 合并实时数据和历史数据,按时间倒序排列
 merged := append(realtimeStats, filtered...)
-// 按 TimeBucket 倒序排列
-for i := 0; i < len(merged)-1; i++ {
-	for j := i + 1; j < len(merged); j++ {
-		if merged[j].TimeBucket.After(merged[i].TimeBucket) {
-			merged[i], merged[j] = merged[j], merged[i]
-		}
-	}
-}
+// 按 TimeBucket 倒序排列
+sort.Slice(merged, func(i, j int) bool {
+	return merged[i].TimeBucket.After(merged[j].TimeBucket)
+})
```
1050-1052: Scan errors are silently ignored.

The `rows.Scan` error in `aggregateMinute` is only skipped with `continue`; add at least a debug-level log entry to aid troubleshooting.

Suggested change

```diff
 if err != nil {
+	log.Printf("[aggregateMinute] Scan error: %v", err)
 	continue
 }
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `coverage.out` is excluded by `!**/*.out`
- `web/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (51)
- cmd/maxx/main.go
- internal/adapter/provider/antigravity/adapter.go
- internal/adapter/provider/antigravity/claude_request_postprocess.go
- internal/adapter/provider/antigravity/response.go
- internal/adapter/provider/antigravity/retry_delay.go
- internal/adapter/provider/antigravity/transform_request.go
- internal/adapter/provider/antigravity/transform_tools.go
- internal/adapter/provider/custom/adapter.go
- internal/adapter/provider/kiro/adapter.go
- internal/adapter/provider/kiro/compliant_event_stream_parser.go
- internal/adapter/provider/kiro/robust_parser.go
- internal/adapter/provider/kiro/streaming.go
- internal/converter/claude_to_codex.go
- internal/converter/claude_to_openai.go
- internal/converter/codex_to_gemini.go
- internal/converter/gemini_to_openai.go
- internal/core/database.go
- internal/core/request_tracker.go
- internal/core/server.go
- internal/core/task.go
- internal/domain/adapter_event.go
- internal/domain/model.go
- internal/executor/executor.go
- internal/handler/admin.go
- internal/handler/proxy.go
- internal/repository/interfaces.go
- internal/repository/sqlite/models.go
- internal/repository/sqlite/proxy_request.go
- internal/repository/sqlite/proxy_upstream_attempt.go
- internal/repository/sqlite/usage_stats.go
- internal/service/admin.go
- internal/service/backup.go
- internal/stats/aggregator.go
- internal/stats/pure.go
- internal/stats/pure_test.go
- main.go
- web/package.json
- web/src/components/routes/ClientTypeRoutesContent.tsx
- web/src/components/ui/chart.tsx
- web/src/components/ui/table.tsx
- web/src/hooks/queries/use-requests.ts
- web/src/lib/transport/http-transport.ts
- web/src/lib/transport/interface.ts
- web/src/lib/transport/types.ts
- web/src/locales/en.json
- web/src/locales/zh.json
- web/src/pages/requests/detail/RequestDetailPanel.tsx
- web/src/pages/requests/detail/RequestDetailView.tsx
- web/src/pages/requests/detail/RequestHeader.tsx
- web/src/pages/requests/index.tsx
- web/src/pages/stats/index.tsx
💤 Files with no reviewable changes (1)
- internal/adapter/provider/antigravity/response.go
✅ Files skipped from review due to trivial changes (2)
- internal/adapter/provider/kiro/robust_parser.go
- internal/adapter/provider/antigravity/transform_request.go
🚧 Files skipped from review as they are similar to previous changes (3)
- web/src/components/ui/chart.tsx
- web/package.json
- web/src/pages/requests/detail/RequestDetailView.tsx
🧰 Additional context used
🧬 Code graph analysis (23)
internal/adapter/provider/kiro/adapter.go (2)
internal/domain/errors.go (1)
NewProxyErrorWithMessage(61-63)internal/context/context.go (1)
GetEventChan(203-208)
internal/converter/gemini_to_openai.go (1)
internal/converter/types_openai.go (1)
OpenAIFunction(48-52)
internal/service/admin.go (9)
internal/event/broadcaster.go (1)
Broadcaster(7-12)internal/repository/interfaces.go (1)
ProxyRequestFilter(63-66)web/src/lib/transport/types.ts (6)
CursorPaginationResult(266-273)DashboardData(784-793)RecalculateCostsResult(585-590)RecalculateCostsProgress(593-599)ResponseModel(620-626)RecalculateRequestCostResult(611-617)web/src/lib/transport/index.ts (5)
CursorPaginationResult(36-36)DashboardData(74-74)RecalculateCostsResult(70-70)RecalculateCostsProgress(71-71)RecalculateRequestCostResult(69-69)web/src/components/ui/index.ts (1)
Progress(63-63)web/src/components/ui/progress.tsx (1)
Progress(73-73)internal/pricing/calculator.go (1)
GlobalCalculator(23-28)internal/repository/sqlite/models.go (2)
ResponseModel(305-311)ResponseModel(313-313)internal/usage/extractor.go (1)
Metrics(13-22)
internal/executor/executor.go (5)
web/src/lib/transport/types.ts (1)
ResponseModel(620-626)internal/domain/model.go (1)
ResponseModel(618-630)internal/repository/sqlite/models.go (2)
ResponseModel(305-311)ResponseModel(313-313)internal/pricing/calculator.go (1)
GlobalCalculator(23-28)internal/domain/adapter_event.go (1)
EventFirstToken(16-16)
internal/core/server.go (2)
internal/core/request_tracker.go (1)
RequestTracker(12-20)internal/handler/proxy.go (1)
RequestTracker(19-23)
web/src/lib/transport/types.ts (2)
web/src/lib/transport/index.ts (7)
StatsGranularity(68-68)RecalculateCostsResult(70-70)RecalculateCostsProgress(71-71)RecalculateStatsProgress(72-72)RecalculateRequestCostResult(69-69)ModelPricing(82-82)PriceTable(83-83)internal/service/admin.go (3)
RecalculateCostsResult(713-718)RecalculateCostsProgress(721-727)RecalculateRequestCostResult(864-870)
web/src/pages/stats/index.tsx (4)
web/src/lib/transport/types.ts (3)
StatsGranularity(534-534)RecalculateCostsProgress(593-599)RecalculateStatsProgress(602-608)web/src/lib/transport/index.ts (4)
StatsGranularity(68-68)RecalculateCostsProgress(71-71)RecalculateStatsProgress(72-72)getTransport(95-95)web/src/hooks/queries/use-usage-stats.ts (1)
useRecalculateCosts(160-170)web/src/lib/utils.ts (1)
cn(8-10)
internal/handler/proxy.go (1)
internal/core/request_tracker.go (1)
RequestTracker(12-20)
internal/core/request_tracker.go (1)
internal/handler/proxy.go (1)
RequestTracker(19-23)
web/src/hooks/queries/use-requests.ts (4)
web/src/lib/transport/factory.ts (1)
getTransport(79-102)web/src/lib/query-client.ts (1)
queryClient(3-14)web/src/lib/transport/types.ts (3)
CursorPaginationParams(253-263)ProxyRequest(171-206)CursorPaginationResult(266-273)internal/domain/model.go (1)
ProxyRequest(185-250)
internal/repository/sqlite/proxy_upstream_attempt.go (3)
internal/repository/interfaces.go (1)
ProxyUpstreamAttemptRepository(104-121)internal/domain/model.go (3)
ProxyUpstreamAttempt(252-302)AttemptCostData(305-318)ResponseModel(618-630)internal/repository/sqlite/models.go (4)
ProxyUpstreamAttempt(218-241)ProxyUpstreamAttempt(243-243)ResponseModel(305-311)ResponseModel(313-313)
web/src/pages/requests/detail/RequestDetailPanel.tsx (4)
web/src/lib/transport/types.ts (1)
ModelPricing(798-811)web/src/hooks/queries/use-pricing.ts (1)
usePricing(16-22)web/src/hooks/queries/index.ts (1)
usePricing(148-148)web/src/lib/utils.ts (1)
formatDuration(17-26)
internal/converter/claude_to_openai.go (1)
internal/converter/types_openai.go (1)
OpenAIToolCall(54-59)
internal/handler/admin.go (2)
internal/repository/interfaces.go (1)
ProxyRequestFilter(63-66)internal/pricing/default_prices.go (1)
DefaultPriceTable(11-16)
web/src/pages/requests/detail/RequestHeader.tsx (2)
web/src/lib/utils.ts (1)
formatDuration(17-26)web/src/components/ui/tooltip.tsx (3)
Tooltip(56-56)TooltipTrigger(56-56)TooltipContent(56-56)
web/src/lib/transport/http-transport.ts (2)
web/src/lib/transport/types.ts (3)
RecalculateCostsResult(585-590)RecalculateRequestCostResult(611-617)PriceTable(814-817)internal/service/admin.go (2)
RecalculateCostsResult(713-718)RecalculateRequestCostResult(864-870)
web/src/components/ui/table.tsx (1)
web/src/lib/utils.ts (1)
cn(8-10)
web/src/lib/transport/interface.ts (2)
web/src/lib/transport/types.ts (3)
RecalculateCostsResult(585-590)RecalculateRequestCostResult(611-617)PriceTable(814-817)internal/service/admin.go (2)
RecalculateCostsResult(713-718)RecalculateRequestCostResult(864-870)
internal/stats/pure.go (2)
internal/domain/model.go (8)
ClientType(6-6)Granularity(462-462)GranularityMinute(465-465)GranularityHour(466-466)GranularityDay(467-467)GranularityMonth(468-468)UsageStats(472-503)ProviderStats(439-459)internal/repository/sqlite/models.go (2)
UsageStats(279-300)UsageStats(302-302)
internal/core/database.go (2)
internal/core/request_tracker.go (2)
RequestTracker(12-20)NewRequestTracker(23-27)internal/handler/proxy.go (1)
RequestTracker(19-23)
internal/repository/interfaces.go (2)
internal/domain/model.go (3)
ProxyRequest(185-250)ProxyUpstreamAttempt(252-302)AggregateEvent(794-802)internal/repository/sqlite/models.go (4)
ProxyRequest(182-213)ProxyRequest(215-215)ProxyUpstreamAttempt(218-241)ProxyUpstreamAttempt(243-243)
internal/repository/sqlite/proxy_request.go (4)
internal/repository/interfaces.go (2)
ProxyRequestRepository(68-102)ProxyRequestFilter(63-66)internal/domain/model.go (2)
ProxyRequest(185-250)Progress(785-791)internal/repository/sqlite/models.go (2)
ProxyRequest(182-213)ProxyRequest(215-215)internal/repository/sqlite/db.go (1)
DB(17-20)
internal/repository/sqlite/usage_stats.go (1)
internal/stats/pure.go (4)
TruncateToGranularity(36-50)AttemptRecord(14-31)AggregateAttempts(55-132)RollUp(137-206)
🔇 Additional comments (112)
internal/service/backup.go (1)
591-593: 简化提示字符串 👍
移除fmt.Sprintf并改用字面量更清晰,也避免不必要的格式化调用。internal/adapter/provider/antigravity/claude_request_postprocess.go (1)
34-34: 初始化modified的方式清晰且一致。直接用
injectAntigravityIdentity的返回值初始化状态,逻辑更直观且不会影响后续修改判断。internal/adapter/provider/antigravity/transform_tools.go (1)
10-13: LGTM!简化后的条件检查是正确的 Go 惯用写法。在 Go 中,对
nil切片调用len()会返回 0,因此len(claudeReq.Tools) == 0已经涵盖了nil和空切片两种情况,行为与之前完全一致。internal/adapter/provider/antigravity/retry_delay.go (1)
50-55: LGTM!变量声明方式的修改是正确的。
var reason RateLimitReason的零值为 0,与RateLimitReasonUnknown(通过iota定义为 0)语义上完全等价,代码逻辑无变化。internal/converter/claude_to_codex.go (1)
69-70: LGTM!移除 map 访问时未使用的空白标识符
_是正确的简化。单值形式m["input"]更符合 Go 惯用写法,且后续json.Marshal能正确处理nil值。internal/converter/codex_to_gemini.go (1)
55-57: LGTM!与其他转换器文件保持一致的简化。
content的nil情况会在后续的类型 switch 中被隐式处理(不匹配任何 case)。internal/converter/claude_to_openai.go (2)
77-82: LGTM!与其他转换器文件保持一致的简化,移除未使用的空白标识符。
157-161: LGTM!格式对齐调整,代码逻辑无变化。
internal/converter/gemini_to_openai.go (1)
104-112: 类型转换完全兼容,代码改动正确。
OpenAIFunction(decl)的类型转换有效。Gemini 的GeminiFunctionDecl与OpenAIFunction具有完全相同的底层结构:均包含Name(string)、Description(string)、Parameters(interface{})三个字段,字段顺序和 JSON 标签也一致。该转换避免了手动逐字段赋值,简化了代码逻辑。web/src/locales/en.json (2)
24-24: 新增通用按钮文案 OK。
112-123: 请求筛选与 TTFT 文案清晰。web/src/locales/zh.json (2)
24-24: 新增“应用”文案 OK。
578-635: 统计筛选与重算相关文案覆盖完整。web/src/components/ui/table.tsx (2)
10-10: LGTM!添加
border-collapse类是表格样式的合理改进,确保单元格边框正确合并显示。
51-51: LGTM!简化 TableRow 样式,保留 hover 和选中状态效果,符合 UI 整体清理方向。
web/src/components/routes/ClientTypeRoutesContent.tsx (3)
50-58: LGTM!Provider 类型分组的常量定义清晰,使用
Record类型确保类型安全,便于后续维护和扩展。
154-190: LGTM!分组逻辑实现合理:
- 对未知类型有 fallback 到
custom的容错处理- 每组内按名称字母排序,用户体验一致
245-247: LGTM!位置排序逻辑正确,确保 Antigravity routes 重排时保持原有位置槽位的数值顺序。
web/src/pages/requests/detail/RequestHeader.tsx (3)
9-14: 成本格式化实现正确。
formatCost将 nanoUSD 转换为 USD,保留 6 位小数精度(microUSD 级别)。对于单次请求成本,JavaScript number 类型的精度足够。之前 review 提到的精度问题主要影响超大累计成本(> $9M),对单次请求场景可以接受。
91-98: LGTM!TTFT(首 Token 响应时间)显示块实现正确,使用
formatDuration格式化纳秒值,并对零值显示-。
149-162: LGTM!成本重算 UI 实现良好:
- Tooltip 提供操作提示
isRecalculating状态控制 loading 动画和禁用状态- 条件渲染确保无回调时不显示按钮
main.go (1)
47-49: LGTM!将空循环改为 10ms 睡眠是正确的优化,避免了 CPU 空转(busy-wait),同时保持足够的响应速度等待
appCtx初始化。internal/adapter/provider/kiro/streaming.go (2)
173-177: TTFT 时间点记录位置合理TTFT 在成功发送事件并 flush 后记录,这正确地捕获了客户端收到第一个 token 的时间点。使用
firstTokenTimeMs == 0检查确保只记录第一次。
28-28: TTFT 字段和访问器实现正确新增的
firstTokenTimeMs字段和GetFirstTokenTimeMs()访问器为 TTFT 跟踪提供了清晰的接口。Also applies to: 331-335
internal/adapter/provider/custom/adapter.go (2)
319-319: TTFT 跟踪实现正确在流式响应中正确实现了 TTFT 跟踪:在第一次成功写入并 flush 后发送
SendFirstToken事件。与 kiro adapter 的实现保持一致。Also applies to: 365-370
589-589: 类型声明简化移除了显式的
string类型注解,Go 会自动从字面量推断类型。这是一个无害的简化。internal/repository/sqlite/models.go (3)
237-237: ProxyUpstreamAttempt 和 UsageStats 的 TTFT 字段
TTFTMs和TotalTTFTMs字段的添加与ProxyRequest保持一致,支持完整的 TTFT 数据追踪链路。Also applies to: 294-294
191-194: EndTime 索引和 TTFTMs 字段添加正确
EndTime上的索引有助于提升按时间范围查询的性能TTFTMs字段已在多个模块中被积极使用,包括统计计算、上游请求跟踪和使用数据聚合,成功支持了 TTFT 跟踪功能新增列由 GORM 的 AutoMigrate 自动处理,无需显式迁移脚本。
21-21:db.Name()的使用已验证为正确在 GORM v1.31.1 中,
db.Name()是获取数据库驱动名称的推荐方法,与db.Dialector.Name()功能等同。此更改是安全的,无需进一步修改。internal/adapter/provider/kiro/adapter.go (2)
365-391: TTFT 传播路径完整所有流式处理路径都正确地将
streamCtx.GetFirstTokenTimeMs()传递给sendFinalEvents,确保 TTFT 数据在各种场景(正常完成、客户端断开、错误等)下都能被捕获。
395-404: sendFinalEvents 中的 TTFT 事件发送新增的
firstTokenTimeMs参数和条件发送逻辑实现正确:
- 仅当
firstTokenTimeMs > 0时发送事件,避免发送无效数据- 在发送其他事件之前发送 TTFT 事件,保证事件顺序的逻辑性
internal/domain/adapter_event.go (2)
15-16: EventFirstToken 事件类型新增的
EventFirstToken事件类型用于 TTFT 跟踪,与现有事件类型保持一致的命名和定义风格。
92-101: SendFirstToken 方法实现正确方法实现遵循了与其他
Send*方法相同的模式:
- nil 检查防止空指针
timeMs == 0检查防止发送无效时间戳- 非阻塞发送避免通道满时阻塞调用方
internal/core/task.go (1)
29-41: 审查注释不适用于当前代码审查注释中的代码片段与实际仓库中的代码不一致。实际代码(第 29-78 行)使用的是分离的任务方法(
runMinuteAggregation()、runHourlyRollup()等),而非直接使用AggregateAndRollUp()的通道排空模式。审查注释中提出的关于通道关闭和直接错误处理的问题不适用于当前实现。但实际代码存在一个合法的问题:第 82-87 行中的方法调用返回错误但被忽略(
_, _模式),这可能掩盖聚合或汇总失败。建议考虑添加错误日志或监控。Likely an incorrect or invalid review comment.
internal/core/server.go (3)
12-18: 关机超时常量拆分清晰
命名明确、便于统一调参与复用。
129-148: 先等待代理请求完成的关机顺序合理
先触发 tracker 再进入 HTTP Shutdown 的流程清晰。
150-155: HTTP Shutdown 超时与强制关闭处理到位
超时后 fallback 到 Close 并记录日志,行为可控。web/src/lib/transport/interface.ts (3)
44-51: 新增类型导入 OK
与新增 API 能力保持一致。
106-106: 计数接口参数扩展一致
签名与后端过滤能力对齐。
183-185: Pricing API 接口补充合理
调用面统一,便于前端消费价表。internal/handler/proxy.go (4)
18-23: RequestTracker 接口定义清晰
满足关机期间的请求统计与拦截需要。
31-33: 为 ProxyHandler 增加 tracker 与锁是合理的
多线程读写安全。
50-55: SetRequestTracker 线程安全实现 OK
锁的粒度与用途匹配。
61-74: 关机中拒绝新请求的逻辑合理
返回 503 并及时释放计数,行为明确。internal/core/database.go (5)
73-74: ServerComponents 暴露 RequestTracker 便于关机协同
对上层控制更友好。
175-180: 启动时标记陈旧 attempt 失败可避免悬挂
有助于数据一致性。
285-285: AdminService 传入 broadcaster 线路清晰
依赖注入方向一致。
309-312: 创建并注入 request tracker 流程 OK
与 ProxyHandler 的关机协作一致。
325-325: 组件集合包含 tracker 便于上层使用
对 Stop 阶段统一管理有帮助。web/src/lib/transport/http-transport.ts (6)
47-54: 新增类型导入 OK
与新增 API 方法匹配。
279-288: 按条件拼接查询参数合理
避免发送多余过滤条件。
538-538: 支持 model 过滤的查询拼装 OK
与筛选面板能力一致。
550-555: 重算成本接口封装清晰
返回值处理一致。
557-562: 单请求重算接口封装清晰
路径与接口命名一致。
598-601: 获取价表接口封装 OK
方便 UI 直接消费。internal/executor/executor.go (3)
237-238: 未完成 attempt 的结束时间补写合理
确保异常路径也能落盘 duration。
453-454: 成功/失败分支同步 TTFT 到 ProxyRequest OK
TTFT 透传位置一致。Also applies to: 524-525
817-821: 首次 token 事件计算 TTFT 并广播合理
实时与批处理路径一致性好。Also applies to: 872-878
internal/adapter/provider/antigravity/adapter.go (4)
114-137: 客户端分流逻辑更清晰
switch 分支把 Claude/Gemini/OpenAI 的转换路径显式化,后续扩展与排错更直观。
443-458: Claude 后处理的 modified 初始化更简洁
直接以 InjectToolConfig 返回值初始化,语义更清楚。
546-559: 非流响应分支处理合理
不同 clientType 的转换路径保持清晰可读。
650-708: 首 token 事件只触发一次的处理到位
firstChunkSent 保证 TTFT 仅在首次成功写入后上报一次。cmd/maxx/main.go (2)
124-129: 启动时清理陈旧 upstream attempts 合理
有助于避免历史脏数据影响统计与状态展示。
270-276: 优雅关机流程完整
请求追踪器与 HTTP shutdown 组合清晰,符合平滑退出预期。Also applies to: 326-376
web/src/hooks/queries/use-requests.ts (1)
78-135: 列表缓存的过滤与分页处理细致
按查询参数逐一匹配并在非首页避免插入新项,逻辑稳健。internal/stats/pure_test.go (8)
10-133: Granularity 截断测试覆盖全面
包含未知粒度兜底与多时区验证,能有效防回归。
134-419: AggregateAttempts 覆盖充分
空输入、同分钟/多维度/时区场景都覆盖到位。
421-641: RollUp 流水线验证充分
跨粒度与时区聚合的断言完整。
643-797: MergeStats 行为测试清晰
覆盖合并/不同 key/不可变性,质量很好。
798-857: SumStats 覆盖到位
空输入与聚合结果断言完整。
859-989: GroupByProvider 验证充分
包含 zero provider 与成功率计算校验。
991-1112: 过滤函数测试全面
粒度与时间范围的边界条件覆盖充分。
1114-1492: 全链路聚合测试很扎实
从 minute→month 的数据守恒与多模型/时区场景覆盖全面。internal/handler/admin.go (3)
92-94: Pricing 端点接入清晰
路由与 handler 分离明确,便于前端展示价格表。Also applies to: 1443-1453
740-758: Count 端点过滤解析严谨
provider/status 过滤参数校验明确。
676-680: Cost 相关端点扩展合理
recalculate-cost 与批量重算的路由与处理逻辑清晰。Also applies to: 798-811, 1233-1341
web/src/pages/requests/detail/RequestDetailPanel.tsx (3)
23-217: 成本拆解工具函数结构清晰
线性/分段计价与 cache 价格回退处理完整。
252-283: pricing 注入与 costBreakdown 传递合理
仅在 priceTable 就绪时计算,避免无数据状态的误判。
607-613: TTFT 单位无误,无需修改经过验证,当前代码对 TTFT 单位的处理是正确的:
- 后端数据库存储
ttft_ms(毫秒),通过 Go 的time.Duration转换后序列化为纳秒传输到前端- 前端类型定义明确注明
ttft: number; // nanosecondsformatDuration()正确期望纳秒输入(第 18 行:const ms = ns / 1_000_000)- RequestDetailPanel.tsx 和 RequestHeader.tsx 都正确地直接将
ttft传递给formatDuration()- index.tsx 的手动转换(
/ 1_000_000_000)也是正确的单位处理无需添加
* 1_000_000乘法,否则会导致显示值放大 100 万倍。web/src/pages/requests/index.tsx (4)
1-12: 筛选状态与查询参数接入清晰provider/status 变更时重置游标与分页、计数同步刷新,整体流程易理解。
Also applies to: 23-40, 74-87, 141-175
222-263: TTFT 与 Cost 列展示一致性良好表头与行内的 TTFT/Cost 展示和 nanoUSD 转换逻辑对齐,阅读清晰。
Also applies to: 411-437, 646-701
267-272: Token 格式化与行样式/时间展示良好K/M/千分位格式化 + 斑马纹/时间显示的组合提升可读性。
Also applies to: 394-409, 529-663
706-850: Provider/Status 过滤组件结构清晰分组排序与 All 选项处理直观,Select 交互符合预期。
internal/repository/interfaces.go (1)
62-66: 过滤条件与成本相关接口扩展清晰ProxyRequestFilter 与 ListCursor/CountWithFilter/成本重算接口说明明确,便于上层组合调用。
Also applies to: 76-102, 108-120, 168-175
internal/repository/sqlite/proxy_request.go (3)
3-12: CountWithFilter 的缓存/回退策略 OK无过滤走缓存,有过滤查询数据库,符合性能与准确性平衡。
Also applies to: 129-149
256-390: 批量更新与重算流程实现清晰CASE WHEN 批量更新 + 分批重算/进度汇报的组织结构易读。
392-446: TTFT 映射转换到模型层 OKTTFTMs 与 DurationMs 的域模型映射补齐后数据一致性更好。
internal/service/admin.go (5)
3-20: AdminService 注入 broadcaster 的改动 OK构造函数与字段扩展清晰,便于后续进度推送。
Also applies to: 31-86
370-400: 过滤游标/计数接口对齐良好GetProxyRequestsCursor/CountWithFilter 透传 filter,接口一致。
665-710: UsageStats 进度广播流程清晰progressChan + WebSocket 广播拆分直观,便于 UI 订阅。
729-861: 成本重算主流程结构清晰按批次计算、更新 attempts,再重算 requests 的流程组织良好。
872-937: 单请求重算逻辑可读性好逐 attempt 计算并回写成本,结果返回清晰。
internal/stats/pure.go (1)
52-344: 纯函数聚合/rollup/过滤实现扎实聚合、合并、汇总、过滤逻辑清晰且易测。
internal/repository/sqlite/proxy_upstream_attempt.go (3)
47-61: LGTM!
ListAll和CountAll实现简洁正确,用于成本重算场景。
136-162: LGTM!
MarkStaleAttemptsFailed使用子查询关联proxy_requests状态来标记孤立的 attempts,逻辑正确,end_time、duration_ms、updated_at都有正确更新。
216-245: LGTM!
toModel和toDomain正确添加了 TTFT/TTFTMs 字段的双向映射,与internal/repository/sqlite/models.go中的模型定义保持一致。Also applies to: 247-274
web/src/lib/transport/types.ts (4)
184-184: LGTM!
ttft字段(纳秒单位)与后端domain.ProxyRequest和domain.ProxyUpstreamAttempt的 TTFT 字段保持一致。Also applies to: 224-224
551-551: LGTM!
totalTtftMs字段与后端domain.UsageStats.TotalTTFTMs保持一致。
584-617: LGTM!新增的
RecalculateCostsResult、RecalculateStatsProgress、RecalculateRequestCostResult接口与后端internal/service/admin.go中的对应结构体字段匹配。
794-817: LGTM!Pricing API 类型定义完整,
ModelPricing包含了分层定价所需的所有字段(包括 1M context 阈值和倍率配置)。internal/domain/model.go (5)
204-206: LGTM!TTFT (Time To First Token) 字段正确添加到
ProxyRequest和ProxyUpstreamAttempt,用于追踪流式接口的首字时长,这是一个重要的性能指标。Also applies to: 262-264
304-318: LGTM!
AttemptCostData是一个精简的结构体,仅包含成本重算所需的最小字段集,避免了加载完整的ProxyUpstreamAttempt(包括昂贵的 JSON 解析)。这是一个好的性能优化设计。
493-493: LGTM!
TotalTTFTMs字段用于累计首字时长(毫秒),支持计算平均 TTFT 等指标。
782-802: LGTM!
Progress和AggregateEvent结构体为长时间运行的操作(如成本重算、统计重算)提供了良好的进度报告机制。AggregateEvent.Error使用json:"-"标签避免序列化错误对象是正确的做法。
245-246: 所有成本单位变更均已正确实施。数据库迁移已将现有 cost 字段乘以 1000,所有成本计算逻辑已更新为使用 nanoUSD,前端通过 formatCost 函数正确转换单位显示。无需进一步调整。internal/repository/sqlite/usage_stats.go (7)
66-66: LGTM!Upsert 正确添加了
total_ttft_ms字段的更新。
140-277: LGTM!
Query方法的分层查询策略设计合理:
- 历史数据使用目标粒度的预聚合数据
- 当前时间桶分层使用 day → hour → minute → 最近 2 分钟实时
- 使用
stats.TruncateToGranularity进行时区感知的时间截断- 对于分钟粒度保留独立数据,其他粒度聚合到当前时间桶
836-989: LGTM!
GetProviderStats重构为分层查询策略:
- 查询历史月数据(当前月之前)
- 查询当前月的 day 粒度数据
- 累加合并结果
使用时区感知的月份边界计算,与系统配置时区保持一致。
1105-1157: LGTM!
AggregateAndRollUp返回带缓冲的 channel 用于异步进度报告,设计合理。调用者可以 range 遍历获取进度,或忽略 channel 让操作异步执行。
1159-1214: LGTM!
rollUp方法正确使用stats.RollUp进行时区感知的粒度聚合,对 day/month 粒度使用配置的时区,其他粒度使用 UTC。
1324-1459: LGTM!
aggregateAllMinutesWithProgress实现了带进度报告的全量分钟级聚合:
- 先获取总数用于进度百分比计算
- 使用
stats.AttemptRecord和stats.AggregateAttempts进行纯函数聚合- 定期(每 100 条)报告进度
- 正确记录 response models 到独立表
1461-1517: LGTM!
toModel和toDomain正确添加了TotalTTFTMs字段的双向映射。
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```go
// Add increments the active request count
// Returns false if shutdown is in progress (request should be rejected)
func (t *RequestTracker) Add() bool {
	if t.isShutdown.Load() {
		return false
	}
	t.wg.Add(1)
	atomic.AddInt64(&t.activeCount, 1)
	return true
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
find . -name "request_tracker.go" -type f
```
🏁 Script executed:
```shell
cat -n internal/core/request_tracker.go
```
Fix needed: concurrency conflicts between the WaitGroup and shutdownCh

There are two concurrency problems:

- **WaitGroup misuse panic:** `Add()` checks `isShutdown` and then calls `wg.Add(1)`. If one of `WaitWithTimeout`/`WaitWithContext`/`GracefulShutdown` starts `wg.Wait()` in between, the subsequent `wg.Add(1)` can trigger `panic("sync: WaitGroup misuse: Add called concurrently with Wait")`.

- **Double close of shutdownCh:** `WaitWithTimeout`, `WaitWithContext`, and `GracefulShutdown` all call `close(t.shutdownCh)` directly (lines 70, 90, and 125 respectively). If these methods are called more than once, or in combination, the second close panics.

Recommend centralizing the shutdown logic: guard closing `shutdownCh` with `sync.Once`, and protect the race between `Add()` and `wg.Wait()` with a mutex.
修复方案
type RequestTracker struct {
activeCount int64
wg sync.WaitGroup
shutdownCh chan struct{}
isShutdown atomic.Bool
notifyCh chan struct{}
notifyMu sync.Mutex
+ shutdownMu sync.Mutex
+ shutdownOnce sync.Once
}
+func (t *RequestTracker) beginShutdown() {
+ t.shutdownMu.Lock()
+ defer t.shutdownMu.Unlock()
+ if t.isShutdown.Swap(true) {
+ return
+ }
+ t.shutdownOnce.Do(func() { close(t.shutdownCh) })
+}
+
func (t *RequestTracker) Add() bool {
+ t.shutdownMu.Lock()
+ defer t.shutdownMu.Unlock()
if t.isShutdown.Load() {
return false
}
t.wg.Add(1)
atomic.AddInt64(&t.activeCount, 1)
return true
}
func (t *RequestTracker) WaitWithTimeout(timeout time.Duration) bool {
- t.isShutdown.Store(true)
- close(t.shutdownCh)
+ t.beginShutdown()
...
}
func (t *RequestTracker) WaitWithContext(ctx context.Context) bool {
- t.isShutdown.Store(true)
- close(t.shutdownCh)
+ t.beginShutdown()
...
}
func (t *RequestTracker) GracefulShutdown(maxWait time.Duration) bool {
t.notifyMu.Lock()
t.notifyCh = make(chan struct{}, 100)
t.notifyMu.Unlock()
- t.isShutdown.Store(true)
- close(t.shutdownCh)
+ t.beginShutdown()
...
}🤖 Prompt for AI Agents
In `@internal/core/request_tracker.go` around lines 29 - 38, The RequestTracker
has races: RequestTracker.Add currently checks isShutdown then calls wg.Add(1)
which can panic if a concurrent WaitWithTimeout/WaitWithContext/GracefulShutdown
is calling wg.Wait(), and multiple shutdown methods directly close t.shutdownCh
causing close panics; fix by introducing a sync.Mutex (e.g., trackMu) to
serialize Add() and all Wait/Shutdown paths (lock in Add before checking
isShutdown and calling wg.Add, unlock after increment) and use a sync.Once
(e.g., shutdownOnce) to perform a single close(t.shutdownCh) from
WaitWithTimeout, WaitWithContext, and GracefulShutdown while also setting the
isShutdown flag via atomic.Store so Add sees shutdown consistently; update
RequestTracker.Add, WaitWithTimeout, WaitWithContext, and GracefulShutdown to
use these new primitives (trackMu, shutdownOnce, isShutdown) to prevent
WaitGroup misuse and double-close panics.
```diff
 func (r *ProxyRequestRepository) ListCursor(limit int, before, after uint64, filter *repository.ProxyRequestFilter) ([]*domain.ProxyRequest, error) {
 	// 使用 Select 排除大字段
 	query := r.db.gorm.Model(&ProxyRequest{}).
-		Select("id, created_at, updated_at, instance_id, request_id, session_id, client_type, request_model, response_model, start_time, end_time, duration_ms, is_stream, status, status_code, error, proxy_upstream_attempt_count, final_proxy_upstream_attempt_id, route_id, provider_id, project_id, input_token_count, output_token_count, cache_read_count, cache_write_count, cache_5m_write_count, cache_1h_write_count, cost, api_token_id")
+		Select("id, created_at, updated_at, instance_id, request_id, session_id, client_type, request_model, response_model, start_time, end_time, duration_ms, ttft_ms, is_stream, status, status_code, error, proxy_upstream_attempt_count, final_proxy_upstream_attempt_id, route_id, provider_id, project_id, input_token_count, output_token_count, cache_read_count, cache_write_count, cache_5m_write_count, cache_1h_write_count, cost, api_token_id")

 	if after > 0 {
 		query = query.Where("id > ?", after)
 	} else if before > 0 {
 		query = query.Where("id < ?", before)
 	}

 	// 应用过滤条件
 	if filter != nil {
 		if filter.ProviderID != nil {
 			query = query.Where("provider_id = ?", *filter.ProviderID)
 		}
 		if filter.Status != nil {
 			query = query.Where("status = ?", *filter.Status)
 		}
 	}

 	var models []ProxyRequest
-	if err := query.Order("id DESC").Limit(limit).Find(&models).Error; err != nil {
+	// 按结束时间排序:未完成的请求(end_time=0)在最前面,已完成的按 end_time DESC 排序
+	// SQLite 不支持 NULLS FIRST,使用 CASE WHEN 实现
+	if err := query.Order("CASE WHEN end_time = 0 THEN 0 ELSE 1 END, end_time DESC, id DESC").Limit(limit).Find(&models).Error; err != nil {
 		return nil, err
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n internal/repository/sqlite/proxy_request.go | head -150 | tail -80
```
🏁 Script executed:
```shell
rg -A 30 "func.*ListCursor" internal/repository/sqlite/proxy_request.go
```
🏁 Script executed:
```shell
# Check if there are other cursor-based pagination methods
rg -B 2 "before.*after.*uint64" internal/repository/sqlite/
```
Cursor field and sort key mismatch: pagination can drop or duplicate rows

`ListCursor` uses `id` as the cursor (`where id > after` / `where id < before`), but the primary sort key is `end_time` (with `id` as the tie-breaker). When rows are ordered by `end_time`, an id-only cursor cannot guarantee consistent pagination: records may be missed or repeated when crossing page boundaries.

Recommend a composite cursor over the `(end_time, id)` tuple, or, if the product allows, a single `id DESC` ordering so the cursor stays stable.
🤖 Prompt for AI Agents
In `@internal/repository/sqlite/proxy_request.go` around lines 82 - 107,
ListCursor currently uses id as the cursor while ordering by end_time (and id),
which causes unstable pagination; change to a composite cursor on (end_time, id)
in ProxyRequestRepository.ListCursor: update the function signature to accept
composite cursor values (e.g., afterEndTime, afterID and beforeEndTime, beforeID
or a single struct), change the Where clauses from "id > ?" / "id < ?" to tuple
comparisons like "(end_time, id) > (?, ?)" and "(end_time, id) < (?, ?)", pass
the corresponding end_time and id values, and ensure the query.Order remains
"CASE WHEN end_time = 0 THEN 0 ELSE 1 END, end_time DESC, id DESC" so
comparisons match the ordering; alternatively, if you prefer a simpler change,
switch ordering to a single id-based order (e.g., "id DESC") and keep the
existing id-only cursor logic.
```go
shanghai, _ := time.LoadLocation("Asia/Shanghai")
tokyo, _ := time.LoadLocation("Asia/Tokyo")
```
🧩 Analysis chain
🏁 Script executed:
```shell
wc -l internal/stats/pure_test.go
cat -n internal/stats/pure_test.go | sed -n '55,65p'
cat -n internal/stats/pure_test.go | sed -n '111,120p'
cat -n internal/stats/pure_test.go | sed -n '388,395p'
cat -n internal/stats/pure_test.go | sed -n '611,620p'
cat -n internal/stats/pure_test.go | sed -n '1328,1335p'
```
Handle time.LoadLocation failures: skip the test when tzdata is missing

In slim container/CI environments without tzdata, `time.LoadLocation` returns an error. The current code discards it with `_`, which can lead to a panic when the nil location is used later. Check the error and skip the test with `t.Skipf`.

🔧 Suggested fix

```diff
-	shanghai, _ := time.LoadLocation("Asia/Shanghai")
+	shanghai, err := time.LoadLocation("Asia/Shanghai")
+	if err != nil {
+		t.Skipf("tzdata unavailable: %v", err)
+	}
```

The same applies to lines 114-115, 391-392, 614-615, and 1331-1332.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
shanghai, err := time.LoadLocation("Asia/Shanghai")
if err != nil {
	t.Skipf("tzdata unavailable: %v", err)
}
tokyo, err := time.LoadLocation("Asia/Tokyo")
if err != nil {
	t.Skipf("tzdata unavailable: %v", err)
}
```
🤖 Prompt for AI Agents
In `@internal/stats/pure_test.go` around lines 58 - 60, The tests call
time.LoadLocation and discard the error (e.g., the shanghai and tokyo location
lookups), which can return an error in minimal CI images lacking tzdata; update
each LoadLocation call (references: variables named shanghai, tokyo and any
other LoadLocation usages in pure_test.go) to check the returned error and call
t.Skipf("skipping test: failed to load timezone %q: %v", "<ZoneName>", err) when
err != nil so the test is skipped instead of panicking on a nil location. Ensure
you handle all occurrences (the other LoadLocation pairs mentioned) the same
way.
```go
// TruncateToGranularity truncates a time to the start of its time bucket
// based on granularity using the specified timezone.
// The loc parameter is required and must not be nil.
func TruncateToGranularity(t time.Time, g domain.Granularity, loc *time.Location) time.Time {
	t = t.In(loc)
	switch g {
	case domain.GranularityMinute:
		return t.Truncate(time.Minute)
	case domain.GranularityHour:
		return t.Truncate(time.Hour)
	case domain.GranularityDay:
		return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, loc)
	case domain.GranularityMonth:
		return time.Date(t.Year(), t.Month(), 1, 0, 0, 0, 0, loc)
	default:
		return t.Truncate(time.Hour)
	}
```
🧩 Analysis chain
🌐 Web query:
Go time.Truncate timezone behavior UTC alignment
💡 Result:
Short answer
- time.Time.Truncate works on the underlying absolute instant (the stored wall-clock seconds/nanoseconds), not on the “display” hour/minute in a Location. That means Truncate(time.Hour) can produce a time whose hour/minute look odd in locations with non-hour offsets (e.g. +05:30, +05:45). [1][2]
Why
- The Go docs and issue discussion explain Truncate operates on the instant since the zero time, not the presentation in a Location. That behavior is by design and long-standing. [1][2]
How to get timezone-aligned truncation
- If you want truncation aligned to a particular zone’s wall-clock hour/minute, do your operations in that zone’s wall-clock representation, e.g.:
- Convert to the desired location and build a new time from its components:
```go
tLoc := t.In(loc)
aligned := time.Date(tLoc.Year(), tLoc.Month(), tLoc.Day(), tLoc.Hour(), 0, 0, 0, loc)
```

- Or convert to UTC (or any fixed zone) first if you want truncation aligned to UTC:

```go
aligned := t.UTC().Truncate(time.Hour).In(t.Location())
```
- Don’t rely on Truncate(time.Hour) to be zone-aware for non-hour offsets. [1][2]
References
- time package docs (Time/Truncate behavior). [1]
- GitHub issue/discussion describing Truncate and non-hour timezones. [2]
(If you want, tell me the exact zone example you care about and I’ll give a short code snippet.)
🏁 Script executed:
```shell
# Check if there are any tests or usage patterns that might reveal the intended behavior
cd internal/stats && grep -r "TruncateToGranularity" --include="*.go" -A 3 -B 3
```
Truncate(Hour/Minute) produces wrong buckets when the timezone offset is not a whole hour

time.Truncate(time.Hour/Minute) aligns on the absolute UTC instant, not the local wall clock. Even after converting with t.In(loc), Truncate still operates on the underlying instant, so zones with non-whole-hour offsets such as UTC+5:30 or UTC+5:45 get shifted bucket boundaries (e.g. buckets in a +5:30 zone are off by 30 minutes). The Day/Month cases already construct buckets correctly from local time via time.Date(); Minute/Hour should be made consistent with them.

🔧 Suggested fix (construct buckets from local time)

```diff
 case domain.GranularityMinute:
-    return t.Truncate(time.Minute)
+    return time.Date(t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), 0, 0, loc)
 case domain.GranularityHour:
-    return t.Truncate(time.Hour)
+    return time.Date(t.Year(), t.Month(), t.Day(), t.Hour(), 0, 0, 0, loc)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// TruncateToGranularity truncates a time to the start of its time bucket
// based on granularity using the specified timezone.
// The loc parameter is required and must not be nil.
func TruncateToGranularity(t time.Time, g domain.Granularity, loc *time.Location) time.Time {
	t = t.In(loc)
	switch g {
	case domain.GranularityMinute:
		return time.Date(t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), 0, 0, loc)
	case domain.GranularityHour:
		return time.Date(t.Year(), t.Month(), t.Day(), t.Hour(), 0, 0, 0, loc)
	case domain.GranularityDay:
		return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, loc)
	case domain.GranularityMonth:
		return time.Date(t.Year(), t.Month(), 1, 0, 0, 0, 0, loc)
	default:
		return t.Truncate(time.Hour)
	}
}
```
🤖 Prompt for AI Agents
In `@internal/stats/pure.go` around lines 33 - 49, TruncateToGranularity currently
uses t.Truncate(time.Minute/time.Hour) which aligns to absolute UTC moments and
breaks buckets for non-integer hour timezones; update TruncateToGranularity
(function name) to build minute- and hour-granularity buckets using time.Date on
t.In(loc) (like the existing Day/Month cases) by zeroing out smaller components
(seconds/nanoseconds for Minute, minutes/seconds/nanos for Hour) so truncation
is done in local wall-clock time; keep the loc parameter required and
nil-checked before using t.In(loc).
```diff
 export function useProxyRequestsCount(providerId?: number, status?: string) {
   return useQuery({
-    queryKey: ['requestsCount'] as const,
-    queryFn: () => getTransport().getProxyRequestsCount(),
+    queryKey: ['requestsCount', providerId, status] as const,
+    queryFn: () => getTransport().getProxyRequestsCount(providerId, status),
   });
```
Filtered counts are not adjusted when a request's status/provider changes

The current logic only increments the count for new requests; when an existing request's status or provider changes, the count is never corrected. Example: a request starts as PENDING (matching the filter) and is later updated to COMPLETED (no longer matching) — the list correctly drops the request, but the corresponding count cache is never decremented, so the count drifts over time.

Suggestion: on every update, check each requestsCount cache's filter against the request both before and after the change, and adjust the count by the difference:

Suggested fix
```diff
-  // Optimistically bump the count for new requests (must respect each
-  // count query's filter)
-  if (isNewRequest) {
-    // Iterate over all requestsCount caches
-    const countQueries = queryCache.findAll({ queryKey: ['requestsCount'] });
-    for (const query of countQueries) {
-      // queryKey: ['requestsCount', providerId, status]
-      const filterProviderId = query.queryKey[1] as number | undefined;
-      const filterStatus = query.queryKey[2] as string | undefined;
-      // Skip this count when a filter is set and does not match
-      if (filterProviderId !== undefined && updatedRequest.providerID !== filterProviderId) {
-        continue;
-      }
-      if (filterStatus !== undefined && updatedRequest.status !== filterStatus) {
-        continue;
-      }
-      queryClient.setQueryData<number>(query.queryKey, (old) => (old ?? 0) + 1);
-    }
-  }
+  // Adjust counts by whether the request matches each filter before vs after the update
+  const countQueries = queryCache.findAll({ queryKey: ['requestsCount'] });
+  for (const query of countQueries) {
+    const filterProviderId = query.queryKey[1] as number | undefined;
+    const filterStatus = query.queryKey[2] as string | undefined;
+    const matchesFilter = (req: ProxyRequest | undefined) => {
+      if (!req) return false;
+      if (filterProviderId !== undefined && req.providerID !== filterProviderId) return false;
+      if (filterStatus !== undefined && req.status !== filterStatus) return false;
+      return true;
+    };
+    const prevMatches = matchesFilter(existingDetail as ProxyRequest | undefined);
+    const nextMatches = matchesFilter(updatedRequest);
+    if (prevMatches === nextMatches) continue;
+    const delta = nextMatches ? 1 : -1;
+    queryClient.setQueryData<number>(query.queryKey, (old) =>
+      Math.max(0, (old ?? 0) + delta),
+    );
+  }
```
In `@web/src/hooks/queries/use-requests.ts` around lines 34 - 38, The
requestsCount cache (used by useProxyRequestsCount with queryKey
['requestsCount', providerId, status]) is only incremented on new requests and
not adjusted when an existing request's providerId or status changes; update
your mutation handlers to, for each cached requestsCount entry, compute whether
the request matched the cached filter both before and after the change and then
increment or decrement that specific cache by the difference (±1) accordingly:
locate places that update requests (the codepath that receives updated request
objects) and for each cache entry built from useProxyRequestsCount compare
oldRequest.status/providerId vs newRequest.status/providerId and call the
queryClient.setQueryData for ['requestsCount', providerId, status] to add or
subtract 1 as appropriate so counts remain consistent with filtered lists.
```ts
recalculateCosts(): Promise<RecalculateCostsResult>;
recalculateRequestCost(requestId: number): Promise<RecalculateRequestCostResult>;
```
Cost fields typed as number lose precision under nanoUSD

oldCost and newCost in RecalculateRequestCostResult receive backend uint64 values as number, but JavaScript's Number.MAX_SAFE_INTEGER is 9,007,199,254,740,991. In nanoUSD units that is roughly $9,007,199 USD, so any cost above about $9M overflows the safe-integer range and loses precision, defeating the goal of the nanoUSD precision upgrade.

Suggestion: have the backend return strings and parse them on the frontend with BigInt or a decimal library, and confirm whether the maximum possible cost can actually exceed this bound.

Suggested type change (web/src/lib/transport/types.ts)
```diff
 export interface RecalculateRequestCostResult {
   requestId: number;
-  oldCost: number;
-  newCost: number;
+  oldCost: string;
+  newCost: string;
   updatedAttempts: number;
   message: string;
 }
```
In `@web/src/lib/transport/interface.ts` around lines 170 - 171, The
RecalculateRequestCostResult's oldCost and newCost are defined as number which
will lose precision for backend uint64 nanoUSD values; update the
transport/types and interface so recalculateRequestCost() and
RecalculateRequestCostResult use string for oldCost/newCost (or a
BigInt-serializable representation) and adjust callers to parse into BigInt or a
decimal library, ensuring any other references to oldCost/newCost and
recalculateRequestCost are updated to handle the string->BigInt/decimal
conversion and validation of potential overflow.
```diff
 /** Time granularity for stats data */
-export type StatsGranularity = 'minute' | 'hour' | 'day' | 'week' | 'month';
+export type StatsGranularity = 'minute' | 'hour' | 'day' | 'week' | 'month' | 'year';
```
Sync backend Granularity support, or drop 'week' and 'year' from the frontend

The frontend StatsGranularity includes 'week' and 'year', but the backend Granularity constants in internal/domain/model.go only define minute, hour, day, and month.

TruncateToGranularity() in internal/stats/pure.go silently falls back to hourly buckets (time.Truncate(time.Hour)) for unknown granularities instead of returning an error. So when the frontend sends 'year' or 'week', the backend quietly aggregates at hour granularity, producing mismatched data with no visible failure.

Pick one of the following:

- Add GranularityWeek and GranularityYear support to the backend
- Or remove 'week' and 'year' from the frontend StatsGranularity type
- Or add backend validation that returns an error for unsupported granularities instead of silently falling back
🤖 Prompt for AI Agents
In `@web/src/lib/transport/types.ts` at line 534, The frontend type
StatsGranularity includes 'week' and 'year' but the backend
(internal/domain/model.go constants and TruncateToGranularity() in
internal/stats/pure.go) only supports minute/hour/day/month and silently falls
back to hour; remove 'week' and 'year' from the StatsGranularity union to match
backend behavior and update any callers/usage sites (serialization, UI
selectors, API payload builders) to stop sending those values; alternatively, if
you prefer backend support instead, add GranularityWeek and GranularityYear
constants in internal/domain/model.go and implement proper handling in
TruncateToGranularity() (and add validation) so those values are supported —
choose one approach and make sure all affected places reference the same set of
granularity symbols.
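If the validation route is chosen, a fail-fast check could look like the sketch below. The type and constants mirror what the review describes in internal/domain/model.go, but the concrete names and string values here are assumptions for illustration:

```go
package main

import "fmt"

// Granularity mirrors the backend constant set described in the review;
// the string values are illustrative assumptions.
type Granularity string

const (
	GranularityMinute Granularity = "minute"
	GranularityHour   Granularity = "hour"
	GranularityDay    Granularity = "day"
	GranularityMonth  Granularity = "month"
)

// ValidateGranularity returns an error for unsupported values instead of
// letting TruncateToGranularity silently fall back to hourly buckets.
func ValidateGranularity(g Granularity) error {
	switch g {
	case GranularityMinute, GranularityHour, GranularityDay, GranularityMonth:
		return nil
	default:
		return fmt.Errorf("unsupported granularity %q", g)
	}
}

func main() {
	fmt.Println(ValidateGranularity("day"))  // <nil>
	fmt.Println(ValidateGranularity("year")) // unsupported granularity "year"
}
```

The HTTP handler would call this before querying and return 400 on error, so a frontend 'year' request fails loudly instead of aggregating at the wrong granularity.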
```diff
 case 'all':
-  // Last 12 months
-  start = new Date(now.getFullYear(), now.getMonth() - 11, 1); // start of the month 12 months ago
-  granularity = 'month';
-  durationMinutes = 365 * 24 * 60; // roughly one year
+  // All time, using year granularity
+  start = new Date(now.getFullYear() - 4, 0, 1); // Jan 1, five years ago
+  granularity = 'year';
+  durationMinutes = 5 * 365 * 24 * 60;
   break;
```
"All Time" should pass null rather than a fixed 5-year window

The interface design explicitly documents start: Date | null; // null means all time, and the backend supports an optional start parameter. Hard-coding a date five years back in case 'all' breaks that contract and silently truncates the query to the last 5 years. Set start to null to get a true all-history query.
🤖 Prompt for AI Agents
In `@web/src/pages/stats/index.tsx` around lines 164 - 169, In the switch branch
for case 'all' (where variables start, granularity, durationMinutes are set),
replace the hard-coded 5-year window with a true "all time" sentinel by setting
start = null (compatible with the Date | null contract) instead of new
Date(...); keep granularity as 'year' if needed, and adjust or clear
durationMinutes if your caller treats it as optional for all-time queries so the
request doesn't get artificially truncated.
```ts
// Reset all filters to 'all'
const handleResetFilters = () => {
  setProviderId('all');
  setProjectId('all');
  setClientType('all');
  setApiTokenId('all');
  setModel('all');
};
```
"Reset" does not include the time range, contradicting the comment and user expectations

handleResetFilters clears every other filter but leaves timeRange untouched, which conflicts with the "Reset all filters" semantics. Either reset it to the default (e.g. 24h) or explicitly include it, and update the comment/copy to match.

🛠️ Suggested change (example: reset to the default 24h)
```diff
 const handleResetFilters = () => {
+  setTimeRange('24h');
   setProviderId('all');
   setProjectId('all');
   setClientType('all');
   setApiTokenId('all');
   setModel('all');
 };
```

📝 Committable suggestion
```ts
// Reset all filters to 'all'
const handleResetFilters = () => {
  setTimeRange('24h');
  setProviderId('all');
  setProjectId('all');
  setClientType('all');
  setApiTokenId('all');
  setModel('all');
};
```
🤖 Prompt for AI Agents
In `@web/src/pages/stats/index.tsx` around lines 403 - 410, handleResetFilters
currently resets providerId, projectId, clientType, apiTokenId and model but
omits timeRange; update the handleResetFilters function to also reset timeRange
(use the app's default e.g. '24h' or 'all' depending on intent) by calling
setTimeRange with that default, and update the comment or any UI text that says
"Reset all filters" to reflect the included timeRange reset; refer to the
handleResetFilters function and the setTimeRange state setter when making the
change.
```tsx
return (
  <div className="space-y-3">
    <div className="flex items-center justify-between">
      <label className="text-xs font-bold text-muted-foreground uppercase tracking-widest pl-1 opacity-80">
        {label}
      </label>
      {onClear && (
        <button
          type="button"
          onClick={onClear}
          className={cn(
            "p-1 rounded hover:bg-muted text-muted-foreground hover:text-foreground transition-colors",
            showClear ? "opacity-100" : "opacity-0 pointer-events-none"
          )}
```
The hidden clear button can still receive keyboard focus

opacity-0 + pointer-events-none keeps the button in the tab order, producing an "invisible but focusable" accessibility problem. Either skip rendering the button when showClear is false, or set tabIndex={-1}/aria-hidden on it.

♿ Suggested render condition
```diff
-{onClear && (
+{onClear && showClear && (
   <button
     type="button"
     onClick={onClear}
     className={cn(
       "p-1 rounded hover:bg-muted text-muted-foreground hover:text-foreground transition-colors",
-      showClear ? "opacity-100" : "opacity-0 pointer-events-none"
+      "opacity-100"
     )}
     title="Clear"
   >
     <X className="h-3 w-3" />
   </button>
 )}
```
In `@web/src/pages/stats/index.tsx` around lines 1091 - 1104, The clear button
currently remains in the DOM and can receive keyboard focus when hidden (button
with props onClear and className using showClear), causing an a11y issue; update
the JSX so that when showClear is false the button is not focusable — either
don't render the button at all (wrap rendering in showClear && <button ...>) or,
if you must keep it, set tabIndex={-1} and aria-hidden="true" on the button (the
element using onClear and className/cn) so it is removed from the tab order and
accessibility tree when hidden.
Summary

Stats page optimizations

Billing precision upgrade

Test plan

Summary by CodeRabbit

New features

Improvements