📝 Walkthrough

Replaces the net/http-based execution model with a flow.Ctx middleware pipeline and unifies the ProviderAdapter.Execute signature to Execute(c *flow.Ctx, provider *domain.Provider). Adds a flow engine and helpers, an executor middleware chain (ingress/route_match/dispatch/egress), stateful response transformation with streaming SSE support, and migrates the adapters and converters to the flow context across the codebase.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Proxy as ProxyHandler
    participant Flow as FlowEngine
    participant Exec as Executor
    participant Adapter as ProviderAdapter
    participant Upstream as Upstream
    Client->>Proxy: send HTTP request
    Proxy->>Flow: NewCtx & HandleWith(handlers...)
    Flow->>Exec: ingress -> routeMatch -> dispatch
    Exec->>Adapter: Adapter.Execute(c *flow.Ctx, provider)
    Adapter->>Upstream: upstream request (stream / non-stream)
    Upstream-->>Adapter: upstream response / SSE
    Adapter->>Exec: return and trigger TransformResponseWithState (with state)
    Exec->>Flow: persist, broadcast, possibly retry
    Flow->>Proxy: egress completes the request
    Proxy-->>Client: HTTP response or stream data
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Split the Executor into an Ingress/Dispatch/Egress middleware pipeline and introduce a Flow Engine to unify request lifecycle management. Refactor the Converter layer to align the Codex/OpenAI/Claude protocol conversions, and add the schema_cleaner and translator_helpers helper modules.
pnpm v10 blocks dependency build scripts by default, so esbuild cannot download its platform-native binary and Vite fails at runtime with a "Cannot find module" error.
Force-pushed from a4ad371 to 766c097.
Actionable comments posted: 15
Note
Due to the large number of review comments, only Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
internal/converter/coverage_openai_stream_test.go (1)
508-527: ⚠️ Potential issue | 🟠 Major

Verify the [DONE] marker and complete usage information in the test.

The test currently does not verify the [DONE] signal. When codexToOpenAIResponse.TransformChunk handles the done event (internal/converter/codex_to_openai.go, lines 248-249), it emits the OpenAI-format [DONE] marker, but the test only checks for the presence of "chat.completion.chunk" and never verifies correct stream termination. Suggested assertion:

```diff
 if !strings.Contains(string(out), "chat.completion.chunk") {
 	t.Fatalf("expected openai chunk")
 }
+if !strings.Contains(string(out), "[DONE]") {
+	t.Fatalf("expected [DONE] marker in output")
+}
```

Additionally, the usage in the completed event only contains input_tokens and is missing output_tokens. If the converter is expected to map usage completely, add output_tokens or confirm that the incomplete usage data is an intentional test scenario.

internal/converter/gemini_to_openai.go (1)
275-320: ⚠️ Potential issue | 🔴 Critical

Bug: setting state.MessageID early from responseId causes the first SSE chunk to be skipped.

Lines 291-294 extract responseId and assign it to state.MessageID, but line 304's `if state.MessageID == ""` is the guard for first-chunk initialization. When the first event of a Gemini streaming response contains responseId (the common case), state.MessageID is already non-empty by the time line 304 runs, so the initial `role: "assistant", content: ""` SSE chunk is never emitted. In the OpenAI streaming protocol this initial chunk tells the client the stream has started and marks the role; some OpenAI-compatible clients depend on it.

Move the responseId extraction after the first-chunk check, or use a separate firstChunkSent boolean flag to control the first-chunk logic.

🐛 Suggested fix: control the first chunk with an independent check

Option 1: move the responseId assignment into the first-chunk block, preferring responseId as the MessageID:

```diff
-	if state.MessageID == "" {
-		if rid := meta.Get("responseId"); rid.Exists() && rid.String() != "" {
-			state.MessageID = rid.String()
-		}
-	}
 	var createdAt int64
 	if ct := meta.Get("createTime"); ct.Exists() {
 		if t, err := time.Parse(time.RFC3339Nano, ct.String()); err == nil {
 			createdAt = t.Unix()
 		}
 	}

 	// First chunk
 	if state.MessageID == "" {
-		state.MessageID = "chatcmpl-gemini"
+		if rid := meta.Get("responseId"); rid.Exists() && rid.String() != "" {
+			state.MessageID = rid.String()
+		} else {
+			state.MessageID = "chatcmpl-gemini"
+		}
 		openaiChunk := OpenAIStreamChunk{
```

internal/adapter/provider/antigravity/service.go (1)
22-23: ⚠️ Potential issue | 🟠 Major

OAuth client credentials are hardcoded in source.

OAuthClientID and OAuthClientSecret exist as plaintext constants in the code. Although this is pre-existing code rather than something introduced by this PR, exposing these credentials in a public repository may be a security risk. Consider loading them from environment variables or a secrets-management service.

internal/converter/claude_to_openai_stream.go (1)
141-168: ⚠️ Potential issue | 🟠 Major

The message_stop event should not append FormatDone(); it duplicates the done event.

Lines 167-168: the message_stop handler appends FormatDone(). Per the code flow, when a Claude stream sends both a message_stop event and a subsequent done event (standard behavior for the Claude streaming API), the client receives data: [DONE]\n\n twice. Remove `output = append(output, FormatDone()...)` on line 168 and let only the done event handler at lines 20-21 append the terminator.

internal/adapter/provider/antigravity/adapter.go (1)
686-699: ⚠️ Potential issue | 🔴 Critical

eventChan may be nil, causing a panic.

Line 692's eventChan := flow.GetEventChan(c) can return nil (when the value was never set on the context), but line 695 calls eventChan.SendResponseInfo(...) directly without a nil check. Compare with lines 195 and 253, which both have nil guards. The same problem appears in handleCollectedStreamResponse (line 974 and lines 999-1000), where eventChan methods are called again outside the scope of the initial nil check (line 892).

Suggested change (handleStreamResponse):

```diff
 eventChan := flow.GetEventChan(c)

 // Send initial response info (for streaming, we only capture status and headers)
-eventChan.SendResponseInfo(&domain.ResponseInfo{
-	Status:  resp.StatusCode,
-	Headers: flattenHeaders(resp.Header),
-	Body:    "[streaming]",
-})
+if eventChan != nil {
+	eventChan.SendResponseInfo(&domain.ResponseInfo{
+		Status:  resp.StatusCode,
+		Headers: flattenHeaders(resp.Header),
+		Body:    "[streaming]",
+	})
+}
```

internal/adapter/provider/kiro/adapter.go (2)
126-131: ⚠️ Potential issue | 🔴 Critical

eventChan methods are called without a nil check.

Line 103 obtains eventChan and line 126 calls eventChan.SendRequestInfo(...) directly, but GetEventChan returns nil when the value is not found on the context. Similar issues appear at lines 182, 352, 449, and 467, in contrast to sendFinalEvents (line 406), which checks for nil correctly. Add a nil guard at every call site, or validate once at the top of Execute.

Suggested unified handling:

```diff
 // Get EventChannel for sending events to executor
 eventChan := flow.GetEventChan(c)
+if eventChan == nil {
+	return domain.NewProxyErrorWithMessage(domain.ErrInvalidInput, false, "event channel not set in flow context")
+}
```
156-174: ⚠️ Potential issue | 🟠 Major

The 401 retry path ignores the error from http.NewRequestWithContext.

Line 158's upstreamReq, _ = http.NewRequestWithContext(...) discards the error return value. If the URL is invalid or the context is already cancelled, upstreamReq is nil and line 159's upstreamReq.Header.Set(...) panics.

Suggested change:

```diff
-upstreamReq, _ = http.NewRequestWithContext(ctx, "POST", upstreamURL, bytes.NewReader(cwBody))
+upstreamReq, err = http.NewRequestWithContext(ctx, "POST", upstreamURL, bytes.NewReader(cwBody))
+if err != nil {
+	return domain.NewProxyErrorWithMessage(err, true, "failed to create retry request")
+}
```

internal/adapter/provider/custom/adapter.go (1)
297-304: ⚠️ Potential issue | 🔴 Critical

eventChan is used without a nil check, leading to a nil-pointer panic.

flow.GetEventChan(c) can return nil (handleNonStreamResponse in the same file does check for nil, line 248). Here, line 300 calls eventChan.SendResponseInfo(...) directly, which dereferences nil and panics when eventChan == nil. The same problem exists in the sendFinalEvents closure (lines 344, 354, 366), where none of the eventChan method calls are nil-guarded.

🐛 Suggested fix

Add a nil check at line 300:

```diff
-eventChan.SendResponseInfo(&domain.ResponseInfo{
+if eventChan != nil {
+	eventChan.SendResponseInfo(&domain.ResponseInfo{
 		Status:  resp.StatusCode,
 		Headers: flattenHeaders(resp.Header),
 		Body:    "[streaming]",
 	})
+}
```

Add a nil guard at the top of the sendFinalEvents closure:

```diff
 sendFinalEvents := func() {
+	if eventChan == nil {
+		return
+	}
 	if sseBuffer.Len() > 0 {
```
🤖 Fix all issues with AI agents
In `@internal/adapter/provider/antigravity/adapter.go`:
- Around line 204-246: The use of defer resp.Body.Close() inside the retry loop
(around the client.Do(upstreamReq) calls that assign resp) causes resource leaks
and potential double-closes (see resp, client.Do, the 401 retry path and
getAccessToken logic); replace those defers with explicit resp.Body.Close()
calls: after any early continue/return path call resp.Body.Close() before
continuing or returning, avoid calling Close twice in the 401 branch (remove the
extra defer there since the code already closes before retry), and ensure the
success path that forwards to handleStreamResponse leaves closing responsibility
to that downstream function rather than deferring here.
In `@internal/adapter/provider/antigravity/openai_request.go`:
- Around line 250-254: The bug is that sjson.SetBytes(node,
"parts."+itoa(p)+".functionCall.args.params", []byte(fargs)) causes
[]byte(fargs) to be JSON-marshaled as base64; instead, when fargs is not valid
JSON you should store it as a plain string. Replace the sjson.SetBytes call with
sjson.Set (or sjson.SetRaw with a quoted string) so the value for
"parts."+itoa(p)+".functionCall.args.params" is the string fargs (keep the
existing branch that uses sjson.SetRawBytes for valid JSON), updating the code
paths that reference node, fargs, itoa and sjson.SetBytes accordingly.
In `@internal/adapter/provider/codex/adapter.go`:
- Around line 486-514: The global codexCaches map can grow unbounded because
expired entries are only removed on access; modify the code to add a background
eviction mechanism (or replace with a TTL cache library) that periodically scans
and removes expired entries: add an init() goroutine that uses a time.Ticker to
lock codexCacheMu, iterate codexCaches, delete entries where
time.Now().After(cache.Expire), and unlock; alternatively implement a size cap
in setCodexCache to prune expired entries when capacity is reached or swap
codexCaches for a thread-safe TTL cache (e.g., patrickmn/go-cache) and update
getCodexCache/setCodexCache to use it.
- Around line 268-294: The code extracts PII from the ID token
(tokenResp.IDToken via ParseIDToken) and writes Email, Name, Picture and other
personal fields into the provider config (config.Email, config.Name,
config.Picture, config.AccountID, config.UserID, config.PlanType,
SubscriptionStart/End); update this to avoid unapproved PII persistence: either
stop assigning Email/Name/Picture to config by default or gate those assignments
behind an explicit privacy/consent flag (e.g., providerConfig.AllowStorePII) and
a minimalization check, ensure AccountID/UserID are the only persisted
identifiers if required, mask or omit PII from any logs, apply encryption at
rest for stored PII and hook into existing data retention/cleanup logic so these
fields are deleted per policy, and add unit tests verifying the gating and that
ParseIDToken assignments do not persist PII unless consent/flag is set.
- Around line 155-156: The retry branch currently discards the error from
http.NewRequestWithContext (upstreamReq, _ = http.NewRequestWithContext(...))
which can leave upstreamReq nil and cause panics when
applyCodexHeaders(upstreamReq, ...) or client.Do(upstreamReq) are called; change
the retry path to capture and check the error from http.NewRequestWithContext,
handle it by returning the error (or logging and continuing the retry loop as
appropriate), and only call applyCodexHeaders and client.Do when upstreamReq is
non-nil and err == nil to avoid nil pointer dereferences.
In `@internal/converter/claude_to_codex.go`:
- Around line 33-47: The response transformation must apply the same short-name
mapping used in the request: create a reverse map by calling
buildReverseMapFromClaudeOriginalShortToOriginal(state.OriginalRequestBody) and
use it whenever you read tool names (e.g., when using block.Name in Transform
and in TransformChunk for streaming tool_use blocks). Update the Transform
signature to accept a TransformState (or add TransformWithState) so Transform
can access state. Ensure TransformChunk uses the reverse map from the provided
state to rewrite any tool_use blocks' names back to the shortened form so
request/response tool names remain consistent with shortMap.
In `@internal/converter/claude_to_openai_stream.go`:
- Around line 76-87: The three delta branches (text_delta, thinking_delta,
input_json_delta) risk panics because they type-assert state.Custom to
*claudeOpenAIStreamMeta without nil checks before using streamMeta.Model when
building OpenAIStreamChunk; extract the extraction/validation into a helper like
getClaudeOpenAIStreamMeta(state) that returns a non-nil *claudeOpenAIStreamMeta
(or an error/ok flag) after verifying state.Custom is the correct type, then
replace direct assertions in the text_delta, thinking_delta, and
input_json_delta branches with calls to that helper and bail out or skip
emitting chunks if it returns nil/false so OpenAIStreamChunk construction (and
access to streamMeta.Model) is always safe.
In `@internal/converter/codex_to_claude.go`:
- Around line 315-324: TransformWithState currently returns nil, nil when it cannot recognize the input format, so callers that only check the error end up writing nil downstream; change codexToClaudeResponse.TransformWithState(body []byte, state *TransformState) so that when the input is neither "response.completed" nor has an "output" field it does not return nil but falls back to the original body (i.e., return body, nil, or explicitly return the original untransformed byte slice), keeping consistency with the streaming fallback behavior; modify only that else branch (keep the existing response parsing logic unchanged) and ensure callers never receive a nil body.
In `@internal/converter/codex_to_openai.go`:
- Around line 134-143: TransformWithState currently returns (nil, nil) when the
incoming JSON doesn't match expected shapes, which leads to downstream code
(TransformResponseWithState -> Finalize in converting_writer.go) writing an
empty response; change TransformWithState so that when neither
"response.completed" nor "output" branches match it returns the original body
(or a clear empty JSON like "{}") with nil error instead of nil body, ensuring
callers like TransformResponseWithState and Finalize receive a non-nil []byte;
update the return path in function TransformWithState to return body (or a
deterministic empty JSON) and keep error handling unchanged.
In `@internal/converter/coverage_claude_response_test.go`:
- Around line 221-228: The Transform method of codexToClaudeResponse
intentionally returns (nil, nil) for invalid/unmatched JSON, but
converting_writer.go's Finalize() only checks err and not for converted == nil;
update Finalize() to detect when converted is nil after calling
codexToClaudeResponse.Transform (and other Transformers that use this pattern)
and in that case fallback to writing the original response body instead of
calling Write(nil). Ensure you reference the converted variable and the Write
call in Finalize() and add the nil check before attempting Write so non-stream
responses return the original response rather than an empty body.
In `@internal/converter/coverage_openai_stream_test.go`:
- Around line 511-512: The test uses a plain string for the "delta" field but
the CodexStreamEvent struct (types_codex.go, Delta *CodexDelta) expects a nested
object with a text field; update the test data in coverage_openai_stream_test.go
so the delta map is {"type":"response.output_text.delta","delta":
map[string]interface{}{"text":"hi"}} (matching CodexStreamEvent.Delta and other
tests like codex_openai_more_test.go) so the test payload conforms to the type
definition used by codex_to_openai.go and codex_to_claude.go.
In `@internal/converter/tool_name.go`:
- Around line 31-46: The closure baseCandidate duplicates the exact truncation
logic already implemented in shortenNameIfNeeded; remove the duplicated logic by
delegating to or reusing shortenNameIfNeeded instead of reimplementing it.
Replace the body of baseCandidate (or remove the closure entirely) so it calls
shortenNameIfNeeded(n, maxToolNameLen) or returns its result, and ensure callers
expecting baseCandidate still receive the same behavior; keep special "mcp__"
handling centralized in shortenNameIfNeeded to avoid future divergence.
In `@internal/executor/middleware_dispatch.go`:
- Around line 88-376: The retry loop in dispatch duplicates pricing, attempt
update/broadcast, and proxyReq update/broadcast logic; extract that repeated
logic into helper methods (e.g., Executor.calculateAttemptCost(attempt
*domain.ProxyUpstreamAttempt, provider *domain.Provider, clientType
domain.ClientType), Executor.updateAndBroadcastAttempt(attempt
*domain.ProxyUpstreamAttempt), and
Executor.updateAndBroadcastProxyRequest(proxyReq *domain.ProxyRequest,
responseCapture ResponseCapture) ) and call them from both the success and
failure branches in the retry loop inside dispatch; ensure the helpers
encapsulate the usage.Metrics construction +
pricing.GlobalCalculator().CalculateWithResult, the e.attemptRepo.Update +
e.broadcaster.BroadcastProxyUpstreamAttempt calls, the shouldClearRequestDetail
checks, and the proxyReq field updates (including ExtractFromResponse logic) so
the main loop only sets attempt/proxyReq fields and invokes these helpers to
remove duplication.
In `@internal/executor/middleware_ingress.go`:
- Around line 90-107: The code assigns proxyReq.RequestInfo using
c.Request.Method and other fields without guaranteeing c.Request is non-nil,
causing a panic when shouldClearRequestDetail() is false but c.Request == nil;
update the block around shouldClearRequestDetail() to guard accesses to
c.Request (e.g., check c.Request != nil before reading Method/URL and only set
Host/header when c.Request present) and populate proxyReq.RequestInfo with safe
defaults (empty strings or existing requestURI/requestHeaders/requestBody) when
c.Request is nil so proxyReq.RequestInfo is always constructed without
dereferencing a nil c.Request; touch the shouldClearRequestDetail(),
proxyReq.RequestInfo assignment, and the Host header logic to ensure all
c.Request usages are protected.
- Around line 130-141: Replace the direct equality check against
context.Canceled in the block handling e.projectWaiter.WaitForProject's error
with errors.Is(err, context.Canceled) so wrapped cancellation errors are
detected; update the branch that sets status = "CANCELLED", errorMsg = "client
cancelled: "+err.Error() and the
e.broadcaster.BroadcastMessage("session_pending_cancelled", ...) call (which
references state.sessionID) to run when errors.Is(...) is true instead of err ==
context.Canceled so cancellations are correctly classified even when the error
is wrapped.
🟡 Minor comments (9)
internal/executor/middleware_route_match.go-36-36 (1)
36-36: ⚠️ Potential issue | 🟡 Minor

The original route-match error is discarded, losing diagnostic context.

The err returned by router.Match may contain the concrete failure reason (e.g., insufficient permission, model mismatch), but line 36 replaces it with the generic "no routes available" message. Preserve the original error in the message:

```diff
-err = domain.NewProxyErrorWithMessage(domain.ErrNoRoutes, false, "no routes available")
+err = domain.NewProxyErrorWithMessage(domain.ErrNoRoutes, false, "no routes available: "+err.Error())
```

internal/adapter/provider/antigravity/request.go-272-300 (1)
272-300: ⚠️ Potential issue | 🟡 Minor

Multiple sjson.SetBytes errors are silently ignored in finalizeOpenAIWrappedRequest.

Lines 284-294 make 8 consecutive sjson.SetBytes/sjson.DeleteBytes calls, discarding every error with _. If an intermediate operation fails (e.g., invalid JSON), subsequent operations keep running against a corrupted payload and produce unpredictable results. At minimum, check errors for the critical fields (such as project and model), or return the original payload on overall failure.

internal/adapter/provider/antigravity/translator_helpers.go-1-4 (1)
1-4: ⚠️ Potential issue | 🟡 Minor

The package comment does not match the actual package name.

The comment says "Package util", but the package is antigravity.

🔧 Suggested fix

```diff
-// Package util provides utility functions for the CLI Proxy API server.
-// It includes helper functions for JSON manipulation, proxy configuration,
-// and other common operations used across the application.
-package antigravity
+// Package antigravity provides the Antigravity provider adapter,
+// including helpers for JSON manipulation and request/response transformation.
+package antigravity
```

internal/adapter/provider/antigravity/translator_helpers.go-90-97 (1)
90-97: ⚠️ Potential issue | 🟡 Minor

DeleteKey silently ignores deletion errors, and paths can become stale after multiple deletions.

Walk collects all paths up front, but each sjson.Delete mutates the JSON structure, so later paths may point at the wrong location (especially with array indices). Delete in reverse order (iterate paths backwards) to reduce path drift, and at least log deletion errors.

🔧 Reverse-order deletion suggestion

```diff
 func DeleteKey(jsonStr, keyName string) string {
 	paths := make([]string, 0)
 	Walk(gjson.Parse(jsonStr), "", keyName, &paths)
-	for _, p := range paths {
+	for i := len(paths) - 1; i >= 0; i-- {
+		p := paths[i]
 		jsonStr, _ = sjson.Delete(jsonStr, p)
 	}
 	return jsonStr
 }
```

internal/adapter/provider/antigravity/schema_cleaner.go-1-2 (1)
1-2: ⚠️ Potential issue | 🟡 Minor

The package comment does not match the actual package name.

The comment says "Package util provides utility functions for the CLI Proxy API server.", but the package is antigravity.

🐛 Suggested fix

```diff
-// Package util provides utility functions for the CLI Proxy API server.
-package antigravity
+// Package antigravity provides Antigravity provider adapter and utilities.
+package antigravity
```

internal/executor/middleware_dispatch.go-54-84 (1)
54-84: ⚠️ Potential issue | 🟡 Minor

Severely inconsistent indentation hurts readability and maintainability.

The indentation levels in lines 55-84 are chaotic; nested blocks use inconsistent tab counts. For example:

- Line 55's `if e.converter.NeedConvert` is indented one level deeper than the surrounding code
- Line 62's `if currentClientType == domain.ClientTypeCodex` is not at the expected level
- Line 77's `convertedURI` assignment dedents unexpectedly

The Go compiler only validates brace pairing, so this indentation will mislead reviewers about the logical nesting. Reformat the section with gofmt or goimports.

internal/converter/codex_to_openai.go-376-403 (1)
376-403: ⚠️ Potential issue | 🟡 Minor

Delete the unused function extractCodexOutputText.

The function is never called anywhere in the codebase; it is dead code.
internal/converter/openai_to_codex.go-414-414 (1)
414-414: ⚠️ Potential issue | 🟡 Minor

The state.Custom type assertion lacks a safety check.

Line 414's `st := state.Custom.(*openaiToResponsesState)` is a bare type assertion. If other code ever sets state.Custom to a different type, this panics at runtime. Use a comma-ok assertion (consistent with the approach in getClaudeStreamState).

Suggested change:

```diff
-st := state.Custom.(*openaiToResponsesState)
+st, ok := state.Custom.(*openaiToResponsesState)
+if !ok {
+	st = &openaiToResponsesState{ /* ... initialize fields ... */ }
+	state.Custom = st
+}
```

internal/converter/openai_to_codex.go-80-80 (1)
80-80: ⚠️ Potential issue | 🟡 Minor

The variable c shadows the method receiver.

Line 80's `c := m.Get("content")` shadows the method receiver `c *openaiToCodexRequest` declared on line 23. The receiver is not used again here, but the shadowing hurts readability and invites bugs. Rename it to contentVal or another non-conflicting name.

Suggested change:

```diff
-c := m.Get("content")
-if c.Exists() && c.Type == gjson.String && c.String() != "" {
+contentVal := m.Get("content")
+if contentVal.Exists() && contentVal.Type == gjson.String && contentVal.String() != "" {
 	partType := "input_text"
 	if role == "assistant" {
 		partType = "output_text"
@@ ...
-} else if c.Exists() && c.IsArray() {
-	for _, it := range c.Array() {
+} else if contentVal.Exists() && contentVal.IsArray() {
+	for _, it := range contentVal.Array() {
```
🧹 Nitpick comments (51)
web/src/pages/providers/components/clients-config-section.tsx (1)
173-186: The Strict Mode section layout refactor is structurally sound. The layout changed to a flex row with the label and description on the left and the switch on the right; the semantics are clear.

Minor suggestion: with items-center, if the description text wraps, the Switch is vertically centered against the whole left-hand div and may no longer align with the label text. If the description can wrap, consider items-start plus a small top margin on the Switch to keep it aligned with the label. If the description is always short, the current implementation is fine.

web/src/hooks/queries/use-providers.ts (2)
54-63: Fetch-then-merge strategy: the `data as Provider` fallback branch is unreachable.

getTransport().getProvider(id) throws when the provider does not exist (network error / 404) rather than returning undefined, so the falsy branch of the ternary on line 61 (`data as Provider`) can never execute at runtime. This is not a functional bug, but it misleads later readers into thinking there is a legitimate path that sends only the partial data. Simplify by removing the unreachable branch:

♻️ Suggested simplification

```diff
 mutationFn: async ({ id, data }: { id: number; data: Partial<Provider> }) => {
   const existing =
     queryClient.getQueryData<Provider>(providerKeys.detail(id)) ||
     queryClient
       .getQueryData<Provider[]>(providerKeys.list())
       ?.find((provider) => provider.id === id) ||
     (await getTransport().getProvider(id));
-  const payload = existing ? { ...existing, ...data } : (data as Provider);
+  const payload = { ...existing, ...data };
   return getTransport().updateProvider(id, payload);
 },
```
55-60: Potential race and stale-data risk.

This code prefers reading the existing provider from the query cache before shallow-merging with `data`. Two caveats:

- Stale cache: if the cached data is outdated but not yet refetched, the merge sends stale field values to the backend, possibly overwriting updates from other sources. This is especially visible with multiple tabs or collaborative editing.
- Shallow-merge semantics: `{ ...existing, ...data }` replaces nested objects (such as config) wholesale rather than deep-merging. Callers updating config must pass the complete config object or silently lose the omitted fields.

If the backend has switched from PATCH (partial update) to PUT (full replace) semantics, this fetch-then-merge approach is reasonable, but add a comment at the top of the function or module documenting the decision so future maintainers understand why the fetch-then-merge is needed.
internal/converter/gemini_to_openai.go (3)
296-301: When time.Parse fails, createdAt silently stays 0 and the later time.Now() fallback applies; the logic is correct but unobservable.

When the createTime field exists but does not conform to time.RFC3339Nano, the parse error is silently swallowed, createdAt stays 0, and subsequent chunks use time.Now().Unix(). Functionally fine, but add at least a debug log line to help diagnose timestamp anomalies.
306-319: The repeated chunk construction + createdAt override pattern appears 7 times; extract a helper.

Every OpenAIStreamChunk construction sets ID, Object, Created, and Model, immediately followed by an `if createdAt > 0` override. A small factory removes the duplication:

♻️ Suggested helper

```go
func newOpenAIStreamChunk(messageID, model string, createdAt int64, choices []OpenAIChoice) OpenAIStreamChunk {
	created := time.Now().Unix()
	if createdAt > 0 {
		created = createdAt
	}
	return OpenAIStreamChunk{
		ID:      messageID,
		Object:  "chat.completion.chunk",
		Created: created,
		Model:   model,
		Choices: choices,
	}
}
```

Each construction then becomes a one-line call, e.g.:

```go
openaiChunk := newOpenAIStreamChunk(state.MessageID, streamMeta.Model, createdAt, []OpenAIChoice{{
	Index: 0,
	Delta: &OpenAIMessage{Role: "assistant", Content: part.Text},
}})
```

Also applies to: 326-339, 343-356, 359-378, 385-405, 416-429
275-280: Use the already-unmarshaled geminiChunk struct fields instead of re-parsing.

Line 272 already unmarshals event.Data into geminiChunk (a GeminiStreamChunk), which contains the ModelVersion and ResponseID fields. Line 275's gjson.ParseBytes re-parses the same data just to extract those two values. Use geminiChunk.ModelVersion and geminiChunk.ResponseID directly and avoid the second parse. (The createTime extraction can stay, since it needs the special RFC3339Nano parse and Unix conversion.)

internal/executor/middleware_route_match.go (1)
27-57: The two error branches are highly duplicated; extract a helper.

Lines 27-41 (router.Match returned an error) and lines 43-57 (empty route list) are nearly identical, differing only in the error message. Extract a private helper method to remove the duplication. Additionally, line 36 discards the original err from router.Match and creates a fresh ErrNoRoutes wrapper; preserve the original error context for easier troubleshooting:

♻️ Suggested refactor

```diff
+func (e *Executor) failRouteMatch(c *flow.Ctx, state *execState, msg string, cause error) {
+	proxyReq := state.proxyReq
+	proxyReq.Status = "FAILED"
+	proxyReq.Error = msg
+	proxyReq.EndTime = time.Now()
+	proxyReq.Duration = proxyReq.EndTime.Sub(proxyReq.StartTime)
+	_ = e.proxyRequestRepo.Update(proxyReq)
+	if e.broadcaster != nil {
+		e.broadcaster.BroadcastProxyRequest(proxyReq)
+	}
+	err := domain.NewProxyErrorWithMessage(domain.ErrNoRoutes, false, msg)
+	state.lastErr = err
+	c.Err = err
+	c.Abort()
+}
```

Then reuse it in routeMatch:

```diff
 if err != nil {
-	proxyReq.Status = "FAILED"
-	proxyReq.Error = "no routes available"
-	...
-	c.Abort()
-	return
+	e.failRouteMatch(c, state, "no routes available: "+err.Error(), err)
+	return
 }
 if len(routes) == 0 {
-	proxyReq.Status = "FAILED"
-	proxyReq.Error = "no routes configured"
-	...
-	c.Abort()
-	return
+	e.failRouteMatch(c, state, "no routes configured", nil)
+	return
 }
```

internal/converter/tool_name.go (2)
14-22: With the mcp__ prefix, LastIndex("__") yields an ineffective shortening when there is only one __ separator.

When the name is "mcp__very_long_tool_name_..." and contains only one __, LastIndex("__") returns 3 (the __ after mcp), so candidate = "mcp__" + name[5:] equals the original name and the code falls through to the truncation at lines 18-19. The result is still correct (truncated to 64 characters), but it means the mcp__ special-casing has no real effect for single-level names; it only kicks in with multi-level __ separation (e.g., mcp__server__tool). If that is the intended behavior, add a comment saying so.
48-67: The infinite loop in makeUnique is safe in practice, but consider an upper bound.

`for i := 1; ; i++` is theoretically unbounded. In real usage the number of tool names is finite and this cannot hang, but as defensive coding, consider a reasonable cap (e.g., 10000) and panic or return an error when it is exceeded, avoiding a hang in extreme cases.

internal/adapter/provider/antigravity/translator_helpers.go (1)
30-48: strings.NewReplacer is recreated on every ForEach callback.

keyReplacer's replacement rules are fixed; hoist it to a package-level variable to avoid repeated allocation while recursively walking large JSON documents.

♻️ Suggested package-level variable

```diff
+var keyReplacer = strings.NewReplacer(".", "\\.", "*", "\\*", "?", "\\?")
+
 func Walk(value gjson.Result, path, field string, paths *[]string) {
 	switch value.Type {
 	case gjson.JSON:
 		value.ForEach(func(key, val gjson.Result) bool {
 			var childPath string
-			var keyReplacer = strings.NewReplacer(".", "\\.", "*", "\\*", "?", "\\?")
 			safeKey := keyReplacer.Replace(key.String())
```

internal/flow/engine.go (1)
34-47: The Ctx structure design is sound, but StreamBody lifecycle management falls on callers.

StreamBody io.ReadCloser has no automatic cleanup in Ctx. Ensure every middleware that uses StreamBody calls Close() when finished, to avoid resource leaks.

internal/adapter/provider/antigravity/request.go (2)
33-40: isStreamRequest and extractSessionID can use gjson instead of full unmarshaling.

Both functions json.Unmarshal the entire request body just to read a single field. The project already imports gjson; use gjson.GetBytes(body, "stream") and gjson.GetBytes(body, "metadata.user_id") to avoid the unnecessary full parse.

♻️ gjson-based simplification

```diff
 func isStreamRequest(body []byte) bool {
-	var req map[string]interface{}
-	if err := json.Unmarshal(body, &req); err != nil {
-		return false
-	}
-	stream, _ := req["stream"].(bool)
-	return stream
+	return gjson.GetBytes(body, "stream").Bool()
 }

 func extractSessionID(body []byte) string {
-	var req map[string]interface{}
-	if err := json.Unmarshal(body, &req); err != nil {
-		return ""
-	}
-	metadata, ok := req["metadata"].(map[string]interface{})
-	if !ok {
-		return ""
-	}
-	userID, _ := metadata["user_id"].(string)
-	return userID
+	return gjson.GetBytes(body, "metadata.user_id").String()
 }
```

Also applies to: 44-57
304-315: The parametersJsonSchema rename path in applyAntigravityRequestTuning relies on string slicing.

Line 312 uses `p[:len(p)-len("parametersJsonSchema")]+"parameters"`, which assumes the path always ends with parametersJsonSchema. Walk's search logic does guarantee that, but strings.TrimSuffix makes the intent clearer and is more defensive.

♻️ Using strings.TrimSuffix

```diff
 for _, p := range paths {
-	if renamed, err := RenameKey(strJSON, p, p[:len(p)-len("parametersJsonSchema")]+"parameters"); err == nil {
+	if renamed, err := RenameKey(strJSON, p, strings.TrimSuffix(p, "parametersJsonSchema")+"parameters"); err == nil {
 		strJSON = renamed
```

internal/adapter/provider/antigravity/request_test.go (1)
9-44: The test covers the core tuning path for Claude models; add more scenarios.

Only the claude-sonnet-4-5 model path is tested. applyAntigravityRequestTuning has different branch logic for non-Claude models (such as gemini-2.5-flash): deleting maxOutputTokens rather than setting VALIDATED mode. Consider adding:

- A Gemini path test for non-Claude models
- A check that the original systemInstruction.parts content ("original") is preserved by the append
- Edge cases for an empty payload / missing fields

internal/adapter/provider/antigravity/response.go (1)
271-302: The post-aggregation response construction is reasonable, but note that sjson.Set errors are ignored.

Lines 276-294 make about 10 sjson.Set/sjson.SetRaw calls, all discarding errors with _. Failure is unlikely in these contexts (the input comes from valid JSON parsing), but if one set fails, later operations keep building on an incomplete responseTemplate. For response data on the critical path, at least run a single gjson.Valid check on the final result.

internal/domain/model.go (1)
815-852: These helpers (containsWildcard, splitByWildcard, hasPrefix, hasSuffix, indexOf) are now dead code; delete them.

MatchWildcard has been rewritten with a standalone iterative matching algorithm and no longer depends on these helpers. No calls to them were found anywhere in the codebase, including test files; removing the unused functions reduces the maintenance burden.

internal/adapter/provider/antigravity/transform_request.go (1)
56-82: Inconsistent step numbering in comments.

After the safety-settings code was removed, the comment numbering became confused:

- Line 53 is labeled "8. Build Gemini request", but its sub-steps are labeled 7.1-7.4 (should be 8.1-8.4)
- Line 77's "5.5 deep-clean" appears after step 7.4
- Line 81's "6. Serialize" appears after "5.5"

Renumber consistently to keep the logic clear.
internal/adapter/provider/antigravity/openai_request.go (3)
170-182: The data URL parsing logic is duplicated and uses magic numbers.

The data URL parsing in the user messages (lines 170-182) and the assistant messages (lines 220-232) is identical, and uses the magic numbers 5 (length of "data:") and 7 (length of "base64,"). Extract a helper to reduce duplication and improve readability.

♻️ Suggested helper

```go
// parseDataURL extracts MIME type and base64 data from a data URL.
// Returns ("", "", false) if the URL is not a valid data URL.
func parseDataURL(dataURL string) (mimeType, data string, ok bool) {
	const dataPrefix = "data:"     // len = 5
	const base64Marker = "base64," // len = 7
	if !strings.HasPrefix(dataURL, dataPrefix) {
		return "", "", false
	}
	pieces := strings.SplitN(dataURL[len(dataPrefix):], ";", 2)
	if len(pieces) != 2 || !strings.HasPrefix(pieces[1], base64Marker) {
		return "", "", false
	}
	return pieces[0], pieces[1][len(base64Marker):], true
}
```
404-404: itoa can use strconv.Itoa instead.

fmt.Sprintf("%d", i) is slower and allocates more than strconv.Itoa(i). itoa is called frequently in this file (in the message and tool processing loops); use the standard library.

♻️ Suggested change

```diff
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
+func itoa(i int) string { return strconv.Itoa(i) }
```

Add "strconv" to the imports.
19-25: The function is too large (~280 lines); consider splitting it.

ConvertOpenAIRequestToAntigravity mixes several responsibilities (system-instruction extraction, message conversion, tool mapping) and makes three passes over the messages. Consider extracting the message processing (lines 90-293) and tool processing (lines 295-399) into separate helpers for readability and testability.

internal/adapter/provider/antigravity/schema_cleaner.go (2)
422-427: `append(unsupportedConstraints, ...)` may cause trouble in the future.

With the current Go slice literal, cap == len, so append allocates a new array and does not modify the global variable. Still, appending directly to a module-level variable is easy to misread as mutation of the original slice. Building a new slice explicitly is clearer:

♻️ Suggested change

```diff
 func removeUnsupportedKeywords(jsonStr string) string {
-	keywords := append(unsupportedConstraints,
+	keywords := make([]string, 0, len(unsupportedConstraints)+7)
+	keywords = append(keywords, unsupportedConstraints...)
+	keywords = append(keywords,
 		"$schema",
 		"$defs",
 		"definitions",
 		"const",
 		"$ref",
 		"additionalProperties",
-		"propertyNames", // Gemini doesn't support property name validation
+		"propertyNames",
 	)
```
14-14: gjsonPathKeyReplacer duplicates keyReplacer in translator_helpers.go.

The Walk() function in translator_helpers.go (same package) defines the identical strings.NewReplacer(".", "\\.", "*", "\\*", "?", "\\?"). Standardize on the package-level gjsonPathKeyReplacer defined here to avoid the duplicate definition.

internal/converter/coverage_openai_response_test.go (1)
542-544: The finish_reason assertion is brittle; it depends on the literal JSON serialization.

strings.Contains(string(out), "finish_reason\":null") makes implicit assumptions about the JSON serialization (no spaces, exact key prefix). Use gjson (already imported in this file) for a structural assertion, consistent with the approach at lines 381-382.

♻️ Suggested change

```diff
-if !strings.Contains(string(out), "finish_reason\":null") {
-	t.Fatalf("expected empty finish reason")
+if gjson.GetBytes(out, "choices.0.finish_reason").Exists() && gjson.GetBytes(out, "choices.0.finish_reason").Type != gjson.Null {
+	t.Fatalf("expected null finish_reason, got: %v", gjson.GetBytes(out, "choices.0.finish_reason").Value())
 }
```

The same issue also appears at lines 643-644.

internal/executor/middleware_egress.go (2)
internal/executor/middleware_egress.go (2)
55-55: `_ = state.lastErr` is a no-op / dead code.

Reading a struct field and assigning it to the blank identifier has no side effects. If it is a placeholder for future use, add a `// TODO` comment; otherwise remove it.

35-35: The repository `Update` error is silently dropped; at least log it.

The errors from `_ = e.proxyRequestRepo.Update(proxyReq)` and `_ = e.attemptRepo.Update(state.currentAttempt)` are both ignored. In the egress cleanup phase, a persistence failure can leave monitoring and audit data inconsistent. Follow the handling of `Create` failures in `middleware_ingress.go` line 110 and add logging.

♻️ Suggested change

```diff
-	_ = e.proxyRequestRepo.Update(proxyReq)
+	if err := e.proxyRequestRepo.Update(proxyReq); err != nil {
+		log.Printf("[Executor] egress: failed to update proxy request: %v", err)
+	}
```

```diff
-	_ = e.attemptRepo.Update(state.currentAttempt)
+	if err := e.attemptRepo.Update(state.currentAttempt); err != nil {
+		log.Printf("[Executor] egress: failed to update upstream attempt: %v", err)
+	}
```

Also applies to: 49-49
internal/adapter/provider/codex/oauth.go (1)
279-421: The anonymous structs in `codexUsageAPIResponse` are heavily duplicated; extract named types.

The same 8-field window anonymous struct is copied about 8 times, and the rate-limit anonymous struct (with `Allowed`, `LimitReached`, windows, and so on) 4 times — roughly 140 lines of near-identical definitions. `parseWindow` already declares the struct type by hand for its parameter, so extracting named window and rate-limit types at package level would cut the duplication drastically.

♻️ Suggested named types

```go
// codexAPIWindow is the raw API window supporting both snake_case and camelCase.
type codexAPIWindow struct {
	UsedPercent             *float64 `json:"used_percent,omitempty"`
	UsedPercentCamel        *float64 `json:"usedPercent,omitempty"`
	LimitWindowSeconds      *int64   `json:"limit_window_seconds,omitempty"`
	LimitWindowSecondsCamel *int64   `json:"limitWindowSeconds,omitempty"`
	ResetAfterSeconds       *int64   `json:"reset_after_seconds,omitempty"`
	ResetAfterSecondsCamel  *int64   `json:"resetAfterSeconds,omitempty"`
	ResetAt                 *int64   `json:"reset_at,omitempty"`
	ResetAtCamel            *int64   `json:"resetAt,omitempty"`
}

// codexAPIRateLimit is the raw API rate limit supporting both naming conventions.
type codexAPIRateLimit struct {
	Allowed              *bool           `json:"allowed,omitempty"`
	LimitReached         *bool           `json:"limit_reached,omitempty"`
	LimitReachedCamel    *bool           `json:"limitReached,omitempty"`
	PrimaryWindow        *codexAPIWindow `json:"primary_window,omitempty"`
	PrimaryWindowCamel   *codexAPIWindow `json:"primaryWindow,omitempty"`
	SecondaryWindow      *codexAPIWindow `json:"secondary_window,omitempty"`
	SecondaryWindowCamel *codexAPIWindow `json:"secondaryWindow,omitempty"`
}
```

Then reference them directly in `codexUsageAPIResponse`:

```diff
 type codexUsageAPIResponse struct {
 	PlanType      string `json:"plan_type,omitempty"`
 	PlanTypeCamel string `json:"planType,omitempty"`
-	RateLimit      *struct { /* ~40 lines */ } `json:"rate_limit,omitempty"`
-	RateLimitCamel *struct { /* ~40 lines */ } `json:"rateLimit,omitempty"`
+	RateLimit      *codexAPIRateLimit `json:"rate_limit,omitempty"`
+	RateLimitCamel *codexAPIRateLimit `json:"rateLimit,omitempty"`
 	// ... same for CodeReviewRateLimit
 }
```

internal/executor/middleware_ingress.go (3)
121-121: The `GetBySessionID` error is silently dropped.

If the repository query fails (a database connection problem, for example), `session` is `nil` and an empty session gets created downstream. While the database is temporarily unavailable, this creates unnecessary new session objects and can bypass an existing project binding. At least log the error.

♻️ Suggested change

```diff
-	session, _ := e.sessionRepo.GetBySessionID(state.sessionID)
+	session, err := e.sessionRepo.GetBySessionID(state.sessionID)
+	if err != nil {
+		log.Printf("[Executor] ingress: failed to get session %s: %v", state.sessionID, err)
+	}
```
22-75: The flow-context key extraction repeats a lot of boilerplate; a generic helper could simplify it.

About 50 lines follow the same `c.Get` + double type-assertion pattern. A generic helper (Go 1.18+) would cut the boilerplate.

♻️ Example helper

```go
func flowGet[T any](c *flow.Ctx, key string) (T, bool) {
	v, ok := c.Get(key)
	if !ok {
		var zero T
		return zero, false
	}
	t, ok := v.(T)
	return t, ok
}
```

Usage:

```go
if ct, ok := flowGet[domain.ClientType](c, flow.KeyClientType); ok {
	state.clientType = ct
}
```
160-162: Line 162's `state.ctx = ctx` is a redundant assignment.

`ctx` is read from `state.ctx` on line 22, and line 118 already assigns it back to `state.ctx`; assigning it again here has no effect (the local `ctx` variable is not modified in between).

internal/converter/registry.go (2)
52-54: The `ResponseTransformerWithState` interface needs a doc comment.

The interface lacks a Go doc comment, which is inconsistent with the other interfaces in this file (such as `RequestTransformer` and `ResponseTransformer`). Add a comment describing its purpose and its relationship to `ResponseTransformer`.

♻️ Suggested comment

```diff
+// ResponseTransformerWithState extends response transformation with per-request state.
+// Transformers implementing this interface will be preferred over ResponseTransformer.Transform
+// when TransformResponseWithState is called with a non-nil state.
 type ResponseTransformerWithState interface {
 	TransformWithState(body []byte, state *TransformState) ([]byte, error)
 }
```
186-188: `mustMarshal` silently ignores serialization errors, which can yield empty data downstream.

When `json.Marshal` fails it returns `nil`; callers that do not check the return value may write an empty byte slice into the response. Failure is unlikely for basic types, but the `must` prefix conventionally implies a panic rather than a silent failure.

♻️ Suggestion: either panic (matching the `must` convention) or rename and return an error

```diff
 func mustMarshal(v interface{}) []byte {
-	b, _ := json.Marshal(v)
+	b, err := json.Marshal(v)
+	if err != nil {
+		panic(fmt.Sprintf("mustMarshal: %v", err))
+	}
 	return b
 }
```

internal/executor/converting_writer.go (2)
253-262: All three branches of `updateContentType` set the same Content-Type.

Every branch sets `"application/json"`, so this can be collapsed. If no per-client differentiation is expected in the future, merge the branches.

♻️ Simplification

```diff
 func (c *ConvertingResponseWriter) updateContentType() {
-	switch c.originalType {
-	case domain.ClientTypeClaude:
-		c.underlying.Header().Set("Content-Type", "application/json")
-	case domain.ClientTypeOpenAI:
-		c.underlying.Header().Set("Content-Type", "application/json")
-	case domain.ClientTypeGemini:
-		c.underlying.Header().Set("Content-Type", "application/json")
-	}
+	if c.originalType != "" {
+		c.underlying.Header().Set("Content-Type", "application/json")
+	}
 }
```
279-318: The OpenAI type is absent from the priority chain in `GetPreferredTargetType`.

When `providerType != "codex"`, the priority order is Gemini → Claude → first supported type → `originalType`. If `supportedTypes` contains only `openai`, the fallback at line 313 returns `supportedTypes[0]`, which is correct. Still, add a comment stating explicitly that OpenAI needs no explicit preference because most clients are OpenAI-shaped to begin with.

internal/converter/codex_to_openai.go (1)
419-446: `buildReverseMapFromOriginalOpenAI` still calls `gjson.GetBytes(nil, ...)` when `original` is `nil`.

`gjson.GetBytes(nil, ...)` returns an empty Result, so this is functionally safe and yields an empty map. But line 168 passes `nil` only for lines 169-171 to potentially overwrite it with a valid value — the logic can be simplified to avoid building a pointless empty map.

♻️ Simplification

```diff
-	rev := buildReverseMapFromOriginalOpenAI(nil)
-	if state != nil && len(state.OriginalRequestBody) > 0 {
-		rev = buildReverseMapFromOriginalOpenAI(state.OriginalRequestBody)
-	}
+	var originalBody []byte
+	if state != nil {
+		originalBody = state.OriginalRequestBody
+	}
+	rev := buildReverseMapFromOriginalOpenAI(originalBody)
```

internal/converter/codex_openai_stream_test.go (1)
45-53: Extract the duplicated SSE event-driving loop into a test helper.

The event loops in `TestCodexToOpenAIStreamToolCalls` (lines 45-53) and `TestCodexToClaudeStreamToolStopReason` (lines 129-137) are identical. Extract a `transformEvents(t, conv, state, events)` helper to reduce the duplication.

Also applies to: 129-137
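The shared loop both tests duplicate can be sketched as follows; `Converter` here is a stand-in interface (the real stream transformer's signature is project-specific, and the real helper would also take `*testing.T` and the transform state):

```go
package main

import "fmt"

// Converter is a stand-in for the project's stream transformer.
type Converter interface {
	TransformChunk(event string) (string, error)
}

// upper is a toy converter used only to exercise the helper.
type upper struct{}

func (upper) TransformChunk(e string) (string, error) { return e + ";", nil }

// transformEvents drives a converter through a fixed event sequence and
// concatenates its output — the loop both tests currently inline.
func transformEvents(conv Converter, events []string) (string, error) {
	var out string
	for _, ev := range events {
		chunk, err := conv.TransformChunk(ev)
		if err != nil {
			return "", err
		}
		out += chunk
	}
	return out, nil
}

func main() {
	out, _ := transformEvents(upper{}, []string{"a", "b"})
	fmt.Println(out) // a;b;
}
```

Each test then reduces to building its event list and asserting on the concatenated output.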
internal/handler/proxy.go (2)
161-211: Duplicate key assignments in the flow context.

The following keys are semantically redundant:

- `KeyIsStream` (line 169) vs `KeyProxyStream` (line 209)
- `KeyRequestModel` (line 163) vs `KeyProxyRequestModel` (line 210)

The `dispatch` method (lines 217-221) also has to re-read the stream value from `KeyProxyStream`. If the keys serve different purposes (for example, one holds the original value and the other is middleware-mutable), document the distinction where the key constants are defined; otherwise consolidate on a single key.

130-130: `r.WithContext(ctx)` is a redundant operation.

Line 130 does `ctx := r.Context()` and line 204 does `r = r.WithContext(ctx)` — the request's context is taken out and set back unchanged, which has no effect and only allocates an unnecessary object.

♻️ Remove the redundant code

```diff
-	ctx := r.Context()
 	clientType := h.clientAdapter.DetectClientType(r, body)
 	...
-	r = r.WithContext(ctx)
 	c.Request = r
```

If values do need to be injected into the context later, use `r = r.WithContext(context.WithValue(ctx, key, val))` instead.

Also applies to: 204-205

internal/executor/middleware_dispatch.go (1)
internal/executor/middleware_dispatch.go (1)
309-328: 上下文取消检测逻辑存在冗余判断。Line 310:
if ok && ctx.Err() != nil—— 检查是否为 ProxyError 且 context 已取消。
Line 330:if ok && ctx.Err() != context.Canceled—— 检查是否为 ProxyError 且 context 未被取消(可能是 DeadlineExceeded 或其他情况)。但 Line 310 的分支已经处理了
ctx.Err() != nil并 return,所以 Line 330 执行时ctx.Err()理论上为nil(除非在 310-329 之间 context 状态变化)。这意味着 Line 330 的ctx.Err() != context.Canceled条件始终为 true(因为nil != context.Canceled),该条件检查是多余的。♻️ 简化条件判断
- if ok && ctx.Err() != context.Canceled { + if ok { log.Printf("[Executor] ProxyError - IsNetworkError: %v, IsServerError: %v, Retryable: %v, Provider: %d", proxyErr.IsNetworkError, proxyErr.IsServerError, proxyErr.Retryable, matchedRoute.Provider.ID) e.handleCooldown(proxyErr, matchedRoute.Provider, currentClientType, originalClientType)同时可以移除 Line 339 的
else if ok && ctx.Err() == context.Canceled分支,因为已不可达。internal/converter/codex_to_claude.go (2)
287-294: The type assertion on `msgDelta["usage"]` is fragile.

Line 293's `msgDelta["usage"].(map[string]int)` asserts directly on a value just created at lines 287-290. It cannot panic today, but if anyone changes the value's type at line 287, this crashes at runtime. Build the usage map in a local variable and reuse it.

♻️ Suggested change

```diff
-	msgDelta := map[string]interface{}{
+	usageMap := map[string]int{
+		"input_tokens":  inputTokens,
+		"output_tokens": outputTokens,
+	}
+	if cachedTokens > 0 {
+		usageMap["cache_read_input_tokens"] = cachedTokens
+	}
+	msgDelta := map[string]interface{}{
 		"type": "message_delta",
 		"delta": map[string]interface{}{
 			"stop_reason": stopReason,
 		},
-		"usage": map[string]int{
-			"input_tokens":  inputTokens,
-			"output_tokens": outputTokens,
-		},
-	}
-	if cachedTokens > 0 {
-		msgDelta["usage"].(map[string]int)["cache_read_input_tokens"] = cachedTokens
+		"usage": usageMap,
 	}
```
436-459: `tools.Array()` is called twice.

Line 439's `len(tools.Array()) > 0` and line 441's `arr := tools.Array()` each call `tools.Array()`, and every call allocates a new slice. Call it once.

♻️ Suggested change

```diff
 func buildReverseMapFromClaudeOriginalShortToOriginal(original []byte) map[string]string {
 	tools := gjson.GetBytes(original, "tools")
 	rev := map[string]string{}
-	if tools.IsArray() && len(tools.Array()) > 0 {
-		var names []string
-		arr := tools.Array()
+	arr := tools.Array()
+	if len(arr) > 0 {
+		var names []string
 		for i := 0; i < len(arr); i++ {
```

internal/flow/helpers.go (1)
10-152: Consider generics to cut the boilerplate.

All 16 helper functions follow the same pattern: `c.Get(key)` → type assertion → return. Go 1.18+ generics can simplify this substantially:

```go
func getValue[T any](c *Ctx, key string) T {
	if v, ok := c.Get(key); ok {
		if t, ok := v.(T); ok {
			return t
		}
	}
	var zero T
	return zero
}

// Usage:
func GetClientType(c *Ctx) domain.ClientType { return getValue[domain.ClientType](c, KeyClientType) }
func GetSessionID(c *Ctx) string             { return getValue[string](c, KeySessionID) }
// ...
```

The current code is fine as-is; this only trims maintenance cost and can be a follow-up optimization.
internal/adapter/provider/antigravity/adapter.go (1)
554-573: `isAntigravityEndpoint` contains redundant checks.

Line 560's `cloudcode-pa.googleapis.com` already covers line 563's `daily-cloudcode-pa.googleapis.com` match (because the comparison uses `strings.Contains`). The combined check at line 569 is likewise covered by the earlier conditions. The logic can be simplified.

internal/adapter/provider/kiro/adapter.go (1)
705-732: The HTTP client has no overall Timeout.

The comment says this matches kiro2api behavior, but without a Timeout a request can hang indefinitely. At minimum, set a generous fallback timeout (e.g., 10 minutes) to prevent connection leaks.

internal/converter/openai_to_codex.go (5)
internal/converter/openai_to_codex.go (5)
23-28:json.Unmarshal仅用于验证但结果未使用第 24-27 行反序列化为
interface{}仅用于校验 JSON 合法性,但tmp变量后续未使用。第 28 行bytes.Clone(body)也非必要,因为gjson不会修改输入。如果只需验证 JSON 合法性,可使用json.Valid(body)。建议的改动
func (c *openaiToCodexRequest) Transform(body []byte, model string, stream bool) ([]byte, error) { - var tmp interface{} - if err := json.Unmarshal(body, &tmp); err != nil { + if !json.Valid(body) { - return nil, err + return nil, fmt.Errorf("invalid JSON input") } - rawJSON := bytes.Clone(body) + rawJSON := body out := `{"instructions":""}`
250-253: The same redundant JSON validation exists in `TransformWithState`.

As in `Transform`, the `json.Unmarshal` at lines 251-253 exists only for validation. Switch both to `json.Valid(body)`.
670-714: The hand-written bubble sort appears several times; use `slices.Sort` or `sort.Ints`.

Lines 676-682, 726-732, 780-786, and 803-808 all bubble-sort a `[]int` by hand. The standard library's `sort.Ints()`, or `slices.Sort()` on Go 1.21+, is more concise and less error-prone.

♻️ Example change (lines 672-682)

```diff
+import "sort"
+
 	if len(st.MsgItemAdded) > 0 {
 		idxs := make([]int, 0, len(st.MsgItemAdded))
 		for i := range st.MsgItemAdded {
 			idxs = append(idxs, i)
 		}
-		for i := 0; i < len(idxs); i++ {
-			for j := i + 1; j < len(idxs); j++ {
-				if idxs[j] < idxs[i] {
-					idxs[i], idxs[j] = idxs[j], idxs[i]
-				}
-			}
-		}
+		sort.Ints(idxs)
```
396-851: `convertOpenAIChatCompletionsChunkToResponses` exceeds 450 lines; consider splitting it.

The function is one large state machine converting OpenAI streaming output to Codex streaming output. Consider splitting it by responsibility:

- initialization (the `Started` check plus initial event emission) → `emitResponseStart`
- content delta handling → `handleContentDelta`
- reasoning content handling → `handleReasoningDelta`
- tool-call handling → `handleToolCallDelta`
- finish-event handling → `handleFinishReason`

This would also make each sub-path unit-testable.
841-843: `state.OriginalRequestBody` is used without checking `state` for nil.

Line 841 accesses `state.OriginalRequestBody` directly. `state` cannot actually be nil here (lines 397-399 return early at the function entry, and `convertOpenAIChatCompletionsChunkToResponses` is called from `TransformChunk` with `state` supplied externally), but if the code paths change later this spot has no guard. Safe to ignore if you are confident `state` stays non-nil.

internal/adapter/provider/custom/adapter.go (1)
43-51: Note the consistency of the `context.Background()` fallback pattern.

When `c.Request` is `nil`, `context.Background()` is used as a fallback, which means the upstream request gets no timeout/cancellation propagation. The same pattern appears in `handleStreamResponse` (lines 335-338). If `c.Request` can never be `nil` in production, consider asserting that directly or returning an error, instead of degrading silently.

internal/adapter/provider/codex/adapter.go (2)
97-108: The `stream` field is set twice.

`applyCodexRequestTuning` (line 552) already sets `"stream"` to `true` in the body, and lines 105-107 immediately set it to `true` again via `sjson.SetBytes`. Remove one of the two to avoid confusion.

♻️ Suggested removal

Since `applyCodexRequestTuning` already sets `stream`, lines 103-108 can go:

```diff
 	cacheID, updatedBody := applyCodexRequestTuning(c, requestBody)
 	requestBody = updatedBody
 	// Build upstream URL
 	upstreamURL := CodexBaseURL + "/responses"
-	upstreamStream := true
-	if len(requestBody) > 0 {
-		if updated, err := sjson.SetBytes(requestBody, "stream", upstreamStream); err == nil {
-			requestBody = updated
-		}
-	}
+	upstreamStream := true
```

Keep the `stream` assignment in `applyCodexRequestTuning` as the single source of truth.
420-424: The string match on `response.completed` can produce false positives.

Line 422 uses `strings.Contains(line, "\"response.completed\"")` to detect the completion event. If the message content happens to contain the string `"response.completed"` (say, a user discussing API event types), `responseCompleted` is set to `true` incorrectly. Parse the JSON and check the `type` field, or at least match more precisely. The impact today is low (it only marks completion early), but in edge cases it can mask a genuine disconnect error.
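A structured check along these lines can be sketched with `encoding/json` (the payload shape is assumed from the SSE event described above; the real events carry more fields):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// isResponseCompleted parses the SSE data payload and inspects the
// "type" field, so message text containing "response.completed"
// cannot trigger a false positive.
func isResponseCompleted(line string) bool {
	payload, ok := strings.CutPrefix(line, "data: ")
	if !ok {
		return false
	}
	var evt struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal([]byte(payload), &evt); err != nil {
		return false
	}
	return evt.Type == "response.completed"
}

func main() {
	fmt.Println(isResponseCompleted(`data: {"type":"response.completed"}`)) // true
	// The substring appears in the text, but the event type is "delta".
	fmt.Println(isResponseCompleted(`data: {"type":"delta","text":"\"response.completed\""}`)) // false
}
```

Parsing each line does cost more than a substring scan, but only the few lines with the `data: ` prefix are candidates, and completion detection happens at most once per response.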
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
internal/executor/converting_writer.go (1)
279-318: ⚠️ Potential issue | 🟡 Minor

`GetPreferredTargetType` should explicitly include `domain.ClientTypeOpenAI` in the priority chain.

Codex is currently special-cased only when `providerType` is `"codex"`; otherwise the priority is Gemini → Claude → first supported type. For a custom provider configured as OpenAI-compatible (supporting OpenAI by default), if the provider supports both OpenAI and Claude, the current logic picks Claude rather than OpenAI, which is probably not what's intended. Add an OpenAI check after Codex and before Gemini.

internal/adapter/provider/kiro/adapter.go (1)
156-166: ⚠️ Potential issue | 🟠 Major

The `http.NewRequestWithContext` error is discarded on the retry path, risking a nil panic.

Line 158 uses `upstreamReq, _ = http.NewRequestWithContext(...)` and drops the error. Failure is very unlikely here (the URL and context are already validated), but if creation fails, `upstreamReq` is nil and the subsequent `upstreamReq.Header.Set` panics.

🐛 Suggested fix

```diff
-	upstreamReq, _ = http.NewRequestWithContext(ctx, "POST", upstreamURL, bytes.NewReader(cwBody))
+	upstreamReq, err = http.NewRequestWithContext(ctx, "POST", upstreamURL, bytes.NewReader(cwBody))
+	if err != nil {
+		return domain.NewProxyErrorWithMessage(err, true, "failed to create retry upstream request")
+	}
```

internal/adapter/provider/codex/adapter.go (1)
467-476: ⚠️ Potential issue | 🟡 Minor

Non-EOF errors in the stream read loop are silently swallowed.

Line 475 does a plain `return nil` when `err != io.EOF` and `ctx.Err() == nil`, so an abnormal upstream disconnect (e.g., a network error) is never reported to the caller. If this is intentional (because some data has already been written to the client), add a comment saying so; otherwise log or return the error.

internal/adapter/provider/custom/adapter.go (1)
297-304: ⚠️ Potential issue | 🔴 Critical

`eventChan` has no nil check and will cause a nil-pointer panic.

Line 297 fetches `eventChan`, but line 300 calls `eventChan.SendResponseInfo(...)` with no nil guard. Likewise, the `sendFinalEvents` closure (lines 341-369) never checks for nil. Contrast with `handleNonStreamResponse` (line 248), which correctly guards with `if eventChan != nil`.

🐛 Suggested fix

```diff
 	eventChan := flow.GetEventChan(c)
 	// Send initial response info (for streaming, we only capture status and headers)
-	eventChan.SendResponseInfo(&domain.ResponseInfo{
-		Status:  resp.StatusCode,
-		Headers: flattenHeaders(resp.Header),
-		Body:    "[streaming]",
-	})
+	if eventChan != nil {
+		eventChan.SendResponseInfo(&domain.ResponseInfo{
+			Status:  resp.StatusCode,
+			Headers: flattenHeaders(resp.Header),
+			Body:    "[streaming]",
+		})
+	}
```

The same nil check is needed in the `sendFinalEvents` closure:

```diff
 	sendFinalEvents := func() {
+		if eventChan == nil {
+			return
+		}
 		if sseBuffer.Len() > 0 {
```

internal/adapter/provider/antigravity/adapter.go (2)
709-722: ⚠️ Potential issue | 🔴 Critical

`eventChan` in `handleStreamResponse` has no nil guard and can panic.

Line 715 fetches `eventChan`, and line 718 calls `eventChan.SendResponseInfo(...)` directly. If `flow.GetEventChan(c)` returns nil (for instance because no upstream middleware set the event channel), this is a nil-pointer panic. The same problem exists at line 864's `eventChan.SendFirstToken`. Follow the nil-check pattern in `handleNonStreamResponse` (line 654) and the initial block of `handleCollectedStreamResponse` (line 915).

🐛 Suggested fix

```diff
 	eventChan := flow.GetEventChan(c)
 	// Send initial response info (for streaming, we only capture status and headers)
-	eventChan.SendResponseInfo(&domain.ResponseInfo{
+	if eventChan != nil {
+		eventChan.SendResponseInfo(&domain.ResponseInfo{
 		...
-	})
+		})
+	}
```

Also add nil checks in the `sendFinalEvents` closure and at the `SendFirstToken` call site.
995-1024: ⚠️ Potential issue | 🔴 Critical

The trailing `eventChan` calls in `handleCollectedStreamResponse` lack nil checks.

The initial block at lines 915-921 correctly guards with `if eventChan != nil`, but the calls at line 997, lines 1004-1013, and lines 1022-1023 have no such guard — inconsistent, and a panic risk.

🐛 Suggested fix

```diff
-	eventChan.SendResponseInfo(&domain.ResponseInfo{
-		Status:  resp.StatusCode,
-		Headers: flattenHeaders(resp.Header),
-		Body:    upstreamSSE.String(),
-	})
+	if eventChan != nil {
+		eventChan.SendResponseInfo(&domain.ResponseInfo{
+			Status:  resp.StatusCode,
+			Headers: flattenHeaders(resp.Header),
+			Body:    upstreamSSE.String(),
+		})
+	}
```

Lines 1004-1013 and 1022-1023 need the same treatment.
🤖 Fix all issues with AI agents
In `@internal/adapter/provider/antigravity/adapter.go`:
- Around line 168-175: The code writes to the shared provider config
(config.ProjectID) inside the project ID resolution block (where
FetchProjectInfo is called), which causes a data race under concurrent Execute
calls; protect the read/write of config.ProjectID with synchronization (e.g.,
add a sync.Mutex on the provider struct or a sync.Once to perform the assignment
once) and update the resolution logic that uses projectID and assigns
config.ProjectID so that concurrent goroutines cannot concurrently write the
field; specifically, wrap the check/assignment of config.ProjectID (the lines
using strings.TrimSpace(config.ProjectID) and config.ProjectID = pid) in the
chosen lock/once protection and ensure FetchProjectInfo is still called only
when needed.
- Around line 254-258: The retry branch is ignoring the error from
http.NewRequestWithContext (upstreamReq, _ = http.NewRequestWithContext(ctx,
"POST", upstreamURL, bytes.NewReader(upstreamBody))) which can leave upstreamReq
nil and cause panics when setting headers or calling client.Do; update the retry
path in the function to capture and check the error returned by
http.NewRequestWithContext (with ctx, upstreamURL, upstreamBody), and if non-nil
return or propagate the error (or handle it consistently with the original
request path) before setting headers (Content-Type, Authorization using
accessToken, User-Agent using AntigravityUserAgent) and before calling client.Do
to avoid nil dereference.
In `@internal/adapter/provider/antigravity/request.go`:
- Around line 304-315: In applyAntigravityRequestTuning, the path-truncation
assumes every path from Walk ends with "parametersJsonSchema" which can panic;
before slicing p, verify the suffix using a safe check (e.g.,
strings.HasSuffix(p, "parametersJsonSchema")) and only build the replacement
prefix when true, otherwise skip that path (or log and continue); update the
loop over paths that calls RenameKey to compute newPath only after the suffix
check and ensure you never slice with a negative index so RenameKey is only
invoked with valid paths.
In `@internal/adapter/provider/antigravity/response.go`:
- Around line 190-194: convertStreamToNonStream is rejecting SSE lines because
they still include the "data: " prefix from unwrapV1InternalSSEChunk (which
returns bytes like "data: {...}\n\n"); update convertStreamToNonStream to strip
the SSE prefix before JSON validation: for each trimmed line from unwrappedSSE,
remove a leading "data: " (and ignore empty "data: " lines) then run
gjson.ValidBytes on the stripped bytes so valid JSON payloads are accepted and
responseTemplate is populated instead of falling back to the default; reference
unwrapV1InternalSSEChunk, unwrappedSSE, and convertStreamToNonStream when making
the change.
In `@internal/converter/codex_to_openai.go`:
- Around line 240-257: The done branch in TransformChunk can emit a duplicate
finish chunk because response.completed already sends one; modify the OpenAI
stream state (use getOpenAIStreamState / the returned st) to add a boolean flag
(e.g., FinishEmitted) and set it when you emit a finish chunk in the
response.completed handling path, then in the "done" branch check
st.FinishEmitted before calling buildOpenAIStreamDone/FormatDone to avoid
emitting the finish chunk twice; ensure you set the flag in both places that
produce a finish chunk so subsequent events are suppressed.
- Around line 376-403: The function extractCodexOutputText is dead code and
should be removed; delete the entire extractCodexOutputText function declaration
(including its switch handling for string, []interface{}, and
map[string]interface{}) from internal/converter/codex_to_openai.go and run
tests/grep to confirm no remaining references; if any tests or helper logic
relied on it, replace those call sites with the appropriate existing utility or
inline equivalent before removal.
In `@internal/converter/openai_to_codex.go`:
- Around line 670-682: The duplicated O(n²) sorting block that builds idxs from
st.MsgItemAdded and bubble-sorts it (seen around the choice.Get("finish_reason")
handling and repeated at the four locations) should be replaced by a single
helper and the standard library sorter: extract a helper like sortIndices([]int)
or buildIndicesAndSort(msgs []<type>) that constructs the index slice and calls
sort.Ints (or slices.Sort) to sort it, then replace the repeated loops with
calls to that helper; reference the identifiers st.MsgItemAdded, idxs, and the
finish_reason handling in openai_to_codex.go to find and update each occurrence.
- Around line 514-519: The sjson.Set call is targeting "item.summary.text" but
summary is an array in the outputItemDone template; update the sjson.Set
invocations that write summary to use the array index path "item.summary.0.text"
(and similarly "item.summary.0.type" if applicable) so the first element of the
summary array is updated correctly — locate the output construction around the
outputItemDone variable in openai_to_codex.go (the block that calls sjson.Set
for "item.id", "output_index", "sequence_number" and "item.summary.text") and
change the summary path to "item.summary.0.text".
- Around line 396-414: The type assertion at the start of
convertOpenAIChatCompletionsChunkToResponses can panic because state.Custom may
hold a different type; update this function to attempt a safe type assertion for
state.Custom to *openaiToResponsesState (using the comma-ok form), and if the
assertion fails or state.Custom is nil, reinitialize state.Custom to a new
*openaiToResponsesState (same fields as currently created) before
proceeding—mirror the defensive pattern used by getOpenAIStreamState in
codex_to_openai.go to avoid panics when other converters set state.Custom to
different types.
In `@internal/converter/tool_name.go`:
- Around line 14-22: The branch handling names with the "mcp__" prefix
incorrectly treats the prefix's own "__" as the "last" separator; change the
check so the found index refers to a separator after the prefix (e.g., require
idx > len("mcp__") instead of idx > 0), then build candidate =
"mcp__"+name[idx+2:] and apply the existing maxToolNameLen truncation logic
(references: the strings.HasPrefix(name, "mcp__") branch,
strings.LastIndex(name, "__"), candidate, and maxToolNameLen).
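The corrected branch described in the instruction above can be sketched as follows (`shortenMCPName` and the 64-character limit are illustrative stand-ins; the real function and `maxToolNameLen` live in `tool_name.go`):

```go
package main

import (
	"fmt"
	"strings"
)

const maxToolNameLen = 64 // assumed value for the sketch

// shortenMCPName keeps the "mcp__" prefix and the final segment, but
// only treats "__" as a separator when it appears *after* the prefix,
// so a name like "mcp__plain" is left untouched.
func shortenMCPName(name string) string {
	if !strings.HasPrefix(name, "mcp__") {
		return name
	}
	idx := strings.LastIndex(name, "__")
	if idx <= len("mcp__") { // only the prefix's own "__" was found
		return name
	}
	candidate := "mcp__" + name[idx+2:]
	if len(candidate) > maxToolNameLen {
		candidate = candidate[:maxToolNameLen]
	}
	return candidate
}

func main() {
	fmt.Println(shortenMCPName("mcp__server__very_long_tool")) // mcp__very_long_tool
	fmt.Println(shortenMCPName("mcp__plain"))                  // mcp__plain
	fmt.Println(shortenMCPName("regular_tool"))                // regular_tool
}
```

With the stricter index check, the prefix's own `__` can no longer be mistaken for the last separator.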
In `@internal/executor/middleware_route_match.go`:
- Line 36: The current assignment replaces the original error returned by
router.Match with domain.NewProxyErrorWithMessage, losing the root cause;
instead preserve or wrap the original err from router.Match when creating the
ProxyError. Locate the router.Match call and the subsequent assignment to err
and change the creation of the ProxyError (currently using
domain.NewProxyErrorWithMessage) to include the original err as the cause or
append its message (e.g., use the library's wrapping constructor if available or
include fmt.Sprintf with err.Error()) so the original router.Match error is
retained in the resulting error.
- Line 32: The call to e.proxyRequestRepo.Update(proxyReq) is silently
discarding errors (seen where e.proxyRequestRepo.Update(proxyReq) is invoked),
which can leave in-memory proxyReq out of sync with persistent state; update
those calls to check the returned error and handle it—at minimum log the error
with context (include proxyReq.ID or other identifying fields) via the
executor's logger (or return the error up the stack) so persistence failures are
visible and can be acted on.
In `@internal/handler/proxy.go`:
- Around line 70-73: ServeHTTP is appending h.dispatch onto the shared slice
h.extra which can cause a data race under concurrent requests; fix by creating a
new slice copy before appending (e.g., allocate a new slice with len(h.extra)
and copy(h.extra, ...), or use make to create capacity len+1 and then append)
and then call h.engine.HandleWith(ctx, newSlice) so h.extra and h.dispatch are
not modified concurrently; update the ServeHTTP function (refer to ServeHTTP,
h.extra, and h.dispatch) to use this explicit copy approach.
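The copy-before-append fix can be sketched with plain strings standing in for the handler type (the names mirror the review's `h.extra`/`h.dispatch`; the actual types are assumptions):

```go
package main

import "fmt"

// buildChain copies the shared handler list before appending dispatch,
// so concurrent callers never append into the same backing array.
func buildChain(extra []string, dispatch string) []string {
	chain := make([]string, len(extra), len(extra)+1)
	copy(chain, extra)
	return append(chain, dispatch)
}

func main() {
	extra := []string{"ingress", "routeMatch"} // stands in for h.extra
	chain := buildChain(extra, "dispatch")
	fmt.Println(chain)      // [ingress routeMatch dispatch]
	fmt.Println(len(extra)) // 2 — the shared slice is untouched
}
```

Because `make` is sized with capacity `len(extra)+1`, the `append` never reallocates mid-request, and the shared slice's backing array is never written through two goroutines at once.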
In `@web/src/hooks/queries/use-providers.ts`:
- Around line 54-62: The mutation uses a shallow merge when building payload ({
...existing, ...data }) which can clobber nested objects; update mutationFn to
perform a deep/recursive merge of existing and data (e.g., replace the shallow
spread with a deepMerge(existing, data) call or inline equivalent) before
calling getTransport().updateProvider(id, payload), keeping the existing
fallback behavior when existing is undefined; ensure the deep merge handles
Partial<Provider> types and nested config objects so
providerKeys.detail/providerKeys.list lookups remain valid.
🧹 Nitpick comments (30)
internal/converter/coverage_openai_stream_test.go (1)
516-527: After appending `FormatDone()` at the end of the stream, also assert that the `[DONE]` marker is forwarded.

The test now appends `FormatDone()` to the stream tail at line 518, but the subsequent assertion (line 524) only verifies that the output contains `"chat.completion.chunk"` and never checks that `codexToOpenAIResponse` correctly passes `[DONE]` through to the output stream. If the Codex-to-OpenAI converter is supposed to emit `[DONE]` at the end, add something like:

```go
if !strings.Contains(string(out), "[DONE]") {
	t.Fatalf("expected [DONE] marker in output")
}
```

If the converter by design does not forward `[DONE]`, ignore this suggestion.

internal/executor/middleware_route_match.go (1)
27-57: The two error branches share a lot of duplicated code; extract a helper.

The `err != nil` and `len(routes) == 0` branches are nearly identical (set the proxyReq status, compute Duration, update the repository, broadcast, set the error, Abort). Extract a `failRouteMatch` helper to remove the duplication.

♻️ Suggested refactor

```diff
+func (e *Executor) failRouteMatch(c *flow.Ctx, state *execState, errMsg string) {
+	proxyReq := state.proxyReq
+	proxyReq.Status = "FAILED"
+	proxyReq.Error = errMsg
+	proxyReq.EndTime = time.Now()
+	proxyReq.Duration = proxyReq.EndTime.Sub(proxyReq.StartTime)
+	_ = e.proxyRequestRepo.Update(proxyReq)
+	if e.broadcaster != nil {
+		e.broadcaster.BroadcastProxyRequest(proxyReq)
+	}
+	err := domain.NewProxyErrorWithMessage(domain.ErrNoRoutes, false, errMsg)
+	state.lastErr = err
+	c.Err = err
+	c.Abort()
+}
```

internal/adapter/provider/codex/adapter_test.go (1)
11-37: The test covers the core behavior, but add assertions for `model` preservation and `user_id` extraction.

The test verifies cache-ID generation, forcing `stream` to `true`, and removal of sensitive fields — the core logic. Missing:

- whether the `model` field is preserved as-is (`"gpt-5"`)
- whether the `user_id` extracted from `KeyOriginalRequestBody` is used correctly (if `applyCodexRequestTuning` involves that logic)

Also, `flow.NewCtx(nil, nil)` passes nil for both `Writer` and `Request`; if the function under test ever accesses those fields in the future, it will panic. Use `httptest.NewRecorder()` and `httptest.NewRequest()` for robustness.

internal/adapter/provider/antigravity/response.go (1)
275-295: The errors from `sjson.Set`/`sjson.SetRaw` are all discarded.

Lines 276-294 call `sjson.Set` and `sjson.SetRaw` repeatedly, ignoring every returned error with `_`. sjson rarely fails on well-formed templates, but if `responseTemplate` is malformed due to a parsing anomaly (for example, the upstream returns truncated JSON), these silent failures produce incomplete output data that is hard to diagnose. Check and log the errors at least for the critical fields (`candidates.0.content.parts` at minimum).

internal/domain/model.go (1)
856-893: The old helper functions are now dead code.

`containsWildcard`, `splitByWildcard`, `hasPrefix`, `hasSuffix`, and `indexOf` are no longer called after the `MatchWildcard` rewrite. If nothing else references them inside or outside the package, remove them to reduce maintenance burden.

```bash
#!/bin/bash
# Check whether these helpers are referenced anywhere else
for fn in containsWildcard splitByWildcard hasPrefix hasSuffix indexOf; do
	echo "=== $fn ==="
	rg -n --type=go "\b$fn\b" -g '!internal/domain/model.go'
done
```

internal/converter/registry.go (2)
52-54: `ResponseTransformerWithState` is missing a doc comment.

The other interfaces (`RequestTransformer`, `ResponseTransformer`) all carry Go doc comments; keep this one consistent.

♻️ Suggested comment

```diff
+// ResponseTransformerWithState extends ResponseTransformer with stateful non-streaming response conversion.
 type ResponseTransformerWithState interface {
 	TransformWithState(body []byte, state *TransformState) ([]byte, error)
 }
```
140-158: Passing a nil `state` into `TransformWithState` is currently safe, but `TransformResponseWithState` has no defensive guard.

The present call path (`Finalize` in `converting_writer.go`) always supplies a non-nil `state`, but this method is a public API. If a future caller passes `nil`, downstream `TransformWithState` implementations (such as `codexToOpenAIResponse` and `openaiToCodexResponse`) will panic when they access `state.OriginalRequestBody`. Either add a defensive check at the entry point or document clearly that `state` must not be nil.

internal/converter/gemini_to_openai.go (2)
271-301: Every SSE event is parsed twice — `json.Unmarshal` (line 272) plus `gjson.ParseBytes` (line 275).

Inside the loop, `event.Data` is first deserialized into `GeminiStreamChunk` and then parsed again with `gjson.ParseBytes` to extract metadata fields. On a high-frequency streaming path this adds overhead. Consider extracting all fields (including candidates) with gjson alone, or merging the structured deserialization with the gjson queries so the same JSON is not parsed twice. Correctness is unaffected; this is an optimization suggestion.
326-339: The `OpenAIStreamChunk` construction is heavily duplicated across branches.

The thinking, text, inline_data, function_call, and finish branches build the chunk almost identically (setting ID, Object, Created, Model, and the `createdAt` override), differing only in the Delta content. Extract a helper to cut the duplication, for example:

```go
func newGeminiOpenAIChunk(state *TransformState, meta *geminiOpenAIStreamMeta, createdAt int64) OpenAIStreamChunk {
	c := OpenAIStreamChunk{
		ID:      state.MessageID,
		Object:  "chat.completion.chunk",
		Created: time.Now().Unix(),
		Model:   meta.Model,
	}
	if createdAt > 0 {
		c.Created = createdAt
	}
	return c
}
```

This mirrors the `newOpenAIStreamTemplate` approach in `codex_to_openai.go`. Non-blocking suggestion.

Also applies to: 342-356, 358-378, 380-408, 416-432
internal/converter/openai_to_codex.go (2)
23-28: `json.Unmarshal` into `interface{}` only for validation — this can be lighter.

Lines 24-27 deserialize the body into `var tmp interface{}` without ever using `tmp`; the only purpose is JSON validation. Line 28 then does `bytes.Clone(body)` to keep the raw bytes. If only validity needs checking, `json.Valid(body)` avoids the allocations.

♻️ Suggestion

```diff
 func (c *openaiToCodexRequest) Transform(body []byte, model string, stream bool) ([]byte, error) {
-	var tmp interface{}
-	if err := json.Unmarshal(body, &tmp); err != nil {
-		return nil, err
+	if !json.Valid(body) {
+		return nil, fmt.Errorf("invalid JSON input")
 	}
 	rawJSON := bytes.Clone(body)
```

Or trust that the input was already validated upstream and skip the step entirely.
78-81: The variable `c` shadows the method receiver.

Line 80's `c := m.Get("content")` shadows the receiver `c *openaiToCodexRequest`. It does not affect functionality today (the receiver is not used later in this branch), but it is an easy source of confusion during maintenance. Use a different name such as `contentVal`.

♻️ Suggestion

```diff
-	c := m.Get("content")
-	if c.Exists() && c.Type == gjson.String && c.String() != "" {
+	contentVal := m.Get("content")
+	if contentVal.Exists() && contentVal.Type == gjson.String && contentVal.String() != "" {
```

Subsequent references to `c` need the same rename.

internal/flow/engine.go (1)
57-69: Confirm the recursion behavior of `Next()` when a handler itself calls `c.Next()`.

When a handler calls `c.Next()`, `c.index` is advanced inside the recursive `Next()`. After the recursion returns, the outer loop's `c.index++` runs; since the index already exceeds `len(handlers)`, the loop exits immediately. This matches the Gin framework's behavior but is not documented explicitly here. If anyone on the team is unfamiliar with the pattern, add a short comment on `Next()` describing the recursive-call semantics.
10-152: Consider a generic helper to remove the duplicated accessors.

All 16 helpers share the same structure (fetch value → type assertion → default on miss); Go 1.18+ generics can handle them uniformly:

♻️ Generic refactor example

```go
func getTyped[T any](c *Ctx, key string, defaultVal T) T {
	if v, ok := c.Get(key); ok {
		if t, ok := v.(T); ok {
			return t
		}
	}
	return defaultVal
}

// Usage:
func GetClientType(c *Ctx) domain.ClientType { return getTyped[domain.ClientType](c, KeyClientType, "") }
func GetProjectID(c *Ctx) uint64             { return getTyped[uint64](c, KeyProjectID, 0) }
```

This collapses the 16 function bodies to one-line calls, removes the duplicated logic, and stays type-safe.

internal/adapter/provider/antigravity/openai_request.go (3)
internal/adapter/provider/antigravity/openai_request.go (3)
170-182: Data URL 解析逻辑脆弱,依赖硬编码偏移量。
imageURL[5:]假设前缀为"data:",pieces[1][7:]假设前缀为"base64,",但没有显式验证这些前缀。如果 URL 格式略有不同(如缺少data:前缀或使用其他编码方式),可能导致索引越界或数据截断。此外,Lines 220-232 中对 assistant 消息有完全相同的解析逻辑,建议抽取为公共函数。
♻️ 建议抽取 data URL 解析
// parseDataURL extracts MIME type and base64 data from a data URL. // Returns ("", "", false) if the URL is not a valid data URL. func parseDataURL(dataURL string) (mimeType, data string, ok bool) { if !strings.HasPrefix(dataURL, "data:") { return "", "", false } rest := dataURL[5:] // skip "data:" pieces := strings.SplitN(rest, ";", 2) if len(pieces) != 2 || !strings.HasPrefix(pieces[1], "base64,") { return "", "", false } return pieces[0], pieces[1][7:], true }
404-404: `itoa` via `fmt.Sprintf` is inefficient.

On a hot call path, `strconv.Itoa` is both faster and clearer in intent than `fmt.Sprintf`.

♻️ Suggestion

```diff
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
+func itoa(i int) string { return strconv.Itoa(i) }
```

Also add `"strconv"` to the imports.
19-25: Many `sjson.SetBytes` calls discard the error return value.

Nearly every `sjson.SetBytes` call in the function uses `out, _ =` and drops the error. sjson rarely returns errors in normal conditions, but when building complex nested structures, silently ignoring them can cause hard-to-debug data loss. Log errors at least on the critical paths, matching the `errSet` handling elsewhere in this file (e.g., lines 310-316).

internal/adapter/provider/antigravity/request.go (1)
67-75: `Execute` falls back to `context.Background()`, losing request-scoped context propagation.

Falling back to `context.Background()` when `c.Request` is `nil` is safe, defensive coding, but if flow.Ctx itself is supposed to carry a context, consider exposing a unified context accessor on flow.Ctx so this pattern is not repeated across adapters (it also appears in the kiro adapter).

internal/adapter/provider/antigravity/request_test.go (1)
9-43: Test coverage is narrow; add the Gemini path and edge cases.

The current test only exercises the Claude-model tuning path. `applyAntigravityRequestTuning` also has a Gemini-model branch (using `CleanJSONSchemaForGemini`, deleting `maxOutputTokens`, and so on) that is never reached. Additionally add:

- a tuning test for the Gemini model (the non-Claude branch)
- a check that the `"original"` text in the original `systemInstruction.parts` is preserved in `parts[2]`
- edge cases with an empty payload or invalid JSON input
internal/converter/codex_to_claude.go (1)
342-421: The `output.ForEach` closure mutates the outer variables `out` and `hasToolCall`; functionally correct, but worth noting.

The `gjson.Result.ForEach` callback captures and mutates `out` (string) and `hasToolCall` (bool). Since `ForEach` runs synchronously, this is correct. But `out` is repeatedly reassigned via `sjson.Set` inside a fairly long closure (~80 lines); consider extracting each case branch into its own function for readability.

internal/adapter/provider/kiro/adapter.go (1)
114-123: Upstream request headers are fully duplicated between the initial request and the retry path. The header setup at Lines 114-123 and Lines 158-166 is identical (including the hard-coded user-agent strings). Extract a helper so both paths stay in sync.

♻️ Refactor suggestion

```go
func setKiroUpstreamHeaders(req *http.Request, accessToken string, stream bool) {
	req.Header.Set("Authorization", "Bearer "+accessToken)
	req.Header.Set("Content-Type", "application/json")
	if stream {
		req.Header.Set("Accept", "text/event-stream")
	}
	req.Header.Set("x-amzn-kiro-agent-mode", "spec")
	req.Header.Set("x-amz-user-agent", "aws-sdk-js/1.0.18 KiroIDE-0.2.13-...")
	req.Header.Set("user-agent", "aws-sdk-js/1.0.18 ua/2.1 ...")
}
```

Also applies to: 158-166
internal/adapter/provider/codex/adapter.go (2)
118-129: The `stream` field is set to `true` twice.

`applyCodexRequestTuning` (Line 119) already sets `stream` to `true` in the body at Lines 573-575, and Lines 125-129 then set `stream` to `upstreamStream` (also `true`) again via `sjson.SetBytes`. The two operations are redundant. Either remove the `stream` handling from `applyCodexRequestTuning` or drop the duplicate in Execute, keeping a single responsibility.

679-700: `extractCodexCompletedResponse` uses a 50MB scanner buffer; add a comment explaining why. Line 681 sets `scanner.Buffer(nil, 52_428_800)` (~50MB). That is large for a normal SSE response. If it is genuinely needed (e.g. for very large response.completed events), document it in a comment; otherwise lower the cap to avoid surprising memory allocation.

internal/converter/tool_name.go (1)
48-68: The infinite loop in `makeUnique` has no safety cap.

If every candidate name is taken (unlikely in practice, but worth defending against), `for i := 1; ; i++` loops forever. Add a reasonable upper bound and return an error or fall back when it is exceeded, to avoid a hung goroutine in pathological cases.

♻️ Suggested cap

```diff
 makeUnique := func(cand string) string {
 	if _, ok := used[cand]; !ok {
 		return cand
 	}
 	base := cand
-	for i := 1; ; i++ {
+	for i := 1; i <= 10000; i++ {
 		suffix := "_" + strconv.Itoa(i)
 		allowed := maxToolNameLen - len(suffix)
 		if allowed < 0 {
 			allowed = 0
 		}
 		tmp := base
 		if len(tmp) > allowed {
 			tmp = tmp[:allowed]
 		}
 		tmp = tmp + suffix
 		if _, ok := used[tmp]; !ok {
 			return tmp
 		}
 	}
+	return cand // fallback; should not be reached in practice
 }
```

internal/executor/middleware_ingress.go (3)
22-75: Too much boilerplate when extracting state from the flow context; extract a helper.

Lines 22-75 repeat the same `c.Get(key)` plus double type-assertion pattern ten times in a row, which is heavy to read. A small generic helper would cut the repetition:

```go
func getTyped[T any](c *flow.Ctx, key string) (T, bool) {
	v, ok := c.Get(key)
	if !ok {
		var zero T
		return zero, false
	}
	t, ok := v.(T)
	return t, ok
}
```

Each extraction then becomes a one-liner:

```go
if ct, ok := getTyped[domain.ClientType](c, flow.KeyClientType); ok {
	state.clientType = ct
}
```
120-121: The `GetBySessionID` error is silently ignored.

Line 121, `session, _ := e.sessionRepo.GetBySessionID(state.sessionID)`, drops the error. On a database failure (as opposed to "not found"), `session` is treated as nil and the code proceeds to create a temporary session and wait for project binding. At minimum, log the error to aid troubleshooting.
160-162: The assignment `state.ctx = ctx` at Line 162 is redundant.

`ctx` is read from `state.ctx` at Line 22 and never modified afterwards, so assigning it back at Line 162 has no effect. Remove it.

♻️ Cleanup

```diff
 	state.projectID = session.ProjectID
 	proxyReq.ProjectID = state.projectID
-	state.ctx = ctx
 }
```

internal/handler/proxy.go (4)
58-60: `Use` lacks a concurrency-safety note or protection.

`Use` appends to `h.extra` without locking. If `Use` can still be called after the server starts (concurrently with `ServeHTTP`), that is a data race. Even if it is currently only called during initialization, either document that it "must be called before ServeHTTP" or add a lock to prevent future misuse.
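A minimal sketch of the lock-protected variant; `HandlerFunc` here is a hypothetical stand-in for `flow.HandlerFunc`, and the field names are assumptions rather than the repository's actual layout:

```go
package main

import (
	"fmt"
	"sync"
)

// HandlerFunc is a hypothetical stand-in for flow.HandlerFunc.
type HandlerFunc func()

type ProxyHandler struct {
	mu    sync.Mutex
	extra []HandlerFunc
}

// Use registers an extra middleware. Safe for concurrent use.
func (h *ProxyHandler) Use(fn HandlerFunc) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.extra = append(h.extra, fn)
}

// snapshot returns a copy of the registered middlewares so that a
// concurrently running ServeHTTP never observes a torn slice.
func (h *ProxyHandler) snapshot() []HandlerFunc {
	h.mu.Lock()
	defer h.mu.Unlock()
	return append([]HandlerFunc(nil), h.extra...)
}

func main() {
	h := &ProxyHandler{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			h.Use(func() {})
		}()
	}
	wg.Wait()
	fmt.Println(len(h.snapshot())) // 10
}
```

Running this with `go test -race` (or `go run -race`) would flag the unlocked version; the locked one passes cleanly.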
75-213: `ingress` has too many responsibilities; split it up.

`ingress` handles request validation, body reading/normalization, client detection, token authentication, session management, project ID resolution, and flow context population (~140 lines), which makes it hard to test and maintain. Consider splitting it into smaller middlewares or helpers, for example:

- request validation + body reading
- client detection + token authentication
- session / project resolution
- flow context population

That also fits the "middleware architecture" intent in the PR title.
101-103: Mutating `r.URL.Path` modifies the original request object.

Line 102, `r.URL.Path = strings.TrimPrefix(r.URL.Path, "/v1")`, mutates the incoming `*http.Request`'s URL, which can affect logging, middleware, or anything else holding the same request. Defer the rewrite until the request is actually forwarded, or use `r.Clone()` / a local variable to avoid the side effect.
215-241: The `dispatch` error handling is sound.

It distinguishes `ProxyError` (502 / SSE error event) from generic errors (500), with separate streaming and non-streaming paths. The logic is correct.

One caveat: Line 238, `writeError(c.Writer, http.StatusInternalServerError, err.Error())`, exposes the internal error message directly to the client, which may leak implementation details in production. Consider returning a generic message instead.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (50)
- internal/adapter/provider/adapter.go
- internal/adapter/provider/antigravity/adapter.go
- internal/adapter/provider/antigravity/openai_request.go
- internal/adapter/provider/antigravity/request.go
- internal/adapter/provider/antigravity/request_test.go
- internal/adapter/provider/antigravity/response.go
- internal/adapter/provider/antigravity/schema_cleaner.go
- internal/adapter/provider/antigravity/service.go
- internal/adapter/provider/antigravity/transform_request.go
- internal/adapter/provider/antigravity/translator_helpers.go
- internal/adapter/provider/codex/adapter.go
- internal/adapter/provider/codex/adapter_test.go
- internal/adapter/provider/codex/oauth.go
- internal/adapter/provider/custom/adapter.go
- internal/adapter/provider/kiro/adapter.go
- internal/context/context.go
- internal/context/keys.go
- internal/converter/claude_to_codex.go
- internal/converter/claude_to_openai_stream.go
- internal/converter/codex_openai_stream_test.go
- internal/converter/codex_to_claude.go
- internal/converter/codex_to_openai.go
- internal/converter/coverage_claude_response_test.go
- internal/converter/coverage_codex_instructions_test.go
- internal/converter/coverage_misc_helpers_test.go
- internal/converter/coverage_openai_response_test.go
- internal/converter/coverage_openai_stream_test.go
- internal/converter/gemini_to_openai.go
- internal/converter/more_converter_extra_test.go
- internal/converter/openai_to_codex.go
- internal/converter/registry.go
- internal/converter/test_helpers_test.go
- internal/converter/tool_name.go
- internal/domain/model.go
- internal/executor/converting_writer.go
- internal/executor/converting_writer_test.go
- internal/executor/executor.go
- internal/executor/flow_state.go
- internal/executor/middleware_dispatch.go
- internal/executor/middleware_egress.go
- internal/executor/middleware_ingress.go
- internal/executor/middleware_route_match.go
- internal/flow/engine.go
- internal/flow/helpers.go
- internal/flow/keys.go
- internal/handler/proxy.go
- internal/service/admin.go
- web/package.json
- web/src/hooks/queries/use-providers.ts
- web/src/pages/providers/components/clients-config-section.tsx
💤 Files with no reviewable changes (1)
- internal/executor/converting_writer_test.go
🚧 Files skipped from review as they are similar to previous changes (15)
- web/package.json
- internal/executor/flow_state.go
- internal/adapter/provider/antigravity/transform_request.go
- internal/adapter/provider/antigravity/translator_helpers.go
- internal/converter/coverage_codex_instructions_test.go
- internal/executor/middleware_dispatch.go
- internal/adapter/provider/antigravity/schema_cleaner.go
- internal/converter/coverage_openai_response_test.go
- internal/converter/coverage_claude_response_test.go
- internal/adapter/provider/codex/oauth.go
- web/src/pages/providers/components/clients-config-section.tsx
- internal/context/keys.go
- internal/adapter/provider/adapter.go
- internal/converter/more_converter_extra_test.go
- internal/executor/middleware_egress.go
🧰 Additional context used
🧬 Code graph analysis (16)
internal/converter/coverage_openai_stream_test.go (1)
internal/converter/sse.go (2)
FormatSSE(83-106)FormatDone(109-111)
internal/executor/middleware_ingress.go (5)
internal/executor/executor.go (1)
Executor(21-36)internal/flow/engine.go (1)
Ctx(34-47)internal/domain/errors.go (2)
NewProxyErrorWithMessage(61-63)ErrInvalidInput(13-13)internal/flow/keys.go (10)
KeyClientType(9-9)KeyProjectID(12-12)KeySessionID(11-11)KeyRequestModel(13-13)KeyIsStream(19-19)KeyAPITokenID(20-20)KeyRequestBody(15-15)KeyOriginalRequestBody(16-16)KeyRequestHeaders(17-17)KeyRequestURI(18-18)internal/domain/model.go (4)
ClientType(9-9)ProxyRequest(285-354)RequestInfo(272-277)Session(221-237)
internal/adapter/provider/codex/adapter_test.go (3)
internal/flow/engine.go (1)
NewCtx(49-55)internal/flow/keys.go (2)
KeyOriginalClientType(10-10)KeyOriginalRequestBody(16-16)internal/domain/model.go (1)
ClientTypeClaude(12-12)
internal/converter/openai_to_codex.go (2)
internal/converter/registry.go (1)
TransformState(11-21)internal/converter/sse.go (1)
FormatSSE(83-106)
internal/converter/codex_to_openai.go (3)
internal/converter/registry.go (1)
TransformState(11-21)internal/converter/sse.go (3)
ParseSSE(15-61)FormatDone(109-111)FormatSSE(83-106)internal/converter/types_openai.go (3)
OpenAIStreamChunk(104-112)OpenAIChoice(89-95)OpenAIMessage(27-34)
internal/converter/registry.go (3)
internal/domain/model.go (1)
ClientType(9-9)web/src/lib/transport/types.ts (1)
ClientType(8-8)web/src/lib/transport/index.ts (1)
ClientType(8-8)
internal/converter/codex_openai_stream_test.go (4)
internal/converter/registry.go (1)
NewTransformState(178-183)internal/converter/sse.go (2)
FormatSSE(83-106)ParseSSE(15-61)internal/converter/types_openai.go (1)
OpenAIStreamChunk(104-112)internal/converter/types_codex.go (2)
CodexRequest(5-21)CodexTool(34-39)
internal/adapter/provider/antigravity/openai_request.go (1)
internal/adapter/provider/antigravity/translator_helpers.go (1)
RenameKey(70-88)
web/src/hooks/queries/use-providers.ts (5)
internal/domain/model.go (1)
Provider(176-204)web/src/lib/transport/types.ts (1)
Provider(67-77)internal/repository/sqlite/models.go (2)
Provider(63-70)Provider(72-72)web/src/lib/transport/index.ts (2)
Provider(9-9)getTransport(105-105)web/src/lib/transport/factory.ts (1)
getTransport(79-102)
internal/executor/converting_writer.go (2)
internal/converter/registry.go (1)
NewTransformState(178-183)internal/domain/model.go (3)
ClientType(9-9)ClientTypeCodex(13-13)ClientTypeGemini(14-14)
internal/converter/claude_to_codex.go (1)
internal/converter/types_codex.go (1)
CodexTool(34-39)
internal/handler/proxy.go (4)
internal/flow/engine.go (5)
Engine(10-12)HandlerFunc(8-8)NewEngine(14-16)NewCtx(49-55)Ctx(34-47)internal/domain/model.go (3)
APIToken(702-734)ClientTypeCodex(13-13)ClientTypeOpenAI(15-15)internal/flow/keys.go (13)
KeyClientType(9-9)KeySessionID(11-11)KeyRequestModel(13-13)KeyRequestBody(15-15)KeyOriginalRequestBody(16-16)KeyRequestHeaders(17-17)KeyRequestURI(18-18)KeyIsStream(19-19)KeyAPITokenID(20-20)KeyProjectID(12-12)KeyProxyContext(4-4)KeyProxyStream(5-5)KeyProxyRequestModel(6-6)internal/converter/register.go (1)
GetGlobalRegistry(48-50)
internal/executor/middleware_route_match.go (3)
internal/domain/errors.go (3)
NewProxyErrorWithMessage(61-63)ErrInvalidInput(13-13)ErrNoRoutes(14-14)internal/router/router.go (1)
MatchContext(23-28)internal/domain/model.go (1)
ClientType(9-9)
internal/adapter/provider/antigravity/adapter.go (9)
web/src/lib/transport/types.ts (4)
ClientType(8-8)Provider(67-77)RequestInfo(174-179)ResponseInfo(181-185)internal/flow/engine.go (1)
Ctx(34-47)internal/flow/helpers.go (4)
GetClientType(10-17)GetRequestModel(46-53)GetMappedModel(55-62)GetRequestBody(64-71)internal/flow/keys.go (2)
KeyRequestModel(13-13)KeyRequestBody(15-15)internal/adapter/provider/antigravity/openai_request.go (1)
ConvertOpenAIRequestToAntigravity(19-402)internal/adapter/provider/antigravity/service.go (1)
FetchProjectInfo(177-179)internal/usage/extractor.go (1)
ExtractFromResponse(31-49)internal/domain/adapter_event.go (1)
AdapterMetrics(20-27)internal/converter/registry.go (2)
TransformState(11-21)NewTransformState(178-183)
internal/converter/codex_to_claude.go (2)
internal/converter/registry.go (1)
TransformState(11-21)internal/converter/sse.go (2)
ParseSSE(15-61)FormatSSE(83-106)
internal/adapter/provider/kiro/adapter.go (4)
internal/flow/engine.go (1)
Ctx(34-47)internal/context/context.go (5)
GetRequestModel(91-96)GetRequestBody(119-124)GetIsStream(162-167)GetUpstreamAttempt(126-131)GetEventChan(184-189)internal/flow/helpers.go (5)
GetRequestModel(46-53)GetRequestBody(64-71)GetIsStream(100-107)GetUpstreamAttempt(127-134)GetEventChan(136-143)internal/domain/adapter_event.go (1)
AdapterEventChan(40-40)
🔇 Additional comments (25)
internal/converter/test_helpers_test.go (1)
13-24: The type-switch implementation is clean and idiomatic. Using a type switch to support both `string` and `[]interface{}` content formats is correct; it preserves the original behavior while extending matching to the array case.

One optional nit: in the `[]interface{}` branch (Line 21), only `pm["text"] == text` is checked, without validating the part's `"type"` field (e.g. `"input_text"`). That is usually fine for a test helper, but if future tests need to distinguish content part types precisely, false matches are possible; add a `"type"` check if needed.
internal/converter/coverage_openai_stream_test.go (1)
177-182: LGTM! The assertion strings are correctly updated to the new event type names (`response.output_text.delta` and `response.completed`), consistent with this protocol refactor.

internal/adapter/provider/antigravity/service.go (1)
29-31: Please verify whether the `Darwin` vs `darwin` casing difference in the User-Agent strings matches upstream expectations. `UserAgentFetchModels` at Line 29 uses `Darwin/arm64` (capital D), while the new `AntigravityUserAgent` at Line 31 uses `darwin/arm64` (lowercase d). The two constants serve different API endpoints (fetchAvailableModels vs CLIProxyAPI), but the casing difference could cause the upstream API to reject or mishandle requests, particularly if the Antigravity API is case-sensitive about User-Agent. Please confirm:

- whether the real Antigravity CLI sends uppercase or lowercase `darwin` when proxying requests
- whether the casing should be unified across the two constants

(Note: the version difference, 1.11.3 vs 1.104.0, is expected since the two constants target different API endpoints.)
internal/service/admin.go (1)
556-561: LGTM! The Gemini-first ordering matches the overall direction of the Antigravity adapter refactor in this PR (Gemini as the preferred target format), and the comment explains the conversion preference clearly.
internal/context/context.go (1)
11-11: LGTM! Extracting the `contextKey` type and constants into `keys.go` is a sensible code-organization change, in line with this PR's direction of centralizing key management; no functional change.

internal/domain/model.go (1)
816-852: LGTM! The algorithm is implemented correctly. The new iterative glob matcher is the classic two-pointer + backtracking algorithm, simpler and more clearly correct than the previous multi-branch implementation built on helper functions. The new empty-pattern check (Lines 819-821) and the `strings.TrimSpace` preprocessing (Lines 817-818) also improve robustness.

internal/converter/registry.go (1)
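For reference, the two-pointer + backtracking glob matcher praised in the internal/domain/model.go comment can be sketched in isolation; this is a reconstruction from the review's description (supporting `*` and `?`), not the repository code:

```go
package main

import (
	"fmt"
	"strings"
)

// globMatch reports whether s matches pattern, where '*' matches any run of
// characters (including none) and '?' matches exactly one character.
// Classic two-pointer algorithm: on mismatch after a '*', backtrack to the
// star and let it absorb one more character of s.
func globMatch(pattern, s string) bool {
	pattern = strings.TrimSpace(pattern)
	if pattern == "" {
		return s == ""
	}
	pi, si := 0, 0
	starIdx, starMatch := -1, 0
	for si < len(s) {
		switch {
		case pi < len(pattern) && (pattern[pi] == '?' || pattern[pi] == s[si]):
			pi++
			si++
		case pi < len(pattern) && pattern[pi] == '*':
			starIdx, starMatch = pi, si // remember star position for backtracking
			pi++
		case starIdx != -1:
			pi = starIdx + 1 // backtrack: the star absorbs one more character
			starMatch++
			si = starMatch
		default:
			return false
		}
	}
	for pi < len(pattern) && pattern[pi] == '*' {
		pi++ // trailing stars match the empty suffix
	}
	return pi == len(pattern)
}

func main() {
	fmt.Println(globMatch("claude-*", "claude-3-opus")) // true
	fmt.Println(globMatch("gpt-?o", "gpt-4o"))          // true
	fmt.Println(globMatch("*-mini", "gpt-4o"))          // false
}
```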
10-21: The new `TransformState` fields are well designed. `Custom interface{}` and `OriginalRequestBody []byte` give the per-protocol converters a flexible state-passing mechanism, and the design is consistent with existing patterns.

internal/executor/converting_writer.go (2)
150-171: The state initialization and `originalRequestBody` cloning in `NewConvertingResponseWriter` are correct. `bytes.Clone` prevents external mutation from leaking into internal state, and `NewTransformState()` guarantees ToolCalls and Usage are initialized. Solid design overall.
224-250: `Finalize` switching to `TransformResponseWithState` for stateful conversion is correct. Passing `c.streamState` (always initialized to non-nil by the constructor) ensures downstream converters can safely access fields such as `OriginalRequestBody`.

internal/flow/engine.go (1)
1-88: The lightweight middleware engine is clean and correct. The loop in `Next()` supports both explicit calls (middleware style) and implicit sequential execution. `HandleWith` copies via `append([]HandlerFunc{}, e.handlers...)`, correctly avoiding mutation of the backing array of `e.handlers`. `Abort()` takes effect after the current handler returns, preventing the remaining handlers from running, which matches standard middleware semantics.

internal/converter/gemini_to_openai.go (1)
303-320: The "first chunk" check can still fire after `responseId` is set. Lines 291-294 already try to set `state.MessageID` from `responseId`, and Line 304 then checks `state.MessageID == ""`. This branch, which assigns the hard-coded `"chatcmpl-gemini"` ID, only runs when the event carries no `responseId`, so whether the first chunk is emitted depends on whether Gemini includes `responseId` in its first event.

If the first event does include `responseId` (Line 293 sets MessageID), this branch never fires and the client never receives the initial chunk carrying `Role: "assistant"` and empty `Content: ""`. Please confirm this is intended; some OpenAI-compatible clients rely on the first chunk carrying the role.

```bash
#!/bin/bash
# Check whether an initial role chunk is emitted elsewhere, or whether Gemini responses always include responseId
rg -n 'responseId' --type=go -C3
```

internal/converter/codex_openai_stream_test.go (1)
1-223: The tests cover the key streaming-conversion and tool-name-shortening scenarios; solid implementation overall. The three tests cover:

- Codex→OpenAI streaming tool calls (`finish_reason=tool_calls`)
- Codex→Claude streaming tool stop reason (`stop_reason=tool_use`)
- Claude→Codex tool-name shortening (the `maxToolNameLen` limit, with server tools preserved)

The test structure is clear, and the SSE event construction and assertions are accurate.
internal/converter/codex_to_claude.go (1)
461-468: The `cached_tokens` path `input_tokens_details.cached_tokens` in `extractResponsesUsage` is confirmed to match the Codex Responses API. The path is used consistently across the codebase, in particular in the conversion logic in internal/converter/openai_to_codex.go, which maps OpenAI-format `prompt_tokens_details.cached_tokens` to Codex-format `input_tokens_details.cached_tokens`. The code comments (internal/usage/extractor.go) also document this as the standard Codex/Responses API usage. The path is correct; no change needed.

internal/adapter/provider/kiro/adapter.go (1)
67-75: The pattern of extracting data from flow.Ctx in the `Execute` method is clear and correct. Fetching request data via the helpers `flow.GetRequestModel`, `flow.GetRequestBody`, `flow.GetIsStream`, etc. replaces the previous `ctxutil` approach. Falling back to `context.Background()` when `c.Request == nil` is reasonable defensive coding.

Regarding the `eventChan.SendRequestInfo(...)` call at Line 126: even if `flow.GetEventChan(c)` returns nil, it will not panic. The `SendRequestInfo` method explicitly checks the nil condition at internal/domain/adapter_event.go Lines 49-50 (`if ch == nil || info == nil { return }`), and all other Send* methods follow the same defensive pattern. The code is safe.

internal/adapter/provider/antigravity/request.go (1)
19-22: The `rand.NewSource(time.Now().UnixNano())` pattern is not deprecated in Go 1.20+. Go 1.20 deprecated the top-level `rand.Seed()`, not `rand.NewSource()`. Creating a local generator with an explicit seed is in fact the recommended pattern, particularly when deterministic results are needed. For non-security-sensitive ID generation (such as project IDs), this implementation has appropriate thread-safety protection. No change needed.

internal/adapter/provider/codex/adapter.go (2)
103-110: LGTM! The overall migration of the Execute method to flow.Ctx is cleanly structured, and the context fallback handling is reasonable.
internal/adapter/provider/custom/adapter.go (1)
43-51: LGTM! The Execute migration to flow.Ctx is clearly implemented; the per-client-type branching and header setup remain well structured.
internal/flow/keys.go (1)
1-25: LGTM! The key constants are named clearly and consistently; centralizing them eases maintenance.
internal/executor/executor.go (2)
69-71: LGTM! The middleware registration interface is concise and fits the extensible design.
82-100: Verified: every middleware correctly updates `state.lastErr`; no risk of lost errors. The check shows:

- `routeMatch`: sets `state.lastErr` on route-match failure (middleware_route_match.go Lines 37, 53)
- `ingress`: sets `state.lastErr` on project-binding errors (middleware_ingress.go Line 154)
- `dispatch`: sets `state.lastErr` on every failure path (middleware_dispatch.go Lines 31, 90, 240, 325, 370, 394-395), including context errors, dispatch errors, and the degradation error after retries are exhausted
- `egress`: a wrapper middleware that runs around the chain and only does cleanup

Every error path is captured and explicitly assigned to `state.lastErr`, which ExecuteWith ultimately returns at Line 99. No errors are lost within the middleware chain; no further change needed.

internal/adapter/provider/antigravity/adapter.go (1)
81-91: LGTM! The flow.Ctx integration in Execute is cleanly structured; the client type detection and the request transformation pipeline are well designed.
internal/converter/coverage_misc_helpers_test.go (1)
358-368: LGTM! Changing the assertion from checking for an underscore to checking `short != name` verifies the shortening behavior more robustly, without depending on implementation details.

Also applies to: 459-468
internal/converter/claude_to_codex.go (2)
33-47: LGTM! The `shortMap` construction is correct: server tools (those with a Type field) are excluded, and names are collected and mapped only for user tools.

126-131: The Type-only representation for server tools is the right design. Server tools (such as `web_search_20250305`) are built-in protocol-level tools that carry only a `Type` field in the Claude format (see the comment on the type definition). In the Codex format, `CodexTool` marks `Name`, `Description`, and `Parameters` with `omitempty`, so those fields are optional. The logic at Lines 126-131 correctly converts such server tools to the minimal representation.

Likely an incorrect or invalid review comment.
internal/handler/proxy.go (1)
243-270: The `normalizeOpenAIChatCompletionsPayload` logic is clear; LGTM. The detection of the Codex (Responses API) format (has `input`/`instructions`, no `messages`) is sound, and on conversion failure it safely returns `false` without affecting the original request.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
After rebasing on main, the CLIProxyAPI adapter's Execute signature no longer matched the new ProviderAdapter interface; switch it to *flow.Ctx and drop the unused ctxutil import.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/adapter/provider/antigravity/adapter.go (2)
714-721: ⚠️ Potential issue | 🔴 Critical

`eventChan` methods are called without a nil check, which will cause a nil interface panic. Line 714 obtains `eventChan`, and Line 717 immediately calls `eventChan.SendResponseInfo(...)` without checking for nil. If the flow context has no EventChan set, `flow.GetEventChan(c)` returns a nil interface and calling a method on it panics immediately.

Other sites in the same file (e.g. Lines 76, 131, 653) use the safe `if eventChan := ...; eventChan != nil` pattern; this one was missed.

🐛 Suggested fix

```diff
 eventChan := flow.GetEventChan(c)
 // Send initial response info (for streaming, we only capture status and headers)
-eventChan.SendResponseInfo(&domain.ResponseInfo{
+if eventChan != nil {
+	eventChan.SendResponseInfo(&domain.ResponseInfo{
 		Status:  resp.StatusCode,
 		Headers: flattenHeaders(resp.Header),
 		Body:    "[streaming]",
 	})
 }
```

Additionally, the `eventChan.SendFirstToken` call at Line 863 and the calls inside the `sendFinalEvents` closure need nil checks as well:

```diff
 if !firstChunkSent {
 	firstChunkSent = true
-	eventChan.SendFirstToken(time.Now().UnixMilli())
+	if eventChan != nil {
+		eventChan.SendFirstToken(time.Now().UnixMilli())
+	}
 }
```
993-1023: ⚠️ Potential issue | 🔴 Critical

`eventChan` in `handleCollectedStreamResponse` is likewise missing nil checks. Line 996 `eventChan.SendResponseInfo(...)`, Line 1004 `eventChan.SendMetrics(...)`, and Line 1022 `eventChan.SendResponseModel(...)` all use `eventChan` without checking for nil. Line 914 does have a nil check, but the subsequent code paths are not consistent with it.

🐛 Suggested fix

```diff
 // Send events via EventChannel
-// Send response info with collected body
-eventChan.SendResponseInfo(&domain.ResponseInfo{
-	Status:  resp.StatusCode,
-	Headers: flattenHeaders(resp.Header),
-	Body:    upstreamSSE.String(),
-})
-
-// Extract and send token usage
-if metrics := usage.ExtractFromStreamContent(upstreamSSE.String()); metrics != nil {
-	eventChan.SendMetrics(&domain.AdapterMetrics{
+if eventChan != nil {
+	eventChan.SendResponseInfo(&domain.ResponseInfo{
+		Status:  resp.StatusCode,
+		Headers: flattenHeaders(resp.Header),
+		Body:    upstreamSSE.String(),
+	})
+
+	if metrics := usage.ExtractFromStreamContent(upstreamSSE.String()); metrics != nil {
+		eventChan.SendMetrics(&domain.AdapterMetrics{
 			InputTokens: metrics.InputTokens,
 			...
-	})
-}
-
-// Extract and send response model
-...
-if modelVersion != "" {
-	eventChan.SendResponseModel(modelVersion)
+		})
+	}
+
+	if modelVersion != "" {
+		eventChan.SendResponseModel(modelVersion)
+	}
 }
```
🤖 Fix all issues with AI agents
In `@internal/adapter/provider/cliproxyapi_antigravity/adapter.go`:
- Around line 163-171: executeStream uses c.Request.Context() without a nil
check which can panic if c.Request is nil; update executeStream (mirroring the
fix in executeNonStream) to verify c.Request is non-nil before calling
c.Request.Context() and fall back to a safe context (e.g., c.Context() or
context.Background()) when nil, then pass that context into
a.executor.ExecuteStream; ensure the nil-checked symbol names are c.Request and
the call site a.executor.ExecuteStream so the change is localized.
- Around line 124-125: executeNonStream calls c.Request.Context() without
ensuring c.Request is non-nil which can panic; update the function
(executeNonStream in adapter.go) to check if c.Request != nil before using
c.Request.Context() and fall back to a safe context (e.g. context.Background()
or c.Context() if available) when c.Request is nil, and add the "context" import
at the top of the file; ensure the adjusted context value is passed into
a.executor.Execute(...) instead of unguarded c.Request.Context().
In `@internal/adapter/provider/cliproxyapi_codex/adapter.go`:
- Around line 93-94: The executeNonStream (and similarly executeStream) calls
use c.Request.Context() without nil-checking c.Request; add a guard like
checking if c.Request != nil before calling Context() (or default to
context.Background()) and use that ctx when calling a.executor.Execute and any
other places that currently call c.Request.Context(), mirroring the
nil-protection used in codex/adapter.go (e.g., wrap creation of the context in
executeNonStream/executeStream so c.Request is validated first and a safe
context is passed to a.executor.Execute).
In `@internal/adapter/provider/codex/adapter.go`:
- Around line 456-462: The TTFT value is being passed as an absolute timestamp
here (eventChan.SendFirstToken(time.Now().UnixMilli())), causing semantic
mismatch with other adapters that pass relative elapsed milliseconds; change the
call in the codex adapter to pass the elapsed milliseconds since request start
(use time.Since(startTime).Milliseconds()) so SendFirstToken receives a relative
duration like cliproxyapi_codex/cliproxyapi_antigravity do, and audit other
adapters (antigravity, custom) to ensure they also pass the same relative value.
🧹 Nitpick comments (5)
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (1)
244-274: `extractModelFromResponse` and `extractModelFromSSE` fully duplicate the codex adapter. These two helpers are identical in `cliproxyapi_antigravity` and `cliproxyapi_codex` (see cliproxyapi_codex/adapter.go Lines 207-237). Extract them into a shared helper package to eliminate the duplication.

internal/adapter/provider/codex/adapter.go (2)
117-128: The `stream` field is set twice.

Line 118 calls `applyCodexRequestTuning`, which already sets `body.stream = true` at Line 572; Lines 123-128 then set it again via `sjson.SetBytes`. The result is correct (the operation is idempotent), but the redundancy adds cognitive load.

♻️ Suggest removing the redundant set

Since `applyCodexRequestTuning` already handles the `stream` field, the duplicate at Lines 123-128 can be removed:

```diff
 // Apply Codex CLI payload adjustments (CLIProxyAPI-aligned)
 cacheID, updatedBody := applyCodexRequestTuning(c, requestBody)
 requestBody = updatedBody

 // Build upstream URL
 upstreamURL := CodexBaseURL + "/responses"
-upstreamStream := true
-if len(requestBody) > 0 {
-	if updated, err := sjson.SetBytes(requestBody, "stream", upstreamStream); err == nil {
-		requestBody = updated
-	}
-}
+upstreamStream := true
```
679-700: The 50MB scanner buffer cap in `extractCodexCompletedResponse` may be excessive. Line 681, `scanner.Buffer(nil, 52_428_800)`, allows line lengths up to ~50MB. That is large for a normal SSE response; measure actual response sizes and lower the cap accordingly, or add a comment explaining why this value was chosen.

internal/adapter/provider/antigravity/adapter.go (2)
576-595: The domain allowlist checks in `isAntigravityEndpoint` are redundant. The Line 582 check for `cloudcode-pa.googleapis.com` already covers Line 585's `daily-cloudcode-pa.googleapis.com` (substring match), and the combined condition at Line 591 is likewise covered by the earlier checks. Simplify the logic.

♻️ Simplification

```diff
 func isAntigravityEndpoint(endpoint string) bool {
 	endpoint = strings.ToLower(strings.TrimSpace(endpoint))
 	if endpoint == "" {
 		return false
 	}
-	if strings.Contains(endpoint, "cloudcode-pa.googleapis.com") {
-		return true
-	}
-	if strings.Contains(endpoint, "daily-cloudcode-pa.googleapis.com") {
-		return true
-	}
-	if strings.Contains(endpoint, "daily-cloudcode-pa.sandbox.googleapis.com") {
-		return true
-	}
-	if strings.Contains(endpoint, "/v1internal") && strings.Contains(endpoint, "cloudcode-pa") {
-		return true
-	}
-	return false
+	return strings.Contains(endpoint, "cloudcode-pa")
 }
```

If stricter matching is needed (to exclude something like `cloudcode-pa-something-else.example.com`), keep two checks: `cloudcode-pa.googleapis.com` and `cloudcode-pa.sandbox.googleapis.com`.
80-402: The `Execute` method is too large (~320 lines); split it up. It currently mixes token acquisition, request transformation, multi-endpoint retry, 401 refresh, error handling, and stream/non-stream dispatch. Extract the inner endpoint-retry loop and error handling into dedicated methods (e.g. `executeWithRetry`) to reduce cognitive complexity and improve maintainability.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- internal/adapter/provider/antigravity/adapter.go
- internal/adapter/provider/cliproxyapi_antigravity/adapter.go
- internal/adapter/provider/cliproxyapi_codex/adapter.go
- internal/adapter/provider/codex/adapter.go
🧰 Additional context used
🧬 Code graph analysis (2)
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (3)
internal/flow/engine.go (1)
Ctx(34-47)internal/domain/model.go (1)
Provider(176-204)internal/domain/errors.go (1)
NewProxyErrorWithMessage(61-63)
internal/adapter/provider/cliproxyapi_codex/adapter.go (3)
internal/flow/engine.go (1)
Ctx(34-47)internal/domain/model.go (1)
Provider(176-204)internal/domain/errors.go (1)
NewProxyErrorWithMessage(61-63)
🔇 Additional comments (7)
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (2)
58-65: The flow context migration is consistent and clear. The Execute method correctly extracts all required parameters from the flow context (clientType, requestBody, stream, model), aligning well with the new flow middleware architecture.
256-272: No change needed - the project already supports `strings.SplitSeq`. The project's `go.mod` specifies Go 1.25, which satisfies the requirement for `strings.SplitSeq` (added in Go 1.24+); there is no compatibility issue here.

internal/adapter/provider/cliproxyapi_codex/adapter.go (1)
55-60: The flow context migration is clean, and the Codex-specific handling is correct. The Execute method extracts flow context data correctly and needs no clientType (it is always Codex). The call to `usage.AdjustForClientType` ensures Codex token metrics are adjusted properly.

internal/adapter/provider/codex/adapter.go (2)
102-109: The `c.Request` nil-guard pattern is correct. Lines 106-109 correctly fall back to `context.Background()` when `c.Request` is nil, in contrast to other adapters that lack this guard.
358-393: `handleCollectedStreamResponse` is clearly structured; the SSE detection and extraction design is sound. The new `handleCollectedStreamResponse` correctly handles Codex always returning an SSE stream: it reads the whole response body first, detects whether it is SSE, and if so extracts the final response from the `response.completed` event. This aligns with Codex API behavior.

internal/adapter/provider/antigravity/adapter.go (2)
80-89: The Execute entry point's flow migration pattern is correct, with a consistent ctx fallback. Lines 86-89 correctly handle a nil `c.Request` by falling back to `context.Background()`, consistent with the codex adapter.
536-548: The path-joining logic in `buildUpstreamURL` is clear. It correctly handles both cases, base URLs with and without `/v1internal`, avoiding duplicated path segments, and the stream/non-stream endpoint selection matches the API spec.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
- Fix antigravity/codex retry paths ignoring NewRequestWithContext errors, causing nil panics
- Fix a resource leak from `defer resp.Body.Close()` inside the antigravity loop
- Fix openai_request passing []byte to sjson.SetBytes, causing base64 encoding
- Fix antigravity response SSE data not stripping the `data:` prefix
- Fix a data race from concurrent append in the proxy handler
- Fix missing nil checks on converter streamMeta/state type assertions
- Fix openai_to_codex sjson paths and a duplicated bubble sort
- Fix middleware_ingress c.Request nil panic and context.Canceled detection
- Fix middleware_route_match silently dropping errors and losing root-cause info
- Add c.Request nil guards to the cliproxyapi adapters
- Add a background cleanup goroutine for the codex cache
- Remove dead code extractCodexOutputText in codex_to_openai
- Refactor tool_name baseCandidate to reuse shortenNameIfNeeded
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/adapter/provider/antigravity/adapter.go (2)
726-733: ⚠️ Potential issue | 🔴 Critical

`eventChan` methods are called without a nil check and can cause a nil pointer panic. Line 726, `flow.GetEventChan(c)`, can return nil (see the implementation at flow/helpers.go:135-142), yet Line 729 immediately calls `eventChan.SendResponseInfo(...)` without a nil check; if no event channel is set, this panics.

Compare `handleCollectedStreamResponse` in the same file (Line 926) and the codex adapter's `handleStreamResponse` (Line 414), both of which correctly check `if eventChan != nil`. Moreover, the calls inside the `sendFinalEvents` closure (Lines 773-805) never check `eventChan` for nil either.

🐛 Suggested fix

```diff
 eventChan := flow.GetEventChan(c)
 // Send initial response info (for streaming, we only capture status and headers)
-eventChan.SendResponseInfo(&domain.ResponseInfo{
+if eventChan != nil {
+	eventChan.SendResponseInfo(&domain.ResponseInfo{
 		Status:  resp.StatusCode,
 		Headers: flattenHeaders(resp.Header),
 		Body:    "[streaming]",
 	})
 }
```

Add the same guard at the top of the `sendFinalEvents` closure:

```diff
 sendFinalEvents := func() {
+	if eventChan == nil {
+		return
+	}
 	if sseBuffer.Len() > 0 {
```
1006-1034: ⚠️ Potential issue | 🔴 Critical

`eventChan` in `handleCollectedStreamResponse` carries the same nil pointer risk. Line 926 has a nil check, but Lines 1008, 1015, and 1033 use `eventChan` without re-checking. The root cause is the same as in `handleStreamResponse`.

🐛 Suggested fix

```diff
 // Send events via EventChannel
 // Send response info with collected body
-eventChan.SendResponseInfo(&domain.ResponseInfo{
-	Status:  resp.StatusCode,
-	Headers: flattenHeaders(resp.Header),
-	Body:    upstreamSSE.String(),
-})
+if eventChan != nil {
+	eventChan.SendResponseInfo(&domain.ResponseInfo{
+		Status:  resp.StatusCode,
+		Headers: flattenHeaders(resp.Header),
+		Body:    upstreamSSE.String(),
+	})
+}
```

Apply the same guard around the `SendMetrics` and `SendResponseModel` calls, or guarantee at the top of the function that `eventChan` is non-nil (using a noop implementation).
🤖 Fix all issues with AI agents
In `@internal/adapter/provider/antigravity/openai_request.go`:
- Around line 170-182: The image URL parsing in the "image_url" case assumes a
data: URI and slices imageURL[5:] without validating the prefix; update the
logic that builds imageURL (inside the switch case "image_url" where imageURL :=
item.Get("image_url.url").String()) to first check strings.HasPrefix(imageURL,
"data:") (and handle case-insensitive variants if needed), only then split the
substring after "data:" into mime and data parts and set
parts.+itoa(p)+".inlineData.*" and thoughtSignature; apply the same guarded
check to the duplicated assistant-content parsing block that uses the same
imageURL slicing pattern so both code paths validate "data:" before slicing.
In `@internal/converter/openai_to_codex.go`:
- Around line 625-669: The stream handling currently only reads the first
tool_call via tcs.Get("0.*"), causing dropped calls when
parallel_tool_calls=true; update the block that reads tcs (where
newCallID/nameChunk/args are set and where st.FuncNames, st.FuncCallIDs,
st.FuncArgsBuf are updated and SSE events are emitted) to iterate over all
elements of the tool_calls array (e.g., loop over tcs.ForEach or for i := 0; i <
tcs.Get("length").Int(); i++), compute per-index values (use the loop index to
build the JSON path instead of "0"), and emit/append events and buffer writes
per element (preserving use of nextSeq(), FormatSSE, st.FuncNames[idx],
st.FuncCallIDs[idx], st.FuncArgsBuf[idx], etc.) so no tool_call beyond index 0
is ignored.
🧹 Nitpick comments (19)
internal/adapter/provider/antigravity/response.go (2)
169-188: The `json.Unmarshal` error is silently ignored, which can lose data. Line 171, `_ = json.Unmarshal([]byte(partResult.Raw), &m)`, discards the parse error. Lines 172-173 do guard against nil, but if `partResult.Raw` exists and is malformed, the code returns an empty map instead of the original data, potentially dropping the part's content. The risk is low here, since `partResult` comes from JSON already parsed by gjson, but logging the error would still aid debugging.
276-296: Multiple `sjson.Set` errors are ignored; the final payload may be incomplete. In Lines 276-295, the errors from `json.Marshal(parts)` and several `sjson.Set`/`SetRaw` calls are all discarded with `_`. If serialization or a set operation fails (e.g. on invalid JSON), the resulting `responseTemplate` may be missing key fields (such as `parts`, `finishReason`, `usageMetadata`). Given this is the critical stream-to-non-stream path, at least check or log the `json.Marshal(parts)` error.

internal/adapter/provider/antigravity/request.go (2)
328-337: The read-then-overwrite ordering of `systemInstruction.parts` is a hidden dependency; add a comment. Line 329 reads `partsResult` (a snapshot of the original parts) before `parts.0` and `parts.1` are overwritten, then Lines 334-336 append the original parts back. The logic is correct: reading before writing guarantees the original data survives. But this implicit snapshot-overwrite-append pattern is easy to break during future maintenance.

💡 Suggested comment

```diff
+// Snapshot existing system instruction parts before overwriting index 0 and 1
 partsResult := gjson.GetBytes(payload, "request.systemInstruction.parts")
 payload, _ = sjson.SetBytes(payload, "request.systemInstruction.role", "user")
 payload, _ = sjson.SetBytes(payload, "request.systemInstruction.parts.0.text", antigravitySystemInstruction)
 payload, _ = sjson.SetBytes(payload, "request.systemInstruction.parts.1.text", fmt.Sprintf("Please ignore following [ignore]%s[/ignore]", antigravitySystemInstruction))
+// Append original parts after the injected ones
 if partsResult.Exists() && partsResult.IsArray() {
```
272-300:finalizeOpenAIWrappedRequest中多处sjson.SetBytes错误被忽略。第 284-298 行中所有
sjson.SetBytes/sjson.DeleteBytes调用的错误均以_丢弃。虽然这些操作在正常 JSON 输入下不太可能失败,但如果上游传入了畸形 payload,静默失败可能导致发送到 upstream 的请求缺少必要字段(如project、requestId)。与
wrapV1InternalRequest(第 261-266 行)的风格不一致——后者对json.Marshal做了错误检查。internal/adapter/provider/codex/adapter.go (2)
116-124: Falling back to context.Background() can drop cancellation signals. Line 120 uses ctx := context.Background() when c.Request == nil, which yields a context with no cancellation. If the flow context carries no HTTP request (for example in tests or internal calls), operations with timeouts or cancellation (token refresh, upstream requests) cannot be interrupted. The same pattern appears in handleStreamResponse (line 439). This is defensive coding, but consider deriving a cancellable context from the flow context itself rather than relying solely on c.Request.
137-142: The stream flag is unconditionally forced to true; the name upstreamStream is misleading. Line 137 sets upstreamStream := true, meaning the upstream request is always sent in streaming mode regardless of whether the client asked for a stream. Line 589 in applyCodexRequestTuning also unconditionally sets "stream": true. This may well be intentional (the Codex API always uses SSE), but the variable name and the duplicated setting can confuse future maintainers. Add a comment explaining why.
Also applies to: 589-591
internal/adapter/provider/antigravity/adapter.go (1)
207-401: The multi-level retry loops are complex but logically complete. The three layers of control flow (thinking retry → endpoint retry → capacity retry) with break/continue jumps are hard to follow. Consider annotating the key jumps with their target loop to improve readability. Error handling, resource release, and retry-condition checks are all correct in the current implementation.
internal/executor/middleware_ingress.go (1)
123-131: The sessionRepo.GetBySessionID error is ignored, risking silent degradation. Line 124, session, _ := e.sessionRepo.GetBySessionID(state.sessionID), discards the error. A recoverable failure such as a lost database connection would silently create a temporary session (lines 126-131), which can break later project-binding behavior. At minimum, log the error:
💡 Suggestion
- session, _ := e.sessionRepo.GetBySessionID(state.sessionID)
+ session, sessErr := e.sessionRepo.GetBySessionID(state.sessionID)
+ if sessErr != nil {
+     log.Printf("[Executor] Failed to get session %s: %v", state.sessionID, sessErr)
+ }
internal/handler/proxy.go (2)
164-213: Request data has two sources, flow keys and direct Ctx fields, which can drift apart. Line 168 sets flow.KeyRequestBody while line 209 also sets c.InboundBody; line 172 sets flow.KeyIsStream while line 210 also sets c.IsStream. Downstream code may read the same data from different sources (the adapter uses flow.GetRequestBody(c), the executor uses c.InboundBody). If a later middleware updates one without the other, behavior diverges. Consolidate on a single source of truth, or document which one is authoritative.
78-81: ingress uses c.Request's Method and URL.Path without a nil check. Line 81, log.Printf("[Proxy] Received request: %s %s", r.Method, r.URL.Path), takes r from c.Request. When created via flow.NewCtx(w, r) in ServeHTTP (line 71), r cannot be nil (guaranteed by http.Server), but if ingress were reached through another path (for example flow.NewCtx(nil, nil) in a test) this would panic. Since the handler is only entered via ServeHTTP, the practical risk is low.
internal/adapter/provider/cliproxyapi_codex/adapter.go (2)
159-184: Streaming path: sseBuffer accumulates the entire SSE content in memory. For long-running or high-output streaming requests, sseBuffer keeps growing. The same pattern exists in both the codex and antigravity adapters and is a known design trade-off, but a future iteration should consider capping the buffer size or extracting token metrics incrementally.
229-247: Extract the duplicated extractModelFromResponse and extractModelFromSSE into a shared helper package. These functions are duplicated verbatim across internal/adapter/provider/codex/adapter.go, cliproxyapi_codex/adapter.go, and cliproxyapi_antigravity/adapter.go: extractModelFromResponse is identical in all three files, and extractModelFromSSE is identical in cliproxyapi_codex and cliproxyapi_antigravity. Move them into a shared helper package to avoid maintaining multiple copies. The Go version requirement is satisfied: go.mod specifies Go 1.25, which supports strings.SplitSeq (introduced in Go 1.24).
internal/converter/codex_to_openai.go (2)
364-380: buildOpenAIStreamDone and the response.completed branch can produce inconsistent finish-chunk formats. buildOpenAIStreamDone (lines 364-380) serializes a Go struct (OpenAIStreamChunk) to build the finish chunk, while the response.completed branch (lines 327-339) builds it with newOpenAIStreamTemplate + sjson.Set. The two paths emit different JSON field sets:
- buildOpenAIStreamDone includes system_fingerprint (omitempty), usage (omitempty), and similar fields
- newOpenAIStreamTemplate includes native_finish_reason and reasoning_content inside delta, and similar fields
Downstream clients may expect consistent finish-chunk fields; unify the two generation paths.
145-161: The return values of sjson.Set are ignored (_). The file discards the error of many sjson.Set calls. sjson.Set returns an error when the path format is invalid. All paths here are hard-coded literals, so failure is very unlikely, but as defensive coding, at least log errors on the critical path (building the final output template). This is an optional improvement and does not block the merge.
internal/converter/openai_to_codex.go (4)
78-82: The variable c shadows the method receiver. Line 81's c := m.Get("content") shadows the receiver c *openaiToCodexRequest (the receiver is unused in this scope, but still). Use another name such as contentVal for readability. Suggested change
- c := m.Get("content")
- if c.Exists() && c.Type == gjson.String && c.String() != "" {
+ contentVal := m.Get("content")
+ if contentVal.Exists() && contentVal.Type == gjson.String && contentVal.String() != "" {
Subsequent references to c must be updated to contentVal accordingly.
24-29: json.Unmarshal is used only to validate JSON; the tmp variable is unused. Lines 25-28 deserialize the body into interface{} purely to validate the JSON format, then line 29 works on the raw bytes via bytes.Clone(body). This is functionally correct, but json.Valid(body) avoids the unnecessary allocation. The same pattern appears in TransformWithState (lines 252-255). Suggested change
- var tmp interface{}
- if err := json.Unmarshal(body, &tmp); err != nil {
-     return nil, err
- }
+ if !json.Valid(body) {
+     return nil, fmt.Errorf("invalid JSON input")
+ }
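The json.Valid replacement suggested above can be sketched in isolation; validate is a hypothetical helper name for illustration, not an identifier from the codebase:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validate rejects malformed JSON without allocating the throwaway
// interface{} value that a validation-only json.Unmarshal needs.
func validate(body []byte) error {
	if !json.Valid(body) {
		return fmt.Errorf("invalid JSON input")
	}
	return nil
}

func main() {
	fmt.Println(validate([]byte(`{"model":"gpt"}`)) == nil) // true
	fmt.Println(validate([]byte(`{model:}`)) == nil)        // false
}
```

json.Valid scans the bytes once and builds no value tree, which is why it is cheaper than an Unmarshal whose result is discarded.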
459-499: State maps are redundantly re-initialized on the first chunk. Lines 468-485, inside the !st.Started branch, re-initialize st.MsgTextBuf, st.FuncArgsBuf, and the other maps, even though they were just initialized in lines 404-414. The duplication causes no bug (it runs only on the first chunk), and if convertOpenAIChatCompletionsChunkToResponses is called again with the same state object (as in streaming), st.Started is already true and no reset occurs. Correct as-is; noted for readability only.
672-805: The finish_reason handling block exceeds 130 lines; split it into sub-functions. Lines 672-803 handle message completion, reasoning completion, function-call completion, and the construction of response.completed. The logic is correct, but this single if block is too large; split it into helpers such as emitMsgDone, emitFuncCallDone, and emitResponseCompleted to reduce cognitive load.
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (1)
115-123: updateModelInBody does a full JSON deserialize/serialize round-trip just to update one field. The json.Unmarshal + json.Marshal round-trip reorders JSON keys and can change numeric precision. Since the project already uses sjson widely, sjson.SetBytes(body, "model", model) preserves the original JSON structure. Suggested change
+import "github.com/tidwall/sjson"
+
 func updateModelInBody(body []byte, model string) ([]byte, error) {
-    var req map[string]any
-    if err := json.Unmarshal(body, &req); err != nil {
-        return nil, err
-    }
-    req["model"] = model
-    return json.Marshal(req)
+    return sjson.SetBytes(body, "model", model)
 }
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
internal/adapter/provider/antigravity/adapter.go
internal/adapter/provider/antigravity/openai_request.go
internal/adapter/provider/antigravity/request.go
internal/adapter/provider/antigravity/response.go
internal/adapter/provider/cliproxyapi_antigravity/adapter.go
internal/adapter/provider/cliproxyapi_codex/adapter.go
internal/adapter/provider/codex/adapter.go
internal/converter/claude_to_openai_stream.go
internal/converter/codex_to_openai.go
internal/converter/coverage_openai_response_test.go
internal/converter/openai_to_codex.go
internal/converter/tool_name.go
internal/executor/middleware_ingress.go
internal/executor/middleware_route_match.go
internal/handler/proxy.go
🚧 Files skipped from review as they are similar to previous changes (4)
- internal/converter/tool_name.go
- internal/converter/claude_to_openai_stream.go
- internal/executor/middleware_route_match.go
- internal/converter/coverage_openai_response_test.go
🧰 Additional context used
🧬 Code graph analysis (7)
internal/adapter/provider/cliproxyapi_codex/adapter.go (1)
internal/flow/engine.go (1)
Ctx(34-47)
internal/adapter/provider/antigravity/openai_request.go (1)
internal/adapter/provider/antigravity/translator_helpers.go (1)
RenameKey(70-88)
internal/converter/openai_to_codex.go (2)
internal/converter/registry.go (1)
TransformState(11-21)internal/converter/sse.go (1)
FormatSSE(83-106)
internal/converter/codex_to_openai.go (3)
internal/converter/registry.go (1)
TransformState(11-21)
internal/converter/sse.go (3)
ParseSSE(15-61), FormatDone(109-111), FormatSSE(83-106)
internal/converter/types_openai.go (3)
OpenAIStreamChunk(104-112), OpenAIChoice(89-95), OpenAIMessage(27-34)
internal/adapter/provider/codex/adapter.go (5)
internal/flow/engine.go (1)
Ctx(34-47)
internal/flow/helpers.go (5)
GetRequestBody(64-71), GetIsStream(100-107), GetEventChan(136-143), GetOriginalRequestBody(73-80), GetOriginalClientType(19-26)
internal/adapter/provider/codex/settings.go (5)
CodexBaseURL(20-20), CodexVersion(26-26), OpenAIBetaHeader(35-35), CodexUserAgent(29-29), CodexOriginator(32-32)
internal/adapter/provider/codex/oauth.go (2)
RefreshAccessToken(186-222), ParseIDToken(226-249)
internal/usage/extractor.go (1)
ExtractFromResponse(31-49)
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (2)
internal/flow/engine.go (1)
Ctx(34-47)
internal/domain/errors.go (1)
NewProxyErrorWithMessage(61-63)
internal/adapter/provider/antigravity/request.go (2)
internal/adapter/provider/antigravity/translator_helpers.go (2)
Walk(26-53), RenameKey(70-88)
internal/adapter/provider/antigravity/schema_cleaner.go (2)
CleanJSONSchemaForAntigravity(21-23), CleanJSONSchemaForGemini(27-29)
🔇 Additional comments (23)
internal/adapter/provider/antigravity/response.go (1)
190-195: A past review comment is fixed: the SSE data: prefix is now correctly stripped. Line 192's
bytes.TrimPrefix(trimmed, []byte("data: ")) resolves the earlier issue where convertStreamToNonStream failed JSON validation because the SSE prefix was not stripped. [duplicate_comment, approve_code_changes]
internal/adapter/provider/antigravity/request.go (2)
304-318: A past review comment is fixed: the path-suffix safety check has been added. Line 312 now checks
strings.HasSuffix(p, "parametersJsonSchema"), avoiding the panic risk from negative indexes produced by path truncation. [duplicate_comment, approve_code_changes]
19-22: rand.NewSource is not deprecated in Go 1.20+. What Go 1.20 deprecated is the
math/rand.Seed() function, not rand.NewSource. The current code uses rand.New(rand.NewSource(time.Now().UnixNano())) guarded by a mutex, which is exactly the pattern recommended by the Go team for creating a reproducible/deterministic random sequence (for example in tests or simulations). The implementation already follows Go 1.20+ best practice; no change needed. Likely an incorrect or invalid review comment.
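For reference, the mutex-guarded pattern this comment describes looks roughly like the sketch below; the names randomInt, rngMu, and rng are illustrative, not the file's actual identifiers:

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

var (
	rngMu sync.Mutex
	// rand.NewSource remains supported on Go 1.20+;
	// only the package-level math/rand.Seed was deprecated.
	rng = rand.New(rand.NewSource(time.Now().UnixNano()))
)

// randomInt serializes access because a *rand.Rand created with
// rand.New is not safe for concurrent use by multiple goroutines.
func randomInt(n int) int {
	rngMu.Lock()
	defer rngMu.Unlock()
	return rng.Intn(n)
}

func main() {
	v := randomInt(10)
	fmt.Println(v >= 0 && v < 10) // true
}
```

The mutex is what makes this pattern concurrency-safe; the top-level math/rand functions are internally locked, but a private source is not.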
internal/adapter/provider/antigravity/openai_request.go (3)
250-254: A past review comment is fixed: fargs is now passed as a string. In line 253's
sjson.SetBytes(node, ..., fargs), fargs is a string (from tc.Get("function.arguments").String()), so sjson.SetBytes sets it correctly as a JSON string and the earlier base64-encoding problem no longer occurs. [duplicate_comment, approve_code_changes]
296-399: The tool-handling logic is clear and the implementation is solid. Function-declaration handling, pass-through of google_search/code_execution/url_context, and the
parametersJsonSchema rename are cleanly structured, with logging and sensible fallbacks on the error paths.
114-126: Storing c.Raw at line 123 needs care for string-typed tool content. When a tool message's content is a JSON string (such as
"ok"), c.Raw contains its JSON representation including the quotes; it is later applied at line 279 via sjson.SetBytes(toolNode, ..., resp). Since resp is already a quoted JSON string, SetBytes escapes it again, which may double-escape it. Use
parsed.String() instead of resp at line 279 so string-typed responses are handled correctly:
} else {
    toolNode, _ = sjson.SetBytes(toolNode, "parts."+itoa(pp)+".functionResponse.response.result", parsed.String())
}
internal/adapter/provider/codex/adapter.go (3)
189-192: A past review comment is fixed: the http.NewRequestWithContext error on the retry path is now handled correctly. Lines 189-192 now capture and check the error, avoiding a
nil-pointer panic. [duplicate_comment, approve_code_changes]
28-41: A past review comment is fixed: codexCaches now has a background cleanup goroutine. The 5-minute cleanup goroutine started in
init resolves the memory leak from expired cache entries. [duplicate_comment, approve_code_changes]
553-600: applyCodexRequestTuning is cleanly implemented and integrates the flow context correctly. The function correctly uses
flow.GetOriginalRequestBody and flow.GetOriginalClientType to read data from the flow context; cache-key construction is sound, and the cleanup and default-value logic is complete.
internal/adapter/provider/antigravity/adapter.go (4)
168-180: A past review comment is fixed: projectIDOnce eliminates the data race on concurrent writes to config.ProjectID.
sync.Once guarantees that FetchProjectInfo and the config.ProjectID assignment run exactly once, and the read at line 180 happens after Once.Do returns, satisfying the happens-before guarantee. [duplicate_comment, approve_code_changes]
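A stripped-down sketch of the Once pattern being approved here; config and projectID are placeholders standing in for the adapter's real types:

```go
package main

import (
	"fmt"
	"sync"
)

type config struct {
	once      sync.Once
	ProjectID string
}

// projectID runs fetch at most once. Once.Do establishes a
// happens-before edge, so every caller that returns from Do
// observes the completed write to ProjectID.
func (c *config) projectID(fetch func() string) string {
	c.once.Do(func() { c.ProjectID = fetch() })
	return c.ProjectID
}

func main() {
	cfg := &config{}
	calls := 0
	fetch := func() string { calls++; return "proj-123" }
	for i := 0; i < 3; i++ {
		cfg.projectID(fetch)
	}
	fmt.Println(cfg.projectID(fetch), calls) // proj-123 1
}
```

This is why concurrent callers can safely read the field after Do returns without any extra locking.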
258-265: A past review comment is fixed: the http.NewRequestWithContext error on the retry path is now handled. Lines 259-262 correctly capture and return the error.
[duplicate_comment, approve_code_changes]
387-400: A past review comment is fixed: defer is no longer used inside the loop; resp.Body is closed explicitly. Each handling path (lines 390, 395, 399) explicitly calls
resp.Body.Close() after the handler returns, avoiding both resource leaks and double closes. [duplicate_comment, approve_code_changes]
81-90: The flow-context integration is correct. The pattern of reading client type, requested model, mapped model, and request body from
flow.Ctx is correct, and the context.Background() fallback and nil checks are consistent with the other adapters.
internal/executor/middleware_ingress.go (3)
91-110: A past review comment is fixed: the c.Request nil guard is now in place. The
if c.Request != nil check at line 96 ensures RequestInfo is only created when c.Request is non-nil, avoiding the nil-pointer panic on c.Request.Method. Note, however, that when c.Request == nil, proxyReq.RequestInfo will be nil; please confirm downstream code handles that case. [duplicate_comment, approve_code_changes]
133-144: A past review comment is fixed: errors.Is replaces direct comparison. Line 136's
errors.Is(err, context.Canceled) correctly handles wrapped context-cancellation errors. [duplicate_comment, approve_code_changes]
64-70: The double type-assertion pattern correctly distinguishes http.Header from map[string][]string. Go type assertions match the concrete type, not the underlying type. If the stored value has type
http.Header, the v.(map[string][]string) assertion at line 65 fails and the v.(http.Header) assertion at line 68 succeeds. Either way the value is correctly assigned to state.requestHeaders (typed http.Header, i.e. map[string][]string). Implementation is correct.
internal/handler/proxy.go (2)
70-76: A past review comment is fixed: the concurrent data race is eliminated by an explicit copy. Lines 72-74 build an independent handlers slice with
make + copy, avoiding the data race of append(h.extra, h.dispatch) under concurrent requests. [duplicate_comment, approve_code_changes]
246-273: The detection logic in normalizeOpenAIChatCompletionsPayload is clear. The function correctly distinguishes OpenAI Chat Completions (has
messages) from Codex Responses (has input or instructions but no messages) and converts via the converter. On conversion failure it silently falls back (returns false) without affecting the original request.
internal/adapter/provider/cliproxyapi_codex/adapter.go (2)
56-92: The Execute method is well structured and the flow-context integration is sound. Extracting request data from
flow.Ctx, building the executor request, and reporting request info over the event channel are all clear.
internal/converter/codex_to_openai.go (2)
346-362: getOpenAIStreamState uses a good defensive type-assertion pattern. Try the assertion first, reinitialize on failure; this matches the pattern that
convertOpenAIChatCompletionsChunkToResponses in openai_to_codex.go should adopt (noted in an earlier review).
298-325: Watch the starting value of the st.Index increment in response.output_item.done.
st.Index starts at -1 (line 350) and st.Index++ runs on each function_call (line 301), so the first tool call gets index 0, which is correct. If the upstream sends text content first (not a tool_call), the OpenAI choices index typically stays
0, while the index inside the tool_calls array here counts independently from 0; that matches the semantics of the tool_calls index in OpenAI streaming (as opposed to the choices index), so the logic is correct.
internal/converter/openai_to_codex.go (1)
820-860: applyRequestEchoToResponse skips fields already present in the response. Lines 849-851: if a field already has a value in the response JSON, it is not overwritten. So if the upstream already set fields such as
model, the request echo will not override them. Reasonable.
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (1)
59-113: The Execute flow is clear and the flow-context integration is sound. It mirrors the codex adapter's structure: extract flow data → update model → send events → build the executor request → branch on stream vs non-stream.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
When TransformResponseWithState returned (nil, nil), an empty response used to be written. Now converted == nil is detected and the original body is used as a fallback.
- Fix openai_request image URL parsing: use strings.HasPrefix("data:")
instead of the fragile len > 5 check, so HTTP URLs are no longer mis-parsed
- Fix openai_to_codex streaming tool_calls handling only the first element: iterate over
all tool_call elements to support the parallel_tool_calls scenario
- Unify TTFT metric semantics: the cliproxyapi adapters switch from relative elapsed time to an absolute
timestamp (UnixMilli), consistent with the other adapters and downstream consumers
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@internal/adapter/provider/antigravity/openai_request.go`:
- Around line 308-325: When RenameKey(fnRaw, "parameters",
"parametersJsonSchema") fails, the fallback sets default parametersJsonSchema
fields but does not remove the original "parameters" key, leaving both keys
present; update the error branch in the RenameKey failure path to delete the
original "parameters" key from fnRaw (using sjson.Delete or equivalent) before
setting defaults (the block that currently calls sjson.Set and sjson.SetRaw),
ensuring fnRaw no longer contains "parameters" and only contains the fallback
"parametersJsonSchema" structure.
In `@internal/converter/openai_to_codex.go`:
- Around line 321-335: In the non-stream code path inside openai_to_codex.go
where you build the "usage" fields (the block that inspects the local variable
usage and calls sjson.Set on resp), add a fallback read for reasoning tokens:
when setting "usage.output_tokens_details.reasoning_tokens" first check
usage.Get("completion_tokens_details.reasoning_tokens") and if it does not exist
fall back to usage.Get("output_tokens_details.reasoning_tokens"); set whichever
exists (or skip only if both missing). Update the same pattern used for cached
tokens (using d.Exists()) so the code writes the correct reasoning token value
into resp the same way the streaming path does.
- Around line 806-808: The caller-safety issue is that TransformChunk assumes
its incoming *TransformState (state) is non-nil and directly accesses
state.Buffer (and later state.OriginalRequestBody elsewhere), which can panic if
a nil state is passed; add an explicit nil check at the start of TransformChunk
(the function named TransformChunk) to guard against a nil state, returning
early (nil or a safe default response) or an error as appropriate, and ensure
callers of TransformChunk handle the returned value; this keeps
convertOpenAIChatCompletionsChunkToResponses and the later
len(state.OriginalRequestBody) access safe because state will no longer be
unexpectedly nil.
🧹 Nitpick comments (7)
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (2)
253-283: extractModelFromResponse / extractModelFromSSE duplicate the codex adapter exactly; extract them into a shared package. These two functions have identical implementations in
cliproxyapi_codex/adapter.go (lines 217-245). As more CLIProxyAPI adapters appear (such as Kiro), duplication will only grow. Extract them into a common location such as internal/adapter/provider/cliproxyapi_common or internal/usage.
245-248: Errors are silently swallowed after a mid-stream failure; at least log them. When
streamErr != nil but some data has already been written (sseBuffer.Len() > 0), the error is ignored entirely. The HTTP status code has already been sent and cannot be rolled back, but the missing log makes diagnosis hard. Add a warning-level log here. 🔧 Suggested fix
+ // If error occurred after partial data was sent, log it for diagnostics
+ if streamErr != nil && sseBuffer.Len() > 0 {
+     log.Printf("[CLIProxyAPI-Antigravity] stream error after partial send: %v", streamErr)
+ }
+
  // If error occurred before any data was sent, return error to caller
  if streamErr != nil && sseBuffer.Len() == 0 {
      return domain.NewProxyErrorWithMessage(streamErr, true, fmt.Sprintf("stream chunk error: %v", streamErr))
  }
internal/adapter/provider/antigravity/openai_request.go (3)
170-182: The duplicated data-URI parsing logic can be extracted into a helper. The data-URI parsing in the user-content block (lines 170-182) and the assistant-content block (lines 220-232) is identical. Moreover, after
SplitN(…, ";", 2) the code takes pieces[1][7:], assuming the format <mime>;base64,<data>; if the data URI carries extra parameters (such as data:image/png;charset=utf-8;base64,DATA), pieces[1] becomes "charset=utf-8;base64,DATA" and [7:] slices out the wrong data. Extract a shared helper and use
strings.Index(pieces[1], ",") instead of the hard-coded [7:]. ♻️ Suggested fix
// parseDataURI extracts MIME type and base64 data from a data URI string.
// Returns mimeType, data, ok.
func parseDataURI(dataURI string) (string, string, bool) {
    if !strings.HasPrefix(dataURI, "data:") {
        return "", "", false
    }
    rest := dataURI[len("data:"):]
    pieces := strings.SplitN(rest, ";", 2)
    if len(pieces) != 2 {
        return "", "", false
    }
    mimeType := pieces[0]
    commaIdx := strings.Index(pieces[1], ",")
    if commaIdx < 0 {
        return "", "", false
    }
    data := pieces[1][commaIdx+1:]
    return mimeType, data, len(data) > 0
}
Then at both call sites:
  case "image_url":
      imageURL := item.Get("image_url.url").String()
-     if strings.HasPrefix(imageURL, "data:") {
-         pieces := strings.SplitN(imageURL[len("data:"):], ";", 2)
-         if len(pieces) == 2 && len(pieces[1]) > 7 {
-             mimeType := pieces[0]
-             data := pieces[1][7:]
+     if mimeType, data, ok := parseDataURI(imageURL); ok {
          node, _ = sjson.SetBytes(node, "parts."+itoa(p)+".inlineData.mime_type", mimeType)
          node, _ = sjson.SetBytes(node, "parts."+itoa(p)+".inlineData.data", data)
          node, _ = sjson.SetBytes(node, "parts."+itoa(p)+".thoughtSignature", geminiCLIFunctionThoughtSignature)
          p++
-         }
      }
Also applies to: 220-232
19-25: The error returns of sjson.SetBytes are ignored throughout the file. sjson almost never fails on valid paths, but the pattern
out, _ = sjson.SetBytes(...) at line 25 recurs across the whole function (50+ call sites). If any step unexpectedly fails during construction, later set operations may keep working on corrupted JSON and produce hard-to-debug data errors. Consider at least checking and logging errors on the critical paths (setting top-level fields such as model, contents, tools).
404-404: itoa can simply use the standard library's strconv.Itoa.
fmt.Sprintf("%d", i) performs worse than strconv.Itoa(i), and this function is called frequently inside loops. ♻️ Suggested fix
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
+func itoa(i int) string { return strconv.Itoa(i) }
Add "strconv" to the imports.
internal/converter/openai_to_codex.go (2)
25-29: json.Unmarshal is used only for validation and the result is discarded; use json.Valid. Lines 25-28 parse the body into
interface{} purely to validate the JSON format. json.Valid(body) is more efficient, avoiding the unnecessary reflection and allocation. The same pattern also appears at lines 252-254. ♻️ Suggested fix
- var tmp interface{}
- if err := json.Unmarshal(body, &tmp); err != nil {
-     return nil, err
- }
+ if !json.Valid(body) {
+     return nil, fmt.Errorf("invalid JSON input")
+ }
391-395: synthesizeResponseID uses fmt.Sprintf; consider the cost on the concurrent hot path. The function runs for every streaming request.
fmt.Sprintf has higher allocation overhead than the strconv family. Under high concurrency it can be optimized: ♻️ Optional optimization
+import "strconv"
+
 func synthesizeResponseID() string {
-    return fmt.Sprintf("resp_%x_%d", time.Now().UnixNano(), atomic.AddUint64(&responseIDCounter, 1))
+    return "resp_" + strconv.FormatInt(time.Now().UnixNano(), 16) + "_" + strconv.FormatUint(atomic.AddUint64(&responseIDCounter, 1), 10)
 }
📜 Review details
📒 Files selected for processing (4)
internal/adapter/provider/antigravity/openai_request.go
internal/adapter/provider/cliproxyapi_antigravity/adapter.go
internal/adapter/provider/cliproxyapi_codex/adapter.go
internal/converter/openai_to_codex.go
🧰 Additional context used
🧬 Code graph analysis (4)
internal/adapter/provider/antigravity/openai_request.go (1)
internal/adapter/provider/antigravity/translator_helpers.go (1)
RenameKey(70-88)
internal/converter/openai_to_codex.go (2)
internal/converter/registry.go (1)
TransformState(11-21)
internal/converter/sse.go (1)
FormatSSE(83-106)
internal/adapter/provider/cliproxyapi_codex/adapter.go (2)
internal/flow/engine.go (1)
Ctx(34-47)
internal/domain/errors.go (1)
NewProxyErrorWithMessage(61-63)
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (4)
internal/flow/engine.go (1)
Ctx(34-47)
internal/domain/model.go (1)
Provider(176-204)
web/src/lib/transport/types.ts (1)
Provider(67-77)
internal/domain/errors.go (1)
NewProxyErrorWithMessage(61-63)
🔇 Additional comments (9)
internal/adapter/provider/cliproxyapi_antigravity/adapter.go (3)
59-66: LGTM, the flow-context migration is clean and correct. The
Execute entry point extracts request parameters uniformly via the flow.Get* helpers. requestModel is used only for logging while model (post-mapping) drives execution; a sensible separation of concerns.
213-213: TTFT semantics unified to an absolute UnixMilli timestamp, matching the PR goal.
eventChan.SendFirstToken(time.Now().UnixMilli()) switches from relative elapsed time to an absolute millisecond timestamp, consistent with the other adapters and downstream consumers.
265-281: No change needed. go.mod already specifies go 1.25, satisfying the version requirement of strings.SplitSeq (added in Go 1.24).
internal/adapter/provider/cliproxyapi_codex/adapter.go (3)
112-120: usage.AdjustForClientType is called only in the Codex adapter, which looks intentional. The Codex-specific adjustment (
input_tokens includes cached_tokens) is applied correctly on both the stream and non-stream paths. The Antigravity adapter does not need it, so the difference is reasonable. Also applies to: 192-199
178-178: TTFT semantics unified to UnixMilli, consistent with the antigravity adapter.
56-61: The Codex adapter does not call updateModelInBody, unlike the Antigravity adapter; confirm this is intended. CLIProxyAPIAntigravityAdapter calls
updateModelInBody(requestBody, model) (line 71) before building the executor request, rewriting the model field in the payload to the mapped model name. CLIProxyAPICodexAdapter passes the original requestBody as execReq.Payload without that update. Both adapters pass the mapped model via
execReq.Model, and since they use different executor types (exec.CodexExecutor vs exec.AntigravityExecutor) the behavioral difference may be deliberate. Please confirm whether the Codex executor reads the model field from the payload or only uses execReq.Model.
internal/converter/openai_to_codex.go (3)
397-416: The state-initialization defensive pattern is correct and fixes the earlier panic risk. Using the comma-ok assertion instead of a direct type assertion correctly handles
state.Custom being nil or of a different type.
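The comma-ok pattern being approved can be reduced to a few lines; streamState here is a stand-in for the converter's real state type, not the actual identifier:

```go
package main

import "fmt"

type streamState struct{ Started bool }

// getState recovers typed state from an untyped slot. The comma-ok
// form falls back to a fresh value instead of panicking when the
// slot is nil or holds a foreign type.
func getState(custom any) *streamState {
	if st, ok := custom.(*streamState); ok && st != nil {
		return st
	}
	return &streamState{}
}

func main() {
	fmt.Println(getState(nil).Started)                         // false: fresh state
	fmt.Println(getState(&streamState{Started: true}).Started) // true: reused state
}
```

A bare assertion like custom.(*streamState) would panic on a nil or mismatched value; the two-result form returns ok=false instead.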
627-675: Streaming tool_calls now correctly iterate over all elements, fixing the earlier first-element-only bug. Line 627 uses
for tcIndex, tc := range tcs.Array() to iterate all tool_calls, and lines 629-631 prefer the tc.Get("index") field to determine callIndex, correctly supporting parallel_tool_calls.
818-825: The sortedKeys generic helper is concise and effective, replacing the earlier duplicated bubble sort. Using
sort.Ints with the generic map[int]T signature removes the code duplication and is more efficient than the previous O(n²) bubble sort.
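A self-contained version of the helper described above, under the same map[int]T signature:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the int keys of any map[int]T in ascending
// order, giving deterministic iteration over Go's unordered maps.
func sortedKeys[T any](m map[int]T) []int {
	keys := make([]int, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Ints(keys)
	return keys
}

func main() {
	buf := map[int]string{2: "c", 0: "a", 1: "b"}
	fmt.Println(sortedKeys(buf)) // [0 1 2]
}
```

Because the type parameter only appears in the map value position, one function covers every per-index buffer map (strings.Builder values, string values, and so on) without duplication.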
- Fix openai_request: the RenameKey failure path did not delete the original parameters field
- Fix openai_to_codex: the non-stream path lacked the completion_tokens_details fallback read for reasoning_tokens
- Add a state nil guard at the openai_to_codex TransformChunk entry point
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@internal/adapter/provider/antigravity/openai_request.go`:
- Around line 144-152: The array handling for system messages writes parts
without checking element type, causing empty parts when elements lack a text
field; in the block using content.IsArray()/contents := content.Array() iterate
contents and before using contents[j].Get("text").String() verify the element is
a text part (e.g., check contents[j].Get("type").String() == "text" and that the
"text" field exists/non-empty), only then call sjson.SetBytes to write
request.systemInstruction.parts.%d.text and increment systemPartIndex; skip any
non-text or missing-text elements to avoid creating empty parts.
- Around line 163-169: In the "text" case inside the switch (where
item.Get("type").String() == "text") avoid incrementing the part index p when
text == "" — only call sjson.SetBytes(node, "parts."+itoa(p)+".text", text) and
then p++ when you actually wrote a non-empty text value; apply the same change
to the assistant-content handling block that also increments p (the analogous
switch/case for assistant items) so both branches match the behavior of
image_url and file branches and prevent null placeholders in the parts array.
- Around line 270-280: The tool response string `resp` is already raw JSON (from
`c.Raw`), but when `parsed.Type` is not `gjson.JSON` the code uses
`sjson.SetBytes` which double-encodes strings/numbers/booleans; update the
branch that sets `parts."+itoa(pp)+".functionResponse.response.result` to call
`sjson.SetRawBytes` with `resp` (like the JSON branch) so raw JSON is written
unchanged into `toolNode` (refer to variables `resp`, `parsed`, `toolNode` and
the call site `sjson.SetBytes`/`sjson.SetRawBytes`).
In `@internal/converter/openai_to_codex.go`:
- Around line 464-490: The usage parsing should be moved so it runs after the
initial-state initialization to avoid being reset; update the logic in the
function that processes SSE chunks so the block that reads usage :=
root.Get("usage") and writes into st.PromptTokens, st.CachedTokens,
st.CompletionTokens, st.TotalTokens, st.ReasoningTokens and st.UsageSeen is
executed after the if !st.Started { ... } initialization block (the block that
sets st.ResponseID, st.Created, st.MsgTextBuf, st.ReasoningBuf, st.FuncArgsBuf,
st.FuncNames, st.FuncCallIDs, st.MsgItemAdded, st.MsgContentAdded,
st.MsgItemDone, st.FuncArgsDone, st.FuncItemDone and zeroes token fields);
ensure you only move the usage parsing and not change the initialization
semantics so that any usage payload in the first chunk is preserved.
🧹 Nitpick comments (9)
internal/adapter/provider/antigravity/openai_request.go (4)
19-25: Global pattern: every error returned by sjson.SetBytes is ignored. The whole function discards the error with
out, _ = sjson.SetBytes(...). For static paths sjson almost never fails, so the risk is low, but if the paths ever become dynamic (for example built from user input), silent failures could lose data and be hard to trace. Suggestion: declare a single
var setErr error at the top of the function and check/log it at the key set steps, or at least document that the omission is deliberate.
53-55: The max_completion_tokens field may be missed. Newer OpenAI APIs use
max_completion_tokens instead of (or alongside) max_tokens. Only max_tokens is read here, so a client sending max_completion_tokens has that setting silently dropped. Add a fallback read:
🔧 Suggested fix
- if maxTok := gjson.GetBytes(rawJSON, "max_tokens"); maxTok.Exists() && maxTok.Type == gjson.Number {
+ maxTok := gjson.GetBytes(rawJSON, "max_tokens")
+ if !maxTok.Exists() || maxTok.Type != gjson.Number {
+     maxTok = gjson.GetBytes(rawJSON, "max_completion_tokens")
+ }
+ if maxTok.Exists() && maxTok.Type == gjson.Number {
      out, _ = sjson.SetBytes(out, "request.generationConfig.maxOutputTokens", maxTok.Num)
  }
174-176: The data-URI parsing hard-codes the 7-character base64, prefix.
pieces[1][7:] assumes the encoding is always base64,. A data URI with a different encoding (such as data:text/plain,Hello without base64) would have its data sliced incorrectly. Images are almost always base64-encoded, but splitting on
, is more robust: 🛡️ Suggested fix
- if len(pieces) == 2 && len(pieces[1]) > 7 {
-     mimeType := pieces[0]
-     data := pieces[1][7:]
+ if len(pieces) == 2 {
+     mimeType := pieces[0]
+     commaIdx := strings.IndexByte(pieces[1], ',')
+     if commaIdx < 0 || commaIdx+1 >= len(pieces[1]) {
+         continue
+     }
+     data := pieces[1][commaIdx+1:]
The same pattern appears in the assistant-content handling at lines 224-226.
405-405: Minor: itoa can use strconv.Itoa instead of fmt.Sprintf.
strconv.Itoa skips format parsing, so it is slightly faster and more idiomatic. Negligible impact in this scenario; style only.
internal/converter/openai_to_codex.go (5)
25-29: JSON validation does an unnecessary deserialization plus byte clone.
json.Unmarshal(body, &tmp) allocates tmp just to validate the JSON format; json.Valid(body) avoids that. Likewise bytes.Clone(body) is unnecessary here because gjson.GetBytes does not modify the input body and body is never mutated afterwards. ♻️ Suggested simplification
- var tmp interface{}
- if err := json.Unmarshal(body, &tmp); err != nil {
-     return nil, err
- }
- rawJSON := bytes.Clone(body)
+ if !json.Valid(body) {
+     return nil, fmt.Errorf("invalid JSON input")
+ }
+ rawJSON := body
81-81: The local variable c shadows the method receiver. Line 81's
c := m.Get("content") shares the name of the method receiver c *openaiToCodexRequest from line 24. It does not affect behavior (the receiver is unused afterwards), but it hurts readability and increases maintenance risk. Rename it to content or cnt. ♻️ Suggested change
- c := m.Get("content")
- if c.Exists() && c.Type == gjson.String && c.String() != "" {
+ content := m.Get("content")
+ if content.Exists() && content.Type == gjson.String && content.String() != "" {
All later references to this variable
c (lines 82-115) must be updated to content as well.
252-255: The same json.Unmarshal-with-unused-tmp pattern as lines 25-28. As suggested there,
json.Valid(body) works here too.
573-587: Reasoning state (ReasoningID/ReasoningBuf) is a global singleton and does not support multiple choices (n>1).
st.ReasoningID and st.ReasoningBuf are shared across all choices. If the OpenAI stream returns several choices (n>1), their reasoning_content would overwrite each other. No impact in the current scenario (n=1), but supporting multiple choices later would require per-choice reasoning state (like the MsgTextBuf map[int]*strings.Builder pattern).
602-630: The finalize-message logic is duplicated in two places. Lines 602-630 (closing the current message when tool_calls arrive) and lines 685-717 (closing all unfinished messages when finish_reason arrives) emit an almost identical event sequence (
output_text.done → content_part.done → output_item.done). Extract a helper method (such as emitMessageDone(idx int, ...)) to remove 50+ duplicated lines. Also applies to: 685-717
📜 Review details
📒 Files selected for processing (2)
internal/adapter/provider/antigravity/openai_request.go
internal/converter/openai_to_codex.go
🧰 Additional context used
🧬 Code graph analysis (2)
internal/adapter/provider/antigravity/openai_request.go (1)
internal/adapter/provider/antigravity/translator_helpers.go (1)
RenameKey(70-88)
internal/converter/openai_to_codex.go (2)
internal/converter/registry.go (1)
TransformState(11-21)
internal/converter/sse.go (2)
ParseSSE(15-61), FormatSSE(83-106)
🔇 Additional comments (11)
internal/adapter/provider/antigravity/openai_request.go (4)
27-41: LGTM!
The mapping from reasoning_effort to thinkingConfig is clear: "auto" takes the budget path, "none" turns thoughts off, and other values pass through directly.
90-126: LGTM! The two passes collect
tool_call_id → function.name and tool_call_id → response respectively; clear and easy to maintain.
296-400: LGTM! Tool handling is complete: function-typed tools go through the
parametersJsonSchema rename path (with fallback), google_search/code_execution/url_context pass through as standalone tool nodes, and everything merges into the request.tools array. The fix that deletes the original parameters before writing defaults when RenameKey fails is also in place.
407-423: LGTM!
attachDefaultSafetySettings is concise: it returns immediately when settings exist, otherwise appends defaults, with a safe fallback on error.
internal/converter/openai_to_codex.go (7)
345-363: LGTM!
TransformChunk is clear, the state == nil guard is in place, and the SSE event loop correctly handles and skips done events.
396-400: LGTM!
synthesizeResponseID uses atomic.AddUint64 for concurrency safety combined with UnixNano for uniqueness, suitable for synthesized response IDs.
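The approved ID scheme, reproduced here as a runnable sketch (the function body matches the one quoted in the optimization suggestion earlier in this review):

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var responseIDCounter uint64

// synthesizeResponseID pairs a nanosecond timestamp with an atomic
// counter, so concurrent callers landing in the same nanosecond
// still receive distinct IDs.
func synthesizeResponseID() string {
	return fmt.Sprintf("resp_%x_%d", time.Now().UnixNano(), atomic.AddUint64(&responseIDCounter, 1))
}

func main() {
	a, b := synthesizeResponseID(), synthesizeResponseID()
	fmt.Println(a != b) // true
}
```

The atomic counter is what guarantees uniqueness; the timestamp alone would not, since UnixNano can repeat across rapid calls on some platforms.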
823-830: LGTM!
The sortedKeys generic helper is concise and efficient, uses sort.Ints in place of the previous O(n²) bubble sort, and is reusable for any map[int]T.
832-872: LGTM!
applyRequestEchoToResponse is clear: it backfills from the request only when the response lacks the corresponding field, and correctly distinguishes the sjson set modes for primitive vs complex types.
321-337: LGTM; the previously reported reasoning_tokens fallback read is fixed. The non-stream path now checks
completion_tokens_details.reasoning_tokens first, then falls back to output_tokens_details.reasoning_tokens, matching the streaming path (lines 448-454).
402-421: The previously requested safety check on the state.Custom type assertion is correctly implemented. The comma-ok assertion reinitializes on failure, avoiding the panic risk.
632-680: The previously reported multi-tool_calls iteration is fixed; the current implementation is correct.
for tcIndex, tc := range tcs.Array() iterates all tool_calls elements, tc.Get("index") supplies the actual index, and each callIndex keeps independent FuncNames/FuncCallIDs/FuncArgsBuf state.
- Fix openai_request: array-form system messages wrote empty text parts
- Fix openai_request: the text branch incremented p on empty text, producing null placeholders
- Fix openai_request: non-JSON tool responses were double-encoded
- Fix openai_to_codex: streaming usage parsing ran before initialization and was reset
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@internal/converter/openai_to_codex.go`:
- Around line 753-814: The code currently builds and emits the
`response.completed` SSE inside the choices.ForEach callback (using variables
completed, outputsWrapper and calling FormatSSE), which causes duplicate
completed events when multiple choices have finish_reason; move the
completion-assembly and the FormatSSE("response.completed", ...) emission out of
the choices.ForEach loop so it's executed once after all choices are
processed—aggregate per-choice outputs into the existing outputsWrapper/arrays
while iterating, then after choices.ForEach returns build the completed JSON
(setting response.output, usage fields, and applyRequestEchoToResponse) and emit
a single `response.completed`; use a finalization condition (e.g., count of
choices or the ForEach completion) to trigger the single emission.
- Around line 632-679: The stream uses separate indexes (choice idx vs
callIndex) for "output_index", causing collisions; add a single global
incremental output-index counter in the conversion state (e.g.,
st.NextOutputIndex or a helper nextOutputIndex() method) and use it whenever you
set "output_index" on SSE items (replace uses of callIndex and the choice idx
when calling sjson.Set(..., "output_index", ...)), also when buffering/appending
final response items ensure you record and use that same output-index to place
items in the final response array so the SSE "output_index" values align with
the positions used during final assembly (update places that set "item_id",
"output_index", and any mapping logic that reads those indices such as where
out/response.output is assembled).
🧹 Nitpick comments (7)
internal/adapter/provider/antigravity/openai_request.go (3)
175-184: Data URI parsing never validates the `base64,` prefix; the hard-coded `[7:]` offset is fragile.

`pieces[1][7:]` assumes that what follows the semicolon is always `base64,` (exactly 7 characters) without checking it explicitly. If the data URI uses a different encoding (e.g. `data:text/plain,Hello` with no base64 segment), or the format deviates slightly, this silently slices the wrong data. The same pattern recurs at lines 226-229 (assistant content handling). ♻️ Suggested fix: validate the prefix

```diff
 if strings.HasPrefix(imageURL, "data:") {
 	pieces := strings.SplitN(imageURL[len("data:"):], ";", 2)
-	if len(pieces) == 2 && len(pieces[1]) > 7 {
+	if len(pieces) == 2 && strings.HasPrefix(pieces[1], "base64,") {
 		mimeType := pieces[0]
-		data := pieces[1][7:]
+		data := pieces[1][len("base64,"):]
```
157-206: When `content` is neither a string nor an array, a user node with empty parts is appended.

If `content` is `null`, a number, or another unexpected type, line 206 still appends `{"role":"user","parts":[]}` to `request.contents`. The downstream Gemini API may reject content nodes with empty parts. ♻️ Suggested fix: verify parts is non-empty before appending

```diff
+	parts := gjson.GetBytes(node, "parts")
+	if parts.IsArray() && len(parts.Array()) > 0 {
 		out, _ = sjson.SetRawBytes(out, "request.contents.-1", node)
+	}
```
408: `itoa` can use `strconv.Itoa` instead of `fmt.Sprintf`.

`strconv.Itoa` is more idiomatic and avoids the formatting overhead of `fmt.Sprintf`. ♻️ Suggested fix

```diff
-func itoa(i int) string { return fmt.Sprintf("%d", i) }
+func itoa(i int) string { return strconv.Itoa(i) }
```

`"strconv"` must be added to the imports.
internal/converter/openai_to_codex.go (4)
24-29: `json.Unmarshal` is used only for validation but `tmp` is never read; `bytes.Clone` is also unnecessary.

`json.Unmarshal(body, &tmp)` only checks that the JSON is well-formed; `tmp` is never used afterwards. `json.Valid(body)` avoids the needless deserialization. Likewise, `gjson.GetBytes` does not mutate the input slice, so `bytes.Clone(body)` is a redundant copy. ♻️ Suggested optimization

```diff
 func (c *openaiToCodexRequest) Transform(body []byte, model string, stream bool) ([]byte, error) {
-	var tmp interface{}
-	if err := json.Unmarshal(body, &tmp); err != nil {
-		return nil, err
+	if !json.Valid(body) {
+		return nil, fmt.Errorf("invalid JSON body")
 	}
-	rawJSON := bytes.Clone(body)
+	rawJSON := body
 	out := `{"instructions":""}`
```
81: Variable `c` shadows the method receiver.

`c := m.Get("content")` shares its name with the method receiver `c *openaiToCodexRequest` declared at line 24. The receiver is unused in this method, so no bug results, but it confuses reading. Consider a name like `content` or `ct`.
251-255: Same `json.Unmarshal` + unused `tmp` issue as in the request transform. Per the suggestion for lines 24-29, `json.Valid(body)` can be used here as well.
60-142: The message conversion covers the main roles and content types, but content parts other than text/image_url are silently dropped.

The content-array loop at lines 94-113 only handles the `"text"` and `"image_url"` types; others such as `"input_audio"` are skipped with no log output. If more content types need support later, consider adding a `default` branch that logs the skipped type for easier troubleshooting. The overall logic (system→developer, tool→function_call_output, assistant tool_calls expansion) is implemented correctly.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
internal/adapter/provider/antigravity/openai_request.go
internal/converter/openai_to_codex.go
🧰 Additional context used
🧬 Code graph analysis (2)
internal/adapter/provider/antigravity/openai_request.go (1)
internal/adapter/provider/antigravity/translator_helpers.go (1)
RenameKey(70-88)
internal/converter/openai_to_codex.go (2)
internal/converter/registry.go (1)
TransformState (11-21)
internal/converter/sse.go (2)
ParseSSE (15-61), FormatSSE (83-106)
🔇 Additional comments (10)
internal/adapter/provider/antigravity/openai_request.go (4)
207-293: The assistant message handling is clear, and historical issues are fixed. The assembly of content parts, tool_calls, and tool responses is correct, and the earlier base64-encoding and double-JSON-encoding problems are resolved. Note that the data URI parsing at lines 225-229 has the same hard-coded `[7:]` issue as the user branch above; fix both together.
298-403: The tools handling is complete, covering the function / google_search / code_execution / url_context types. The fallback path when RenameKey fails correctly clears the original `parameters` field, and stripping the `strict` field matches the protocol-alignment goal. One minor optimization: `tools.Array()` is called twice, once in the condition at line 300 and again in the for-range at line 306, and gjson re-parses on each call; it could be hoisted into a local variable.
410-426: The safety-settings defaulting logic is sound. Default values are attached automatically when the request does not carry its own `safetySettings`, and on failure the code safely falls back to returning the original JSON. The implementation is concise and correct.
19-25: The function signature and initialization logic are sound. `bytes.Clone` prevents mutating the caller's slice, the base envelope is then overwritten with `modelName`, and the unused `_ bool` parameter preserves interface compatibility.
internal/converter/openai_to_codex.go (6)
345-363: The TransformChunk implementation is clear and the nil checks are in place. The nil guard at the entry point, the SSE parsing, and the buffer management are all sound.
396-400: The thread-safety and uniqueness design of `synthesizeResponseID` is sound. Using `atomic.AddUint64` together with `time.Now().UnixNano()` guarantees process-local uniqueness.
402-421: The safe type-assertion pattern is correct and the initialization logic is clear. The comma-ok assertion on `state.Custom` with defensive re-initialization matches the earlier suggestion; well implemented.
832-872: `applyRequestEchoToResponse` is concise and handles field conflicts correctly. Request values are only written back when the response lacks the corresponding field, and the `gjson.Type` switch correctly distinguishes raw JSON types from value types, avoiding double encoding on marshal.
144-179: The merge logic for the `response_format` and `text` fields is clear. The name/strict/schema fields of `json_schema` are extracted correctly, and `text.verbosity` is handled on both code paths (with and without `response_format`).
435-475: On the first chunk, the maps are initialized twice. Lines 406-421 initialize 9 maps (FuncArgsBuf, FuncNames, FuncCallIDs, MsgTextBuf, MsgItemAdded, MsgContentAdded, MsgItemDone, FuncArgsDone, FuncItemDone) when the type assertion fails, and lines 439-450 then rebuild them all under the `!st.Started` condition. Since st.Started starts out false, these 9 maps are needlessly initialized twice on the first run. Consider dropping the second round of map creation from the initialization block, or skipping the `!st.Started` block after a successful first type assertion. Likely an incorrect or invalid review comment.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
- Introduce a global NextOutputIndex counter so messages and function calls get non-colliding output_index values
- Move the response.completed event out of the choices.ForEach loop to prevent duplicate emission with multiple choices
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@internal/converter/openai_to_codex.go`:
- Around line 279-286: The code only reads reasoning_content from the first
choice using gjson.GetBytes(body, "choices.0.message.reasoning_content") which
drops reasoning from additional choices; change the logic to iterate over all
choices (the same iteration pattern used later in the file around the ForEach
handling of message and tool_calls) and for each choice that has
message.reasoning_content append a reasoning output into outputsWrapper (use the
same sjson.SetRaw pattern and generate a unique id per choice, e.g., using
respID plus the choice index) so that every choice's reasoning_content is
converted to a reasoning entry rather than only choices.0.
- Around line 829-843: The loop building function_call outputs currently
iterates sortedKeys(st.FuncArgsBuf) which can include entries without call IDs;
change it to iterate sortedKeys(st.FuncCallIDs) so only calls with IDs are
emitted, and when populating arguments use a safe lookup into st.FuncArgsBuf
(e.g., check if b := st.FuncArgsBuf[i]; b != nil before using b.String()) to
avoid empty call_id/id values; update the loop around sortedKeys(...) in the
block that sets id/arguments/call_id/name (uses sjson.Set/sjson.SetRaw and
variables callID, name, args) to reflect this change.
🧹 Nitpick comments (2)
internal/converter/openai_to_codex.go (2)
25-29: `bytes.Clone` after `json.Unmarshal` validation is redundant: `body` is never modified.

Lines 25-28 use `json.Unmarshal` as a validity check (reasonable), but `bytes.Clone(body)` at line 29 is unnecessary: `gjson.GetBytes` does not mutate the underlying slice, and `body` is never written to inside the function. `gjson.GetBytes` can operate on `body` directly, saving a full copy. ♻️ Suggested simplification

```diff
-	rawJSON := bytes.Clone(body)
 	out := `{"instructions":""}`
```

All subsequent `gjson.GetBytes(rawJSON, ...)` calls then become `gjson.GetBytes(body, ...)`.
81: Variable `c` shadows the method receiver. `c := m.Get("content")` at line 81 shares its name with the receiver `(c *openaiToCodexRequest)`. This is legal Go and causes no bug today (the receiver is not used afterwards), but it reduces readability. Consider a name like `contentVal` or `cnt`.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/converter/openai_to_codex.go
🧰 Additional context used
🧬 Code graph analysis (1)
internal/converter/openai_to_codex.go (2)
internal/converter/registry.go (1)
TransformState (11-21)
internal/converter/sse.go (2)
ParseSSE (15-61), FormatSSE (83-106)
🔇 Additional comments (6)
internal/converter/openai_to_codex.go (6)
1-22: LGTM! The imports and type declarations are tidy, and the `init()` registration pattern is clear.
345-363: LGTM! The `state == nil` check at the TransformChunk entry point is in place, and the SSE parsing and event-dispatch logic is clear.
788-866: The `response.completed` logic is well structured, with sound guards and usage fallback. The `CompletedSent` duplicate-send guard, the secondary `hasFinish` check, and the `total_tokens` fallback computation (lines 855-858) are all correct. Moving response.completed out of `choices.ForEach` also fixes the earlier duplicate-emission problem.
871-878: LGTM! The generic `sortedKeys` replaces the previously duplicated bubble sorts; using `sort.Ints` is concise and efficient.
880-920: LGTM! The path list in `applyRequestEchoToResponse` is clear, skipping fields that already exist prevents overwrites, and the type dispatch is correct.
603-616: Reasoning and message share the same `output_index`; the design intent needs verification. At line 606, `st.ReasoningIndex = st.msgOutIdx(idx)` assigns the reasoning item the same `output_index` as the message of the same choice. In the streaming protocol, reasoning and message are two separate `response.output_item`s, yet both carry the same `output_index`. Per commit 6c1a3d7, the global `NextOutputIndex` counter was introduced to "assign non-colliding output_index values to messages and function calls", but reasoning was explicitly left out, with its output_index allocated by `msgOutIdx()`, which suggests this may be intentional. Still, please confirm whether the Codex protocol expects the reasoning and message of one choice to share an output_index or to have independent ones; if the latter, reasoning should get its own allocation.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
…iterate over keys
- Change reasoning_content reading from hard-coded choices.0 to a ForEach loop, supporting multiple choices
- Switch the function-call output loop in response.completed from FuncArgsBuf to FuncCallIDs to avoid invalid entries
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@internal/converter/openai_to_codex.go`:
- Around line 285-291: The reasoning IDs are generated inconsistently: the
non-stream path builds the ID using strings.TrimPrefix(respID, "resp_") while
the stream path uses st.ResponseID directly, causing mismatched IDs; update the
non-stream logic in openai_to_codex.go (the block that checks
msg.Get("reasoning_content"), builds reasoning and sets "id") to stop trimming
the "resp_" prefix and use respID as-is (so the id format matches the stream
path like rs_resp_...); keep the rest of the fields and the outputsWrapper
SetRaw usage unchanged.
- Around line 604-607: The reasoning output is incorrectly sharing the same
output_index as the message because st.ReasoningIndex is set via
st.msgOutIdx(idx); change assignment so reasoning gets its own slot by using the
NextOutputIndex allocator instead (i.e., when detecting delta
"reasoning_content" and creating st.ReasoningID, set st.ReasoningIndex =
st.NextOutputIndex() rather than calling st.msgOutIdx(idx)) and ensure any
related bookkeeping that NextOutputIndex updates is used so reasoning and
message entries have distinct output_index values.
🧹 Nitpick comments (3)
internal/converter/openai_to_codex.go (3)
81: Variable `c` shadows the method receiver.

`c := m.Get("content")` shares its name with the method receiver `(c *openaiToCodexRequest)` declared at line 24. The receiver is unused here, but a name like `ct` or `content` would avoid confusion.
25-28: `json.Unmarshal` is used only for validation; `tmp` is never read. In lines 25-28 (and 252-254), `var tmp interface{}` exists solely to validate the JSON; the deserialized result is never read. `json.Valid(body)` avoids the unnecessary allocation. ♻️ Suggested fix

```diff
-	var tmp interface{}
-	if err := json.Unmarshal(body, &tmp); err != nil {
-		return nil, err
+	if !json.Valid(body) {
+		return nil, fmt.Errorf("invalid JSON input")
 	}
```
789-801: The `choices` array is traversed a second time just to check `finish_reason`; a flag would do. The first `choices.ForEach` pass at line 715 already handles `finish_reason`, yet lines 793-801 walk the whole array again only to determine whether any finish_reason exists. Setting a `hasFinish` flag during the first pass avoids the second traversal.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/converter/openai_to_codex.go
🧰 Additional context used
🧬 Code graph analysis (1)
internal/converter/openai_to_codex.go (2)
internal/converter/registry.go (1)
TransformState (11-21)
internal/converter/sse.go (2)
ParseSSE (15-61), FormatSSE (83-106)
🔇 Additional comments (6)
internal/converter/openai_to_codex.go (6)
17-18: The registration and initialization logic is clear. The OpenAI→Codex converter pair is registered in `init()`, with clearly structured types.
346-364: The `TransformChunk` entry point now has a nil guard and the logic is correct. The SSE parsing, buffer concatenation, and per-event conversion flow is clear.
427-448: The safe type-assertion pattern is implemented correctly. The comma-ok pattern avoids a panic when `state.Custom` holds a mismatched type, consistent with the defensive pattern in `codex_to_openai.go`.
830-845: The function-call loop in `response.completed` now correctly uses `FuncCallIDs` instead of `FuncArgsBuf`. This follows the earlier suggestion and ensures only function-call items with a call ID are emitted.
872-879: The generic `sortedKeys` helper is concise and removes the duplicated bubble-sort code. Using `sort.Ints` instead of manual sorting brings the complexity down from O(n²) to O(n log n).
881-921: `applyRequestEchoToResponse` is complete and only writes when the target field is absent. The field list covers the main Codex request parameters, and the `prefix` parameter is well designed, supporting different nesting levels.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
- Drop TrimPrefix("resp_") on the non-stream path, aligning its ID format with the stream path
- Allocate reasoning its own index via NextOutputIndex, avoiding the msgOutIdx collision with message
Summary
Test plan
`pnpm run lint` and `pnpm run build` pass
Summary by CodeRabbit
Release Notes
New Features
Fixes
Refactoring
Tests
User Interface