Description
I’ve noticed that the engine implementations for different AI providers (like OpenAI, Anthropic, etc.) are being maintained manually in this repo.
This makes it cumbersome to support each third-party API individually. For example:
- Handling the API response from OpenAI manually (`opencommit/src/engine/openAi.ts`, lines 58 to 62 in `c1756b8`):

  ```ts
  const completion = await this.client.chat.completions.create(params);
  const message = completion.choices[0].message;
  let content = message?.content;
  return removeContentTags(content, 'think');
  ```

- Dealing with Anthropic’s error response shapes (`opencommit/src/engine/anthropic.ts`, lines 64 to 74 in `c1756b8`):

  ```ts
  if (
    axios.isAxiosError<{ error?: { message: string } }>(error) &&
    error.response?.status === 401
  ) {
    const anthropicAiError = error.response.data.error;
    if (anthropicAiError?.message) outro(anthropicAiError.message);
    outro(
      'For help look into README https://github.com/di-sukharev/opencommit#setup'
    );
  }
  ```
This approach makes it harder to keep up with rapidly evolving APIs and newly released models, and the per-provider boilerplate becomes difficult to maintain at scale.
Suggested Solution
Instead of manually handling each engine and response type, I’d recommend offloading this work to a well-maintained abstraction layer.
AI SDK Core by Vercel is an excellent candidate. It provides:
- A unified interface for calling different LLM providers
- Built-in support for OpenAI, Anthropic, Mistral, Cohere, Fireworks, Groq, etc.
- Stream handling, errors, and provider-specific quirks abstracted away
- Plug-and-play architecture for future provider support
By integrating AI SDK Core, opencommit could reduce technical debt and improve forward compatibility while keeping provider flexibility.
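For illustration, here is a minimal sketch of what a unified engine could look like on top of AI SDK Core. The `generateText` call and the `@ai-sdk/openai` / `@ai-sdk/anthropic` provider packages are the SDK's public API; the `UnifiedEngine` class and its config shape are hypothetical, not opencommit's actual types:

```ts
import { generateText, type LanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// Hypothetical config; field names are illustrative only.
interface UnifiedEngineConfig {
  provider: 'openai' | 'anthropic';
  model: string;
}

// One engine for every provider: the SDK normalizes request/response
// shapes and errors, so no per-provider response parsing is needed.
class UnifiedEngine {
  private model: LanguageModel;

  constructor(config: UnifiedEngineConfig) {
    this.model =
      config.provider === 'openai'
        ? openai(config.model)
        : anthropic(config.model);
  }

  async generateCommitMessage(diff: string): Promise<string> {
    const { text } = await generateText({
      model: this.model,
      messages: [
        { role: 'system', content: 'Write a conventional commit message.' },
        { role: 'user', content: diff },
      ],
    });
    return text;
  }
}
```

Supporting a new provider would then be roughly a one-line change in the model selection (e.g. `mistral(config.model)` from `@ai-sdk/mistral`) rather than a new engine file with its own response and error handling.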
Let me know your thoughts — happy to help if needed!
Alternatives
No response
Additional Context
No response