
Feat/rag embeddings providers #222

Merged
veithly merged 6 commits into XSpoonAi:main from veithly:feat/rag-embeddings-providers
Dec 19, 2025
Merged

Feat/rag embeddings providers#222
veithly merged 6 commits intoXSpoonAi:mainfrom
veithly:feat/rag-embeddings-providers

Conversation

veithly (Collaborator) commented Dec 18, 2025

No description provided.

RAG embeddings selection is now configurable via env with first-class support for
OpenAI/OpenRouter/Gemini/Ollama and a custom OpenAI-compatible endpoint.
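A minimal sketch of what env-driven provider selection might look like; the variable name `EMBEDDINGS_PROVIDER`, the supported set, and the default are assumptions for illustration, not necessarily the identifiers used in this PR:

```python
import os

# Assumed provider set and env variable name; the PR's actual identifiers may differ.
SUPPORTED_PROVIDERS = {"openai", "openrouter", "gemini", "ollama", "custom"}
DEFAULT_PROVIDER = "openai"

def select_embeddings_provider() -> str:
    """Read the embeddings provider from the environment, falling back to a default."""
    provider = os.getenv("EMBEDDINGS_PROVIDER", DEFAULT_PROVIDER).lower()
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported embeddings provider: {provider!r}")
    return provider
```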
Improve URL ingestion by converting GitHub web (blob) URLs to raw content URLs
and add a smoke script to validate ingestion works as expected.
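The blob-to-raw rewrite is a straightforward URL transform: `github.com/{owner}/{repo}/blob/{ref}/{path}` maps to `raw.githubusercontent.com/{owner}/{repo}/{ref}/{path}`. A standalone sketch, not the PR's actual helper:

```python
import re

def to_raw_github_url(url: str) -> str:
    """Convert a GitHub web (blob) URL to its raw-content equivalent.

    https://github.com/{owner}/{repo}/blob/{ref}/{path}
      -> https://raw.githubusercontent.com/{owner}/{repo}/{ref}/{path}

    Non-GitHub and non-blob URLs are returned unchanged.
    """
    m = re.match(r"https://github\.com/([^/]+)/([^/]+)/blob/(.+)", url)
    if not m:
        return url
    owner, repo, rest = m.groups()
    return f"https://raw.githubusercontent.com/{owner}/{repo}/{rest}"
```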
Add a local Ollama LLM provider and include default configuration values plus
an .env example entry for OLLAMA_BASE_URL.
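The .env example entry mentioned here would look something like the following; `http://localhost:11434` is Ollama's standard default address, though the value shipped in the PR may differ:

```shell
# Base URL for the local Ollama server (Ollama's standard default port)
OLLAMA_BASE_URL=http://localhost:11434
```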
Ensure OpenAI-compatible providers fall back to their default base_url when a
config explicitly passes None.
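The fallback behavior described here amounts to treating an explicit `None` (or empty string) the same as an unset value. A hedged sketch; `DEFAULT_BASE_URL` and the function name are illustrative, and the real default depends on the provider:

```python
from typing import Optional

# Illustrative default; each OpenAI-compatible provider has its own.
DEFAULT_BASE_URL = "https://api.openai.com/v1"

def resolve_base_url(configured: Optional[str]) -> str:
    """Return the configured base_url, or the provider default when config passes None/empty."""
    return configured if configured else DEFAULT_BASE_URL
```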
@chatgpt-codex-connector

The account who enabled Codex for this repo no longer has access to Codex. Please contact the admins of this repo to enable Codex again.

Eliminate obsolete retrieval classes and modules, including ChromaClient, QdrantClient, and the associated document-handling logic, to streamline the retrieval package for SpoonAI.
Improve the ConfigurationManager by adding deduplication and filtering logic for fallback chains and configured providers. Update environment variable handling to support both LLM_PROVIDER and DEFAULT_LLM_PROVIDER. Ensure only valid providers with proper API keys are considered configured, streamlining the provider selection process.
@veithly veithly merged commit 4b7866f into XSpoonAi:main Dec 19, 2025
1 check passed