
Feat/rag embeddings providers #224

Merged

veithly merged 9 commits into XSpoonAi:main from helloissariel:feat/rag-embeddings-providers on Dec 19, 2025
Conversation

@helloissariel
Contributor

No description provided.

veithly and others added 9 commits December 18, 2025 18:41
RAG embeddings selection is now configurable via env with first-class support for
OpenAI/OpenRouter/Gemini/Ollama and a custom OpenAI-compatible endpoint.
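The env-driven provider selection described in this commit might look roughly like the sketch below. The variable names `EMBEDDINGS_PROVIDER` and `EMBEDDINGS_BASE_URL` and the default base URLs are illustrative assumptions, not names confirmed by this PR page.

```python
import os

# Hypothetical defaults; the PR's actual env variable names and
# per-provider base URLs may differ.
DEFAULT_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "openrouter": "https://openrouter.ai/api/v1",
    "gemini": "https://generativelanguage.googleapis.com/v1beta",
    "ollama": "http://localhost:11434",
}

def resolve_embeddings_provider() -> dict:
    """Pick the embeddings backend from env, defaulting to OpenAI.

    A custom OpenAI-compatible endpoint is supported by overriding
    the base URL explicitly.
    """
    provider = os.getenv("EMBEDDINGS_PROVIDER", "openai").lower()
    base_url = os.getenv("EMBEDDINGS_BASE_URL") or DEFAULT_BASE_URLS.get(provider)
    if base_url is None:
        raise ValueError(f"Unknown embeddings provider: {provider}")
    return {"provider": provider, "base_url": base_url}
```

The explicit `EMBEDDINGS_BASE_URL` override is what makes any OpenAI-compatible endpoint usable without adding a new named provider.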
Improve URL ingestion by converting GitHub web (blob) URLs to raw content URLs
and add a smoke script to validate ingestion works as expected.
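The blob-to-raw conversion is a simple URL rewrite; a minimal sketch is shown below. The helper name and any edge-case handling in the actual PR are assumptions.

```python
def github_blob_to_raw(url: str) -> str:
    """Convert a GitHub web (blob) URL to its raw content URL.

    https://github.com/{owner}/{repo}/blob/{ref}/{path}
    becomes
    https://raw.githubusercontent.com/{owner}/{repo}/{ref}/{path}
    """
    prefix = "https://github.com/"
    if url.startswith(prefix) and "/blob/" in url:
        owner_repo, path = url[len(prefix):].split("/blob/", 1)
        return f"https://raw.githubusercontent.com/{owner_repo}/{path}"
    return url  # leave non-blob URLs untouched
```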
Add a local Ollama LLM provider and include default configuration values plus
an .env example entry for OLLAMA_BASE_URL.
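A sketch of how the `OLLAMA_BASE_URL` default mentioned above could be consumed; the model default and function name are hypothetical.

```python
import os

# Common Ollama default; the PR's .env example entry sets OLLAMA_BASE_URL.
OLLAMA_DEFAULT_BASE_URL = "http://localhost:11434"

def ollama_config() -> dict:
    """Return Ollama connection settings, preferring OLLAMA_BASE_URL."""
    return {
        "base_url": os.getenv("OLLAMA_BASE_URL", OLLAMA_DEFAULT_BASE_URL),
        "model": os.getenv("OLLAMA_MODEL", "llama3"),  # illustrative default
    }
```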
Ensure OpenAI-compatible providers fall back to their default base_url when a
config explicitly passes None.
Eliminate obsolete retrieval classes and modules including ChromaClient, QdrantClient, and associated document handling logic to streamline the retrieval package for SpoonAI.
Improve the ConfigurationManager by adding deduplication and filtering logic for fallback chains and configured providers. Update environment variable handling to support both LLM_PROVIDER and DEFAULT_LLM_PROVIDER. Ensure only valid providers with proper API keys are considered configured, streamlining the provider selection process.
@chatgpt-codex-connector

The account who enabled Codex for this repo no longer has access to Codex. Please contact the admins of this repo to enable Codex again.

@veithly veithly merged commit 2a0f021 into XSpoonAi:main Dec 19, 2025
1 check passed
