
feat: Add vault context system for AI responses with file selection UI#252

Open
SKYLXN wants to merge 2 commits into eharris128:main from SKYLXN:main

Conversation


@SKYLXN SKYLXN commented Dec 3, 2025

Add Vault Context System for AI Responses

Overview

This PR adds a comprehensive context system that allows AI models to access Obsidian vault files when generating responses, enabling more contextual and relevant interactions.

Problem Solved

Previously, the AI had no access to vault files, limiting its ability to provide context-aware responses based on your notes. This feature addresses the core request: "The AI doesn't have access to Obsidian's files; the files need to be added for context!"

Features Added

🎯 Automatic Context Injection

  • Active File: Automatically include the currently open file
  • Selected Text: Include any text selected in the editor
  • Smart Defaults: Both enabled by default for a seamless experience

📁 Manual File Selection

  • File Selector UI: Searchable modal to select specific files from your vault
  • Multi-select: Choose multiple files to include in context
  • File Management: Easily add/remove selected files
  • Visual Feedback: See selected file count and paths

⚙️ Token Budget Management

  • Percentage-based allocation: Configure how tokens are split (default 70% context, 30% response)
  • Intelligent truncation: Context automatically truncated to fit budget
  • Token estimation: Uses 1 token ≈ 4 characters heuristic
  • Increased defaults: Max tokens raised from 300 to 8192 for better responses
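The budgeting described above can be sketched as follows. The function names mirror those listed later in this PR, but the bodies are illustrative assumptions based on the stated 1 token ≈ 4 characters heuristic and percentage split, not the exact source:

```typescript
/** Rough token count using the 1 token ≈ 4 characters heuristic. */
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

/** Split a total token budget between context and response. */
function calculateContextTokenBudget(
  maxTokens: number,
  contextPercent: number // e.g. 70 means 70% of the budget goes to context
): { contextTokens: number; responseTokens: number } {
  const contextTokens = Math.floor((maxTokens * contextPercent) / 100);
  return { contextTokens, responseTokens: maxTokens - contextTokens };
}

/** Truncate text so its estimated token count fits the budget. */
function truncateToTokenLimit(text: string, tokenBudget: number): string {
  const maxChars = tokenBudget * 4; // inverse of the estimation heuristic
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```

With the new 8192-token default and a 70% context share, this split would allocate 5734 tokens to context and 2458 to the response.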

💾 Context Persistence

  • History tracking: Context saved in conversation history
  • VaultContext type: Structured storage of active file, selection, and additional files
  • Replay conversations: Context preserved when viewing history

📝 Structured Format

Context is sent to AI as clean, structured markdown:

# Vault Context

## Active File: filename.md
Path: `path/to/file.md`
[file content]

## Selected Text
[selected text]

## Additional Files
### file1.md
Path: `path/to/file1.md`
[content]

New Components

ContextBuilder Service (src/services/ContextBuilder.ts)

  • buildContext(): Collects vault context based on settings
  • formatStructuredContext(): Formats context as structured markdown
  • truncateToTokenLimit(): Intelligently truncates to fit token budget
  • buildFormattedContext(): Complete pipeline from settings to formatted context
  • calculateContextTokenBudget(): Computes token allocation
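A minimal sketch of how `formatStructuredContext()` could produce the markdown layout shown earlier. The `VaultContext` shape matches the type listed in this PR; the function body itself is an assumption:

```typescript
type VaultContext = {
  activeFile?: { path: string; name: string; content: string };
  selectedText?: string;
  additionalFiles: { path: string; name: string; content: string }[];
};

/** Render a VaultContext as the structured markdown sent to the model. */
function formatStructuredContext(ctx: VaultContext): string {
  const parts: string[] = ["# Vault Context"];
  if (ctx.activeFile) {
    parts.push(
      `## Active File: ${ctx.activeFile.name}`,
      `Path: \`${ctx.activeFile.path}\``,
      ctx.activeFile.content
    );
  }
  if (ctx.selectedText) {
    parts.push("## Selected Text", ctx.selectedText);
  }
  if (ctx.additionalFiles.length > 0) {
    parts.push("## Additional Files");
    for (const f of ctx.additionalFiles) {
      parts.push(`### ${f.name}`, `Path: \`${f.path}\``, f.content);
    }
  }
  return parts.join("\n\n");
}
```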

FileSelector Modal (src/Plugin/Components/FileSelector.ts)

  • Searchable file list with filter by name/path
  • Checkbox selection for multiple files
  • Selected file count display
  • Confirm/Cancel workflow
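The name/path filtering behind the search box could look like the sketch below. This is hypothetical; the real modal would operate on Obsidian `TFile` objects rather than plain records:

```typescript
type VaultFile = { name: string; path: string };

/** Case-insensitive filter matching either the file name or its path. */
function filterFiles(files: VaultFile[], query: string): VaultFile[] {
  const q = query.toLowerCase();
  return files.filter(
    (f) => f.name.toLowerCase().includes(q) || f.path.toLowerCase().includes(q)
  );
}
```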

Context Settings UI

Added to SettingsContainer.ts:

  • Toggle: Include active file
  • Toggle: Include selected text
  • Input: Context token budget percentage (0-100)
  • Button: Open file selector
  • Display: List of selected files with remove buttons

AI Model Updates

New Gemini Models Supported (10 models added)

  • gemini-2.0-flash-exp - Experimental Flash 2.0
  • gemini-2.0-flash-thinking-exp-1219 - Thinking mode
  • gemini-2.0-flash - Stable Flash 2.0
  • gemini-2.0-flash-lite - Lightweight version
  • gemini-2.5-pro - Pro tier 2.5
  • gemini-2.5-flash - Flash 2.5
  • gemini-2.5-flash-lite - Lite 2.5
  • gemini-3-pro-preview - Preview of Gemini 3
  • gemini-flash-latest - Latest Flash
  • gemini-flash-lite-latest - Latest Lite

Improved Streaming Responses

  • Thinking animation: Shows "Thinking..." with pulsing dots before first response chunk
  • Fixed Gemini streaming: Properly clears animation on first chunk
  • Fixed Claude streaming: Properly clears animation on first text
  • Better UX: Users see activity instead of blank screen during generation

Technical Changes

Type System Extensions

types.ts:

type ContextSettings = {
  includeActiveFile: boolean;
  includeSelection: boolean;
  selectedFiles: string[];
  maxContextTokensPercent: number;
};

type VaultContext = {
  activeFile?: { path: string; name: string; content: string };
  selectedText?: string;
  additionalFiles: { path: string; name: string; content: string }[];
};

// Extended existing types:
// ViewSettings now includes: contextSettings: ContextSettings
// ChatHistoryItem now includes: vaultContext?: VaultContext

Integration Flow

  1. User clicks send button in ChatContainer.handleGenerateClick()
  2. Context built using ContextBuilder.buildFormattedContext()
  3. Context injected as first user message with structured format
  4. User's actual prompt sent as second user message
  5. AI receives both context and prompt
  6. Context stored in history via historyPush(params, vaultContext)
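Steps 3 to 5 of the flow above can be sketched as message assembly. The message shape is a generic assumption, not the plugin's actual type:

```typescript
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

/** Inject formatted context as the first user message, then the prompt. */
function buildMessages(formattedContext: string, prompt: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (formattedContext.length > 0) {
    messages.push({ role: "user", content: formattedContext });
  }
  messages.push({ role: "user", content: prompt });
  return messages;
}
```

Sending the context as a separate leading user message keeps the user's actual prompt clean and lets the history store them independently.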

Settings Management

  • Context settings added to defaultSettings in main.ts
  • Applied to all three views: modal, widget, fab
  • Saved/loaded through existing settings infrastructure
  • Per-view configuration (each view has independent context settings)

Files Modified

Core Files (5)

  • src/main.ts - Added contextSettings to defaultSettings, Gemini model validation
  • src/Types/types.ts - Added ContextSettings and VaultContext types
  • src/utils/utils.ts - Fixed getViewInfo to include contextSettings
  • src/utils/constants.ts - Added 10 new Gemini model constants
  • src/utils/models.ts - Added model definitions for new Gemini models

Component Files (2)

  • src/Plugin/Components/ChatContainer.ts - Context building and injection logic
  • src/Plugin/Components/SettingsContainer.ts - Context settings UI

New Files (2)

  • src/services/ContextBuilder.ts - Context service
  • src/Plugin/Components/FileSelector.ts - File selection modal

Styling (1)

  • styles.css - Added thinking animation CSS

Dependencies (1)

  • package-lock.json - Version bump to 0.19.19

Testing Recommendations

Basic Context Testing

  1. Open a note in Obsidian
  2. Open the plugin (modal/widget/fab)
  3. Send a message asking about the current file
  4. Verify AI references file content

Selected Text Testing

  1. Select text in editor
  2. Ask AI about the selection
  3. Verify AI has access to selected text

Manual File Selection

  1. Open plugin settings
  2. Click "Select Files" button
  3. Search and select files
  4. Ask AI about selected files
  5. Verify AI can reference multiple files

Token Budget Testing

  1. Set context budget to 10%
  2. Include large file
  3. Verify context is truncated
  4. Change to 90%
  5. Verify more context included

History Persistence

  1. Include files in conversation
  2. Switch to different view/conversation
  3. Return to original conversation via history
  4. Verify context still present

Breaking Changes

None - all changes are additive and backward compatible.

Migration Notes

  • Existing settings will be migrated automatically with default context settings
  • No user action required
  • Old conversation history will continue to work (a missing vaultContext field is handled gracefully)

Performance Considerations

  • File reading is async and non-blocking
  • Context built only when needed (on message send)
  • Token truncation is character-based (fast)
  • File selector uses virtual scrolling for large vaults

Future Enhancements

  • Smart file selection based on content similarity
  • Include linked notes automatically
  • Support for binary files (PDFs, images with OCR)
  • Context compression for better token efficiency
  • Per-message context override
  • Context templates/presets

Credits

Developed in response to user request for vault file access in AI conversations.

Related Issues

Closes: (insert issue number if applicable)


Ready for review! Please test with various vault sizes and file types.

This PR adds a comprehensive context system that allows the AI to access Obsidian vault files when generating responses.

Features Added:
- Vault context injection: AI can now access file content from your vault
- Automatic context: Include active file and selected text automatically
- Manual file selection: UI to select specific files to include in context
- Token budget management: Percentage-based allocation (default 70% context, 30% response)
- Context persistence: Selected files and context saved in conversation history
- Structured format: Context sent as formatted markdown with clear sections

New Components:
- ContextBuilder service: Handles context collection, formatting, and truncation
- FileSelector modal: Searchable UI for selecting vault files
- Context Settings UI: Configure context options per view (modal/widget/fab)

AI Model Updates:
- Added support for 10 new Gemini models (2.0, 2.5, 3.0 series)
- Fixed streaming response handling for Gemini and Claude
- Added 'thinking' animation during AI generation
- Improved token limit defaults (300 -> 8192 for better responses)

Technical Changes:
- Extended ViewSettings and ChatHistoryItem types with context support
- Context injected as first user message in conversation
- Truncation uses 1 token ≈ 4 chars estimation
- All file types supported (including images via base64)

This feature enables more contextual AI responses by providing relevant vault content automatically or through manual selection.
@eharris128
Owner

Hi there - I am so sorry for the massive delay.

I must have missed the notification for this pull request.

I will give it a test tomorrow & look to get a release out ASAP if everything looks good.

Thank you so much for the contribution @SKYLXN

@eharris128
Owner

Thank you so much for this pull request.

I really like the initial proof of concept.

I am going to rope in the designer & product owner for this project @jsmorabito so he can give it a pass through.

One technical bit I cleaned up in my local testing of the branch was using 4096 as the default max token count, because GPT-3.5 Turbo breaks if the max token count is higher than this number.

Likely the further extension of this idea is to default the max token count to a number that fits the respective model...


@SKYLXN
Author

SKYLXN commented Jan 12, 2026

Hi @eharris128! Thanks for the feedback and for testing it out. No worries at all about the delay!

I agree that implementing dynamic token defaults based on the selected model would be the ideal next step to fully leverage models with larger context windows without breaking older ones.

I'll wait for @jsmorabito's feedback on the UI/UX side. Happy to make adjustments if needed :)

@jsmorabito
Collaborator

Hey @SKYLXN, thank you so much for this PR. I've tested it out and the functionality is very exciting to see! I have only one requested change prior to merging: please add a toggle to our plugin settings menu to enable/disable the file context feature, and set it to disabled by default (below the "Toggle FAB" setting is a fine location). I want users to be able to try it out, but there are a number of UX/UI improvements I'll want to make before we have it enabled all the time as part of our plugin's core experience.


- Add enableFileContext boolean to plugin settings (disabled by default)
- Add 'Enable File Context' toggle in settings UI below 'Toggle FAB'
- Update ChatContainer to check if feature is enabled before building context
- Addresses feedback from @jsmorabito to make feature opt-in
@SKYLXN
Author

SKYLXN commented Feb 9, 2026

Hey @jsmorabito! Thanks for the feedback and for testing the PR. I completely understand the need for UX/UI refinements before making this a core feature.

I've implemented the requested change. The implementation includes:

  • New enableFileContext boolean property in plugin settings (defaults to false)
  • Toggle UI in the main settings menu with clear description
  • Updated context building logic in ChatContainer to respect the toggle state
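The opt-in gate amounts to a simple check before context building. This is a minimal sketch assuming the setting name from the commit message; the actual ChatContainer wiring differs:

```typescript
type PluginSettings = { enableFileContext: boolean };

/** Context is only built when the user has explicitly opted in. */
function shouldBuildContext(settings: PluginSettings): boolean {
  // enableFileContext defaults to false, so the feature is off until toggled.
  return settings.enableFileContext === true;
}
```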

Let me know if you'd like any adjustments!
