Releases: logancyang/obsidian-copilot
3.2.1
Copilot for Obsidian - Release v3.2.1
A patch release with search improvements and bug fixes.
- Improved vault search: Better tag matching with hierarchical support (e.g. searching `#project` also matches `#project/alpha`) and a cleaner, faster search pipeline.
- New in-chat indexing progress: Indexing progress now shows as a card inside Copilot Chat with a progress bar and pause/resume/stop controls, instead of a popup notice. No more phantom re-indexing on mode switch.
Bug Fixes
- #2176 Fix ENAMETOOLONG error when Composer creates files with long names @logancyang
- #2174 Fix insert/replace at cursor accidentally including agent reasoning blocks @logancyang
- #2173 Fix phantom re-indexing on mode switch @logancyang
- #2172 Fix search recall for tag queries and short terms @logancyang
Troubleshoot
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issues you see in the GitHub issues!
3.2.0
Copilot for Obsidian - Release v3.2.0 💪
The first version of Self-Host Mode is finally here! Simply toggle it on at the bottom of the Plus settings, and your reliance on the Copilot Plus backend is gone (Believer tier required)!
The built-in model list has also been updated; click "Refresh Built-in Models" above the model settings table to see the new models!
- 🚀 Autonomous Agent Evolution — The agent experience gets a major upgrade this release!
- ✨ New reasoning block: The new reasoning block replaces the old tool call banners for a cleaner and smoother UI in agent mode!
- 🔧 Native tool calling: We moved to native tool calling from the XML-based approach for a more reliable tool call experience. Nowadays more and more models support native tool calling, even local models!
- Brand new Quick Command and Editor "Quick Ask" Floating Panel! Select text in the editor and get an inline AI floating panel for quick questions — with persistent selection highlights so you never lose your place! (@Emt-lin)
- Twitter/X thread processing: Mention a tweet thread URL in chat and Copilot will fetch the entire thread! (@logancyang)
- Modular context compaction architecture — a cleaner, more extensible design for how Copilot manages long contexts. (@logancyang)
- LM Studio and Ollama reasoning/thinking token support — thinking models in LM Studio and Ollama now display reasoning output properly. (@logancyang)
- Major search improvements: better recall with note-diverse top-K scoring, and a new "Build Index" button replacing the warning triangle in Relevant Notes for a clearer UX. (@logancyang)
In the next iterations, self-host mode will let you configure your own web search and YouTube services, and integrate with our new standalone desktop app for more powerful features. Stay tuned!
👨‍💻 Known Limitations: Agent mode performance varies by model. Recommended models: Gemini Pro/Flash (copilot-plus-flash), Claude 4.5+ models, GPT-5 and GPT-5 mini, Grok 4 and Grok 4 Fast. Many OpenRouter open-source models work too, but performance can vary a lot.
More details in the changelog:
Improvements
- #2139 Add Editor "Quick Ask" Floating Panel with Persistent Selection Highlights @Emt-lin
- #2146 Address quick ask refinements @logancyang
- #2149 Agent UI/UX Improvements @logancyang
- #2123 Migrate to native tool call in Plus and Agent modes @logancyang
- #2159 Implement modular context compaction architecture @logancyang
- #2155 Miyo Integration Phase 1: abstract semantic index backend @wenzhengjiang
- #2161 Add twitter4llm support for Twitter/X URL processing @logancyang
- #2151 Add reasoning/thinking token support for LM Studio @logancyang
- #2141 Add PatternListEditor component for include/exclude settings @Emt-lin
- #2164 Audit context envelope, tag alignment, artifact dedup, and logging @logancyang
- #2166 Update builtin models to latest versions across all providers @logancyang
- #2167 Remove HyDE query rewriting from HybridRetriever @logancyang
- #2168 Replace warning triangle with Build Index button in Relevant Notes @logancyang
- #2147 Update Ollama support @logancyang
- Show Self-Host Mode section to all users with disabled toggle for non-lifetime @logancyang
Bug Fixes
- #2117 Fix: increase grep limit for larger vaults and unify chunking @logancyang
- #2137 Fix: prevent arrow keys from getting stuck in typeahead with no matches @zeroliu
- #2140 Fix: GitHub Copilot mobile CORS bypass and auth UX improvements @Emt-lin
- #2153 Fix LM Studio chat with only ending think tag @logancyang
- #2157 Fix: improve mobile keyboard/navbar CSS scoping and platform detection @Emt-lin
- #2160 Fix: remove tiktoken remote fetch from critical LLM path @logancyang
- #2165 Fix search recall with note-diverse top-K and chunk-aware scoring @logancyang
Troubleshoot
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
3.1.5
Copilot for Obsidian - Release v3.1.5 🔥
Our first release in 2026 has some long-awaited upgrades!
- Copilot can now read web tabs in Obsidian!! 🚀 With the new builtin YouTube and web clipper slash commands (use the "generate default" button under the Commands settings tab), you can get beautiful clips with a mindmap from just one prompt! 🤯
- We now have a new custom system prompt system where every system prompt is stored as a markdown file. You can add and switch custom system prompts in the Advanced settings tab, or just above the chat input via the new gear icon!
- As requested, we now have a new side-by-side diff view for composer edits! You can toggle between the inline diff view and side-by-side when a diff is displayed.
- New auto-compact kicks in when the attached context is too long and overflows your model's context window. You can set the token threshold; the default is 128k tokens. If you want it to be less aggressive, set it to 1M tokens.
- OpenRouter embedding models are supported! You can simply add them using the OpenRouter provider in the embedding model table.
There are a lot more upgrades, including a significant improvement in index-free search, better sorting of chat history and projects, composer auto-accept toggle in the chat input menu (the 3 dots), a new LLM provider "GitHub Copilot", etc. Huge shoutout to @Emt-lin for the significant contributions!
More details in the changelog:
Improvements
- #2110 Add GitHub Copilot integration with improved robustness @Emt-lin
- #2113 Add streaming support for GitHub Copilot @Emt-lin
- #1969 Add comprehensive system prompt management system @Emt-lin
- #2098 Enhance Model Settings with Local Services and Curl Command Support @Emt-lin
- #2096 Add Web Viewer bridge for referencing open web tabs in chat @Emt-lin
- #2112 Support OpenRouter embeddings @logancyang
- #2106 Implement compaction with adjustable threshold and loading messages @logancyang
- #2108 Simplify diff views to side-by-side and split modes with word-level highlighting @wenzhengjiang
- #2087 Add file status and think block state indicators @Emt-lin
- #2077 Add recent usage sorting for chat history and project list @Emt-lin
- #2076 Add auto-accept edits toggle in chat control setting @wenzhengjiang
- #2003 Refactor model API key handling and improve model filtering @Emt-lin
- #2073 Bring back toggle for inline citation @logancyang
- #2081 Update ApiKeyDialog layout for better visibility @Pleasurecruise
- #2115 Adjust settings @logancyang
Bug Fixes
- #2114 Fix default indicator and slash command @Emt-lin
- #2109 Fix dependencies @logancyang
- #2099 Always process think blocks regardless of current model selection @Emt-lin
- #2100 Fix view-content padding for different display modes @Emt-lin
- #2101 Fix search v3 ranking @logancyang
Troubleshoot
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
3.1.4
Copilot for Obsidian - Release v3.1.4 🔥
It's our 100th release!! 🚀 This release includes
- Fixed a critical bug that made the UI laggy in long conversations
- A major Relevant Notes algorithm improvement
- A big step toward self-host mode, deprecating several modules along the way
More details in the changelog:
Improvements
- #2073 Bring back toggle for inline citation @logancyang
- #2071 Clean up dead code and update readme for privacy disclosure @logancyang
- #2070 Enhance error handling in BaseChainRunner @logancyang
- #2069 Deprecate IntentAnalyzer @logancyang
- #2063 Improve new user onboarding by removing notice on missing api key @logancyang
- #2052 Improve relevant note search algorithm @zeroliu
- #2049 Add path to variable_note format and reorder elements @wenzhengjiang
Bug Fixes
- #2072 Prevent orphaned spinners in agent @logancyang
- #2038 Revert "Improve onboarding by removing the popups … #2015" @logancyang
Troubleshoot
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
3.1.3
This release includes
- Significant enhancements to AWS Bedrock support
- A new automatic text-selection-to-chat-context feature (defaults to off under the Basic settings)
- Better user experience with composer - skip confirmation with an explicit instruction
- Reduced popups during onboarding
More details in the changelog:
Improvements
- #2023 Enable agent by default @logancyang
- #2018 Add auto selection to context setting @logancyang
- #2017 Implement auto context inclusion on text selection @logancyang
- #2015 Improve onboarding by removing the popups @logancyang
- #2011 Update bedrock model support @logancyang
- #2008 Add anthropic version required field for bedrock @logancyang
- #2010 Multiple UX improvement @zeroliu
- #2002 Enhance writeToFile tool with confirmation option @wenzhengjiang
- #2014 Update log file @logancyang
- #2007 Add AWS Bedrock cross-region inference profile guidance @vedmichv
Bug Fixes
- #2016 Fix thinking model verification @logancyang
- #2024 Do not show thinking if reasoning is not checked @logancyang
- #2012 Fix bedrock model image support @logancyang
- #2001 Fix template note processing @zeroliu
3.1.2
Release time again 🎉 We are ramping up to reach our big goals sooner! Some major changes
- 🫳 Drag-n-drop files from file navbar to Copilot Chat as context!
- 🧠 Revamped context management system that saves tokens by maximizing token cache hits
- 📂 Better context note loading from saved chats
- ↩️ New setting under Basic tab to set the send key - Enter / Shift + Enter
- 🔗 Embedded notes `![[note]]` now supported in context
More details in the changelog:
Improvements
- #1996 Support Tasks codeblock in AI response @logancyang
- #1995 Support embedded note in context @logancyang
- #1988 Update Corpus-in-Context and web search tool guide @logancyang
- #1979 Add SiliconFlow support for chat and embedding models @qychen2001
- #1982 Simplify log file @logancyang
- #1968 Add configurable send shortcut for chat messages @Emt-lin
- #1973 Integrate ProjectChainRunner and ChatManager with new layered context @logancyang
- #1971 Context revamp - Introduces layered context handling @logancyang
- #1964 Support drag-n-drop files from file navbar @zeroliu
- #1962 Prompt Improvement: Use getFileTree to explore ambiguous notes and folders @wenzhengjiang
- #1963 Stop condensing history in plus nonagent route @logancyang
Bug Fixes
- #1997 Enhance local search guidance prompt @logancyang
- #1994 Fixes rendering issues in saved chat notes when model names contain special characters @logancyang
- #1992 Fix HyDE calling the wrong model @logancyang
- #1976 Fix ENAMETOOLONG @logancyang
- #1975 Fix indexing complete UI hanging @logancyang
- #1977 Fix thinking block duplication text for openrouter thinking models @logancyang
- #1987 Focus on click copilot chat icon in left ribbon @logancyang
- #1986 Focus to chat input on opening chat window command @logancyang
3.1.1
This patch release 3.1.1 packs a punch 💪 with some significant upgrades and critical bug fixes.
- OpenRouter thinking models are supported now! As long as "Reasoning" is checked for a reasoning model from OpenRouter, the thinking block will render in chat. If you don't want to see it, simply uncheck "Reasoning" to hide it.
- Copilot can see Dataview results in the active note! 🔥🔥🔥 Simply add the active note with dataview queries to context, and the LLM will see the executed results of those queries and use them as context!
- New model provider Amazon Bedrock added! (We only support API key and region settings for now; other ways of accessing Bedrock are not supported)
More details in the changelog:
Improvements
- #1955 Add bedrock provider @logancyang
- #1954 Enable Openrouter thinking tokens @logancyang
- #1942 Improve custom command @zeroliu
- #1931 Improve error handling architecture across chain runners @Emt-lin
- #1929 Add CRUD to Saved Memory @wenzhengjiang
- #1928 Enhance canvas creation spec with JSON Canvas Spec @wenzhengjiang
- #1923 Turn autosaveChat ON by default @wenzhengjiang
- #1922 Sort notes in typeahead menu by creation time @zeroliu
- #1919 Implement tag list builtin tool @logancyang
- #1918 Support dataview result in active note @logancyang
- #1914 Turn on memory feature by default @wenzhengjiang
Bug Fixes
- #1957 Fix ENAMETOOLONG error on chat save @logancyang
- #1956 Enhance error handling @logancyang
- #1950 Fix new note (renamed) not discoverable in Copilot chat @logancyang
- #1947 Stop rendering dataview result in AI response @logancyang
- #1927 Properly render pills in custom command @zeroliu
3.1.0
Copilot for Obsidian - Release v3.1.0 🔥
3.1.0 finally comes out of preview!! 🎉🎉🎉 This release introduces significant advancements in chat functionality and memory management, alongside various improvements and bug fixes.
New Features
- Brand New Copilot Chat Input: A completely redesigned chat input! This is a huge update we introduced after referencing all the industry-leading solutions.
- Enhanced Context Referencing: A new typeahead system allows direct referencing of notes, folders, tags, URLs, and tools using familiar syntax like `@`, `[[`, `#`, and `/`.
- Interactive "Pills": Referenced items appear as interactive pills for a cleaner interface and easier management. No tripping over typos again!
- Long-Term Memory (plus): A major roadmap item making its debut, this feature allows Copilot to reference recent conversations and save relevant information to long-term memory. Memories are saved as `.md` files in the `copilot/memory` directory by default (configurable), allowing for inspection and manual updates.
  - Enable "Reference Recent Conversation" and "Reference Saved Memory" in Plus settings
  - AI can see a summary of recent chats
  - AI can save and reference relevant info to long-term memory on its own
  - Option to manually trigger a save by asking the agent or using the new `@memory` tool
  - Memories are saved as `.md` files under `copilot/memory` by default
  - Users can inspect or update memories as they like
- Note Read Tool (plus agent mode): A new built-in agentic tool that can read linked notes when necessary.
- Token Counter: Displays the number of tokens in the current chat session's context window, resetting with each new chat.
- Max-Token Limit Warning: Alerts users when AI output is cut off due to a low max-token limit in the user settings.
- YouTube Transcript Automation (plus): YouTube transcripts are now fetched automatically when a YouTube URL is entered in the chat input. A new command, `Copilot: Download YouTube Transcript`, is available for raw transcript retrieval.
- Projects Mode Enhancements (plus): Includes a new Chat History Picker and an enhanced progress bar.
- Backend & Tooling:
  - Optimized agentic tool calls for smoother operation
  - Migration of backend model services
  - Better search coverage when the Semantic Search toggle is on
  - Better agent debugging infra
Breaking Changes
- The `@pomodoro` and `@youtube` tools have been removed from the tool picker.
- (plus) Sentence and word autocomplete features are temporarily disabled due to unstable performance, with plans to reintroduce them with user-customizable options.
Bug Fixes
- Fix random blank screen on Copilot Chat UI
- Addressed issues with extracting response text, mobile typeahead menu size, chat crashes, tool call UI freezes, and chat saving.
- Fixed illegal saved chat file names and improved image passing with `copilot-plus-flash`.
- Avoided unnecessary index rebuilds upon semantic search toggle changes.
- Ensured autonomous agent workflows use consistent tool call IDs and helper orchestration.
- Resolved issues with dropdown colors, badge borders, search result numbers, folder context, and spaces in typeahead triggers.
- Fix model addition in the "Set Keys" window; "Verification" is no longer required
- Fix verification of certain Claude models (previously failed complaining about top p -1; now it works)
Troubleshoot
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Users are encouraged to report any issues in the pre-release channel.
3.0.3
This release has some big changes despite being a patch version. Notable changes:
- Introducing Inline Citations! Now any vault search response has inline citations and a collapsible sources section below the AI response. You have the option to toggle it off in the QA settings. (This feature is experimental; if it's not working, please report back!)
- Implemented a log file: you can now share the Copilot log from the Advanced settings. No more dev console!
- Removed User / Bot icons to save space in the Copilot Chat UI
- Add OpenRouter GPT 4.1 models and grok-4-fast to Projects mode
- Now AI-generated title for saved chats is optional, it's a toggle in the Basic setting
- Add new default `copilot/` parent folder for saved conversations and custom prompts
- Embedding model picker is no longer hidden under the QA settings tab
Detailed changelog:
Improvements
- #1838 Update sources styling @logancyang
- #1837 Drop user and bot icons to save space and add shade to user message @logancyang
- #1813 Add mobile-responsive components for settings @Emt-lin
- #1832 Add OpenRouter GPT-4.1 models to projects mode @logancyang
- #1831 Refactor active note inclusion and index event handling to respect setting @logancyang
- #1821 Implement inline citation @logancyang
- #1829 Agent Mode: Map copilot `@command` to builtin agent tools @wenzhengjiang
- #1817 Conditionally initialize VectorStoreManager @logancyang
- #1816 Ensure nested folder paths exist when enhancing folder management @logancyang
- #1811 Make AI chat title optional @logancyang
- #1810 Move context menu and markdown image handling settings @logancyang
- #1809 Show embedding model @logancyang
- #1805 Add search explanation table in log @logancyang
- #1804 Implement log file @logancyang
- #1788 Only scroll to bottom when user messages are added @zeroliu
Bug Fixes
- #1840 Adjust vertical positioning in ModelTable component @logancyang
- #1830 Ensure proper QA exclusion on copilot data folders @logancyang
- #1827 Fix chat crash issue @zeroliu
- #1796 Support creating new folders in composer tools @wenzhengjiang
- #1795 Add safe area bottom padding to view content @Emt-lin
- #1793 Fix mobile embedded image passing @logancyang
- #1787 Improve loading state management in project context updates @Emt-lin
- #1786 Optimize modal height and close button display on mobile @Emt-lin
- #1778 Improve regex for composer codeblock @wenzhengjiang
3.0.2
Improvements
- #1775 Switch to the new file when creating files with composer tools. @wenzhengjiang
Bug Fixes
- #1776 Fix url processing with image false triggers @logancyang
- #1770 Fix chat input responsiveness @zeroliu
- #1773 Fix canvas parsing in writeToFile tool @wenzhengjiang