Commit b4b5f40

Release 0.14.14 (#20670)

1 parent 500eca1 commit b4b5f40

9 files changed: +660 -2 lines changed

CHANGELOG.md

Lines changed: 263 additions & 0 deletions
@@ -2,6 +2,269 @@

<!--- generated changelog --->

## [2026-02-10]

### llama-index-callbacks-wandb [0.4.2]

- Fix potential crashes and improve security defaults in core components ([#20610](https://github.com/run-llama/llama_index/pull/20610))

### llama-index-core [0.14.14]

- fix: catch pydantic ValidationError in VectorStoreQueryOutputParser ([#20450](https://github.com/run-llama/llama_index/pull/20450))
- fix: distinguish empty string from None in MediaResource.hash ([#20451](https://github.com/run-llama/llama_index/pull/20451))
- LangChain 1.x support ([#20472](https://github.com/run-llama/llama_index/pull/20472))
- Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated ([#20517](https://github.com/run-llama/llama_index/pull/20517))
- fix(core): fall back to bundled nltk cache if env var missing ([#20528](https://github.com/run-llama/llama_index/pull/20528))
- feat(callbacks): add TokenBudgetHandler for cost governance ([#20546](https://github.com/run-llama/llama_index/pull/20546))
- fix(core): handle an edge case in the truncate_text function ([#20551](https://github.com/run-llama/llama_index/pull/20551))
- fix(core): in types, pass None as the Thread target when target is None, instead of copy_context().run ([#20553](https://github.com/run-llama/llama_index/pull/20553))
- chore: bump llama-index lockfile and minor test tweaks ([#20556](https://github.com/run-llama/llama_index/pull/20556))
- Compatibility for workflows context changes ([#20557](https://github.com/run-llama/llama_index/pull/20557))
- test(core): fix cache dir path test for Windows compatibility ([#20566](https://github.com/run-llama/llama_index/pull/20566))
- fix(tests): enforce utf-8 encoding in JSON reader tests for Windows compatibility ([#20576](https://github.com/run-llama/llama_index/pull/20576))
- Fix BM25Retriever mapping in the upgrade tool ([#20582](https://github.com/run-llama/llama_index/pull/20582))
- fix(agent): handle empty LLM responses with retry logic and add test cases ([#20596](https://github.com/run-llama/llama_index/pull/20596))
- fix: add show_progress parameter to run_transformations to prevent an unexpected keyword argument error ([#20608](https://github.com/run-llama/llama_index/pull/20608))
- Fix potential crashes and improve security defaults in core components ([#20610](https://github.com/run-llama/llama_index/pull/20610))
- Add core Python 3.14 tests ([#20619](https://github.com/run-llama/llama_index/pull/20619))
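Several packages in this release silence the same DeprecationWarning by moving off `asyncio.iscoroutinefunction`. A minimal sketch of the migration, using `inspect.iscoroutinefunction` as the stdlib replacement (the `fetch_docs` coroutine here is a hypothetical example, not code from the repository):

```python
import inspect

async def fetch_docs() -> list:
    # Hypothetical coroutine used only to exercise the check.
    return ["doc"]

# inspect.iscoroutinefunction is the drop-in replacement for the
# deprecated asyncio.iscoroutinefunction and gives the same answer here.
print(inspect.iscoroutinefunction(fetch_docs))  # True
```

The check itself is unchanged; only the module providing it moves, so the swap is mechanical.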
### llama-index-embeddings-cohere [0.7.0]

- fix(embeddings-cohere): add retry logic with tenacity ([#20592](https://github.com/run-llama/llama_index/pull/20592))

### llama-index-embeddings-google-genai [0.3.2]

- Add client headers to Gemini API requests ([#20519](https://github.com/run-llama/llama_index/pull/20519))

### llama-index-embeddings-siliconflow [0.3.2]

- Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated ([#20517](https://github.com/run-llama/llama_index/pull/20517))

### llama-index-embeddings-upstage [0.5.1]

- chore(deps): bump the uv group across 4 directories with 4 updates ([#20531](https://github.com/run-llama/llama_index/pull/20531))

### llama-index-graph-stores-falkordb [0.4.2]

- fix(falkordb): fix MENTIONS relationship creation with triplet_source_id ([#20650](https://github.com/run-llama/llama_index/pull/20650))

### llama-index-llms-anthropic [0.10.8]

- chore: update cacheable Anthropic models ([#20581](https://github.com/run-llama/llama_index/pull/20581))
- chore: add support for Opus 4.6 ([#20635](https://github.com/run-llama/llama_index/pull/20635))

### llama-index-llms-bedrock-converse [0.12.8]

- Fix Bedrock Converse empty tool config issue ([#20571](https://github.com/run-llama/llama_index/pull/20571))
- fix(llms-bedrock-converse): improve Bedrock Converse retry handling ([#20590](https://github.com/run-llama/llama_index/pull/20590))
- feat(bedrock-converse): add support for Claude Opus 4.6 ([#20637](https://github.com/run-llama/llama_index/pull/20637))
- Add support for adaptive thinking in Bedrock ([#20659](https://github.com/run-llama/llama_index/pull/20659))
- chore(deps): bump the pip group across 2 directories with 7 updates ([#20662](https://github.com/run-llama/llama_index/pull/20662))

### llama-index-llms-cohere [0.7.1]

- feat: add custom base_url support to the Cohere LLM ([#20534](https://github.com/run-llama/llama_index/pull/20534))
- fix(llms-cohere): handle additional error types in retry logic ([#20591](https://github.com/run-llama/llama_index/pull/20591))

### llama-index-llms-dashscope [0.5.2]

- fix(dashscope): remove empty tool_calls from assistant messages ([#20535](https://github.com/run-llama/llama_index/pull/20535))

### llama-index-llms-google-genai [0.8.7]

- Add client headers to Gemini API requests ([#20519](https://github.com/run-llama/llama_index/pull/20519))
- fix(decorator): add logic to llm_retry_decorator for async methods ([#20588](https://github.com/run-llama/llama_index/pull/20588))
- Fix: Google GenAI cleanup ([#20607](https://github.com/run-llama/llama_index/pull/20607))
- fix(google-genai): skip model meta fetch when not needed ([#20639](https://github.com/run-llama/llama_index/pull/20639))

### llama-index-llms-huggingface-api [0.6.2]

- Update the sensible default provider for the Hugging Face Inference API ([#20589](https://github.com/run-llama/llama_index/pull/20589))

### llama-index-llms-langchain [0.7.1]

- LangChain 1.x support ([#20472](https://github.com/run-llama/llama_index/pull/20472))

### llama-index-llms-openai [0.6.18]

- Fix OpenAI response handling ([#20538](https://github.com/run-llama/llama_index/pull/20538))
- feat: add support for the gpt-5.2-chat model ([#20549](https://github.com/run-llama/llama_index/pull/20549))
- fix(openai): make image_url detail optional in message dict ([#20609](https://github.com/run-llama/llama_index/pull/20609))
- Add new reasoning types ([#20612](https://github.com/run-llama/llama_index/pull/20612))
- fix(openai): exclude unsupported params for all reasoning models ([#20627](https://github.com/run-llama/llama_index/pull/20627))
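Several entries in this release (embeddings-cohere, llms-cohere, bedrock-converse, cohere-rerank) add or harden retry logic. A hand-rolled sketch of the retry-with-exponential-backoff pattern those PRs implement via the tenacity library; the error type and `flaky_embed` function are hypothetical stand-ins, not the integrations' real code:

```python
import time

class TransientAPIError(Exception):
    """Hypothetical stand-in for a provider's retryable error."""

def with_retries(fn, attempts=3, base_delay=0.01):
    # Retry with exponential backoff; re-raise after the final attempt,
    # mirroring tenacity's reraise=True behavior.
    for i in range(attempts):
        try:
            return fn()
        except TransientAPIError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

calls = {"n": 0}

def flaky_embed():
    # Fails twice, then returns a fake embedding vector.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientAPIError("rate limited")
    return [0.0, 0.0, 0.0]

print(with_retries(flaky_embed))  # [0.0, 0.0, 0.0]
```

Only errors the provider marks as transient should be caught; retrying on auth or validation errors just delays the failure.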
### llama-index-llms-openai-like [0.6.0]

- Make transformers an optional dependency for openai-like ([#20580](https://github.com/run-llama/llama_index/pull/20580))

### llama-index-llms-openrouter [0.4.4]

- Make transformers an optional dependency for openai-like ([#20580](https://github.com/run-llama/llama_index/pull/20580))

### llama-index-llms-siliconflow [0.4.3]

- Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated ([#20517](https://github.com/run-llama/llama_index/pull/20517))

### llama-index-llms-upstage [0.7.0]

- Add new Upstage model (solar-pro3) ([#20544](https://github.com/run-llama/llama_index/pull/20544))

### llama-index-llms-vllm [0.6.2]

- feat: add OpenAI-like server mode for VllmServer ([#20537](https://github.com/run-llama/llama_index/pull/20537))

### llama-index-memory-bedrock-agentcore [0.1.2]

- Add event and memory record deletion methods in bedrock-agentcore memory ([#20428](https://github.com/run-llama/llama_index/pull/20428))
- chore(deps): update llama-index-core dependency lock to include 0.14.x ([#20483](https://github.com/run-llama/llama_index/pull/20483))

### llama-index-memory-mem0 [1.0.0]

- fix: mem0 integration cleanup and refactor ([#20532](https://github.com/run-llama/llama_index/pull/20532))

### llama-index-node-parser-chonkie [0.1.1]

- feat: add chonkie integration ([#20622](https://github.com/run-llama/llama_index/pull/20622))
- Update readme ([#20656](https://github.com/run-llama/llama_index/pull/20656))

### llama-index-node-parser-docling [0.4.2]

- fix: catch pydantic ValidationError in VectorStoreQueryOutputParser ([#20450](https://github.com/run-llama/llama_index/pull/20450))

### llama-index-packs-code-hierarchy [0.6.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-gmail-openai-agent [0.4.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-multidoc-autoretrieval [0.4.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-panel-chatbot [0.4.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-recursive-retriever [0.7.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))
- chore(deps): bump the pip group across 2 directories with 7 updates ([#20662](https://github.com/run-llama/llama_index/pull/20662))

### llama-index-packs-resume-screener [0.9.3]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-retry-engine-weaviate [0.5.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-streamlit-chatbot [0.5.2]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-sub-question-weaviate [0.4.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))

### llama-index-packs-timescale-vector-autoretrieval [0.4.1]

- chore(deps): bump the uv group across 12 directories with 14 updates ([#20578](https://github.com/run-llama/llama_index/pull/20578))
### llama-index-postprocessor-cohere-rerank [0.6.0]

- fix(cohere-rerank): add retry logic and tenacity dependency to cohere rerank ([#20593](https://github.com/run-llama/llama_index/pull/20593))

### llama-index-postprocessor-nvidia-rerank [0.5.4]

- fix(nvidia-rerank): fix initialization logic for on-prem auth ([#20560](https://github.com/run-llama/llama_index/pull/20560))
- fix(nvidia-rerank): correct private attribute reference ([#20570](https://github.com/run-llama/llama_index/pull/20570))
- fix(nvidia-rerank): fix POST request URL for locally hosted NIM rerankers ([#20579](https://github.com/run-llama/llama_index/pull/20579))

### llama-index-postprocessor-tei-rerank [0.4.2]

- fix(tei-rerank): use index field from API response for correct score … ([#20599](https://github.com/run-llama/llama_index/pull/20599))
- test(tei-rerank): add test coverage for rerank retries ([#20600](https://github.com/run-llama/llama_index/pull/20600))

### llama-index-protocols-ag-ui [0.2.4]

- fix: avoid ValueError in ag-ui message conversion for multi-block ChatMessages ([#20648](https://github.com/run-llama/llama_index/pull/20648))

### llama-index-readers-datasets [0.1.0]

- chore(deps): bump the uv group across 4 directories with 4 updates ([#20531](https://github.com/run-llama/llama_index/pull/20531))

### llama-index-readers-microsoft-sharepoint [0.7.0]

- SharePoint page support events ([#20572](https://github.com/run-llama/llama_index/pull/20572))

### llama-index-readers-obsidian [0.6.1]

- LangChain 1.x support ([#20472](https://github.com/run-llama/llama_index/pull/20472))

### llama-index-readers-service-now [0.2.2]

- chore(deps): bump the pip group across 2 directories with 7 updates ([#20662](https://github.com/run-llama/llama_index/pull/20662))

### llama-index-tools-mcp [0.4.6]

- feat: add partial_params support to McpToolSpec ([#20554](https://github.com/run-llama/llama_index/pull/20554))

### llama-index-tools-mcp-discovery [0.1.0]

- Add llama-index-tools-mcp-discovery integration ([#20502](https://github.com/run-llama/llama_index/pull/20502))

### llama-index-tools-moss [0.1.0]

- feat(tools): add Moss search engine integration ([#20615](https://github.com/run-llama/llama_index/pull/20615))

### llama-index-tools-seltz [0.1.0]

- feat(tools): add Seltz web knowledge tool integration ([#20626](https://github.com/run-llama/llama_index/pull/20626))

### llama-index-tools-typecast [0.1.0]

- Migrate Typecast tool to V2 API for voices endpoints ([#20548](https://github.com/run-llama/llama_index/pull/20548))

### llama-index-tools-wolfram-alpha [0.5.0]

- feat(wolfram-alpha): switch to LLM API with bearer auth ([#20586](https://github.com/run-llama/llama_index/pull/20586))

### llama-index-vector-stores-clickhouse [0.6.2]

- fix(clickhouse): add drop_existing_table parameter to prevent data loss ([#20651](https://github.com/run-llama/llama_index/pull/20651))

### llama-index-vector-stores-milvus [0.9.6]

- chore(deps): bump the uv group across 4 directories with 4 updates ([#20531](https://github.com/run-llama/llama_index/pull/20531))

### llama-index-vector-stores-mongodb [0.9.1]

- Update MongoDB vector store tests to use a newer model ([#20515](https://github.com/run-llama/llama_index/pull/20515))

### llama-index-vector-stores-oceanbase [0.4.0]

- feat(oceanbase): add sparse/fulltext/hybrid search ([#20524](https://github.com/run-llama/llama_index/pull/20524))

### llama-index-vector-stores-opensearch [1.0.0]

- Changed OpenSearch engine default from deprecated `nmslib` to `faiss` ([#20507](https://github.com/run-llama/llama_index/pull/20507))
- chore(deps): bump the uv group across 4 directories with 4 updates ([#20531](https://github.com/run-llama/llama_index/pull/20531))

### llama-index-vector-stores-postgres [0.7.3]

- fix(postgres): disable bitmap scan for vector queries ([#20514](https://github.com/run-llama/llama_index/pull/20514))

### llama-index-vector-stores-yugabytedb [0.5.4]

- Add YugabyteDB as a vector store ([#20559](https://github.com/run-llama/llama_index/pull/20559))
- chore(deps): bump the pip group across 2 directories with 7 updates ([#20662](https://github.com/run-llama/llama_index/pull/20662))

### llama-index-voice-agents-gemini-live [0.2.2]

- Add client headers to Gemini API requests ([#20519](https://github.com/run-llama/llama_index/pull/20519))
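The tei-rerank fix in #20599 concerns mapping scores back to documents via the index field of the API response rather than trusting response order. A generic sketch of that pattern; the result-dict keys here are illustrative, not TEI's exact response schema:

```python
def apply_rerank(documents, api_results):
    # Rerank APIs typically return (index, score) pairs whose index
    # refers to positions in the *request* order; relying on response
    # order instead can attach scores to the wrong documents.
    scored = []
    for result in api_results:
        scored.append((documents[result["index"]], result["relevance_score"]))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

docs = ["alpha", "beta", "gamma"]
results = [
    {"index": 2, "relevance_score": 0.9},
    {"index": 0, "relevance_score": 0.4},
]
print(apply_rerank(docs, results))  # [('gamma', 0.9), ('alpha', 0.4)]
```

Note that the API may return fewer results than documents sent (top-n truncation), which is why the mapping goes through the returned indices rather than zipping the two lists.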
## [2026-01-21]

### llama-index-core [0.14.13]
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

::: llama_index.node_parser.chonkie
    options:
      members:
        - Chunker
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

::: llama_index.vector_stores.yugabytedb
    options:
      members:
        - YBVectorStore
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

::: llama_index.tools.mcp_discovery
    options:
      members:
        - MCPDiscoveryTool
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

::: llama_index.tools.moss
    options:
      members:
        - MossToolSpec
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

::: llama_index.tools.seltz
    options:
      members:
        - SeltzToolSpec

docs/api_reference/mkdocs.yml

Lines changed: 5 additions & 0 deletions
@@ -723,6 +723,11 @@ plugins:
   - ../../llama-index-integrations/tools/llama-index-tools-parallel-web-systems
   - ../../llama-index-integrations/ingestion/llama-index-ingestion-ray
   - ../../llama-index-integrations/readers/llama-index-readers-datasets
+  - ../../llama-index-integrations/vector_stores/llama-index-vector-stores-yugabytedb
+  - ../../llama-index-integrations/node_parser/llama-index-node-parser-chonkie
+  - ../../llama-index-integrations/tools/llama-index-tools-mcp-discovery
+  - ../../llama-index-integrations/tools/llama-index-tools-moss
+  - ../../llama-index-integrations/tools/llama-index-tools-seltz
 site_name: LlamaIndex
 site_url: https://developers.llamaindex.ai/python/framework-api-reference/
 theme:

0 commit comments