
Commit d08cd1e

olruas (Manul from Pathway) authored and committed
[Website] Moving old templates to blog (#7625)
GitOrigin-RevId: b30be25176338eb02bb3f2e923ac13b4c7cc0bc2
1 parent 6a32c25 commit d08cd1e

File tree

16 files changed: +18 −972 lines changed


docs/2.developers/4.user-guide/50.llm-xpack/.vectorstore_pipeline/article.py

Lines changed: 1 addition & 1 deletion

@@ -175,7 +175,7 @@
 # ### Langchain
 #
 # You can use a Pathway Vector Store in LangChain pipelines with `PathwayVectorClient`
-# and configure a `VectorStoreServer` using LangChain components. For more information see [our article](/developers/templates/langchain-integration) or [LangChain documentation](https://python.langchain.com/v0.1/docs/integrations/vectorstores/pathway/).
+# and configure a `VectorStoreServer` using LangChain components. For more information see [our article](/blog/langchain-integration) or [LangChain documentation](https://python.langchain.com/v0.1/docs/integrations/vectorstores/pathway/).
 #

 # %%

docs/2.developers/4.user-guide/50.llm-xpack/10.overview.md

Lines changed: 1 addition & 1 deletion

@@ -174,7 +174,7 @@ You can learn more about Vector Store in Pathway in a [dedicated tutorial](/deve

 ### Integrating with LlamaIndex and LangChain

-Vector Store offer integrations with both LlamaIndex and LangChain. These allow you to incorporate Vector Store Client in your LlamaIndex and LangChain pipelines or use LlamaIndex and LangChain components in the Vector Store. Read more about the integrations in the [article on LlamaIndex](/developers/templates/llamaindex-pathway) and [on LangChain](/developers/templates/langchain-integration).
+Vector Store offer integrations with both LlamaIndex and LangChain. These allow you to incorporate Vector Store Client in your LlamaIndex and LangChain pipelines or use LlamaIndex and LangChain components in the Vector Store. Read more about the integrations in the [article on LlamaIndex](/blog/llamaindex-pathway) and [on LangChain](/blog/langchain-integration).


 ## Rerankers

docs/2.developers/7.templates/.gemini-multimodal-rag/__init__.py

Whitespace-only changes.

docs/2.developers/7.templates/.gemini-multimodal-rag/article.py

Lines changed: 0 additions & 328 deletions
This file was deleted.

docs/2.developers/7.templates/.langchain-integration/.gitignore

Lines changed: 0 additions & 1 deletion
This file was deleted.

docs/2.developers/7.templates/.langchain-integration/__init__.py

Whitespace-only changes.

docs/2.developers/7.templates/.langchain-integration/article.py

Lines changed: 0 additions & 169 deletions
This file was deleted.

docs/2.developers/7.templates/.multimodal-rag/article.py

Lines changed: 1 addition & 1 deletion

@@ -103,7 +103,7 @@
 # + [markdown] id="DBo5YKJzKpdR"
 # ## **What's the main difference between LlamaIndex and Pathway?**
 #
-# Pathway offers an indexing solution that always provides the latest information to your LLM application: Pathway Vector Store preprocesses and indexes your data in real time, always giving up-to-date answers. LlamaIndex is a framework for writing LLM-enabled applications. Pathway and LlamaIndex are best [used together](/developers/templates/llamaindex-pathway). Pathway vector store is natively available in LlamaIndex.
+# Pathway offers an indexing solution that always provides the latest information to your LLM application: Pathway Vector Store preprocesses and indexes your data in real time, always giving up-to-date answers. LlamaIndex is a framework for writing LLM-enabled applications. Pathway and LlamaIndex are best [used together](/blog/llamaindex-pathway). Pathway vector store is natively available in LlamaIndex.

 # + [markdown] id="CzxFvo4S_RIj"
 # ## **Architecture Used for Multimodal RAG for Production Use Cases**

docs/2.developers/7.templates/.private_rag_ollama_mistral/article.py

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@
 # thumbnailFit: 'contain'
 # tags: ['showcase', 'llm']
 # date: '2024-04-23'
-# related: ['/developers/templates/adaptive-rag', '/developers/templates/llamaindex-pathway']
+# related: ['/developers/templates/adaptive-rag', '/developers/templates/demo-question-answering']
 # notebook_export_path: notebooks/showcases/mistral_adaptive_rag_question_answering.ipynb
 # author: 'berke'
 # keywords: ['LLM', 'RAG', 'Adaptive RAG', 'prompt engineering', 'explainability', 'mistral', 'ollama', 'private rag', 'local rag', 'ollama rag', 'notebook', 'docker']

docs/2.developers/7.templates/1001.template-adaptive-rag.md

Lines changed: 0 additions & 1 deletion

@@ -10,7 +10,6 @@ article:
 author: "pathway"
 keywords: ['LLM', 'RAG', 'Adaptive RAG', 'prompt engineering', 'prompt', 'explainability', 'docker']
 docker_github_link: "https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/adaptive-rag"
-#hide: true
 ---

 ::alert{type="info" icon="heroicons:information-circle-16-solid"}

0 commit comments