
Commit e38cac7

szymondudycz authored and olruas committed
Remove pipelines from llm-app and update website (#9418)
Co-authored-by: Olivier Ruas <[email protected]>
GitOrigin-RevId: 1a2b161e1de92d649b8fd938e8cbd54186a74d1c
1 parent b911c7a commit e38cac7

File tree: 22 files changed, +91 −91 lines changed


docs/2.developers/4.user-guide/20.connect/99.connectors/100.slack_send_alerts.md

Lines changed: 1 addition & 1 deletion

@@ -60,4 +60,4 @@ class: 'mx-auto'
 
 Note, that the values of the column `messages` in the above example do not have spaces. It is a restriction of `pw.debug.table_from_markdown` which uses spaces to separate columns. Any regular string works with the other connectors.
 
-If you want to see more examples with `pw.io.slack.send_alerts` you can check the [`drive_alert`](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/drive_alert) example in the llm-app.
+If you want to see more examples with `pw.io.slack.send_alerts` you can check the [`drive_alert`](https://github.com/pathwaycom/llm-app/tree/main/templates/drive_alert) example in the llm-app.
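
The note in this hunk concerns `pw.debug.table_from_markdown`, which splits columns on whitespace. Below is a minimal sketch of the pattern the link points to, assuming the `pw.io.slack.send_alerts(column, channel_id, token)` signature from the Pathway connector docs; the channel id and token are placeholders.

```python
import pathway as pw

# Toy alert stream; table_from_markdown splits columns on spaces,
# hence the underscores inside each message.
alerts = pw.debug.table_from_markdown(
    """
    messages
    Disk_usage_above_90%
    Service_back_online
    """
)

# Placeholder channel id and token; substitute real credentials.
pw.io.slack.send_alerts(alerts.messages, "SLACK_CHANNEL_ID", "xoxb-your-token")
pw.run()
```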

docs/2.developers/4.user-guide/30.data-transformation/.indexes-in-pathway/article.py

Lines changed: 1 addition & 1 deletion

@@ -430,7 +430,7 @@ class PointSchema(pw.Schema):
 # run(data_dir=args.data_dir, host=args.host, port=args.port)
 # ```
 # %% [markdown]
-# A similar approach was taken in our [alerting example](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/drive_alert).
+# A similar approach was taken in our [alerting example](https://github.com/pathwaycom/llm-app/tree/main/templates/drive_alert).
 # It is an LLM app that can send you alerts on slack when the response to your query has changed significantly.
 # %% [markdown]
 # ## Summary

docs/2.developers/4.user-guide/50.llm-xpack/.llm-examples.md

Lines changed: 7 additions & 7 deletions

@@ -16,7 +16,7 @@ If you want to see how it works, this page gathers practical examples using Path
 <tbody>
 <tr>
 <td class="text-center">
-<a href="https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/demo-question-answering">Pathway RAG app with always up-to-date knowledge</a>
+<a href="https://github.com/pathwaycom/llm-app/tree/main/templates/question_answering_rag">Pathway RAG app with always up-to-date knowledge</a>
 </td>
 <td class="text-center">
 This example shows how to create a RAG application using Pathway that provides always up-to-date knowledge to your LLM without the need for a separate ETL.

@@ -25,23 +25,23 @@ If you want to see how it works, this page gathers practical examples using Path
 </tr>
 <tr>
 <td class="text-center">
-<a href="https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/adaptive-rag">Adaptive RAG</a>
+<a href="https://github.com/pathwaycom/llm-app/tree/main/templates/adaptive_rag">Adaptive RAG</a>
 </td>
 <td class="text-center">
 Example of the Adaptive RAG, a technique to dynamically adapt the number of documents in a RAG prompt using feedback from the LLM.
 </td>
 </tr>
 <tr>
 <td class="text-center">
-<a href="https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/private-rag">Fully Private RAG with Pathway</a>
+<a href="https://github.com/pathwaycom/llm-app/tree/main/templates/private_rag">Fully Private RAG with Pathway</a>
 </td>
 <td class="text-center">
 This example shows how to set up a private RAG pipeline with adaptive retrieval using Pathway, Mistral, and Ollama.
 </td>
 </tr>
 <tr>
 <td class="text-center">
-<a href="https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/gpt_4o_multimodal_rag">Multimodal RAG with Pathway</a>
+<a href="https://github.com/pathwaycom/llm-app/tree/main/templates/multimodal_rag">Multimodal RAG with Pathway</a>
 </td>
 <td class="text-center">
 This example demonstrates how you can launch a Multimodal RAG with Pathway. It relies on a document processing pipeline that utilizes GPT-4o in the parsing stage. Pathway extracts information from unstructured financial documents in your folders, updating results as documents change or new ones arrive. You can make your AI application run in permanent connection with your drive, in sync with your documents which include visually formatted elements: tables, charts, etc.

@@ -63,15 +63,15 @@ If you want to see how it works, this page gathers practical examples using Path
 <tbody>
 <tr>
 <td class="text-center">
-<a href="https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/demo-document-indexing">Realtime Document Indexing with Pathway</a>
+<a href="https://github.com/pathwaycom/llm-app/tree/main/templates/document_indexing">Realtime Document Indexing with Pathway</a>
 </td>
 <td class="text-center">
 Basic example of a real-time document indexing pipeline powered by Pathway. You can index documents from different data sources, such as SharePoint or Google Drive. You can then query the index to retrieve documents, get statistics about the index, and retrieve file metadata.
 </td>
 </tr>
 <tr>
 <td class="text-center">
-<a href="https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/drive_alert">Drive Alert Pipeline</a>
+<a href="https://github.com/pathwaycom/llm-app/tree/main/templates/drive_alert">Drive Alert Pipeline</a>
 </td>
 <td class="text-center">
 This example is very similar to the "Alert" example, the only difference is the data source (Google Drive).

@@ -80,7 +80,7 @@ If you want to see how it works, this page gathers practical examples using Path
 </tr>
 <tr>
 <td class="text-center">
-<a href="https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/unstructured_to_sql_on_the_fly">Unstructured to SQL</a>
+<a href="https://github.com/pathwaycom/llm-app/tree/main/templates/unstructured_to_sql_on_the_fly">Unstructured to SQL</a>
 </td>
 <td class="text-center">
 The example extracts and structures the data out of unstructured data (PDFs and queries) on the fly.

docs/2.developers/4.user-guide/50.llm-xpack/10.overview.md

Lines changed: 1 addition & 1 deletion

@@ -84,7 +84,7 @@ With these tools it is easy to create in Pathway a pipeline serving as a [`Docum
 
 To make interaction with DocumentStore easier you can also use [`DocumentStoreServer`](/developers/api-docs/pathway-xpacks-llm/servers#pathway.xpacks.llm.servers.DocumentStoreServer) that handles API calls.
 
-You can learn more about Document Store in Pathway in a [dedicated tutorial](/developers/user-guide/llm-xpack/docs-indexing) and check out a QA app example in [the llm-app repository](https://github.com/pathwaycom/llm-app/blob/main/examples/pipelines/demo-question-answering/app.py).
+You can learn more about Document Store in Pathway in a [dedicated tutorial](/developers/user-guide/llm-xpack/docs-indexing) and check out a QA app example in [the llm-app repository](https://github.com/pathwaycom/llm-app/blob/main/templates/question_answering_rag/app.py).
 
 ### Integrating with LlamaIndex and LangChain
 
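For readers landing on this hunk, here is a minimal sketch of the `DocumentStore` plus `DocumentStoreServer` combination the paragraph describes, assuming the constructor shapes from the xpack API docs (`BruteForceKnnFactory`, the `retriever_factory` argument, and the server's `host`/`port`/`document_store` parameters); exact argument names may differ between releases.

```python
import pathway as pw
from pathway.stdlib.indexing import BruteForceKnnFactory
from pathway.xpacks.llm import embedders
from pathway.xpacks.llm.document_store import DocumentStore
from pathway.xpacks.llm.servers import DocumentStoreServer

# Stream documents from a local folder; any Pathway input connector works here.
docs = pw.io.fs.read("./documents/", format="binary", with_metadata=True)

# Brute-force KNN retrieval over OpenAI embeddings.
retriever_factory = BruteForceKnnFactory(
    embedder=embedders.OpenAIEmbedder(model="text-embedding-3-small"),
)

store = DocumentStore(docs, retriever_factory=retriever_factory)

# Expose the store's retrieve / statistics / inputs endpoints over HTTP.
server = DocumentStoreServer(host="127.0.0.1", port=8000, document_store=store)
server.run()
```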

docs/2.developers/7.templates/20.run-a-template.md

Lines changed: 2 additions & 2 deletions

@@ -59,10 +59,10 @@ Whether you need a real-time ETL, document indexing, or context-based Q&A, you'l
 ::
 ::
 
-Then you need to go the repository of the chosen template, let's take the `demo-question-answering` as an example.
+Then you need to go the repository of the chosen template, let's take the `question_answering_rag` as an example.
 
 ```
-cd llm-app/examples/pipelines/demo-question-answering
+cd llm-app/templates/question_answering_rag
 ```
 
 ## Configuring Pathway Templates
## Configuring Pathway Templates

docs/2.developers/7.templates/30.configure-yaml.md

Lines changed: 5 additions & 5 deletions

@@ -80,7 +80,7 @@ embedder: !pw.xpacks.llm.embedders.OpenAIEmbedder
 ```
 
 ### Environment Variables
-You can also use `$` to refer to environment variables. For that purpose, you need to use identifiers that consist only of upper case letters and `_`. Also, if a variable is defined in the YAML file and present in the environment variables, the definition from the YAML file takes precedence. As an example, you can set the `port` in the [demo-question-answering pipeline](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/demo-question-answering) to be taken from the `$PATHWAY_PORT` environment variable.
+You can also use `$` to refer to environment variables. For that purpose, you need to use identifiers that consist only of upper case letters and `_`. Also, if a variable is defined in the YAML file and present in the environment variables, the definition from the YAML file takes precedence. As an example, you can set the `port` in the [question_answering_rag pipeline](https://github.com/pathwaycom/llm-app/tree/main/templates/question_answering_rag) to be taken from the `$PATHWAY_PORT` environment variable.
 
 ```yaml
 port: $PATHWAY_PORT

@@ -93,11 +93,11 @@ If the values of the environment variables are a valid integer, float or boolean
 
 <!-- TODO: consider writing about pw.load_yaml, but it's not in the api docs currently. -->
 
-## Example: Demo-Question-Answering
+## Example: Question-Answering RAG
 
-To see YAMLs in practice let's look at the [demo-question-answering pipeline](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/demo-question-answering). Note, that it differs from [adaptive RAG](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/adaptive-rag), [multimodal RAG](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/gpt_4o_multimodal_rag) and [private RAG](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/private-rag) by the YAML configuration file - their Python code is the same.
+To see YAMLs in practice let's look at the [Question-Answering RAG](https://github.com/pathwaycom/llm-app/tree/main/templates/question_answering_rag). Note, that it differs from [adaptive RAG](https://github.com/pathwaycom/llm-app/tree/main/templates/adaptive_rag), [multimodal RAG](https://github.com/pathwaycom/llm-app/tree/main/multimodal_rag) and [private RAG](https://github.com/pathwaycom/llm-app/tree/main/templates/private_rag) by the YAML configuration file - their Python code is the same.
 
-Here is the content of `app.yaml` from demo-question-answering:
+Here is the content of `app.yaml` from question_answering_rag:
 ```yaml
 $sources:
   - !pw.io.fs.read

@@ -227,7 +227,7 @@ question_answerer: !pw.xpacks.llm.question_answering.BaseRAGQuestionAnswerer
   indexer: $document_store
 ```
 
-If you want to change the provider of LLM models, you can change values of `llm` and `embedder`. By changing `llm` to be `LiteLLMChat`, that uses local `api_base`, and `embedder` to be `SentenceTransformerEmbedder` you obtain a local RAG that does not call external services (this pipeline is now very similar to [private RAG](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/private-rag) from the llm-app).
+If you want to change the provider of LLM models, you can change values of `llm` and `embedder`. By changing `llm` to be `LiteLLMChat`, that uses local `api_base`, and `embedder` to be `SentenceTransformerEmbedder` you obtain a local RAG that does not call external services (this pipeline is now very similar to [private RAG](https://github.com/pathwaycom/llm-app/tree/main/templates/private_rag) from the llm-app).
 
 ```yaml
 $sources:
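
The `llm`/`embedder` swap described in the last hunk can also be written directly in Python; a hedged sketch, assuming `LiteLLMChat` and `SentenceTransformerEmbedder` accept the arguments shown in the YAML snippets (the model names below are placeholders).

```python
from pathway.xpacks.llm import embedders, llms

# Local chat model served by Ollama through LiteLLM.
llm = llms.LiteLLMChat(
    model="ollama/mistral",
    api_base="http://localhost:11434",  # local endpoint, no external calls
    temperature=0,
)

# Local embedder running via sentence-transformers.
embedder = embedders.SentenceTransformerEmbedder(model="intfloat/e5-large-v2")
```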

docs/2.developers/7.templates/35.custom-components.md

Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ def augment_metadata(sources: list[pw.Table]) -> list[pw.Table]:
     return [t.with_columns(_metadata=add_isin_and_currency(pw.this._metadata)) for t in sources]
 ```
 
-Then, in the `app.yaml` from [demo-question-answering](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/demo-question-answering) apply this function on `$sources` to obtain the changed tables, which are then given to the `$document_store`.
+Then, in the `app.yaml` from [question_answering_rag](https://github.com/pathwaycom/llm-app/tree/main/templates/question_answering_rag) apply this function on `$sources` to obtain the changed tables, which are then given to the `$document_store`.
 
 
 ```yaml [app.yaml]
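
The `augment_metadata` snippet in this hunk calls an `add_isin_and_currency` helper that the diff does not show. A hypothetical stand-in follows, only to illustrate the `_metadata`-rewriting pattern; the real helper's logic is not part of this commit.

```python
import pathway as pw

@pw.udf
def add_isin_and_currency(metadata: pw.Json) -> pw.Json:
    # Unwrap the JSON payload, enrich it, and wrap it back.
    md = dict(metadata.value)
    md["isin"] = "XS0000000000"  # placeholder values, illustration only
    md["currency"] = "EUR"
    return pw.Json(md)
```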

docs/2.developers/7.templates/39.yaml-snippets/20.rag-configuration-examples.md

Lines changed: 1 addition & 1 deletion

@@ -133,7 +133,7 @@ $llm: !pw.xpacks.llm.llms.LiteLLMChat
   temperature: 0
   api_base: "http://localhost:11434"
 ```
-You can learn more about this template by visiting its associated public [GitHub project](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/private-rag).
+You can learn more about this template by visiting its associated public [GitHub project](https://github.com/pathwaycom/llm-app/tree/main/templates/private_rag).
 
 ::
 ::openable-list

docs/2.developers/7.templates/40.rag-customization/20.custom-prompt.md

Lines changed: 2 additions & 2 deletions

@@ -9,8 +9,8 @@ heading: false
 
 In the RAG templates, you can customize the LLM Q&A prompt directly in the YAML configuration file. You just need to set the `prompt_template` argument for the `BaseRAGQuestionAnswerer`.
 
-Using [`demo-question-answering`](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/demo-question-answering) as an example, let's see how to customize the prompt.
-By default, the `BaseRAGQuestionAnswerer` in the [`app.yaml`](https://github.com/pathwaycom/llm-app/blob/main/examples/pipelines/demo-question-answering/app.yaml) is initialized with:
+Using [`question_answering_rag`](https://github.com/pathwaycom/llm-app/tree/main/templates/question_answering_rag) as an example, let's see how to customize the prompt.
+By default, the `BaseRAGQuestionAnswerer` in the [`app.yaml`](https://github.com/pathwaycom/llm-app/blob/main/templates/question_answering_rag/app.yaml) is initialized with:
 
 ```yaml
 question_answerer: !pw.xpacks.llm.question_answering.BaseRAGQuestionAnswerer
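
Since this page is about `prompt_template`, here is a short sketch of the same customization done in Python; `{context}` and `{query}` are assumed placeholder names matching the default template, and `llm`/`document_store` stand for components configured elsewhere, so this is a fragment rather than a full app.

```python
from pathway.xpacks.llm.question_answering import BaseRAGQuestionAnswerer

PROMPT = (
    "Answer the question based only on the context below. "
    "If the context is insufficient, say 'No information found.'\n"
    "Context: {context}\nQuestion: {query}"
)

question_answerer = BaseRAGQuestionAnswerer(
    llm=llm,                  # chat model configured elsewhere
    indexer=document_store,   # DocumentStore configured elsewhere
    prompt_template=PROMPT,
)
```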

docs/2.developers/7.templates/rag/.multimodal-rag/article.py

Lines changed: 2 additions & 2 deletions

@@ -210,7 +210,7 @@
 # default file you can use to test
 # to use your own data via the Colab UI, click on the 'Files' tab in the left sidebar, go to data folder (that was created prior to this) then drag and drop your files there.
 
-# !wget -q -P ./data/ https://github.com/pathwaycom/llm-app/raw/main/examples/pipelines/gpt_4o_multimodal_rag/data/20230203_alphabet_10K.pdf
+# !wget -q -P ./data/ https://github.com/pathwaycom/llm-app/raw/main/templates/multimodal_rag/data/20230203_alphabet_10K.pdf
 
 # + [markdown] id="D7HFGv7ZFl_g"
 # #### **Read Documents**

@@ -296,7 +296,7 @@
 # + [markdown] id="S3zsr-NGop8B"
 # ## **Conclusion**
 # This is how you can easily implement a Multimodal RAG Pipeline using GPT-4o and Pathway. You used the [BaseRAGQuestionAnswerer](https://pathway.com/developers/api-docs/pathway-xpacks-llm/question_answering#pathway.xpacks.llm.question_answering.BaseRAGQuestionAnswerer) class from [pathway.xpacks](https://pathway.com/developers/user-guide/llm-xpack/overview), which integrates the foundational components for our RAG application, including data ingestion, LLM integration, database creation and querying, and serving the application on an endpoint. For more advanced RAG options, you can explore [rerankers](https://pathway.com/developers/api-docs/pathway-xpacks-llm/rerankers#pathway.xpacks.llm.rerankers.CrossEncoderReranker) and the [adaptive RAG example](https://pathway.com/developers/showcases/adaptive-rag).
-# For implementing this example using open source LLMs, here’s a [private RAG app template](https://pathway.com/developers/showcases/private-rag-ollama-mistral) that you can use as a starting point. It will help you run the entire application locally making it ideal for use-cases with sensitive data and explainable AI needs. You can do this within Docker as well by following the steps in [Pathway’s LLM App templates](https://github.com/pathwaycom/llm-app/tree/main/examples/pipelines/gpt_4o_multimodal_rag) repository.
+# For implementing this example using open source LLMs, here’s a [private RAG app template](https://pathway.com/developers/showcases/private-rag-ollama-mistral) that you can use as a starting point. It will help you run the entire application locally making it ideal for use-cases with sensitive data and explainable AI needs. You can do this within Docker as well by following the steps in [Pathway’s LLM App templates](https://github.com/pathwaycom/llm-app/tree/main/templates/multimodal_rag) repository.
 #
 #
 # To explore more app templates and advanced use cases, visit [Pathway App Templates](https://pathway.com/developers/showcases) or Pathway’s [official blog](https://pathway.com/blog).
