This repository was archived by the owner on May 20, 2025. It is now read-only.

Commit 1190b67

Apply suggestions from code review

Co-authored-by: David Moore <[email protected]>

1 parent cb577cf commit 1190b67

File tree

1 file changed: +3 −3 lines changed

docs/guides/python/llama-rag.mdx

Lines changed: 3 additions & 3 deletions
````diff
@@ -42,7 +42,7 @@ Next, let's install our base dependencies, then add the `llama-index` libraries.
 # Install the base dependencies
 uv sync
 # Add Llama index dependencies
-uv add llama-index llama-index-embeddings-huggingface llama-index-llama-cpp
+uv add llama-index llama-index-embeddings-huggingface llama-index-llms-llama-cpp
 ```
 
 We'll organize our project structure like so:
````
````diff
@@ -78,7 +78,7 @@ cd ..
 
 Now that we have our model we can load it into our code. We'll also define our [embed model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/) using a recommend [model](https://huggingface.co/BAAI/bge-large-en-v1.5) from Hugging Face. At this point we can also create a prompt template for prompts with our query engine. It will just sanitize some of the hallucinations so that if the model does not know an answer it won't pretend like it does.
 
-```python title:common/model_paramters.py
+```python title:common/model_parameters.py
 from llama_index.core import ChatPromptTemplate
 from llama_index.embeddings.huggingface import HuggingFaceEmbedding
 from llama_index.llms.llama_cpp import LlamaCPP
````
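The context in this hunk mentions a prompt template that keeps the model from pretending to know answers it doesn't. As a rough illustration of that idea, here is a minimal standalone sketch of such a "refuse when unsure" template; the wording, template text, and function names are illustrative assumptions, not taken from the repo's actual `ChatPromptTemplate`.

```python
# Hypothetical sketch of an anti-hallucination QA prompt template.
# Wording and names are illustrative, not the guide's actual template.
QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the query.\n"
    "If the answer is not in the context, reply exactly: I don't know.\n"
    "Query: {query_str}\n"
    "Answer: "
)

def build_prompt(context_str: str, query_str: str) -> str:
    """Fill the template with retrieved context and the user's question."""
    return QA_TEMPLATE.format(context_str=context_str, query_str=query_str)
```

The key design point is the explicit instruction to answer only from retrieved context, which is what lets the query engine decline gracefully instead of inventing an answer.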
````diff
@@ -166,7 +166,7 @@ uv run build_query_engine.py
 
 With our LLM ready for querying, we can create an API to handle prompts.
 
-```python
+```python title:services/api.py
 import os
 
 from common.model_parameters import embed_model, llm, text_qa_template, persist_dir
````
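This hunk titles the API file `services/api.py`, which handles prompts against the query engine. As a rough sketch of the handler shape, here is a minimal standalone version with the engine stubbed out (the real one depends on `llama_index` and the persisted index); all names here are illustrative assumptions.

```python
# Minimal sketch of a prompt-handling endpoint in the spirit of services/api.py.
# The engine is a stub standing in for the real llama_index query engine, so
# only the request/response shape is shown. Names are illustrative.
from dataclasses import dataclass


@dataclass
class StubQueryEngine:
    """Stand-in for the query engine loaded from the persisted index."""
    canned_answer: str = "I don't know."

    def query(self, prompt: str) -> str:
        # The real engine would retrieve context and call the LLM here.
        return self.canned_answer


def handle_prompt(engine, body: dict) -> dict:
    """Validate the request body and return the engine's response."""
    prompt = body.get("prompt", "").strip()
    if not prompt:
        return {"status": 400, "error": "prompt is required"}
    return {"status": 200, "response": engine.query(prompt)}
```

Keeping validation in a plain function like this makes the handler easy to test without spinning up the HTTP layer or loading the model.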

0 commit comments
