Commit 653d6f2

feat: Use YAML config for HuggingFace token

- Add config.yaml support for HuggingFace authentication
- Update local_rag_agent to load token from config
- Add config_example.yaml template
- Update documentation for YAML-based configuration
1 parent 2fdb510 commit 653d6f2

File tree

4 files changed: +25 -16 lines changed

agentic_rag/README.md

Lines changed: 7 additions & 13 deletions
````diff
@@ -25,19 +25,15 @@ The system has the following features:
 
    The system uses Mistral-7B by default, which requires authentication with HuggingFace:
 
-   a. Create a HuggingFace account at https://huggingface.co/join
+   a. Create a HuggingFace account [here](https://huggingface.co/join)
 
-   b. Accept the Mistral-7B model terms at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
+   b. Accept the Mistral-7B model terms & conditions [here](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
 
-   c. Create an access token at https://huggingface.co/settings/tokens
+   c. Create an access token [here](https://huggingface.co/settings/tokens)
 
-   d. Login using the token:
-   ```bash
-   huggingface-cli login
-   # Or set the token as an environment variable:
-   export HUGGING_FACE_HUB_TOKEN=your_token_here
-   # On Windows:
-   set HUGGING_FACE_HUB_TOKEN=your_token_here
+   d. Create a `config.yaml` file (you can copy from `config_example.yaml`):
+   ```yaml
+   HUGGING_FACE_HUB_TOKEN: your_token_here
    ```
 
 3. (Optional) If you want to use the OpenAI-based agent instead of the default local model, create a `.env` file with your OpenAI API key:
@@ -46,9 +42,7 @@ The system has the following features:
    OPENAI_API_KEY=your-api-key-here
    ```
 
-   If no API key is provided, the system will automatically use the local Mistral-7B model for text generation.
-
-4. The system will automatically download and use `Mistral-7B-Instruct-v0.2` for text generation when using the local model. No additional configuration is needed.
+4. If no API key is provided, the system will automatically download and use `Mistral-7B-Instruct-v0.2` for text generation when using the local model. No additional configuration is needed.
 
 ## 1. Getting Started
 
````
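Before running the agent, it can be worth sanity-checking the token from the new README step. The following is a minimal sketch, not part of this commit; it assumes `huggingface_hub` is available (it is installed as a dependency of `transformers`):

```python
# verify_token.py -- illustrative sketch, not part of this commit
import yaml
from huggingface_hub import whoami  # pulled in as a dependency of transformers

with open("config.yaml", "r") as f:
    config = yaml.safe_load(f)

# whoami() raises an error if the token is invalid or has been revoked
user = whoami(token=config["HUGGING_FACE_HUB_TOKEN"])
print(f"Token is valid; authenticated as {user['name']}")
```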

agentic_rag/config_example.yaml

Lines changed: 1 addition & 0 deletions
```diff
@@ -0,0 +1 @@
+HUGGING_FACE_HUB_TOKEN: your_token_here
```
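A usage note, not part of the diff: the template is meant to be copied to `config.yaml` in the same directory and the placeholder replaced with a real token. Since `config.yaml` then holds a credential, it is best kept out of version control (for example via `.gitignore`).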

agentic_rag/local_rag_agent.py

Lines changed: 15 additions & 2 deletions
```diff
@@ -4,6 +4,8 @@
 from pydantic import BaseModel, Field
 from store import VectorStore
 import argparse
+import yaml
+import os
 
 class QueryAnalysis(BaseModel):
     """Pydantic model for query analysis output"""
@@ -22,14 +24,25 @@ def __init__(self, vector_store: VectorStore, model_name: str = "mistralai/Mistr
         """Initialize local RAG agent with vector store and local LLM"""
         self.vector_store = vector_store
 
+        # Load HuggingFace token from config
+        try:
+            with open('config.yaml', 'r') as f:
+                config = yaml.safe_load(f)
+                token = config.get('HUGGING_FACE_HUB_TOKEN')
+                if not token:
+                    raise ValueError("HUGGING_FACE_HUB_TOKEN not found in config.yaml")
+        except Exception as e:
+            raise Exception(f"Failed to load HuggingFace token from config.yaml: {str(e)}")
+
         # Load model and tokenizer
         print("\nLoading model and tokenizer...")
         self.model = AutoModelForCausalLM.from_pretrained(
             model_name,
             torch_dtype=torch.float16,
-            device_map="auto"
+            device_map="auto",
+            token=token
         )
-        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
+        self.tokenizer = AutoTokenizer.from_pretrained(model_name, token=token)
 
         # Create text generation pipeline
         self.pipeline = pipeline(
```
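The token-loading logic added above is easier to read in isolation. Below is a minimal standalone sketch of the same pattern; the environment-variable fallback is an assumption on my part (the commit imports `os`, but the hunk shown never uses it), not something this change implements:

```python
# load_hf_token.py -- illustrative sketch, not part of this commit
import os
import yaml


def load_hf_token(config_path: str = "config.yaml") -> str:
    """Return the HuggingFace token from config.yaml, falling back to the
    HUGGING_FACE_HUB_TOKEN environment variable (the fallback is assumed)."""
    token = None
    try:
        with open(config_path, "r") as f:
            config = yaml.safe_load(f) or {}
        token = config.get("HUGGING_FACE_HUB_TOKEN")
    except FileNotFoundError:
        pass  # fall through to the environment variable

    token = token or os.environ.get("HUGGING_FACE_HUB_TOKEN")
    if not token:
        raise ValueError(
            f"HUGGING_FACE_HUB_TOKEN not found in {config_path} or the environment"
        )
    return token


if __name__ == "__main__":
    # Print only a prefix so the full token never ends up in logs
    print(f"Loaded token: {load_hf_token()[:6]}...")
```

Failing fast in `__init__`, as the commit does, keeps the error close to the configuration mistake instead of surfacing later as an opaque authentication failure during the model download.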

agentic_rag/requirements.txt

Lines changed: 2 additions & 1 deletion
```diff
@@ -9,4 +9,5 @@ uvicorn
 python-multipart
 transformers
 torch
-accelerate
+accelerate
+pyyaml
```
