This folder contains the teaching notebooks for the course Introduction to LLM-Based Agents (SBBD 2025).
## ⚙️ Setup

First, create and activate the conda environment, then install the dependencies:

```
pip install -r requirements.txt
```
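The setup steps above might look like the following; the environment name and Python version here are assumptions, so adjust them to match the course's actual instructions:

```shell
# Hypothetical environment name and Python version — adapt as needed.
conda create -n llm-agents python=3.11 -y
conda activate llm-agents
pip install -r requirements.txt
```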
## 🖥️ Running on CPU vs. GPU
All HuggingFace models in these notebooks are configured to run on the **CPU**:
```python
model_kwargs={"device": "cpu"}
```

This is intentional:

- Some entry-level GPUs (e.g., GeForce GT 1030) do not support the CUDA compute capabilities required by recent PyTorch and Transformers builds.
- Running on CPU ensures the notebooks behave consistently across all machines, including student laptops without dedicated GPUs.
- Results are effectively identical between CPU and GPU; the only difference is speed (GPU can be faster, when supported).
## ⚡ Can I use the GPU instead?

Yes. If your GPU is compatible and you have a working CUDA installation, simply remove the explicit `{"device": "cpu"}` option. For example:
```python
embedding_model = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
```

or, for text generation:
```python
generator = pipeline("text-generation", model="gpt2")  # uses GPU if available
```

## 📓 Notebooks

- `00_intro.ipynb` — Environment check and first LLM calls
- `01_ngram_vs_llm.ipynb` — Classic n-gram models vs. LLMs
- `02_minimal_agent.ipynb` — LLM as brain, agent as body
- `03_prompting_patterns.ipynb` — Prompting and interaction patterns
- `04_tool_calling.ipynb` — Structured tool calling with LangChain
- `05_rag_pipeline.ipynb` — Retrieval-Augmented Generation (RAG) with ChromaDB
- `06_text_to_sql.ipynb` — Text-to-SQL pipeline with SQLite
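Since `00_intro.ipynb` begins with an environment check, a sketch like the following can confirm the dependencies installed correctly. The package list here is an assumption; adjust it to match `requirements.txt`:

```python
import importlib

def check_environment(packages=("torch", "transformers", "langchain", "chromadb")):
    """Report the installed version of each package, or flag it as missing."""
    report = {}
    for name in packages:
        try:
            module = importlib.import_module(name)
            report[name] = getattr(module, "__version__", "unknown version")
        except ImportError:
            report[name] = "NOT INSTALLED"
    return report

for package, status in check_environment().items():
    print(f"{package}: {status}")
```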