pawel-twardziak/langchain_lcel_runnable
LCEL Learning Lab (LangChain + LangGraph)

This repository is a hands-on learning lab for LangChain Expression Language (LCEL) and a minimal LangGraph example. It provides:

  • A comprehensive set of runnable LCEL examples, ordered from foundational to advanced (src/runnable_api.py, src/runnable.py).
  • An interactive CLI to run examples in order and view source + output (src/runnable_cli.py).
  • A Chainlit chat UI that lets you pick a mode each turn, shows the example’s source, and streams the live output (src/chainlit_app.py).
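LCEL's core idea is composing runnables into chains with the | operator. The real examples live in src/runnable_api.py; as a taste of the pattern, here is a toy re-creation of pipe composition in plain Python (this is an illustrative sketch, not the actual LangChain API):

```python
# Toy illustration of LCEL-style composition: each "runnable" wraps a
# function, and | chains them left to right (not the real LangChain classes).
class ToyRunnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Chain: the output of self feeds the input of other.
        return ToyRunnable(lambda value: other.invoke(self.invoke(value)))


prompt = ToyRunnable(lambda topic: f"Tell me about {topic}.")
shout = ToyRunnable(str.upper)

chain = prompt | shout
print(chain.invoke("LCEL"))  # TELL ME ABOUT LCEL.
```

In real LCEL the same | composition works across prompts, models, and output parsers, which is what the examples in this repository walk through.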

Project Structure

  • src/runnable_api.py: Source of truth. Per-mode functions, the public API (MODES, run_mode, get_mode_source, get_next_suggested_mode), and runtime API-key management.
  • src/runnable.py: Thin CLI wrapper that delegates to run_mode (compatible with --mode).
  • src/runnable_cli.py: Interactive terminal UI suggesting the next mode and showing source + streamed output.
  • src/chainlit_app.py: Chainlit app that behaves like a chat with per-turn mode selection.
  • src/README_CLI.md: Extra details for the CLI.
  • tests: All tests are under src/ (e.g., test_runnable.py, test_runnable_units.py).
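The public API listed above suggests a simple registry pattern: a mapping from mode names to functions, plus helpers to fetch an example's source and the next mode in the learning path. A minimal sketch of how such a registry could work (the names MODES, run_mode, get_mode_source, and get_next_suggested_mode come from the README above; the bodies here are illustrative guesses, not the project's actual code):

```python
import inspect

# Hypothetical per-mode example functions (the real ones live in
# src/runnable_api.py and exercise actual LCEL features).
def basic():
    return "ran basic example"

def advanced():
    return "ran advanced example"

# Mode order defined once, reused by the CLI and the Chainlit app.
MODES = {"basic": basic, "advanced": advanced}

def run_mode(name):
    # Execute one example by name.
    return MODES[name]()

def get_mode_source(name):
    # Show the example's source code alongside its output.
    return inspect.getsource(MODES[name])

def get_next_suggested_mode(current):
    # Suggest the next mode in the defined order, or None at the end.
    order = list(MODES)
    i = order.index(current)
    return order[i + 1] if i + 1 < len(order) else None

print(run_mode("basic"))                 # ran basic example
print(get_next_suggested_mode("basic"))  # advanced
```

Keeping the order in one dict is what lets the CLI and the Chainlit app agree on the same learning path.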

Requirements

  • Python 3.9+
  • A virtual environment is recommended.

Installation

Option A: Direct (venv + pip)

# From repo root
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Option B: Using Makefile

# Create venv and install dependencies
make install

# (optional) Install dev tools
make dev

Configure LLM API Keys

This project supports Anthropic and OpenAI backends via LangChain. Anthropic takes precedence when both are set.

Set keys in your shell or in a local .env file at the project root (loaded automatically):

# Quick setup using the provided template:
cp example.env .env

# Then edit .env and set your keys

# Or directly in your shell:

# Anthropic (preferred if present)
export ANTHROPIC_API_KEY=your_key

# OpenAI (used if Anthropic not set)
export OPENAI_API_KEY=your_key

When you need offline/testing mode (no external API calls):

export LCEL_TEST_MODE=1

In this mode the app uses a lightweight fake model that returns deterministic responses while preserving output shape.
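The selection rules above (Anthropic preferred, OpenAI as fallback, fake model in test mode) amount to a short decision function. A hedged sketch of that logic, assuming only the environment variables named above; the real implementation lives in src/runnable_api.py and may differ:

```python
import os

def pick_backend(env=os.environ):
    # LCEL_TEST_MODE short-circuits everything: no external API calls.
    if env.get("LCEL_TEST_MODE") == "1":
        return "fake"
    # Anthropic takes precedence when both keys are set.
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    raise RuntimeError(
        "Set ANTHROPIC_API_KEY or OPENAI_API_KEY, or LCEL_TEST_MODE=1."
    )

print(pick_backend({"ANTHROPIC_API_KEY": "a", "OPENAI_API_KEY": "b"}))  # anthropic
print(pick_backend({"OPENAI_API_KEY": "b"}))                            # openai
print(pick_backend({"LCEL_TEST_MODE": "1"}))                            # fake
```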

Run the App (Chainlit GUI)

Start the chat UI that lets you pick any LCEL mode per turn. It shows the example source and streams execution output.

# Ensure the venv is active and dependencies installed
chainlit run src/chainlit_app.py -w

  • Use the mode selector chips to choose a mode. The app suggests the next one in the learning path.
  • You can set OPENAI_API_KEY / ANTHROPIC_API_KEY in the Chainlit settings panel at runtime; the backend reloads automatically.

Run the CLI

Interactive terminal experience with suggestions and progress tracking.

python src/runnable_cli.py

Run an individual example directly (same behavior as before the refactor):

python src/runnable.py --mode basic

Running Tests

# From repo root (venv activated)
pytest -q

Notes

  • Mode order is defined once in src/runnable_api.py (MODES) and reused by CLI and Chainlit.
  • All non-streaming log lines are emitted as full lines (newline-prefixed and newline-suffixed) for consistent rendering; streaming keeps raw chunks.
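The newline convention in the last note can be captured by a tiny helper; a sketch of the idea (the names emit_line and emit_chunk are illustrative, not taken from the codebase):

```python
def emit_line(text):
    # Non-streaming output: newline-prefixed and newline-suffixed, so a
    # log line renders cleanly no matter what was streamed before it.
    return f"\n{text}\n"

def emit_chunk(chunk):
    # Streaming output: pass raw chunks through untouched.
    return chunk

print(emit_line("step complete"), end="")
```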
