AIML-Solutions/LangChainTools
hf-lc-agent-pipeline

Minimal, reproducible LangChain agent setup backed by Hugging Face Transformers (local), plus a lightweight evaluation harness:

  • Initialize an agent with a consistent prompt + toolset
  • Run scenario-driven evals from YAML
  • Assert tool usage and output patterns
  • Emit JSONL traces to artifacts/ for inspection / regression tracking
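The eval loop above can be sketched in a few lines of Python. The scenario fields (`name`, `prompt`, `expected_tools`, `output_pattern`) and the `artifacts/traces.jsonl` path are illustrative assumptions here, not the repository's actual schema:

```python
import json
import re
from pathlib import Path

# Hypothetical scenario record, mirroring what one YAML eval entry might contain.
scenario = {
    "name": "calculator-basic",
    "prompt": "What is 12*13? Use the calculator tool.",
    "expected_tools": ["calculator"],
    "output_pattern": r"\b156\b",
}

def check_scenario(scenario: dict, tools_used: list[str], final_output: str) -> dict:
    """Assert tool usage and output pattern for a single scenario run."""
    missing = [t for t in scenario["expected_tools"] if t not in tools_used]
    pattern_ok = re.search(scenario["output_pattern"], final_output) is not None
    return {"name": scenario["name"], "missing_tools": missing, "pattern_ok": pattern_ok}

def write_trace(record: dict, artifacts_dir: str = "artifacts") -> Path:
    """Append one JSONL trace record for inspection / regression tracking."""
    out = Path(artifacts_dir)
    out.mkdir(exist_ok=True)
    path = out / "traces.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return path

result = check_scenario(scenario, tools_used=["calculator"], final_output="12*13 = 156")
```

A real harness would load scenarios with `yaml.safe_load` and feed each result through `write_trace`; the shape above is just the assertion core.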

Quickstart (WSL2 + uv)

uv run python -m hf_lc_agent_pipeline.runner --prompt "What is 12*13? Use the calculator tool."
uv run python scripts/run_evals.py
uv run pytest -q

Create the config module:

cat > src/hf_lc_agent_pipeline/config.py << 'EOF'
from __future__ import annotations

from pydantic import BaseModel, Field


class HFModelConfig(BaseModel):
    """
    Config for a local Hugging Face Transformers text-generation pipeline.

    Defaults are intentionally small/CPU-friendly.
    Swap model_id to something instruct-tuned if you have a GPU.
    """
    model_id: str = Field(default="distilgpt2", description="HF model id for local text-generation")
    max_new_tokens: int = Field(default=128, ge=1, le=2048)
    temperature: float = Field(default=0.0, ge=0.0, le=2.0)
    top_p: float = Field(default=1.0, ge=0.0, le=1.0)
    do_sample: bool = Field(default=False)
    repetition_penalty: float = Field(default=1.0, ge=0.5, le=2.0)
    seed: int = Field(default=7)


class AgentConfig(BaseModel):
    """
    Agent behavior config. Keep this minimal and reproducible.
    """
    verbose: bool = Field(default=False)
    max_iterations: int = Field(default=6, ge=1, le=50)
    return_intermediate_steps: bool = Field(default=True)
EOF
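A minimal sketch of how `HFModelConfig` fields might map onto generation kwargs for a Transformers text-generation pipeline. The `generation_kwargs` helper and the plain-dict stand-in for the config are illustrative assumptions, not the repository's actual wiring:

```python
# Plain-dict stand-in for HFModelConfig defaults, so the sketch stays
# dependency-free; the real code would use the pydantic model above.
DEFAULTS = {
    "model_id": "distilgpt2",
    "max_new_tokens": 128,
    "temperature": 0.0,
    "top_p": 1.0,
    "do_sample": False,
    "repetition_penalty": 1.0,
    "seed": 7,
}

def generation_kwargs(cfg: dict) -> dict:
    """Build generation kwargs; sampling knobs only apply when do_sample=True."""
    kwargs = {
        "max_new_tokens": cfg["max_new_tokens"],
        "do_sample": cfg["do_sample"],
        "repetition_penalty": cfg["repetition_penalty"],
    }
    if cfg["do_sample"]:
        kwargs["temperature"] = cfg["temperature"]
        kwargs["top_p"] = cfg["top_p"]
    return kwargs

greedy = generation_kwargs(DEFAULTS)
```

With the greedy defaults, `temperature` and `top_p` are deliberately omitted so Transformers does not warn about unused sampling parameters; with a GPU model the same kwargs would be passed to `transformers.pipeline("text-generation", model=cfg["model_id"], ...)`, with `transformers.set_seed(cfg["seed"])` pinning sampled runs.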

About

LangChain tooling lane for deterministic agent workflows, eval harnesses, and production-ready integration patterns.
