# afm-langchain

LangChain execution backend for Agent-Flavored Markdown (AFM) agents. This package implements the `AgentRunner` protocol from afm-core, providing LLM orchestration through the LangChain framework.

## Features
- AgentRunner Protocol Implementation: Pluggable backend for AFM agents
- LLM Provider Support: OpenAI and Anthropic models
- MCP Tool Integration: Connect external tools via Model Context Protocol
- Conversation Management: Session history and state management
- Plugin Registration: Auto-discovered via Python entry points
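As a sketch of how the first and last items fit together: the runner satisfies a protocol defined in afm-core and is picked up through a Python entry point. The signature below is inferred from the usage example later in this README, not copied from afm-core, so treat it as an approximation.

```python
# Approximate shape of the AgentRunner protocol (the authoritative
# definition lives in afm-core; this signature is inferred from the
# `runner.run(agent, user_input)` call shown under Usage).
from typing import Any, Protocol


class AgentRunner(Protocol):
    """Pluggable execution backend for AFM agents."""

    async def run(self, agent: Any, user_input: str) -> Any:
        """Execute one agent turn and return the result."""
        ...
```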
## Installation

This package is typically installed as part of afm-cli. For LangChain-specific use:

```bash
pip install afm-langchain
```

## Configuration

### OpenAI

```yaml
model:
  provider: openai
  name: gpt-4o  # or other OpenAI models
```

Requires the `OPENAI_API_KEY` environment variable.
### Anthropic

```yaml
model:
  provider: anthropic
  name: claude-sonnet-4-5  # or other Claude models
```

Requires the `ANTHROPIC_API_KEY` environment variable.
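To illustrate how these blocks map onto LangChain, a provider factory might look like the sketch below. `ChatOpenAI` and `ChatAnthropic` are the standard LangChain integrations for these providers, but `create_model` is a hypothetical name; the package's real factory lives in `model_factory.py` (see Project Structure below) and may differ.

```python
# Hypothetical provider factory keyed on the `model` config block above.
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI


def create_model(config: dict):
    """Instantiate a LangChain chat model from an AFM `model` config block."""
    provider = config["provider"]
    name = config["name"]
    if provider == "openai":
        return ChatOpenAI(model=name)      # reads OPENAI_API_KEY from the environment
    if provider == "anthropic":
        return ChatAnthropic(model=name)   # reads ANTHROPIC_API_KEY from the environment
    raise ValueError(f"Unsupported provider: {provider}")
```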
## Development

This project uses uv for dependency management.
```bash
# Clone the repository
git clone https://github.com/wso2/reference-implementations-afm.git
cd python-interpreter

# Install dependencies
uv sync

# Activate the virtual environment
source .venv/bin/activate
```

### Testing

```bash
# Run afm-langchain tests
uv run pytest packages/afm-langchain/tests/

# Run with coverage
uv run pytest packages/afm-langchain/tests/ --cov=afm_langchain
```

### Code Quality

```bash
# Format code
uv run ruff format

# Lint code
uv run ruff check
```

## Project Structure

```
packages/afm-langchain/src/afm_langchain/
├── __init__.py
├── backend.py         # LangChainRunner implementation
├── model_factory.py   # LLM provider factory
├── mcp_manager.py     # MCP tool management
└── tools_adapter.py   # Tool calling adapter
```
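The split between `mcp_manager.py` and `tools_adapter.py` suggests one module manages MCP server connections while the other exposes MCP tools to LangChain. Below is a minimal sketch of the adapter half, assuming LangChain's `StructuredTool` wrapper; `call_mcp_tool` is a hypothetical stand-in for whatever the manager actually provides.

```python
# Hypothetical sketch: exposing an MCP tool to a LangChain agent.
# `call_mcp_tool` stands in for the MCP client call that mcp_manager.py
# would provide; its name and signature are assumptions.
from langchain_core.tools import StructuredTool


async def call_mcp_tool(tool_name: str, query: str) -> str:
    """Placeholder for a call through an MCP client session."""
    raise NotImplementedError


def adapt_mcp_tool(tool_name: str, description: str) -> StructuredTool:
    """Wrap an MCP tool so a LangChain agent can call it like a native tool."""

    async def _invoke(query: str) -> str:
        return await call_mcp_tool(tool_name, query)

    return StructuredTool.from_function(
        coroutine=_invoke,
        name=tool_name,
        description=description,
    )
```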
## Usage

The LangChain backend is registered automatically and used when you run an AFM agent:
```python
from afm.runner import get_runner

# Get the LangChain runner
runner = get_runner("langchain")

# Run an agent (within an async context; `agent` and `user_input`
# come from your application)
result = await runner.run(agent, user_input)
```

For comprehensive documentation, see the project README.
## License

Apache-2.0