90 changes: 90 additions & 0 deletions .github/workflows/generate-llm-txt.yml
@@ -0,0 +1,90 @@
name: Generate LLM.txt

on:
  # Trigger on releases
  release:
    types: [published]

  # Trigger on pushes to main branch
  push:
    branches: [main]
    paths:
      - 'src/mcpm/commands/**'
      - 'src/mcpm/cli.py'
      - 'scripts/generate_llm_txt.py'

  # Allow manual trigger
  workflow_dispatch:

jobs:
  generate-llm-txt:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e .

      - name: Generate llm.txt
        run: |
          python scripts/generate_llm_txt.py

      - name: Check for changes
        id: check_changes
        run: |
          if git diff --quiet llm.txt; then
            echo "no_changes=true" >> $GITHUB_OUTPUT
          else
            echo "no_changes=false" >> $GITHUB_OUTPUT
          fi

      - name: Commit and push changes
        if: steps.check_changes.outputs.no_changes == 'false'
        run: |
          git config --local user.email "[email protected]"
          git config --local user.name "GitHub Action"
          git add llm.txt
          git commit -m "docs: update llm.txt for AI agents [skip ci]"
          git push
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Create Pull Request (for releases)
        if: github.event_name == 'release' && steps.check_changes.outputs.no_changes == 'false'
        uses: peter-evans/create-pull-request@v5
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          commit-message: "docs: update llm.txt for release ${{ github.event.release.tag_name }}"
          title: "πŸ“š Update llm.txt for AI agents (Release ${{ github.event.release.tag_name }})"
          body: |
            ## πŸ€– Automated llm.txt Update

            This PR automatically updates the llm.txt file for AI agents following the release of version ${{ github.event.release.tag_name }}.

            ### Changes
            - Updated command documentation
            - Refreshed examples and usage patterns
            - Updated version information

            ### What is llm.txt?
            llm.txt is a comprehensive guide for AI agents to understand how to interact with MCPM programmatically. It includes:
            - All CLI commands with parameters and examples
            - Environment variables for automation
            - Best practices for AI agent integration
            - Error handling and troubleshooting

            This file is automatically generated from the CLI structure using `scripts/generate_llm_txt.py`.
          branch: update-llm-txt-${{ github.event.release.tag_name }}
          delete-branch: true
42 changes: 42 additions & 0 deletions README.md
@@ -46,6 +46,7 @@ MCPM v2.0 provides a simplified approach to managing MCP servers with a global c
- πŸš€ **Direct Execution**: Run servers over stdio or HTTP for testing
- 🌐 **Public Sharing**: Share servers through secure tunnels
- πŸŽ›οΈ **Client Integration**: Manage configurations for Claude Desktop, Cursor, Windsurf, and more
- πŸ€– **AI Agent Friendly**: Non-interactive CLI with comprehensive automation support and [llm.txt](llm.txt) guide
- πŸ’» **Beautiful CLI**: Rich formatting and interactive interfaces
- πŸ“Š **Usage Analytics**: Monitor server usage and performance

@@ -145,6 +146,47 @@ mcpm migrate # Migrate from v1 to v2 configuration

The MCP Registry is a central repository of available MCP servers that can be installed using MCPM. The registry is available at [mcpm.sh/registry](https://mcpm.sh/registry).

## πŸ€– AI Agent Integration

MCPM is designed to be AI agent friendly with comprehensive automation support. Every interactive command has a non-interactive alternative using CLI parameters and environment variables.

### πŸ”§ Non-Interactive Mode

Set environment variables to enable full automation:

```bash
export MCPM_NON_INTERACTIVE=true # Disable all interactive prompts
export MCPM_FORCE=true # Skip confirmations
export MCPM_JSON_OUTPUT=true # JSON output for parsing
```
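As a sketch of how a wrapper script might consume these variables (the `automation_flags` helper below is illustrative, not part of MCPM's actual API), a robust convention is to treat any value other than the string `true` as disabled:

```python
import os

def automation_flags(environ=None):
    """Read the MCPM automation environment variables.

    The string "true" (case-insensitive) counts as enabled; anything
    else, including an unset variable, counts as disabled.
    """
    env = os.environ if environ is None else environ

    def enabled(name):
        return env.get(name, "").strip().lower() == "true"

    return {
        "non_interactive": enabled("MCPM_NON_INTERACTIVE"),
        "force": enabled("MCPM_FORCE"),
        "json_output": enabled("MCPM_JSON_OUTPUT"),
    }

# Example: only MCPM_NON_INTERACTIVE is set to "true"
flags = automation_flags({"MCPM_NON_INTERACTIVE": "true", "MCPM_FORCE": "no"})
```

Passing the environment in explicitly keeps the helper testable; a real wrapper would simply call `automation_flags()` and branch on the returned flags.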

### πŸ“‹ LLM.txt Guide

The [llm.txt](llm.txt) file provides comprehensive documentation specifically designed for AI agents, including:

- Complete command reference with parameters and examples
- Environment variable usage patterns
- Best practices for automation
- Error handling and troubleshooting
- Batch operation patterns

The llm.txt file is automatically generated from the CLI structure and kept up-to-date with each release.

### ⚑ Example Usage

```bash
# Server management
mcpm new myserver --type stdio --command "python -m server" --force
mcpm edit myserver --env "API_KEY=secret" --force

# Profile management
mcpm profile edit web-dev --add-server myserver --force
mcpm profile run web-dev --port 8080

# Client integration
mcpm client edit cursor --add-profile web-dev --force
```

## πŸ—ΊοΈ Roadmap

### βœ… v2.0 Complete
178 changes: 178 additions & 0 deletions docs/llm-txt-generation.md
@@ -0,0 +1,178 @@
# llm.txt Generation for AI Agents

## Overview

MCPM automatically generates an `llm.txt` file that provides comprehensive documentation for AI agents on how to interact with the MCPM CLI programmatically. This ensures that AI agents always have up-to-date information about command-line interfaces and parameters.

## What is llm.txt?

llm.txt is a markdown-formatted documentation file designed to help Large Language Model (LLM) agents understand how to interact with CLI tools. It includes:

- **Complete command reference** with all parameters and options
- **Usage examples** for common scenarios
- **Environment variables** for automation
- **Best practices** for AI agent integration
- **Error codes and troubleshooting** information

## Automatic Generation

The llm.txt file is automatically generated using the `scripts/generate_llm_txt.py` script, which:

1. **Introspects the CLI structure** using Click's command hierarchy
2. **Extracts parameter information** including types, defaults, and help text
3. **Generates relevant examples** based on command patterns
4. **Includes environment variables** and automation patterns
5. **Formats everything** in a structured, AI-agent friendly format
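The introspection step can be sketched with Click's public API. The toy `cli` group below stands in for MCPM's real command tree, and the recursive `walk` helper is illustrative rather than the actual script:

```python
import click

@click.group()
def cli():
    """Manage MCP servers."""

@cli.command()
@click.option("--type", "server_type", default="stdio", help="Server transport type.")
@click.option("--force", is_flag=True, help="Skip confirmation prompts.")
def new(server_type, force):
    """Create a new server."""

def walk(cmd, path, entries):
    """Depth-first walk over a Click command tree, collecting each
    command's invocation path, help text, and declared option flags."""
    opts = [p.opts[0] for p in cmd.params if isinstance(p, click.Option)]
    entries.append({"path": path, "help": cmd.help or "", "options": opts})
    # Groups expose their subcommands via the `commands` dict; plain
    # commands have no such attribute, which terminates the recursion.
    for name, sub in getattr(cmd, "commands", {}).items():
        walk(sub, f"{path} {name}", entries)
    return entries

entries = walk(cli, "mcpm", [])
```

Each entry carries enough structure (path, help, options) to render a markdown section per command, which is essentially what the generator does across the full hierarchy.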

## Generation Triggers

The llm.txt file is regenerated automatically in these scenarios:

### 1. GitHub Actions (CI/CD)

- **On releases**: When a new version is published
- **On main branch commits**: When CLI-related files change
- **Manual trigger**: Via GitHub Actions workflow dispatch

### 2. Local Development

Developers can manually regenerate the file:

```bash
# Using the generation script directly
python scripts/generate_llm_txt.py

# Using the convenience script
./scripts/update-llm-txt.sh
```

## File Structure

The generated llm.txt follows this structure:

```
# MCPM - AI Agent Guide

## Overview
- Tool description
- Key concepts

## Environment Variables for AI Agents
- MCPM_NON_INTERACTIVE
- MCPM_FORCE
- MCPM_JSON_OUTPUT
- Server-specific variables

## Command Reference
- Each command with parameters
- Usage examples
- Subcommands recursively

## Best Practices for AI Agents
- Automation patterns
- Error handling
- Common workflows

## Troubleshooting
- Common issues and solutions
```

## Customization

### Adding New Examples

To add examples for new commands, edit the `example_map` in `scripts/generate_llm_txt.py`:

```python
example_map = {
    'mcpm new': [
        '# Create a stdio server',
        'mcpm new myserver --type stdio --command "python -m myserver"',
    ],
    'mcpm your-new-command': [
        '# Your example here',
        'mcpm your-new-command --param value',
    ],
}
```

### Modifying Sections

The script generates several predefined sections. To modify content:

1. Edit the `generate_llm_txt()` function
2. Update the `lines` list with your changes
3. Test locally: `python scripts/generate_llm_txt.py`
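The lines-then-join pattern the steps above refer to can be sketched as follows; `render_llm_txt` and its input shape are simplified stand-ins for the real `generate_llm_txt()`, which covers many more sections:

```python
def render_llm_txt(commands):
    """Assemble the document as a list of lines, then join once at the end.

    `commands` maps a command path to a one-line description -- a
    deliberately simplified version of the generator's real inputs.
    """
    lines = ["# MCPM - AI Agent Guide", "", "## Command Reference", ""]
    for path, description in sorted(commands.items()):
        lines.append(f"### `{path}`")
        lines.append("")
        lines.append(description)
        lines.append("")
    return "\n".join(lines)

doc = render_llm_txt({
    "mcpm new": "Create a new server.",
    "mcpm ls": "List installed servers.",
})
```

Building a `lines` list and joining once keeps each section easy to edit in isolation, which is why step 2 above talks about updating the `lines` list rather than string concatenation.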

## Integration with CI/CD

The GitHub Actions workflow (`.github/workflows/generate-llm-txt.yml`) handles:

1. **Automatic updates** when CLI changes are detected
2. **Pull request creation** for releases
3. **Version tracking** in the generated file
4. **Error handling** if generation fails

### Workflow Configuration

Key configuration options in the GitHub Actions workflow:

- **Trigger paths**: Only runs when CLI-related files change
- **Commit behavior**: Auto-commits changes with `[skip ci]`
- **Release behavior**: Creates PRs for manual review
- **Dependencies**: Installs MCPM before generation

## Benefits for AI Agents

1. **Always Up-to-Date**: Automatically reflects CLI changes
2. **Comprehensive**: Covers all commands, parameters, and options
3. **Structured**: Consistent format for parsing
4. **Practical**: Includes real-world usage examples
5. **Complete**: Covers automation, error handling, and troubleshooting

## Maintenance

### Updating the Generator

When adding new CLI commands or options:

1. The generator automatically detects new commands via Click introspection
2. Add specific examples to the `example_map` if needed
3. Update environment variable documentation if new variables are added
4. Test locally before committing

### Version Compatibility

The generator is designed to be compatible with:

- **Click framework**: Uses standard Click command introspection
- **Python 3.8+**: Compatible with the MCPM runtime requirements
- **Cross-platform**: Works on Linux, macOS, and Windows

### Troubleshooting Generation

If the generation fails:

1. **Check imports**: Ensure all MCPM modules can be imported
2. **Verify CLI structure**: Ensure commands are properly decorated
3. **Test locally**: Run `python scripts/generate_llm_txt.py`
4. **Check dependencies**: Ensure Click and other deps are installed

## Contributing

When contributing new CLI features:

1. **Add examples** to the example map for new commands
2. **Document environment variables** if you add new ones
3. **Test generation** locally before submitting PR
4. **Update this documentation** if you modify the generation process

## Future Enhancements

Potential improvements to the generation system:

- **JSON Schema generation** for structured API documentation
- **Interactive examples** with expected outputs
- **Multi-language examples** for different automation contexts
- **Plugin system** for custom documentation sections
- **Integration testing** to verify examples work correctly