```bash
pip install openplugin-framework
```

Or install from source:

```bash
git clone https://github.com/yourusername/openplugin
cd openplugin
pip install -e .
```

Create a plugin directory structure:
```
my-plugin/
├── .claude-plugin/
│   └── plugin.json
├── commands/
│   └── my-command.md
└── README.md
```
Create `.claude-plugin/plugin.json`:

```json
{
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "My awesome plugin",
  "author": "Your Name"
}
```

Create `commands/my-command.md`:
```markdown
# My Command

This command does something awesome.

## Usage

Describe how to use this command.
```

Load the plugin and run its command:

```python
import asyncio

from openplugin import PluginManager, OpenAIProvider


async def main():
    # Initialize the plugin manager and discover plugins
    manager = PluginManager(plugins_dir="./plugins")
    manager.load_plugins()

    # Initialize a provider
    provider = OpenAIProvider(api_key="your-api-key")

    # Execute a command from the plugin
    result = await manager.execute_command(
        "my-plugin",
        "my-command",
        provider=provider,
        user_input="Hello!",
    )
    print(result)

    await manager.shutdown()


asyncio.run(main())
```

Plugins follow this structure:
```
plugin-name/
├── .claude-plugin/
│   └── plugin.json     # Required: Plugin manifest
├── .mcp.json           # Optional: MCP server configuration
├── commands/           # Optional: Slash commands (.md files)
├── agents/             # Optional: Agent definitions (.md files)
├── skills/             # Optional: Skill definitions
│   └── skill-name/
│       └── SKILL.md
└── README.md           # Optional: Plugin documentation
```
The `plugin.json` file supports these fields:

- `name` (required): Plugin name (kebab-case)
- `version` (required): Plugin version (semver)
- `description` (optional): Plugin description
- `author` (optional): Plugin author
- `homepage` (optional): Plugin homepage URL
- `repository` (optional): Plugin repository URL
- `license` (optional): Plugin license
- `keywords` (optional): List of keywords
- `dependencies` (optional): Plugin dependencies
- `mcp_servers` (optional): MCP server configurations
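Putting these fields together, a fuller manifest might look like the sketch below. The values are illustrative, and the exact shapes of `dependencies` and `mcp_servers` are not shown here since they are not specified above:

```json
{
  "name": "my-plugin",
  "version": "1.2.0",
  "description": "My awesome plugin",
  "author": "Your Name",
  "homepage": "https://example.com/my-plugin",
  "repository": "https://github.com/yourusername/my-plugin",
  "license": "MIT",
  "keywords": ["example", "demo"]
}
```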
To add MCP server support, create `.mcp.json`:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["-m", "my_mcp_server"],
      "env": {
        "API_KEY": "${MY_API_KEY}"
      }
    }
  }
}
```

Configure the built-in OpenAI provider:

```python
from openplugin import OpenAIProvider

provider = OpenAIProvider(
    api_key="your-api-key",
    model="gpt-4",  # or "gpt-3.5-turbo"
    temperature=0.7,
)
```

Implement the `LLMProvider` interface:
```python
from openplugin.providers.base import LLMProvider


class MyProvider(LLMProvider):
    async def execute_command(self, command_content, user_input, mcp_tools=None, **kwargs):
        # Your implementation
        pass

    async def execute_agent(self, agent_content, user_input, mcp_tools=None, **kwargs):
        # Your implementation
        pass

    async def chat(self, messages, tools=None, **kwargs):
        # Your implementation
        pass
```

- See examples/ for more usage examples
- Check out plugins/ for example plugins
- Read the API documentation for a detailed API reference
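To make the `LLMProvider` interface above concrete, here is a minimal, self-contained sketch of a toy provider. It uses a stand-in base class (with the same method signatures as shown above) instead of importing `openplugin`, so it runs anywhere; a real provider would subclass `openplugin.providers.base.LLMProvider` and call an actual model:

```python
import asyncio
from abc import ABC, abstractmethod


# Stand-in base class so this sketch runs without openplugin installed;
# in a real provider, subclass openplugin.providers.base.LLMProvider instead.
class LLMProvider(ABC):
    @abstractmethod
    async def chat(self, messages, tools=None, **kwargs): ...


class EchoProvider(LLMProvider):
    """Toy provider that echoes input instead of calling a model."""

    async def execute_command(self, command_content, user_input, mcp_tools=None, **kwargs):
        # A real provider would combine command_content and user_input
        # into a prompt and send it to an LLM.
        return f"[command] {user_input}"

    async def execute_agent(self, agent_content, user_input, mcp_tools=None, **kwargs):
        return f"[agent] {user_input}"

    async def chat(self, messages, tools=None, **kwargs):
        # Echo the content of the last message back.
        return messages[-1]["content"]


print(asyncio.run(EchoProvider().execute_command("cmd", "Hello!")))  # [command] Hello!
```

Because the methods are `async`, callers drive them with `await` (or `asyncio.run` at the top level), matching the quick-start example above.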