Hello, this works well. Just wanted to share: I got this running under Docker Compose and was then able to connect to it in VS Code using the Continue extension. Here's what the two important configurations look like. Hopefully it helps speed things up for someone else.
Docker Compose
```yaml
services:
  mcp-outline:
    # Use the pre-built image or your locally built image
    image: ghcr.io/vortiago/mcp-outline:latest # Example image path
    ports:
      # Maps container port 3001 to host port 3001
      - "3001:3001"
    environment:
      # Enables HTTP (SSE) transport for connectivity
      - MCP_TRANSPORT=sse
      # Replace with your actual Outline API key
      - OUTLINE_API_KEY=your_api_key
      # Replace with the API URL of your self-hosted Outline instance
      - OUTLINE_API_URL=https://your-outline-instance.com/api
```

Continue MCP configuration
```yaml
name: Outline Knowledge
version: 0.0.1
schema: v1
mcpServers:
  - name: Outline Search
    # CRITICAL CHANGE: use the 'sse' type for remote HTTP connections
    type: sse
    # URL of the running Docker container's SSE endpoint.
    # This assumes you mapped container port 3001 to host port 3001.
    url: http://your-dockercompose-server:3001/sse
    # Credentials are NOT needed here, as they are already injected
    # as environment variables into the Docker container.
```

I found that qwen3-coder:30b was the best local LLM that could manage this as an agentic tool from the Continue extension. Well, best given my resources anyway; I'm sure larger models may fare better. I had some interesting failures with gpt20b, where it was less reliable, and the new IBM Granite also fell flat on agentic tasks.
Anyway, really cool, thanks!