4 changes: 2 additions & 2 deletions README.md
@@ -83,8 +83,8 @@ export default {
profile: false,
tokenCache: true,

- // Ollama configuration (if using local models)
- ollamaBaseUrl: 'http://localhost:11434',
+ // Base URL configuration (for providers that need it)
+ baseUrl: 'http://localhost:11434', // Example for Ollama
};
```

4 changes: 4 additions & 0 deletions packages/docs/docs/providers/index.mdx
@@ -13,6 +13,8 @@ MyCoder currently supports the following LLM providers:
- [**Anthropic**](./anthropic.md) - Claude models from Anthropic
- [**OpenAI**](./openai.md) - GPT models from OpenAI
- [**Ollama**](./ollama.md) - Self-hosted open-source models via Ollama
- [**Local OpenAI Compatible**](./local-openai.md) - GPUStack and other OpenAI-compatible servers
- [**xAI**](./xai.md) - Grok models from xAI

## Configuring Providers

@@ -52,3 +54,5 @@ For detailed instructions on setting up each provider, see the provider-specific
- [Anthropic Configuration](./anthropic.md)
- [OpenAI Configuration](./openai.md)
- [Ollama Configuration](./ollama.md)
- [Local OpenAI Compatible Configuration](./local-openai.md)
- [xAI Configuration](./xai.md)
123 changes: 123 additions & 0 deletions packages/docs/docs/providers/local-openai.md
@@ -0,0 +1,123 @@
---
sidebar_position: 5
---

# Local OpenAI Compatible Servers

MyCoder supports connecting to local or self-hosted OpenAI-compatible API servers, including [GPUStack](https://gpustack.ai/), [LM Studio](https://lmstudio.ai/), [Ollama's OpenAI compatibility mode](https://github.com/ollama/ollama/blob/main/docs/openai.md), and [LocalAI](https://localai.io/).

## Setup

To use a local OpenAI-compatible server with MyCoder:

1. Install and set up your preferred OpenAI-compatible server
2. Start the server according to its documentation
3. Configure MyCoder to connect to your local server

### Configuration

Configure MyCoder to use your local OpenAI-compatible server in your `mycoder.config.js` file:

```javascript
export default {
// Provider selection - use gpustack for any OpenAI-compatible server
provider: 'gpustack',
model: 'llama3.2', // Use the model name available on your server

// The base URL for your local server
baseUrl: 'http://localhost:80', // Default for GPUStack, adjust as needed

// Other MyCoder settings
maxTokens: 4096,
temperature: 0.7,
// ...
};
```

## GPUStack

[GPUStack](https://gpustack.ai/) is a solution for running AI models on your own hardware. It provides an OpenAI-compatible API server that works seamlessly with MyCoder.

### Setting up GPUStack

1. Install GPUStack following the instructions on their website
2. Start the GPUStack server
3. Configure MyCoder to use the `gpustack` provider

```javascript
export default {
provider: 'gpustack',
model: 'llama3.2', // Choose a model available on your GPUStack instance
baseUrl: 'http://localhost:80', // Default GPUStack URL
};
```

## Other OpenAI-Compatible Servers

You can use MyCoder with any OpenAI-compatible server by setting the appropriate `baseUrl`:

### LM Studio

```javascript
export default {
provider: 'gpustack',
model: 'llama3', // Use the model name as configured in LM Studio
baseUrl: 'http://localhost:1234', // Default LM Studio server URL
};
```

### LocalAI

```javascript
export default {
provider: 'gpustack',
model: 'gpt-3.5-turbo', // Use the model name as configured in LocalAI
baseUrl: 'http://localhost:8080', // Default LocalAI server URL
};
```

### Ollama (OpenAI Compatibility Mode)

```javascript
export default {
provider: 'gpustack',
model: 'llama3', // Use the model name as configured in Ollama
baseUrl: 'http://localhost:11434/v1', // Ollama OpenAI compatibility endpoint
};
```

## Hardware Requirements

Running LLMs locally requires significant hardware resources:

- Minimum 16GB RAM (32GB+ recommended)
- GPU with at least 8GB VRAM for optimal performance
- SSD storage for model files (models can be 5-20GB each)

## Best Practices

- Ensure your local server and the selected model support tool calling/function calling
- Use models optimized for coding tasks when available
- Monitor your system resources when running large models locally
- Consider using a dedicated machine for hosting your local server

## Troubleshooting

If you encounter issues with local OpenAI-compatible servers:

- Verify the server is running and accessible at the configured base URL
- Check that the model name exactly matches what's available on your server
- Ensure the model supports tool/function calling (required for MyCoder); a test request for this is shown below
- Check server logs for specific error messages
- Test the server with a simple curl command to verify API compatibility:

```bash
curl http://localhost:80/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama3.2",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
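
If basic chat works but MyCoder still fails, you can also probe tool-calling support directly. This is a sketch using the standard OpenAI `tools` parameter; the model name, port, and the `get_weather` function are placeholders, not part of any real setup:

```bash
curl http://localhost:80/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

A compatible server should respond with a `tool_calls` entry in the assistant message; an error or a plain-text answer suggests the model or server does not support function calling.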

For more information, refer to the documentation for your specific OpenAI-compatible server.
2 changes: 1 addition & 1 deletion packages/docs/docs/providers/ollama.md
@@ -62,7 +62,7 @@ export default {
model: 'medragondot/Sky-T1-32B-Preview:latest',

// Optional: Custom base URL (defaults to http://localhost:11434)
- // ollamaBaseUrl: 'http://localhost:11434',
+ // baseUrl: 'http://localhost:11434',

// Other MyCoder settings
maxTokens: 4096,
80 changes: 80 additions & 0 deletions packages/docs/docs/providers/xai.md
@@ -0,0 +1,80 @@
---
sidebar_position: 6
---

# xAI (Grok)

[xAI](https://x.ai/) is the company behind Grok, a family of large language models with strong reasoning capabilities and support for tool calling.

## Setup

To use Grok models with MyCoder, you need an xAI API key:

1. Create an account at [xAI](https://x.ai/)
2. Navigate to the API Keys section and create a new API key
3. Set the API key as an environment variable or in your configuration file

### Environment Variables

You can set the xAI API key as an environment variable:

```bash
export XAI_API_KEY=your_api_key_here
```

### Configuration

Configure MyCoder to use xAI's Grok in your `mycoder.config.js` file:

```javascript
export default {
// Provider selection
provider: 'xai',
model: 'grok-2-latest',

// Optional: Set API key directly (environment variable is preferred)
// xaiApiKey: 'your_api_key_here',

// Other MyCoder settings
maxTokens: 4096,
temperature: 0.7,
// ...
};
```

## Supported Models

xAI offers several Grok models with different capabilities:

- `grok-2-latest` (recommended) - The latest Grok-2 model with strong reasoning and tool-calling capabilities
- `grok-1` - The original Grok model

## Best Practices

- Grok models excel at coding tasks and technical problem-solving
- They have strong tool-calling capabilities, making them suitable for MyCoder workflows
- For complex programming tasks, use Grok-2 models for best results
- Provide clear, specific instructions for optimal results

## Custom Base URL

If you need to use a different base URL for the xAI API (for example, if you're using a proxy or if xAI changes their API endpoint), you can specify it in your configuration:

```javascript
export default {
provider: 'xai',
model: 'grok-2-latest',
baseUrl: 'https://api.x.ai/v1', // Default xAI API URL
};
```

## Troubleshooting

If you encounter issues with xAI's Grok:

- Verify your API key is correct and has sufficient quota
- Check that you're using a supported model name
- For tool-calling issues, ensure your functions are properly formatted
- Monitor your token usage to avoid unexpected costs
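
As with other OpenAI-compatible APIs, you can sanity-check your key and model name with a direct request. A minimal sketch, assuming the default base URL shown above and the `XAI_API_KEY` environment variable set earlier:

```bash
curl https://api.x.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -d '{
    "model": "grok-2-latest",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

If this request succeeds but MyCoder still fails, the problem is more likely in your configuration file than with the API itself.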

For more information, visit the [xAI Documentation](https://x.ai/docs).