Commit e6565a0

Merge pull request #128 from codelion/fix-update-documentation
Update contribution and setup docs for LLM configuration
2 parents: 92c7f7c + bb9f9df

File tree: 14 files changed (+1316 / −16 lines)


CONTRIBUTING.md

Lines changed: 28 additions & 3 deletions
````diff
@@ -6,8 +6,15 @@ Thank you for your interest in contributing to OpenEvolve! This document provide
 
 1. Fork the repository
 2. Clone your fork: `git clone https://github.com/codelion/openevolve.git`
-3. Install the package in development mode: `pip install -e .`
-4. Run the tests to ensure everything is working: `python -m unittest discover tests`
+3. Install the package in development mode: `pip install -e ".[dev]"`
+4. Set up environment for testing:
+   ```bash
+   # Unit tests don't require a real API key, but the environment variable must be set
+   export OPENAI_API_KEY=test-key-for-unit-tests
+   ```
+5. Run the tests to ensure everything is working: `python -m unittest discover tests`
+
+**Note**: The unit tests do not make actual API calls to OpenAI or any LLM provider. However, the `OPENAI_API_KEY` environment variable must be set to any non-empty value for the tests to run. You can use a placeholder value like `test-key-for-unit-tests`.
 
 ## Development Environment
 
@@ -17,14 +24,32 @@ We recommend using a virtual environment for development:
 python -m venv env
 source env/bin/activate # On Windows: env\Scripts\activate
 pip install -e ".[dev]"
+
+# For running tests (no actual API calls are made)
+export OPENAI_API_KEY=test-key-for-unit-tests
+
+# For testing with real LLMs during development
+# export OPENAI_API_KEY=your-actual-api-key
 ```
 
+### LLM Configuration for Development
+
+When developing features that interact with LLMs:
+
+1. **Local Development**: Use a mock API key for unit tests
+2. **Integration Testing**: Use your actual API key and configure `api_base` if using alternative providers
+3. **Cost Management**: Consider using cheaper models or [optillm](https://github.com/codelion/optillm) for rate limiting during development
+
 ## Pull Request Process
 
 1. Create a new branch for your feature or bugfix: `git checkout -b feat-your-feature-name`
 2. Make your changes
 3. Add tests for your changes
-4. Run the tests to make sure everything passes: `python -m unittest discover tests`
+4. Run the tests to make sure everything passes:
+   ```bash
+   export OPENAI_API_KEY=test-key-for-unit-tests
+   python -m unittest discover tests
+   ```
 5. Commit your changes: `git commit -m "Add your descriptive commit message"`
 6. Push to your fork: `git push origin feature/your-feature-name`
 7. Submit a pull request to the main repository
````
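The setup change above hinges on one detail: the unit tests never call an LLM, but they refuse to run when `OPENAI_API_KEY` is empty. A minimal sketch of that guard, with a hypothetical helper name (not OpenEvolve's actual code), assuming any non-empty value passes:

```python
import os


def require_api_key(env=None):
    """Return the configured API key, or fail fast with a helpful message.

    Hypothetical helper illustrating the check described in the diff:
    any non-empty value (e.g. a placeholder) satisfies it, because the
    unit tests never make real API calls.
    """
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        raise ValueError("Please set OPENAI_API_KEY environment variable")
    return key
```

A placeholder such as `test-key-for-unit-tests` passes this check; only integration runs against a real provider need a genuine key.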

README.md

Lines changed: 31 additions & 3 deletions
````diff
@@ -42,13 +42,41 @@ pip install -e .
 
 ### Quick Start
 
-We use the OpenAI SDK, so you can use any LLM or provider that supports an OpenAI compatible API. Just set the `OPENAI_API_KEY` environment variable
-and update the `api_base` in config.yaml if you are using a provider other than OpenAI. For local models, you can use
-an inference server like [optillm](https://github.com/codelion/optillm).
+#### Setting up LLM Access
+
+OpenEvolve uses the OpenAI SDK, which means it works with any LLM provider that supports an OpenAI-compatible API:
+
+1. **Set the API Key**: Export the `OPENAI_API_KEY` environment variable:
+   ```bash
+   export OPENAI_API_KEY=your-api-key-here
+   ```
+
+2. **Using Alternative LLM Providers**:
+   - For providers other than OpenAI (e.g., Anthropic, Cohere, local models), update the `api_base` in your config.yaml:
+     ```yaml
+     llm:
+       api_base: "https://your-provider-endpoint.com/v1"
+     ```
+
+3. **Maximum Flexibility with optillm**:
+   - For advanced routing, rate limiting, or using multiple providers, we recommend [optillm](https://github.com/codelion/optillm)
+   - optillm acts as a proxy that can route requests to different LLMs based on your rules
+   - Simply point `api_base` to your optillm instance:
+     ```yaml
+     llm:
+       api_base: "http://localhost:8000/v1"
+     ```
+
+This setup ensures OpenEvolve can work with any LLM provider - OpenAI, Anthropic, Google, Cohere, local models via Ollama/vLLM, or any OpenAI-compatible endpoint.
 
 ```python
+import os
 from openevolve import OpenEvolve
 
+# Ensure API key is set
+if not os.environ.get("OPENAI_API_KEY"):
+    raise ValueError("Please set OPENAI_API_KEY environment variable")
+
 # Initialize the system
 evolve = OpenEvolve(
     initial_program_path="path/to/initial_program.py",
````