Update contribution and setup docs for LLM configuration
Improved instructions in CONTRIBUTING.md and README.md for setting up the development environment, running tests, and configuring LLM providers. Added details on using mock API keys for testing, clarified environment variable requirements, and provided guidance for integrating with alternative LLM providers and optillm.
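The mock-key testing workflow this commit documents can be sketched as a shell session (the key value is the placeholder the docs use; the unit tests make no real API calls):

```shell
# Any non-empty placeholder satisfies the unit tests' environment check
export OPENAI_API_KEY=test-key-for-unit-tests

# Confirm the variable is set before invoking the test suite
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set"
```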
### CONTRIBUTING.md (28 additions, 3 deletions)
````diff
@@ -6,8 +6,15 @@ Thank you for your interest in contributing to OpenEvolve! This document provide
 1. Fork the repository
 2. Clone your fork: `git clone https://github.com/codelion/openevolve.git`
-3. Install the package in development mode: `pip install -e .`
-4. Run the tests to ensure everything is working: `python -m unittest discover tests`
+3. Install the package in development mode: `pip install -e ".[dev]"`
+4. Set up the environment for testing:
+   ```bash
+   # Unit tests don't require a real API key, but the environment variable must be set
+   export OPENAI_API_KEY=test-key-for-unit-tests
+   ```
+5. Run the tests to ensure everything is working: `python -m unittest discover tests`
+
+**Note**: The unit tests do not make actual API calls to OpenAI or any LLM provider. However, the `OPENAI_API_KEY` environment variable must be set to any non-empty value for the tests to run. You can use a placeholder value like `test-key-for-unit-tests`.

 ## Development Environment

@@ -17,14 +24,32 @@ We recommend using a virtual environment for development:
 python -m venv env
 source env/bin/activate  # On Windows: env\Scripts\activate
 pip install -e ".[dev]"
+
+# For running tests (no actual API calls are made)
+export OPENAI_API_KEY=test-key-for-unit-tests
+
+# For testing with real LLMs during development
+# export OPENAI_API_KEY=your-actual-api-key
 ```

+### LLM Configuration for Development
+
+When developing features that interact with LLMs:
+
+1. **Local Development**: Use a mock API key for unit tests
+2. **Integration Testing**: Use your actual API key and configure `api_base` if using alternative providers
+3. **Cost Management**: Consider using cheaper models or [optillm](https://github.com/codelion/optillm) for rate limiting during development
+
 ## Pull Request Process

 1. Create a new branch for your feature or bugfix: `git checkout -b feat-your-feature-name`
 2. Make your changes
 3. Add tests for your changes
-4. Run the tests to make sure everything passes: `python -m unittest discover tests`
+4. Run the tests to make sure everything passes:
+   ```bash
+   export OPENAI_API_KEY=test-key-for-unit-tests
+   python -m unittest discover tests
+   ```
 5. Commit your changes: `git commit -m "Add your descriptive commit message"`
 6. Push to your fork: `git push origin feature/your-feature-name`
````
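As a quick illustration of the note about placeholder keys, a helper like the following (the function name is ours, not part of OpenEvolve) could centralize the environment check; the tests only need a non-empty value and never contact a real provider:

```python
import os

def ensure_test_api_key() -> str:
    """Return OPENAI_API_KEY, defaulting to a placeholder for unit tests.

    Hypothetical helper: unit tests only require a non-empty value and
    never send it to a real LLM provider.
    """
    os.environ.setdefault("OPENAI_API_KEY", "test-key-for-unit-tests")
    return os.environ["OPENAI_API_KEY"]
```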
### README.md (31 additions, 3 deletions)
````diff
@@ -42,13 +42,41 @@ pip install -e .
 ### Quick Start

-We use the OpenAI SDK, so you can use any LLM or provider that supports an OpenAI compatible API. Just set the `OPENAI_API_KEY` environment variable
-and update the `api_base` in config.yaml if you are using a provider other than OpenAI. For local models, you can use
-an inference server like [optillm](https://github.com/codelion/optillm).
+#### Setting up LLM Access
+
+OpenEvolve uses the OpenAI SDK, which means it works with any LLM provider that supports an OpenAI-compatible API:
+
+1. **Set the API Key**: Export the `OPENAI_API_KEY` environment variable:
+   ```bash
+   export OPENAI_API_KEY=your-api-key-here
+   ```
+
+2. **Using Alternative LLM Providers**:
+   - For providers other than OpenAI (e.g., Anthropic, Cohere, local models), update the `api_base` in your config.yaml:
+   ```yaml
+   llm:
+     api_base: "https://your-provider-endpoint.com/v1"
+   ```
+
+3. **Maximum Flexibility with optillm**:
+   - For advanced routing, rate limiting, or using multiple providers, we recommend [optillm](https://github.com/codelion/optillm)
+   - optillm acts as a proxy that can route requests to different LLMs based on your rules
+   - Simply point `api_base` to your optillm instance:
+   ```yaml
+   llm:
+     api_base: "http://localhost:8000/v1"
+   ```
+
+This setup ensures OpenEvolve can work with any LLM provider: OpenAI, Anthropic, Google, Cohere, local models via Ollama/vLLM, or any OpenAI-compatible endpoint.

 ```python
+import os
 from openevolve import OpenEvolve

+# Ensure API key is set
+if not os.environ.get("OPENAI_API_KEY"):
+    raise ValueError("Please set OPENAI_API_KEY environment variable")
````
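The `api_base` routing described above can be sketched as a small resolver (the default URL, config shape, and function name are assumptions for illustration; OpenEvolve's own config loader is authoritative):

```python
# Sketch of resolving the LLM endpoint from a config.yaml-style dict.
# DEFAULT_API_BASE is an assumed fallback, not OpenEvolve's actual default.
DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_api_base(config: dict) -> str:
    # An optillm proxy or any OpenAI-compatible endpoint may be supplied
    # under llm.api_base; otherwise fall back to the default.
    return config.get("llm", {}).get("api_base", DEFAULT_API_BASE)
```

Pointing `llm.api_base` at `http://localhost:8000/v1` would route all requests through a local optillm instance.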