Merged
27 changes: 27 additions & 0 deletions README.md
@@ -76,6 +76,7 @@ When resuming from a checkpoint:
- The system loads all previously evolved programs and their metrics
- Checkpoint numbering continues from where it left off (e.g., if loaded from checkpoint_50, the next checkpoint will be checkpoint_60)
- All evolution state is preserved (best programs, feature maps, archives, etc.)
- Each checkpoint directory contains a copy of the best program at that point in time

Example workflow with checkpoints:

@@ -91,6 +92,32 @@ python openevolve-run.py examples/function_minimization/initial_program.py \
--checkpoint examples/function_minimization/openevolve_output/checkpoints/checkpoint_50 \
--iterations 50
```

### Comparing Results Across Checkpoints

Each checkpoint directory contains the best program found up to that point, making it easy to compare solutions over time:

```
checkpoints/
  checkpoint_10/
    best_program.py         # Best program at iteration 10
    best_program_info.json  # Metrics and details
    programs/               # All programs evaluated so far
    metadata.json           # Database state
  checkpoint_20/
    best_program.py         # Best program at iteration 20
  ...
```

You can compare the evolution of solutions by examining the best programs at different checkpoints:

```bash
# Compare best programs at different checkpoints
diff -u checkpoints/checkpoint_10/best_program.py checkpoints/checkpoint_20/best_program.py

# Compare metrics (grep prefixes each match with its file name)
grep -A 10 '"metrics"' checkpoints/checkpoint_*/best_program_info.json
```
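For more systematic comparisons, the `best_program_info.json` files can be loaded programmatically; their keys (`iteration`, `metrics`, and so on) match what `_save_checkpoint` writes. A small sketch, assuming the default `checkpoints/` layout shown above:

```python
import glob
import json
import os

def checkpoint_metrics(checkpoints_dir="checkpoints"):
    """Collect (iteration, metrics) pairs from every saved checkpoint."""
    rows = []
    pattern = os.path.join(checkpoints_dir, "checkpoint_*", "best_program_info.json")
    for path in glob.glob(pattern):
        with open(path) as f:
            info = json.load(f)
        rows.append((info["iteration"], info["metrics"]))
    return sorted(rows)

if __name__ == "__main__":
    # Print one line per checkpoint, in iteration order
    for iteration, metrics in checkpoint_metrics():
        summary = ", ".join(f"{name}={value:.4f}" for name, value in metrics.items())
        print(f"checkpoint_{iteration}: {summary}")
```

This makes it easy to see at a glance whether the metrics are still improving or have plateaued.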

### Docker

You can also install and execute via Docker:
179 changes: 179 additions & 0 deletions examples/function_minimization/README.md
@@ -0,0 +1,179 @@
# Function Minimization Example

This example demonstrates how OpenEvolve can discover sophisticated optimization algorithms starting from a simple implementation.

## Problem Description

The task is to minimize a complex non-convex function with multiple local minima:

```python
f(x, y) = sin(x) * cos(y) + sin(x*y) + (x^2 + y^2)/20
```

The global minimum is approximately at (-1.704, 0.678) with a value of -1.519.
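This figure is easy to sanity-check by evaluating the objective directly (a standalone sketch mirroring the `evaluate_function` used by the search code below):

```python
import numpy as np

def evaluate_function(x, y):
    """The example's objective: sin(x)*cos(y) + sin(x*y) + (x^2 + y^2)/20."""
    return np.sin(x) * np.cos(y) + np.sin(x * y) + (x**2 + y**2) / 20

# Evaluating at the reported global minimum gives roughly -1.519
print(evaluate_function(-1.704, 0.678))
```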

## Getting Started

To run this example:

```bash
cd examples/function_minimization
python ../../openevolve-run.py initial_program.py evaluator.py --config config.yaml
```

## Algorithm Evolution

### Initial Algorithm (Random Search)

The initial implementation was a simple random search that had no memory between iterations:

```python
def search_algorithm(iterations=1000, bounds=(-5, 5)):
    """
    A simple random search algorithm that often gets stuck in local minima.

    Args:
        iterations: Number of iterations to run
        bounds: Bounds for the search space (min, max)

    Returns:
        Tuple of (best_x, best_y, best_value)
    """
    # Initialize with a random point
    best_x = np.random.uniform(bounds[0], bounds[1])
    best_y = np.random.uniform(bounds[0], bounds[1])
    best_value = evaluate_function(best_x, best_y)

    for _ in range(iterations):
        # Simple random search
        x = np.random.uniform(bounds[0], bounds[1])
        y = np.random.uniform(bounds[0], bounds[1])
        value = evaluate_function(x, y)

        if value < best_value:
            best_value = value
            best_x, best_y = x, y

    return best_x, best_y, best_value
```

### Evolved Algorithm (Simulated Annealing)

Over the course of the run, OpenEvolve discovered a simulated annealing algorithm that takes a completely different approach:

```python
def simulated_annealing(bounds=(-5, 5), iterations=1000, step_size=0.1, initial_temperature=100, cooling_rate=0.99):
    """
    Simulated Annealing algorithm for function minimization.

    Args:
        bounds: Bounds for the search space (min, max)
        iterations: Number of iterations to run
        step_size: Step size for perturbing the solution
        initial_temperature: Initial temperature for the simulated annealing process
        cooling_rate: Cooling rate for the simulated annealing process

    Returns:
        Tuple of (best_x, best_y, best_value)
    """
    # Initialize with a random point
    best_x = np.random.uniform(bounds[0], bounds[1])
    best_y = np.random.uniform(bounds[0], bounds[1])
    best_value = evaluate_function(best_x, best_y)

    current_x, current_y = best_x, best_y
    current_value = best_value
    temperature = initial_temperature

    for _ in range(iterations):
        # Perturb the current solution
        new_x = current_x + np.random.uniform(-step_size, step_size)
        new_y = current_y + np.random.uniform(-step_size, step_size)

        # Ensure the new solution is within bounds
        new_x = max(bounds[0], min(new_x, bounds[1]))
        new_y = max(bounds[0], min(new_y, bounds[1]))

        new_value = evaluate_function(new_x, new_y)

        # Calculate the acceptance probability
        if new_value < current_value:
            current_x, current_y = new_x, new_y
            current_value = new_value

            if new_value < best_value:
                best_x, best_y = new_x, new_y
                best_value = new_value
        else:
            probability = np.exp((current_value - new_value) / temperature)
            if np.random.rand() < probability:
                current_x, current_y = new_x, new_y
                current_value = new_value

        # Cool down the temperature
        temperature *= cooling_rate

    return best_x, best_y, best_value
```

## Key Improvements

Through evolutionary iterations, OpenEvolve discovered several key algorithmic concepts:

1. **Local Search**: Instead of random sampling across the entire space, the evolved algorithm makes small perturbations to promising solutions:

   ```python
   new_x = current_x + np.random.uniform(-step_size, step_size)
   new_y = current_y + np.random.uniform(-step_size, step_size)
   ```

2. **Temperature-based Acceptance**: The algorithm can escape local minima by occasionally accepting worse solutions:

   ```python
   probability = np.exp((current_value - new_value) / temperature)
   if np.random.rand() < probability:
       current_x, current_y = new_x, new_y
       current_value = new_value
   ```

3. **Cooling Schedule**: The temperature gradually decreases, transitioning from exploration to exploitation:

   ```python
   temperature *= cooling_rate
   ```

4. **Parameter Introduction**: The system discovered the need for additional parameters to control the algorithm's behavior:

   ```python
   def simulated_annealing(bounds=(-5, 5), iterations=1000, step_size=0.1, initial_temperature=100, cooling_rate=0.99):
   ```

## Results

The evolved algorithm finds substantially better solutions:

| Metric | Value |
|--------|-------|
| Value Score | 0.677 |
| Distance Score | 0.258 |
| Reliability Score | 1.000 |
| Overall Score | 0.917 |
| Combined Score | 0.584 |

The simulated annealing algorithm:
- Achieves higher quality solutions (closer to the global minimum)
- Has perfect reliability (100% success rate in completing runs)
- Maintains a good balance between performance and reliability

## How It Works

This example demonstrates key features of OpenEvolve:

- **Code Evolution**: Only the code inside the evolve blocks is modified
- **Complete Algorithm Redesign**: The system transformed a random search into a completely different algorithm
- **Automatic Discovery**: The system discovered simulated annealing without being explicitly programmed with knowledge of optimization algorithms
- **Function Renaming**: The system even recognized that the algorithm should have a more descriptive name

## Next Steps

Try modifying the config.yaml file to:
- Increase the number of iterations
- Change the LLM model configuration
- Adjust the evaluator settings to prioritize different metrics
- Try a different objective function by modifying `evaluate_function()`
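For the last point, any replacement only needs the same `(x, y) -> value` signature. A hypothetical swap-in using the Rastrigin function, a standard multimodal benchmark, might look like:

```python
import numpy as np

def evaluate_function(x, y):
    """2-D Rastrigin function: highly multimodal, global minimum 0 at (0, 0)."""
    return 20 + (x**2 - 10 * np.cos(2 * np.pi * x)) + (y**2 - 10 * np.cos(2 * np.pi * y))

print(evaluate_function(0.0, 0.0))  # 0.0 at the global minimum
```

Remember to update the evaluator's notion of the global minimum to match, since the distance and value metrics are computed against it.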
12 changes: 6 additions & 6 deletions examples/function_minimization/evaluator.py
@@ -55,9 +55,9 @@ def evaluate(program_path):
        Dictionary of metrics
    """
    # Known global minimum (approximate)
-    GLOBAL_MIN_X = -1.76
-    GLOBAL_MIN_Y = -1.03
-    GLOBAL_MIN_VALUE = -2.104
+    GLOBAL_MIN_X = -1.704
+    GLOBAL_MIN_Y = 0.678
+    GLOBAL_MIN_VALUE = -1.519

    try:
        # Load the program
@@ -216,9 +216,9 @@ def evaluate(program_path):
def evaluate_stage1(program_path):
    """First stage evaluation with fewer trials"""
    # Known global minimum (approximate)
-    GLOBAL_MIN_X = float(-1.76)
-    GLOBAL_MIN_Y = float(-1.03)
-    GLOBAL_MIN_VALUE = float(-2.104)
+    GLOBAL_MIN_X = float(-1.704)
+    GLOBAL_MIN_Y = float(0.678)
+    GLOBAL_MIN_VALUE = float(-1.519)

    # Quick check to see if the program runs without errors
    try:
1 change: 0 additions & 1 deletion examples/function_minimization/initial_program.py
@@ -49,4 +49,3 @@ def run_search():
if __name__ == "__main__":
    x, y, value = run_search()
    print(f"Found minimum at ({x}, {y}) with value {value}")
-    # The global minimum is around (-1.76, -1.03) with value -2.104
42 changes: 41 additions & 1 deletion openevolve/controller.py
@@ -353,10 +353,50 @@ def _save_checkpoint(self, iteration: int) -> None:
        checkpoint_dir = os.path.join(self.output_dir, "checkpoints")
        os.makedirs(checkpoint_dir, exist_ok=True)

-        # Save the database
        # Create specific checkpoint directory
        checkpoint_path = os.path.join(checkpoint_dir, f"checkpoint_{iteration}")
        os.makedirs(checkpoint_path, exist_ok=True)

        # Save the database
        self.database.save(checkpoint_path, iteration)

        # Save the best program found so far
        best_program = None
        if self.database.best_program_id:
            best_program = self.database.get(self.database.best_program_id)
        else:
            best_program = self.database.get_best_program()

        if best_program:
            # Save the best program at this checkpoint
            best_program_path = os.path.join(checkpoint_path, f"best_program{self.file_extension}")
            with open(best_program_path, "w") as f:
                f.write(best_program.code)

            # Save metrics
            best_program_info_path = os.path.join(checkpoint_path, "best_program_info.json")
            with open(best_program_info_path, "w") as f:
                import json

                json.dump(
                    {
                        "id": best_program.id,
                        "generation": best_program.generation,
                        "iteration": iteration,
                        "metrics": best_program.metrics,
                        "language": best_program.language,
                        "timestamp": best_program.timestamp,
                        "saved_at": time.time(),
                    },
                    f,
                    indent=2,
                )

            logger.info(
                f"Saved best program at checkpoint {iteration} with metrics: "
                f"{', '.join(f'{name}={value:.4f}' for name, value in best_program.metrics.items())}"
            )

        logger.info(f"Saved checkpoint at iteration {iteration} to {checkpoint_path}")

def _save_best_program(self, program: Optional[Program] = None) -> None:
Expand Down