Problem Statement
Use OpenEvolve to implement and evolve machine-learning-accelerated solvers for the large sparse linear systems that arise from discretized PDEs. This project explores the intersection of numerical methods, machine learning, and evolutionary computation to discover optimal preconditioning strategies and solver configurations.
Background
Krylov subspace methods (Conjugate Gradient, BiCG, GMRES) are fundamental iterative solvers for large sparse linear systems. While effective, their convergence can be slow for ill-conditioned systems. This project aims to use OpenEvolve to evolve ML-enhanced preconditioners and solver configurations that accelerate convergence.
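For symmetric positive definite systems, the classical CG error bound makes the role of conditioning explicit: the energy-norm error satisfies

$$\lVert x_k - x_\ast \rVert_A \le 2\left(\frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1}\right)^{k} \lVert x_0 - x_\ast \rVert_A,$$

so a preconditioner $M$ that shrinks the effective condition number $\kappa(M^{-1}A)$ directly reduces the number of iterations needed.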
Technical Objectives
1. Core Implementation
- Implement base Krylov solvers (CG, BiCG, GMRES) in Python
- Create PDE-based test problems (Poisson, heat, and advection-diffusion equations)
- Design an evaluator that measures solver performance (iterations to convergence, wall time, accuracy)
2. ML Integration Points
- Preconditioner Learning: Evolve neural network architectures that learn problem-specific preconditioners
- Parameter Adaptation: Evolve strategies for dynamically adjusting solver parameters during iteration
- Hybrid Approaches: Combine classical algebraic preconditioners with learned components (a minimal sketch follows after this section)
3. Evolution Strategy
- Use OpenEvolve to discover optimal:
- Neural network architectures for preconditioners
- Hyperparameter selection strategies
- Hybrid solver configurations
- Problem-specific optimizations
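As a concrete starting point for the hybrid approaches mentioned above, here is a minimal sketch of a preconditioner that combines classical Jacobi (diagonal) scaling with a hypothetical learned correction. The `learned_correction` callable is a placeholder for whatever component evolution discovers, not an existing API:

import numpy as np
from typing import Callable, Optional

def make_hybrid_preconditioner(
    A: np.ndarray,
    learned_correction: Optional[Callable[[np.ndarray], np.ndarray]] = None,
) -> Callable[[np.ndarray], np.ndarray]:
    """Jacobi preconditioner with an optional learned additive correction (sketch)."""
    inv_diag = 1.0 / np.diag(A)  # classical Jacobi: apply diag(A)^{-1}

    def apply(r: np.ndarray) -> np.ndarray:
        z = inv_diag * r
        if learned_correction is not None:
            # Hypothetical learned term; evolution must keep the overall operator
            # effectively symmetric positive definite for CG to remain valid.
            z = z + learned_correction(r)
        return z

    return apply

The returned callable plugs directly into the `preconditioner` argument of the `conjugate_gradient_ml` solver shown below.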
Implementation Guide
Directory Structure
examples/ml_accelerated_solvers/
├── initial_programs/
│ ├── cg_solver.py # Basic CG implementation
│ ├── bicg_solver.py # BiCG implementation
│ ├── gmres_solver.py # GMRES implementation
│ └── ml_preconditioner.py # Neural preconditioner template
├── evaluators/
│ ├── convergence_evaluator.py # Measures iterations to convergence
│ ├── timing_evaluator.py # Measures wall time
│ └── accuracy_evaluator.py # Measures solution accuracy
├── test_problems/
│ ├── poisson_2d.py # 2D Poisson equation discretization
│ ├── heat_equation.py # Heat equation test cases
│ └── advection_diffusion.py # Advection-diffusion problems
├── configs/
│ ├── evolution_config.yaml # OpenEvolve configuration
│ └── solver_config.yaml # Solver-specific parameters
└── analysis/
└── performance_analysis.py # Analyze evolved solutions
Sample Initial Program (cg_solver.py)
import numpy as np
from typing import Tuple, Optional, Callable

# EVOLVE-BLOCK-START
def conjugate_gradient_ml(
    A: np.ndarray,
    b: np.ndarray,
    x0: Optional[np.ndarray] = None,
    preconditioner: Optional[Callable] = None,
    tol: float = 1e-6,
    max_iter: int = 1000
) -> Tuple[np.ndarray, dict]:
    """
    ML-enhanced Conjugate Gradient solver.

    This function should be evolved to incorporate:
    - Learned preconditioners
    - Adaptive parameter selection
    - Convergence acceleration strategies
    """
    n = len(b)
    x = x0 if x0 is not None else np.zeros(n)
    r = b - A @ x

    # Apply preconditioner (to be evolved)
    z = preconditioner(r) if preconditioner else r.copy()
    p = z.copy()
    rz_old = np.dot(r, z)
    residual_history = []

    for i in range(max_iter):
        Ap = A @ p
        alpha = rz_old / np.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap

        # Check convergence
        residual_norm = np.linalg.norm(r)
        residual_history.append(residual_norm)
        if residual_norm < tol:
            return x, {"iterations": i + 1, "residuals": residual_history, "converged": True}

        # Apply preconditioner to the new residual
        z = preconditioner(r) if preconditioner else r.copy()
        rz_new = np.dot(r, z)
        beta = rz_new / rz_old
        p = z + beta * p
        rz_old = rz_new

    return x, {"iterations": max_iter, "residuals": residual_history, "converged": False}
# EVOLVE-BLOCK-END
Sample Evaluator (convergence_evaluator.py)
import importlib.util
import os
import sys
from typing import Dict, Any

import numpy as np

# test_problems/ is a sibling of evaluators/, so add the example root to the path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test_problems.poisson_2d import generate_poisson_system


def evaluate(program_path: str) -> Dict[str, Any]:
    """
    Evaluate solver convergence on multiple test problems.
    """
    # Import the evolved program
    spec = importlib.util.spec_from_file_location("evolved", program_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

    total_score = 0.0
    results = []

    # Test on problems of increasing difficulty
    for n in [50, 100, 200]:
        A, b, x_true = generate_poisson_system(n)
        try:
            x_sol, info = module.conjugate_gradient_ml(A, b)

            # Evaluate performance
            error = np.linalg.norm(x_sol - x_true) / np.linalg.norm(x_true)
            iterations = info.get("iterations", float("inf"))

            # Score rewards fewer iterations and penalizes relative error exponentially
            score = 1000.0 / (iterations + 1) * np.exp(-10 * error)
            total_score += score
            results.append({
                "problem_size": n,
                "iterations": iterations,
                "error": error,
                "score": score
            })
        except Exception as e:
            results.append({
                "problem_size": n,
                "error": str(e),
                "score": 0
            })

    return {
        "score": total_score / len(results),
        "details": results
    }
Evolution Configuration (evolution_config.yaml)
llm:
  models:
    - name: gpt-4o
      weight: 1.0
  temperature: 0.7

prompt:
  system_message: |
    You are optimizing Krylov subspace solvers with ML enhancements.
    Focus on:
    - Improving preconditioner design
    - Reducing iteration count
    - Maintaining numerical stability
    - Efficient matrix-vector operations

database:
  feature_dimensions: ["problem_size", "condition_number", "solver_type"]
  feature_bins: 10  # or {"problem_size": 10, "condition_number": 10, "solver_type": 5}
  population_size: 100

# Top-level configuration
max_iterations: 500
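The solver_config.yaml referenced in the directory structure is not specified in this issue; a plausible sketch (all field names here are assumptions, not an existing schema) might look like:

# configs/solver_config.yaml (hypothetical sketch; field names are assumptions)
solver:
  type: cg          # cg | bicg | gmres
  tol: 1.0e-6
  max_iter: 1000
preconditioner:
  kind: hybrid      # none | jacobi | learned | hybrid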
Getting Started
1. Install Dependencies
pip install numpy scipy torch scikit-learn matplotlib
2. Create Initial Test Problem
# test_problems/poisson_2d.py
import numpy as np
from scipy.sparse import diags

def generate_poisson_system(n: int):
    """Generate a 2D Poisson equation discretization (5-point stencil on an n x n grid)."""
    h = 1.0 / (n + 1)  # grid spacing; the uniform 1/h^2 scaling is omitted since it does not change the solution

    # Create finite difference matrix
    main_diag = np.ones(n * n) * 4
    off_diag = np.ones(n * n - 1) * -1
    off_diag[n - 1::n] = 0  # zero couplings across grid-row boundaries (block structure)
    A = diags([off_diag, main_diag, off_diag], [-1, 0, 1], shape=(n * n, n * n))
    A = A + diags([-1, -1], [-n, n], shape=(n * n, n * n))
    A = A.toarray()

    # Create RHS from a known true solution
    x_true = np.random.randn(n * n)
    b = A @ x_true
    return A, b, x_true
3. Run Evolution
python openevolve-run.py \
examples/ml_accelerated_solvers/initial_programs/cg_solver.py \
examples/ml_accelerated_solvers/evaluators/convergence_evaluator.py \
--config examples/ml_accelerated_solvers/configs/evolution_config.yaml \
--iterations 500
Evaluation Metrics
- Convergence Rate: Iterations required to reach tolerance
- Wall Time: Actual computation time
- Scalability: Performance on problems of increasing size
- Robustness: Performance across different problem types
- Innovation: Novel approaches discovered through evolution
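Before launching a full evolution run, a quick smoke test is useful. The sketch below assumes the files above are in place and is run from examples/ml_accelerated_solvers/. Note that for the constant-coefficient Poisson matrix, plain Jacobi scaling is just a rescaling, so expect similar iteration counts; finding preconditioners that beat this baseline is exactly the point of the evolution.

# smoke_test.py (sketch; assumes cg_solver.py and poisson_2d.py as defined above)
import numpy as np
from initial_programs.cg_solver import conjugate_gradient_ml
from test_problems.poisson_2d import generate_poisson_system

A, b, x_true = generate_poisson_system(50)

# Plain CG vs. Jacobi-scaled CG as a baseline comparison
runs = {
    "plain": conjugate_gradient_ml(A, b),
    "jacobi": conjugate_gradient_ml(A, b, preconditioner=lambda r: r / np.diag(A)),
}
for label, (x, info) in runs.items():
    rel_error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"{label}: iterations={info['iterations']}, rel_error={rel_error:.2e}")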