Expanded the optimization hints and LLM context in affine_transform_2d/config.yaml, increased max_tokens and program counts, and switched the model from openai/o4-mini to google/gemini-2.5-flash. Updated evaluator.py to use EvaluationResult for artifact support and improved error reporting. Added optimization notes to initial_program.py. Removed the obsolete benchmark results JSON.
examples/algotune/affine_transform_2d/config.yaml (76 additions, 5 deletions)
```diff
@@ -7,17 +7,17 @@ checkpoint_interval: 10
 log_level: "INFO"
 random_seed: 42
 diff_based_evolution: true  # Best for Gemini models
-max_code_length: 10000
+max_code_length: 20000  # Increased from 10000 for deeper exploration
 
 # LLM Configuration
 llm:
   api_base: "https://openrouter.ai/api/v1"
   models:
-    - name: "openai/o4-mini"
+    - name: "google/gemini-2.5-flash"
       weight: 1.0
 
   temperature: 0.4  # Optimal (better than 0.2, 0.6, 0.8)
-  max_tokens: 16000  # Optimal context
+  max_tokens: 128000  # Increased from 16000 for much richer context
   timeout: 150
   retries: 3
@@ -67,8 +67,79 @@ prompt:
     Apply a 2D affine transformation to an input image (2D array). The transformation is defined by a 2x3 matrix which combines rotation, scaling, shearing, and translation. This task uses cubic spline interpolation (order=3) and handles boundary conditions using the 'constant' mode (padding with 0).
 
     Focus on improving the solve method to correctly handle the input format and produce valid solutions efficiently. Your solution will be compared against the reference AlgoTune baseline implementation to measure speedup and correctness.
-  num_top_programs: 3  # Best balance
-  num_diverse_programs: 2  # Best balance
+
+
+
+
+
+    PERFORMANCE OPTIMIZATION OPPORTUNITIES:
+    You have access to high-performance libraries that can provide significant speedups:
+
+    • **JAX** - JIT compilation for numerical computations
+      Key insight: Functions should be defined outside classes for JIT compatibility
+      For jnp.roots(), consider using strip_zeros=False in JIT contexts
+
+    • **Numba** - Alternative JIT compilation, often simpler to use
+
+    • **scipy optimizations** - Direct BLAS/LAPACK access and specialized algorithms
+      Many scipy functions have optimized implementations worth exploring
+
+    • **Vectorization** - Look for opportunities to replace loops with array operations
+
+    EXPLORATION STRATEGY:
+    1. Profile to identify bottlenecks first
+    2. Consider multiple optimization approaches for the same problem
+    3. Try both library-specific optimizations and algorithmic improvements
+    4. Test different numerical libraries to find the best fit
```