Problem
PR #120 added ECharts Vue components (ECDF, Convergence, Violin, 3D Landscape), but they currently use mock/placeholder data. The charts need to consume real benchmark results from the data pipeline.
Context
- Depends on: #127 "COCO/BBOB Benchmark Data Pipeline: CI → Extract → Artifact Storage" (Benchmark Data Pipeline) and PR #120 "Replace deprecated ECharts-GL with TresJS, fix SSR build errors, and add mock benchmark data" (ECharts components)
- Part of: #52 "epic: VitePress Documentation Site with Scientific Visualization Suite" (Documentation Epic)
- Related: #84 "ECharts Vue Components for Scientific Visualization + Schema Integration" (ECharts Vue Components), #85 "COCO/BBOB Benchmark Data Collection Suite + History Tracking Implementation" (Benchmark Data Collection)
Current State
After PR #120:
```
docs/.vitepress/components/
├── ConvergenceChart.vue     # Uses mock convergence data
├── ECDFChart.vue            # Uses mock ECDF data
├── ViolinPlot.vue           # Uses mock distribution data
└── FitnessLandscape3D.vue   # Uses mock 3D surface data
```
Gap Analysis
What Exists
- ✅ ECharts components with proper rendering
- ✅ TresJS 3D landscape visualization
- ✅ Benchmark suite (benchmarks/run_benchmark_suite.py)
- ✅ History tracking in 113 optimizers (PR #119 "Add COCO/BBOB benchmark infrastructure with memory-efficient history tracking")
What's Missing
- Data Format Bridge - Benchmark JSON → ECharts option format
- Real Benchmark Data - CI-generated results in docs/public/benchmarks/
- Dynamic Data Loading - Fetch benchmark results per algorithm
- ECDF Calculation - Empirical CDF from multi-run results
Implementation
1. Benchmark Data Format
Expected structure of each file in docs/public/benchmarks/ (one JSON file per algorithm/function pair, e.g. ParticleSwarm_shifted_ackley.json):
```json
{
  "algorithm": "ParticleSwarm",
  "function": "shifted_ackley",
  "dim": 10,
  "runs": [
    {
      "seed": 0,
      "best_fitness": 0.0012,
      "convergence_history": [100, 50, 10, 1, 0.1, 0.01, 0.001],
      "evaluations": 10000
    }
  ]
}
```
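Matching TypeScript interfaces for this format (field names taken from the JSON above; they could sit at the top of the benchmarkToECharts.ts module introduced in the next step, or in a shared types file):

```ts
// One benchmark result file, e.g. ParticleSwarm_shifted_ackley.json
export interface BenchmarkResult {
  algorithm: string
  function: string
  dim: number
  runs: BenchmarkRun[]
}

// A single independent run (one seed)
export interface BenchmarkRun {
  seed: number
  best_fitness: number
  convergence_history: number[]  // best fitness recorded over the run
  evaluations: number
}
```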
2. Data Transformer
Create docs/.vitepress/utils/benchmarkToECharts.ts:
```ts
import type { EChartsOption } from 'echarts'

// BenchmarkRun / BenchmarkResult are the interfaces sketched above.

export function toConvergenceOption(runs: BenchmarkRun[]): EChartsOption {
  return {
    xAxis: { type: 'value', name: 'Iteration' },
    yAxis: { type: 'log', name: 'Best Fitness' },
    series: runs.map((run, runIdx) => ({
      type: 'line' as const,
      // One [iteration, best fitness] point per history entry
      data: run.convergence_history.map((value, iter) => [iter, value]),
      name: `Run ${runIdx + 1}`
    }))
  }
}

export function toECDFOption(runs: BenchmarkRun[], targets: number[]): EChartsOption {
  // Calculate the empirical CDF at each target threshold (see the sketch below)
}
```
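The ECDF stub could be filled in along these lines; a minimal sketch, assuming the COCO-style reading of an ECDF (x = iteration budget, y = fraction of (run, target) pairs whose best fitness has reached the target within that budget) and that convergence_history is indexed by iteration; relabel the x-axis if the history is sampled per evaluation instead:

```ts
export function toECDFOption(runs: BenchmarkRun[], targets: number[]): EChartsOption {
  // First iteration at which each (run, target) pair is solved (Infinity if never)
  const hitIterations: number[] = []
  for (const run of runs) {
    for (const target of targets) {
      const hit = run.convergence_history.findIndex(v => v <= target)
      hitIterations.push(hit === -1 ? Infinity : hit)
    }
  }

  const totalPairs = hitIterations.length
  const maxIter = Math.max(...runs.map(r => r.convergence_history.length))

  // Proportion of solved pairs at every budget from 0 to maxIter
  const points: [number, number][] = []
  for (let iter = 0; iter <= maxIter; iter++) {
    points.push([iter, hitIterations.filter(h => h <= iter).length / totalPairs])
  }

  return {
    xAxis: { type: 'value', name: 'Iterations' },
    yAxis: { type: 'value', name: 'Proportion of (run, target) pairs', min: 0, max: 1 },
    series: [{ type: 'line' as const, step: 'end', data: points }]
  }
}
```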
3. Component Enhancement
Update ConvergenceChart.vue:
```vue
<script setup lang="ts">
import { ref, computed, onMounted } from 'vue'
import { withBase } from 'vitepress'
import { toConvergenceOption, type BenchmarkRun } from '../utils/benchmarkToECharts'

const props = defineProps<{
  algorithm: string
  function: string
}>()

// Fetch the pre-generated JSON client-side to avoid SSR issues during the static build
const runs = ref<BenchmarkRun[]>([])
onMounted(async () => {
  const res = await fetch(withBase(`/benchmarks/${props.algorithm}_${props.function}.json`))
  runs.value = (await res.json()).runs
})

// Feeds the chart rendering already wired up in this component's template (PR #120)
const option = computed(() => toConvergenceOption(runs.value))
</script>
```
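Since ECDFChart, ViolinPlot and FitnessLandscape3D need the same files, the fetch logic could be factored into a small composable instead of repeating it per component. A sketch; the useBenchmarkRuns name and file path are illustrative, not something this issue defines:

```ts
// docs/.vitepress/utils/useBenchmarkRuns.ts (hypothetical location)
import { ref, onMounted, type Ref } from 'vue'
import { withBase } from 'vitepress'
import type { BenchmarkRun } from './benchmarkToECharts'

// Loads the runs for one algorithm/function pair on the client
export function useBenchmarkRuns(algorithm: string, fn: string): Ref<BenchmarkRun[]> {
  const runs = ref<BenchmarkRun[]>([])
  onMounted(async () => {
    const res = await fetch(withBase(`/benchmarks/${algorithm}_${fn}.json`))
    if (res.ok) {
      runs.value = (await res.json()).runs
    }
  })
  return runs
}
```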
4. Algorithm Page Integration
Each algorithm markdown page:
```md
## Benchmark Results

<ConvergenceChart algorithm="ParticleSwarm" function="shifted_ackley" />
<ECDFChart algorithm="ParticleSwarm" :targets="[1e-1, 1e-3, 1e-5, 1e-7]" />
```

Acceptance Criteria
- Benchmark JSON files exist in docs/public/benchmarks/
- toConvergenceOption() transforms run data correctly (see the test sketch after this list)
- toECDFOption() calculates the empirical CDF
- ConvergenceChart renders real multi-run data
- ECDFChart shows proper performance profiles
- ViolinPlot displays fitness distribution
- 3D Landscape uses actual function evaluations
- No mock data remains in production build
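To back the transform criteria above, a small unit test sketch; Vitest and the import path are assumptions, not something this issue prescribes:

```ts
import { describe, it, expect } from 'vitest'
import { toConvergenceOption } from '../docs/.vitepress/utils/benchmarkToECharts'

describe('toConvergenceOption', () => {
  it('maps each run to a line series of [iteration, fitness] points', () => {
    const runs = [
      { seed: 0, best_fitness: 0.01, convergence_history: [10, 1, 0.01], evaluations: 300 }
    ]
    const option = toConvergenceOption(runs)
    const series = option.series as any[]
    expect(series).toHaveLength(1)
    expect(series[0].data).toEqual([[0, 10], [1, 1], [2, 0.01]])
    expect(option.yAxis).toMatchObject({ type: 'log' })
  })
})
```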
Validation
```sh
# Ensure benchmark data exists
ls docs/public/benchmarks/*.json | wc -l  # Should be > 0

# Test chart rendering
cd docs && npm run docs:dev
# Navigate to algorithm page, verify charts load
```
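If a scripted check is preferred over eyeballing the dev server, a Node sketch that verifies every benchmark file has the expected shape; the script path and the tsx runner are illustrative:

```ts
// scripts/check-benchmarks.ts (hypothetical path), e.g. run with: npx tsx scripts/check-benchmarks.ts
import { readdirSync, readFileSync } from 'node:fs'
import { join } from 'node:path'

const dir = 'docs/public/benchmarks'
const files = readdirSync(dir).filter(f => f.endsWith('.json'))
if (files.length === 0) throw new Error(`No benchmark JSON found in ${dir}`)

for (const file of files) {
  const result = JSON.parse(readFileSync(join(dir, file), 'utf8'))
  const ok = Array.isArray(result.runs) && result.runs.every(
    (r: any) => Array.isArray(r.convergence_history) && typeof r.best_fitness === 'number'
  )
  if (!ok) throw new Error(`Malformed benchmark file: ${file}`)
}
console.log(`Checked ${files.length} benchmark file(s)`)
```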
Complexity
High - Data pipeline + ECharts integration + async loading
Labels
visualization, enhancement