@@ -277,44 +277,6 @@ PDE discretizations.
 
 **Reviewers**: Chris Rackauckas
 
-## SciMLBenchmarks Compatibility Bump for Benchmark Sets (\$100 each set)
-
-The [SciMLBenchmarks](https://github.com/SciML/SciMLBenchmarks.jl) are a large set of benchmarks maintained
-by the SciML organization. As such, keeping these benchmarks up-to-date can be a time-consuming task.
-In many cases, we can end up in a situation where many package bumps need to happen. Sometimes
-no code needs to be updated; in other cases the benchmark code does need to be updated. The only way to tell
-is to start the update process, bump the Project and Manifest TOMLs, and dig into the results.
-
-These bumps are done in subsets. The currently identified subsets are:
-
-#### BayesianInference
-
-* [https://github.com/SciML/SciMLBenchmarks.jl/pull/1182](https://github.com/SciML/SciMLBenchmarks.jl/pull/1182)
-* [https://github.com/SciML/SciMLBenchmarks.jl/pull/1106](https://github.com/SciML/SciMLBenchmarks.jl/pull/1106)
-* [https://github.com/SciML/SciMLBenchmarks.jl/pull/1105](https://github.com/SciML/SciMLBenchmarks.jl/pull/1105)
-
-#### AutomaticDifferentiationSparse
-
-* [https://github.com/SciML/SciMLBenchmarks.jl/pull/1023](https://github.com/SciML/SciMLBenchmarks.jl/pull/1023)
-* [https://github.com/SciML/SciMLBenchmarks.jl/pull/1033](https://github.com/SciML/SciMLBenchmarks.jl/pull/1033)
-* [https://github.com/SciML/SciMLBenchmarks.jl/pull/1069](https://github.com/SciML/SciMLBenchmarks.jl/pull/1069)
-
-**Information to Get Started**: The
-[Contributing Section of the SciMLBenchmarks README](https://github.com/SciML/SciMLBenchmarks.jl?tab=readme-ov-file#contributing)
-describes how to contribute to the benchmarks. The benchmark results are
-generated using the benchmark server. The developer is expected to check that the updated benchmarks
-run correctly and generate the correct graphs, and to highlight any performance
-regressions found through the update process.
-
-**Related Issues**: See the linked pull requests.
-
-**Success Criteria**: The benchmarks should run and give results similar to the pre-update benchmarks,
-and any regressions should be identified, with an issue opened in the appropriate repository.
-
-**Recommended Skills**: Willingness to roll up some sleeves and figure out what changed in breaking updates.
-
-**Reviewers**: Chris Rackauckas
-
 ## Update CUTEst.jl to the Optimization.jl Interface and Add to SciMLBenchmarks (\$200)
 
 [CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl)