Move benchmarking and metadata files to new directory #10
Merged
alanlujan91 merged 12 commits into main on Jan 19, 2026
Conversation
Pull request overview
This pull request reorganizes the project's benchmarking infrastructure and metadata documentation to consolidate them under the README_IF_YOU_ARE_AN_AI/ directory, improving discoverability for AI systems and maintaining better separation of documentation from core reproduction scripts. Additionally, it corrects the notebook filename reference in the reproduction script and adds sympy as a dependency to support the new symbolic equations module.
Changes:
- Relocated benchmarking infrastructure from `reproduce/benchmarks/` to `README_IF_YOU_ARE_AN_AI/benchmarks/`, with comprehensive path updates across all documentation
- Added rich metadata files (`parameters.json`, `equations.py`, `algorithm.json`) to provide machine-readable specifications for AI systems
- Fixed the notebook filename from `notebook.ipynb` to `method-of-moderation.ipynb` in `reproduce.sh`
- Added a `sympy>=1.12` dependency to support symbolic equation representations
- Removed obsolete files (`you-are`, `BENCHMARKING-PLAN.md`)
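To illustrate how a tool might consume the new machine-readable metadata, here is a minimal sketch. The `CRRA` entry and the `default`/`range`/`description` keys are hypothetical placeholders, not taken from the actual `parameters.json`:

```python
import json

# Hypothetical excerpt in the style of README_IF_YOU_ARE_AN_AI/metadata/parameters.json;
# the CRRA entry and its keys are illustrative assumptions, not the real schema.
raw = """
{
  "CRRA": {
    "default": 2.0,
    "range": [1.0, 5.0],
    "description": "coefficient of relative risk aversion"
  }
}
"""

params = json.loads(raw)
spec = params["CRRA"]

# A consumer can sanity-check that the default lies inside the documented range.
assert spec["range"][0] <= spec["default"] <= spec["range"][1]
print(spec["description"])  # → coefficient of relative risk aversion
```

The point of a schema like this is that an AI system can validate or sweep parameters programmatically rather than parsing prose documentation.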
Reviewed changes
Copilot reviewed 10 out of 22 changed files in this pull request and generated no comments.
Summary per file:
| File | Description |
|---|---|
| you-are | Removed stray file with path reference |
| uv.lock | Added sympy dependency lock entry |
| requirements.txt | Added sympy>=1.12 requirement |
| reproduce.sh | Fixed notebook filename from notebook.ipynb to method-of-moderation.ipynb |
| README_IF_YOU_ARE_AN_AI/metadata/parameters.json | New comprehensive parameter specification with ranges and defaults |
| README_IF_YOU_ARE_AN_AI/metadata/equations.py | New SymPy module with symbolic equation representations |
| README_IF_YOU_ARE_AN_AI/metadata/equations.json | Updated path reference in metadata |
| README_IF_YOU_ARE_AN_AI/metadata/algorithm.json | New algorithm specification in machine-readable format |
| README_IF_YOU_ARE_AN_AI/metadata/README.md | Updated import paths for relocated metadata files |
| README_IF_YOU_ARE_AN_AI/benchmarks/schema.json | New JSON schema for benchmark result validation |
| README_IF_YOU_ARE_AN_AI/benchmarks/results/saved/20260117_reference_m4max_darwin-arm64.json | New reference benchmark from M4 Max system |
| README_IF_YOU_ARE_AN_AI/benchmarks/results/README.md | Updated benchmark paths throughout documentation |
| README_IF_YOU_ARE_AN_AI/benchmarks/results/.gitignore | New gitignore for autogenerated benchmark results |
| README_IF_YOU_ARE_AN_AI/benchmarks/capture_system_info.py | New system information capture script for benchmarks |
| README_IF_YOU_ARE_AN_AI/benchmarks/benchmark.sh | Updated usage documentation with new paths |
| README_IF_YOU_ARE_AN_AI/benchmarks/README.md | Updated all path references to new location |
| README_IF_YOU_ARE_AN_AI/benchmarks/BENCHMARKING_GUIDE.md | Comprehensive path updates in usage examples |
| README_IF_YOU_ARE_AN_AI/CODE_MAP.md | Updated benchmarks directory reference |
| BENCHMARKING-PLAN.md | Removed completed planning document |
| .gitignore | Updated benchmark results paths |
Renamed and reorganized benchmarking and metadata files under the README_IF_YOU_ARE_AN_AI directory for improved structure and clarity. Also removed the 'you-are' symlink.
Introduces a pedagogical Jupyter notebook demonstrating the Method of Moderation (MoM) for solving consumption-saving models. The notebook includes theoretical background, code examples, and visualizations comparing standard EGM and MoM approaches.
Changed all references from 'reproduce/benchmarks/' and 'metadata/' to 'README_IF_YOU_ARE_AN_AI/benchmarks/' and 'README_IF_YOU_ARE_AN_AI/metadata/' across documentation, scripts, and JSON files for consistency with the new directory structure. Also updated notebook execution path in reproduce.sh and added sympy to requirements.txt.
Changed the script to check for 'code/method-of-moderation.ipynb' instead of 'code/notebook.ipynb' when reporting executed notebooks.
Improved formatting, style, and consistency in benchmark scripts, metadata JSON, and symbolic equation definitions. Added MyST markdown versions of key Jupyter notebooks for documentation. Enhanced code clarity and maintainability in equations.py, and updated notebook references and comments for better pedagogical clarity.
Deleted markdown documentation files for method-of-moderation notebooks and made minor updates to the corresponding Jupyter notebooks. The changes streamline documentation and ensure that only notebook files are maintained for the moderation methods.
Expanded method-of-moderation-symbolic.ipynb with improved documentation, code comments, and algebraic verification examples. Added LaTeX export, symbolic differentiation, and equation listing for clarity and reproducibility. Minor markdown metadata cleanup in related notebooks.
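The symbolic differentiation and LaTeX export described above can be sketched with SymPy as follows. The CRRA utility function is an assumed example for illustration, not necessarily one of the equations defined in `equations.py`:

```python
import sympy as sp

# Assumed example: CRRA utility, a standard object in consumption-saving models.
c, rho = sp.symbols("c rho", positive=True)
u = c**(1 - rho) / (1 - rho)

# Symbolic differentiation: marginal utility u'(c) should simplify to c**(-rho).
u_prime = sp.diff(u, c)
assert sp.simplify(u_prime - c**(-rho)) == 0

# LaTeX export, e.g. for embedding derived equations in notebook markdown.
print(sp.latex(u_prime))
```

This is the kind of algebraic verification the notebook performs: derive an expression symbolically, confirm it matches the hand-derived form, and export it as LaTeX for the documentation.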
This pull request reorganizes the benchmarking and metadata documentation for improved clarity and accessibility, especially for AI-driven workflows. The main change is the migration of all benchmarking and metadata files from their previous locations (such as `reproduce/benchmarks/` and `metadata/`) into the `README_IF_YOU_ARE_AN_AI/` directory, with all references and documentation updated accordingly. Minor code formatting improvements were also made to the benchmarking scripts.

Benchmarking system and documentation reorganization:
- The benchmarking infrastructure (`benchmark.sh`, `capture_system_info.py`, `schema.json`, guides, results, etc.) was moved from `reproduce/benchmarks/` to `README_IF_YOU_ARE_AN_AI/benchmarks/`, and all documentation and code references were updated to reflect the new location.
- The completed planning document (`BENCHMARKING-PLAN.md`) was removed, as its content is now covered in the new guides and documentation structure.

Benchmarking script and Python utility improvements:
- `capture_system_info.py` was cleaned up, including more readable command strings, expanded package lists, and improved argument parsing.
- Shell quoting and option handling in `benchmark.sh` were clarified.

Metadata documentation and code sample updates:
- Documentation and code samples now reference `README_IF_YOU_ARE_AN_AI/metadata/` instead of `metadata/`.

These changes centralize AI-relevant files and documentation, making programmatic access and maintenance easier going forward.