
Move benchmarking and metadata files to new directory #10

Merged
alanlujan91 merged 12 commits into main from reorg_ai
Jan 19, 2026

Conversation

alanlujan91 (Member) commented Jan 18, 2026

This pull request reorganizes the benchmarking and metadata documentation for improved clarity and accessibility, especially for AI-driven workflows. The main change is the migration of all benchmarking and metadata files from their previous locations (such as reproduce/benchmarks/ and metadata/) into the README_IF_YOU_ARE_AN_AI/ directory, with all references and documentation updated accordingly. Minor code formatting improvements were also made to the benchmarking scripts.

Benchmarking system and documentation reorganization:

  • All benchmarking infrastructure (benchmark.sh, capture_system_info.py, schema.json, guides, results, etc.) was moved from reproduce/benchmarks/ to README_IF_YOU_ARE_AN_AI/benchmarks/, and all documentation and code references were updated to reflect the new location.
  • The top-level benchmarking plan document (BENCHMARKING-PLAN.md) was removed, as its content is now covered in the new guides and documentation structure.
  • All example commands, usage instructions, and directory structure illustrations in guides and README files were updated to use the new paths.
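The relocation described above is a mechanical prefix rewrite. A minimal sketch of the old-to-new path translation (the mapping below is inferred from the PR description, not taken from the repository):

```python
from pathlib import Path

# Old roots mapped to the new AI-facing directory (inferred from the PR text).
AI_DIR = Path("README_IF_YOU_ARE_AN_AI")
MOVES = {
    Path("reproduce/benchmarks"): AI_DIR / "benchmarks",
    Path("metadata"): AI_DIR / "metadata",
}


def new_path(old: str) -> Path:
    """Rewrite a pre-reorg path to its post-reorg location; leave other paths alone."""
    old_p = Path(old)
    for old_root, new_root in MOVES.items():
        try:
            return new_root / old_p.relative_to(old_root)
        except ValueError:  # old_p is not under this root
            continue
    return old_p
```

For example, `new_path("reproduce/benchmarks/benchmark.sh")` yields `README_IF_YOU_ARE_AN_AI/benchmarks/benchmark.sh`, while untouched paths pass through unchanged.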

Benchmarking script and Python utility improvements:

  • Minor formatting and readability improvements were made to capture_system_info.py, including more readable command strings, expanded package lists, and improved argument parsing.
  • The usage message in benchmark.sh was clarified for quoting and option handling.
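For context, a system-info capture script of this kind typically records interpreter and platform details as JSON. A self-contained sketch using only the standard library (this is illustrative, not the actual capture_system_info.py):

```python
import json
import platform
import sys


def capture_system_info() -> dict:
    """Collect basic interpreter/OS facts, similar in spirit to capture_system_info.py."""
    return {
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),
        "machine": platform.machine(),  # e.g. "arm64" on Apple Silicon
        "system": platform.system(),    # e.g. "Darwin", "Linux"
    }


if __name__ == "__main__":
    print(json.dumps(capture_system_info(), indent=2))
```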

Metadata documentation and code sample updates:

  • All metadata documentation and code samples were updated to reference the new location within README_IF_YOU_ARE_AN_AI/metadata/ instead of metadata/.
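After the move, code samples load metadata from the new prefix. A minimal round-trip sketch of the updated layout (the file contents below are invented for illustration; only the directory layout follows the PR):

```python
import json
import tempfile
from pathlib import Path

# Stand-in repo root; in the real repository this would be the checkout directory.
root = Path(tempfile.mkdtemp())
meta_dir = root / "README_IF_YOU_ARE_AN_AI" / "metadata"
meta_dir.mkdir(parents=True)

# Hypothetical parameter entry; the real parameters.json is richer.
(meta_dir / "parameters.json").write_text(
    json.dumps({"CRRA": {"default": 2.0, "range": [1.0, 5.0]}})
)

# Consumers now read from the README_IF_YOU_ARE_AN_AI/metadata/ prefix.
params = json.loads((meta_dir / "parameters.json").read_text())
```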

These changes help centralize AI-relevant files and documentation, making it easier for programmatic access and maintenance going forward.

Copilot AI review requested due to automatic review settings January 18, 2026 23:58

Copilot AI left a comment


Pull request overview

This pull request reorganizes the project's benchmarking infrastructure and metadata documentation to consolidate them under the README_IF_YOU_ARE_AN_AI/ directory, improving discoverability for AI systems and maintaining better separation of documentation from core reproduction scripts. Additionally, it corrects the notebook filename reference in the reproduction script and adds sympy as a dependency to support the new symbolic equations module.

Changes:

  • Relocated benchmarking infrastructure from reproduce/benchmarks/ to README_IF_YOU_ARE_AN_AI/benchmarks/ with comprehensive path updates across all documentation
  • Added rich metadata files (parameters.json, equations.py, algorithm.json) to provide machine-readable specifications for AI systems
  • Fixed notebook filename from notebook.ipynb to method-of-moderation.ipynb in reproduce.sh
  • Added sympy>=1.12 dependency to support symbolic equation representations
  • Removed obsolete files (you-are, BENCHMARKING-PLAN.md)
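The sympy>=1.12 requirement supports symbolic representations like those in equations.py. A hedged sketch of the kind of content such a module might hold (the CRRA Euler equation below is a standard consumption-saving relation, not copied from the file):

```python
import sympy as sp

# Illustrative symbols; the real equations.py may use different names.
c_t, c_tp1, beta, R, rho = sp.symbols("c_t c_tp1 beta R rho", positive=True)


def u(c):
    """CRRA utility u(c) = c**(1 - rho) / (1 - rho)."""
    return c ** (1 - rho) / (1 - rho)


# Marginal utility obtained by symbolic differentiation: u'(c) = c**(-rho)
u_prime = sp.simplify(sp.diff(u(c_t), c_t))

# Consumption Euler equation: u'(c_t) = beta * R * u'(c_{t+1})
euler = sp.Eq(u_prime, beta * R * u_prime.subs(c_t, c_tp1))

# LaTeX export, as mentioned for the symbolic notebook
euler_tex = sp.latex(euler)
```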

Reviewed changes

Copilot reviewed 10 out of 22 changed files in this pull request and generated no comments.

Summary per file:

  • you-are: Removed stray file with path reference
  • uv.lock: Added sympy dependency lock entry
  • requirements.txt: Added sympy>=1.12 requirement
  • reproduce.sh: Fixed notebook filename from notebook.ipynb to method-of-moderation.ipynb
  • README_IF_YOU_ARE_AN_AI/metadata/parameters.json: New comprehensive parameter specification with ranges and defaults
  • README_IF_YOU_ARE_AN_AI/metadata/equations.py: New SymPy module with symbolic equation representations
  • README_IF_YOU_ARE_AN_AI/metadata/equations.json: Updated path reference in metadata
  • README_IF_YOU_ARE_AN_AI/metadata/algorithm.json: New algorithm specification in machine-readable format
  • README_IF_YOU_ARE_AN_AI/metadata/README.md: Updated import paths for relocated metadata files
  • README_IF_YOU_ARE_AN_AI/benchmarks/schema.json: New JSON schema for benchmark result validation
  • README_IF_YOU_ARE_AN_AI/benchmarks/results/saved/20260117_reference_m4max_darwin-arm64.json: New reference benchmark from M4 Max system
  • README_IF_YOU_ARE_AN_AI/benchmarks/results/README.md: Updated benchmark paths throughout documentation
  • README_IF_YOU_ARE_AN_AI/benchmarks/results/.gitignore: New gitignore for autogenerated benchmark results
  • README_IF_YOU_ARE_AN_AI/benchmarks/capture_system_info.py: New system information capture script for benchmarks
  • README_IF_YOU_ARE_AN_AI/benchmarks/benchmark.sh: Updated usage documentation with new paths
  • README_IF_YOU_ARE_AN_AI/benchmarks/README.md: Updated all path references to new location
  • README_IF_YOU_ARE_AN_AI/benchmarks/BENCHMARKING_GUIDE.md: Comprehensive path updates in usage examples
  • README_IF_YOU_ARE_AN_AI/CODE_MAP.md: Updated benchmarks directory reference
  • BENCHMARKING-PLAN.md: Removed completed planning document
  • .gitignore: Updated benchmark results paths


Commit summaries:

  • Renamed and reorganized benchmarking and metadata files under the README_IF_YOU_ARE_AN_AI directory for improved structure and clarity. Also removed the 'you-are' symlink.
  • Introduces a pedagogical Jupyter notebook demonstrating the Method of Moderation (MoM) for solving consumption-saving models. The notebook includes theoretical background, code examples, and visualizations comparing standard EGM and MoM approaches.
  • Changed all references from 'reproduce/benchmarks/' and 'metadata/' to 'README_IF_YOU_ARE_AN_AI/benchmarks/' and 'README_IF_YOU_ARE_AN_AI/metadata/' across documentation, scripts, and JSON files for consistency with the new directory structure. Also updated notebook execution path in reproduce.sh and added sympy to requirements.txt.
  • Changed the script to check for 'code/method-of-moderation.ipynb' instead of 'code/notebook.ipynb' when reporting executed notebooks.
  • Improved formatting, style, and consistency in benchmark scripts, metadata JSON, and symbolic equation definitions. Added MyST markdown versions of key Jupyter notebooks for documentation. Enhanced code clarity and maintainability in equations.py, and updated notebook references and comments for better pedagogical clarity.
  • Deleted markdown documentation files for method-of-moderation notebooks and made minor updates to the corresponding Jupyter notebooks. The changes streamline documentation and ensure that only notebook files are maintained for the moderation methods.
  • Expanded method-of-moderation-symbolic.ipynb with improved documentation, code comments, and algebraic verification examples. Added LaTeX export, symbolic differentiation, and equation listing for clarity and reproducibility. Minor markdown metadata cleanup in related notebooks.
@alanlujan91 alanlujan91 merged commit 24e6a20 into main Jan 19, 2026
4 checks passed