
Add torch.compile support for neural network optimization #15

Merged
SkBlaz merged 3 commits into master from copilot/fix-14 on Aug 17, 2025

Conversation

Contributor

Copilot AI commented Aug 17, 2025

This PR adds support for PyTorch's torch.compile feature to autoBOT's neural networks, enabling JIT compilation for improved training and inference performance.

Changes Made

Core Implementation

  • Added compile_model parameter to the SFNN class with default False for backward compatibility
  • Updated torch_sparse_nn_ff_basic hyperparameter configuration to include compile_model: [False, True]
  • Applied torch.compile() to models in SFNN.fit() when compilation is enabled
  • Added comprehensive docstring documentation for the new parameter
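The toggle described above can be sketched as a small helper; a minimal sketch, assuming a hypothetical `maybe_compile` helper name (the actual logic lives inside `SFNN.fit()` in the PR):

```python
import torch
import torch.nn as nn

def maybe_compile(model: nn.Module, compile_model: bool) -> nn.Module:
    """Wrap the model with torch.compile when requested and supported.

    Falls back to the original model when torch.compile is unavailable
    (PyTorch < 2.0), so older environments keep working.
    """
    if compile_model and hasattr(torch, "compile"):
        return torch.compile(model)
    return model

# With compile_model=False the model is returned untouched,
# which preserves the backward-compatible default behavior.
net = nn.Linear(8, 2)
assert maybe_compile(net, compile_model=False) is net
```

Guarding on `hasattr(torch, "compile")` is one way to satisfy the "environment compatibility" test case mentioned below without pinning a minimum PyTorch version.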

Bug Fixes

  • Fixed an existing bug in the predict_proba() method that caused unpacking errors when iterating over datasets without labels
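The class of bug being fixed can be illustrated in isolation; this is a standalone sketch (not the actual autoBOT code), assuming labeled batches arrive as (features, label) tuples and unlabeled batches as bare feature lists:

```python
def iter_batches(dataset):
    """Yield (features, label) pairs, tolerating unlabeled datasets.

    A naive `for x, y in dataset:` raises a ValueError when the dataset
    yields bare feature batches (no labels), e.g. at predict time.
    Normalizing each item before unpacking avoids that.
    """
    for item in dataset:
        if isinstance(item, tuple) and len(item) == 2:
            features, label = item          # labeled batch
        else:
            features, label = item, None    # unlabeled batch
        yield features, label

labeled = [([1.0, 2.0], 0), ([3.0, 4.0], 1)]
unlabeled = [[1.0, 2.0], [3.0, 4.0]]

assert [y for _, y in iter_batches(labeled)] == [0, 1]
assert [y for _, y in iter_batches(unlabeled)] == [None, None]
```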

Testing & Examples

  • Created comprehensive test suite in tests/test_torch_compile.py with 4 test cases covering:
    • Default behavior (compilation disabled)
    • Compilation enabled scenarios
    • Hyperparameter optimization integration
    • Environment compatibility checks
  • Added examples/minimal_torch_compile.py demonstrating usage with both compiled and non-compiled models
  • Created docs/TORCH_COMPILE.md with detailed documentation

Usage

Through Hyperparameter Configuration

import autoBOTLib

custom_hyperparams = {
    "compile_model": [True]  # Enable torch.compile
}

autobot = autoBOTLib.GAlearner(
    texts, labels,
    framework="torch",
    custom_hyperparameters=custom_hyperparams
)

Direct Usage

from autoBOTLib.learning.torch_sparse_nn import SFNN

model = SFNN(compile_model=True)  # Enable compilation
model.fit(X_train, y_train)

Benefits

  • Performance: JIT compilation can deliver significant speedups during training and inference, depending on model size and hardware
  • Automatic: Works transparently with existing hyperparameter optimization
  • Backward Compatible: Default behavior unchanged, existing code continues to work

Verification

All tests pass, and compiled models are reported as OptimizedModule instances, confirming that torch.compile is applied correctly. The hyperparameter optimization automatically explores compiled configurations when available.
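The OptimizedModule check mentioned above can be reproduced directly; a minimal sketch, assuming PyTorch >= 2.0 is installed:

```python
import torch
import torch.nn as nn

net = nn.Linear(8, 2)

if hasattr(torch, "compile"):  # torch.compile shipped with PyTorch 2.0
    compiled = torch.compile(net)
    # torch.compile wraps the module lazily; the wrapper class is named
    # OptimizedModule, which is what the PR's test suite checks for.
    print(type(compiled).__name__)  # OptimizedModule
else:
    print("torch.compile unavailable; model left uncompiled")
```

Note that torch.compile only wraps the module here; actual compilation is deferred until the first forward pass.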

Fixes #14.



Copilot AI and others added 2 commits August 17, 2025 09:12
Co-authored-by: SkBlaz <10035780+SkBlaz@users.noreply.github.com>
…pport

Co-authored-by: SkBlaz <10035780+SkBlaz@users.noreply.github.com>
Copilot AI changed the title [WIP] Adding torch.compile Add torch.compile support for neural network optimization Aug 17, 2025
Copilot AI requested a review from SkBlaz August 17, 2025 09:16
@SkBlaz SkBlaz marked this pull request as ready for review August 17, 2025 10:45
@SkBlaz SkBlaz merged commit dd2ecba into master Aug 17, 2025
2 checks passed