Conversation

@splotnikv

This PR adds HPU support to mini_trainer (a device-selection sketch illustrating the general pattern follows below). It should be used with Red-Hat-AI-Innovation-Team/training_hub#10.

Signed-off-by: Sergey Plotnikov <[email protected]>
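
For readers without the diff open: HPU enablement in PyTorch training code usually boils down to loading the Habana bridge (habana_frameworks.torch) and selecting the hpu device when it is present. The sketch below only illustrates that general pattern; the helper name pick_device is made up and nothing here is taken from the PR's actual changes.

```python
# Illustrative sketch only -- not the code from this PR's diff.
import torch

try:
    # Importing the Habana PyTorch bridge registers the "hpu" device type.
    import habana_frameworks.torch.core  # noqa: F401
    HPU_AVAILABLE = True
except ImportError:
    HPU_AVAILABLE = False


def pick_device() -> torch.device:
    """Prefer HPU when the Habana bridge is importable, then CUDA, then CPU."""
    if HPU_AVAILABLE:
        return torch.device("hpu")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")
```

In Habana's lazy execution mode the training loop also typically calls htcore.mark_step() after each optimizer step; whether this PR needs that depends on the execution mode it targets.
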
@coderabbitai

coderabbitai bot commented Nov 13, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

@splotnikv
Author

HPU does not natively support torch.linalg.svd(); the last commit added CPU-level parallelization for that operation.
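
For context, a common way to work around a missing accelerator kernel is to run the op on the host: move the tensors to the CPU and fan the independent SVDs out across a thread pool (PyTorch releases the GIL inside its C++ kernels, so threads give real parallelism). The sketch below illustrates that pattern under those assumptions; the helper name svd_cpu_parallel is hypothetical and this is not the code from the commit.

```python
# Hypothetical sketch of a parallel CPU fallback for torch.linalg.svd;
# names and structure are illustrative, not taken from the commit.
from concurrent.futures import ThreadPoolExecutor

import torch


def svd_cpu_parallel(matrices, max_workers=8):
    """Run torch.linalg.svd on the CPU for a list of independent matrices.

    PyTorch releases the GIL inside its C++ kernels, so a thread pool
    gives genuine parallelism across the decompositions. torch.set_num_threads()
    may need tuning to avoid oversubscribing cores with PyTorch's own
    intra-op threading.
    """
    def _one(m):
        # Move off the accelerator (e.g. HPU) before calling the unsupported op.
        return torch.linalg.svd(m.detach().to("cpu"), full_matrices=False)

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(_one, matrices))


# Usage: each result is a (U, S, Vh) named tuple, one per input matrix.
# factors = svd_cpu_parallel([p for p in model.parameters() if p.dim() == 2])
```
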

@RobotSail
Collaborator

Thanks for the contribution and for thinking of mini_trainer!

After some consideration, we've decided to keep this project narrowly scoped to CUDA/NVIDIA hardware. The main reason is maintainability: we don't have access to HPU hardware for testing, and adding support for additional accelerators creates code paths we can't realistically validate or maintain long-term. Untested code paths tend to bit-rot and create a poor experience for users who rely on them.

If you're interested in HPU support, you're welcome to maintain a fork, or if there's community interest, a separate mini_trainer-hpu project could be a good approach.

Appreciate your understanding, and thanks again for the interest in the project!
