Conversation

@NeuralFault
Contributor

This pull request updates the logic for selecting PyTorch and ROCm versions during package installation in SDWebForge.cs. The main changes ensure that the latest PyTorch is used for both Blackwell and AMD ROCm environments, and that the ROCm version is upgraded for AMD GPUs.

Dependency and environment handling:

  • Updated the logic to use the latest PyTorch version (by leaving the version string empty) for both Blackwell GPUs and AMD ROCm environments, instead of only for Blackwell.
  • Upgraded the ROCm version from rocm5.7 to rocm6.4 for AMD GPUs.
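
Concretely, the change described in the bullets above boils down to a small conditional. The following is a minimal sketch only: the class, method, and the placeholder pin/index values for non-Blackwell, non-AMD GPUs are assumptions for illustration, not the actual SDWebForge.cs code.

```csharp
// Illustrative sketch of the selection rule described above; all names and
// the placeholder values are assumptions, not the real SDWebForge.cs code.
internal static class TorchSelectionSketch
{
    static (string TorchVersion, string TorchIndex) SelectTorch(bool isBlackwellGpu, bool isAmdGpu)
    {
        var torchVersion = "==2.1.2";   // placeholder pin for other GPUs (invented for this example)
        var torchIndex = "cu121";       // placeholder CUDA index (invented for this example)

        // An empty version string leaves PyTorch unpinned, so the latest release
        // is installed; this now covers AMD ROCm as well as Blackwell.
        if (isBlackwellGpu || isAmdGpu)
            torchVersion = string.Empty;

        // ROCm wheel index for AMD GPUs upgraded from "rocm5.7" to "rocm6.4".
        if (isAmdGpu)
            torchIndex = "rocm6.4";

        return (torchVersion, torchIndex);
    }
}
```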

… was still hardcoded to 5.7 and causing reForge to install with 5.7.
…on constraint for pytorch, so latest pytorch is installed when using AMD
@gemini-code-assist
Contributor

Summary of Changes

Hello @NeuralFault, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the dependency management within the SDWebForge.cs installation process. The primary goal is to ensure that users leveraging either Blackwell or AMD ROCm GPUs benefit from the latest available PyTorch versions, while also specifically upgrading the ROCm support to version 6.4 for AMD users. This update aims to enhance compatibility and leverage recent advancements in GPU computing for these environments.

Highlights

  • PyTorch Version Selection Logic: The logic for selecting the PyTorch version during package installation has been updated to ensure the latest PyTorch is used for both Blackwell GPUs and AMD ROCm environments. Previously, this logic only applied to Blackwell GPUs.
  • ROCm Version Upgrade: The ROCm index for AMD GPUs has been upgraded from "rocm5.7" to "rocm6.4" to provide updated support for AMD hardware.
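
For context, index tags such as rocm6.4, cu126, and cu128 select which PyTorch wheel index pip pulls from. The snippet below sketches the usual mapping onto the download.pytorch.org URL layout; the helper name and shape are assumptions for illustration, not how SDWebForge.cs actually builds its pip arguments.

```csharp
// Illustrative only: shows how an index tag like "rocm6.4" typically maps
// onto the public PyTorch wheel index URL. The helper name is an assumption.
internal static class TorchIndexSketch
{
    static string BuildIndexUrl(string indexTag) =>
        $"https://download.pytorch.org/whl/{indexTag}";

    // Example: BuildIndexUrl("rocm6.4") == "https://download.pytorch.org/whl/rocm6.4"
    //          BuildIndexUrl("cu128")   == "https://download.pytorch.org/whl/cu128"
}
```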


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly updates the logic for package installation in SDWebForge.cs, specifically upgrading the ROCm version and adjusting the PyTorch version selection for AMD GPUs. The changes are clear and align with the description. I've included one review comment to address some formatting issues to improve code readability and maintain consistency with the existing codebase.

…a install to be cu126 whilst keeping Blackwell GPU environment at cu128. Removed AMD GPU check as no longer needed.
@NeuralFault
Contributor Author

Updated so all installs use the latest PyTorch. NVIDIA indexes are set to cu128 for Blackwell and cu126 for all others, and the variable for the latest-PyTorch check has been removed since it is no longer needed.
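
A rough sketch of the end state this describes (illustrative only: the branch shape and names are assumptions, not the actual SDWebForge.cs code). Every path now leaves the PyTorch version unpinned, so only the wheel index differs per GPU.

```csharp
// Illustrative sketch of the final behaviour described above; names are
// assumptions, not the actual SDWebForge.cs code. The version is left
// unpinned everywhere, so the separate "use latest PyTorch" flag is gone.
internal static class FinalTorchIndexSketch
{
    static string SelectTorchIndex(bool isBlackwellGpu, bool isAmdGpu)
    {
        if (isAmdGpu)
            return "rocm6.4";          // AMD GPUs: ROCm 6.4 wheels

        return isBlackwellGpu
            ? "cu128"                  // Blackwell GPUs: CUDA 12.8 wheels
            : "cu126";                 // all other installs: CUDA 12.6 wheels
    }
}
```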

Ready for merge.

Contributor

@mohnjiles mohnjiles left a comment

thank you!

@mohnjiles mohnjiles merged commit 7fa112e into LykosAI:main Dec 25, 2025
2 of 3 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Dec 25, 2025
@NeuralFault NeuralFault deleted the forge-rocm-fix branch December 25, 2025 04:35