- Add `hip` feature flag to Cargo.toml files
- Configure HIP build in build.rs with ROCm path detection
- Set up HIP compiler and library paths
- Add support for multiple AMD GPU architectures, including gfx942 (MI300X)
- Configure proper linking with HIP runtime libraries (amdhip64, rocblas, hipblas)
- Add `-fPIC` flags for position-independent code
- Remove unnecessary GCC/cmath workaround (users should install libstdc++-dev)

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>

- Remove hardcoded ROCm version (6.4.1) from library search paths
- Dynamically detect versioned ROCm installations in /opt/rocm-*
- Prioritize `ROCM_PATH` environment variable if set
- Automatically find all ROCm installations regardless of version
- Makes the build more robust across different ROCm releases

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>

- Remove filesystem searching for ROCm installations
- Only use `hipconfig` or the `ROCM_PATH` environment variable
- Fail with a clear error if ROCm cannot be found
- Consolidate HIP configuration into a single code block
- Follow the best practice of explicit configuration over discovery

This ensures the build only uses explicitly configured resources rather than searching the filesystem for installations.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
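The detection strategy described above can be sketched as a pair of build.rs helpers. This is a minimal illustration, not the PR's actual code: the function names (`rocm_root`, `hip_link_directives`) are hypothetical, and it assumes `hipconfig --rocmpath` is available when `ROCM_PATH` is unset.

```rust
use std::env;
use std::process::Command;

/// Resolve the ROCm installation root: prefer the ROCM_PATH environment
/// variable, otherwise ask `hipconfig`. No filesystem searching.
fn rocm_root() -> Result<String, String> {
    if let Ok(path) = env::var("ROCM_PATH") {
        return Ok(path);
    }
    let out = Command::new("hipconfig")
        .arg("--rocmpath")
        .output()
        .map_err(|e| format!("could not run hipconfig: {e}"))?;
    if out.status.success() {
        Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
    } else {
        // Fail with a clear error rather than scanning /opt/rocm-*.
        Err("ROCm not found: set ROCM_PATH or ensure hipconfig is on PATH".into())
    }
}

/// Build the cargo directives that link the HIP runtime libraries.
fn hip_link_directives(root: &str) -> Vec<String> {
    let mut lines = vec![format!("cargo:rustc-link-search=native={root}/lib")];
    for lib in ["amdhip64", "rocblas", "hipblas"] {
        lines.push(format!("cargo:rustc-link-lib={lib}"));
    }
    lines
}
```

In a build.rs `main`, the resolved directives would simply be printed with `println!` so that cargo picks them up; an unresolvable ROCm root would abort the build with the error message above.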
Contributor:
https://github.com/utilityai/llama-cpp-rs/actions/runs/16555633011/job/46873897581?pr=785 should be passing. Looks like formatting is the first step.
Contributor:
Is there any update on this? Or is there a way I can help getting this across the line? I'd really like some ROCm support 😅
#725 & #784 could be addressed by these changes. Note that the change was tested against upstream llama.cpp, so it may be best to wait on testing until llama.cpp is updated as part of the normal flow. Testing was done on the AMD Developer Cloud with Ubuntu 24.04.1 LTS, AMD ROCm 6.4.1, and an AMD MI300X. This is a proof of concept tested in only that one environment; it still needs review and execution within a test suite. Let me know if there is a test suite I should run it through.