Conversation

@Yangxiaoz
Contributor

As described in (#13856), this PR introduces a build flag (GGML_CUDA_JETSON_DEVICE) for CUDA Jetson-related optimizations and modifies the logic of cuda_device_support_buft when the flag is enabled.

Specifically, `cuda_device_support_buft` is modified to also report support for the host buffer type (pinned memory) when the CUDA device is a Jetson.

According to the NVIDIA CUDA for Tegra application note (memory types table): https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/index.html#memory-types-table

[image: screenshot of the NVIDIA Tegra memory types table]
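The change described above can be sketched roughly as follows. This is a hypothetical illustration based only on the PR description, not the actual ggml source: the function name `cuda_device_support_buft` is taken from the text, and `ggml_backend_buft_is_host` is assumed to be the ggml helper for identifying host buffer types.

```cpp
// Hypothetical sketch (not the real ggml implementation): when the build
// flag GGML_CUDA_JETSON_DEVICE is set, additionally accept host (pinned)
// buffer types, since on Jetson the iGPU and CPU share physical memory
// and pinned host buffers are directly accessible by the GPU.
static bool cuda_device_support_buft(ggml_backend_dev_t dev, ggml_backend_buffer_type_t buft) {
#ifdef GGML_CUDA_JETSON_DEVICE
    if (ggml_backend_buft_is_host(buft)) {
        return true;  // pinned host memory is usable as device memory on Jetson
    }
#endif
    // ... existing checks for CUDA device buffer types ...
}
```

The compile-time flag is precisely what the reviewer objects to below, since it hard-codes a platform property that the runtime can report.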

@github-actions bot added labels: documentation (Improvements or additions to documentation), Nvidia GPU (Issues specific to Nvidia GPUs), ggml (changes relating to the ggml tensor library for machine learning) — May 28, 2025
@Yangxiaoz
Contributor Author

Hi @JohannesGaessler, could I ask for your expertise in reviewing this PR when convenient? I am currently studying this integrated GPU architecture (Jetson) in CUDA. If you think my PR is inappropriate, I will revoke it immediately 0.0

@JohannesGaessler
Collaborator

I think this is not a good solution. Is there no way to detect iGPUs automatically?

@Yangxiaoz
Contributor Author

> I think this is not a good solution. Is there no way to detect iGPUs automatically?

Thank you very much for your reply. Your opinion is correct; I am aware of the problem with this commit. I will review the CUDA runtime API to find a more appropriate way to solve this problem.
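For reference, the CUDA runtime API can report whether a device is integrated without any build flag: `cudaDeviceProp::integrated` is 1 for iGPUs such as Jetson and 0 for discrete GPUs. A minimal sketch of runtime detection (requires the CUDA toolkit to compile; not a drop-in patch for this PR):

```cpp
// Sketch: detect integrated GPUs at runtime via cudaDeviceProp::integrated,
// instead of relying on a compile-time flag like GGML_CUDA_JETSON_DEVICE.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess) {
            continue;
        }
        // prop.integrated == 1 on iGPUs (e.g. Jetson), 0 on discrete GPUs.
        printf("device %d (%s): integrated=%d\n", i, prop.name, prop.integrated);
    }
    return 0;
}
```

Querying `integrated` at initialization would let the backend decide per device whether pinned host buffers are directly usable, which is presumably the automatic detection the reviewer has in mind.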

@Yangxiaoz Yangxiaoz closed this May 29, 2025
