
Conversation

@Djip007 (Contributor) commented Jun 14, 2024

Adds UMA support for higher speed, as in ggml-org/llama.cpp#7414, but with two changes:

  • remove the UMA build option
  • use UMA in all cases where the hip allocation fails with a "not enough memory" error

Another change: look for 'hipcc' on Linux instead of 'amdclang++'.

(A possible solution for #439 / #468.)

@jart (Collaborator) left a comment


Thank you!

@jart jart merged commit a28250b into mozilla-ai:main Jun 20, 2024
@Djip007 (Contributor, Author) commented Aug 2, 2024

I think this needs an update after e9ee3f9 (the UMA patch was not exactly the same, because in llamafile we can't decide if/when UMA has to be activated).

=> I'll check that in the next few days.

13/08/2024: This is indeed the case => new patch: #536

