
Conversation

stt commented May 8, 2025

My NVIDIA cards were lacking in VRAM, so I got this up and running on a 7900 XTX. It works OK-ish but could be faster (I haven't done proper testing yet, but the logs showed numbers like: "Processed prompts: 5.96s, Total time: 14.05s, Generated 262 frames (20.96s of audio), Real-time factor: 0.670x"). How does that compare to RTX cards? I'm not sure where to look for bottlenecks.
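For reference, and assuming the real-time factor in those logs is just total wall time divided by the duration of the generated audio (which is what the numbers suggest, and where anything below 1.0x means faster than real time), here's a quick sanity check:

```python
# Quick sanity check on the logged numbers.
# Assumption: RTF = total wall time / generated audio duration.
total_time_s = 14.05   # "Total time: 14.05s"
audio_s = 20.96        # "20.96s of audio"
frames = 262           # "Generated 262 frames"

rtf = total_time_s / audio_s
frame_rate = frames / audio_s

print(f"RTF ~ {rtf:.3f}x")                 # ~0.670x, matches the log
print(f"Frame rate ~ {frame_rate:.1f}/s")  # ~12.5 frames/s, i.e. ~80 ms of audio per frame
```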

Building vllm and finding compatible bitsandbytes and triton versions were the main challenges; hopefully there's no need to update them for a while. The vllm docs said to disable FA, so I did (sketch of that below). I'm not sure what its status is; it would be nice to get it working if possible.
For some reason, building vllm from their most recent master (v0.8.5+) resulted in "only" a 30.8GB image. I didn't check why v0.6.6 would be 100GB, but I guess ±70GB is no biggie these days.
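In case it helps whoever tests this, here's a minimal sketch of how I toggle the flash-attention path off, assuming I'm reading the ROCm notes in the vllm docs right about the VLLM_USE_TRITON_FLASH_ATTN environment variable (the model name below is just a placeholder, not the one this PR actually uses):

```python
import os

# Assumption (per my reading of the vLLM ROCm docs): "0" disables the Triton
# flash-attention kernel and falls back to the non-FA attention path.
# Must be set before vllm is imported.
os.environ.setdefault("VLLM_USE_TRITON_FLASH_ATTN", "0")

from vllm import LLM, SamplingParams

# Placeholder model name, purely for illustration.
llm = LLM(model="model-name-here")
outputs = llm.generate(["hello"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```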

davidbrowne17 (Owner) commented May 9, 2025

Nice work! I don't actually have an AMD card myself and can't test it, so I think we will likely need a third person to test this, then I'll happily merge. (If no one tests it for a while, it LGTM, so I reckon it can just be merged, but I'd prefer a test if possible.)

stt (Author) commented May 18, 2025

Noticed that dependencies have been updated, so at the moment this won't run; I'll try bringing the PR up to date.
