
Conversation

@addictivepixels

No description provided.

Signed-off-by: addictivepixels <addictivepixels@mac.com>
Better documentation on top of the Windows fixes

Signed-off-by: addictivepixels <addictivepixels@mac.com>
@vatsalaggarwal
Member

@addictivepixels thanks! There are a few merge conflicts, but otherwise @sidroopdaska will review soon.

@noita-player

This doesn't actually fix Windows support: torch.compile is not supported on Windows, so fast_inference_utils will need to be reworked.

@sidroopdaska
Member

@noita-player, can you assist here?

@noita-player

My hack to run on Windows by disabling torch.compile and the Triton flags resulted in ~40 s generation times on a 4080, so IMO Windows support should just be blocked on the upstream Triton/torch.compile Windows work, given the huge perf wins it provides: triton-lang/triton#1640 pytorch/pytorch#122094
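For reference, a minimal sketch of the kind of workaround described above: gating torch.compile behind a platform check so Windows falls back to eager execution. This is not the PR's actual code; the function name `maybe_compile` is hypothetical and only illustrates the approach.

```python
# Sketch only: gate torch.compile behind a platform check (assumed helper,
# not part of fast_inference_utils).
import sys

import torch


def maybe_compile(model: torch.nn.Module) -> torch.nn.Module:
    # torch.compile depends on Triton kernels, which are not available on
    # Windows, so return the uncompiled (eager) model there.
    if sys.platform == "win32":
        return model
    return torch.compile(model, mode="reduce-overhead")
```

As noted above, the eager fallback is much slower (~40 s generation on a 4080), which is why waiting on upstream Windows support for Triton/torch.compile may be preferable to shipping this workaround.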

