This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Actions: pytorch/torchchat

Workflow: Run the README instructions - with stories - on MacOS

692 workflow run results


Run #1555: export.py: fix custom SDPA type conversion logic & re-enable for bfloat16 (pull request #1193 synchronize by swolchok)
Run #1554: [Distributed] Fix cache lane (#1194) (commit 6d2ef4a pushed by kwen2501; 12m 31s on main)
Run #1553: export.py: fix custom SDPA type conversion logic & re-enable for bfloat16 (pull request #1193 synchronize by swolchok)
Run #1552: Default max_seq_length to 128 for ExecuTorch export (#1192) (commit c40c6bb pushed by swolchok; 12m 34s on main)
Run #1551: Also use default max_seq_length of 128 for ExecuTorch native runner (… (commit 4554314 pushed by swolchok; 12m 56s on main)
Run #1550: Use default max_seq_length of 128 when loading ExecuTorch models (#1190) (commit c3b5965 pushed by swolchok; 12m 21s on main)
Run #1549: add ConvertToLinear, disable custom SDPA for bfloat16 (#1189) (commit 28914fd pushed by swolchok; 12m 31s on main)
Run #1548: parallelize ExecuTorch build (#1188) (commit 2281c37 pushed by swolchok; 12m 35s on main)
Run #1547: Update ExecuTorch pin to pick up bfloat16 fixes (commit e2989c9 pushed by swolchok; 11m 47s on main)
Run #1546: [Distributed] Fix cache lane (pull request #1194 opened by kwen2501; 11m 28s on fix_cache_lane)
Run #1537: Use default max_seq_length of 128 when loading ExecuTorch models (pull request #1190 synchronize by swolchok)
Run #1539: parallelize ExecuTorch build (pull request #1188 synchronize by swolchok)
Run #1538: parallelize ExecuTorch build (pull request #1188 synchronize by swolchok)
Run #1540: Use default max_seq_length of 128 when loading ExecuTorch models (pull request #1190 synchronize by swolchok)
Run #1544: Default max_seq_length to 128 for ExecuTorch export (pull request #1192 synchronize by swolchok)
Run #1543: export.py: fix custom SDPA type conversion logic & re-enable for bfloat16 (pull request #1193 synchronize by swolchok)
Run #1541: export.py: fix custom SDPA type conversion logic & re-enable for bfloat16 (pull request #1193 synchronize by swolchok)
Run #1545: Default max_seq_length to 128 for ExecuTorch export (pull request #1192 synchronize by swolchok)
Run #1542: Update ExecuTorch pin to pick up bfloat16 fixes (pull request #1187 synchronize by swolchok)
Run #1536: add ConvertToLinear, disable custom SDPA for bfloat16 (pull request #1189 synchronize by swolchok)
Run #1535: Also use default max_seq_length of 128 for ExecuTorch native runner (pull request #1191 synchronize by swolchok)
Run #1533: Default max_seq_length to 128 for ExecuTorch export (pull request #1192 opened by swolchok)
Run #1532: Also use default max_seq_length of 128 for ExecuTorch native runner (pull request #1191 opened by swolchok)
Run #1531: Use default max_seq_length of 128 when loading ExecuTorch models (pull request #1190 opened by swolchok)