This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Actions: pytorch/torchchat

Run the README instructions - with stories

235 workflow run results

#1566  Bump torchtune pin to a 9-24 commit; Update Flamingo Definition
       Pull request #1195 synchronize by Gasoonjia (22m 3s, new-flamingo)
#1565  Add bf16 to runner_et test
       Pull request #1196 synchronize by swolchok
#1563  native runner: convert logits to float
       Pull request #1198 opened by swolchok
#1562  Revert "export.py: fix custom SDPA type conversion logic & re-enable …
       Commit f343b43 pushed by swolchok (20m 38s, main)
#1561  [Distributed] Make decode in flight optional
       Pull request #1180 synchronize by kwen2501 (21m 22s, batch_decode)
#1559  export.py: fix custom SDPA type conversion logic & re-enable for bflo…
       Commit 3aba730 pushed by swolchok (20m 49s, main)
#1557  Add bf16 to runner_et test
       Pull request #1196 opened by swolchok
#1556  Bump torchtune pin to a 9-24 commit; Update Flamingo Definition
       Pull request #1195 opened by Gasoonjia (22m 31s, new-flamingo)
#1554  [Distributed] Fix cache lane (#1194)
       Commit 6d2ef4a pushed by kwen2501 (20m 47s, main)
#1552  Default max_seq_length to 128 for ExecuTorch export (#1192)
       Commit c40c6bb pushed by swolchok (23m 15s, main)
#1551  Also use default max_seq_length of 128 for ExecuTorch native runner (…
       Commit 4554314 pushed by swolchok (23m 41s, main)
#1550  Use default max_seq_length of 128 when loading ExecuTorch models (#1190)
       Commit c3b5965 pushed by swolchok (22m 31s, main)
#1549  add ConvertToLinear, disable custom SDPA for bfloat16 (#1189)
       Commit 28914fd pushed by swolchok (21m 3s, main)
#1548  parallelize ExecuTorch build (#1188)
       Commit 2281c37 pushed by swolchok (20m 23s, main)
#1547  Update ExecuTorch pin to pick up bfloat16 fixes
       Commit e2989c9 pushed by swolchok (21m 27s, main)
#1546  [Distributed] Fix cache lane
       Pull request #1194 opened by kwen2501
#1545  Default max_seq_length to 128 for ExecuTorch export
       Pull request #1192 synchronize by swolchok
#1543  Update ExecuTorch pin to pick up bfloat16 fixes
       Pull request #1187 synchronize by swolchok
#1537  parallelize ExecuTorch build
       Pull request #1188 synchronize by swolchok