This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Actions: pytorch/torchchat

Run the README instructions - with stories - on MacOS
236 workflow run results

Default max_seq_length to 128 for ExecuTorch export
Run the README instructions - with stories - on MacOS #1544: Pull request #1192 synchronize by swolchok
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run the README instructions - with stories - on MacOS #1543: Pull request #1193 synchronize by swolchok
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run the README instructions - with stories - on MacOS #1541: Pull request #1193 synchronize by swolchok
Default max_seq_length to 128 for ExecuTorch export
Run the README instructions - with stories - on MacOS #1545: Pull request #1192 synchronize by swolchok
Update ExecuTorch pin to pick up bfloat16 fixes
Run the README instructions - with stories - on MacOS #1542: Pull request #1187 synchronize by swolchok
add ConvertToLinear, disable custom SDPA for bfloat16
Run the README instructions - with stories - on MacOS #1536: Pull request #1189 synchronize by swolchok
Also use default max_seq_length of 128 for ExecuTorch native runner
Run the README instructions - with stories - on MacOS #1535: Pull request #1191 synchronize by swolchok
Default max_seq_length to 128 for ExecuTorch export
Run the README instructions - with stories - on MacOS #1533: Pull request #1192 opened by swolchok
Also use default max_seq_length of 128 for ExecuTorch native runner
Run the README instructions - with stories - on MacOS #1532: Pull request #1191 opened by swolchok
Use default max_seq_length of 128 when loading ExecuTorch models
Run the README instructions - with stories - on MacOS #1531: Pull request #1190 opened by swolchok
add ConvertToLinear, disable custom SDPA for bfloat16
Run the README instructions - with stories - on MacOS #1530: Pull request #1189 opened by swolchok
parallelize ExecuTorch build
Run the README instructions - with stories - on MacOS #1529: Pull request #1188 opened by swolchok
Update ExecuTorch pin to pick up bfloat16 fixes
Run the README instructions - with stories - on MacOS #1528: Pull request #1187 opened by swolchok
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run the README instructions - with stories - on MacOS #1527: Pull request #1171 synchronize by swolchok
Default max_seq_length to 128 for ExecuTorch export
Run the README instructions - with stories - on MacOS #1526: Pull request #1170 synchronize by swolchok
Also use default max_seq_length of 128 for ExecuTorch native runner
Run the README instructions - with stories - on MacOS #1525: Pull request #1186 opened by swolchok
Default max_seq_length to 128 for ExecuTorch export
Run the README instructions - with stories - on MacOS #1524: Pull request #1170 synchronize by swolchok
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run the README instructions - with stories - on MacOS #1523: Pull request #1171 synchronize by swolchok
Use default max_seq_length of 128 when loading ExecuTorch models
Run the README instructions - with stories - on MacOS #1522: Pull request #1184 opened by swolchok
Android artifact update (#1138)
Run the README instructions - with stories - on MacOS #1521: Commit 04ea309 pushed by kirklandsign
11m 24s main
enable llava on torchchat
Run the README instructions - with stories - on MacOS #1520: Pull request #1183 synchronize by Gasoonjia
11m 15s llava-e2e
enable llava on torchchat
Run the README instructions - with stories - on MacOS #1519: Pull request #1183 synchronize by Gasoonjia
11m 26s llava-e2e
enable llava on torchchat
Run the README instructions - with stories - on MacOS #1518: Pull request #1183 synchronize by Gasoonjia
11m 21s llava-e2e
enable llava on torchchat
Run the README instructions - with stories - on MacOS #1517: Pull request #1183 opened by Gasoonjia
11m 22s llava-e2e