This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Actions: pytorch/torchchat

Showing runs from all workflows
665 workflow run results

- export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
  Run the README instructions - with stories - on MPS/MacOS #1423: Pull request #1193 opened by swolchok
- Default max_seq_length to 128 for ExecuTorch export
  Run the README instructions - with stories - on MacOS #1533: Pull request #1192 opened by swolchok
- Default max_seq_length to 128 for ExecuTorch export
  Run the aoti runner with CUDA using stories #1918: Pull request #1192 opened by swolchok
- Default max_seq_length to 128 for ExecuTorch export
  Run the README instructions - with stories - on MPS/MacOS #1422: Pull request #1192 opened by swolchok
- Default max_seq_length to 128 for ExecuTorch export
  Run the README instructions - with stories #1533: Pull request #1192 opened by swolchok
- Also use default max_seq_length of 128 for ExecuTorch native runner
  Run the README instructions - with stories - on MPS/MacOS #1421: Pull request #1191 opened by swolchok
- Also use default max_seq_length of 128 for ExecuTorch native runner
  Run the README instructions - with stories - on MacOS #1532: Pull request #1191 opened by swolchok
- Use default max_seq_length of 128 when loading ExecuTorch models
  Run the README instructions - with stories - on MacOS #1531: Pull request #1190 opened by swolchok