Update base for Update on "[Executorch][SDPA] Refactor + Make quantized sdpa handle sequence at dim 1 or 2"
For quantized SDPA we want to evaluate the performance impact of having the sequence length at dim 1 as well as at dim 2.
This diff refactors the code to enable that.
The same should also be done for float SDPA, but that is left for future work.
Differential Revision: [D71833060](https://our.internmc.facebook.com/intern/diff/D71833060/)
[ghstack-poisoned]
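To make the commit's intent concrete, here is a minimal PyTorch sketch of what handling the sequence length at dim 1 ([B, S, H, D]) versus dim 2 ([B, H, S, D]) means for the Q/K/V layouts. This is not the ExecuTorch implementation, and the helper name `sdpa_with_seq_dim` is hypothetical:

```python
import torch
import torch.nn.functional as F

def sdpa_with_seq_dim(q, k, v, seq_dim: int):
    """Hypothetical helper: run SDPA whether the sequence length
    sits at dim 1 ([B, S, H, D]) or dim 2 ([B, H, S, D])."""
    assert seq_dim in (1, 2)
    if seq_dim == 1:
        # [B, S, H, D] -> [B, H, S, D], the layout that
        # F.scaled_dot_product_attention expects.
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    if seq_dim == 1:
        out = out.transpose(1, 2)  # restore the caller's [B, S, H, D] layout
    return out

# Both layouts yield the same values, just permuted.
B, H, S, D = 2, 4, 16, 64
q = torch.randn(B, S, H, D)  # sequence at dim 1
out1 = sdpa_with_seq_dim(q, q, q, seq_dim=1)
out2 = sdpa_with_seq_dim(*(t.transpose(1, 2) for t in (q, q, q)), seq_dim=2)
torch.testing.assert_close(out1, out2.transpose(1, 2))
```

Supporting both layouts directly, rather than forcing callers to transpose, is what lets the diff measure which placement of the sequence dimension is faster for the quantized kernel.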
README.md: 2 additions & 2 deletions
@@ -49,9 +49,9 @@ Key value propositions of ExecuTorch are:
 ## Getting Started
 To get started you can:
 
-- Visit the [Step by Step Tutorial](https://pytorch.org/executorch/main/index.html) on getting things running locally and deploy a model to a device
+- Visit the [Step by Step Tutorial](https://pytorch.org/executorch/main/index.html) to get things running locally and deploy a model to a device
 - Use this [Colab Notebook](https://pytorch.org/executorch/stable/getting-started-setup.html#quick-setup-colab-jupyter-notebook-prototype) to start playing around right away
-- Jump straight into LLMs use cases by following specific instructions for [Llama](./examples/models/llama/README.md) and [Llava](./examples/models/llava/README.md)
+- Jump straight into LLM use cases by following specific instructions for [Llama](./examples/models/llama/README.md) and [Llava](./examples/models/llava/README.md)