Commit 5420a40

Add result_xpu.md

1 parent 546d54e commit 5420a40
1 file changed: 17 additions, 19 deletions
@@ -47,30 +47,28 @@ Feel free to use https://github.com/pytorch/pytorch/releases/tag/v2.10.0 as an e
 ### bc breaking
 ### deprecation
 ### new features
+- Introduce XPUGraph, a runtime optimization feature designed to reduce host-side kernel launch overhead on XPU devices ([#166285](https://github.com/pytorch/pytorch/pull/166285), [#174041](https://github.com/pytorch/pytorch/pull/174041), [#174351](https://github.com/pytorch/pytorch/pull/174351), [#174059](https://github.com/pytorch/pytorch/pull/174059), [#174046](https://github.com/pytorch/pytorch/pull/174046), [#166843](https://github.com/pytorch/pytorch/pull/166843))
+
 ### improvements
+- Add `torch.xpu._dump_snapshot` API ([#170186](https://github.com/pytorch/pytorch/pull/170186))
+- Add `torch.xpu._record_memory_history` API ([#169559](https://github.com/pytorch/pytorch/pull/169559))
+- Add `torch.xpu.memory_snapshot` ([#169442](https://github.com/pytorch/pytorch/pull/169442))
+- Add `local_mem_size` to XPU device properties ([#172314](https://github.com/pytorch/pytorch/pull/172314))
+- Support `torch.accelerator.get_device_capability` on XPU ([#170747](https://github.com/pytorch/pytorch/pull/170747))
+- Enable static Triton kernel launcher for XPU backend ([#169938](https://github.com/pytorch/pytorch/pull/169938))
+- Enable Triton online softmax kernels on XPU ([#163251](https://github.com/pytorch/pytorch/pull/163251))
+- Support woq_int8 Inductor pattern on Intel GPU ([#163615](https://github.com/pytorch/pytorch/pull/163615))
+- Add XPU ATen GEMM overloads with output dtype ([#170523](https://github.com/pytorch/pytorch/pull/170523))
+- Support `aot_inductor.emit_multi_arch_kernel` on XPU ([#171432](https://github.com/pytorch/pytorch/pull/171432))
+- Improve Inductor UT coverage for XPU ([#171280](https://github.com/pytorch/pytorch/pull/171280), [#166376](https://github.com/pytorch/pytorch/pull/166376), [#169181](https://github.com/pytorch/pytorch/pull/169181), [#166504](https://github.com/pytorch/pytorch/pull/166504))
+
 ### bug fixes
 ### performance
 ### docs
+- Update XPU Get Started guide with new client GPU and formatting ([#169810](https://github.com/pytorch/pytorch/pull/169810))
+- Document previous version of Torch XPU installation ([#174453](https://github.com/pytorch/pytorch/pull/174453))
+
 ### devs
 ### Untopiced
-- [xpu][feature] [1/6] Add trace support on XPU caching allocator ([#168262](https://github.com/pytorch/pytorch/pull/168262))
-- [xpu][feature] [2/6] Track stack context for xpu caching allocator ([#169280](https://github.com/pytorch/pytorch/pull/169280))
-- [xpu][feature] [3/6] Add snapshot support on XPU caching allocator ([#169203](https://github.com/pytorch/pytorch/pull/169203))
-- [xpu][feature] Introduce some additional metrics for memory stats of XPU caching allocator ([#169812](https://github.com/pytorch/pytorch/pull/169812))
-- Optimize the performance of int_mm when the mat2 tensor is non-contiguous on Intel GPU ([#169555](https://github.com/pytorch/pytorch/pull/169555))
-- [xpu][feature] Fall back memory-efficient attention to math attention on XPU ([#166936](https://github.com/pytorch/pytorch/pull/166936))
-- [xpu][feature] Add skip-actions support to filter out memory trace ([#170760](https://github.com/pytorch/pytorch/pull/170760))
-- Support torch.accelerator.get_device_capability on XPU ([#170747](https://github.com/pytorch/pytorch/pull/170747))
-- [xpu][fix] Use small pool for 1MB allocation ([#171453](https://github.com/pytorch/pytorch/pull/171453))
-- [xpu][feature] [4/6] Introduce record memory history for XPU in cpp part ([#169296](https://github.com/pytorch/pytorch/pull/169296))
-- [xpu][feature] [5/6] Introduce memory snapshot for XPU in frontend part ([#169442](https://github.com/pytorch/pytorch/pull/169442))
-- [xpu][feature] [6/6] Introduce _record_memory_history for XPU in frontend part ([#169559](https://github.com/pytorch/pytorch/pull/169559))
-- [xpu][fix] Fix wrong signature in XPU memory docs ([#172933](https://github.com/pytorch/pytorch/pull/172933))
-- [xpu][utils] Add a helper function to XPU for code reuse ([#173333](https://github.com/pytorch/pytorch/pull/173333))
-- The frontend python APIs for XPUGraph capture/replay etc. ([#174046](https://github.com/pytorch/pytorch/pull/174046))
-- [xpu][feature] Add local_mem_size to XPU device property ([#172314](https://github.com/pytorch/pytorch/pull/172314))
-- [xpu][fix] Enlarge dynamo UT timeout for XPU due to low CPU freq of XPU CI machine ([#170292](https://github.com/pytorch/pytorch/pull/170292))
 ### not user facing
-- [xpu][test][FlexAttention] Enable test_GQA on Intel XPU ([#166376](https://github.com/pytorch/pytorch/pull/166376))
-- xpu: add a test to verify all torch xpu libraries are linked on Linux ([#169322](https://github.com/pytorch/pytorch/pull/169322))
### security
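
The memory-introspection entries above (`_record_memory_history`, `memory_snapshot`, `_dump_snapshot`) mirror existing `torch.cuda` tooling. A minimal sketch of how they might be used together, assuming the XPU variants follow the CUDA signatures; the exact arguments are an assumption, not taken from the PRs, and the sketch degrades gracefully when no XPU device (or PyTorch build) is present:

```python
# Hedged sketch of the new XPU memory-introspection APIs listed above.
# Assumes torch.xpu._record_memory_history / memory_snapshot / _dump_snapshot
# behave like their torch.cuda counterparts; signatures are an assumption.
try:
    import torch
    have_xpu = hasattr(torch, "xpu") and torch.xpu.is_available()
except ImportError:  # PyTorch not installed in this environment
    torch = None
    have_xpu = False

if have_xpu:
    # Start recording allocator events (stack context per alloc/free).
    torch.xpu._record_memory_history()
    x = torch.empty(1024, 1024, device="xpu")  # trigger an allocation
    # In-memory snapshot of allocator segments ...
    segments = torch.xpu.memory_snapshot()
    # ... or dump to a pickle viewable at pytorch.org/memory_viz.
    torch.xpu._dump_snapshot("xpu_mem_snapshot.pickle")
    # Stop recording (mirrors the CUDA `enabled=None` convention; assumed).
    torch.xpu._record_memory_history(enabled=None)
else:
    print("No XPU device available; skipping memory-history demo")
    segments = []
```

On CUDA, the dumped pickle is loaded into the memory_viz tool to visualize allocation timelines; presumably the XPU snapshot format is intended to be compatible, given the deliberately matching API names.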
