Commit b805e91

Jingxu10/docstring 112 (#1004)
* fix docs bugs, highlight installation of pytorch+cpu
* update version number to 1.12.100
1 parent caf9660 commit b805e91

File tree

4 files changed: +23, -20 lines

* docs/tutorials/installation.md
* docs/tutorials/performance_tuning/known_issues.md
* intel_extension_for_pytorch/cpu/runtime/multi_stream.py
* setup.py


docs/tutorials/installation.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -5,7 +5,7 @@ Installation Guide
 
 |Category|Content|
 |--|--|
-|Compiler|Recommend using GCC newer than 11.2|
+|Compiler|Recommend using GCC 10|
 |Operating System|CentOS 7, RHEL 8, Rocky Linux 8.5, Ubuntu newer than 18.04|
 |Python|See prebuilt wheel files availability matrix below|
 
```

```diff
@@ -26,7 +26,7 @@ Make sure PyTorch is installed so that the extension will work properly. For eac
 |[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|[v1.0.1](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.1)|
 |[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|[v1.0.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.0)|
 
-Here is an example showing how to install PyTorch. For more details, refer to [pytorch.org](https://pytorch.org/get-started/locally/).
+Please install CPU version of PyTorch through its official channel. For more details, refer to [pytorch.org](https://pytorch.org/get-started/locally/).
 
 ---
 
```
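For reference, a CPU-only PyTorch install of this era typically looked something like `python -m pip install torch --extra-index-url https://download.pytorch.org/whl/cpu`; the exact command varies by release and Python version, so treat this as a sketch and follow the selector on [pytorch.org](https://pytorch.org/get-started/locally/) for the authoritative instructions.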

docs/tutorials/performance_tuning/known_issues.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -1,6 +1,8 @@
 Known Issues
 ============
 
+- Compiling with gcc 11 might result in `illegal instruction` error.
+
 - `RuntimeError: Overflow when unpacking long` when a tensor's min max value exceeds int range while performing int8 calibration. Please customize QConfig to use min-max calibration method.
 
 - For models with dynamic control flow, please try dynamic quantization. Users are likely to get performance gain for GEMM models.
```
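The second bullet asks users to customize QConfig for min-max calibration. Below is a minimal sketch using stock PyTorch observers; the commented-out `ipex.quantization.prepare`/`convert` calls reflect the 1.12-era calibration flow and their exact signatures are assumptions, so verify against the quantization tutorial of the installed version.

```python
import torch
from torch.ao.quantization import MinMaxObserver, PerChannelMinMaxObserver, QConfig

# Min-max calibration for activations and per-channel min-max for weights,
# instead of a histogram-based observer that can hit the long-unpacking
# overflow on tensors with extreme min/max values.
qconfig = QConfig(
    activation=MinMaxObserver.with_args(dtype=torch.quint8, qscheme=torch.per_tensor_affine),
    weight=PerChannelMinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_channel_symmetric),
)

# Hypothetical calibration flow (names assumed, not taken from this commit):
# import intel_extension_for_pytorch as ipex
# prepared = ipex.quantization.prepare(model, qconfig, example_inputs=example_input)
# for batch in calibration_loader:
#     prepared(batch)
# quantized = ipex.quantization.convert(prepared)
```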

intel_extension_for_pytorch/cpu/runtime/multi_stream.py

Lines changed: 18 additions & 17 deletions
```diff
@@ -8,24 +8,25 @@
 import warnings
 
 class MultiStreamModuleHint(object):
+    r"""
+    MultiStreamModuleHint is a hint to MultiStreamModule about how to split the inputs
+    or concat the output. Each argument should be None, with type of int or a container
+    which containes int or None such as: (0, None, ...) or [0, None, ...]. If the argument
+    is None, it means this argument will not be split or concat. If the argument is with
+    type int, its value means along which dim this argument will be split or concat.
+
+    Args:
+        *args: Variable length argument list.
+        **kwargs: Arbitrary keyword arguments.
+
+    Returns:
+        intel_extension_for_pytorch.cpu.runtime.MultiStreamModuleHint: Generated
+        intel_extension_for_pytorch.cpu.runtime.MultiStreamModuleHint object.
+
+    :meta public:
+    """
+
     def __init__(self, *args, **kwargs):
-        r"""
-        MultiStreamModuleHint is a hint to MultiStreamModule about how to split the inputs
-        or concat the output. Each argument should be None, with type of int or a container
-        which containes int or None such as: (0, None, ...) or [0, None, ...]. If the argument
-        is None, it means this argument will not be split or concat. If the argument is with
-        type int, its value means along which dim this argument will be split or concat.
-
-        Args:
-            *args: Variable length argument list.
-            **kwargs: Arbitrary keyword arguments.
-
-        Returns:
-            intel_extension_for_pytorch.cpu.runtime.MultiStreamModuleHint: Generated
-            intel_extension_for_pytorch.cpu.runtime.MultiStreamModuleHint object.
-
-        :meta public:
-        """
         self.args = list(args)
         self.kwargs = kwargs
         self.args_len = args.__len__()
```
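To make the relocated docstring concrete, here is a minimal sketch of constructing hints with the semantics it describes (an int selects the split/concat dim, None leaves the argument untouched). The commented `MultiStreamModule` wiring uses parameter names that are assumptions, not part of this diff.

```python
import intel_extension_for_pytorch as ipex

# One hint entry per forward() argument of the wrapped module:
#   0    -> split (inputs) or concat (outputs) this argument along dim 0
#   None -> pass the argument through to every stream unchanged
input_hint = ipex.cpu.runtime.MultiStreamModuleHint(0, None)
output_hint = ipex.cpu.runtime.MultiStreamModuleHint(0)

# Hypothetical wiring into the multi-stream runtime (keyword names assumed):
# multi_stream_model = ipex.cpu.runtime.MultiStreamModule(
#     traced_model,
#     num_streams=2,
#     input_split_hint=input_hint,
#     output_concat_hint=output_hint,
# )
```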

setup.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -74,7 +74,7 @@
 #TORCH_VERSION = '1.13.0'
 #TORCH_VERSION = os.getenv('TORCH_VERSION', TORCH_VERSION)
 
-TORCH_IPEX_VERSION = '1.12.0+cpu'
+TORCH_IPEX_VERSION = '1.12.100+cpu'
 PYTHON_VERSION = sys.version_info
 
 package_name = "intel_extension_for_pytorch"
```
