Commit bf0ea26

Merge pull request #35 from IntelPython/samaid-patch-1
Update README.md
2 parents 2780370 + 4e87f9b commit bf0ea26

File tree: 1 file changed

README.md — 40 additions & 0 deletions
@@ -22,3 +22,43 @@ The following command will run the very first example of using **Data Parallel Extensions for Python**:
```
python ./examples/01-hello_dpnp.py
```
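The contents of `01-hello_dpnp.py` are not reproduced in this diff. As a rough, hypothetical sketch of the kind of code such an example exercises, note that `dpnp` mirrors the NumPy API; the snippet below uses vanilla NumPy so it runs without a SYCL device, and swapping the import for `import dpnp as np` would run the same code on a data-parallel device:

```python
# Hypothetical "hello" sketch, not the actual example file.
# dpnp mirrors the NumPy API, so the same code targets a device
# when the import is changed to: import dpnp as np
import numpy as np

x = np.arange(10_000, dtype=np.float64)  # array allocation
result = float(np.sum(np.sqrt(x)))       # reduction over the array
print(result)
```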

## Tutorials

Jupyter Notebook-based Getting Started tutorials are located in the `./notebooks` directory.

To run a tutorial, type at the command-line prompt:

```
jupyter notebook
```

This will print some information about the notebook server in your terminal, including the URL of the web application (by default, `http://localhost:8888`):

```
$ jupyter notebook
[I 08:58:24.417 NotebookApp] Serving notebooks from local directory: /Users/catherine
[I 08:58:24.417 NotebookApp] 0 active kernels
[I 08:58:24.417 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/
[I 08:58:24.417 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
```

It will then open your default web browser to this URL.

When the notebook opens in your browser, you will see the **Notebook Dashboard**, which shows a list of the notebooks, files, and subdirectories in the directory where the notebook server was started. Navigate to the notebook you are interested in and open it from the dashboard.

For more information, please refer to the [Jupyter documentation](https://docs.jupyter.org/en/latest/running.html).
## Benchmarks

Data Parallel Extensions for Python provide a set of benchmarks illustrating different aspects of writing performant code with Data Parallel Extensions for Python.
Each benchmark represents a real-life numerical problem or an important part (kernel) of a real-life application. Each application/kernel is implemented in several of the following variants (not necessarily all of them):

- Pure Python: typically the slowest; used only as a reference implementation
- `numpy`: the same application/kernel implemented using the NumPy library
- `dpnp`: the `numpy` implementation modified to run on a specific device. You can use `numpy` as a baseline when evaluating the `dpnp` implementation and its performance
- `numba @njit` array-style: the application/kernel implemented using NumPy and compiled with Numba. You can use `numpy` as a baseline when evaluating the `numba @njit` array-style implementation and its performance
- `numba @njit` direct loops (`prange`): the same application/kernel implemented with the Numba compiler using direct loops. Array-style programming is sometimes cumbersome and inefficient; direct-loop programming can produce more readable and more performant code. When evaluating the performance of the direct-loop implementation, it is useful to use the array-style Numba implementation as a baseline
- `numba-dpex @dpjit` array-style: the `numba @njit` array-style implementation modified to compile and run on a specific device. You can use the vanilla Numba implementation as a baseline when comparing `numba-dpex` implementation details and performance. You can also compare it against the `dpnp` implementation to see how much extra performance `numba-dpex` brings when compiling NumPy code for a given device
- `numba-dpex @dpjit` direct loops (`prange`): the `numba @njit` direct-loop implementation modified to compile and run on a specific device. You can use the vanilla Numba implementation as a baseline when comparing `numba-dpex` implementation details and performance. You can also compare it against the `dpnp` implementation to see how much extra performance `numba-dpex` brings when compiling NumPy code for a given device
- `numba-dpex @dpjit` kernel: kernel-style programming, close to the `@cuda.jit` programming model used in vanilla Numba
- `cupy`: a NumPy-like implementation using CuPy to run on CUDA-compatible devices
- `@cuda.jit`: a kernel-style Numba implementation to run on CUDA-compatible devices
- Native SYCL: most applications/kernels also have a DPC++ implementation, which can be used to compare the performance of the above implementations against DPC++-compiled code

For more details, please refer to the `dpbench` [documentation](https://github.com/IntelPython/dpbench/blob/main/README.md).
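The array-style versus direct-loop distinction above can be sketched on a toy `axpy` kernel (a hypothetical example, not taken from `dpbench`). Plain NumPy and Python loops are used so the snippet runs without Numba installed; in the Numba variants the loop function would be decorated with `@njit(parallel=True)` and iterate with `numba.prange`, while the array-style function would carry a plain `@njit`:

```python
import numpy as np

def axpy_array_style(a, x, y):
    # Array-style: one whole-array expression, as in the numpy
    # and numba @njit array-style variants.
    return a * x + y

def axpy_direct_loops(a, x, y):
    # Direct loops: an explicit element-wise kernel; with Numba this
    # loop would read `for i in numba.prange(x.size):`.
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = a * x[i] + y[i]
    return out

x = np.arange(5, dtype=np.float64)
y = np.ones(5)
print(axpy_array_style(2.0, x, y))   # [1. 3. 5. 7. 9.]
print(axpy_direct_loops(2.0, x, y))  # identical result
```

Both variants compute the same values; the benchmarks compare how each style performs once compiled for a given device.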
