
Commit eb729c5

Merge branch 'better_gpu_docs' into serialization_docs

2 parents: 2185687 + b7b0c67
File tree: 3 files changed (+10, -10 lines)

doc/sources/array_api.rst

Lines changed: 3 additions & 3 deletions
@@ -32,7 +32,7 @@ on GPU without moving the data from host to device.
 be :external+sklearn:doc:`enabled in scikit-learn <modules/array_api>`, which requires either changing
 global settings or using a ``config_context``, plus installing additional dependencies such as ``array-api-compat``.
 
-When passing array API inputs whose data is on a SyCL-enabled device (e.g. an Intel GPU), as
+When passing array API inputs whose data is on a SYCL-enabled device (e.g. an Intel GPU), as
 supported for example by `PyTorch <https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html>`__
 and |dpnp|, if array API support is enabled and the requested operation (e.g. call to ``.fit()`` / ``.predict()``
 on the estimator class being used) is :ref:`supported on device/GPU <sklearn_algorithms_gpu>`, computations

@@ -50,10 +50,10 @@ through options ``allow_sklearn_after_onedal`` (default is ``True``) and ``allow
 
 If array API is enabled for |sklearn| and the estimator being used has array API support on |sklearn| (which can be
 verified by attribute ``array_api_support`` from :obj:`sklearn.utils.get_tags`), then array API inputs whose data
-is allocated neither on CPU nor on a SyCL device will be forwarded directly to the unpatched methods from |sklearn|,
+is allocated neither on CPU nor on a SYCL device will be forwarded directly to the unpatched methods from |sklearn|,
 without using the accelerated versions from this library, regardless of option ``allow_sklearn_after_onedal``.
 
-While other array API inputs (e.g. torch arrays with data allocated on a non-SyCL device) might be supported
+While other array API inputs (e.g. torch arrays with data allocated on a non-SYCL device) might be supported
 by the |sklearnex| in cases where the same class from |sklearn| doesn't support array API, note that the data will
 be transferred to host if it isn't already, and the computations will happen on CPU.
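The dispatch behavior this hunk documents can be illustrated with a short, hedged sketch. It assumes the optional stack (``sklearnex``, ``dpnp``, ``array-api-compat``) is installed and a SYCL device is visible; ``patch_sklearn`` and ``array_api_dispatch`` are the documented entry points, while the estimator and data shapes are purely illustrative. The guard keeps the snippet runnable where the GPU stack is absent.

```python
# Illustrative sketch only: enable array API dispatch so that estimators
# patched by sklearnex can consume device arrays directly, as described in
# the diff above. Assumes sklearnex, dpnp and array-api-compat are installed
# and a SYCL-capable device is present; otherwise the except branch runs.
try:
    import dpnp
    from sklearnex import patch_sklearn

    patch_sklearn()  # swap in the accelerated estimator implementations

    import sklearn
    from sklearn.decomposition import PCA

    sklearn.set_config(array_api_dispatch=True)  # requires array-api-compat

    X = dpnp.random.rand(100, 4)        # allocated on the default SYCL device
    model = PCA(n_components=2).fit(X)  # computed on device, no host transfer
    print("fitted with input on device:", X.device)
except ImportError as exc:
    print("optional dependency not available:", exc)
```

Without ``array_api_dispatch``, the same call would still work, but per the text above the data would be moved to host (or routed through the SYCL queue) instead of being used in place.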

doc/sources/input-types.rst

Lines changed: 1 addition & 1 deletion
@@ -50,6 +50,6 @@ enabled the input is unsupported).
 - For SciPy CSR matrix / array, index arrays are always copied. Note that sparse matrices in formats other than CSR
   will be converted to CSR, which implies more than just data copying.
 - Heterogeneous NumPy array
-- If SyCL queue is provided for device without ``float64`` support but data are ``float64``, data are copied with reduced precision.
+- If a SYCL queue is provided for a device without ``float64`` support but data are ``float64``, data are copied with reduced precision.
 - If :ref:`Array API <array_api>` is not enabled then data from GPU devices are always copied to the host device and then the result table
   (for applicable methods) is copied to the source device.
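The reduced-precision copy mentioned in this hunk can be avoided by checking the device's ``float64`` capability up front. A hedged sketch follows: ``dpctl.SyclDevice`` and its ``has_aspect_fp64`` attribute are part of the dpctl API, but the explicit downcast and the surrounding flow are illustrative, and the guard keeps it runnable where no SYCL GPU (or dpctl itself) is available.

```python
# Illustrative sketch: probe whether a SYCL device supports float64 before
# handing it float64 data, since the library would otherwise copy the data
# at reduced precision implicitly. Guarded for machines without dpctl/GPU.
import numpy as np

X = np.ones((10, 2), dtype=np.float64)

try:
    import dpctl

    device = dpctl.SyclDevice("gpu")   # raises if no GPU is visible
    if not device.has_aspect_fp64:
        # make the precision loss explicit instead of relying on the copy
        X = X.astype(np.float32)
    print("dtype passed to the estimator:", X.dtype)
except Exception as exc:
    print("no usable SYCL GPU:", exc)
```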

doc/sources/oneapi-gpu.rst

Lines changed: 6 additions & 6 deletions
@@ -22,11 +22,11 @@ GPU support
 Overview
 --------
 
-|sklearnex| can execute computations on different devices (CPUs and GPUs, including integrated GPUs from laptops and desktops) supported by the SyCL framework.
+|sklearnex| can execute computations on different devices (CPUs and GPUs, including integrated GPUs from laptops and desktops) supported by the SYCL framework.
 
 The device used for computations can be easily controlled through the ``target_offload`` option in config contexts, which moves data to GPU if it's not already there - see :ref:`config_contexts` and the rest of this page for more details.
 
-For finer-grained controlled (e.g. operating on arrays that are already in a given device's memory), it can also interact with on-device :ref:`array API classes <array_api>` like |dpnp_array|, and with SyCL-related objects from package |dpctl| such as :obj:`dpctl.SyclQueue`.
+For finer-grained control (e.g. operating on arrays that are already in a given device's memory), it can also interact with on-device :ref:`array API classes <array_api>` like |dpnp_array|, and with SYCL-related objects from package |dpctl| such as :obj:`dpctl.SyclQueue`.
 
 .. note:: Not every operation from every estimator is supported on GPU - see the :ref:`GPU support table <sklearn_algorithms_gpu>` for more information. See also :ref:`verbose` to verify where computations are performed.

@@ -94,7 +94,7 @@ Target offload option
 
 Just like |sklearn|, the |sklearnex| can use configuration contexts and global options to modify how it interacts with different inputs - see :ref:`config_contexts` for details.
 
-In particular, the |sklearnex| allows an option ``target_offload`` which can be passed a SyCL device name like ``"gpu"`` indicating where the operations should be performed, moving the data to that device in the process if it's not already there; or a :obj:`dpctl.SyclQueue` object from an already-existing queue on a device.
+In particular, the |sklearnex| allows an option ``target_offload`` which can be passed a SYCL device name like ``"gpu"`` indicating where the operations should be performed, moving the data to that device in the process if it's not already there; or a :obj:`dpctl.SyclQueue` object from an already-existing queue on a device.
 
 .. hint:: If repeated operations are going to be performed on the same data (e.g. cross-validators, resamplers, missing data imputers, etc.), it's recommended to use the array API option instead - see the next section for details.

@@ -114,7 +114,7 @@ Example:
             model.fit(X, y)
             pred = model.predict(X)
 
-    .. tab:: Passing a SyCL queue
+    .. tab:: Passing a SYCL queue
         .. code-block:: python
 
             import dpctl

@@ -139,7 +139,7 @@ Example:
 GPU arrays through array API
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-As another option, computations can also be performed on data that is already on a SyCL device without moving it there if it belongs to an array API-compatible class, such as |dpnp_array| or `torch.tensor <https://docs.pytorch.org/docs/stable/tensors.html>`__.
+As another option, computations can also be performed on data that is already on a SYCL device without moving it there if it belongs to an array API-compatible class, such as |dpnp_array| or `torch.tensor <https://docs.pytorch.org/docs/stable/tensors.html>`__.
 
 This is particularly useful when multiple operations are performed on the same data (e.g. cross-validators, stacked ensembles, etc.), or when the data is meant to interact with other libraries besides the |sklearnex|. Be aware that it requires enabling array API support in |sklearn|, which comes with additional dependencies.

@@ -230,7 +230,7 @@ Example:
             model.fit(X, y)
 
-Note that, if array API had been enabled, the snippet above would use the data as-is on the device where it resides, but without array API, it implies data movements using the SyCL queue contained by those objects.
+Note that, if array API had been enabled, the snippet above would use the data as-is on the device where it resides, but without array API, it implies data movements using the SYCL queue contained by those objects.
 
 .. note::
    All the input data for an algorithm must reside on the same device.
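The ``target_offload`` mechanics documented in this file can be sketched end to end. This is a hedged illustration, not the canonical example from the docs: ``patch_sklearn`` and ``config_context(target_offload=...)`` are the documented sklearnex entry points (a ``dpctl.SyclQueue`` may be passed in place of the ``"gpu"`` string), while the estimator and data are arbitrary, and the guard lets the snippet run where sklearnex is not installed.

```python
# Illustrative sketch: offload a fit to GPU via target_offload, which moves
# host data to the device as described above. Assumes sklearnex is installed
# and a SYCL GPU is visible; otherwise the except branch reports the gap.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 3))
y = rng.random(50)

try:
    from sklearnex import config_context, patch_sklearn

    patch_sklearn()
    from sklearn.linear_model import LinearRegression

    # alternatively: target_offload=dpctl.SyclQueue("gpu")
    with config_context(target_offload="gpu"):
        model = LinearRegression().fit(X, y)  # data moved to GPU here
    print("coefficients:", model.coef_)
except Exception as exc:
    print("GPU offload unavailable:", exc)
```

Per the note above, all inputs to one call (here ``X`` and ``y``) must end up on the same device; ``target_offload`` handles that by moving both.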

0 commit comments