Commit 82475ed

Update docs/src/interfaces/Ensembles.md
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
1 parent: c43e36e

1 file changed: +6 −6 lines

docs/src/interfaces/Ensembles.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -45,12 +45,12 @@ EnsembleSplitThreads
 with hyperparallelism. It will automatically recompile your Julia functions to the GPU. A standard GPU sees
 a 5x performance increase over a 16 core Xeon CPU. However, there are limitations on what functions can
 auto-compile in this fashion, please see [DiffEqGPU for more details](https://docs.sciml.ai/DiffEqGPU/stable/)
-- `EnsembleGPUKernel()` - Requires installing and `using DiffEqGPU`. This uses a GPU for computing the ensemble
-  with hyperparallelism by building a custom GPU kernel. This can have drastically less overhead (for example,
-  achieving 15x accelerating against Jax and PyTorch, see
-  [this paper for more details](https://www.sciencedirect.com/science/article/abs/pii/S0045782523007156)) but
-  has limitations on what kinds of problems are compatible. See
-  [DiffEqGPU for more details](https://docs.sciml.ai/DiffEqGPU/stable/)
+- `EnsembleGPUKernel()` - Requires installing and `using DiffEqGPU`. This uses a GPU for computing the ensemble
+  with hyperparallelism by building a custom GPU kernel. This can have drastically less overhead (for example,
+  achieving 15x accelerating against Jax and PyTorch, see
+  [this paper for more details](https://www.sciencedirect.com/science/article/abs/pii/S0045782523007156)) but
+  has limitations on what kinds of problems are compatible. See
+  [DiffEqGPU for more details](https://docs.sciml.ai/DiffEqGPU/stable/)
 
 ### Choosing an Ensembler
 
```
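The `EnsembleGPUKernel()` usage described in the changed lines can be sketched as follows. This is a minimal sketch, not part of the commit: it assumes a CUDA-capable GPU with CUDA.jl and DiffEqGPU.jl installed, and the Lorenz system and parameter sweep are illustrative choices, not anything specified by the docs change.

```julia
using OrdinaryDiffEq, DiffEqGPU, CUDA, StaticArrays

# Out-of-place ODE function over a StaticArray state: the form
# EnsembleGPUKernel's kernel compiler expects (no mutation, isbits types).
function lorenz(u, p, t)
    σ, ρ, β = p
    du1 = σ * (u[2] - u[1])
    du2 = u[1] * (ρ - u[3]) - u[2]
    du3 = u[1] * u[2] - β * u[3]
    return SVector{3}(du1, du2, du3)
end

u0 = @SVector [1.0f0, 0.0f0, 0.0f0]
p = @SVector [10.0f0, 28.0f0, 8.0f0 / 3.0f0]
prob = ODEProblem{false}(lorenz, u0, (0.0f0, 10.0f0), p)

# Vary one parameter per trajectory (hypothetical sweep for illustration).
prob_func = (prob, i, repeat) ->
    remake(prob; p = @SVector([10.0f0, Float32(20 + i), 8.0f0 / 3.0f0]))
eprob = EnsembleProblem(prob; prob_func = prob_func, safetycopy = false)

# Solve the whole ensemble inside one custom GPU kernel.
sol = solve(eprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend());
            trajectories = 10_000)
```

The low overhead mentioned in the diff comes from fusing all trajectories into a single kernel launch, which is also why the compatible problem forms are restricted (out-of-place functions, `Float32`/`SVector` data); see the linked DiffEqGPU documentation for the exact requirements.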
