Commit 4dc09da

minor fix on docs
1 parent: 100ce91

4 files changed (+13, -33 lines)

docs/source/advance.rst

Lines changed: 9 additions & 0 deletions
@@ -485,6 +485,15 @@ Here's an example studying entanglement asymmetry in tilted ferromagnet states:
 Randoms, Jit, Backend Agnostic, and Their Interplay
 --------------------------------------------------------

+This section explains how random number generation interacts with JIT compilation and backend agnosticism in TensorCircuit. Understanding this interplay is crucial for reproducible and correct simulation results, especially when using JAX.
+
+**Key Management for Reproducibility:**
+In JAX, random number generation is deterministic and relies on explicit "keys" that manage the random state. This differs from TensorFlow or NumPy, where random states are often managed implicitly. For reproducible results and correct JIT compilation, JAX requires these keys to be passed and split explicitly.
+
+**Why Explicit Key Management?**
+When a JIT-compiled function is called multiple times with the same inputs, JAX aims to produce the same output. If random numbers were generated implicitly within a JIT-compiled function, subsequent calls would produce the same "random" numbers, which is often not the desired behavior for simulations requiring fresh randomness across runs.
+Explicit key management ensures that each call to a random function, even within JIT, uses a new, distinct random state derived from a split key, thus maintaining the desired randomness and reproducibility.
+
 .. code-block:: python

     import tensorcircuit as tc

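The paragraphs added to advance.rst above describe the explicit-key pattern in prose, and the accompanying code block is truncated in this diff (only ``import tensorcircuit as tc`` is visible). As a rough illustration of the pattern being described, here is a minimal sketch in plain JAX; the function name ``noisy_expectation`` and the toy observable are hypothetical and not part of TensorCircuit's documented API.

.. code-block:: python

    import jax
    import jax.numpy as jnp

    @jax.jit
    def noisy_expectation(key, theta):
        # Split the incoming key: ``subkey`` drives this call's randomness,
        # while the fresh ``key`` is returned so the caller can thread it onward.
        key, subkey = jax.random.split(key)
        noise = jax.random.normal(subkey, shape=())
        return jnp.cos(theta) + 0.01 * noise, key

    key = jax.random.PRNGKey(42)               # explicit seed -> reproducible script
    val1, key = noisy_expectation(key, 0.3)
    val2, key = noisy_expectation(key, 0.3)    # key has changed, so the noise draw differs

Because the key is threaded through explicitly rather than stored in hidden global state, the two calls above return different noise even though the function is jitted, while rerunning the whole script with ``PRNGKey(42)`` reproduces both values exactly.
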
docs/source/conf.py

Lines changed: 2 additions & 2 deletions
@@ -20,8 +20,8 @@

 # -- Project information -----------------------------------------------------

-project = "tensorcircuit"
-copyright = "2020, TensorCircuit Development Team. Created by Shi-Xin Zhang"
+project = "tensorcircuit-ng"
+copyright = "2020, TensorCircuit Development Team. Created by Shi-Xin Zhang."
 author = "refraction-ray"

 # The short X.Y version

docs/source/faq.rst

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@ Frequently Asked Questions
 What is the relation between TensorCircuit and TensorCircuit-NG?
 -------------------------------------------------------------------

-Both packages are created by `Shi-Xin Zhang <https://www.iop.cas.cn/rcjy/tpyjy/?id=6789>`_ (`@refraction-ray <https://github.com/refraction-ray>`_). For the history of the evolution of tensorcircuit, please refer to `history <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/HISTORY.md>`_.
+Both packages are created by `Shi-Xin Zhang <https://www.iop.cas.cn/rcjy/tpyjy/?id=6789>`_ (`@refraction-ray <https://github.com/refraction-ray>`_). For the history of the evolution of TensorCircuit-NG, please refer to `history <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/HISTORY.md>`_.

 From users' perspective, TensorCircuit-NG maintains full compatibility with the TensorCircuit API, enhancing it with additional features and critical bug fixes. Only TensorCircuit-NG is kept up-to-date with the fast evolving scientific computing and machine learning ecosystem in Python.
@@ -41,7 +41,7 @@ How can I use multiple GPUs?
 For different observables evaluation on different cards, see `example <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/examples/vqe_parallel_pmap.py>`_.

 For distributed simulation of one circuit on multiple cards, see `example for expectation <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/examples/slicing_auto_pmap_vqa.py>`_ and `example for MPO <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/examples/slicing_auto_pmap_mpo.py>`_.
-We also introduce a new interface for the multi-GPU tensornetwork contraction, see `example for VQE <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/examples/distributed_interface_vqe.py>`_ and `example for ammplitude <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/examples/distributed_interface_amplitude.py>`_.
+We also introduce a new interface for the multi-GPU tensornetwork contraction, see `example for VQE <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/examples/distributed_interface_vqe.py>`_ and `example for amplitude <https://github.com/tensorcircuit/tensorcircuit-ng/blob/master/examples/distributed_interface_amplitude.py>`_.


 When should I jit the function?

docs/source/modules.rst.backup

Lines changed: 0 additions & 29 deletions
This file was deleted.
