Commit c800eb1

Merge branch 'master' into beta
2 parents: 9537a09 + 18ff63a

File tree

6 files changed: +720 −59 lines


CHANGELOG.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@

 ### Added

-- Add `tc.about()` to print related software versions and configs.
+- Add `tc.about()` to print related software versions and configs

 - Torch support is upgraded to 2.0, and now supports native vmap and native functional grad, and thus `vvag`. Still, jit support conflicts with these functional transformations and is turned off by default
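As a sketch of what these two entries enable, assuming the pytorch backend and the `tc.about()` and `vvag` APIs named above (the toy circuit and `loss` function below are invented for illustration; check the release for exact behavior):

```python
import tensorcircuit as tc

tc.about()  # print related software versions and configs

tc.set_backend("pytorch")  # torch 2.0: native vmap and functional grad

def loss(param):
    # toy two-qubit circuit, purely illustrative
    c = tc.Circuit(2)
    c.rx(0, theta=param[0])
    c.rx(1, theta=param[1])
    return tc.backend.real(c.expectation_ps(z=[0]))

# vvag: vectorized value-and-grad over a batch of parameter vectors
vvag_loss = tc.backend.vvag(loss, argnums=0, vectorized_argnums=0)
params = tc.backend.ones([4, 2])  # batch of 4 parameter vectors
values, grads = vvag_loss(params)  # values: shape (4,); grads: shape (4, 2)
```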

docs/source/sharpbits.rst

Lines changed: 47 additions & 0 deletions
@@ -141,6 +141,53 @@ Similarly, conditional gate application must be taken carefully.
    # <tf.Tensor: shape=(2,), dtype=complex64, numpy=array([0.99999994+0.j, 0. +0.j], dtype=complex64)>

Tensor variables consistency
-------------------------------------------------------

All tensor variables' backend (tf vs. jax vs. ...), dtype (float vs. complex), shape, and device (cpu vs. gpu) must be compatible/consistent.

Inspect the backend, dtype, shape, and device using the following code.

.. code-block:: python

    import tensorcircuit as tc

    for backend in ["numpy", "tensorflow", "jax", "pytorch"]:
        with tc.runtime_backend(backend):
            a = tc.backend.ones([2, 3])
            print("tensor backend:", tc.interfaces.which_backend(a))
            print("tensor dtype:", tc.backend.dtype(a))
            print("tensor shape:", tc.backend.shape_tuple(a))
            print("tensor device:", tc.backend.device(a))

If the backend is inconsistent, one can convert the tensor backend via :py:meth:`tensorcircuit.interfaces.tensortrans.general_args_to_backend`.

.. code-block:: python

    for backend in ["numpy", "tensorflow", "jax", "pytorch"]:
        with tc.runtime_backend(backend):
            a = tc.backend.ones([2, 3])
            print("tensor backend:", tc.interfaces.which_backend(a))
            b = tc.interfaces.general_args_to_backend(a, target_backend="jax", enable_dlpack=False)
            print("tensor backend:", tc.interfaces.which_backend(b))

If the dtype is inconsistent, one can convert the tensor dtype using ``tc.backend.cast``.

.. code-block:: python

    for backend in ["numpy", "tensorflow", "jax", "pytorch"]:
        with tc.runtime_backend(backend):
            a = tc.backend.ones([2, 3])
            print("tensor dtype:", tc.backend.dtype(a))
            b = tc.backend.cast(a, dtype="float64")
            print("tensor dtype:", tc.backend.dtype(b))

Also note the jax gotcha on float64/complex128; see `jax gotcha <https://github.com/google/jax#current-gotchas>`_.
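For instance, a minimal sketch of opting in to 64-bit types on the jax backend (the ``jax_enable_x64`` flag is standard jax; treating ``tc.set_dtype`` as the global dtype switch is assumed here):

.. code-block:: python

    import jax

    # jax defaults to 32-bit; enable float64/complex128 before creating tensors
    jax.config.update("jax_enable_x64", True)

    import tensorcircuit as tc

    with tc.runtime_backend("jax"):
        tc.set_dtype("complex128")  # complex128 states, float64 real parameters
        a = tc.backend.cast(tc.backend.ones([2, 3]), dtype="float64")
        print("tensor dtype:", tc.backend.dtype(a))  # float64 only with x64 enabled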
If the shape is not consistent, one can reshape the tensor via ``tc.backend.reshape``.
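A one-line sketch, assuming the usual numpy-style ``reshape(tensor, shape)`` signature and continuing the session above:

.. code-block:: python

    a = tc.backend.ones([2, 3])
    b = tc.backend.reshape(a, [3, 2])  # same data, shape (2, 3) -> (3, 2)
    print("tensor shape:", tc.backend.shape_tuple(b))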
If the device is not consistent, one can move the tensor between devices via ``tc.backend.device_move``.
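A short sketch, under the assumption that ``device_move`` takes the tensor and a device name string such as ``"cpu"`` or ``"gpu:0"``:

.. code-block:: python

    a = tc.backend.ones([2, 3])
    b = tc.backend.device_move(a, "cpu")  # no-op if already on cpu
    print("tensor device:", tc.backend.device(b))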
AD Consistency
---------------------

docs/source/tutorial.rst

Lines changed: 2 additions & 1 deletion
@@ -19,4 +19,5 @@ Jupyter Tutorials
     tutorials/optimization_and_expressibility.ipynb
     tutorials/vqex_mbl.ipynb
     tutorials/dqas.ipynb
-    tutorials/barren_plateaus.ipynb
+    tutorials/barren_plateaus.ipynb
+    tutorials/qaoa_portfolio_optimization.ipynb

docs/source/tutorials/qaoa.ipynb

Lines changed: 67 additions & 56 deletions
Large diffs are not rendered by default.
