docs/source/advance.rst

Please refer to :py:meth:`tensorcircuit.templates.measurements.sparse_expectation`.
For different representations to evaluate Hamiltonian expectation in tensorcircuit, please refer to :doc:`tutorials/tfim_vqe_diffreph`.

Hamiltonian Matrix Building
----------------------------

TensorCircuit-NG provides multiple ways to build Hamiltonian matrices, especially for sparse Hamiltonians constructed from Pauli strings. This is crucial for quantum many-body physics simulations and variational quantum algorithms.

**Pauli String Based Construction:**

The most flexible way to build Hamiltonians is through Pauli strings:

.. code-block:: python

    import tensorcircuit as tc

    # Define Pauli strings and their weights
    # Each Pauli string is represented by a list of integers:
    # 0 for I, 1 for X, 2 for Y, 3 for Z
    # e.g. [1, 1, 0] encodes X0 X1 I2 on a three-qubit system
    pauli_structures = [[1, 1, 0], [3, 3, 0], [0, 3, 3]]
    weights = [1.0, 0.5, 0.5]

    # Build the Hamiltonian as a sparse (COO) matrix
    h = tc.quantum.PauliStringSum2COO(pauli_structures, weights)

MPO Conversion
----------------

TensorCircuit-NG supports conversion from different MPO (Matrix Product Operator) formats, particularly from the TensorNetwork and Quimb libraries. This is useful when you want to leverage existing MPO implementations or convert between different frameworks.

**TensorNetwork MPO:**

For TensorNetwork MPOs, you can convert predefined models like the Transverse Field Ising (TFI) model:

.. code-block:: python

    import numpy as np
    import tensorcircuit as tc
    import tensornetwork as tn

    # Create TFI Hamiltonian MPO from TensorNetwork
    nwires = 6
    Jx = np.array([1.0] * (nwires - 1))  # XX coupling strength
    Bz = np.array([-1.0] * nwires)  # transverse field strength

    # Create TensorNetwork MPO
    tn_mpo = tn.matrixproductstates.mpo.FiniteTFI(Jx, Bz, dtype=np.complex64)

    # Convert to TensorCircuit format
    tc_mpo = tc.quantum.tn2qop(tn_mpo)

    # Get dense matrix representation
    h_matrix = tc_mpo.eval_matrix()

Note: TensorNetwork MPO currently only supports open boundary conditions.

**Quimb MPO:**

Quimb provides more flexible MPO construction options:

Parameterized quantum circuits can run in a blink. Always use jit if the circuit will be evaluated multiple times: it can greatly boost the simulation with a two to three orders of magnitude time reduction. But also be cautious: users need to be familiar with jit, otherwise the jitted function may return unexpected results or recompile on every call (wasting lots of time).
To learn more about the jit mechanism, one can refer to documentation or blogs on ``tf.function`` or ``jax.jit``, though these two still have subtle differences.

Just-In-Time (JIT) compilation significantly accelerates quantum circuit simulation by optimizing the computation graph. Key points:

* Use JIT for functions that will be called multiple times
* JIT compilation has some overhead, so it is most beneficial for repeated executions
* Ensure input shapes and types are consistent to avoid recompilation
* The inputs and outputs of a jitted function should all be tensors, except for static arguments

Inputs, parameters, measurements, circuit structures, and Monte Carlo noise can all be evaluated in parallel.
To learn more about the vmap mechanism, one can refer to documentation or blogs on ``tf.vectorized_map`` or ``jax.vmap``.

Vectorized mapping (vmap) enables parallel evaluation across multiple inputs or parameters:

* Batch processing of quantum circuit input wavefunctions
* Batch processing of quantum circuit structures
* Parallel parameter optimization
* Efficient Monte Carlo sampling for noise simulation
* Vectorized measurement operations

Example of vmap for parallel circuit evaluation:

.. code-block:: python

    import tensorcircuit as tc

    K = tc.set_backend("tensorflow")  # any supported backend works

    # Define a parameterized circuit
    def param_circuit(params):
        c = tc.Circuit(2)
        c.rx(0, theta=params[0])
        c.ry(1, theta=params[1])
        return K.real(c.expectation([tc.gates.z(), [0]]))

    # Create a batch of parameters
    batch_params = K.ones([10, 2])

    # Vectorize the circuit evaluation
    vmap_circuit = K.vmap(param_circuit)
    results = vmap_circuit(batch_params)

For more advanced usage patterns and detailed examples of vmap, refer to our `vmap tutorial <https://tensorcircuit-ng.readthedocs.io/en/latest/whitepaper/6-3-vmap.html>`_.

Backend Agnosticism
----------------------

Switch the Dtype
--------------------

TensorCircuit-NG supports simulation using 32/64-bit precision. The default dtype is 32-bit, i.e. "complex64".
Change this by ``tc.set_dtype("complex128")``.

``tc.dtypestr`` always returns the current dtype string: either "complex64" or "complex128". Accordingly, ``tc.rdtypestr`` always returns the current real dtype string: either "float32" or "float64".

Setup the Contractor
----------------------

We also provide a wrapper of the quantum function as a Keras layer:

.. code-block:: python

    # (layer construction and input setup elided)
    l = layer(v)
    grad = tape.gradient(l, layer.trainable_variables)

**JAX interfaces:**

TensorCircuit-NG also newly introduces a JAX interface to seamlessly integrate with JAX's ecosystem.
This allows you to use JAX's powerful features like automatic differentiation, JIT compilation, and vectorization with quantum circuits or functions running on any backend.

docs/source/sharpbits.rst

AD Consistency
---------------------

Gradients in terms of complex dtypes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The TF and JAX backends manage the differentiation rules differently for complex-valued functions (actually up to a complex conjugate). See the discussion in this `tensorflow issue <https://github.com/tensorflow/tensorflow/issues/3348>`_.

In TensorCircuit-NG, we currently make this difference in AD transparent: when switching backends, the AD behavior and results for complex-valued functions can differ and are determined by the native behavior of the corresponding backend framework.

Vmap (vectorized map) outside a grad-like function may cause incorrect results on the TensorFlow backend due to a long-standing `bug <https://github.com/tensorflow/tensorflow/issues/52148>`_ in the TensorFlow codebase. So it is better to always stick to the first-vmap-then-differentiate paradigm.

Grad over vmap function
~~~~~~~~~~~~~~~~~~~~~~~~~

A related issue is the different behavior of ``K.grad(K.vmap(f))`` on different backends. On the TensorFlow backend, the function to be differentiated has a scalar output which is the sum of all outputs of the vectorized function.

However, on the JAX backend, such a function simply raises an error, as only scalar-output functions can be differentiated; no implicit sum over the vectorized ``f`` is assumed. For non-scalar outputs, one should use ``jacrev`` or ``jacfwd`` to get the gradient information.

Specifically, ``K.grad(K.vmap(f))`` on the TensorFlow backend is equivalent to ``K.grad(K.append(K.vmap(f), K.sum))`` on the JAX backend.
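In plain JAX terms, the two explicit alternatives on the JAX side look like (a generic sketch, not TensorCircuit-specific):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(x ** 2)  # scalar output per sample

vf = jax.vmap(f)  # batched version: vector output over the batch

# jax.grad(vf) would raise, since vf is not scalar-valued;
# either take per-sample Jacobians ...
per_sample_grads = jax.jacrev(vf)(jnp.ones([3, 2]))

# ... or sum the batched outputs first (the TF-backend convention)
summed_grad = jax.grad(lambda x: jnp.sum(vf(x)))(jnp.ones([3, 2]))
```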