When performing the ITE scheme, the TN virtual bond dimension increases. Therefore, after every few ITE iterations, we need to truncate the bond dimensions so that the number of parameters in the tensor network state stays bounded. The truncation step is implemented via a [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition) step. A full step-by-step illustrated description of the Simple Update algorithm (which is based on the ITE scheme) is depicted below.
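As a rough illustration of the truncation step (a minimal NumPy sketch, not the package's implementation; `theta` and `d_max` are assumed names for the matricized two-site tensor and the maximal bond dimension):

```python
import numpy as np

def svd_truncate(theta, d_max):
    """Split a matricized two-site tensor, keeping the d_max largest singular values."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    d = min(d_max, len(s))
    # The kept singular values play the role of the new bond weights.
    return u[:, :d], s[:d], vh[:d, :]

# Example: truncate a random 8x8 "theta" back to bond dimension 4.
rng = np.random.default_rng(0)
theta = rng.standard_normal((8, 8))
u, s, vh = svd_truncate(theta, 4)
print(u.shape, s.shape, vh.shape)  # (8, 4) (4,) (4, 8)
```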
| 1 |`tensor_network.py`| a Tensor Network class object which tracks the tensors, weights, and their connectivity|
| 2 |`simple_update.py`| a Tensor Network Simple-Update algorithm class, which takes a `TensorNetwork` object as input and performs a simple-update run on it using Imaginary Time Evolution. |
| 3 | `structure_matrix_constructor.py` | Contains a dictionary of common iPEPS structure matrices, as well as functionality for constructing the structure matrices of 2D square and rectangular lattices (**still in progress**).|
| 4 |`examples.py`| A few scripts for loading a tensor network state from memory and a full Antiferromagnetic Heisenberg model PEPS experiment.|
| 5 |`ncon.py`| A module for tensor contraction in Python, copied from the [ncon](https://github.com/mhauru/ncon) GitHub repository.|
| 6 |`utils.py`| A general utility module.|
and run the algorithm

```python
star_su.run()
```
It is also possible to compute single- and double-site expectation values, such as energy and magnetization, with the following
### Example 2: The Trivial Simple-Update Algorithm
The trivial SU algorithm is equivalent to the SU algorithm without the ITE and truncation steps; it consists only of consecutive SVD steps over each TN edge (the same as contracting the ITE gate with a zero time-step). The trivial-SU algorithm's fixed point corresponds to a canonical representation of the tensor network we started with. A tensor network canonical representation is strongly related to the Schmidt Decomposition operation over all the tensor network's edges: for a tensor network with no loops (tree-like topology), each weight vector in the canonical representation corresponds to the Schmidt values of partitioning the network into two distinct networks along that edge. When the given tensor network has loops in it, it is no longer possible to partition the network along a single edge into two distinct parts. Therefore, the weight vectors are no longer equal to the Schmidt values but rather become a general approximation of the tensors' environments in the network. A very interesting property of the trivial simple update algorithm is that it is identical to the [Belief Propagation (BP)](https://en.wikipedia.org/wiki/Belief_propagation) algorithm, a famous iterative message-passing algorithm in the world of Probabilistic Graphical Models (PGM), where it is used as an approximate inference tool. For a detailed description of the duality between the trivial-Simple-Update and Belief Propagation algorithms, see Refs [3][4].
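To make the connection to the Schmidt decomposition concrete, here is a minimal NumPy sketch (not part of the package) that computes the Schmidt values of a two-site state across its single edge by reshaping the state into a matrix and taking its singular values:

```python
import numpy as np

def schmidt_values(psi, dim_left):
    """Schmidt values of a pure state across a bipartition with left dimension dim_left."""
    mat = psi.reshape(dim_left, -1)
    return np.linalg.svd(mat, compute_uv=False)

# Singlet state (|01> - |10>)/sqrt(2): both Schmidt values equal 1/sqrt(2),
# signaling maximal entanglement across the edge.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(schmidt_values(singlet, 2))  # both values ≈ 1/sqrt(2) ≈ 0.7071
```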
In order to implement the trivial-SU algorithm, we can initialize the simple update class with a zero time step as follows
then, the algorithm will run 1000 iterations or until the maximal L2 distance between consecutive weight vectors drops below 1e-6.
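This stopping rule can be sketched as follows (a hypothetical helper, assuming the weight vectors are stored as NumPy arrays; not the package's actual code):

```python
import numpy as np

def converged(old_weights, new_weights, tol=1e-6):
    """True when the maximal L2 distance between consecutive weight vectors is below tol."""
    return max(np.linalg.norm(o - n) for o, n in zip(old_weights, new_weights)) < tol

# Two nearly identical sets of weight vectors -> converged.
w_old = [np.array([0.8, 0.6]), np.array([0.7, 0.7])]
w_new = [np.array([0.8, 0.6]), np.array([0.7, 0.7000001])]
print(converged(w_old, w_new))  # True
```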
There are more fully-written examples in the [`notebooks`](/notebooks) folder.
### List of Notebooks
The notebooks below are not part of the package; they can be found in the `tnsu` GitHub repository under `/notebooks`. You can run them locally with Jupyter Notebook or in Google Colab (which is preferable in case you don't want to burn your laptop's motherboard :) )
## Simulations
### Spin-1/2 Antiferromagnetic Heisenberg (AFH) model
Below are some results of ground-state energy per-site simulated with the Simple Update algorithm over AFH Chain, Star, PEPS, and Cube tensor networks. The AFH Hamiltonian is given by
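The AFH Hamiltonian is built from two-site interaction terms S_i·S_j. As a quick NumPy check (not part of the package), the two-spin term can be written down explicitly, and its lowest eigenvalue recovers the well-known singlet energy of -3/4:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

# Two-site Heisenberg term S_i . S_j acting on a pair of spins.
h_ij = sum(np.kron(s, s) for s in (sx, sy, sz))

# Its lowest eigenvalue is the singlet energy.
print(np.linalg.eigvalsh(h_ij).min())  # ≈ -0.75
```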
In the plots below, one can see the simulated x and z magnetization (per-site) along with the simulated energy (per-site). We see that the SU algorithm is able to extract the phase transition of the model around h=3.2.
Notice that for a 0-radian angle, this model coincides with the original AFH model. The energy, magnetization, and Q-norm as a function of the angle for different bond dimensions are plotted below. We can see that the simple-update algorithm has a hard time tracing all the phase transitions of this model. However, we notice that for larger bond dimensions, it seems to capture the general behavior of the model's phase transitions. For a comprehensive explanation and results for the triangular lattice, see Ref [2].