
Commit ab22fc6: Update README.md
1 parent db4e38c

File tree: 1 file changed (+24, -23 lines)

README.md: 24 additions & 23 deletions
@@ -25,7 +25,7 @@ where finally,

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/ITE_local_gate.png?raw=true" width="200" height="">

- When performing the ITE scheme, the TN virtual bond dimension increases, therefore, after every few ITE iterations we need to truncate the bond dimensions so the number of parameters in the tensor network state would stay bounded. The truncation step is implemented via a [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition) step. A full step-by-step illustrated description of the Simple Update algorithm (which is based on the ITE scheme) is depicted below.
+ When performing the ITE scheme, the TN virtual bond dimension increases. Therefore, after every few ITE iterations, we need to truncate the bond dimensions so that the number of parameters in the tensor network state stays bounded. The truncation step is implemented via a [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition) step. A full step-by-step illustrated description of the Simple Update algorithm (which is based on the ITE scheme) is depicted below.

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/simple_update_algorithm.png?raw=true" width="1000" height="">

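As a side note on the SVD-based truncation step described above, a minimal numpy sketch of truncating a bond to a maximal dimension could look like the following. This is hypothetical illustration code, not taken from the package; the function name and shapes are assumptions.

```python
import numpy as np

def truncate_bond(theta, d_max):
    """Truncate a two-site block `theta` to bond dimension at most `d_max`
    via SVD, returning the truncated factors and the renormalized
    singular-value (weight) vector."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    k = min(d_max, len(s))
    weights = s[:k] / np.linalg.norm(s[:k])  # keep the k largest values, renormalize
    return u[:, :k], weights, vh[:k, :]

# Example: truncate a 6x6 two-site block down to bond dimension 3
rng = np.random.default_rng(0)
u, weights, vh = truncate_bond(rng.random((6, 6)), 3)
```

Keeping only the largest singular values is what bounds the number of parameters between ITE iterations.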
@@ -38,11 +38,11 @@ The [`src.tnsu`](/src/tnsu) folder contains the source code for this project

| # | file | Subject |
|:----:|------------------------------------------------|:-----------------:|
- | 1 | `tensor_network.py` | a Tensor Network class object which tracks the tensors, weights and their connectivity|
- | 2 | `simple_update.py` | a Tensor Network Simple-Update algorithm class, which gets as an input a `TensorNetwork` object and perform a simple-update run on it using Imaginary Time Evolution. |
+ | 1 | `tensor_network.py` | a Tensor Network class object which tracks the tensors, weights, and their connectivity|
+ | 2 | `simple_update.py` | a Tensor Network Simple-Update algorithm class, which gets a `TensorNetwork` object as input and performs a simple-update run on it using Imaginary Time Evolution. |
| 3 | `structure_matrix_constructor.py` | Contains a dictionary of common iPEPS structure matrices, as well as functionality for constructing structure matrices of 2D square and rectangular lattices (**still in progress**).|
| 4 | `examples.py` | A few scripts for loading a tensor network state from memory and a full Antiferromagnetic Heisenberg model PEPS experiment.|
- | 5 | `ncon.py` | A module for tensors contraction in python copied from the [ncon](https://github.com/mhauru/ncon) github repository.|
+ | 5 | `ncon.py` | A module for tensor contraction in Python, copied from the [ncon](https://github.com/mhauru/ncon) GitHub repository.|
| 6 | `utils.py` | A general utility module.|

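To illustrate the kind of operation `ncon.py` provides: in ncon's index convention, negative integers label open indices and positive integers label contracted ones, so `ncon([a, b], [[-1, 1], [1, -2]])` is an ordinary matrix product. The example below is a plain-numpy sketch of that same contraction, not package code.

```python
import numpy as np

# Contract a_ij b_jk over the shared index j; this matches what ncon
# computes for the index lists [[-1, 1], [1, -2]].
a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
result = np.einsum('ij,jk->ik', a, b)
```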
@@ -118,13 +118,13 @@ and run the algorithm

star_su.run()
```

- It is also possible to compute a single and double site expectation values like energy, magnetizatoin etc, with the following
+ It is also possible to compute single- and double-site expectation values, such as energy and magnetization, with the following
```python
energy_per_site = star_su.energy_per_site()
z_magnetization_per_site = star_su.expectation_per_site(operator=pauli_z / 2)
```

- or manually calculating single and double site reduced-density matrices and expectations following the next few lines of code
+ or manually calculate single- and double-site reduced-density matrices and expectations with the next few lines of code
```python
tensor = 0
edge = 1
@@ -136,9 +136,9 @@ star_su.tensor_pair_expectation(common_edge=edge, operator=tensor_pair_operator)

```

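As a side note to the manual reduced-density-matrix computation mentioned above, a single-site expectation value is just Tr(rho O). A minimal hypothetical numpy sketch (not the package's API):

```python
import numpy as np

# A spin-1/2 reduced density matrix polarized along +z, i.e. |0><0|,
# and the Pauli-Z operator; the expectation value is Tr(rho @ O).
rho = np.array([[1.0, 0.0],
                [0.0, 0.0]])
pauli_z = np.array([[1.0, 0.0],
                    [0.0, -1.0]])
expectation = np.trace(rho @ pauli_z)  # -> 1.0
```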
### Example 2: The Trivial Simple-Update Algorithm
- The trivial SU algorithm is equivalent to the SU algorithm without the ITE and truncation steps; it only consists of consecutive SVD steps over each TN edge (the same as contracting ITE gate with zero time-step). The trivial-SU algorithm's fixed point corresponds to a canonical representation of the tensor network representations we started with. A tensor network canonical representation is strongly related to the Schmidt Decomposition operation over all the tensor network's edges, where for a tensor networks with no loops (tree-like topology) each weight vector in the canonical representation corresponds to the Schmidt values of partitioning the network into two distinct networks along that edge. When the given tensor network has loops in it, it is no longer possible to partition the network along a single edge into to distinguished parts. Therefore, the weight vectors are no longer equal to the Schmidt values but rather become some general approximation of the tensors' environments in the network. A very interesting property of the trivial simple update algorithm is that it is identical to the [Belief Propagation (BP)](https://en.wikipedia.org/wiki/Belief_propagation) algorithm. The Belief Propagation (BP) algorithm is a famous iterative-message-passing algorithm in the world of Probabilistic Graphical Models (PGM), where it is used as an approximated inference tool. For a detailed description about the duality between the trivial-Simple-Update and the Belief Propagation algorithm see Refs [3][4].
+ The trivial SU algorithm is equivalent to the SU algorithm without the ITE and truncation steps; it only consists of consecutive SVD steps over each TN edge (the same as contracting the ITE gate with zero time-step). The trivial-SU algorithm's fixed point corresponds to a canonical representation of the tensor network we started with. A tensor network canonical representation is strongly related to the Schmidt Decomposition operation over all the tensor network's edges, where for a tensor network with no loops (tree-like topology), each weight vector in the canonical representation corresponds to the Schmidt values of partitioning the network into two distinct networks along that edge. When the given tensor network has loops in it, it is no longer possible to partition the network along a single edge into two distinct parts. Therefore, the weight vectors are no longer equal to the Schmidt values but rather become some general approximation of the tensors' environments in the network. A very interesting property of the trivial simple update algorithm is that it is identical to the [Belief Propagation (BP)](https://en.wikipedia.org/wiki/Belief_propagation) algorithm. The Belief Propagation (BP) algorithm is a famous iterative message-passing algorithm in the world of Probabilistic Graphical Models (PGM), which is used as an approximate inference tool. For a detailed description of the duality between the trivial-Simple-Update and the Belief Propagation algorithm, see Refs [3][4].

- In order to implement the trivial-SU algorithm we can initialize the simple update class with zero time step as follows
+ In order to implement the trivial-SU algorithm, we can initialize the simple update class with a zero time step as follows
```python
su.SimpleUpdate(tensor_network=tensornet,
                dts=[0],
@@ -153,13 +153,13 @@ su.SimpleUpdate(tensor_network=tensornet,
                log_energy=False,
                print_process=False)
```
- then, the algorithm will run 1000 iteration or until the maximal L2 distance between temporal consecutive weight vectors will be smaller then 1e-6.
+ Then, the algorithm will run 1000 iterations or until the maximal L2 distance between temporally consecutive weight vectors is smaller than 1e-6.

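The stopping rule described in this hunk (run until the maximal L2 distance between consecutive weight vectors drops below 1e-6) can be sketched as follows. The function name and data layout are hypothetical, not the package's API:

```python
import numpy as np

def has_converged(old_weights, new_weights, tol=1e-6):
    """True when the maximal L2 distance between corresponding weight
    vectors from two consecutive iterations is below `tol`."""
    max_dist = max(np.linalg.norm(o - n)
                   for o, n in zip(old_weights, new_weights))
    return max_dist < tol

# Example: two nearly identical sets of per-edge weight vectors
old = [np.array([0.8, 0.6]), np.array([1.0, 0.0])]
new = [np.array([0.8, 0.6 + 1e-9]), np.array([1.0, 0.0])]
```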
There are more fully-written examples in the [`notebooks`](/notebooks) folder.

### List of Notebooks
- The notebooks below are not part of the package, they can be found in the `tnsu` github repository under `/notebooks`. You can run them locally with jupyter notebook or in google colab (which is preferable in case you don't want to burn your laptop's mother-board :) )
+ The notebooks below are not part of the package; they can be found in the `tnsu` GitHub repository under `/notebooks`. You can run them locally with Jupyter Notebook or in Google Colab (which is preferable in case you don't want to burn your laptop's motherboard :) )

| # | file | Subject | Colab | Nbviewer |
|:----:|:--------------:|:------------------------------------------------:|:-----------------:|:---------------------:|
@@ -171,17 +171,17 @@ The notebooks below are not part of the package, they can be found in the `tnsu`

## Simulations
### Spin-1/2 Antiferromagnetic Heisenberg (AFH) model

- Below are some result of ground-state energy per-site simulated with the Simple Update algorithm over AFH Chain, Star, PEPS and Cube tensor networks. The AFH Hamiltonian is given by
+ Below are some results of the ground-state energy per site simulated with the Simple Update algorithm over AFH Chain, Star, PEPS, and Cube tensor networks. The AFH Hamiltonian is given by

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/hamiltonian_eq.png?raw=true" width="" height="60">

- In the case of the Star tensor network lattice the AFH Hamiltonian is composite of two parts which corresponds to different type of edges (see [1]).
- The Chain, Star, PEPS and Cube infinite tensor networks are illustrated in the next figure.
+ In the case of the Star tensor network lattice, the AFH Hamiltonian consists of two parts that correspond to different types of edges (see [1]).
+ The Chain, Star, PEPS, and Cube infinite tensor networks are illustrated in the next figure.

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/Tensor_Networks_diagrams.png?raw=true" width="1000" height="">

- Here are the ground-state energy per-site vs inverse virtual bond-dimension simulations for the tensor networks diagrams above
+ Here are the ground-state energy per site vs. inverse virtual bond-dimension simulations for the tensor network diagrams above

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/chain_star_peps_cube_plots.png?raw=true" width="1000" height="">

@@ -190,17 +190,17 @@ Next, we simulated the quantum Ising model on a 2D lattice with a transverse mag

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/ising_transverse_field.png?raw=true" width="" height="100">

- In the plots below one can see the simulated x, z magnetization (per-site) along with the simulated energy (per-site). We see that the SU algorithm is able to extract the phase transition of the model around h=3.2.
+ In the plots below, one can see the simulated x and z magnetization (per site) along with the simulated energy (per site). We see that the SU algorithm is able to extract the phase transition of the model around h=3.2.

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/Ising_model.png?raw=true" width="1000" height="">

### Spin-1 Simulation of a Bilinear-Biquadratic Heisenberg model on a star 2D lattice

- Finally we simulated the BLBQ Hamiltonian which is given by the next equation
+ Finally, we simulated the BLBQ Hamiltonian, which is given by the following equation

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/BLBQ_hamiltonian.png?raw=true" width="300" height="">

- notice that for 0 radian angle, this model coincides with the original AFH model. The energy, magnetization and Q-norm as a function of the angle for different bond dimension are plotted below. We can see that the simple-update algorithm is having a hard time to trace all the phase transitions of this model. However, we notice that for larger bond dimensions it seems like it captures the general behavior of the model's phase transition. For a comprehensive explanation and results (for triangular lattice see Ref [2])
+ Notice that for the 0-radian angle, this model coincides with the original AFH model. The energy, magnetization, and Q-norm as a function of the angle for different bond dimensions are plotted below. We can see that the simple-update algorithm has a hard time tracing all the phase transitions of this model. However, for larger bond dimensions it appears to capture the general behavior of the model's phase transitions. For a comprehensive explanation and results for the triangular lattice, see Ref [2].

<img src="https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/assets/BLBQ_model_simulation_star.png?raw=true" width="1000" height="">

@@ -217,14 +217,15 @@ Roy Elkabetz - [elkabetzroy@gmail.com](mailto:elkabetzroy@gmail.com)

## Citation

- To cite this repository in academic works or any other purpose, please use the following BibTeX citation:
+ To cite this repository in academic works or for any other purpose, please use the following BibTeX citation:
```Latex
- @software{tnsu,
+ @misc{tnsu,
  author = {Elkabetz, Roy},
- title = {{tnsu: A python package for Tensor Networks Simple-Update simulations}},
- url = {https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update},
- version = {1.0.2},
- year = {2022}
+ title = {{Python Package for Universal Tensor-Networks Simple-Update Simulations}},
+ howpublished = {\url{https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update}},
+ url = {https://github.com/RoyElkabetz/Tensor-Networks-Simple-Update/blob/main/tnsu__A_python_package_for_Tensor_Networks_Simple_Update_simulations.pdf},
+ year = {2022},
+ type = {Python package}
}
```
