diff --git a/Project.toml b/Project.toml
index 7dd9d2158d..e74287a24a 100644
--- a/Project.toml
+++ b/Project.toml
@@ -1,7 +1,7 @@
 name = "ITensors"
 uuid = "9136182c-28ba-11e9-034c-db9fb085ebd5"
 authors = ["Matthew Fishman ", "Miles Stoudenmire "]
-version = "0.9.0"
+version = "0.9.1"
 
 [deps]
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
diff --git a/docs/settings.jl b/docs/settings.jl
index 62e4e6e52c..a49d903537 100644
--- a/docs/settings.jl
+++ b/docs/settings.jl
@@ -39,10 +39,8 @@ settings = Dict(
     "Documentation" => ["Index" => "IndexType.md", "ITensor" => "ITensorType.md", "QN" => "QN.md"],
     "Frequently Asked Questions" => [
-      "Programming Language (Julia, C++, ...) FAQs" => "faq/JuliaAndCpp.md",
       "ITensor Development FAQs" => "faq/Development.md",
       "Julia Package Manager FAQs" => "faq/JuliaPkg.md",
-      "High-Performance Computing FAQs" => "faq/HPC.md",
     ],
     "Upgrade guides" => ["Upgrading from 0.1 to 0.2" => "UpgradeGuide_0.1_to_0.2.md"],
     "Advanced Usage Guide" => [
diff --git a/docs/src/faq/HPC.md b/docs/src/faq/HPC.md
deleted file mode 100644
index 97f0eaa538..0000000000
--- a/docs/src/faq/HPC.md
+++ /dev/null
@@ -1,88 +0,0 @@
-# High Performance Computing (HPC) Frequently Asked Questions
-
-## My code is using a lot of RAM - what can I do about this?
-
-Tensor network algorithms can often use a large amount of RAM. On top
-of this essential fact, the Julia programming language is "garbage collected",
-which means that unused memory isn't given back to the operating system right away,
-but only when the Julia runtime dynamically reclaims it. When your code
-allocates memory very rapidly, this can lead to high memory usage overall.
-
-Fortunately, there are various steps you can take to keep the memory usage of your code under control.
-
-### 1. Avoid Repeatedly Allocating, Especially in Fast or "Hot" Loops
-
-More memory gets used whenever your code "allocates", which happens most commonly
-when you use dynamic storage types like `Vector` and `Matrix`. If you have a code
-pattern where you allocate or resize an array or vector inside a 'hot' loop,
-meaning a loop that iterates many times very quickly, the memory from the previous
-allocations may pile up very quickly before the next garbage collector run.
-
-To avoid this, allocate the array once before the loop begins if possible,
-then overwrite its contents during each iteration. More generally, try as much as
-possible to estimate the sizes of dynamic resources ahead of time. Or do one allocation
-that creates a large enough "workspace" that dynamic algorithms can reuse parts of without
-reallocating the whole workspace (i.e. make a large array once, then use portions of it
-when smaller arrays are needed).
-
-### 2. Use the `--heap-size-hint` Flag
-
-A simple step you can take to help with overall memory usage is to pass
-the `--heap-size-hint` flag to the Julia program when you start it. For example,
-you can call Julia as:
-```
-julia --heap-size-hint=60G
-```
-When you pass this heap size, Julia will try to keep the memory usage at or below this
-value if possible.
-
-In cases where this does not work, your code may simply be allocating too much memory.
-Be sure not to allocate over and over again inside "hot" loops which execute many times.
-
-Another possibility is that you are simply working with a tensor network with large
-bond dimensions, which may fundamentally use a lot of memory. In those cases, you can
-try to use features such as the "write to disk mode" of the ITensor DMRG code or other
-related techniques. (See the `write_when_maxdim_exceeds` keyword of the ITensor `dmrg` function.)
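
For concreteness, a sketch of how the write-to-disk feature mentioned above might be enabled is shown below. It assumes the `dmrg` interface currently provided by ITensorMPS.jl (older ITensors versions export `dmrg` directly); the model, sweep schedule, and the threshold of 1000 are placeholder choices, not recommendations.

```julia
using ITensors, ITensorMPS

let
  # Spin-1 Heisenberg chain as a placeholder model.
  N = 100
  sites = siteinds("S=1", N)

  os = OpSum()
  for j in 1:(N - 1)
    os += "Sz", j, "Sz", j + 1
    os += 1 / 2, "S+", j, "S-", j + 1
    os += 1 / 2, "S-", j, "S+", j + 1
  end
  H = MPO(os, sites)
  psi0 = random_mps(sites; linkdims=10)

  # Once the allowed maxdim for a sweep exceeds 1000, intermediate
  # tensors are written to disk instead of being held in RAM.
  energy, psi = dmrg(
    H, psi0;
    nsweeps=10,
    maxdim=[100, 200, 500, 1000, 2000],
    cutoff=1e-10,
    write_when_maxdim_exceeds=1000,
  )
end
```
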
-
-
-### 3. In Rare Cases, Force a Garbage Collection Run
-
-In some rare cases, such as when your code cannot be optimized to avoid any more allocations
-or when the `--heap-size-hint` flag described above is not affecting the behavior of the Julia
-garbage collector, you can force the garbage collector (GC) to run at a specific point
-in your code by calling:
-```
-GC.gc()
-```
-Alternatively, you can call `GC.gc(true)` to force a "full run" rather than just collecting
-a 'younger' subset of previous allocations.
-
-While this approach works well to reduce memory usage, it can have the unfortunate downside
-of slowing down your code each time the garbage collector runs, which can be especially
-harmful to multithreaded or parallel algorithms. Therefore, if this approach must be used,
-try calling `GC.gc()` as infrequently as possible, and ideally only in the outermost functions
-and loops (the highest levels) of your code.
-
-
-## Can Julia Be Used to Perform Parallel, Distributed Calculations on Large Clusters?
-
-Yes. The Julia ecosystem offers multiple approaches to parallel computing across multiple
-machines, including large HPC clusters, and can also make use of GPU resources.
-
-For an overview of some of these options, the [Julia on HPC Clusters](https://juliahpc.github.io/JuliaOnHPCClusters/) website is a good resource.
-
-Some of the leading approaches to parallelism in Julia are:
-* MPI, through the [MPI.jl](https://juliaparallel.org/MPI.jl/latest/) package. This has the advantage of optionally using an MPI backend that is optimized for a particular cluster, possibly using fast interconnects like InfiniBand.
-* [Dagger](https://juliaparallel.org/Dagger.jl/dev/), a framework for parallel computing across all kinds of resources, like CPUs and GPUs, and across multiple threads and multiple servers.
-* [Distributed](https://docs.julialang.org/en/v1/stdlib/Distributed/). Part of the Julia standard library, providing tools to perform calculations distributed across multiple machines.
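
As a minimal sketch of the last option, the Distributed standard library can spread independent tasks over worker processes; the worker count and the `heavy_computation` function below are placeholders for whatever per-task work your code actually does.

```julia
using Distributed

# Launch four local worker processes (on a cluster you would typically
# start workers through the scheduler instead, e.g. with `julia -p N`
# or a cluster manager package).
addprocs(4)

# Define the work function on every worker process.
@everywhere function heavy_computation(x)
  # Placeholder for an expensive, independent task.
  return sum(abs2, sin.(x .* (1:10^6)))
end

# Map the tasks over the available workers in parallel.
results = pmap(heavy_computation, 1:16)
println(results)
```
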
-
-
-## Does My Cluster Admin Have to Install Julia for Me? What are the Best Practices for Installing Julia on Clusters?
-
-The most common approach to installing and using Julia on clusters is for users to install their own Julia binary and dependencies, which is quite easy to do. However, for certain libraries like MPI.jl, there may be MPI backends that are preferred by the cluster administrator. Fortunately, it is possible for admins to set global defaults for such backends and other library preferences.
-
-For more information on best practices for installing Julia on clusters, see the [Julia on HPC Clusters](https://juliahpc.github.io/JuliaOnHPCClusters/) website.
-
-
-
-
diff --git a/docs/src/faq/JuliaAndCpp.md b/docs/src/faq/JuliaAndCpp.md
deleted file mode 100644
index 8f43026379..0000000000
--- a/docs/src/faq/JuliaAndCpp.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Programming Language (Julia, C++) Frequently Asked Questions
-
-## Should I use the Julia or C++ version of ITensor?
-
-We recommend the Julia version of ITensor for most people, because:
-* Julia ITensor has more and newer features than C++ ITensor, and we are developing it more rapidly
-* Julia is a more productive language than C++ with more built-in features, such as linear algebra, iteration tools, etc.
-* Julia is a compiled language with performance rivaling C++ (see the next question below for a longer discussion)
-* Julia has a rich ecosystem with a package manager, many well-designed libraries, and helpful tutorials
-
-Even if Julia is not available by default on your computer cluster, it is easy to set up your own local install of Julia on a cluster.
-
-However, some good reasons to use the C++ version of ITensor are:
-* you want to use ITensor within existing C++ codes
-* you already have expertise in C++ programming
-* you need multithreading support in C++, such as with OpenMP, which offers certain sophisticated features compared to Julia multithreading (though Julia's multithreading has other benefits, such as composability, and is rapidly improving)
-* you need other specific features of C++, such as control over memory management or instant start-up times
-
-## Which is faster: Julia or C++?
-
-Julia and C++ offer about the same performance.
-
-Each language gets compiled to optimized assembly code and offers arrays and containers
-which can be efficiently stored and iterated. Well-written Julia code can be even faster
-than comparable C++ codes in many cases.
-
-The longer answer is of course that _it depends_:
-* Julia is a more productive language than C++, with many highly-optimized libraries for
-  numerical computing tasks, and excellent tools for profiling and benchmarking.
-  These features help significantly to tune Julia codes for optimal performance.
-* C++ offers much more fine-grained control over memory management, which can enhance
-  performance and keep memory usage predictable in certain applications.
-* Julia codes can slow down significantly during refactoring or when introducing new
-  code if certain [best practices](https://docs.julialang.org/en/v1/manual/performance-tips/)
-  are not followed. The most important of these is writing type-stable code. For more details
-  see the [Performance Tips](https://docs.julialang.org/en/v1/manual/performance-tips/) section
-  of the Julia documentation.
-* C++ applications start instantly, while Julia codes can be slow to start.
-  However, once this start-up time is subtracted, the remaining run time of a
-  Julia application is similar to that of a C++ application.
-
-## Why did you choose Julia over Python for ITensor?
-
-Julia offers much better performance than Python,
-while still having nearly all of Python's benefits. One consequence is that
-ITensor can be written purely in Julia, whereas to write high-performance
-Python libraries it is necessary to implement many parts in C or C++
-(the "two-language problem").
-
-The main reasons Julia codes can easily outperform Python codes are:
-1. Julia is a (just-in-time) compiled language with functions specialized
-   for the types of the arguments passed to them
-2. Julia arrays and containers are specialized to the types they contain,
-   and perform similarly to C or C++ arrays when all elements have the same type
-3. Julia has sophisticated support for multithreading while Python has significant
-   problems with multithreading
-
-Of course there are some drawbacks of Julia compared to Python, including
-a less mature ecosystem of libraries (though it is simple to call Python libraries
-from Julia using [PyCall](https://github.com/JuliaPy/PyCall.jl)), and less widespread
-adoption.
-
-## Is Julia ITensor a wrapper around the C++ version?
-
-No. The Julia version of ITensor is a complete, ground-up port
-of the ITensor library to the Julia language and is written
-100% in Julia.
-
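
As a small, self-contained illustration of the type-stability point made in the removed "Which is faster" section above (not ITensor-specific; the function names are placeholders):

```julia
# Type-unstable: `acc` starts as an Int and becomes a Float64 inside the
# loop, so the compiler cannot specialize on a single concrete type.
function sum_sqrt_unstable(xs)
  acc = 0
  for x in xs
    acc += sqrt(x)
  end
  return acc
end

# Type-stable: the accumulator keeps one concrete type throughout,
# letting the compiler emit tight, C-like machine code.
function sum_sqrt_stable(xs::AbstractVector{T}) where {T<:AbstractFloat}
  acc = zero(T)
  for x in xs
    acc += sqrt(x)
  end
  return acc
end

# `@code_warntype sum_sqrt_unstable(rand(1000))` highlights the unstable
# accumulator; BenchmarkTools.jl's `@btime` shows the resulting slowdown.
```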