Conversation

@JoeyT1994
Collaborator

This PR adds GPU support for the BeliefPropagationCache, allowing the structure to be moved to a GPU via overloads of the adapt_structure function on both the underlying tensor network and the messages. As part of this, Adapt has been promoted from a weak dependency to a core dependency of the library.
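For example, since adapt_structure is overloaded for the cache and its messages, Adapt.adapt can move the whole structure between storage types. This is a minimal sketch; using CuArray as the target storage type (rather than CUDA.cu, shown further below) is an assumption, and any Adapt-compatible type should work:

using Adapt, CUDA

# Move every tensor in the cache (network and messages) to GPU storage,
# then back to CPU storage; `bpc` is assumed to be an existing BeliefPropagationCache.
gpu_bpc = adapt(CuArray, bpc)
cpu_bpc = adapt(Array, gpu_bpc)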

Moreover, a custom-device message update algorithm has been defined, which allows the tensors in a message update to be moved to a designated device (such as a GPU) for contraction and then brought back to the device they were originally on afterwards.
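Conceptually, the custom-device update performs a round trip like the sketch below. This is illustrative only and not the library's internal code; to_device and to_original are placeholder functions (e.g. CUDA.cu and a function adapting the result back to CPU storage):

# Illustrative sketch of the per-update round trip:
# move the message tensors to the designated device, contract there,
# then bring the result back to the original device.
function contract_on_device(message_tensors, to_device, to_original)
    device_tensors = to_device.(message_tensors)  # e.g. CUDA.cu applied to each tensor
    result = reduce(*, device_tensors)            # contraction happens on the device
    return to_original(result)                    # e.g. adapt back to CPU storage
end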

To enable these features, the kwargs of the BP code have been reorganized: they are now expected to be passed inside the algorithm struct specified for BP, e.g. update(bp_cache; alg = Algorithm("bp"; maxiter = ..., tol = ..., edge_sequence = ...)). A reasonable default is defined.

Meanwhile, the message update method can be passed as message_update_alg = Algorithm("contract"; kwargs...), with the relevant kwargs contained within it; a combined call in the new style is sketched below.
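For instance, a full update call in the new style might look like the following. The kwarg values here are illustrative, and passing both alg and message_update_alg at the top level together is an assumption pieced together from the two snippets in this PR:

bp_alg = Algorithm("bp"; maxiter = 25, tol = 1e-8)
msg_alg = Algorithm("contract"; normalize = true, sequence_alg = "optimal")
bpc = update(bpc; alg = bp_alg, message_update_alg = msg_alg)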

The following code is now supported:

using CUDA
using ITensorNetworks
using Random: Random

# Assumed setup: `s` is a collection of site indices on some graph
# (e.g. from siteinds) and `χ` is the bond dimension.
rng = Random.default_rng()
ψ = random_tensornetwork(rng, s; link_space = χ)
ψψ = QuadraticFormNetwork(ψ)
bpc = BeliefPropagationCache(ψψ)

# Move the cache to an NVIDIA GPU and run BP there
gpu_bpc = CUDA.cu(bpc)
gpu_bpc = update(gpu_bpc)

# Alternatively, keep the cache on the CPU and just use the GPU for the message updates
bpc = update(bpc; message_update_alg = Algorithm("contract_custom_device"; normalize = true, custom_device_adapt = CUDA.cu, sequence_alg = "optimal"))

@mtfishman
Member

@JoeyT1994 it looks like you need to update the compat entries for ITensorNetworks to v0.14 in docs/Project.toml and examples/Project.toml.

@mtfishman merged commit 5bc9888 into ITensor:main on Aug 20, 2025 (10 of 11 checks passed).