
Commit 9cf235c

fix tests
1 parent 9175101 commit 9cf235c


42 files changed: +357 −917 lines changed

NEWS.md

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
# Updates in v3.0

1. The solution size of `Coloring`/`Satisfiability` is now defined as the number of color/clause violations; smaller is now better.
2. Rename `best_solutions` to `largest_solutions`, `best2_solutions` to `largest2_solutions` and `bestk_solutions` to `largestk_solutions`.
3. Remove the weights from `PaintShop`.
4. Remove the weights on vertices from `MaxCut`.
5. `SpinGlass` is no longer specified by cliques. It is now specified by graphs or hypergraphs, and weights can be defined on both edges and vertices (see the sketch below).
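Below is a minimal, hedged sketch of the new `SpinGlass` interface from item 5, mirroring the updated `examples/SpinGlass.jl`; the three-spin instance and the use of `SizeMin` for the ground-state energy are illustrative assumptions, not part of this commit.

```julia
using GenericTensorNetworks

# Hypothetical 3-spin instance: couplings are given as (hyper)edges plus one weight per edge.
num_vertices = 3
hyperedges = [[1, 2], [2, 3], [1, 3]]
weights = [1, 1, 1]

spinglass = SpinGlass(num_vertices, hyperedges, weights)   # specified by a hypergraph, not by cliques
problem = GenericTensorNetwork(spinglass)
solve(problem, SizeMin())[]   # assumed: the solution size is the energy, so SizeMin gives the ground state
```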

docs/src/index.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ solve(
Here the main function [`solve`](@ref) takes three input arguments, the problem instance of type [`IndependentSet`](@ref), the property instance of type [`GraphPolynomial`](@ref) and an optional keyword argument `usecuda` to decide whether to use the GPU.
If one wants to use the GPU to accelerate the computation, the `, CUDA` should be uncommented.

-An [`IndependentSet`](@ref) instance takes two positional arguments to initialize, the graph instance that one wants to solve and the get_weights for each vertex. Here, we use a random regular graph with 20 vertices and degree 3, and the default uniform weight 1.
+An [`IndependentSet`](@ref) instance takes two positional arguments to initialize, the graph instance that one wants to solve and the weights for each vertex. Here, we use a random regular graph with 20 vertices and degree 3, and the default uniform weight 1.

The [`GenericTensorNetwork`](@ref) function is a constructor for the problem instance, which takes the problem instance as the first argument and optional keyword arguments. The keyword argument `optimizer` specifies the tensor network contraction-order optimization algorithm.
The keyword argument `openvertices` is a tuple of labels specifying the degrees of freedom that are not summed over, and `fixedvertices` is a label-value dictionary specifying the fixed values of the degrees of freedom.
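A hedged end-to-end sketch of the workflow described above; the choice of `TreeSA()` and the explicitly spelled-out (default-valued) `openvertices`/`fixedvertices` keywords are assumptions for illustration, not part of this diff.

```julia
using GenericTensorNetworks, Graphs #, CUDA  # uncomment `, CUDA` to enable GPU support

graph = random_regular_graph(20, 3)                 # 20 vertices, degree 3
problem = GenericTensorNetwork(
    IndependentSet(graph);                          # default uniform weight 1 per vertex
    optimizer = TreeSA(),                           # contraction-order optimizer
    openvertices = (),                              # no open degrees of freedom
    fixedvertices = Dict(),                         # no fixed degrees of freedom
)
solve(problem, GraphPolynomial(); usecuda = false)  # set usecuda = true only with a GPU available
```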

docs/src/ref.md

Lines changed: 4 additions & 4 deletions
@@ -21,17 +21,17 @@ OpenPitMining
#### Constraint Satisfaction Problem Interfaces

To subtype [`ConstraintSatisfactionProblem`](@ref), a new type must contain a `code` field to represent the (optimized) tensor network.
-Interfaces [`GenericTensorNetworks.generate_tensors`](@ref), [`labels`](@ref), [`flavors`](@ref) and [`get_weights`](@ref) are required.
-[`nflavor`](@ref) is optional.
+Interfaces [`GenericTensorNetworks.generate_tensors`](@ref), [`labels`](@ref), [`flavors`](@ref) and [`weights`](@ref) are required.
+[`num_flavors`](@ref) is optional.

```@docs
GenericTensorNetworks.generate_tensors
labels
energy_terms
flavors
-get_weights
+weights
set_weights
-nflavor
+num_flavors
fixedvertices
```

docs/src/tensornetwork.md

Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@
# An introduction to tensor networks

Let $G = (V, E)$ be a hypergraph, where $V$ is the set of vertices and $E$ is the set of hyperedges. Each vertex $v \in V$ is associated with a local variable, e.g. a "spin" or a "bit". A hyperedge $e \in E$ is a subset of vertices, $e \subseteq V$. On top of this structure, we can define a local Hamiltonian $H$ as a sum of local terms $h_e$ over all hyperedges $e \in E$:

```math
H(\sigma) = \sum_{e \in E} h_e(\sigma_e)
```

where $\sigma_e$ is the restriction of the configuration $\sigma$ to the vertices in $e$.

The following solution space properties are of interest (see the sketch after this list for how they are queried in code):

* The partition function,
```math
Z = \sum_{\sigma} e^{-\beta H(\sigma)}
```
where $\beta$ is the inverse temperature.
* The maximum/minimum solution sizes,
```math
\max_{\sigma} H(\sigma), \quad \min_{\sigma} H(\sigma)
```
* The number of solutions at certain sizes,
```math
N(k) = \sum_{\sigma} \delta(k, H(\sigma))
```
* The enumeration of solutions at certain sizes,
```math
S = \{ \sigma \mid H(\sigma) = k \}
```
* The direct sampling of solutions at certain sizes,
```math
\sigma \sim S
```
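As a rough illustration (not part of this page), these properties map onto the solvers exported by GenericTensorNetworks.jl roughly as follows; the independent-set instance on the Petersen graph is an assumption.

```julia
using GenericTensorNetworks, Graphs

problem = GenericTensorNetwork(IndependentSet(smallgraph(:petersen)))
solve(problem, PartitionFunction(1.0))[]  # partition function Z at inverse temperature β = 1
solve(problem, SizeMax())[]               # maximum solution size
solve(problem, CountingMax())[]           # number of maximum-size solutions
solve(problem, ConfigsMax())[]            # enumeration of the maximum-size solutions
```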
## Tensor network representation

### Partition function
It is well known that the partition function of an energy model can be represented as a tensor network[^Levin2007]. The partition function can be written in a sum-product form as
```math
Z = \sum_{\sigma} e^{-\beta H(\sigma)} = \sum_{\sigma} \prod_{e \in E} T_e(\sigma_e)
```
where $T_e(\sigma_e) = e^{-\beta h_e(\sigma_e)}$ is a tensor associated with the hyperedge $e$.

This sum-product form corresponds directly to a tensor network $(V, \{T_{\sigma_e} \mid e\in E\}, \emptyset)$, where $T_{\sigma_e}$ is a tensor labeled by $\sigma_e \subseteq V$ and its elements are defined by $T_{\sigma_e}= T_e(\sigma_e)$. The last entry $\emptyset$ is the set of open vertices (variables not summed over), which is empty here.

### Maximum/minimum solution sizes
The maximum/minimum solution sizes can be represented as a tensor network as well. The maximum solution size can be written as
```math
\max_{\sigma} H(\sigma) = \max_{\sigma} \sum_{e \in E} h_e(\sigma_e)
```
which can be represented as a tropical tensor network[^Liu2021] $(V, \{h_{\sigma_e} \mid e\in E\}, \emptyset)$, where $h_{\sigma_e}$ is a tensor labeled by $\sigma_e \subseteq V$ and its elements are defined by $h_{\sigma_e}= h_e(\sigma_e)$.

## Problems
### Independent set problem
The independent set problem on a graph $G=(V, E)$ is characterized by the Hamiltonian
```math
H(\sigma) = U \sum_{(i, j) \in E} n_i n_j - \sum_{i \in V} n_i
```
where $n_i \in \{0, 1\}$ is a binary variable associated with vertex $i$, and $U\rightarrow \infty$ is a large constant. The goal is to find a maximum independent set, i.e. a largest set of vertices such that no two of them are connected by an edge.
The partition function for an independent set problem is
```math
Z = \sum_{\sigma} e^{-\beta H(\sigma)} = \sum_{\sigma} \prod_{(i, j) \in E} e^{-\beta U n_in_j} \prod_{i \in V} e^{\beta n_i}
```

Let $x = e^{\beta}$; then the partition function can be written as
```math
Z = \sum_{\sigma} \prod_{(i, j) \in E} B_{n_in_j} \prod_{i \in V} W_{n_i}
```
where $B_{n_in_j} = \lim_{U \rightarrow \infty} e^{-U \beta n_in_j}=\begin{cases}0, \quad n_in_j = 1\\1,\quad n_in_j = 0\end{cases}$ and $W_{n_i} = x^{n_i}$ are tensors associated with the edge $(i, j)$ and the vertex $i$, respectively.

The tensor network representation for the partition function is
```math
\mathcal{N}_{IS} = (\Lambda, \{B_{n_in_j} \mid (i, j)\in E\} \cup \{W_{n_i} \mid i\in \Lambda\}, \emptyset)
```
where $\Lambda = \{n_i \mid i \in V\}$ is the set of binary variables, $B_{n_in_j}$ is a tensor associated with the edge $(i, j)$ and $W_{n_i}$ is a tensor associated with the vertex $i$. The tensors are defined as
```math
W = \left(\begin{matrix}
1 \\
x
\end{matrix}\right)
```
where $x$ is the variable associated with vertex $i$, and
```math
B = \left(\begin{matrix}
1 & 1\\
1 & 0
\end{matrix}\right).
```

The contraction of the tensor network $\mathcal{N}_{IS}$ gives the partition function $Z$. It is implicitly assumed that the tensor elements are real numbers.
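To make this concrete, here is a small brute-force check of the sum-product formula on the 3-vertex path graph; it follows directly from the definitions of $B$ and $W$ above and does not use the package API.

```julia
x = 2.0                      # x = e^β, an arbitrary value for the check
W = [1.0, x]                 # W[n + 1] for n ∈ {0, 1}
B = [1.0 1.0; 1.0 0.0]       # B[n_i + 1, n_j + 1]: zero iff both endpoints are occupied
edges = [(1, 2), (2, 3)]     # the path graph 1 - 2 - 3

Z = sum(prod(B[n[i] + 1, n[j] + 1] for (i, j) in edges) * prod(W[ni + 1] for ni in n)
        for n in Iterators.product(0:1, 0:1, 0:1))

# Independent sets of the path: {}, {1}, {2}, {3}, {1,3}  ⇒  Z = 1 + 3x + x²
@assert Z ≈ 1 + 3x + x^2
```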
However, by replacing the tensor elements with tropical numbers, the tensor network $\mathcal{N}_{IS}$ can be used to compute the maximum independent set size and its degeneracy[^Liu2021].

An algebra can be defined by
```math
\begin{align*}
\oplus &= \max\\
\otimes &= +
\end{align*}
```
With this algebra, and with the tensor entries replaced by their base-$x$ logarithms ($0$/$-\infty$ entries for $B$ and $0$/$1$ entries for $W$), the contraction of $\mathcal{N}_{IS}$ yields the maximum independent set size.
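A minimal sketch of this $(\max, +)$ algebra in plain Julia; the `MaxPlus` type below is illustrative and is not the tropical number type used by the package.

```julia
struct MaxPlus
    val::Float64
end
Base.:+(a::MaxPlus, b::MaxPlus) = MaxPlus(max(a.val, b.val))  # ⊕ = max
Base.:*(a::MaxPlus, b::MaxPlus) = MaxPlus(a.val + b.val)      # ⊗ = +
Base.zero(::Type{MaxPlus}) = MaxPlus(-Inf)                    # identity of ⊕
Base.one(::Type{MaxPlus}) = MaxPlus(0.0)                      # identity of ⊗

# "Summing" in this algebra takes a maximum:
sum([MaxPlus(1.0), MaxPlus(3.0), MaxPlus(2.0)]).val           # 3.0
```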
[^Levin2007]: Levin, M., Nave, C.P., 2007. Tensor renormalization group approach to two-dimensional classical lattice models. Physical Review Letters 99, 1–4. https://doi.org/10.1103/PhysRevLett.99.120601
[^Liu2021]: Liu, J.-G., Wang, L., Zhang, P., 2021. Tropical Tensor Network for Ground States of Spin Glasses. Phys. Rev. Lett. 126, 090506. https://doi.org/10.1103/PhysRevLett.126.090506

examples/MaxCut.jl

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@
# In graph theory, a [cut](https://en.wikipedia.org/wiki/Cut_(graph_theory)) is a partition of the vertices of a graph into two disjoint subsets.
# It is closely related to the [Spin-glass problem](@ref) in physics.
# Finding the maximum cut is NP-Hard, where a maximum cut is a cut whose size is at least the size of any other cut,
-# where the size of a cut is the number of edges (or the sum of get_weights on edges) crossing the cut.
+# where the size of a cut is the number of edges (or the sum of weights on edges) crossing the cut.

using GenericTensorNetworks, Graphs

@@ -40,7 +40,7 @@ problem = GenericTensorNetwork(maxcut)
# where ``w_{ij}`` is a real number associated with edge ``(i, j)`` as the edge weight.
# If and only if the bipartition cuts on edge ``(i, j)``,
# this tensor contributes a factor ``x_{i}^{w_{ij}}`` or ``x_{j}^{w_{ij}}``.
-# Similarly, one can assign get_weights to vertices, which corresponds to the onsite energy terms in the spin glass.
+# Similarly, one can assign weights to vertices, which corresponds to the onsite energy terms in the spin glass.
# The vertex tensor is
# ```math
# W(x_i, w_i) = \left(\begin{matrix}

examples/SpinGlass.jl

Lines changed: 2 additions & 2 deletions
@@ -106,7 +106,7 @@ hyperedges = [[1,3,4,6,7], [4,7,8,12], [2,5,9,11,13],
[1,2,14,15], [3,6,10,12,14], [8,14,15],
[1,2,6,11], [1,2,4,6,8,12]]

-get_weights = [-1, 1, -1, 1, -1, 1, -1, 1];
+weights = [-1, 1, -1, 1, -1, 1, -1, 1];

# The energy function of the spin glass problem is
# ```math
@@ -117,7 +117,7 @@ get_weights = [-1, 1, -1, 1, -1, 1, -1, 1];
# \end{align*}
# ```
# A spin glass problem can be defined with the [`SpinGlass`](@ref) type as
-hyperspinglass = SpinGlass(num_vertices, hyperedges, get_weights)
+hyperspinglass = SpinGlass(num_vertices, hyperedges, weights)

# The tensor network representation of the spin glass problem can be obtained by
hyperproblem = GenericTensorNetwork(hyperspinglass)
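# A hedged continuation (not part of this diff): compute the ground-state energy and its
# degeneracy, assuming the solution size of `SpinGlass` is the energy, so that
# `SizeMin`/`CountingMin` refer to the ground state.
Emin = solve(hyperproblem, SizeMin())[]
counting_min = solve(hyperproblem, CountingMin())[]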

examples/weighted.jl

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@ graph = Graphs.smallgraph(:petersen)
# The following code constructs a weighted MIS problem instance.
problem = GenericTensorNetwork(IndependentSet(graph, collect(1:10)));
-GenericTensorNetworks.get_weights(problem)
+GenericTensorNetworks.weights(problem)

# The tensor labels that are associated with the weights can be accessed by
GenericTensorNetworks.energy_terms(problem)
@@ -26,8 +26,8 @@ show_graph(graph, locations; format=:svg, vertex_colors=
# The only solution space property that can not be defined for general real-weighted (not including integer-weighted) graphs is the [`GraphPolynomial`](@ref).

-# For the weighted MIS problem, a useful solution space property is the "energy spectrum", i.e. the largest several configurations and their get_weights.
-# We can use the solution space property [`SizeMax`](@ref)`(10)` to compute the largest 10 get_weights.
+# For the weighted MIS problem, a useful solution space property is the "energy spectrum", i.e. the largest several configurations and their weights.
+# We can use the solution space property [`SizeMax`](@ref)`(10)` to compute the largest 10 weights.
spectrum = solve(problem, SizeMax(10))[]

# The return value has type [`ExtendedTropical`](@ref), which contains one field `orders`.
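# A hedged follow-up (not part of this diff): the 10 largest weighted sizes are stored in
# `spectrum.orders`; `SingleConfigMax(10)` additionally returns one configuration per size.
spectrum.orders
best10 = solve(problem, SingleConfigMax(10))[]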

src/GenericTensorNetworks.jl

Lines changed: 8 additions & 8 deletions
@@ -16,10 +16,11 @@ using Primes
using DocStringExtensions
using Base.Cartesian
using ProblemReductions
-import ProblemReductions: ConstraintSatisfactionProblem,AbstractSatisfiabilityProblem, UnitWeight, ZeroWeight
-import ProblemReductions: @bv_str, StaticElementVector, StaticBitVector,onehotv, _nints
-import ProblemReductions: is_set_covering, is_vertex_coloring, is_set_packing,is_matching, is_valid_mining, print_mining,num_paint_shop_color_switch, paint_shop_coloring_from_config, paint_shop_from_pairs,spin_glass_from_matrix, CNF, CNFClause, BoolVar, satisfiable, @bools, ∧, ∨, ¬
-import ProblemReductions: flavors,set_weights
+import ProblemReductions: ConstraintSatisfactionProblem, AbstractSatisfiabilityProblem, UnitWeight
+import ProblemReductions: @bv_str, StaticElementVector, StaticBitVector, onehotv, _nints, hamming_distance
+import ProblemReductions: is_set_covering, is_vertex_coloring, is_set_packing, is_dominating_set, is_matching, is_maximal_independent_set, cut_size, is_independent_set
+import ProblemReductions: num_paint_shop_color_switch, spin_glass_from_matrix, CNF, CNFClause, BoolVar, satisfiable, @bools, ∧, ∨, ¬
+import ProblemReductions: flavors, set_weights, weights, num_flavors, variables
import AbstractTrees: children, printnode, print_tree
import StatsBase
@@ -44,8 +45,8 @@ export square_lattice_graph, unit_disk_graph, random_diagonal_coupled_graph, ran
export line_graph

# Tensor Networks (Graph problems)
-export GenericTensorNetwork, optimize_code, UnitWeight, ZeroWeight
-export flavors, labels, nflavor, get_weights, fixedvertices, set_weights, energy_terms
+export GenericTensorNetwork, optimize_code, UnitWeight
+export flavors, variables, num_flavors, weights, fixedvertices, set_weights, energy_terms
export IndependentSet, mis_compactify!, is_independent_set
export MaximalIS, is_maximal_independent_set
export cut_size, MaxCut
@@ -57,7 +58,6 @@ export DominatingSet, is_dominating_set
export Matching, is_matching
export SetPacking, is_set_packing
export SetCovering, is_set_covering
-export OpenPitMining, is_valid_mining, print_mining

# Interfaces
export solve, SizeMax, SizeMin, PartitionFunction, CountingAll, CountingMax, CountingMin, GraphPolynomial, SingleConfigMax, SingleConfigMin, ConfigsAll, ConfigsMax, ConfigsMin, Single, AllConfigs
@@ -80,7 +80,7 @@ using .Mods

include("utils.jl")
include("arithematics.jl")
-include("networks/networks.jl")
+include("networks.jl")
include("graph_polynomials.jl")
include("configurations.jl")
include("graphs.jl")

src/arithematics.jl

Lines changed: 13 additions & 13 deletions
@@ -788,31 +788,31 @@ end
# convert from counting type to bitstring type
for F in [:set_type, :sampler_type, :treeset_type]
    @eval begin
-        function $F(::Type{T}, n::Int, nflavor::Int) where {OT, K, T<:TruncatedPoly{K,C,OT} where C}
-            TruncatedPoly{K, $F(n,nflavor),OT}
+        function $F(::Type{T}, n::Int, num_flavors::Int) where {OT, K, T<:TruncatedPoly{K,C,OT} where C}
+            TruncatedPoly{K, $F(n,num_flavors),OT}
        end
-        function $F(::Type{T}, n::Int, nflavor::Int) where {TX, T<:Polynomial{C,TX} where C}
-            Polynomial{$F(n,nflavor),:x}
+        function $F(::Type{T}, n::Int, num_flavors::Int) where {TX, T<:Polynomial{C,TX} where C}
+            Polynomial{$F(n,num_flavors),:x}
        end
-        function $F(::Type{T}, n::Int, nflavor::Int) where {TV, T<:CountingTropical{TV}}
-            CountingTropical{TV, $F(n,nflavor)}
+        function $F(::Type{T}, n::Int, num_flavors::Int) where {TV, T<:CountingTropical{TV}}
+            CountingTropical{TV, $F(n,num_flavors)}
        end
-        function $F(::Type{Real}, n::Int, nflavor::Int)
-            $F(n, nflavor)
+        function $F(::Type{Real}, n::Int, num_flavors::Int)
+            $F(n, num_flavors)
        end
    end
end
for (F,TP) in [(:set_type, :ConfigEnumerator), (:sampler_type, :ConfigSampler)]
-    @eval function $F(n::Integer, nflavor::Integer)
-        s = ceil(Int, log2(nflavor))
+    @eval function $F(n::Integer, num_flavors::Integer)
+        s = ceil(Int, log2(num_flavors))
        c = _nints(n,s)
        return $TP{n,s,c}
    end
end
-function treeset_type(n::Integer, nflavor::Integer)
-    return SumProductTree{OnehotVec{n, nflavor}}
+function treeset_type(n::Integer, num_flavors::Integer)
+    return SumProductTree{OnehotVec{n, num_flavors}}
end
-sampler_type(::Type{ExtendedTropical{K,T}}, n::Int, nflavor::Int) where {K,T} = ExtendedTropical{K, sampler_type(T, n, nflavor)}
+sampler_type(::Type{ExtendedTropical{K,T}}, n::Int, num_flavors::Int) where {K,T} = ExtendedTropical{K, sampler_type(T, n, num_flavors)}

# utilities for creating onehot vectors
onehotv(::Type{ConfigEnumerator{N,S,C}}, i::Integer, v) where {N,S,C} = ConfigEnumerator([onehotv(StaticElementVector{N,S,C}, i, v)])
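As a hedged illustration of the renamed `num_flavors` argument in these converters (internal API; the module-qualified call and the commented result are assumptions based on the definitions above):

```julia
using GenericTensorNetworks

# Lift a counting scalar type to a configuration-enumerating element type
# for 10 variables with 2 flavors each (s = ceil(Int, log2(2)) = 1 bit per variable).
GenericTensorNetworks.set_type(CountingTropical{Float64,Float64}, 10, 2)
# expected, per the definitions above: CountingTropical{Float64, ConfigEnumerator{10, 1, 1}}
```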

src/configurations.jl

Lines changed: 13 additions & 13 deletions
@@ -1,17 +1,17 @@
-function config_type(::Type{T}, n, nflavor; all::Bool, tree_storage::Bool) where T
+function config_type(::Type{T}, n, num_flavors; all::Bool, tree_storage::Bool) where T
    if all
        if tree_storage
-            return treeset_type(T, n, nflavor)
+            return treeset_type(T, n, num_flavors)
        else
-            return set_type(T, n, nflavor)
+            return set_type(T, n, num_flavors)
        end
    else
-        return sampler_type(T, n, nflavor)
+        return sampler_type(T, n, num_flavors)
    end
end

"""
-    best_solutions(problem; all=false, usecuda=false, invert=false, tree_storage::Bool=false)
+    largest_solutions(problem; all=false, usecuda=false, invert=false, tree_storage::Bool=false)

Find optimal solutions with bounding.
@@ -20,19 +20,19 @@ Find optimal solutions with bounding.
* If `invert` is true, find the minimum.
* If `tree_storage` is true, use [`SumProductTree`](@ref) as the storage of solutions.
"""
-function best_solutions(gp::GenericTensorNetwork; all=false, usecuda=false, invert=false, tree_storage::Bool=false, T=Float64)
+function largest_solutions(gp::GenericTensorNetwork; all=false, usecuda=false, invert=false, tree_storage::Bool=false, T=Float64)
    if all && usecuda
        throw(ArgumentError("ConfigEnumerator can not be computed on GPU!"))
    end
    xst = generate_tensors(_x(Tropical{T}; invert), gp)
-    ymask = trues(fill(nflavor(gp), length(getiyv(gp.code)))...)
+    ymask = trues(fill(num_flavors(gp), length(getiyv(gp.code)))...)
    if usecuda
        xst = togpu.(xst)
        ymask = togpu(ymask)
    end
    if all
        # we use `Float64` as default because we want to support weighted graphs.
-        T = config_type(CountingTropical{T,T}, length(labels(gp)), nflavor(gp); all, tree_storage)
+        T = config_type(CountingTropical{T,T}, length(labels(gp)), num_flavors(gp); all, tree_storage)
        xs = generate_tensors(_x(T; invert), gp)
        ret = bounding_contract(AllConfigs{1}(), gp.code, xst, ymask, xs)
        return invert ? asarray(post_invert_exponent.(ret), ret) : ret
@@ -61,22 +61,22 @@ function solutions(gp::GenericTensorNetwork, ::Type{BT}; all::Bool, usecuda::Boo
    if all && usecuda
        throw(ArgumentError("ConfigEnumerator can not be computed on GPU!"))
    end
-    T = config_type(BT, length(labels(gp)), nflavor(gp); all, tree_storage)
+    T = config_type(BT, length(labels(gp)), num_flavors(gp); all, tree_storage)
    ret = contractx(gp, _x(T; invert); usecuda=usecuda)
    return invert ? asarray(post_invert_exponent.(ret), ret) : ret
end

"""
-    best2_solutions(problem; all=true, usecuda=false, invert=false, tree_storage::Bool=false)
+    largest2_solutions(problem; all=true, usecuda=false, invert=false, tree_storage::Bool=false)

Finding optimal and suboptimal solutions.
"""
-best2_solutions(gp::GenericTensorNetwork; all=true, usecuda=false, invert::Bool=false, T=Float64) = solutions(gp, Max2Poly{T,T}; all, usecuda, invert)
+largest2_solutions(gp::GenericTensorNetwork; all=true, usecuda=false, invert::Bool=false, T=Float64) = solutions(gp, Max2Poly{T,T}; all, usecuda, invert)

-function bestk_solutions(gp::GenericTensorNetwork, k::Int; invert::Bool=false, tree_storage::Bool=false, T=Float64)
+function largestk_solutions(gp::GenericTensorNetwork, k::Int; invert::Bool=false, tree_storage::Bool=false, T=Float64)
    xst = generate_tensors(_x(Tropical{T}; invert), gp)
    ymask = trues(fill(2, length(getiyv(gp.code)))...)
-    T = config_type(TruncatedPoly{k,T,T}, length(labels(gp)), nflavor(gp); all=true, tree_storage)
+    T = config_type(TruncatedPoly{k,T,T}, length(labels(gp)), num_flavors(gp); all=true, tree_storage)
    xs = generate_tensors(_x(T; invert), gp)
    ret = bounding_contract(AllConfigs{k}(), gp.code, xst, ymask, xs)
    return invert ? asarray(post_invert_exponent.(ret), ret) : ret
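A hedged usage sketch of the renamed solvers (internal, unexported API; the calls below follow the signatures shown in this diff):

```julia
using GenericTensorNetworks, Graphs

gp = GenericTensorNetwork(IndependentSet(smallgraph(:petersen)))
GenericTensorNetworks.largest_solutions(gp; all=true)[]   # all maximum independent sets, with bounding
GenericTensorNetworks.largestk_solutions(gp, 3)[]         # configurations of the 3 largest sizes
```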
