Commit bdcbbde

[docs] add quantum conditional entropy example (#671)
1 parent d19955e commit bdcbbde

File tree: 3 files changed, +112 −9 lines
Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@
# # Continuity of the quantum conditional entropy
#
# The quantum conditional entropy is given by
#
# ```math
# S(A|B)_\rho := S(\rho^{AB}) - S(\rho^{B})
# ```
#
# where $S$ is the von Neumann entropy,
#
# ```math
# S(\rho) := - \text{tr}(\rho \log \rho)
# ```
#
# and $\rho$ is a positive semidefinite trace-1 matrix (density matrix).
#
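As a concrete check of the definition, the entropy can be computed from the eigenvalues of $\rho$. This `von_neumann_entropy` helper is purely illustrative (it is not part of Convex.jl):

```julia
using LinearAlgebra

# Illustrative helper (not part of Convex.jl): the von Neumann entropy
# equals -Σᵢ λᵢ log λᵢ over the nonzero eigenvalues λᵢ of ρ.
function von_neumann_entropy(ρ)
    λ = eigvals(Hermitian(ρ))
    return -sum(x * log(x) for x in λ if x > 1e-12; init = 0.0)
end

von_neumann_entropy([0.5 0.0; 0.0 0.5])  # the maximally mixed qubit: log(2)
```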
# Here, $\rho^{AB}$ represents an operator on the tensor product of two finite-dimensional Hilbert spaces $A$ and $B$ (with dimensions $d_A$ and $d_B$ respectively), so we can regard $\rho^{AB}$ as a matrix on the vector space $\mathbb{C}^{d_A d_B}$. Moreover, $\rho^B$ denotes the partial trace of $\rho^{AB}$ over the system $A$, so $\rho^B$ is a matrix on $\mathbb{C}^{d_B}$.
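For intuition, here is a minimal sketch of the partial trace over $A$ in plain linear algebra (the `ptrace_A` helper is hypothetical; the example itself uses Convex.jl's `partialtrace` later), using the `kron(A, B)` index ordering:

```julia
using LinearAlgebra

# Hypothetical helper: partial trace over system A, summing the d_A
# diagonal blocks of size d_B × d_B.
function ptrace_A(ρ_AB, d_A, d_B)
    ρ_B = zeros(eltype(ρ_AB), d_B, d_B)
    for a in 1:d_A
        idx = (a - 1) * d_B .+ (1:d_B)
        ρ_B += ρ_AB[idx, idx]
    end
    return ρ_B
end

# For a product state kron(ρ_A, ρ_B), tracing out A recovers ρ_B:
ρ_A = [0.25 0.0; 0.0 0.75]
ρ_B = [0.6 0.0; 0.0 0.4]
ptrace_A(kron(ρ_A, ρ_B), 2, 2) ≈ ρ_B  # true
```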
#
# One question is how much $S(A|B)_\rho$ can vary between two density matrices $\rho$ and $\sigma$ as a function of the trace distance $\text{trdist}(\rho, \sigma) := \frac{1}{2}\|\rho-\sigma\|_1 = \frac{1}{2} \text{tr}\left(\sqrt{(\rho-\sigma)^\dagger (\rho-\sigma)}\right)$ (that is, half of the nuclear norm). The trace distance is meaningful here as the quantum analog of the total variation distance; it has an interpretation in terms of the maximal probability of distinguishing between $\rho$ and $\sigma$ by measurement.
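The trace distance is easy to compute from the singular values of $\rho - \sigma$; this `trdist` helper is an illustrative assumption, not a library function:

```julia
using LinearAlgebra

# Hypothetical helper: the trace distance as half the nuclear norm,
# i.e. half the sum of the singular values of ρ - σ.
trdist(ρ, σ) = 0.5 * sum(svdvals(ρ - σ))

ρ = [1.0 0.0; 0.0 0.0]  # the pure state |0⟩⟨0|
σ = [0.5 0.0; 0.0 0.5]  # the maximally mixed state
trdist(ρ, σ)  # 0.5
```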
#
# The Alicki-Fannes-Winter (AFW) bound ([*Winter 2015*](https://arxiv.org/abs/1507.07775v6), Lemma 2) states that if $\rho$ and $\sigma$ are density matrices with $\text{trdist}(\rho, \sigma) \leq \varepsilon \leq 1$, then
#
# ```math
# | S(A|B)_\rho - S(A|B)_\sigma| \leq 2 \varepsilon \log d_A + (1 + \varepsilon) h \left(\frac{\varepsilon}{1+\varepsilon}\right)
# ```
#
# where $h(x) = -x\log x - (1-x)\log(1-x)$ is the binary entropy.
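As a quick numeric illustration, the right-hand side of the AFW bound can be evaluated directly for $\varepsilon = 0.1$ and $d_A = 2$, the values used below (natural logarithm throughout; the names here are local to this illustration):

```julia
# Evaluating the right-hand side of the AFW bound for ε = 0.1 and d_A = 2.
binary_entropy(x) = -x * log(x) - (1 - x) * log(1 - x)
ε = 0.1
afw_rhs = 2ε * log(2) + (1 + ε) * binary_entropy(ε / (1 + ε))  # ≈ 0.4737
```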
#
# We can illustrate this bound by computing
#
# ```math
# \max_{\rho} S(A|B)_\rho - S(A|B)_\sigma
# ```
#
# for a fixed state $\sigma$, and comparing to the AFW bound.
#
# We will choose $d_A=d_B=2$, and $\sigma$ as the maximally entangled state:
#
# ```math
# \sigma = \frac{1}{2}\begin{pmatrix}1 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1\end{pmatrix}
# ```
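This matrix can also be built as an outer product, which makes the "maximally entangled" structure explicit:

```julia
# The same matrix as an outer product: σ = ψψ† with ψ = (|00⟩ + |11⟩)/√2.
ψ = [1, 0, 0, 1] / sqrt(2)
ψ * ψ'  # equals the 4×4 matrix above
```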
#
# First, we can formulate the conditional entropy in terms of the relative entropy using the relationship
#
# ```math
# S(A|B)_\rho = - D(\rho^{AB} \| I^A \otimes \rho^B)
# ```
#
# where $D$ is the quantum relative entropy and $I^A$ is the $d_A$-dimensional identity matrix. Thus:
using Convex
using LinearAlgebra: I, norm

function quantum_conditional_entropy(ρ_AB, d_A, d_B)
    ρ_B = partialtrace(ρ_AB, 1, [d_A, d_B])
    return -quantum_relative_entropy(ρ_AB, kron(I(d_A), ρ_B))
end
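Before optimizing, we can sanity-check the target value with plain linear algebra, independent of the Convex.jl formulation above (the `S` helper and the explicit block-sum partial trace are illustrative assumptions): the maximally entangled state is pure, so $S(\sigma^{AB}) = 0$, while $\sigma^B$ is maximally mixed, so $S(\sigma^B) = \log 2$ and hence $S(A|B)_\sigma = -\log 2$.

```julia
using LinearAlgebra

# Plain linear-algebra sanity check (no Convex.jl): S(A|B)_σ for the
# maximally entangled state should be -log(2).
S(ρ) = -sum(x * log(x) for x in eigvals(Hermitian(ρ)) if x > 1e-12; init = 0.0)
σ = 0.5 * [1 0 0 1; 0 0 0 0; 0 0 0 0; 1 0 0 1]
σ_B = σ[1:2, 1:2] + σ[3:4, 3:4]  # partial trace over A: the maximally mixed qubit
S(σ) - S(σ_B)  # ≈ -log(2)
```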
# Now we set up the problem data:

ϵ = 0.1
d_A = d_B = 2
σ_AB = 0.5 * [
    1 0 0 1
    0 0 0 0
    0 0 0 0
    1 0 0 1
]

# And we build and solve the problem itself:

using SCS

ρ_AB = HermitianSemidefinite(d_A * d_B)
add_constraint!(ρ_AB, tr(ρ_AB) == 1)

problem = maximize(
    quantum_conditional_entropy(ρ_AB, d_A, d_B),
    0.5 * nuclearnorm(ρ_AB - σ_AB) ≤ ϵ,
)

solve!(problem, SCS.Optimizer; silent_solver = false)

# We can then check the observed difference in conditional entropies:

difference = evaluate(
    quantum_conditional_entropy(ρ_AB, d_A, d_B) -
    quantum_conditional_entropy(σ_AB, d_A, d_B),
)

# We can compare to the bound:
h(x) = -x * log(x) - (1 - x) * log(1 - x)
bound = 2 * ϵ * log(d_A) + (1 + ϵ) * h(ϵ / (1 + ϵ))

# In fact, in this case we know the maximizer is given by

ρ_max = σ_AB * (1 - ϵ) + ϵ * (I(d_A * d_B) - σ_AB) / (d_A * d_B - 1)

# We can check that `ρ_AB` obtained the right value:

norm(evaluate(ρ_AB) - ρ_max)

# Here we see a result within the expected tolerances of SCS.

docs/styles/config/vocabularies/jump-dev/accept.txt

Lines changed: 3 additions & 0 deletions

@@ -5,6 +5,7 @@ docstring
 [Dd]ualize
 [Dd]ualization
 [Ee]lementwise
+entropies
 [Ee]num
 injective
 nonconvex
@@ -51,8 +52,10 @@ Markowitz
 Mohan
 Namkoong
 Nemirovski
+Neumann
 Skaf
 Udell
 Vandenberghe
+von
 Watrous
 Zeng

src/reformulations/partialtranspose.jl

Lines changed: 6 additions & 9 deletions

@@ -51,13 +51,10 @@ Returns a matrix `M` so that for any vector `v` of length `prod(dims)`,
 `M*v == vec(permutedims(reshape(v, dims), p))`.
 """
 function permutedims_matrix(dims, p)
-    d, n = prod(dims), length(dims)
-    dense = reshape(
-        PermutedDimsArray(
-            reshape(LinearAlgebra.I(d), (dims..., dims...)),
-            (p..., (n+1:2n)...),
-        ),
-        (d, d),
-    )
-    return SparseArrays.sparse(dense)
+    d = prod(dims)
+    # Generalization of https://stackoverflow.com/a/60680132
+    rows = 1:d
+    cols = vec(permutedims(reshape(rows, dims), p))
+    data = ones(Int, d)
+    return SparseArrays.sparse(rows, cols, data, d, d)
 end
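The new implementation builds the permutation matrix directly from an index permutation instead of materializing a dense identity reshape. A standalone sketch, reproducing the replacement body from the diff above, checks the documented identity `M*v == vec(permutedims(reshape(v, dims), p))`:

```julia
using SparseArrays

# Standalone copy of the new implementation from the diff above:
# M[i, cols[i]] = 1, so (M*v)[i] = v[cols[i]], which is exactly the
# permuted-and-vectorized v.
function permutedims_matrix(dims, p)
    d = prod(dims)
    rows = 1:d
    cols = vec(permutedims(reshape(rows, dims), p))
    data = ones(Int, d)
    return sparse(rows, cols, data, d, d)
end

dims, p = (2, 3), (2, 1)
M = permutedims_matrix(dims, p)
v = collect(1.0:6.0)
M * v == vec(permutedims(reshape(v, dims), p))  # true
```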
