% Main content for QKD blueprint

This blueprint formalizes \textbf{Lemma 1} from the paper
\href{https://arxiv.org/abs/quant-ph/9802025}{\emph{Unconditional Security Of Quantum Key Distribution Over Arbitrarily Long Distances}}
by Lo and Chau (1998).

Lemma 1 establishes a quantitative relation between \emph{fidelity} and \emph{von Neumann entropy}:
high fidelity with a reference pure state implies an upper bound on the entropy of the underlying density operator.
The purpose of this blueprint is to give a precise, machine-checked version of this argument.

\bigskip

\noindent For reference, we reproduce below the statement and proof of Lemma~1 as given in Lo--Chau (1998).

\begin{center}
\fbox{\begin{minipage}{0.95\textwidth}
\vspace{0.5em}
% ...

\bigskip

\subsection*{Preliminaries}

\noindent We begin with several basic definitions used to verify the proof of Lemma~1.

\begin{definition}[von Neumann Entropy]
\label{def:vonNeumannEntropy}
\lean{vonNeumannEntropy}
\leanok
For a Hermitian matrix $A$ with eigenvalues $\lambda_i$ (for instance, a density matrix with $\lambda_i \geq 0$ and $\sum_i \lambda_i = 1$),
the \emph{von Neumann entropy} is defined as:
$$S(A) = -\sum_i \lambda_i \log(\lambda_i).$$
\end{definition}
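
\noindent\textbf{Example.} With the convention $0 \log 0 = 0$ (as in Proposition~\ref{lem:slope_comparison} below), a pure state has eigenvalues $(1, 0, \ldots, 0)$ and hence $S = 0$, while the maximally mixed state $I/R$ on an $R$-dimensional space has all eigenvalues equal to $1/R$, so
$$S(I/R) = -\sum_{i=1}^{R} \frac{1}{R} \log\left(\frac{1}{R}\right) = \log R.$$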

\begin{definition}[Decreasing Vector]
\label{def:decreasing_vector}
A vector $x \in \mathbb{R}^n$ is called \emph{decreasing} if its coordinates are ordered
$$x_1 \geq x_2 \geq \cdots \geq x_n.$$
Given any $x \in \mathbb{R}^n$, we write $x^{\downarrow}$ for the vector obtained by rearranging the coordinates of $x$ in decreasing order.
\end{definition}

\begin{definition}[Majorization Ordering]
% ...

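\noindent\textbf{Example.} Using the partial-sum characterization employed in the proofs below, majorization on probability vectors (written in decreasing order) expresses that one distribution is more concentrated than another: for instance,
$$\left(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{4}\right) \preceq \left(\tfrac{3}{4}, \tfrac{1}{4}, 0\right),$$
since $\tfrac{1}{2} \leq \tfrac{3}{4}$, $\tfrac{1}{2} + \tfrac{1}{4} \leq 1$, and both vectors sum to $1$.
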
\bigskip

\subsection*{Auxiliary Results}

\noindent In this subsection we record the auxiliary results needed for the proof of Lemma~1.
We first establish Schur-convexity of the map $x \mapsto \sum_i x_i \log(x_i)$, and then prove a spectral estimate showing that high fidelity forces the existence of a large eigenvalue.

\begin{proposition}
\label{lem:slope_comparison}
\lean{slope_comparison_tlogt}
\leanok
Let $f(t) = t \log(t)$ for $t \geq 0$, where we set $f(0) = 0$. For any $a, b, c, d \in [0,\infty)$ satisfying $a \geq c$, $b \geq d$, $a \neq b$, and $c \neq d$, we have
$$\frac{f(c) - f(d)}{c - d} \leq \frac{f(a) - f(b)}{a - b}.$$
\end{proposition}

\begin{proof}
This is a standard consequence of the convexity of $f(t) = t \log(t)$: for a convex function, the secant slope $\frac{f(s) - f(t)}{s - t}$ is non-decreasing in each of its arguments, and applying this monotonicity in each argument in turn gives the claim.
\end{proof}

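\noindent\textbf{Example.} A numerical instance (with the natural logarithm): take $a = 2$, $b = 1$, $c = 1$, $d = \tfrac{1}{2}$. Then
$$\frac{f(1) - f(\tfrac{1}{2})}{1 - \tfrac{1}{2}} = \frac{0 + \tfrac{1}{2}\log 2}{\tfrac{1}{2}} = \log 2 \approx 0.693,
\qquad
\frac{f(2) - f(1)}{2 - 1} = 2\log 2 \approx 1.386,$$
and indeed $\log 2 \leq 2\log 2$.
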
\begin{proposition}
\label{lem:schur_reduction}
\lean{schur_convex_reduction_to_distinct}
\leanok
\uses{lem:slope_comparison}
Let $x, y \in \mathbb{R}^n$ be decreasing vectors with $x_i \neq y_i$ for all $i$.
If $x \preceq y$, then:
$$\sum_i x_i \log(x_i) \leq \sum_i y_i \log(y_i).$$
\end{proposition}

\begin{proof}
For $f(t) = t \log(t)$, define the secant slopes:
$$c_i := \frac{f(x_i) - f(y_i)}{x_i - y_i} = \frac{x_i \log(x_i) - y_i \log(y_i)}{x_i - y_i}$$

By Proposition~\ref{lem:slope_comparison} (convexity of $f$), these slopes are monotonically non-increasing:
$$c_{i+1} \leq c_i \quad \text{for all } i \in \{1, \ldots, n-1\}$$

Using the telescoping identity $x_i = \left(\sum_{j \leq i} x_j\right) - \left(\sum_{j < i} x_j\right)$, we rewrite:
\begin{align*}
\sum_{i=1}^n c_i (x_i - y_i)
&= \sum_{i=1}^n c_i \left[\left(\sum_{j \leq i} x_j - \sum_{j \leq i} y_j\right) - \left(\sum_{j < i} x_j - \sum_{j < i} y_j\right)\right]
\end{align*}

By majorization, $\sum_{j \leq k} x_j \leq \sum_{j \leq k} y_j$ for all $k < n$, and equality holds at $k = n$ since majorization requires the total sums to agree.

Expanding the telescoping sum and using the boundary condition (the term at $i = n$ vanishes):
\begin{align*}
\sum_{i=1}^n c_i (x_i - y_i)
&= \sum_{i=1}^{n-1} (c_i - c_{i+1}) \left(\sum_{j \leq i} x_j - \sum_{j \leq i} y_j\right) \\
&\leq 0
\end{align*}

The final inequality holds because $(c_i - c_{i+1}) \geq 0$ (slopes non-increasing) and
$\left(\sum_{j \leq i} x_j - \sum_{j \leq i} y_j\right) \leq 0$ (majorization), so their product is non-positive.
Since $c_i (x_i - y_i) = f(x_i) - f(y_i)$ by the definition of $c_i$, the left-hand side equals $\sum_i f(x_i) - \sum_i f(y_i)$, and the claim follows.
\end{proof}

\begin{proposition}
\label{thm:schur_convex_xlogx}
\lean{schur_convex_xlogx}
\leanok
\uses{lem:schur_reduction}
The function $f(x) = \sum_i x_i \log(x_i)$, defined on vectors with non-negative entries, is Schur-convex.
\end{proposition}

\begin{proof}
We first reduce to decreasing vectors by sorting, which preserves both the majorization relation and the function value.
% ...
directly.
\end{proof}

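\noindent Since the von Neumann entropy is obtained by negating this function on the spectrum, Proposition~\ref{thm:schur_convex_xlogx} equivalently says that the entropy is \emph{Schur-concave}: if one spectrum is majorized by another, its entropy is at least as large.
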
\noindent We now turn to the spectral estimate that links high fidelity with the existence of a large eigenvalue.
In particular, it formalizes the observation that a large expectation value $\langle v, \rho v \rangle$ forces the spectrum of $\rho$ to have a correspondingly large eigenvalue.

\begin{proposition}[Eigenvalue Bound from Inner Product]
\label{lem:eigenvalue_bound}
\lean{eigenvalue_bound_eigenbasis}
\leanok
Let $\rho$ be a Hermitian matrix and $v$ a unit vector ($\|v\| = 1$).
If $\langle v, \rho v \rangle > C$, then $\rho$ has an eigenvalue $\lambda_i > C$.
\end{proposition}
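
\begin{proof}
Expanding $v$ in an orthonormal eigenbasis $(u_i)$ of $\rho$ gives $\langle v, \rho v \rangle = \sum_i \lambda_i |\langle u_i, v \rangle|^2$ with $\sum_i |\langle u_i, v \rangle|^2 = \|v\|^2 = 1$. The expectation value is therefore a convex combination of the eigenvalues, so $\max_i \lambda_i \geq \langle v, \rho v \rangle > C$.
\end{proof}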

\bigskip
\bigskip

\subsection*{Proof of Lemma~1}

\noindent Combining these ingredients, we obtain the following entropy bound, which formalizes Lemma~1 of Lo--Chau (1998).

\begin{lemma}
\label{thm:main}
\lean{high_fidelity_implies_low_entropy_equivalent}
\leanok
\uses{def:vonNeumannEntropy, thm:schur_convex_xlogx, lem:eigenvalue_bound}
Let $\rho$ be an $R \times R$ positive semidefinite density matrix with trace $1$.
If there exists a unit vector $v$ such that $\langle v, \rho v \rangle > 1 - \delta$ where $0 \leq \delta < 1$ and $\delta \leq (R-1)/R$,
then the von Neumann entropy of $\rho$ satisfies
$$S(\rho) \leq -(1-\delta)\log(1-\delta) - \delta \log\left(\frac{\delta}{R-1}\right).$$
\end{lemma}

\begin{proof}
\uses{lem:eigenvalue_bound, thm:schur_convex_xlogx}
For clarity, the proof is organized into 8 short steps:

\textbf{Step 1: Eigenvalue Bound from High Fidelity.}
From the hypothesis $\langle v, \rho v \rangle > 1 - \delta$, we apply
Proposition~\ref{lem:eigenvalue_bound} to conclude that $\rho$ has an eigenvalue $\lambda_{\max} > 1 - \delta$.

\textbf{Step 2: Eigenvalue Non-negativity.}
Since $\rho$ is positive semidefinite, all eigenvalues are non-negative: $\lambda_i \geq 0$ for all $i$.

\textbf{Step 3: Construction of the Comparison Eigenvalue Distribution.}
Define a comparison eigenvalue distribution $\lambda_{\text{comp}}$:
$$\lambda_{\text{comp}}(i) = \begin{cases}
1 - \delta & \text{if } i = 0 \\
\frac{\delta}{R-1} & \text{otherwise}
\end{cases}$$

% ...
\end{enumerate}
The proof uses case analysis ($k=0$ vs $k>0$) with a proof by contradiction for the $k>0$ case.

\textbf{Step 6: Application of Schur-Convexity.}
Using Proposition~\ref{thm:schur_convex_xlogx} and the majorization from Step 5:
$$\sum_i \lambda_i \log(\lambda_i) \geq \sum_i \lambda_{\text{comp}}(i) \log(\lambda_{\text{comp}}(i))$$

\textbf{Step 7: Compute Comparison Entropy.}
Direct calculation yields:
$$\sum_i \lambda_{\text{comp}}(i) \log(\lambda_{\text{comp}}(i)) = (1-\delta)\log(1-\delta) + \delta \log\left(\frac{\delta}{R-1}\right)$$
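Indeed, the eigenvalue $1 - \delta$ contributes $(1-\delta)\log(1-\delta)$, and each of the $R-1$ eigenvalues $\frac{\delta}{R-1}$ contributes $\frac{\delta}{R-1}\log\left(\frac{\delta}{R-1}\right)$; summing these $R-1$ identical terms gives the coefficient $(R-1)\cdot\frac{\delta}{R-1} = \delta$.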

\textbf{Step 8: Final Calculation.}
Combining all steps:
\begin{align*}
S(\rho) &= -\sum_i \lambda_i \log(\lambda_i) \\
&\leq -\sum_i \lambda_{\text{comp}}(i) \log(\lambda_{\text{comp}}(i)) \\
&= -(1-\delta)\log(1-\delta) - \delta \log\left(\frac{\delta}{R-1}\right).
\end{align*}
\end{proof}

\bigskip

\noindent\textbf{Remarks:}

\noindent\textbf{(1) Notation.} In our formalization, $R$ denotes the dimension of the Hilbert space; it plays the role of $2^{2R}$ in Lo \& Chau's notation, where their $R$ counts the number of singlet pairs.

\medskip

\noindent\textbf{(2) Constraint on $\delta$.} The original paper requires $\delta \ll 1$. Our formalization makes this precise: we require $\delta \leq \frac{R-1}{R}$, which is equivalent to $\frac{\delta}{R-1} \leq 1 - \delta$ and thus ensures that the comparison distribution has decreasing eigenvalues. Since $\frac{R-1}{R} \to 1$ as $R \to \infty$, this constraint is satisfied for large systems with $\delta \ll 1$.
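
\medskip

\noindent\textbf{(3) Sanity check.} As $\delta \to 0$, the right-hand side of the bound tends to $0$ (using $t \log t \to 0$ as $t \to 0^{+}$), consistent with the fact that a state with perfect fidelity to a pure state has zero entropy. At the other extreme, for $R = 2$ and $\delta = \tfrac{1}{2}$ (the largest value the constraint allows), the bound reads $S(\rho) \leq -\tfrac{1}{2}\log\tfrac{1}{2} - \tfrac{1}{2}\log\tfrac{1}{2} = \log 2$, the maximal entropy of a qubit.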