
Commit 8fd6096

committed
Add exam with solutions
1 parent a5c97b1 commit 8fd6096

File tree

11 files changed: +475 -0 lines changed

Exams/2026_01/Project.toml

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
[deps]
Cairo = "159f3aea-2a34-519c-b102-8c37f9878175"
Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
Compose = "a81c6b42-2e10-5240-aca2-a61377ecd94b"
Fontconfig = "186bb1d3-e1f7-5a2c-a377-96d770f13627"
GraphPlot = "a2cc645c-3eea-5389-862e-a155d0052231"
Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
Polynomials = "f27b6e38-b328-58d1-80ce-0feddd5e7a45"
SparseConnectivityTracer = "9f842d2f-2579-4b1d-911e-f412cf18a3f5"
SparseMatrixColorings = "0a514795-09f3-496d-8182-132a7b665d35"

Exams/2026_01/acyclic_coloring.png

20.4 KB

Exams/2026_01/adjacency_graph.png

20.5 KB

Exams/2026_01/diffusion.tex

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
% https://arxiv.org/pdf/2502.03810
\titledquestion{Diffusion}

You would like to find a method for deblurring images.
You have a program $b(x, \sigma)$ that can be used to blur an image $x$
with blurring intensity $\sigma$.
For blurring intensity $\sigma = 0$, $b(x, \sigma) = x$, and the larger
$\sigma$ is, the more the image is blurred.

\begin{itemize}
\item
Given a dataset of clear images,
explain how you can use the Evidence Lower Bound to train
a model $f_\theta(y, \sigma)$, parameterized by $\theta$,
that attempts to predict the clear image
$x$ that produced the blurry image $y = b(x, \sigma)$.

\begin{solutionbox}{7.5cm}
We want to train a Denoising Auto-Encoder.
We define $Z_\sigma$ to be the latent variable in the space of images of blur intensity $\sigma$ that produces the clear image $X$, and
the variable $Y_\sigma = b(X, \sigma)$.
Sample clear images $x$ from the dataset and various blur intensities $\sigma$.
Create pairs $(y, x)$ with $y = b(x, \sigma)$.
The ELBO bounds the log-likelihood of the clear image:
\[
\log f_X(x) \ge -D_{\mathrm{KL}}((Y|X=x) \| Z)
+ \mathbb{E}[\log f_{X|Z}(x|Y)].
\]
The first term serves as a regularizer and the second term is the reconstruction error.
\end{solutionbox}

\item You now use a trained model $f_{\theta^*}$ to deblur an image
$y$ of supposed blur intensity $\sigma$ using $\hat{x} = f_{\theta^*}(y, \sigma)$.
You observe that, while this works well for small $\sigma$,
the estimate is less precise when deblurring images with high blur intensity $\sigma$.
How would you suggest improving this using an iterative approach?

\begin{solutionbox}{7.5cm}
$f_{\theta^*}(y, \sigma)$ is based on an estimate of
the gradient of the log of the PDF of the distribution of images blurred with intensity $\sigma$.
For higher $\sigma$, this distribution is more spread out, so the gradient is less precise.
For large $\sigma$, it is only good enough to get \emph{close} to the distribution of less blurry images.

Therefore, we use an \textbf{iterative refinement} with decreasing blur levels.
Define $\sigma_0 = \sigma > \sigma_1 > \cdots > \sigma_T \approx 0$.
Initialize $\hat{x}_0 = f_{\theta^*}(y, \sigma_0)$.
For $k = 1, \ldots, T$:
simulate a ``less blurry'' observation by $\tilde{y}_k = b(\hat{x}_{k-1}, \sigma_k)$,
then set $\hat{x}_k = f_{\theta^*}(\tilde{y}_k, \sigma_k)$.
Final deblurred image: $\hat{x}_T$.
\end{solutionbox}
\end{itemize}
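As an aside, the iterative refinement scheme in the solution can be sketched in Julia. The blur operator `b` and the idealized deblurring model `f` below are toy assumptions for illustration, not the exam's trained network $f_{\theta^*}$:

```julia
# Toy sketch of the solution's iterative refinement (hypothetical operators):
# b mixes an image with its mean (mean-preserving), f is an idealized inverse.
b(x, σ) = (1 - σ) .* x .+ σ .* (sum(x) / length(x))
f(y, σ) = (y .- σ .* (sum(y) / length(y))) ./ (1 - σ)  # exact inverse of b here

# σs: decreasing blur levels σ_0 > σ_1 > ... > σ_T ≈ 0
function iterative_deblur(y, σs)
    x̂ = f(y, σs[1])          # initial estimate at the largest blur level
    for σ in σs[2:end]
        ỹ = b(x̂, σ)          # simulate a "less blurry" observation
        x̂ = f(ỹ, σ)          # refine the estimate at this blur level
    end
    return x̂                 # final deblurred image x̂_T
end
```

With the exact inverse above the loop recovers $x$ immediately; with a learned, imperfect model, the decreasing schedule is what makes each refinement step easier than a single large-$\sigma$ inversion.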

Exams/2026_01/implicit.jl

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
# Inspired from example 11.5 of "The Elements of Differentiable Programming"
using Polynomials
λ = 1
poly(λ) = Polynomial([1, λ, 0, 1, 0, 1])  # p(x) = 1 + λx + x³ + x⁵
der(λ) = Polynomial([λ, 0, 3, 0, 5])      # ∂p/∂x = λ + 3x² + 5x⁴
root(p::Polynomials.Polynomial) = real(only(filter(isreal, roots(p))))
root(λ) = root(poly(λ))
function newton(λ, f_df, niter = 100)
    for _ in 1:niter
        f, df = f_df(λ)
        λ -= f / df
    end
    return λ
end
function f_df(λ, target)
    r = root(λ)
    d = der(λ)
    # Implicit function theorem: dr/dλ = -(∂p/∂λ)/(∂p/∂x) = -r / p'(r)
    return r - target, -r * inv(d(r))
end

λπ = newton(λ, Base.Fix2(f_df, π))
root(λπ)

root(π)

roots(Polynomial([-1, 1]) * Polynomial([1, 0, 1]))

Exams/2026_01/implicit.tex

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
\titledquestion{Implicit}

Consider the function
\[
f(x, \lambda) = x^3 - \lambda x^2 + x - 1.
\]
If $|\lambda| < \sqrt{3}$,
the derivative $\partial_1 f(x, \lambda) = 3x^2 - 2\lambda x + 1$
is positive for all $x$.
Therefore, for any $\lambda \in (-\sqrt{3}, \sqrt{3})$,
there exists a unique $x^*$ such that
$f(x^*, \lambda) = 0$.
Let $x^*(\lambda) : (-\sqrt{3}, \sqrt{3}) \to \mathbb{R}$ be this solution.

\begin{itemize}
\item We have $x^*(1) = 1$; find the value of the derivative $\partial x^*(1)$.

\begin{solutionbox}{7cm}
We have
\[
\partial x^*(\lambda)
= \frac{-\partial_2 f(x^*, \lambda)}{\partial_1 f(x^*, \lambda)}
= \frac{(x^*)^2}{3(x^*)^2 - 2\lambda x^* + 1},
\]
which evaluates to $\frac{1}{2}$ at $\lambda = 1$, $x^* = 1$.
\end{solutionbox}
\item What is the value of the second derivative $\partial^2 x^*(1)$?

\begin{solutionbox}{9.5cm}
Let $n, d$ be the numerator and denominator:
\[
n = (x^*)^2
\qquad
d = 3(x^*)^2 - 2\lambda x^* + 1.
\]
We have
\[
\partial^2 x^*(\lambda)
= \frac{n'd - nd'}{d^2}
\]
where $n' = 2x^*(x^*)' = 2 \cdot 1 \cdot (1/2) = 1$,
$d = 2$, $n = 1$ and
\[ d' = 6x^*(x^*)' - 2x^* - 2\lambda (x^*)'
= 3 - 2 - 2/2 = 0, \]
so
\[
\partial^2 x^*(\lambda)
= \frac{1 \cdot 2 - 1 \cdot 0}{2^2} = \frac{2}{4} = \frac{1}{2}.
\]
\end{solutionbox}
\end{itemize}
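As a sanity check (not part of the exam), both values can be verified numerically in Julia by solving $f(x, \lambda) = 0$ with Newton's method and finite-differencing the solution map; the names below are illustrative:

```julia
# Numeric check of ∂x*(1) = 1/2 and ∂²x*(1) = 1/2 for f(x, λ) = x³ - λx² + x - 1.
fxλ(x, λ) = x^3 - λ*x^2 + x - 1
∂₁f(x, λ) = 3x^2 - 2λ*x + 1        # positive for |λ| < √3, so the root is unique

function xstar(λ; x = 1.0, iters = 50)  # Newton's method on x ↦ f(x, λ)
    for _ in 1:iters
        x -= fxλ(x, λ) / ∂₁f(x, λ)
    end
    return x
end

h = 1e-4
d1 = (xstar(1 + h) - xstar(1 - h)) / (2h)               # central difference, ≈ 1/2
d2 = (xstar(1 + h) - 2xstar(1.0) + xstar(1 - h)) / h^2  # second difference, ≈ 1/2
```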

Exams/2026_01/kernel.tex

Lines changed: 126 additions & 0 deletions
@@ -0,0 +1,126 @@
\titledquestion{Kernels (applications)}

Suppose that you have $N$ datapoints $x_i \in \mathcal{X}$ with labels $y_i \in \{-1,1\}$ that you want to classify, where $1\leqslant i \leqslant N$. To do so, you will try two different classical methods, enhanced with the kernel trick. For the whole exercise, we introduce the following kernel-related definitions:
\begin{itemize}
\item $\mathcal{H}$ is an RKHS,
\item $\langle \cdot, \cdot \rangle_{\mathcal{H}} : \mathcal{H} \times \mathcal{H} \to \mathbb{R}$ is the inner product associated with $\mathcal{H}$,
\item $\| \cdot \|_{\mathcal{H}} : \mathcal{H} \to \mathbb{R}^+$ is the induced norm in $\mathcal{H}$ such that $ \| f \|_{\mathcal{H}} = \sqrt{\langle f, f \rangle_{\mathcal{H}}}$ for any $f\in \mathcal{H}$,
\item $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is the (symmetric positive-definite) kernel,
\item $\phi: \mathcal{X} \to \mathcal{H}$ is a feature map such that $k(x,y) = \langle \phi(x), \phi(y) \rangle_{\mathcal{H}}$ for any $x,y \in \mathcal{X}$,
\item $K \in \mathbb{R}^{N \times N}$ is the kernel matrix such that $K_{ij} = k(x_i,x_j) = \langle \phi(x_i), \phi(x_j) \rangle_{\mathcal{H}}$.
\end{itemize}

\textbf{A) Method I: hard-margin kernel support-vector machine ($k$-SVM).} $k$-SVM tries to maximize the \textit{margin} of the hyperplane which would separate the data embedded in the feature space. The decision function for any new input $x \in \mathcal{X}$ thus reads $d_I(x) = \langle v^*, \phi(x) \rangle_{\mathcal{H}} + b^*$, where the direction vector of the hyperplane $v^* \in \mathcal{H}$ and the bias $b^*\in\mathbb{R}$ are found through the $k$-SVM optimization problem. Let $\hat{y}_{I}(x)$ be the class predicted by the model for the input $x$. This decision rule satisfies $\hat{y}_{I}(x) = \sign(d_{I}(x))$, where the sign function is defined as $\sign(t) = -1$ if $t<0$, $\sign(0) = 0$ and $\sign(t) = 1$ if $t>0$.
\begin{enumerate}
\item A famous theorem allows us to decompose $v^* = \sum_{i=1}^N \phi(x_i) \gamma_i^*$, with $\gamma^* \in \mathbb{R}^N$. What is the name of this theorem?
\begin{solutionbox}{1cm}
The \textbf{Representer Theorem}.
\end{solutionbox}
\item Write down the (primal) $k$-SVM optimization problem in terms of $\gamma \in \mathbb{R}^N$ and $b\in \mathbb{R}$, using only $K$, $y_i$ and $N$.
\begin{solutionbox}{7.5cm}
\begin{align*}
\min_{\gamma \in \mathbb{R}^N, b \in \mathbb{R}} \quad & \frac{1}{2} \gamma^\top K \gamma \\
\text{s.t.} \quad & y_i \left( \sum_{j=1}^N K_{ij} \gamma_j + b \right) \geq 1, \quad \forall i \in \{1,\ldots,N\}
\end{align*}
or equivalently, $y_i((K\gamma)_i + b) \geq 1$ for all $i$.
\end{solutionbox}
\item Let $\gamma^* \in \mathbb{R}^N$ and $b^* \in \mathbb{R}$ be the solutions of the $k$-SVM problem (we assume that they exist and that they are bounded). Give the expression for $d_I(x)$ in terms of $x$, $\gamma^*$, $b^*$, $k$ and $x_i$.

Moreover, give the expression for $\hat{y}_{I}(x)$.
\begin{solutionbox}{9cm}
\begin{align*}
d_I(x) &= \sum_{i=1}^N \gamma_i^* k(x_i, x) + b^*, \\
\hat{y}_I(x) &= \sign(d_I(x)) = \sign\left(\sum_{i=1}^N \gamma_i^* k(x_i, x) + b^*\right).
\end{align*}
\end{solutionbox}
\end{enumerate}
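As an illustration of the last expression, the decision function $d_I$ can be evaluated directly from the coefficients in Julia. The Gaussian kernel and all variable names below are assumptions for the sketch, not part of the exam:

```julia
# Hypothetical sketch: evaluating d_I(x) = Σᵢ γᵢ* k(xᵢ, x) + b* for given
# coefficients γ* and bias b*, with an assumed Gaussian kernel on 1-D inputs.
k(x, x′; γ = 1.0) = exp(-γ * abs2(x - x′))

d_I(x, xs, γstar, bstar) = sum(γstar[i] * k(xs[i], x) for i in eachindex(xs)) + bstar
ŷ_I(x, xs, γstar, bstar) = sign(d_I(x, xs, γstar, bstar))
```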
\newpage
\textbf{B) Method II: (squared) distance to the (lifted) mean.}
We define the centers of each class as
\[
\mu_+ = \frac{1}{n_+} \sum_{i = 1 \atop y_i = 1}^N \phi(x_i),
\qquad
\mu_- = \frac{1}{n_-} \sum_{i = 1 \atop y_i = -1}^N \phi(x_i),
\]
where $n_+$ (resp. $n_-$) is the number of points labeled $+1$ (resp. $-1$). We call $\hat{y}_{II}(x)$ the class predicted for a new point $x$ by the method of closest mean. The decision rule reads:
\begin{align*}
\hat{y}_{II}(x) &= \begin{cases} 1 & \textnormal{ if } \hspace{0.5cm} \|\phi(x) - \mu_+\|_{\mathcal{H}}^2 < \|\phi(x) - \mu_-\|_{\mathcal{H}}^2, \\
0 & \textnormal{ if } \hspace{0.5cm} \|\phi(x) - \mu_+\|_{\mathcal{H}}^2 = \|\phi(x) - \mu_-\|_{\mathcal{H}}^2, \\
-1 & \textnormal{ if } \hspace{0.5cm} \|\phi(x) - \mu_+\|_{\mathcal{H}}^2 > \|\phi(x) - \mu_-\|_{\mathcal{H}}^2. \end{cases}
\end{align*}
\begin{enumerate}
\item Propose and motivate an expression for the decision function $d_{II}(x)$ which would satisfy the equation $\hat{y}_{II}(x) = \sign(d_{II}(x))$, using only $\phi(x)$, $\mu_-$, $\mu_+$, $\langle \cdot , \cdot \rangle_{\mathcal{H}}$ and $\| \cdot \|_{\mathcal{H}}^2$.

Simplify the expression as much as possible.
\begin{solutionbox}{5.5cm}
We need to satisfy the condition on $\sign(d_{II}(x))$. A solution is
\begin{align*}
d_{II}(x) &= \|\phi(x) - \mu_-\|_{\mathcal{H}}^2 - \|\phi(x) - \mu_+\|_{\mathcal{H}}^2 \\
&= \|\phi(x)\|_{\mathcal{H}}^2 - 2\langle \phi(x), \mu_- \rangle_{\mathcal{H}} + \|\mu_-\|_{\mathcal{H}}^2 - \|\phi(x)\|_{\mathcal{H}}^2 + 2\langle \phi(x), \mu_+ \rangle_{\mathcal{H}} - \|\mu_+\|_{\mathcal{H}}^2 \\
&= 2\langle \phi(x), \mu_+ - \mu_- \rangle_{\mathcal{H}} + \|\mu_-\|_{\mathcal{H}}^2 - \|\mu_+\|_{\mathcal{H}}^2.
\end{align*}
This choice of $d_{II}(x)$ is a good one for several reasons: it simplifies to an affine function, it is easy to differentiate (e.g.\ for optimization purposes) and to compute, it follows naturally from the conditions, and it is natural to compare squared norms (e.g.\ as in least-squares methods).
\end{solutionbox}

\item Express the squared distances $\|\phi(x)-\mu_+\|_{\mathcal{H}}^2$ and $\|\phi(x)-\mu_-\|_{\mathcal{H}}^2$ using only $x$, $k$, $x_i$, $y_i$, $N$, $n_+$ and $n_-$.
\begin{solutionbox}{5.5cm}
\begin{align*}
\|\phi(x)-\mu_+\|_{\mathcal{H}}^2 &= \|\phi(x)\|_{\mathcal{H}}^2 - 2\langle \phi(x), \mu_+ \rangle_{\mathcal{H}} + \|\mu_+\|_{\mathcal{H}}^2 \\
&= k(x,x) - \frac{2}{n_+} \sum_{i: y_i = 1} k(x, x_i) + \frac{1}{n_+^2} \sum_{i: y_i = 1} \sum_{j: y_j = 1} k(x_i, x_j), \\
\|\phi(x)-\mu_-\|_{\mathcal{H}}^2 &= k(x,x) - \frac{2}{n_-} \sum_{i: y_i = -1} k(x, x_i) + \frac{1}{n_-^2} \sum_{i: y_i = -1} \sum_{j: y_j = -1} k(x_i, x_j).
\end{align*}
\end{solutionbox}

\item Just for this subquestion, assume that $n_+ = n_-$ (classes are balanced) and that $\|\mu_+\|_{\mathcal{H}} = \|\mu_-\|_{\mathcal{H}}$. Simplify the rule $\hat{y}_{II}(x)$ to express it only in terms of $x$, $N$, $k$, $x_i$ and $y_i$.

\textit{Hint:} What is the relation between $N$, $n_+$ and $n_-$?
\begin{solutionbox}{8cm}
Since $n_+ = n_- = N/2$ and $\|\mu_+\|_{\mathcal{H}} = \|\mu_-\|_{\mathcal{H}}$, we have:
\begin{align*}
d_{II}(x) &= 2\langle \phi(x), \mu_+ - \mu_- \rangle_{\mathcal{H}} \\
&= \frac{2}{n_+} \sum_{i: y_i = 1} k(x, x_i) - \frac{2}{n_-} \sum_{i: y_i = -1} k(x, x_i) \\
&= \frac{4}{N} \sum_{i=1}^N y_i k(x, x_i).
\end{align*}
Therefore, $\hat{y}_{II}(x) = \sign\left(\frac{4}{N}\sum_{i=1}^N y_i k(x, x_i)\right)=\sign\left(\sum_{i=1}^N y_i k(x, x_i)\right)$.
\end{solutionbox}
\end{enumerate}
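The simplified balanced-class rule from the last subquestion fits in one line of Julia; the toy 1-D data and Gaussian kernel below are illustrative assumptions, not exam material:

```julia
# Method II on toy data: with n₊ = n₋, the rule is ŷ_II(x) = sign(Σᵢ yᵢ k(x, xᵢ)).
k(x, x′; γ = 1.0) = exp(-γ * abs2(x - x′))   # assumed Gaussian kernel

predict(x, xs, ys) = sign(sum(ys[i] * k(x, xs[i]) for i in eachindex(xs)))

xs = [-2.0, -1.5, 1.5, 2.0]   # toy 1-D datapoints
ys = [-1, -1, 1, 1]           # balanced labels
```

A test point near the positive cluster gets the label $+1$ because its kernel similarities to the positive points dominate the sum, and symmetrically for the negative cluster.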
\newpage
\textbf{C) Methods comparison.}
\begin{enumerate}
\item Are the decision functions $d_{I}(x)$ and $d_{II}(x)$ affine functions of $\phi(x)$ in $\mathcal{H}$? Justify briefly.
\begin{solutionbox}{8.5cm}
Yes, both are affine functions of $\phi(x)$:
\begin{itemize}
\item $d_I(x) = \langle v^*, \phi(x) \rangle_{\mathcal{H}} + b^*$ is affine (linear plus constant).
\item $d_{II}(x) = \langle 2( \mu_+ - \mu_- ),\phi(x)\rangle_{\mathcal{H}} + \|\mu_-\|_{\mathcal{H}}^2 - \|\mu_+\|_{\mathcal{H}}^2$ is also affine (linear plus constant).
\end{itemize}
Both have the form $\langle w, \phi(x) \rangle_{\mathcal{H}} + c$ for some $w \in \mathcal{H}$ and $c \in \mathbb{R}$.
\end{solutionbox}
\item If possible, give simple conditions under which both methods would be equivalent, meaning that for any new datapoint $x\in \mathcal{X}$, they would always predict the same class.
\begin{solutionbox}{9cm}
The methods would be equivalent iff $d_I(x)$ and $d_{II}(x)$ have the same sign for all $x$, i.e.\ $d_I(x) = c\, d_{II}(x)$ for some $c>0$. A simple choice is $c=1$, so that $d_{I}(x)=d_{II}(x)$.

This gives the conditions $v^* = 2(\mu_+-\mu_-)$ and $b^* = \|\mu_-\|_{\mathcal{H}}^2 - \|\mu_+\|_{\mathcal{H}}^2$.
\end{solutionbox}
\item What method would you prefer to use for your classification task? Choose a method and give at least two arguments in favor of it.
\begin{solutionbox}{10cm}
I would prefer \textbf{Method I ($k$-SVM)} for the following reasons:
\begin{enumerate}
\item \textbf{Maximum margin:} Intelligent choice of separating hyperplane.
\item \textbf{Sparse and scalable:} The solution typically uses only a subset of training points (support vectors), making predictions (inference) more efficient. This makes the method scalable to larger or more complex datasets at inference.
\item \textbf{Theoretical guarantees:} SVMs have strong theoretical foundations with bounds on generalization error based on margin maximization.
\item \textbf{Improvable:} Extension to soft-margin $k$-SVM for more robustness to outliers and better generalization.
\end{enumerate}
I would prefer \textbf{Method II (nearest mean)} for the following reasons:
\begin{enumerate}
\item \textbf{Simple and interpretable:} Simple and intuitive motivation, easy to understand and to interpret.
\item \textbf{No pre-computation:} No need to solve an optimization problem to get the coefficients.
\item \textbf{Non-separable data:} Works even when the data are not linearly separable in $\mathcal{H}$.
\item \textbf{Adaptable:} Easily adaptable to new datapoints.
\end{enumerate}
\end{solutionbox}
\end{enumerate}
\clearpage

Exams/2026_01/main.tex

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
%\documentclass[addpoints,12pt,a4paper]{exam}
\documentclass[addpoints,answers,12pt,a4paper]{exam} % print solutions

\newcommand{\ExamDate}{January 2026}
\input{../utils/preamble}

\DeclareMathOperator{\sign}{sign}

\begin{document}
\input{../utils/rules}

\begin{questions}
\input{kernel}
\clearpage
\input{diffusion}
\clearpage
\input{implicit}
\clearpage
\input{sparse}
\end{questions}

\end{document}

Exams/2026_01/sparse.jl

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
using LinearAlgebra
using SparseConnectivityTracer, SparseMatrixColorings
using Graphs, GraphPlot, Colors
using Compose, Cairo, Fontconfig

detector = TracerSparsityDetector()
x = rand(3)
f(x, y, z, λ, γ) = sum(x.^2) - λ * sum(x) + λ^2 + γ^2 + sum(sin(z) for z in z) + dot(x - z, y)
v(x) = f(x[1:n], x[n+1:2n], x[2n+1:3n], x[3n+1], x[3n+2])
f(x, y, λ) = sum(x.^2) + sum(y.^2) + λ^2 + dot(y .- λ, x)
n = 3
v(x) = f(x[1:n], x[n+1:2n], x[2n+1])

# NB: v is redefined above; only this last definition is used for the sparsity pattern
v(x) = sum(x.^2) + sum(x[2i] * (x[2i-1] - x[2n+1]) for i in 1:n)
x = rand(2n+1)
S = hessian_sparsity(v, x, detector)
G = Graphs.SimpleDiGraph(S - Diagonal(diag(S)))
adjacency_graph = gplot(G, nodelabel = eachindex(x))
draw(PNG(joinpath(@__DIR__, "adjacency_graph.png"), 16cm, 16cm), adjacency_graph)

problem = ColoringProblem(; structure=:symmetric, partition=:column)

star_algo = GreedyColoringAlgorithm(; decompression=:direct)
star_result = coloring(S, problem, star_algo)
adj = SparseMatrixColorings.AdjacencyGraph(S)
background_color = RGBA(0, 0, 0, 0)
border_color = RGB(0, 0, 0)
colorscheme = distinguishable_colors(
    ncolors(star_result),
    [convert(RGB, background_color), convert(RGB, border_color)];
    dropseed=true,
)
star_coloring = gplot(G; nodelabel = eachindex(x), nodefillc = colorscheme[star_result.color])
draw(PNG(joinpath(@__DIR__, "star_coloring.png"), 16cm, 16cm), star_coloring)

acyclic_algo = GreedyColoringAlgorithm(; decompression=:substitution)
acyclic_result = coloring(S, problem, acyclic_algo)
acyclic_coloring = gplot(G; nodelabel = eachindex(x), nodefillc = colorscheme[acyclic_result.color])
draw(PNG(joinpath(@__DIR__, "acyclic_coloring.png"), 16cm, 16cm), acyclic_coloring)
