
Commit d6343fa

Fix docs and update cache action (#49)
* Fix for Documenter pr2388
* Update action cache v4
* Specify using
1 parent 4e7558c commit d6343fa

7 files changed: +42 −29 lines changed

.github/workflows/Documentation.yml

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ jobs:
         with:
           version: '1'
       - name: CacheArtifacts
-        uses: actions/cache@v3
+        uses: actions/cache@v4
         env:
           cache-name: cache-artifacts
         with:

docs/lit/examples/1-overview.jl

Lines changed: 5 additions & 4 deletions
@@ -11,7 +11,8 @@ This page gives an overview of the Julia package
 
 # Packages needed here.
 
-using SPECTrecon
+using SPECTrecon: plan_psf, psf_gauss, SPECTplan
+using SPECTrecon: project, project!, backproject, backproject!
 using MIRTjim: jim, prompt
 using LinearAlgebra: mul!
 using LinearMapsAA: LinearMapAA

@@ -218,10 +219,10 @@ mul!(tmp, A', views)
 The pixel dimensions `deltas` can (and should!) be values with units.
 
 Here is an example ... (todo)
-=#
 
-#using UnitfulRecipes
-#using Unitful: mm
+using UnitfulRecipes
+using Unitful: mm
+=#
 
 
 # ## Projection view animation
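For context on this change, here is a rough sketch of how the names now imported explicitly in `1-overview.jl` fit together. The array sizes, keyword arguments, and pixel size below are illustrative assumptions, not values taken from this commit.

```
using SPECTrecon: psf_gauss, SPECTplan, project, backproject

nx, ny, nz = 64, 64, 50     # image dimensions (assumed for illustration)
nview = 60                  # number of projection views (assumed)
mumap = zeros(Float32, nx, ny, nz)         # attenuation map; zero = no attenuation
psf1 = psf_gauss( ; ny, px = 11, pz = 11)  # depth-dependent Gaussian PSF for one view
psfs = repeat(psf1, 1, 1, 1, nview)        # reuse the same PSF for every view
dy = 4.0f0                                 # pixel size (units omitted in this sketch)
plan = SPECTplan(mumap, psfs, dy)
x = rand(Float32, nx, ny, nz)              # test emission image
views = project(x, plan)                   # forward projection, nx × nz × nview
xback = backproject(views, plan)           # adjoint (back-projection)
```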

docs/lit/examples/2-rotate.jl

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ This page explains the image rotation portion of the Julia package
 
 # Packages needed here.
 
-using SPECTrecon
+using SPECTrecon: plan_rotate, imrotate!, imrotate_adj!
 using MIRTjim: jim, prompt
 using Plots: scatter, scatter!, plot!, default
 default(markerstrokecolor=:auto, markersize=3)

docs/lit/examples/3-psf.jl

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ This page explains the PSF portion of the Julia package
 
 # Packages needed here.
 
-using SPECTrecon
+using SPECTrecon: psf_gauss, plan_psf, fft_conv!, fft_conv_adj!
 using MIRTjim: jim, prompt
 using Plots: scatter, scatter!, plot!, default
 default(markerstrokecolor=:auto, markersize=3)

docs/lit/examples/4-mlem.jl

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ This page illustrates ML-EM reconstruction with the Julia package
 
 # Packages needed here.
 
-using SPECTrecon
+using SPECTrecon: SPECTplan, psf_gauss, project!, backproject!, mlem, mlem!
 using MIRTjim: jim, prompt
 using Plots: scatter, plot!, default; default(markerstrokecolor=:auto)
 
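The new import list for `4-mlem.jl` names the package's `mlem` and `mlem!` drivers alongside `project!` and `backproject!`. As a reminder of the update those drivers implement, here is a generic ML-EM step written with the projector pair; the buffer names and the `plan`, `ynoisy`, `background`, and `asum` variables are assumptions for illustration, not code from this commit.

```
# One multiplicative ML-EM update: x ← x .* A'(y ./ (A x + r)) ./ (A' 1)
# asum = A' 1 can be precomputed once, e.g. by back-projecting an all-ones array.
function em_update!(xnew, x, ynoisy, background, asum, plan, ybar, yratio, grad)
    project!(ybar, x, plan)                   # ȳ = A x
    @. yratio = ynoisy / (ybar + background)  # y ./ (ȳ + r)
    backproject!(grad, yratio, plan)          # A' (y ./ (ȳ + r))
    @. xnew = x * grad / asum                 # multiplicative update
    return xnew
end
```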

docs/lit/examples/5-2d.jl

Lines changed: 2 additions & 1 deletion
@@ -12,7 +12,8 @@ using the Julia package
 
 # Packages needed here.
 
-using SPECTrecon
+using SPECTrecon: SPECTplan, psf_gauss
+using SPECTrecon: project, project!, backproject, backproject!
 using MIRTjim: jim, prompt
 using ImagePhantoms: shepp_logan, SheppLoganEmis
 using LinearAlgebra: mul!

docs/lit/examples/6-dl.jl

Lines changed: 31 additions & 20 deletions
@@ -219,26 +219,37 @@ end
 # Initial loss
 @show loss(xhat1, xtrue)
 
-# ### Train the CNN
-# Uncomment the following code to train!
-## using Printf
-## nepoch = 200
-## for e = 1:nepoch
-##     @printf("epoch = %d, loss = %.2f\n", e, loss(xhat1, xtrue))
-##     ps = Flux.params(cnn)
-##     gs = gradient(ps) do
-##         loss(xhat1, xtrue) # we start with the 30 iteration EM reconstruction
-##     end
-##     opt = ADAMW(0.002)
-##     Flux.Optimise.update!(opt, ps, gs)
-## end
-
-# Uncomment to save your trained model.
-## file = "../data/trained-cnn-example-6-dl.bson" # adjust path/name as needed
-## @save file cnn
-
-# Load the pre-trained model (uncomment if you save your own model).
-## @load file cnn
+#=
+### Train the CNN
+Uncomment the following code to train:
+
+```
+using Printf
+nepoch = 200
+for e in 1:nepoch
+    @printf("epoch = %d, loss = %.2f\n", e, loss(xhat1, xtrue))
+    ps = Flux.params(cnn)
+    gs = gradient(ps) do
+        loss(xhat1, xtrue) # we start with the 30 iteration EM reconstruction
+    end
+    opt = ADAMW(0.002)
+    Flux.Optimise.update!(opt, ps, gs)
+end
+```
+=#
+
+#=
+Uncomment to save your trained model:
+```
+file = "../data/trained-cnn-example-6-dl.bson" # adjust path/name as needed
+@save file cnn
+```
+
+Load the pre-trained model (uncomment if you save your own model):
+```
+@load file cnn
+```
+=#
 
 #=
 The code below here works fine when run via `include` from the REPL,
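The training code moved into the `#=` block above depends on `cnn`, `loss`, `xhat1`, and `xtrue` defined earlier in that example. As a self-contained illustration of the same Flux calls (`Flux.params`, `gradient`, `ADAMW`, `Flux.Optimise.update!`), here is a miniature loop on placeholder data; the tiny network and random arrays are assumptions, not the example's actual CNN or SPECT images, and the implicit-parameters style matches the older Flux API used in the example.

```
using Flux
using Printf: @printf

cnn = Chain(Conv((3,3), 1 => 4, relu; pad = 1), Conv((3,3), 4 => 1; pad = 1))
x = rand(Float32, 16, 16, 1, 1)   # placeholder input (H, W, channels, batch)
y = rand(Float32, 16, 16, 1, 1)   # placeholder target
loss(a, b) = Flux.mse(cnn(a), b)

opt = ADAMW(0.002)
for e in 1:3
    @printf("epoch = %d, loss = %.4f\n", e, loss(x, y))
    ps = Flux.params(cnn)
    gs = gradient(() -> loss(x, y), ps)
    Flux.Optimise.update!(opt, ps, gs)
end
```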
