Commit aa12423

Merge pull request #46 from JuliaAstro/fixes
Various small fixes
2 parents 12cbbad + 4c2cdca commit aa12423

File tree: 14 files changed (+259 -212 lines)
CITATION.cff

Lines changed: 15 additions & 2 deletions

@@ -1,9 +1,22 @@
-# Go to https://citation-file-format.github.io/cff-initializer-javascript/#/ to finish this
 cff-version: 1.2.0
 title: SolarPosition.jl
 message: >-
   If you use this software, please cite it using the
   metadata from this file.
 type: software
 authors:
-  - given-names: Stefan de Lange
+  - given-names: Stefan
+    family-names: de Lange
+repository-code: https://github.com/JuliaAstro/SolarPosition.jl
+url: https://juliaastro.github.io/SolarPosition.jl/
+license: MIT
+version: 0.3.2
+date-released: 2025-12-20
+abstract: >-
+  SolarPosition.jl provides a simple, unified interface to a collection
+  of validated solar position algorithms written in pure Julia.
+keywords:
+  - solar-position
+  - solar-astronomy
+  - julia
+  - solar-energy

Project.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 name = "SolarPosition"
 uuid = "5b9d1343-a731-5a90-8730-7bf8d89bf3eb"
-version = "0.3.1"
+version = "0.3.2"
 authors = ["Stefan de Lange"]
 
 [workspace]

README.md

Lines changed: 11 additions & 11 deletions

@@ -61,24 +61,24 @@ accuracy and implementation status.
 
 | Algorithm | Reference | Accuracy | Default Refraction | Status |
 | --------- | ----------------------------------------------------------------------------------------------- | -------- | ------------------ | ------ |
-| PSA | [Blanco-Muriel et al.](https://www.sciencedirect.com/science/article/abs/pii/S0038092X00001560) | ±0.0083° | None | |
-| NOAA | [Global Monitoring Laboratory](https://gml.noaa.gov/grad/solcalc/calcdetails.html) | ±0.0167° | HUGHES | |
-| Walraven | [Walraven, 1978](<https://doi.org/10.1016/0038-092X(78)90155-X>) | ±0.0100° | None | |
-| USNO | [U.S. Naval Observatory](https://aa.usno.navy.mil/faq/sun_approx) | ±0.0500° | None | |
-| SPA | [Reda & Andreas, 2004](https://doi.org/10.1016/j.solener.2003.12.003) | ±0.0003° | Built-in | |
+| PSA | [Blanco-Muriel et al.](https://www.sciencedirect.com/science/article/abs/pii/S0038092X00001560) | ±0.0083° | None ||
+| NOAA | [Global Monitoring Laboratory](https://gml.noaa.gov/grad/solcalc/calcdetails.html) | ±0.0167° | HUGHES ||
+| Walraven | [Walraven, 1978](<https://doi.org/10.1016/0038-092X(78)90155-X>) | ±0.0100° | None ||
+| USNO | [U.S. Naval Observatory](https://aa.usno.navy.mil/faq/sun_approx) | ±0.0500° | None ||
+| SPA | [Reda & Andreas, 2004](https://doi.org/10.1016/j.solener.2003.12.003) | ±0.0003° | Built-in ||
 
 ## Refraction correction algorithms
 
 Atmospheric refraction correction algorithms available in SolarPosition.jl.
 
 | Algorithm | Reference | Atmospheric Parameters | Status |
 | --------- | ------------------------------------------------------------------------------------------------ | ---------------------- | ------ |
-| HUGHES | [Hughes, 1985](https://pvpmc.sandia.gov/app/uploads/sites/243/2022/10/Engineering-Astronomy.pdf) | Pressure, Temperature | |
-| ARCHER | Archer et al., 1980 | None | |
-| BENNETT | [Bennett, 1982](https://doi.org/10.1017/S0373463300022037) | Pressure, Temperature | |
-| MICHALSKY | [Michalsky, 1988](<https://doi.org/10.1016/0038-092X(88)90045-X>) | None | |
-| SG2 | [Blanc & Wald, 2012](https://doi.org/10.1016/j.solener.2012.07.018) | Pressure, Temperature | |
-| SPA | [Reda & Andreas, 2004](https://doi.org/10.1016/j.solener.2003.12.003) | Pressure, Temperature | |
+| HUGHES | [Hughes, 1985](https://pvpmc.sandia.gov/app/uploads/sites/243/2022/10/Engineering-Astronomy.pdf) | Pressure, Temperature ||
+| ARCHER | Archer et al., 1980 | None ||
+| BENNETT | [Bennett, 1982](https://doi.org/10.1017/S0373463300022037) | Pressure, Temperature ||
+| MICHALSKY | [Michalsky, 1988](<https://doi.org/10.1016/0038-092X(88)90045-X>) | None ||
+| SG2 | [Blanc & Wald, 2012](https://doi.org/10.1016/j.solener.2012.07.018) | Pressure, Temperature ||
+| SPA | [Reda & Andreas, 2004](https://doi.org/10.1016/j.solener.2003.12.003) | Pressure, Temperature ||
 
 ## Extensions
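
The refraction table above notes which corrections take pressure and temperature as inputs. As a rough illustration of how such a correction behaves, here is a sketch of the commonly cited Bennett (1982) formula (refraction in arcminutes as a function of apparent elevation, scaled by pressure and temperature). This is an independent sketch, not SolarPosition.jl's implementation, and the default pressure/temperature values are assumptions.

```python
import math

# Bennett (1982), as commonly cited: R [arcmin] ~ cot(h + 7.31 / (h + 4.4))
# for apparent elevation h in degrees, scaled for pressure (hPa) and
# temperature (degrees C). Illustrative only, not the library's code.
def bennett_refraction(elevation_deg, pressure_hpa=1013.25, temp_c=10.0):
    arg = math.radians(elevation_deg + 7.31 / (elevation_deg + 4.4))
    r_arcmin = 1.0 / math.tan(arg)
    r_arcmin *= (pressure_hpa / 1010.0) * (283.0 / (273.0 + temp_c))
    return r_arcmin / 60.0  # degrees

# Refraction is largest near the horizon and shrinks rapidly with elevation.
print(round(bennett_refraction(0.0), 3))
print(round(bennett_refraction(45.0), 4))
```

As the table's "Atmospheric Parameters" column suggests, lowering the pressure argument shrinks the correction proportionally.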

docs/.CondaPkg/pixi.toml

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ libstdcxx-ng = ">=3.4,<=15.1"
 [dependencies.python]
 channel = "conda-forge"
 build = "*cp*"
-version = ">=3.10,<3.14"
+version = ">=3.10,!=3.14.0,!=3.14.1,<4"
 
 [project]
 name = ".CondaPkg"
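
The loosened constraint above admits future 3.x releases while pinning out two specific patch releases. A hand-rolled Python sketch of the same matching logic (pixi/conda implement their own version matcher; this is only illustrative):

```python
# Sketch of the matcher implied by ">=3.10,!=3.14.0,!=3.14.1,<4".
# Hypothetical helper, not pixi's or conda's actual implementation.
def allowed(version: str) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    if parts < (3, 10):                    # fails >=3.10
        return False
    if parts >= (4,):                      # fails <4
        return False
    if parts in ((3, 14, 0), (3, 14, 1)):  # explicit exclusions
        return False
    return True

for v in ("3.9.5", "3.10.0", "3.14.0", "3.14.1", "3.14.2"):
    print(v, allowed(v))
```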

docs/src/guides/benchmarking.md

Lines changed: 155 additions & 38 deletions

@@ -14,23 +14,12 @@ using Statistics
 using BenchmarkTools
 ```
 
-## Algorithm Overview
-
-SolarPosition.jl provides several solar positioning algorithms with different
-accuracy/performance trade-offs:
-
-| Algorithm | Claimed Accuracy | Complexity |
-| ----------------------------------------------------- | ---------------- | ---------- |
-| [`SPA`](@ref SolarPosition.Positioning.SPA) | ±0.0003° | High |
-| [`PSA`](@ref SolarPosition.Positioning.PSA) | ±0.0083° | Low |
-| [`NOAA`](@ref SolarPosition.Positioning.NOAA) | ±0.0167° | Low |
-| [`Walraven`](@ref SolarPosition.Positioning.Walraven) | ±0.0100° | Low |
-| [`USNO`](@ref SolarPosition.Positioning.USNO) | ±0.0500° | Low |
+For available solar position algorithms, see the [Positioning Guide](@ref SolarPosition.Positioning).
 
 ## Accuracy Analysis
 
-To evaluate accuracy, we compare each algorithm against SPA across a full year of
-hourly timestamps at various geographic locations.
+To evaluate accuracy, we compare each algorithm against the gold standard SPA across a
+full year of hourly timestamps at various geographic locations.
 
 ```@example benchmarks
 # Test locations representing different latitudes

@@ -43,7 +32,6 @@ locations = [
 
 # Generate hourly timestamps for a full year
 times = collect(DateTime(2024, 1, 1):Hour(1):DateTime(2024, 12, 31, 23))
-println("Testing with $(length(times)) timestamps per location")
 ```
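
The hunk above trims a stray `println` from the timestamp setup. For reference, the same year-of-hourly-timestamps bookkeeping, plus the mean-with-percentile-band summary the guide reports, can be sketched in stdlib Python (placeholder error values, not real algorithm output):

```python
from datetime import datetime, timedelta
from statistics import mean, quantiles

# The guide builds the same range in Julia with
# DateTime(2024, 1, 1):Hour(1):DateTime(2024, 12, 31, 23).
# 2024 is a leap year, so a full year of hourly stamps is 366 * 24 = 8784.
start, stop = datetime(2024, 1, 1), datetime(2024, 12, 31, 23)
times = []
t = start
while t <= stop:
    times.append(t)
    t += timedelta(hours=1)
print(len(times))  # 8784

# Error summary in the shape the guide reports: a mean with a
# 2.5th-97.5th percentile band. 'errors' is synthetic placeholder data.
errors = [((i * 37) % 100 - 50) / 1e4 for i in range(len(times))]
cuts = quantiles(errors, n=40)  # 39 cut points: 2.5%, 5%, ..., 97.5%
band_lo, band_hi = cuts[0], cuts[-1]
print(round(mean(errors), 6), band_lo <= mean(errors) <= band_hi)
```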
 
 !!! details "Accuracy comparison"

@@ -129,12 +117,10 @@ println("Testing with $(length(times)) timestamps per location")
 accuracy_results
 ```
 
-### Accuracy Visualization
-
 The following plots show the mean error with 95% confidence intervals (2.5th to 97.5th
 percentile) for each algorithm compared to SPA.
 
-!!! details "Accuracy visualization"
+!!! details "Visualization"
 ```@example benchmarks
 # Aggregate results by algorithm (mean across all locations)
 algo_stats = combine(

@@ -217,7 +203,7 @@ println(" Elevation: mean=$(round(mean(abs.(elev_errors)), digits=6))°, max=$(
 println(" Azimuth: mean=$(round(mean(abs.(azim_errors)), digits=6))°, max=$(round(maximum(abs.(azim_errors)), digits=4))°")
 ```
 
-!!! details "PSA error visualization"
+!!! details "Visualization"
 ```@example benchmarks
 fig_err = Figure(size = (900, 500), backgroundcolor = :transparent, fontsize = 12, textcolor = "#f5ab35")
 

@@ -251,7 +237,7 @@ fig_err # hide
 
 ### Error Distribution by Location
 
-!!! details "Error distribution visualization"
+!!! details "Visualization"
 ```@example benchmarks
 fig2 = Figure(size = (900, 500), backgroundcolor = :transparent, fontsize = 11, textcolor = "#f5ab35")
 
@@ -280,10 +266,13 @@ fig_err # hide
 fig2 # hide
 ```
 
+As we can see the error distribution is relatively consistent across different geographic
+locations, with PSA consistently providing the lowest mean error compared to SPA.
+
 ## Performance Benchmarks
 
 We benchmark the computational performance of each algorithm across different input
-sizes, from single timestamp calculations to bulk operations with 100,000 timestamps.
+sizes, from single timestamp calculations to bulk operations with 10,000 timestamps.
 
 !!! details "Single benchmark"
 ```@example benchmarks

@@ -318,7 +307,7 @@ sizes, from single timestamp calculations to bulk operations with 100,000 timest
 !!! details "Vector benchmark"
 ```@example benchmarks
 # Vector benchmarks for different sizes
-sizes = [100, 1_000, 10_000, 100_000]
+sizes = [100, 1_000, 10_000]
 
 vector_benchmarks = DataFrame(
 Algorithm = String[],

@@ -348,9 +337,7 @@ sizes, from single timestamp calculations to bulk operations with 100,000 timest
 vector_pivot
 ```
 
-### Performance Visualization
-
-!!! details "Performance visualization"
+!!! details "Visualization"
 ```@example benchmarks
 fig3 = Figure(size = (900, 400), backgroundcolor = :transparent, fontsize = 12, textcolor = "#f5ab35")
 
@@ -376,15 +363,15 @@ sizes, from single timestamp calculations to bulk operations with 100,000 timest
 
 # Throughput plot
 ax2 = Axis(fig3[1, 2],
-title = "Throughput at N=100,000",
+title = "Throughput at N=10,000",
 xlabel = "Algorithm",
 ylabel = "Positions per Second",
 xticks = (1:5, algo_names),
 backgroundcolor = :transparent,
 )
 
-throughput_100k = filter(r -> r.N == 100_000, vector_benchmarks)
-barplot!(ax2, 1:5, throughput_100k.Throughput ./ 1e6, color = colors)
+throughput_10k = filter(r -> r.N == 10_000, vector_benchmarks)
+barplot!(ax2, 1:5, throughput_10k.Throughput ./ 1e6, color = colors)
 ax2.ylabel = "Million Positions / Second"
 
 nothing # hide

@@ -423,7 +410,7 @@ sp = pyimport("solposx.solarposition")
 pd = pyimport("pandas")
 ```
 
-### Benchmark Configuration
+### Configuration
 
 For fair comparison, we use the same test conditions for both libraries:
 

@@ -432,12 +419,10 @@ For fair comparison, we use the same test conditions for both libraries:
 - **Algorithms**: PSA, NOAA, Walraven, USNO, SPA
 
 ```@example benchmarks
-# Helper function to create pandas DatetimeIndex
 function create_pandas_times(n::Int)
 pd.date_range(start="2024-01-01 00:00:00", periods=n, freq="h", tz="UTC")
 end
 
-# solposx algorithm mapping - use Symbol keys for kwargs
 solposx_algorithms = Dict(
 "PSA" => (sp.psa, (coefficients = 2020,)),
 "NOAA" => (sp.noaa, NamedTuple()),
@@ -504,7 +489,7 @@ We benchmark both libraries across different input sizes:
 comparison_results
 ```
 
-!!! details "Performance comparison visualization"
+!!! details "Visualization"
 ```@example benchmarks
 fig5 = Figure(size = (600, 750), backgroundcolor = :transparent, fontsize = 12, textcolor = "#f5ab35")
 

@@ -575,7 +560,144 @@ We benchmark both libraries across different input sizes:
 fig5 # hide
 ```
 
-### Summary
+## Accuracy vs Performance Trade-off
+
+When selecting a solar position algorithm, there is often a trade-off between accuracy
+and computational performance. The following Pareto plot visualizes this trade-off by
+plotting mean elevation error against computation time at N=1,000 timestamps, comparing
+both Julia and Python implementations.
+
+!!! details "Pareto Analysis"
+```@example benchmarks
+# Combine accuracy and performance data for Pareto analysis at N=1000
+# Get mean elevation error across all locations for each algorithm
+pareto_data = combine(
+groupby(accuracy_results, :Algorithm),
+:Elevation_Mean_Error => mean => :Mean_Error
+)
+
+# Add SPA as reference (error = 0)
+push!(pareto_data, (Algorithm = "SPA", Mean_Error = 0.0))
+
+# Get Julia timing at N=1000
+julia_times_1k = filter(r -> r.N == 1_000, comparison_results)
+pareto_julia = leftjoin(pareto_data,
+select(julia_times_1k, :Algorithm, :Julia_ms),
+on = :Algorithm)
+pareto_julia.Library = fill("Julia", nrow(pareto_julia))
+
+# Get Python timing at N=1000
+pareto_python = leftjoin(pareto_data,
+select(julia_times_1k, :Algorithm, :Python_ms),
+on = :Algorithm)
+rename!(pareto_python, :Python_ms => :Julia_ms) # Reuse column name for convenience
+pareto_python.Library = fill("Python", nrow(pareto_python))
+
+# Combine both
+pareto_combined = vcat(pareto_julia, pareto_python)
+rename!(pareto_combined, :Julia_ms => :Time_ms)
+
+# Sort by error then time
+sort!(pareto_combined, [:Mean_Error, :Time_ms])
+
+pareto_combined
+```
+
+!!! details "Visualization"
+```@example benchmarks
+fig_pareto = Figure(size = (700, 550), backgroundcolor = :transparent, fontsize = 12, textcolor = "#f5ab35")
+
+ax = Axis(fig_pareto[1, 1],
+title = "Algorithm Selection: Accuracy vs Performance (N=1,000)",
+xlabel = "Computation Time (ms)",
+ylabel = "Mean Elevation Error vs SPA (°)",
+backgroundcolor = :transparent,
+)
+
+# Color scheme for algorithms
+colors_pareto = Dict(
+"PSA" => :steelblue,
+"NOAA" => :orange,
+"Walraven" => :green,
+"USNO" => :purple,
+"SPA" => :red
+)
+
+# Marker scheme for library
+markers_lib = Dict(
+"Julia" => :circle,
+"Python" => :utriangle
+)
+
+# Plot points
+julia_points = []
+python_points = []
+
+for row in eachrow(pareto_combined)
+marker = markers_lib[row.Library]
+color = colors_pareto[row.Algorithm]
+
+s = scatter!(ax, [row.Time_ms], [row.Mean_Error],
+color = color,
+marker = marker,
+markersize = 15)
+
+if row.Library == "Julia"
+push!(julia_points, s)
+else
+push!(python_points, s)
+end
+end
+
+# Add algorithm labels for Julia implementations
+for row in eachrow(filter(r -> r.Library == "Julia", pareto_combined))
+text!(ax, row.Time_ms, row.Mean_Error,
+text = " " * row.Algorithm,
+align = (:left, :center),
+fontsize = 10)
+end
+
+# Add algorithm labels for Python implementations
+for row in eachrow(filter(r -> r.Library == "Python", pareto_combined))
+text!(ax, row.Time_ms, row.Mean_Error,
+text = " " * row.Algorithm,
+align = (:left, :center),
+fontsize = 9,
+color = (:gray40, 0.8))
+end
+
+# Connect Julia points to show the trade-off curve
+julia_data = sort(filter(r -> r.Library == "Julia", pareto_combined), :Time_ms)
+lines!(ax, julia_data.Time_ms, julia_data.Mean_Error,
+color = (:steelblue, 0.3), linestyle = :dash, linewidth = 1.5, label = "Julia Frontier")
+
+# Connect Python points
+python_data = sort(filter(r -> r.Library == "Python", pareto_combined), :Time_ms)
+lines!(ax, python_data.Time_ms, python_data.Mean_Error,
+color = (:coral, 0.3), linestyle = :dot, linewidth = 1.5, label = "Python Frontier")
+
+# Set y-axis limits for better discrimination
+ylims!(ax, nothing, 0.005)
+
+# Add guidelines for accuracy levels
+hlines!(ax, [0.001, 0.002], color = (:gray, 0.2), linestyle = :dot, linewidth = 1)
+
+# Create legend for library types
+library_elements = [
+MarkerElement(marker = :circle, color = :gray, markersize = 12),
+MarkerElement(marker = :utriangle, color = :gray, markersize = 12)
+]
+Legend(fig_pareto[1, 2], library_elements, ["Julia", "Python"], "Library",
+framevisible = true, backgroundcolor = :transparent, labelsize = 10)
+
+nothing # hide
+```
+
+```@example benchmarks
+fig_pareto # hide
+```
+
+## Summary
 
 The benchmarks demonstrate that SolarPosition.jl offers significant performance
 advantages over the solposx Python library across all tested algorithms and input sizes.
@@ -586,8 +708,3 @@ advantages over the solposx Python library across all tested algorithms and input sizes.
 on large datasets. The benchmarks above were conducted using a single thread for
 fair comparison with `solposx`, but enabling multi-threading can yield even greater
 speedups in practical applications.
-
-```@example benchmarks
-speedup_table = unstack(comparison_results[:, [:Algorithm, :N, :Speedup]], :Algorithm, :N, :Speedup)
-speedup_table
-```

docs/src/guides/getting-started.md

Lines changed: 4 additions & 0 deletions

@@ -54,6 +54,10 @@ Almería) algorithm, which has a decent tradeoff between complexity and accuracy
 can choose other algorithms as described in the [Solar Positioning Algorithms](../positioning.md)
 section.
 
+!!! info "Performance and Accuracy Benchmarks"
+For detailed performance comparisons and accuracy analysis of all available algorithms,
+see the [Benchmarking Guide](@ref benchmarking).
+
 First, we repeat the previous calculation using the default PSA algorithm:
 
 ```@example getting-started
