19 commits
b5fc7c4
Remove pvlib restriction and add python 3.13 support (#467)
martin-springer Feb 4, 2026
5c674a9
Allow clip_filter for energy-based trend analyses (#474)
martin-springer Feb 4, 2026
674e384
Fix numpy and pandas compatibility (#475)
martin-springer Feb 11, 2026
c26463c
Improved logo with transparent background (#481)
shirubana Feb 11, 2026
9d09f90
Add stacklevel to warnings.warn() calls (#476)
martin-springer Feb 11, 2026
5925406
Update temperature coefficient in PVDAQ notebooks to -0.0034 (#478)
Copilot Feb 11, 2026
4620944
Bump pillow from 10.4.0 to 12.1.1 (#483)
dependabot[bot] Feb 11, 2026
fb80b70
Merge remote-tracking branch 'origin/master' into development
martin-springer Feb 11, 2026
c27f2fb
create v3.1.0 changelog
martin-springer Feb 11, 2026
89f58a3
Bump nbconvert from 7.16.4 to 7.17.0 in /docs (#482)
dependabot[bot] Feb 11, 2026
8f2881d
Update docs/sphinx/source/changelog/v3.1.0.rst
martin-springer Feb 12, 2026
3a6c7d9
Update docs/sphinx/source/changelog/v3.1.0.rst
martin-springer Feb 12, 2026
dd92a15
Update docs/sphinx/source/changelog/v3.1.0.rst
martin-springer Feb 12, 2026
7fbf78f
Update setup.cfg
martin-springer Feb 12, 2026
da3265a
Update docs/nbval_sanitization_rules.cfg
martin-springer Feb 12, 2026
939281e
add python version requirement
martin-springer Feb 12, 2026
280f764
Merge branch 'development' of https://github.com/NREL/rdtools into de…
martin-springer Feb 12, 2026
aac4765
allow new nbval version
martin-springer Feb 12, 2026
acc357c
fix semicolon in availability notebook
martin-springer Feb 12, 2026
6 changes: 3 additions & 3 deletions .github/workflows/pytest.yaml
@@ -14,21 +14,21 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12"]
python-version: ["3.10", "3.11", "3.12", "3.13"]
env: [
'-r requirements.txt .[test]',
'-r requirements-min.txt .[test]',
'--upgrade --upgrade-strategy=eager .[test]'
]
exclude:
- python-version: "3.9"
env: "-r requirements.txt .[test]"
- python-version: "3.10"
env: '-r requirements-min.txt .[test]'
- python-version: "3.11"
env: '-r requirements-min.txt .[test]'
- python-version: "3.12"
env: '-r requirements-min.txt .[test]'
- python-version: "3.13"
env: '-r requirements-min.txt .[test]'
fail-fast: false

steps:
671 changes: 486 additions & 185 deletions docs/TrendAnalysis_example.ipynb

Large diffs are not rendered by default.

125 changes: 108 additions & 17 deletions docs/TrendAnalysis_example_NSRDB.ipynb

Large diffs are not rendered by default.

6,585 changes: 3,452 additions & 3,133 deletions docs/degradation_and_soiling_example.ipynb

Large diffs are not rendered by default.

8 changes: 7 additions & 1 deletion docs/nbval_sanitization_rules.cfg
@@ -12,10 +12,16 @@
regex: .*: UserWarning:
replace: NBVAL-FILEPATH: UserWarning:

# sanitize the specific traceback file/line entry shown in warning tracebacks
# since stacklevel changes will alter which line is reported
[regex2]
regex: ^\s*File ".*", line \d+.*$
replace: CODE-LINE

[regex3]
regex: \d{1,2}/\d{1,2}/\d{2,4}
replace: DATE-STAMP

[regex3]
[regex4]
regex: \d{2}:\d{2}:\d{2}
replace: TIME-STAMP
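
As a rough illustration of the new [regex2] rule (a sketch assuming nbval applies these patterns with Python's re semantics; the sample path and line number below are made up):

import re

# The new rule collapses traceback 'File ..., line N' entries, which would
# otherwise change whenever a warnings.warn() call gains a stacklevel argument.
pattern = re.compile(r'^\s*File ".*", line \d+.*$', re.MULTILINE)
sample = '  File "/tmp/rdtools/analysis_chains.py", line 478, in _pvwatts_norm'
print(pattern.sub("CODE-LINE", sample))  # -> CODE-LINE
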
4 changes: 2 additions & 2 deletions docs/notebook_requirements.txt
@@ -26,7 +26,7 @@ lxml==5.3.0
MarkupSafe==2.1.5
mistune==3.0.2
nbclient==0.10.0
nbconvert==7.16.4
nbconvert==7.17.0
nbformat==5.10.4
nest-asyncio==1.6.0
notebook==7.2.2
@@ -47,7 +47,7 @@ simplegeneric==0.8.1
soupsieve==2.6
terminado==0.18.1
testpath==0.6.0
tinycss2==1.3.0
tinycss2==1.2.1
tornado==6.5.1
traitlets==5.14.3
wcwidth==0.2.13
2 changes: 0 additions & 2 deletions docs/sphinx/source/api.rst
@@ -133,8 +133,6 @@ Normalization
normalize_with_expected_power
normalize_with_pvwatts
pvwatts_dc_power
delta_index
check_series_frequency


Aggregation
1 change: 1 addition & 0 deletions docs/sphinx/source/changelog.rst
@@ -1,5 +1,6 @@
RdTools Change Log
==================
.. include:: changelog/v3.1.0.rst
.. include:: changelog/v3.0.1.rst
.. include:: changelog/v3.0.0.rst
.. include:: changelog/v2.1.8.rst
91 changes: 91 additions & 0 deletions docs/sphinx/source/changelog/v3.1.0.rst
@@ -0,0 +1,91 @@
****************************
v3.1.0 (February XX, 2026)
****************************

Enhancements
------------
* Modified ``TrendAnalysis._filter()`` to allow ``clip_filter`` to use ``pv_energy``
when ``pv_power`` is not available. This enables clipping detection for energy-based
analyses with sub-hourly data.
* Added frequency validation for ``clip_filter`` in ``TrendAnalysis._filter()`` that
raises a ``ValueError`` if the time series has a median time step greater than 60
minutes, as clipping detection requires higher resolution data.


Documentation
-------------
* Updated temperature coefficient (``gamma_pdc``) in PVDAQ example notebooks from -0.005 to
-0.0034 1/K to reflect modern silicon PV module specifications. Updated notebooks include
``degradation_and_soiling_example.ipynb``, ``TrendAnalysis_example.ipynb``, and
``TrendAnalysis_example_NSRDB.ipynb``.
* Added ``stacklevel`` parameter to all ``warnings.warn()`` calls so that warning
messages point to user code rather than rdtools internals. Affected modules:
``analysis_chains``, ``filtering``, ``soiling``, ``plotting``, ``normalization``,
``availability``, and ``clearsky_temperature``.


Requirements
------------
* Updated pvlib requirement in setup.py from "pvlib >= 0.11.0, <0.12.0" to "pvlib >= 0.12.0" (removed upper version restriction).
* Updated pvlib version in requirements.txt from 0.11.0 to 0.14.0.
* Removed pandas upper version restriction in setup.py. Now "pandas >= 1.4.4" to support pandas 3.0.
* Removed numpy upper version restriction in setup.py. Now "numpy >= 1.22.4" to support numpy 2.x.
* Updated pandas version in requirements.txt from 2.2.2 to 2.2.3 for Python 3.13 compatibility.
* Updated scipy version in requirements.txt from 1.13.1 to 1.14.1 for Python 3.13 compatibility.
* Updated h5py version in requirements.txt from 3.11.0 to 3.12.0 for Python 3.13 compatibility.
* Updated scikit-learn version in requirements.txt from 1.5.1 to 1.7.2 for Python 3.13 and xgboost compatibility.
* Updated plotly version in requirements.txt from 5.23.0 to 6.1.1 for Python 3.13 compatibility.
* Updated setuptools-scm version in requirements.txt from 8.1.0 to 9.2.2 for Python 3.13 compatibility.
* Updated six version in requirements.txt from 1.16.0 to 1.17.0 for Python 3.13 compatibility.
* Updated statsmodels version in requirements.txt from 0.14.2 to 0.14.6 for Python 3.13 compatibility.
* Updated threadpoolctl version in requirements.txt from 3.5.0 to 3.6.0 for Python 3.13 compatibility.
* Updated tomli version in requirements.txt from 2.0.1 to 2.0.2 for Python 3.13 compatibility.
* Updated typing_extensions version in requirements.txt from 4.12.2 to 4.15.0 for Python 3.13 compatibility.
* Updated urllib3 version in requirements.txt from 2.5.0 to 2.6.3 for Python 3.13 compatibility and to fix security issues.
* Updated xgboost version in requirements.txt from 2.1.1 to 3.1.3 for Python 3.13 compatibility.
* Updated fonttools version in requirements.txt from 4.53.1 to 4.58.4 for Python 3.13 compatibility.
* Updated idna version in requirements.txt from 3.7 to 3.8 for Python 3.13 compatibility.
* Updated joblib version in requirements.txt from 1.4.2 to 1.5.2 for Python 3.13 compatibility.
* Updated kiwisolver version in requirements.txt from 1.4.5 to 1.4.6 for Python 3.13 compatibility.
* Updated matplotlib version in requirements.txt from 3.9.2 to 3.9.4 for Python 3.13 compatibility.
* Updated packaging version in requirements.txt from 24.1 to 26.0 for Python 3.13 compatibility.
* Updated patsy version in requirements.txt from 0.5.6 to 1.0.0 for Python 3.13 compatibility.
* Updated Pillow version in requirements.txt from 10.4.0 to 12.1.1 for Python 3.13 compatibility.
* Updated pyparsing version in requirements.txt from 3.1.2 to 3.2.0 for Python 3.13 compatibility.
* Updated pytz version in requirements.txt from 2024.1 to 2025.2 for Python 3.13 compatibility.
* Added ``python_requires='>=3.10'`` to ``setup.py`` to enforce minimum Python version
at install time, matching the supported Python versions in classifiers.
* Updated ``nbval`` test dependency from ``<=0.9.6`` to ``>=0.10.0`` to support pytest 7+,
which requires ``pathlib.Path`` instead of the deprecated ``py.path.local``.

Bug Fixes
---------
* Fixed pandas 3.0 compatibility in ``normalization.py`` by using ``.total_seconds()``
instead of ``.view('int64')`` with hardcoded nanosecond divisors. Pandas 3.0 changed
the default datetime resolution from nanoseconds (``datetime64[ns]``) to microseconds
(``datetime64[us]``). Affected functions: ``_delta_index``, ``_t_step_nanoseconds``,
``_aggregate``, ``_interpolate_series``. A minimal sketch of the resolution-independent
approach follows this list.
* Fixed datetime resolution preservation in ``normalization.interpolate()`` to ensure
output maintains the same resolution as input (e.g., ``datetime64[us]``).
* Fixed numpy 2.x compatibility in ``soiling.py`` by using ``.item()`` and explicit
indexing to extract scalar values from numpy arrays, as implicit array-to-scalar
conversion is deprecated.
* Fixed xgboost 3.x compatibility in ``filtering.xgboost_clip_filter()`` by using
``xgb.DMatrix`` with explicit feature names for model prediction.
* Fixed pandas 4.0 deprecation warnings by changing lowercase ``'d'`` to uppercase
``'D'`` in Timedelta strings and using ``axis=`` keyword argument for DataFrame
aggregation methods.
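
A minimal sketch of the resolution-independent approach described in the pandas 3.0 bullet above (illustrative index and values; assumes pandas 2.0+ for ``as_unit``):

import pandas as pd

# Index stored at microsecond resolution, as pandas 3.0 does by default.
times = pd.date_range("2024-01-01", periods=4, freq="h").as_unit("us")
deltas = pd.Series(times).diff()

# Old approach (breaks off nanosecond resolution): times.view('int64') / 1e9
# New approach used in the fix works for ns, us, ms, or s resolution alike.
step_seconds = deltas.dt.total_seconds()
print(step_seconds.tolist())  # [nan, 3600.0, 3600.0, 3600.0]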


Warnings
--------
* Removed obsolete ``fspath`` deprecation warning filter from ``setup.cfg`` as it is
no longer needed with ``nbval>=0.10.0``.


Deprecations
------------
* Removed deprecated ``normalization.delta_index`` function (deprecated in v2.0.0).
The private ``_delta_index`` helper remains available for internal use.
* Removed deprecated ``normalization.check_series_frequency`` function (deprecated in v2.0.0).
The private ``_check_series_frequency`` helper remains available for internal use.
101 changes: 85 additions & 16 deletions docs/system_availability_example.ipynb

Large diffs are not rendered by default.

38 changes: 23 additions & 15 deletions rdtools/analysis_chains.py
@@ -475,7 +475,8 @@ def _pvwatts_norm(self, poa_global, temperature_cell):
if self.gamma_pdc is None:
warnings.warn(
"Temperature coefficient not passed in to TrendAnalysis. "
"No temperature correction will be conducted."
"No temperature correction will be conducted.",
stacklevel=3,
)
pvwatts_kws = {
"poa_global": poa_global,
@@ -571,15 +572,17 @@ def _call_clearsky_filter(filter_string):
f = filtering.tcell_filter(cell_temp, **self.filter_params["tcell_filter"])
filter_components["tcell_filter"] = f
if "clip_filter" in self.filter_params:
if self.pv_power is None:
raise ValueError(
"PV power (not energy) is required for the clipping filter. "
"Either omit the clipping filter, provide PV power at "
"instantiation, or explicitly assign TrendAnalysis.pv_power."
)
f = filtering.clip_filter(
self.pv_power, **self.filter_params["clip_filter"]
)
# Check that the time series frequency is 60 minutes or less
clip_data = self.pv_power if self.pv_power is not None else self.pv_energy
if clip_data is not None and len(clip_data) > 1:
median_freq = pd.Series(clip_data.index).diff().median()
if median_freq > pd.Timedelta(minutes=60):
raise ValueError(
f"clip_filter requires time series frequency of 60 minutes or less. "
f"Median time step is {median_freq}."
)

f = filtering.clip_filter(clip_data, **self.filter_params["clip_filter"])
filter_components["clip_filter"] = f
if "hour_angle_filter" in self.filter_params:
if not hasattr(self, "pvlib_location"):
@@ -617,15 +620,17 @@ def _call_clearsky_filter(filter_string):

if ad_hoc_filter.isnull().any():
warnings.warn(
"ad_hoc_filter contains NaN values; setting to False (excluding)"
"ad_hoc_filter contains NaN values; setting to False (excluding)",
stacklevel=3,
)
ad_hoc_filter.loc[ad_hoc_filter.isnull()] = False

if not filter_components.index.equals(ad_hoc_filter.index):
warnings.warn(
"ad_hoc_filter index does not match index of other filters; missing "
"values will be set to True (kept). Align the index with the index "
"of the filter_components attribute to prevent this warning"
"of the filter_components attribute to prevent this warning",
stacklevel=3,
)
ad_hoc_filter = ad_hoc_filter.reindex(filter_components.index)
ad_hoc_filter.loc[ad_hoc_filter.isnull()] = True
@@ -707,7 +712,8 @@ def _aggregated_filter(self, aggregated, case):

if ad_hoc_filter_aggregated.isnull().any():
warnings.warn(
"aggregated ad_hoc_filter contains NaN values; setting to False (excluding)"
"aggregated ad_hoc_filter contains NaN values; setting to False (excluding)",
stacklevel=3,
)
ad_hoc_filter_aggregated.loc[ad_hoc_filter_aggregated.isnull()] = False

@@ -718,7 +724,8 @@ def _aggregated_filter(self, aggregated, case):
"Aggregated ad_hoc_filter index does not match index of other "
"filters; missing values will be set to True (kept). "
"Align the index with the index of the "
"filter_components_aggregated attribute to prevent this warning"
"filter_components_aggregated attribute to prevent this warning",
stacklevel=3,
)
ad_hoc_filter_aggregated = ad_hoc_filter_aggregated.reindex(
filter_components_aggregated.index
@@ -961,7 +968,8 @@ def _clearsky_preprocess(self):
"""Clear-sky analysis is performed but `power_expected` was passed in by user.
In this case, the power normalization is not tied to the modeled clear-sky
irradiance and the clear-sky workflow may provide similar results to
the sensor workflow."""
the sensor workflow.""",
stacklevel=2,
)
self._filter(cs_normalized, "clearsky")
cs_aggregated, cs_aggregated_insolation = self._aggregate(
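
A rough usage sketch of the clip_filter fall-back path above, assuming a hypothetical 15-minute energy series and the ``pv_input='energy'`` constructor option; names and values are illustrative and the analysis is not run here:

import pandas as pd
import rdtools

# Hypothetical sub-hourly energy data (kWh per 15-minute interval).
times = pd.date_range("2024-01-01", periods=96, freq="15min", tz="Etc/GMT+7")
pv_energy = pd.Series(1.0, index=times)

# With pv_input='energy' no pv_power is stored, so clip_filter now falls back
# to pv_energy instead of raising. A median time step above 60 minutes would
# trigger the new ValueError instead.
ta = rdtools.TrendAnalysis(pv_energy, pv_input='energy')
ta.filter_params['clip_filter'] = {}
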
2 changes: 1 addition & 1 deletion rdtools/availability.py
@@ -543,7 +543,7 @@ def _combine_losses(self, rollup_period="ME"):
'levels. This is unexpected and could indicate a problem with '
'the input time series data.'
)
warnings.warn(msg, UserWarning)
warnings.warn(msg, UserWarning, stacklevel=3)

self.loss_total = self.loss_system + self.loss_subsystem

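
For context on the stacklevel arguments added in this and the surrounding modules, a small standalone sketch (not rdtools code) of how stacklevel shifts which line a warning is attributed to:

import warnings

def library_helper():
    # stacklevel=2 attributes the warning to the caller of library_helper(),
    # not to this line inside the library.
    warnings.warn("something looks off", UserWarning, stacklevel=2)

def user_code():
    library_helper()  # the warning message points here

user_code()
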
6 changes: 4 additions & 2 deletions rdtools/clearsky_temperature.py
@@ -56,7 +56,8 @@ def get_clearsky_tamb(times, latitude, longitude, window_size=40,
# workaround from https://github.com/pandas-dev/pandas/issues/55794
freq_actual = pd.infer_freq(times[:10])
warnings.warn("Input 'times' has no frequency attribute. "
"Inferring frequency from first 10 timestamps.")
"Inferring frequency from first 10 timestamps.",
stacklevel=2)
else:
freq_actual = times.freq

@@ -121,7 +122,8 @@ def solar_noon_offset(utc_offset):
df['solar_noon_offset'].values)
if df['Clear Sky Temperature (C)'].isna().any():
warnings.warn("Clear Sky Temperature includes NaNs, "
"possibly invalid Lat/Lon coordinates.", UserWarning)
"possibly invalid Lat/Lon coordinates.", UserWarning,
stacklevel=2)
return df['Clear Sky Temperature (C)']


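
A small sketch of the infer-frequency workaround referenced in the hunk above (pandas issue #55794), using an artificial index that has lost its freq attribute:

import pandas as pd

# Rebuilding an index from a plain list drops its freq attribute.
times = pd.DatetimeIndex(list(pd.date_range("2024-06-01", periods=48, freq="h")))
print(times.freq)                 # None
print(pd.infer_freq(times[:10]))  # 'h' ('H' on older pandas)
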
4 changes: 2 additions & 2 deletions rdtools/degradation.py
@@ -261,7 +261,7 @@ def degradation_year_on_year(energy_normalized, recenter=True,
# Auto center
if recenter:
start = energy_normalized.index[0]
oneyear = start + pd.Timedelta('364d')
oneyear = start + pd.Timedelta('364D')
renorm = utilities.robust_median(energy_normalized[start:oneyear])
else:
renorm = 1.0
@@ -280,7 +280,7 @@ def degradation_year_on_year(energy_normalized, recenter=True,
tolerance=pd.Timedelta('8D')
)

df['time_diff_years'] = (df.dt - df.dt_right) / pd.Timedelta('365d')
df['time_diff_years'] = (df.dt - df.dt_right) / pd.Timedelta('365D')
df['yoy'] = 100.0 * (df.energy - df.energy_right) / (df.time_diff_years)
df.index = df.dt

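
A quick check of the Timedelta spelling change above; per the changelog entry, recent pandas deprecates lowercase 'd' ahead of pandas 4.0, while uppercase 'D' behaves identically:

import pandas as pd

# Uppercase 'D' avoids the lowercase-unit deprecation warning on recent pandas.
one_year_later = pd.Timestamp("2024-01-01") + pd.Timedelta("365D")
print(one_year_later)  # 2024-12-31 00:00:00
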
51 changes: 27 additions & 24 deletions rdtools/filtering.py
@@ -391,7 +391,8 @@ def _format_clipping_time_series(power_ac, mounting_type):
warnings.warn(
"Function expects timestamps in local time. "
"For best results pass a time-zone-localized "
"time series localized to the correct local time zone."
"time series localized to the correct local time zone.",
stacklevel=3,
)
# Check the other input variables to ensure that they are the
# correct format
@@ -448,7 +449,8 @@ def _check_data_sampling_frequency(power_ac):
"Variable sampling frequency across time series. "
"Less than 95% of the time series is sampled at the "
"same interval. This function was not tested "
"on variable frequency data--use at your own risk!"
"on variable frequency data--use at your own risk!",
stacklevel=3,
)
return

@@ -846,30 +848,31 @@ def xgboost_clip_filter(power_ac, mounting_type="fixed"):
power_ac_df["mounting_config"] == "fixed", "mounting_config_bool"
] = 0
# Subset the dataframe to only include model inputs
power_ac_df = power_ac_df[
[
"first_order_derivative_backward",
"first_order_derivative_forward",
"first_order_derivative_backward_rolling_avg",
"first_order_derivative_forward_rolling_avg",
"sampling_frequency",
"mounting_config_bool",
"scaled_value",
"rolling_average",
"daily_max",
"percent_daily_max",
"deriv_max",
"deriv_backward_rolling_stdev",
"deriv_backward_rolling_mean",
"deriv_backward_rolling_median",
"deriv_backward_rolling_min",
"deriv_backward_rolling_max",
]
].dropna()
feature_cols = [
"first_order_derivative_backward",
"first_order_derivative_forward",
"first_order_derivative_backward_rolling_avg",
"first_order_derivative_forward_rolling_avg",
"sampling_frequency",
"mounting_config_bool",
"scaled_value",
"rolling_average",
"daily_max",
"percent_daily_max",
"deriv_max",
"deriv_backward_rolling_stdev",
"deriv_backward_rolling_mean",
"deriv_backward_rolling_median",
"deriv_backward_rolling_min",
"deriv_backward_rolling_max",
]
power_ac_df = power_ac_df[feature_cols].dropna()
# Run the power_ac_df dataframe through the XGBoost ML model,
# and return boolean outputs
# and return boolean outputs. Use DMatrix with explicit feature names
# for xgboost 3.x compatibility.
dmatrix = xgb.DMatrix(power_ac_df, feature_names=feature_cols)
xgb_predictions = pd.Series(
xgboost_clipping_model.predict(power_ac_df).astype(bool)
(xgboost_clipping_model.get_booster().predict(dmatrix) > 0.5).astype(bool)
)
# Add datetime as an index
xgb_predictions.index = power_ac_df.index
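
A self-contained sketch of the xgboost 3.x prediction path used above, with a toy classifier standing in for rdtools' bundled clipping model (the feature names and data here are made up):

import numpy as np
import pandas as pd
import xgboost as xgb
from xgboost import XGBClassifier

# Toy stand-in for the trained clipping model.
feature_cols = ["scaled_value", "percent_daily_max"]
X = pd.DataFrame(np.random.rand(200, 2), columns=feature_cols)
y = (X["scaled_value"] > 0.5).astype(int)
model = XGBClassifier(n_estimators=5).fit(X, y)

# Predict through the Booster with an explicit DMatrix so the feature names
# carried by the model keep matching the input, then threshold to booleans.
dmatrix = xgb.DMatrix(X, feature_names=feature_cols)
clipped = pd.Series(model.get_booster().predict(dmatrix) > 0.5, index=X.index)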