diff --git a/context/README.md b/context/README.md index 3aa2a78d..3308a7e2 100644 --- a/context/README.md +++ b/context/README.md @@ -56,10 +56,11 @@ BAT framework, residual allocation, and how they connect to the literature. Cost-reflective TOU design: theory and window optimization. -| File | Purpose | -| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| cost_reflective_tou_rate_design.md | Theory and practice of cost-reflective TOU rate design: demand-weighted MC averages, cost-causation ratios, period selection, assumptions, demand flexibility implications, and partial vs. general equilibrium | -| tou_window_optimization.md | TOU window width ($N$) sweep: welfare-loss metric derivation, HP-demand weighting proof, sweep algorithm, CLI/Justfile, NY results | +| File | Purpose | +| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| cost_reflective_tou_rate_design.md | Theory and practice of cost-reflective TOU rate design: demand-weighted MC averages, cost-causation ratios, period selection, assumptions, demand flexibility implications, and partial vs. general equilibrium | +| tou_window_optimization.md | TOU window width ($N$) sweep: welfare-loss metric derivation, HP-demand weighting proof, sweep algorithm, CLI/Justfile, NY results | +| demand_flex_elasticity_calibration.md | Per-utility elasticity calibration: Arcturus 2.0 meta-analysis anchor, diagnostic methodology, results table (ε = -0.10 or -0.12 per utility), two savings mechanisms, known limitations | ### methods/marginal_costs/ diff --git a/context/docs/Faruqui et al. 
- 2017 - Arcturus 2.0 A meta-analysis of time-varying rates for electricity.pdf b/context/docs/Faruqui et al. - 2017 - Arcturus 2.0 A meta-analysis of time-varying rates for electricity.pdf new file mode 100644 index 00000000..07b11ae0 Binary files /dev/null and b/context/docs/Faruqui et al. - 2017 - Arcturus 2.0 A meta-analysis of time-varying rates for electricity.pdf differ diff --git a/context/methods/tou_and_rates/demand_flex_elasticity_calibration.md b/context/methods/tou_and_rates/demand_flex_elasticity_calibration.md new file mode 100644 index 00000000..b9083dca --- /dev/null +++ b/context/methods/tou_and_rates/demand_flex_elasticity_calibration.md @@ -0,0 +1,248 @@ +# Demand-Flex Elasticity Calibration for NY Utilities + +## Overview + +This document describes how we calibrate the demand-flexibility elasticity parameter (epsilon) for each NY utility's TOU rate design. The elasticity controls how much HP customers shift load from peak to off-peak hours in response to TOU price signals. We anchor to empirical evidence from the Arcturus 2.0 meta-analysis. + +## The Arcturus 2.0 empirical anchor + +### What it is + +Arcturus 2.0 (Faruqui et al., 2017) is a meta-analysis of 62 time-varying pricing pilots conducted across North America, Europe, and Australia. Each pilot enrolled residential customers on some form of time-of-use or dynamic pricing and measured how much they reduced peak demand compared to a control group. + +### What they found + +Peak demand reduction is strongly predicted by the peak-to-off-peak price ratio -- the higher the ratio, the more customers shift. They report separate fits for pilots with and without enabling technology (smart thermostats, in-home displays, etc.); see Fig. 12 in Arcturus 2.0: + +**No enabling technology** (conservative baseline): + +$$\text{peak\_reduction} = -0.011 + (-0.065) \times \ln(\text{price\_ratio})$$ + +**With enabling technology** (smart thermostats, in-home displays, etc.)
-- same intercept, slope augmented by the interaction term (-0.046): + +$$\text{peak\_reduction} = -0.011 + (-0.111) \times \ln(\text{price\_ratio})$$ + +At a 2:1 ratio, the no-tech model predicts ~5.6% peak reduction; the with-tech model predicts ~8.8%. At 4:1, ~10.1% vs ~16.5%. The relationship is log-linear -- doubling the ratio doesn't double the response; there are diminishing returns. + +### No-tech vs. with-tech + +Arcturus separates results into groups based on whether customers had enabling technology. The no-tech group represents customers responding to price signals alone with no automation. The with-tech group includes customers with smart thermostats, in-home displays, or other automation, producing roughly 2x the demand response at any given price ratio. + +Note: the paper estimates a single regression with an interaction term `ln(ratio) × tech`, where `tech` is a binary dummy. The with-tech slope is the sum of the base slope (-0.065) and the interaction coefficient (-0.046). The intercept is the same for both models. + +Our default calibration uses the **no-tech** model as a conservative baseline. The calibration script also reports **with-tech** recommendations for sensitivity analysis and scenarios that assume HP customers have smart controls. Both sets of seasonal elasticities are written to the periods YAML (`elasticity` and `elasticity_with_tech`); the scenario generation script selects between them based on the `enabling_tech` column in the Google Sheet (empty or TRUE = with-tech; FALSE/no/0 = no-tech). + +### How we use it + +Our model uses a different functional form -- constant elasticity ($Q_{\text{shifted}} = Q \times (P / P_{\text{flat}})^\varepsilon$) rather than Arcturus's log-linear regression. We can't directly equate the two. Instead, we: + +1. Take each utility's actual TOU price ratio +2. Ask Arcturus: "what peak reduction would real customers achieve at this ratio?" +3. 
Run our constant-elasticity model at a range of epsilon values +4. Pick the epsilon that produces the same peak reduction as Arcturus predicts + +This gives us an empirically grounded elasticity: not a theoretical value, but one that reproduces the aggregate demand response observed in actual pricing pilots at a comparable price ratio. + +### Seasonal elasticities + +Summer and winter have very different TOU price ratios (e.g. CenHud: 3.05 summer vs 1.57 winter), so Arcturus predicts different peak reductions for each season. A single annual elasticity is necessarily a compromise: matching summer undershoots winter, matching winter overshoots summer. Seasonal elasticities address this by calibrating each season independently against its own Arcturus target. + +The runtime implementation supports seasonal elasticities natively. In the scenario YAML, elasticity can be specified as a scalar (applied uniformly) or as a `{season: value}` dict: + +```yaml +# Scalar (backward compatible): +elasticity: -0.12 + +# Seasonal: +elasticity: + winter: -0.12 + summer: -0.14 +``` + +The shifting pipeline already iterates per season (`_shift_season` in `utils/cairo.py`), so per-season elasticity resolution adds no structural complexity -- it simply looks up the season-specific value from the dict instead of using the scalar. + +### What Arcturus doesn't tell us + +It measures aggregate outcomes across heterogeneous customers -- some shift a lot, some don't shift at all. Our model applies the same epsilon uniformly to every building. The recommended epsilon is therefore an approximation that matches the _average_ pilot outcome, not the distribution of individual responses. 
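As a concrete illustration, the four-step matching procedure above can be sketched in a few lines. This is a stylized sketch, not the production script: it collapses the hourly tariff to a single peak rate and flat-equivalent rate, and the function names are hypothetical (the real calibration weights by actual HP hourly loads).

```python
import math

def arcturus_peak_reduction(price_ratio: float, with_tech: bool = False) -> float:
    """Arcturus 2.0 predicted peak reduction (positive fraction) at a peak/off-peak ratio."""
    slope = -0.111 if with_tech else -0.065
    return -(-0.011 + slope * math.log(price_ratio))

def model_peak_reduction(peak_rate: float, flat_rate: float, eps: float) -> float:
    """Constant-elasticity peak reduction: 1 - (P_peak / P_flat)^eps."""
    return 1.0 - (peak_rate / flat_rate) ** eps

def calibrate_eps(peak_rate, flat_rate, price_ratio, eps_grid):
    """Pick the eps whose modeled peak reduction best matches the Arcturus target."""
    target = arcturus_peak_reduction(price_ratio)
    return min(eps_grid,
               key=lambda e: abs(model_peak_reduction(peak_rate, flat_rate, e) - target))
```

With a 2:1 peak/off-peak ratio, `arcturus_peak_reduction(2.0)` returns roughly 0.056, the ~5.6% figure quoted above; the sweep then selects whichever grid point reproduces it most closely.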
+ +## Diagnostic methodology + +### What varies across utilities + +Each NY utility has different TOU structures derived from their marginal cost profiles: + +- **Peak/off-peak price ratios** range from 1.57 (CenHud/OR winter) to 4.33 (ConEd summer) +- **Peak window widths** are either 3 hours (ConEd, NiMo, NYSEG, RGE) or 5 hours (CenHud, OR, PSEG-LI) +- **HP customer share** ranges from 1.4% (PSEG-LI) to 4.4% (RGE) of weighted customers + +Because HP customers heat with electricity, they consume disproportionately more in winter, when the price ratio is lower. We compute a demand-weighted annual price ratio that reflects this seasonal load distribution, giving us a single Arcturus target per utility. When using seasonal elasticities, we instead match each season's Arcturus target independently. + +### How the diagnostic works + +The calibration script (`utils/pre/calibrate_demand_flex_elasticity.py`) does the following for each utility: + +1. Loads the actual hourly electricity consumption for every HP building in the baseline stock (upgrade=00) +2. Loads the TOU derivation data (price ratios, peak hours, base rates per season) +3. At each candidate elasticity (default: -0.04 through -0.50 in 0.02 steps, configurable via `--epsilon-start`/`--epsilon-end`/`--epsilon-step`), applies the same constant-elasticity shifting formula CAIRO uses: $Q_{\text{shifted}} = Q_{\text{orig}} \times (P_{\text{period}} / P_{\text{flat}})^\varepsilon$ +4. Measures the resulting peak reduction percentage +5. Compares to the Arcturus prediction for this utility's price ratio +6. Selects the elasticity whose peak reduction most closely matches Arcturus + +We also compute rate arbitrage savings: the dollars an HP customer saves by consuming less at the peak rate and more at the off-peak rate. + +### Two bill savings mechanisms + +1. **Rate arbitrage** (primary): HP customers on a TOU tariff shift load from expensive peak hours to cheap off-peak hours. 
Savings = kWh shifted x (peak rate - off-peak rate). This is independent of the revenue requirement change. +2. **RR reduction** (secondary, negligible at delivery level): under frozen residual, shifting reduces total MC, lowering the RR. Delivery MC is nonzero in only 40-113 hours/year, so this effect is effectively zero. + +### Key data + +- **Building loads**: ResStock upgrade=00 HP buildings, local parquet at `/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/load_curve_hourly/state=NY/upgrade=00/` +- **TOU derivation**: `rate_design/hp_rates/ny/config/tou_derivation/{utility}_hp_seasonalTOU_derivation.json` +- **MC data**: `s3://data.sb/switchbox/marginal_costs/ny/{dist_and_sub_tx,bulk_tx}/utility={util}/year=2025/data.parquet` (NOT Cambium -- derived from NYISO LBMP and utility PUC filings) +- **CAIRO ground truth**: `s3://data.sb/switchbox/cairo/outputs/hp_rates/ny/all_utilities/ny_20260325b_r1-16/` + +## Results (March 2026) + +### Seasonal elasticities — no enabling technology + +Each season is matched independently against its own Arcturus no-tech target derived from that season's TOU ratio. These values are written to `config/periods/{utility}.yaml` under the `elasticity` key. 
| Utility | Summer ε | Summer ratio | Summer Arcturus | Winter ε | Winter ratio | Winter Arcturus | Savings/HP | +| ------- | :------: | :----------: | :-------------: | :------: | :----------: | :-------------: | :--------: | +| CenHud | -0.14 | 3.05 | 8.3% | -0.12 | 1.57 | 4.0% | $19.29 | +| ConEd | -0.12 | 4.33 | 10.6% | -0.10 | 1.98 | 5.5% | $21.05 | +| NiMo | -0.10 | 2.91 | 8.0% | -0.10 | 1.75 | 4.7% | $5.90 | +| NYSEG | -0.10 | 2.93 | 8.1% | -0.10 | 1.75 | 4.7% | $7.40 | +| OR | -0.14 | 3.13 | 8.5% | -0.12 | 1.57 | 4.0% | $14.03 | +| PSEG-LI | -0.14 | 3.80 | 9.8% | -0.12 | 1.67 | 4.4% | $22.73 | +| RGE | -0.10 | 2.87 | 7.9% | -0.10 | 1.75 | 4.7% | $5.65 | + +### Seasonal elasticities — with enabling technology + +Same methodology, but calibrated against the Arcturus with-tech target (slope = -0.111). Written to `config/periods/{utility}.yaml` under the `elasticity_with_tech` key. + +| Utility | Summer ε | Summer Arcturus | Winter ε | Winter Arcturus | Savings/HP | +| ------- | :------: | :-------------: | :------: | :-------------: | :--------: | +| CenHud | -0.22 | 13.5% | -0.18 | 6.1% | $29.38 | +| ConEd | -0.18 | 17.4% | -0.16 | 8.7% | $31.07 | +| NiMo | -0.18 | 13.0% | -0.16 | 7.3% | $9.95 | +| NYSEG | -0.18 | 13.0% | -0.16 | 7.3% | $12.49 | +| OR | -0.24 | 13.8% | -0.20 | 6.1% | $23.25 | +| PSEG-LI | -0.22 | 15.9% | -0.18 | 6.8% | $34.53 | +| RGE | -0.18 | 12.8% | -0.16 | 7.3% | $9.53 | + +### Key takeaways + +1. **Two natural groups**: Utilities with 3-hour peak windows land at ε = -0.10 (no-tech); utilities with 5-hour peak windows land at ε = -0.12. Wider peak windows pull the demand-weighted flat rate closer to the peak rate, shrinking the effective peak-to-flat price ratio, so a more negative ε is needed to match the same Arcturus target. + +2. **With-tech elasticities are ~1.5-1.7x larger**: Enabling technology pushes seasonal epsilons from -0.10/-0.14 (no-tech) to roughly -0.16/-0.22 (with-tech), increasing per-HP savings by 50-90%. + +3.
**Savings are modest but real**: No-tech: $6-$23 per HP building per year; with-tech: $10-$35. This is from rate arbitrage alone, on top of the much larger savings (~$590 for ConEd) from simply switching to the TOU rate structure. + +4. **Marginal cost savings are negligible**: Delivery MC is nonzero in only 40-113 hours per year. The frozen-residual RR reduction from load shifting is effectively zero at the delivery level. Bill savings come entirely from the rate structure spread. + +5. **Winter matters less than summer for shifting savings**: Winter price ratios (1.6-2.0) are much lower than summer (2.9-4.3), so the rate spread available for arbitrage is smaller. HP customers' heavy winter load contributes little to shifting savings despite being the season where they consume the most. + +### CAIRO ground truth comparison + +The `--compare-batch` flag compares analytical savings predictions against actual CAIRO bill differences. Use `--with-tech` when the CAIRO batch was configured with `elasticity_with_tech`: + +```bash +# Compare against no-tech batch: +just -f rate_design/hp_rates/ny/Justfile calibrate-demand-flex-elasticity \ + --compare-batch ny_20260325b_r1-16 + +# Compare against with-tech batch: +just -f rate_design/hp_rates/ny/Justfile calibrate-demand-flex-elasticity \ + --compare-batch ny_20260326_elast_seasonal_tech --with-tech +``` + +Note: CAIRO runs use upgrade=02 (counterfactual all-HP world), while the diagnostic computes savings per HP building in the upgrade=00 baseline stock. These are different scenarios; expect directional agreement rather than exact matches. + +## Validation (March 2026) + +The shift mechanics were validated against CAIRO's own outputs using `utils/post/validate_demand_flex_shift.py`. The script reproduces the constant-elasticity shift analytically using the same `utils/cairo.py` functions CAIRO uses, then checks three invariants. 
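The first two invariants can be expressed as a short check on one building-season's hourly kWh. This is a hedged sketch with hypothetical function and argument names; the real checks live in `utils/post/validate_demand_flex_shift.py`, which additionally cross-checks realized ε against the CAIRO tracker.

```python
def check_shift_invariants(orig, shifted, is_peak, tol=1e-6):
    """Check zero-sum energy conservation and shift direction on hourly kWh lists."""
    peak_delta = sum(s - o for o, s, p in zip(orig, shifted, is_peak) if p)
    offpeak_delta = sum(s - o for o, s, p in zip(orig, shifted, is_peak) if not p)
    return {
        # shifting moves kWh, it never creates or destroys them
        "energy_conserved": abs(sum(shifted) - sum(orig)) < tol,
        # peak kWh must go down, off-peak kWh must go up
        "direction_ok": peak_delta < 0 < offpeak_delta,
    }
```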
+ +### Validation results at ε = -0.10 + +All 7 NY utilities pass all checks: + +| Check | Result | +| ---------------------------------------------- | -------------------------------------------------------------- | +| Energy conservation (per building, per season) | PASS -- max \|orig − shifted\| = 0.000 kWh (machine precision) | +| Direction (peak kWh ↓, off-peak kWh ↑) | PASS -- all utilities, both seasons | +| CAIRO tracker match (per-building realized ε) | max \|Δε\| < 0.015, mean \|Δε\| < 0.003 across all utilities | + +The small tracker differences appear only on the **receiver (off-peak) period**, where CAIRO's zero-sum residual differs slightly from our analytical reproduction due to floating-point accumulation order. Donor (peak) period epsilons match CAIRO exactly to machine precision. + +### Peak reduction at ε = -0.10 across utilities + +| Utility | Summer ratio | Summer peak red | Winter ratio | Winter peak red | +| ------- | :----------: | :-------------: | :----------: | :-------------: | +| CenHud | 3.05 | 6.43% | 1.57 | 3.35% | +| ConEd | 4.33 | 9.83% | 1.98 | 5.44% | +| NiMo | 2.91 | 7.67% | 1.75 | 4.60% | +| NYSEG | 2.93 | 7.68% | 1.75 | 4.60% | +| OR | 3.13 | 6.56% | 1.57 | 3.29% | +| PSEG-LI | 3.80 | 7.40% | 1.67 | 3.69% | +| RGE | 2.87 | 7.54% | 1.75 | 4.58% | + +### What the diagnostic plots show + +Five plots per utility are written to `dev_plots/flex/{utility}/`: + +- **`aggregate_daily_profile.png`**: Mean kW per HP building by hour of day, pre/post shift, for summer and winter. The shaded fill between curves shows where load is removed (red, peak hours) and where it accumulates (blue, off-peak). The shift is visually clear in summer (larger ratio, wider spread); winter shifting is smaller. + +- **`net_shift_by_hour.png`**: Bar chart of mean kWh change per building per hour. Red bars during peak hours, blue bars during off-peak. Bars sum to zero within each season (energy conservation). 
The plot immediately confirms the model is moving load in the right direction. + +- **`shift_heatmap.png`**: Month × hour heatmap of mean kWh shift per building (red = removed, blue = added). Shows the full seasonal and diurnal pattern: summer months (Jun-Sep) have the darkest red in peak hours; winter months have a shallower but still visible shift. The transition months (Apr, Oct) are visually distinct from both seasons, reflecting which season's TOU schedule applies. + +- **`peak_reduction_distribution.png`**: Histogram of per-building peak reduction %. At ε = -0.10, the distribution collapses to a single value with near-zero variance -- because constant elasticity with a uniform rate ratio produces the same proportional reduction for every building regardless of their load level. This is a known limitation of the model. + +- **`building_daily_profile_{bldg_id}.png`**: Per-building view for 5 illustrative buildings at different consumption percentiles. Shows original and shifted load curves overlaid on a representative summer and winter weekday, with TOU rate structure as a step function on the right axis. Suitable as a report visual; per-building hourly data for 100 buildings is written to `{utility}_building_hourly_shifted.parquet` for further use. 
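The collapse of the peak-reduction distribution noted above follows directly from the functional form: the proportional reduction $1 - (P/P_{\text{flat}})^\varepsilon$ contains no building-level quantity, so every building lands on the same value. A toy demonstration with hypothetical numbers:

```python
# Constant elasticity: the proportional peak reduction is load-independent.
eps, rel_price = -0.10, 2.0  # stylized peak-rate / flat-rate ratio

reductions = []
for q_peak in (1.0, 5.0, 25.0):            # buildings of very different sizes
    q_shifted = q_peak * rel_price ** eps  # Q* = Q * (P_peak / P_flat)^eps
    reductions.append(1 - q_shifted / q_peak)
# every entry is identical: ~6.7% peak reduction regardless of load level
```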
+ +### Invoking the calibration and validation + +```bash +# Calibrate all utilities (default sweep -0.04 to -0.50, step -0.02): +just -f rate_design/hp_rates/ny/Justfile calibrate-demand-flex-elasticity + +# Calibrate specific utilities with custom range: +just -f rate_design/hp_rates/ny/Justfile calibrate-demand-flex-elasticity \ + --utilities cenhud,coned --epsilon-start -0.04 --epsilon-end -0.60 --epsilon-step -0.02 + +# Calibrate and write both no-tech/with-tech to periods YAMLs: +just -f rate_design/hp_rates/ny/Justfile calibrate-demand-flex-elasticity --write-periods + +# Compare against CAIRO batch (no-tech): +just -f rate_design/hp_rates/ny/Justfile calibrate-demand-flex-elasticity \ + --compare-batch ny_20260325b_r1-16 + +# Compare against CAIRO batch (with-tech): +just -f rate_design/hp_rates/ny/Justfile calibrate-demand-flex-elasticity \ + --compare-batch ny_20260326_elast_seasonal_tech --with-tech + +# Validate shift mechanics (scalar elasticity): +just -f rate_design/hp_rates/ny/Justfile validate-demand-flex coned -0.10 ny_20260325b_r1-16 + +# Validate with seasonal elasticity: +uv run python -m utils.post.validate_demand_flex_shift \ + --utility coned --elasticity winter=-0.10,summer=-0.08 \ + --output-dir dev_plots/flex/coned + +# Validate all utilities: +just -f rate_design/hp_rates/ny/Justfile validate-demand-flex-all -0.10 ny_20260325b_r1-16 +``` + +## Known limitations + +1. **Uniform vs heterogeneous response**: Arcturus measures aggregate outcomes from diverse customers; our model applies the same epsilon to every HP building. The peak reduction distribution confirms this -- all buildings get the same % reduction, which is unrealistic. +2. **Functional form mismatch**: Arcturus is log-linear; our model is power-law. The comparison is valid at specific price ratios, not across the full curve. +3. 
**Seasonal elasticities assume season-specific price response**: Arcturus pilots measured customers responding to a single TOU rate announced at enrollment. Our seasonal elasticities model customers who implicitly respond differently in summer vs winter. This is reasonable -- summer and winter have different peak/off-peak spreads, weather-driven loads, and behavioral incentives -- but it extends beyond what Arcturus directly measured. +4. **Delivery MC is too sparse for meaningful savings**: The frozen-residual channel produces negligible savings because delivery MC is concentrated in ~100 hours/year. Supply MC (NYISO LBMP) would add a broader spread, but supply MCs are zeroed in delivery-only runs. + +## References + +- Faruqui, A., Sergici, S., & Warner, C. (2017). Arcturus 2.0: A meta-analysis of time-varying rates for electricity. _The Electricity Journal_, 30(10), 64-72. +- Simenone, M. et al. (2023). Bill alignment test paper. _Utilities Policy_. diff --git a/context/plans/demand-flex_elasticity_calibration.md b/context/plans/demand-flex_elasticity_calibration.md new file mode 100644 index 00000000..2ad9c306 --- /dev/null +++ b/context/plans/demand-flex_elasticity_calibration.md @@ -0,0 +1,290 @@ +--- +name: Demand-flex elasticity calibration +overview: Find the correct elasticity parameter per NY utility by running a fast analytical diagnostic against real TOU structures and MC data, grounded in Arcturus empirical predictions, then validate with targeted CAIRO runs. Results stored at s3://data.sb/switchbox/demand_flex/ with utility partitioning. 
+todos: + - id: diagnostic-script + content: "Create `utils/post/calibrate_demand_flex_elasticity.py`: per-utility analytical calibration reading real TOU derivation JSONs and MC data, computing demand shift + MC savings + Arcturus comparison for elasticity sweep" + status: completed + - id: shift-validation-script + content: "Create `utils/post/validate_demand_flex_shift.py`: reproduce shift analytically, check energy conservation + direction, cross-check vs CAIRO tracker, produce diagnostic plots and data outputs" + status: completed + - id: justfile-recipes + content: "Add `validate-demand-flex`, `validate-demand-flex-all`, `calibrate-demand-flex-elasticity` recipes to ny/Justfile" + status: completed + - id: run-diagnostic + content: Run the diagnostic and validation for all 7 NY utilities, review results + status: completed + - id: update-ticket + content: Update RDP-175 / open PR #382 with full scope + status: completed + - id: update-scenario-yamls + content: "Update scenario YAMLs with per-utility recommended elasticities (ε=-0.10 for 3h peak window utilities, ε=-0.12 for 5h)" + status: pending + - id: plan-cairo-sweep + content: "Optional: targeted CAIRO sweep at recommended elasticities to produce final calibrated outputs; see Phase 2 section below" + status: pending +isProject: false +--- + +# Demand-Flex Elasticity Calibration: Per-Utility Diagnostic and Sweep + +## Status (March 2026) + +**Phase 1 complete.** All analytical work is done and validated. PR #382 closed the GitHub issue. 
+ +| Deliverable | Status | +| -------------------------------------------------------------------------------------------------------- | ------------------------- | +| `utils/post/calibrate_demand_flex_elasticity.py` -- elasticity calibration sweep | Done | +| `utils/post/validate_demand_flex_shift.py` -- shift validation + diagnostic plots | Done | +| `rate_design/hp_rates/ny/Justfile` -- `validate-demand-flex`, `calibrate-demand-flex-elasticity` recipes | Done | +| `context/methods/tou_and_rates/demand_flex_elasticity_calibration.md` -- methodology writeup | Done | +| Per-utility recommended elasticities (7 utilities) | Done -- see results table | +| Shift validation across all 7 utilities (energy conservation, direction, CAIRO tracker match) | Done -- all pass | +| Diagnostic plots in `dev_plots/flex/{utility}/` | Done | +| Update scenario YAMLs with per-utility elasticities | **Pending** | +| Phase 2 CAIRO sweep at recommended elasticities | Optional / deferred | + +**Recommended elasticities (pending YAML update):** + +- ε = -0.10: ConEd, NiMo, NYSEG, RGE (3-hour peak windows) +- ε = -0.12: CenHud, OR, PSEG-LI (5-hour peak windows) + +## Problem statement + +Bill savings under demand flex at `epsilon = -0.1` are very small. We need to: + +1. **Diagnose why** -- is it the elasticity, the MC spread, or both? +2. **Find the right elasticity per utility** -- each utility has different seasonal TOU price ratios, so the same epsilon produces different peak reductions +3. **Quantify the demand shift** -- kWh moved, peak reduction %, seasonal breakdown +4. **Maximize bill savings while remaining realistic** -- anchored to Arcturus "no enabling tech" empirical predictions +5. **Run targeted CAIRO sweeps** to validate and produce final outputs + +**The HP overpaying problem**: HP customers have high winter electricity demand (electric heating) while winter marginal costs are generally lower (capacity costs allocated mainly to summer). 
Under a flat or default rate structure, they overpay relative to their cost causation in winter. The TOU flex tariff gives them the opportunity to shift load away from peak hours, but: (a) the peak/offpeak spread they can arbitrage is smaller in winter than summer, and (b) we are constrained to a single annual elasticity value in the scenario YAMLs. + +## Key data: TOU price ratios vary widely across utilities + +From the TOU derivation JSONs ([`ny/config/tou_derivation/`](rate_design/hp_rates/ny/config/tou_derivation/)): + +| Utility | Winter Ratio | Winter Peak Hours | Summer Ratio | Summer Peak Hours | +| -------- | ------------ | ----------------- | ------------ | ----------------- | +| ConEd | 2.0 | 16-18 (3h) | 4.3 | 15-17 (3h) | +| CenHud | 1.6 | 16-20 (5h) | 3.0 | 15-19 (5h) | +| NiMo | 1.8 | 17-19 (3h) | 2.9 | 17-19 (3h) | +| (4 more) | ... | ... | ... | ... | + +A single `epsilon = -0.1` produces very different peak reductions depending on the ratio. Arcturus "no enabling tech" predictions at these ratios: + +- `peak_reduction = -0.011 + (-0.065) * ln(price_ratio)` +- 1.6:1 -> ~4.2% peak reduction +- 2.0:1 -> ~5.6% +- 3.0:1 -> ~8.2% +- 4.3:1 -> ~10.6% + +## CAIRO model constraints + +From [`context/code/cairo/cairo_demand_flexibility_workflow.md`](context/code/cairo/cairo_demand_flexibility_workflow.md) and [`context/code/cairo/demand_flex_residual_treatment.md`](context/code/cairo/demand_flex_residual_treatment.md): + +- **Constant elasticity**: `Q* = Q * (P_period / P_flat)^epsilon` -- same response for all buildings +- **Single receiver period** per season: all shifted load goes to one off-peak sink +- **Exogenous MC prices**: load shifting changes total MC dollars, not prices +- **Frozen residual**: only the MC component of the RR adjusts; non-MC infrastructure costs held constant +- **Zero-sum**: kWh shifted, not reduced + +There are two distinct bill savings mechanisms: + +1.
**RR reduction** (system-level, small): under frozen residual, shifting reduces total MC -> lower RR -> lower rates for all (or for the HP subclass under subclass RR). This is `Sigma_h MC_h * (L_orig_h - L_shifted_h)`. Small because HP share is small. +2. **Rate arbitrage** (per-customer, primary): HP customers on a TOU tariff shift load from expensive peak hours to cheap offpeak hours. Savings = `kWh_shifted * (peak_rate - offpeak_rate)`. This is independent of the RR change -- it's the TOU rate structure creating an incentive. Compared to each utility's current default rate structure, this is where meaningful per-customer savings come from. + +## Tariff calibration feedback loop + +The calibrated tariff customers face is an **output** of the demand-flex process, not a fixed input. The chain: + +```mermaid +flowchart LR + deriv["TOU derivation\n(MC-derived ratios)"] -->|"cp"| flex["_flex.json\n(identical copy)"] + flex -->|"run 13 precalc"| p15["Phase 1.5: shift loads\nusing derivation ratios"] + p15 --> p175["Phase 1.75: recompute\nratios from shifted loads"] + p175 --> cairo["CAIRO precalc:\ncalibrate levels to\nmeet lower RR"] + cairo --> cal["_flex_calibrated.json\n(new ratios + levels)"] + cal -->|"run 15 default"| final["Final bills\n(customers face\ncalibrated tariff)"] +``` + +Example for ConEd (from actual files): + +- **Derivation ratios** (from MC, input to run 13): winter 1.978, summer 4.332 +- **Flex calibrated ratios** (output of run 13, input to run 15): winter 2.014, summer 4.143 +- Summer ratio **compressed by ~4.4%** because shifting load out of peak reduced the demand-weighted peak MC +- Summer peak rate dropped from $0.318 to $0.299/kWh (lower RR + compressed ratio) + +**Implications for the sweep:** + +1. The analytical diagnostic uses derivation ratios (first-order); actual customer outcomes use calibrated ratios (second-order feedback ~5%). Good enough for targeting. +2. 
The CAIRO sweep **must run the full 13 -> copy-calibrated -> 15 chain** per elasticity, not just run 13 alone. Runs 15-16 produce the actual customer bills. +3. The diagnostic should predict how much Phase 1.75 compresses the ratio at each elasticity (larger elasticity = more shift = more compression). + +## Architecture: two phases + +```mermaid +flowchart TD + subgraph phase1 ["Phase 1: Analytical diagnostic (no CAIRO)"] + TOU["TOU derivation JSONs\n(7 utilities x 2 seasons)"] --> cal["calibrate_demand_flex_elasticity.py"] + MC["MC data from S3\n(bulk + dist per utility)"] --> cal + ARC["Arcturus model\n(no-tech coefficients)"] --> cal + cal --> table["Per-utility table:\nprice ratios, peak reduction %,\nMC spread, savings ceiling,\nrecommended epsilon"] + cal --> csv["CSV at s3://data.sb/\nswitchbox/demand_flex/\ndiagnostic/"] + end + + subgraph phase2 ["Phase 2: Targeted CAIRO sweep"] + table --> sweep["Run CAIRO at recommended\nepsilon per utility\n(+/- 0.05 bracket)"] + sweep --> outputs["Per-run outputs at\ns3://data.sb/switchbox/\ndemand_flex/{utility}/"] + outputs --> compare["Compare CAIRO bills/BAT\nvs analytical predictions"] + end +``` + +## Phase 1: Calibration on real load data (`utils/post/calibrate_demand_flex_elasticity.py`) + +New script. Uses actual ResStock building loads, customer metadata, and MC data for all 7 NY utilities. Focuses on the **runs 13-14 scenario** (precalc with HP/non-HP subclasses), which reflects the realistic world where only HP buildings (`postprocess_group.has_hp=True`) are on the TOU flex tariff and subject to demand shifting. HP buildings are a small fraction of each utility's customer base -- this is the primary reason bill savings are small. + +Runs 15-16 model a counterfactual 100% HP adoption world; calibration should target runs 13-14. 
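To make the Phase 1.75 feedback above concrete: the ratio compression can be approximated to first order by recomputing the demand-weighted peak/off-peak MC ratio on shifted loads. A minimal sketch with hypothetical names and toy numbers (the real recomputation happens inside the CAIRO precalc):

```python
def demand_weighted_ratio(mc, load, is_peak):
    """Demand-weighted peak MC over demand-weighted off-peak MC."""
    peak_mc = (sum(m * l for m, l, p in zip(mc, load, is_peak) if p)
               / sum(l for l, p in zip(load, is_peak) if p))
    off_mc = (sum(m * l for m, l, p in zip(mc, load, is_peak) if not p)
              / sum(l for l, p in zip(load, is_peak) if not p))
    return peak_mc / off_mc
```

Shifting load out of the most expensive peak hours lowers the demand-weighted peak MC, which compresses the derived ratio; this is the mechanism behind the ~4.4% ConEd summer compression cited above.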
+ +For each of the 7 NY utilities, for each season, for each elasticity in `[-0.05, -0.10, -0.15, -0.20]`: + +**Demand shift metrics (per utility, per season, per elasticity):** + +- HP building count and weighted customer share (% of utility total) +- HP peak period kWh before and after shift +- Peak reduction (% of HP peak kWh) -- the Arcturus-comparable metric +- kWh shifted peak-to-offpeak (absolute) +- System-level peak impact (% of total utility peak, dampened by HP share) +- Demand-weighted annual price ratio (weighted by HP seasonal kWh; used as the Arcturus calibration anchor since we are constrained to a single annual elasticity) +- Per-season Arcturus peak reduction at per-season price ratios (informational, not used for calibration) +- Annual Arcturus peak reduction at the demand-weighted annual ratio (calibration target) +- Delta: our model's prediction vs Arcturus (identifies over/under-response) +- HP-specific effective TOU ratios vs derivation ratios (flags if derivation used system-wide loads that differ from HP-only loads) + +**Financial metrics (two distinct savings mechanisms):** + +_Rate arbitrage savings (primary, per-HP-customer):_ + +- kWh shifted from peak to offpeak per HP customer (demand-weighted average) +- Bill savings from rate structure = kWh_shifted x (peak_rate - offpeak_rate) per season +- Annual per-HP-customer bill savings from rate arbitrage +- This uses the TOU derivation rates, not calibrated rates (first-order; CAIRO calibration adjusts these) +- Comparison baseline: the current default rate structure for each utility (varies by utility) + +_RR reduction savings (secondary, system-level):_ + +- HP subclass MC delta ($) = sum_h(MC_h x shift_h) for HP buildings only (demand-weighted) +- Per-HP-customer RR savings = HP_MC_delta / HP_weighted_count (under subclass RR) +- This is small because HPs are a small share of all customers + +_MC data context (NOT Cambium -- all derived from NYISO LBMP and utility PUC filings):_ + +- NY 
uses 5 utility-specific MC components: supply energy/NYISO LBMP (most hours nonzero), supply capacity (96 hours/year), supply ancillary, bulk TX (~80 SCR hours/year), dist+sub-TX (~100 PoP hours/year) +- **Delivery-only runs (13) zero out supply MCs** (`zero.parquet`). Only dist+sub-tx + bulk_tx are active -- extremely peaky, nonzero in only ~100-180 hours. MC-based RR savings from shifting are small because the spread is so concentrated. +- **Delivery+supply runs (14)** include supply energy (NYISO LBMP, nonzero virtually all hours) -- broader MC spread, larger MC savings possible. +- The diagnostic should report nonzero-MC hours and their overlap with TOU peak hours per utility, separately for delivery-only and delivery+supply MC stacks. +- For **rate arbitrage savings** (the primary savings mechanism), the MC stack is irrelevant -- savings come from the peak/offpeak rate spread in the tariff structure itself. + +**Tariff feedback metrics (predicting Phase 1.75 effects):** + +- Predicted post-shift TOU cost-causation ratio per season (Phase 1.75 compression) +- Demand-weighted P_flat computed from actual HP building loads + +**Calibration output:** + +- Recommended elasticity per utility: the epsilon where our model's HP peak reduction matches the Arcturus "no tech" prediction for that utility's seasonal price ratio +- Per-HP-customer bill savings at the recommended epsilon +- Binding constraint per utility: is it the MC spread, the HP share, or the peak volume that limits savings? 
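
The annual Arcturus calibration target above is a direct evaluation of the no-tech regression at the demand-weighted annual ratio. A minimal sketch, assuming the regression form stated earlier in this document; the per-utility ratios shown are placeholders, not actual NY values:

```python
import math

# Arcturus 2.0 "no enabling technology" fit (Faruqui et al., 2017, Fig. 12):
#   peak_reduction = -0.011 + (-0.065) * ln(price_ratio)
ARCTURUS_INTERCEPT = -0.011
ARCTURUS_SLOPE = -0.065

def arcturus_no_tech_reduction(price_ratio: float) -> float:
    """Predicted fractional peak reduction (negative means load falls)."""
    return ARCTURUS_INTERCEPT + ARCTURUS_SLOPE * math.log(price_ratio)

# Illustrative demand-weighted annual ratios per utility (placeholders)
annual_ratios = {"coned": 2.7, "cenhud": 3.4}
targets = {u: arcturus_no_tech_reduction(r) for u, r in annual_ratios.items()}
```

At a ratio of 1 (flat rate) the prediction collapses to the intercept; higher ratios produce deeper reductions, which is the monotonicity the diagnostic's delta metric relies on.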
+ +**Data sources (all local or S3, no CAIRO run needed):** + +- **Utility assignment**: `/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/metadata_utility/state=NY/utility_assignment.parquet` -- maps bldg_id to utility +- **Customer metadata**: `/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/metadata/state=NY/upgrade=00/metadata-sb.parquet` -- has `postprocess_group.has_hp`, `weight` +- **HP building hourly loads**: `/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/load_curve_hourly/state=NY/upgrade=00/{bldg_id}_00.parquet` -- local disk, fast reads +- **TOU derivation JSONs**: local at [`ny/config/tou_derivation/`](rate_design/hp_rates/ny/config/tou_derivation/) -- seasonal price ratios and peak hours +- **Delivery MC**: `s3://data.sb/switchbox/marginal_costs/ny/{dist_and_sub_tx,bulk_tx}/utility={util}/year=2025/data.parquet` -- 8760 hourly $/kWh (peaky, ~100-180 nonzero hours) +- **Supply MC** (for delivery+supply analysis only): `s3://data.sb/switchbox/marginal_costs/ny/supply/{energy,capacity}/utility={util}/year=2025/data.parquet` -- NYISO LBMP + capacity; zeroed out in delivery-only runs via `zero.parquet` + +**CLI:** + +``` +uv run python -m utils.post.calibrate_demand_flex_elasticity \ + --state ny \ + --elasticities -0.05,-0.10,-0.15,-0.20 \ + [--output-dir /tmp/demand_flex_diagnostic] \ + [--sample-size N] # optional: limit HP buildings per utility for faster dev iteration +``` + +**Key implementation details:** + +- Loads utility_assignment + metadata once; groups HP bldg_ids by utility +- Per utility: loads HP building hourly loads from local parquet (e.g. 
~2k HP buildings for ConEd, fast from local disk) +- Computes demand-weighted `P_flat` from actual HP loads (same as CAIRO's endogenous computation) +- Applies `process_residential_hourly_demand_response_shift` or equivalent constant-elasticity formula per season +- Loads MC data (dist+sub-tx + bulk_tx) per utility from S3 +- Computes HP subclass MC delta = sum_h(MC_h x HP_load_shift_h) using demand-weighted HP system load +- Applies Arcturus formula: `peak_red = -0.011 + (-0.065) * ln(ratio)` for comparison +- Runs the actual constant-elasticity shifting math at each sweep elasticity, computes realized peak reduction, and interpolates the elasticity whose peak reduction best matches Arcturus (not a simple algebraic back-solve, which is imprecise given the multi-period, zero-sum model with endogenous P_flat) + +## Empirical ground truth from batch ny_20260325b_r1-16 + +CAIRO outputs at `s3://data.sb/switchbox/cairo/outputs/hp_rates/ny/all_utilities/ny_20260325b_r1-16/` provide actual bill numbers at `epsilon = -0.1` for all 7 utilities. Key finding for ConEd: + +- Run 3+4 (default tariff, no flex): mean elec bill = $2,329/year +- Run 11+12 (TOU, no flex): mean = $1,740/year (TOU structure saves $589) +- Run 15+16 (TOU flex, epsilon=-0.1): mean = $1,729/year (flex adds only $10.51 more savings) + +The demand-flex effect at epsilon=-0.1 is ~0.6% of the bill -- confirming the "small savings" concern. The TOU structure change dominates. The diagnostic should: + +1. Reproduce this $10.51 analytically (validation) +2. Show how it scales with larger elasticity (sweep) +3. Identify the ceiling (is it MC spread, HP share, or both?) + +The bill comparison is: **run 15+16 vs run 11+12** (flex effect alone) and **run 15+16 vs run 3+4** (total effect). + +## Phase 2: CAIRO sweep (after Phase 1 results reviewed) + +Deferred until Phase 1 results are reviewed. The plan: + +1. 
**Generate sweep scenario YAMLs**: For each utility, create run definitions at the Phase 1-recommended elasticity (and +/- 0.05 bracket = 3 values per utility). Could be a script that patches the existing `scenarios_*.yaml` or generates separate sweep YAMLs. Alternatively, add an `--elasticity` CLI override to `run_scenario.py` (small change: add to `_parse_args`, override in `_resolve_settings`). + +2. **Run scope per elasticity**: Minimum = **runs 13 + 15** (precalc then default, delivery-only). Run 13 produces the calibrated tariff; run 15 produces the actual customer bills against that tariff. The full `run-13 -> copy-calibrated-tariff -> run-15` chain must execute per elasticity. Supply runs (14, 16) only needed for the final chosen elasticity. + +3. **Output location**: `s3://data.sb/switchbox/demand_flex/ny/{utility}/epsilon={value}/` with Hive-style elasticity partitioning. + +4. **Justfile recipe**: `sweep-demand-flex-elasticity` that takes a utility and elasticity list, runs the 13+15 chain at each value. + +5. **Diagnostic comparison**: After sweep, compare CAIRO bill outputs against Phase 1 analytical predictions. Key metrics to compare: peak reduction %, calibrated ratio compression, per-customer bill savings. The ratio compression from Phase 1.75 is a useful validation point -- the diagnostic can predict it analytically and the CAIRO sweep produces it empirically. + +6. **Compute budget**: 7 utilities x 3 elasticities x 2 runs (13+15) = 42 runs. At ~20 min/run for ConEd (smaller utilities faster), total ~6-10 hours. Parallelizable in delivery/supply pairs per the existing `run-all-parallel-tracks` pattern. 
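
The compute-budget arithmetic in step 6 can be checked with a run-matrix generator. A sketch, assuming the output-path template from step 3; the function name and tuple shape are hypothetical, not the actual sweep script:

```python
from itertools import product

NY_UTILITIES = ["cenhud", "coned", "nimo", "nyseg", "or", "psegli", "rge"]

def sweep_matrix(recommended: dict[str, float], bracket: float = 0.05) -> list[tuple]:
    """Enumerate (utility, epsilon, run, output_dir) for the 13+15 chain:
    the recommended epsilon plus a +/- bracket (3 values), two runs each."""
    jobs = []
    for util in NY_UTILITIES:
        e = recommended[util]
        for eps, run in product((e - bracket, e, e + bracket), (13, 15)):
            out = f"s3://data.sb/switchbox/demand_flex/ny/{util}/epsilon={eps:.2f}/"
            jobs.append((util, round(eps, 2), run, out))
    return jobs

jobs = sweep_matrix({u: -0.10 for u in NY_UTILITIES})
# 7 utilities x 3 elasticities x 2 runs = 42 jobs, matching the budget above
```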
## Files to create/modify

- **Create**: [`utils/post/calibrate_demand_flex_elasticity.py`](utils/post/calibrate_demand_flex_elasticity.py) -- Phase 1 calibration
- **Modify**: [`rate_design/hp_rates/Justfile`](rate_design/hp_rates/Justfile) -- add `calibrate-demand-flex-elasticity` recipe
- **Update**: Linear ticket RDP-175 with evolved scope

## Relationship to existing RDP-175 work

The former synthetic validation/sensitivity scripts (`validate_demand_flex.py`, `sensitivity_demand_flex.py`) were removed -- their mathematical invariant checks were either tautological or fully covered by the real-data calibration and shift validation scripts.

## Known limitations and caveats

These are explicitly acknowledged; the diagnostic output should note them.

1. **Arcturus measures aggregate heterogeneous response; our model applies uniform elasticity.** Arcturus coefficients come from a meta-analysis of real TOU pilots where customers have diverse responses. Our model applies the same epsilon to every HP building. The recommended elasticity is therefore an approximation: it targets the same aggregate peak reduction as Arcturus predicts, but the underlying behavioral assumption is different. This should be stated in the diagnostic output.

2. **Arcturus functional form (log-linear) differs from our model (power law).** Arcturus: `peak_red = a + b * ln(ratio)`. Our model: `Q* = Q * (ratio)^epsilon`. These agree only at specific price ratios, not across the full curve. The diagnostic compares at each utility's actual ratio, which is valid; it does not claim to reproduce the full Arcturus response curve.

3. **Annual elasticity is a compromise.** Each utility has different winter and summer ratios, but we use a single annual epsilon. The diagnostic computes a demand-weighted annual ratio (weighted by HP seasonal kWh) as the Arcturus anchor.
Per-season peak reductions are reported for context -- they will differ from the annual prediction because the same epsilon applied to different seasonal ratios produces different reductions. Winter ratios are lower, so winter peak reduction will be smaller. + +4. **ResStock data I/O.** Loading hourly loads for all HP buildings per utility requires reading hundreds to thousands of local parquet files. This is fast from local disk (~seconds) but the data must be present locally. The `--sample-size` CLI option allows faster development iteration by sampling HP buildings. + +5. **Demand shift is identical in delivery vs supply precalc runs.** The `_flex.json` tariff rates are identical between delivery and supply runs; shifting is driven by tariff rates, not MC values. The MC zeroing only affects Phase 1.75 ratio recomputation and Phase 2 RR calibration. The diagnostic therefore computes one shift per elasticity, then branches into delivery-only vs delivery+supply MC savings. + +## Open questions for Phase 2 + +- For the sweep, each elasticity re-runs precalc (run 13) which recalibrates via Phase 1.75. The pre-calibrated `_flex.json` is always the same (copy of the derivation tariff). Only the calibrated output changes. So the sweep is self-contained per elasticity value. +- How to manage 42 calibrated tariff files? One approach: write to a sweep-specific output directory (not the main `config/tariffs/electric/`), parameterized by elasticity, so sweep artifacts don't pollute the main config. +- Should we sweep delivery+supply runs (13+14 -> 15+16) or delivery-only (13 -> 15)? Delivery-only gives the delivery bill savings; adding supply doubles compute time. Recommend delivery-only for the sweep, full 4-run chain only for the final chosen elasticity. 
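
The functional-form caveat (limitation 2 above) can be quantified directly: calibrate the power law to agree with the log-linear fit at one ratio, and the two curves separate at every other ratio. A sketch using the single-period realized-reduction form of our model (a simplification of the full multi-period math):

```python
import math

def arcturus(ratio: float) -> float:
    # Log-linear no-tech fit: peak_red = -0.011 + (-0.065) * ln(ratio)
    return -0.011 - 0.065 * math.log(ratio)

def power_law(ratio: float, eps: float) -> float:
    # Single-period realized reduction under Q* = Q * ratio**eps
    return -(1.0 - ratio**eps)

# Back out the epsilon that makes the two forms agree exactly at ratio = 3
target = arcturus(3.0)
eps = math.log(1.0 + target) / math.log(3.0)

match_at_3 = power_law(3.0, eps) - arcturus(3.0)  # ~0 by construction
gap_at_5 = power_law(5.0, eps) - arcturus(5.0)    # nonzero: the curves diverge
```

This is why the diagnostic only compares the two forms at each utility's actual ratio rather than across the whole curve.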
diff --git a/rate_design/hp_rates/Justfile b/rate_design/hp_rates/Justfile index 33740647..73c06ee7 100644 --- a/rate_design/hp_rates/Justfile +++ b/rate_design/hp_rates/Justfile @@ -689,8 +689,8 @@ run-all-sequential: : "${RDP_BATCH:?Set RDP_BATCH before running (e.g. ny_20260305c_r1-8)}" export RDP_BATCH echo ">> run-all-sequential [${RDP_BATCH}]" >&2 - # just run-1 - # just run-2 + just run-1 + just run-2 just compute-rev-requirements just run-3 just run-4 diff --git a/rate_design/hp_rates/ny/Justfile b/rate_design/hp_rates/ny/Justfile index 106304ad..43289c99 100644 --- a/rate_design/hp_rates/ny/Justfile +++ b/rate_design/hp_rates/ny/Justfile @@ -252,3 +252,58 @@ validate-all-runs-ny timestamp="": just validate-or "{{ timestamp }}" just validate-psegli "{{ timestamp }}" just validate-rge "{{ timestamp }}" + +# ============================================================================= +# NY-only: demand-flex diagnostics +# ============================================================================= + +path_flex_plots := path_repo / "dev_plots/flex" + +# Validate demand-flex load shifting for one utility: reproduces the shift +# analytically, checks energy conservation and direction, cross-checks against +# CAIRO's elasticity tracker, and generates diagnostic plots. 
+# just -f ny/Justfile validate-demand-flex coned + +# just -f ny/Justfile validate-demand-flex coned -0.12 ny_20260325b_r1-16 +validate-demand-flex utility_arg elasticity="-0.10" batch="": + cd {{ project_root }} && uv run python -m utils.post.validate_demand_flex_shift \ + --utility {{ utility_arg }} \ + --elasticity {{ elasticity }} \ + --output-dir "{{ path_flex_plots }}/{{ utility_arg }}" \ + {{ if batch != "" { "--batch " + batch } else { "" } }} + +validate-demand-flex-all elasticity="-0.10" batch="": + just validate-demand-flex cenhud {{ elasticity }} "{{ batch }}" + just validate-demand-flex coned {{ elasticity }} "{{ batch }}" + just validate-demand-flex nimo {{ elasticity }} "{{ batch }}" + just validate-demand-flex nyseg {{ elasticity }} "{{ batch }}" + just validate-demand-flex or {{ elasticity }} "{{ batch }}" + just validate-demand-flex psegli {{ elasticity }} "{{ batch }}" + just validate-demand-flex rge {{ elasticity }} "{{ batch }}" + +# Run the elasticity calibration sweep: tests a range of elasticity values per +# utility and finds the one matching the Arcturus 2.0 empirical target. 
+# +# Sweep all utilities (default range -0.04 to -0.50, step -0.02): +# just -f ny/Justfile calibrate-demand-flex-elasticity +# +# Subset of utilities: +# just -f ny/Justfile calibrate-demand-flex-elasticity --utilities=coned,nimo +# +# Custom range: +# just -f ny/Justfile calibrate-demand-flex-elasticity --epsilon-start -0.04 --epsilon-end -0.30 --epsilon-step -0.01 +# +# Write no-tech + with-tech seasonal elasticities to config/periods/*.yaml: +# just -f ny/Justfile calibrate-demand-flex-elasticity --write-periods +# +# Compare analytical predictions against a CAIRO batch (no-tech): +# just -f ny/Justfile calibrate-demand-flex-elasticity --utilities=cenhud --compare-batch ny_20260326_batch +# +# Compare against a CAIRO batch that used with-tech elasticities: + +# just -f ny/Justfile calibrate-demand-flex-elasticity --utilities=cenhud --compare-batch ny_20260326_batch --with-tech +calibrate-demand-flex-elasticity *args: + cd {{ project_root }} && uv run python -m utils.pre.calibrate_demand_flex_elasticity \ + --state ny \ + --output-dir "{{ path_flex_plots }}/diagnostic" \ + {{ args }} diff --git a/rate_design/hp_rates/ny/config/periods/cenhud.yaml b/rate_design/hp_rates/ny/config/periods/cenhud.yaml index 88decd52..27aab5e4 100644 --- a/rate_design/hp_rates/ny/config/periods/cenhud.yaml +++ b/rate_design/hp_rates/ny/config/periods/cenhud.yaml @@ -1,2 +1,14 @@ -winter_months: [10, 11, 12, 1, 2, 3] +winter_months: + - 10 + - 11 + - 12 + - 1 + - 2 + - 3 tou_window_hours: 5 +elasticity: + summer: -0.14 + winter: -0.12 +elasticity_with_tech: + summer: -0.22 + winter: -0.18 diff --git a/rate_design/hp_rates/ny/config/periods/coned.yaml b/rate_design/hp_rates/ny/config/periods/coned.yaml index c6a02550..ba87659f 100644 --- a/rate_design/hp_rates/ny/config/periods/coned.yaml +++ b/rate_design/hp_rates/ny/config/periods/coned.yaml @@ -1,2 +1,14 @@ -winter_months: [10, 11, 12, 1, 2, 3] +winter_months: + - 10 + - 11 + - 12 + - 1 + - 2 + - 3 tou_window_hours: 3 +elasticity: 
+ summer: -0.12 + winter: -0.1 +elasticity_with_tech: + summer: -0.18 + winter: -0.16 diff --git a/rate_design/hp_rates/ny/config/periods/nimo.yaml b/rate_design/hp_rates/ny/config/periods/nimo.yaml index c6a02550..3eeebc77 100644 --- a/rate_design/hp_rates/ny/config/periods/nimo.yaml +++ b/rate_design/hp_rates/ny/config/periods/nimo.yaml @@ -1,2 +1,14 @@ -winter_months: [10, 11, 12, 1, 2, 3] +winter_months: + - 10 + - 11 + - 12 + - 1 + - 2 + - 3 tou_window_hours: 3 +elasticity: + summer: -0.1 + winter: -0.1 +elasticity_with_tech: + summer: -0.18 + winter: -0.16 diff --git a/rate_design/hp_rates/ny/config/periods/nyseg.yaml b/rate_design/hp_rates/ny/config/periods/nyseg.yaml index c6a02550..3eeebc77 100644 --- a/rate_design/hp_rates/ny/config/periods/nyseg.yaml +++ b/rate_design/hp_rates/ny/config/periods/nyseg.yaml @@ -1,2 +1,14 @@ -winter_months: [10, 11, 12, 1, 2, 3] +winter_months: + - 10 + - 11 + - 12 + - 1 + - 2 + - 3 tou_window_hours: 3 +elasticity: + summer: -0.1 + winter: -0.1 +elasticity_with_tech: + summer: -0.18 + winter: -0.16 diff --git a/rate_design/hp_rates/ny/config/periods/or.yaml b/rate_design/hp_rates/ny/config/periods/or.yaml index 88decd52..260a1b3b 100644 --- a/rate_design/hp_rates/ny/config/periods/or.yaml +++ b/rate_design/hp_rates/ny/config/periods/or.yaml @@ -1,2 +1,14 @@ -winter_months: [10, 11, 12, 1, 2, 3] +winter_months: + - 10 + - 11 + - 12 + - 1 + - 2 + - 3 tou_window_hours: 5 +elasticity: + summer: -0.14 + winter: -0.12 +elasticity_with_tech: + summer: -0.24 + winter: -0.2 diff --git a/rate_design/hp_rates/ny/config/periods/psegli.yaml b/rate_design/hp_rates/ny/config/periods/psegli.yaml index 88decd52..27aab5e4 100644 --- a/rate_design/hp_rates/ny/config/periods/psegli.yaml +++ b/rate_design/hp_rates/ny/config/periods/psegli.yaml @@ -1,2 +1,14 @@ -winter_months: [10, 11, 12, 1, 2, 3] +winter_months: + - 10 + - 11 + - 12 + - 1 + - 2 + - 3 tou_window_hours: 5 +elasticity: + summer: -0.14 + winter: -0.12 +elasticity_with_tech: + 
summer: -0.22 + winter: -0.18 diff --git a/rate_design/hp_rates/ny/config/periods/rge.yaml b/rate_design/hp_rates/ny/config/periods/rge.yaml index c6a02550..3eeebc77 100644 --- a/rate_design/hp_rates/ny/config/periods/rge.yaml +++ b/rate_design/hp_rates/ny/config/periods/rge.yaml @@ -1,2 +1,14 @@ -winter_months: [10, 11, 12, 1, 2, 3] +winter_months: + - 10 + - 11 + - 12 + - 1 + - 2 + - 3 tou_window_hours: 3 +elasticity: + summer: -0.1 + winter: -0.1 +elasticity_with_tech: + summer: -0.18 + winter: -0.16 diff --git a/rate_design/hp_rates/ny/config/rev_requirement/cenhud_hp_vs_nonhp.yaml b/rate_design/hp_rates/ny/config/rev_requirement/cenhud_hp_vs_nonhp.yaml index 0c2f2cfd..688ec905 100644 --- a/rate_design/hp_rates/ny/config/rev_requirement/cenhud_hp_vs_nonhp.yaml +++ b/rate_design/hp_rates/ny/config/rev_requirement/cenhud_hp_vs_nonhp.yaml @@ -1,15 +1,15 @@ utility: cenhud group_col: has_hp cross_subsidy_col: BAT_percustomer -source_run_dir: s3://data.sb/switchbox/cairo/outputs/hp_rates/ny/cenhud/ny_20260325b_r1-16/20260325_143137_ny_cenhud_run1_up00_precalc__default +source_run_dir: s3://data.sb/switchbox/cairo/outputs/hp_rates/ny/cenhud/ny_20260326_elast_seasonal_tech/20260326_194621_ny_cenhud_run1_up00_precalc__default total_delivery_revenue_requirement: 418151641.73 total_delivery_and_supply_revenue_requirement: 646688375.32 subclass_revenue_requirements: non-hp: - delivery: 401392812.5614301 - supply: 209823391.6038903 - total: 611216204.1653204 + delivery: 401392812.56143004 + supply: 209823391.60389048 + total: 611216204.1653205 hp: - delivery: 16758829.168569982 - supply: 18713341.98610972 - total: 35472171.1546797 + delivery: 16758829.16856999 + supply: 18713341.986109704 + total: 35472171.15467969 diff --git a/rate_design/hp_rates/ny/config/scenarios/scenarios_cenhud.yaml b/rate_design/hp_rates/ny/config/scenarios/scenarios_cenhud.yaml index 822df69b..d2a5391d 100644 --- a/rate_design/hp_rates/ny/config/scenarios/scenarios_cenhud.yaml +++ 
b/rate_design/hp_rates/ny/config/scenarios/scenarios_cenhud.yaml @@ -367,7 +367,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 14: run_name: ny_cenhud_run14_up00_precalc_supply__hp_seasonalTOU_flex_vs_default state: NY @@ -396,7 +398,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 15: run_name: ny_cenhud_run15_up02_default__hp_seasonalTOU_flex state: NY @@ -424,7 +428,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 16: run_name: ny_cenhud_run16_up02_default_supply__hp_seasonalTOU_flex state: NY @@ -452,4 +458,6 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 diff --git a/rate_design/hp_rates/ny/config/scenarios/scenarios_coned.yaml b/rate_design/hp_rates/ny/config/scenarios/scenarios_coned.yaml index 2906b563..06d3b3f7 100644 --- a/rate_design/hp_rates/ny/config/scenarios/scenarios_coned.yaml +++ b/rate_design/hp_rates/ny/config/scenarios/scenarios_coned.yaml @@ -367,7 +367,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 14: run_name: ny_coned_run14_up00_precalc_supply__hp_seasonalTOU_flex_vs_default state: NY @@ -396,7 +398,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 15: run_name: ny_coned_run15_up02_default__hp_seasonalTOU_flex state: NY @@ -424,7 +428,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 16: run_name: ny_coned_run16_up02_default_supply__hp_seasonalTOU_flex state: NY @@ -452,4 +458,6 @@ runs: year_run: 2025 
year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 diff --git a/rate_design/hp_rates/ny/config/scenarios/scenarios_nimo.yaml b/rate_design/hp_rates/ny/config/scenarios/scenarios_nimo.yaml index fa11460a..c2121415 100644 --- a/rate_design/hp_rates/ny/config/scenarios/scenarios_nimo.yaml +++ b/rate_design/hp_rates/ny/config/scenarios/scenarios_nimo.yaml @@ -367,7 +367,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 14: run_name: ny_nimo_run14_up00_precalc_supply__hp_seasonalTOU_flex_vs_default state: NY @@ -396,7 +398,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 15: run_name: ny_nimo_run15_up02_default__hp_seasonalTOU_flex state: NY @@ -424,7 +428,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 16: run_name: ny_nimo_run16_up02_default_supply__hp_seasonalTOU_flex state: NY @@ -452,4 +458,6 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 diff --git a/rate_design/hp_rates/ny/config/scenarios/scenarios_nyseg.yaml b/rate_design/hp_rates/ny/config/scenarios/scenarios_nyseg.yaml index d9dc213c..25b4c5bd 100644 --- a/rate_design/hp_rates/ny/config/scenarios/scenarios_nyseg.yaml +++ b/rate_design/hp_rates/ny/config/scenarios/scenarios_nyseg.yaml @@ -367,7 +367,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 14: run_name: ny_nyseg_run14_up00_precalc_supply__hp_seasonalTOU_flex_vs_default state: NY @@ -396,7 +398,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 15: run_name: 
ny_nyseg_run15_up02_default__hp_seasonalTOU_flex state: NY @@ -424,7 +428,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 16: run_name: ny_nyseg_run16_up02_default_supply__hp_seasonalTOU_flex state: NY @@ -452,4 +458,6 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 diff --git a/rate_design/hp_rates/ny/config/scenarios/scenarios_or.yaml b/rate_design/hp_rates/ny/config/scenarios/scenarios_or.yaml index a0a63682..dd98650e 100644 --- a/rate_design/hp_rates/ny/config/scenarios/scenarios_or.yaml +++ b/rate_design/hp_rates/ny/config/scenarios/scenarios_or.yaml @@ -367,7 +367,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.24 + winter: -0.2 14: run_name: ny_or_run14_up00_precalc_supply__hp_seasonalTOU_flex_vs_default state: NY @@ -396,7 +398,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.24 + winter: -0.2 15: run_name: ny_or_run15_up02_default__hp_seasonalTOU_flex state: NY @@ -424,7 +428,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.24 + winter: -0.2 16: run_name: ny_or_run16_up02_default_supply__hp_seasonalTOU_flex state: NY @@ -452,4 +458,6 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.24 + winter: -0.2 diff --git a/rate_design/hp_rates/ny/config/scenarios/scenarios_psegli.yaml b/rate_design/hp_rates/ny/config/scenarios/scenarios_psegli.yaml index 0c224780..9c335d10 100644 --- a/rate_design/hp_rates/ny/config/scenarios/scenarios_psegli.yaml +++ b/rate_design/hp_rates/ny/config/scenarios/scenarios_psegli.yaml @@ -367,7 +367,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - 
elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 14: run_name: ny_psegli_run14_up00_precalc_supply__hp_seasonalTOU_flex_vs_default state: NY @@ -396,7 +398,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 15: run_name: ny_psegli_run15_up02_default__hp_seasonalTOU_flex state: NY @@ -424,7 +428,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 16: run_name: ny_psegli_run16_up02_default_supply__hp_seasonalTOU_flex state: NY @@ -452,4 +458,6 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.22 + winter: -0.18 diff --git a/rate_design/hp_rates/ny/config/scenarios/scenarios_rge.yaml b/rate_design/hp_rates/ny/config/scenarios/scenarios_rge.yaml index 11bb7b11..1c052501 100644 --- a/rate_design/hp_rates/ny/config/scenarios/scenarios_rge.yaml +++ b/rate_design/hp_rates/ny/config/scenarios/scenarios_rge.yaml @@ -367,7 +367,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 14: run_name: ny_rge_run14_up00_precalc_supply__hp_seasonalTOU_flex_vs_default state: NY @@ -396,7 +398,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 15: run_name: ny_rge_run15_up02_default__hp_seasonalTOU_flex state: NY @@ -424,7 +428,9 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 16: run_name: ny_rge_run16_up02_default_supply__hp_seasonalTOU_flex state: NY @@ -452,4 +458,6 @@ runs: year_run: 2025 year_dollar_conversion: 2025 process_workers: 8 - elasticity: -0.1 + elasticity: + summer: -0.18 + winter: -0.16 diff --git 
a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal.json index ef78d278..be33570c 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal.json @@ -640,14 +640,14 @@ "energyratestructure": [ [ { - "rate": 0.14529932053295455, + "rate": 0.14529932053295458, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.025720769425837828, + "rate": 0.025720769425837883, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_calibrated.json index 5fffd7c7..c0780052 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_calibrated.json @@ -640,28 +640,28 @@ "energyratestructure": [ [ { - "rate": 0.05243093636450794, + "rate": 0.05243093636450798, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.08219835809492648, + "rate": 0.08219835809492654, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.04661356362420692, + "rate": 0.04661356362420695, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.14213873955929415, + "rate": 0.14213873955929424, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_calibrated.json index 7e3165eb..a803ad23 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_calibrated.json @@ -640,28 +640,28 @@ "energyratestructure": [ [ { - "rate": 0.05303207464995336, + "rate": 0.05386936607137937, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 
0.08227295922468561, + "rate": 0.08357155100573309, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.04710194035878263, + "rate": 0.04670889280445623, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.14205945212208843, + "rate": 0.14086000803039866, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_supply_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_supply_calibrated.json index 2e35f27c..b3d4abac 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_supply_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_flex_supply_calibrated.json @@ -640,28 +640,28 @@ "energyratestructure": [ [ { - "rate": 0.12386630784621323, + "rate": 0.1256624037464114, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.19216385106579684, + "rate": 0.19494942580688396, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.11001537245422617, + "rate": 0.10895899050239435, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.33180636332194613, + "rate": 0.32858762765807065, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_supply_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_supply_calibrated.json index afa97909..f9948015 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_supply_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonalTOU_supply_calibrated.json @@ -640,28 +640,28 @@ "energyratestructure": [ [ { - "rate": 0.12263403419973369, + "rate": 0.12263403419973366, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.19225894017408462, + "rate": 0.1922589401740846, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.10902741305097234, + "rate": 0.10902741305097231, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.33245729061632995, + "rate": 0.3324572906163299, "adj": 
0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_calibrated.json index 531253d2..776cb4d2 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_calibrated.json @@ -640,14 +640,14 @@ "energyratestructure": [ [ { - "rate": 0.14607171250510662, + "rate": 0.1460717125051067, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.025857519340267764, + "rate": 0.025857519340267778, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply.json index 9e8f8e7d..83052684 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply.json @@ -647,7 +647,7 @@ ], [ { - "rate": 0.10563409147863001, + "rate": 0.10563409147863004, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply_calibrated.json index a3dcdbac..28237b43 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_hp_seasonal_supply_calibrated.json @@ -640,14 +640,14 @@ "energyratestructure": [ [ { - "rate": 0.2400153191276985, + "rate": 0.24001531912769844, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.10590157038814794, + "rate": 0.10590157038814792, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_calibrated.json index 
551c7678..a29b7adc 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_calibrated.json @@ -640,84 +640,84 @@ "energyratestructure": [ [ { - "rate": 0.1507979540475916, + "rate": 0.15079795404759158, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.1497199702617073, + "rate": 0.14971997026170727, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15064823407732988, + "rate": 0.15064823407732986, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15228018175318248, + "rate": 0.15228018175318245, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.1551997211732858, + "rate": 0.15519972117328576, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15419659737253236, + "rate": 0.15419659737253233, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15624776096511775, + "rate": 0.15624776096511772, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15864328048930507, + "rate": 0.15864328048930504, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15895769242685465, + "rate": 0.15895769242685462, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15847858852201718, + "rate": 0.15847858852201716, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15379235345282571, + "rate": 0.1537923534528257, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.15722094077181883, + "rate": 0.1572209407718188, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_supply_calibrated.json b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_supply_calibrated.json index a110c118..d5ef0b91 100644 --- a/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_supply_calibrated.json +++ b/rate_design/hp_rates/ny/config/tariffs/electric/cenhud_nonhp_default_supply_calibrated.json @@ -640,84 +640,84 @@ "energyratestructure": [ [ { - "rate": 0.2212411444851879, + "rate": 0.22124114448518797, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 
0.24593165620973484, + "rate": 0.24593165620973492, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.2775470157566682, + "rate": 0.2775470157566683, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.27234784886126634, + "rate": 0.2723478488612664, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.2510423266473427, + "rate": 0.25104232664734283, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.23763511329154033, + "rate": 0.2376351132915404, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.24721485484774894, + "rate": 0.24721485484774902, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.28084350880949754, + "rate": 0.28084350880949766, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.26586548332785026, + "rate": 0.26586548332785037, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.2412634680610974, + "rate": 0.2412634680610975, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.2427900319580452, + "rate": 0.24279003195804527, "adj": 0.0, "unit": "kWh" } ], [ { - "rate": 0.24882991520249081, + "rate": 0.24882991520249093, "adj": 0.0, "unit": "kWh" } diff --git a/rate_design/hp_rates/run_scenario.py b/rate_design/hp_rates/run_scenario.py index 1265cad9..3956699c 100644 --- a/rate_design/hp_rates/run_scenario.py +++ b/rate_design/hp_rates/run_scenario.py @@ -109,7 +109,7 @@ class ScenarioSettings: solar_pv_compensation: str = "net_metering" run_includes_supply: bool = False sample_size: int | None = None - elasticity: float = 0.0 + elasticity: float | dict[str, float] = 0.0 # Real (non-zero) supply MC paths used exclusively for Phase 1.75 TOU # cost-causation recalibration. 
Supplied as CLI args from the Justfile so # delivery-only runs (whose path_supply_energy/capacity_mc point to zeros) @@ -257,7 +257,14 @@ def _build_settings_from_yaml_run( path_tariff_maps_gas, path_config, ) - elasticity = _parse_float(run.get("elasticity", 0.0), "elasticity") + elasticity_raw = run.get("elasticity", 0.0) + if isinstance(elasticity_raw, dict): + elasticity: float | dict[str, float] = { + str(k): _parse_float(v, f"elasticity.{k}") + for k, v in elasticity_raw.items() + } + else: + elasticity = _parse_float(elasticity_raw, "elasticity") sample_size = ( _parse_int(run["sample_size"], "sample_size") if "sample_size" in run else None ) @@ -622,7 +629,10 @@ def run(settings: ScenarioSettings, num_workers: int | None = None) -> None: # supply runs, per-subclass supply costs are derived from run 2 BAT data # (via compute_subclass_rr --run-dir-supply), not from raw Cambium prices. - demand_flex_enabled = settings.elasticity != 0.0 + if isinstance(settings.elasticity, dict): + demand_flex_enabled = any(v != 0.0 for v in settings.elasticity.values()) + else: + demand_flex_enabled = settings.elasticity != 0.0 if demand_flex_enabled: flex = apply_demand_flex( diff --git a/utils/cairo.py b/utils/cairo.py index 1398479e..007226ab 100644 --- a/utils/cairo.py +++ b/utils/cairo.py @@ -979,7 +979,7 @@ def apply_runtime_tou_demand_response( raw_load_elec: pd.DataFrame, tou_bldg_ids: list[int], tou_tariff: dict, - demand_elasticity: float, + demand_elasticity: float | dict[str, float], season_specs: list | None = None, *, inplace: bool = False, @@ -990,7 +990,8 @@ def apply_runtime_tou_demand_response( raw_load_elec: Full electric load DataFrame indexed by `(bldg_id, time)`. tou_bldg_ids: Building IDs assigned to the TOU tariff. tou_tariff: URDB v7 tariff dictionary. - demand_elasticity: Constant demand elasticity parameter. + demand_elasticity: Constant demand elasticity parameter, or a + ``{season_name: elasticity}`` dict for per-season values. 
season_specs: Optional season definitions for seasonal slicing. inplace: If True, mutate *raw_load_elec* directly instead of copying. The caller is responsible for passing an already-copied DataFrame. @@ -1048,6 +1049,14 @@ def apply_runtime_tou_demand_response( def _shift_season(season_name: str, season_months: set[int]) -> None: """Shift one season's TOU rows in-place on shifted_load_elec.""" + season_eps = ( + demand_elasticity.get(season_name, 0.0) + if isinstance(demand_elasticity, dict) + else demand_elasticity + ) + if season_eps == 0.0: + return + mask = bldg_level.isin(tou_set) & month_level.isin(season_months) season_df = ( shifted_load_elec.loc[mask, ["electricity_net"]].copy().reset_index() @@ -1063,7 +1072,7 @@ def _shift_season(season_name: str, season_months: set[int]) -> None: process_residential_hourly_demand_response_shift( hourly_load_df=season_df, period_rate=period_rate, - demand_elasticity=demand_elasticity, + demand_elasticity=season_eps, ) ) tracker["season"] = season_name @@ -1106,7 +1115,7 @@ def _shift_season(season_name: str, season_months: set[int]) -> None: shifted_total = shifted_load_elec.loc[tou_mask, "electricity_net"].sum() log.info( - "Runtime demand response complete: bldgs=%d, elasticity=%.3f, " + "Runtime demand response complete: bldgs=%d, elasticity=%s, " "original=%.0f, shifted=%.0f, diff=%.2f", len(tou_bldg_ids), demand_elasticity, diff --git a/utils/demand_flex.py b/utils/demand_flex.py index cae697e8..6cc68f04 100644 --- a/utils/demand_flex.py +++ b/utils/demand_flex.py @@ -244,7 +244,7 @@ def _compute_tou_subclass_mc( def apply_demand_flex( *, - elasticity: float, + elasticity: float | dict[str, float], run_type: str, year_run: int, path_tariffs_electric: dict[str, Path], @@ -276,7 +276,7 @@ def apply_demand_flex( f"keys={sorted(path_tariffs_electric)}" ) log.info( - ".... Demand flex enabled (elasticity=%.4f); detected %d TOU tariff(s): %s", + ".... 
Demand flex enabled (elasticity=%s); detected %d TOU tariff(s): %s", elasticity, len(tou_tariff_keys), tou_tariff_keys, diff --git a/utils/mid/copy_calibrated_tariff_from_run.py b/utils/mid/copy_calibrated_tariff_from_run.py index 2680e6f6..35901864 100644 --- a/utils/mid/copy_calibrated_tariff_from_run.py +++ b/utils/mid/copy_calibrated_tariff_from_run.py @@ -49,7 +49,7 @@ def _read_json(path: Path | S3Path) -> dict[str, Any]: def _write_json(path: Path, payload: dict[str, Any]) -> Path: """Write formatted JSON to local disk.""" path.parent.mkdir(parents=True, exist_ok=True) - path.write_text(json.dumps(payload, indent=2), encoding="utf-8") + path.write_text(json.dumps(payload, indent=2) + "\n", encoding="utf-8") return path diff --git a/utils/post/validate_demand_flex_shift.py b/utils/post/validate_demand_flex_shift.py new file mode 100644 index 00000000..fdd6b453 --- /dev/null +++ b/utils/post/validate_demand_flex_shift.py @@ -0,0 +1,1320 @@ +"""Validate demand-flex load shifting and produce diagnostic plots. + +Reproduces the constant-elasticity demand-response shift analytically using the +same functions CAIRO uses, then: + 1. Runs validation checks (energy conservation, direction, magnitude) + 2. Cross-checks against CAIRO's saved elasticity tracker + 3. Produces diagnostic plots (daily profiles, heatmaps, distributions) + 4. 
Writes data outputs for report reuse + +Invoke via the Justfile: + + # One utility, scalar elasticity (no CAIRO cross-check): + just -f rate_design/hp_rates/ny/Justfile validate-demand-flex cenhud + + # One utility with seasonal elasticities (matching periods YAML): + just -f rate_design/hp_rates/ny/Justfile validate-demand-flex cenhud "winter=-0.12,summer=-0.14" + + # With CAIRO tracker cross-check (no-tech batch): + just -f rate_design/hp_rates/ny/Justfile validate-demand-flex cenhud "winter=-0.12,summer=-0.14" ny_20260325b_r1-16 + + # With CAIRO tracker cross-check (with-tech batch): + just -f rate_design/hp_rates/ny/Justfile validate-demand-flex cenhud "winter=-0.18,summer=-0.22" ny_20260326_elast_seasonal_tech + + # All utilities (scalar, no batch): + just -f rate_design/hp_rates/ny/Justfile validate-demand-flex-all + +Or directly: + + # Scalar elasticity: + uv run python -m utils.post.validate_demand_flex_shift \ + --utility coned --elasticity -0.10 \ + --output-dir dev_plots/flex/coned + + # Seasonal elasticities (no-tech values from periods YAML): + uv run python -m utils.post.validate_demand_flex_shift \ + --utility cenhud --elasticity winter=-0.12,summer=-0.14 \ + --output-dir dev_plots/flex/cenhud + + # Seasonal elasticities (with-tech values): + uv run python -m utils.post.validate_demand_flex_shift \ + --utility cenhud --elasticity winter=-0.18,summer=-0.22 \ + --output-dir dev_plots/flex/cenhud + + # With CAIRO tracker cross-check: + uv run python -m utils.post.validate_demand_flex_shift \ + --utility cenhud --elasticity winter=-0.12,summer=-0.14 \ + --batch ny_20260326_elast_seasonal_tech \ + --output-dir dev_plots/flex/cenhud +""" + +from __future__ import annotations + +import argparse +import json +import logging +import sys +from pathlib import Path +from typing import cast + +import matplotlib.pyplot as plt +import numpy as np +import pandas as pd +import polars as pl + +from utils.cairo import ( + _build_period_consumption, + 
_build_period_shift_targets, + _compute_equivalent_flat_tariff, + assign_hourly_periods, + extract_tou_period_rates, +) +from utils.pre.compute_tou import SeasonTouSpec, load_season_specs + +log = logging.getLogger(__name__) + +plt.style.use("seaborn-v0_8-whitegrid") +plt.rcParams.update( + { + "figure.dpi": 150, + "savefig.dpi": 150, + "savefig.bbox": "tight", + "font.size": 10, + "axes.titlesize": 12, + "axes.labelsize": 10, + } +) + +WINTER_MONTHS = {1, 2, 3, 10, 11, 12} +SUMMER_MONTHS = {4, 5, 6, 7, 8, 9} + +STATE_CONFIGS: dict[str, dict] = { + "ny": { + "utilities": ("cenhud", "coned", "nimo", "nyseg", "or", "psegli", "rge"), + "path_metadata": Path( + "/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/metadata/state=NY/" + "upgrade=00/metadata-sb.parquet" + ), + "path_utility_assignment": Path( + "/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/metadata_utility/" + "state=NY/utility_assignment.parquet" + ), + "path_loads_dir": Path( + "/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/load_curve_hourly/" + "state=NY/upgrade=00" + ), + "path_tou_derivation_dir": Path( + "rate_design/hp_rates/ny/config/tou_derivation" + ), + "path_tariffs_electric_dir": Path( + "rate_design/hp_rates/ny/config/tariffs/electric" + ), + }, +} + + +# ── Data loading ────────────────────────────────────────────────────────────── + + +def load_hp_metadata( + path_metadata: Path, + path_utility_assignment: Path, +) -> pl.DataFrame: + """Load metadata and return HP buildings with utility assignment and weight.""" + meta = pl.read_parquet(path_metadata).select( + "bldg_id", "postprocess_group.has_hp", "weight" + ) + util = pl.read_parquet(path_utility_assignment).select( + "bldg_id", "sb.electric_utility" + ) + joined = meta.join(util, on="bldg_id", how="inner") + hp = joined.filter(pl.col("postprocess_group.has_hp") == True) # noqa: E712 + log.info("Loaded %d HP buildings out of %d total", hp.height, joined.height) + return hp + + +def load_building_loads(bldg_ids: list[int], loads_dir: Path) -> 
pd.DataFrame: + """Load hourly electric net load for buildings from local parquet. + + Returns pandas DataFrame with MultiIndex (bldg_id, time) and column + 'electricity_net' in kWh. + """ + frames: list[pd.DataFrame] = [] + missing = 0 + for bid in bldg_ids: + path = loads_dir / f"{bid}-0.parquet" + if not path.exists(): + missing += 1 + continue + df = pl.read_parquet( + path, + columns=["timestamp", "out.electricity.net.energy_consumption"], + ).to_pandas() + df = df.rename( + columns={ + "timestamp": "time", + "out.electricity.net.energy_consumption": "electricity_net", + } + ) + df["bldg_id"] = bid + frames.append(df) + + if missing: + log.warning("Missing load files for %d of %d buildings", missing, len(bldg_ids)) + if not frames: + raise FileNotFoundError(f"No load files found in {loads_dir}") + + combined = pd.concat(frames, ignore_index=True) + combined = combined.set_index(["bldg_id", "time"]).sort_index() + log.info("Loaded %d buildings, %d rows", len(frames), len(combined)) + return combined + + +def load_tou_context( + utility: str, + tou_derivation_dir: Path, + tariffs_electric_dir: Path, +) -> tuple[list[SeasonTouSpec], dict]: + """Load TOU derivation specs and the flex tariff for one utility.""" + deriv_path = tou_derivation_dir / f"{utility}_hp_seasonalTOU_derivation.json" + specs = load_season_specs(deriv_path) + tariff_path = tariffs_electric_dir / f"{utility}_hp_seasonalTOU_flex.json" + if not tariff_path.exists(): + tariff_path = tariffs_electric_dir / f"{utility}_hp_seasonalTOU.json" + with open(tariff_path) as f: + tou_tariff = json.load(f) + return specs, tou_tariff + + +# ── Shift reproduction ──────────────────────────────────────────────────────── + + +def reproduce_shift( + loads_df: pd.DataFrame, + period_rate: pd.Series, + period_map: pd.Series, + season_specs: list[SeasonTouSpec], + elasticity: float | dict[str, float], +) -> pd.DataFrame: + """Apply constant-elasticity shifting to all buildings, return hourly results. 
+ + Args: + elasticity: Scalar applied to all seasons, or a ``{season_name: value}`` + dict for per-season elasticity. + + Returns a DataFrame (plain RangeIndex, not the loads_df MultiIndex; seasons + with zero elasticity are omitted) with columns: + electricity_net_orig, electricity_net_shifted, hourly_shift_kw, + energy_period, hour_of_day, month, season, tou_rate + """ + time_level = pd.DatetimeIndex(loads_df.index.get_level_values("time")) + + result_parts: list[pd.DataFrame] = [] + + for spec in season_specs: + season_months = set(spec.season.months) + season_name = spec.season.name + + season_eps = ( + elasticity.get(season_name, 0.0) + if isinstance(elasticity, dict) + else elasticity + ) + if season_eps == 0.0: + continue + + season_mask = time_level.to_series().dt.month.isin(season_months).to_numpy() + season_df = loads_df.loc[season_mask].copy().reset_index() + + period_lookup = period_map.reset_index() + period_lookup.columns = pd.Index(["time", "energy_period"]) + season_df = season_df.merge(period_lookup, on="time", how="left") + + period_consumption = _build_period_consumption(season_df) + p_flat = _compute_equivalent_flat_tariff(period_consumption, period_rate) + + targets = _build_period_shift_targets( + period_consumption, period_rate, season_eps, p_flat, receiver_period=None + ) + + # Distribute period-level shifts to hourly rows proportionally + hourly = season_df.merge( + targets[["bldg_id", "energy_period", "load_shift", "Q_orig"]], + on=["bldg_id", "energy_period"], + how="left", + ) + period_sums = hourly.groupby(["bldg_id", "energy_period"])[ + "electricity_net" + ].transform("sum") + hour_share = np.where( + period_sums != 0, hourly["electricity_net"] / period_sums, 0.0 + ) + hourly["hourly_shift_kw"] = hourly["load_shift"] * hour_share + hourly["electricity_net_shifted"] = ( + hourly["electricity_net"] + hourly["hourly_shift_kw"] + ) + hourly["season"] = season_name + hourly["hour_of_day"] = hourly["time"].dt.hour + hourly["month"] = hourly["time"].dt.month + + rate_map = period_rate.to_dict()
hourly["tou_rate"] = hourly["energy_period"].map(rate_map) + + hourly = hourly.rename(columns={"electricity_net": "electricity_net_orig"}) + result_parts.append( + hourly[ + [ + "bldg_id", + "time", + "electricity_net_orig", + "electricity_net_shifted", + "hourly_shift_kw", + "energy_period", + "hour_of_day", + "month", + "season", + "tou_rate", + ] + ] + ) + + result = pd.concat(result_parts, ignore_index=True).sort_values(["bldg_id", "time"]) + log.info("Reproduced shift: %d hourly rows", len(result)) + return result + + +# ── Validation checks ───────────────────────────────────────────────────────── + + +def run_validation_checks( + hourly: pd.DataFrame, + season_specs: list[SeasonTouSpec], + period_rate: pd.Series, +) -> dict: + """Run validation checks on the shifted hourly data. Returns summary dict.""" + results: dict = {"energy_conservation": {}, "direction": {}, "magnitude": {}} + p_flat_map: dict[str, float] = {} + + for spec in season_specs: + sn = spec.season.name + sh = hourly[hourly["season"] == sn] + + # Energy conservation: per building + bldg_totals = sh.groupby("bldg_id").agg( + orig_kwh=("electricity_net_orig", "sum"), + shifted_kwh=("electricity_net_shifted", "sum"), + ) + bldg_totals["diff_kwh"] = bldg_totals["shifted_kwh"] - bldg_totals["orig_kwh"] + max_diff = bldg_totals["diff_kwh"].abs().max() + violations = (bldg_totals["diff_kwh"].abs() > 0.01).sum() + results["energy_conservation"][sn] = { + "max_abs_diff_kwh": float(max_diff), + "violations": int(violations), + "pass": violations == 0, + } + + # Direction: peak kWh should decrease, off-peak should increase + rates = period_rate.to_dict() + period_consumption_all = sh.groupby("energy_period").agg( + orig=("electricity_net_orig", "sum"), + shifted=("electricity_net_shifted", "sum"), + ) + period_consumption_all["shift"] = ( + period_consumption_all["shifted"] - period_consumption_all["orig"] + ) + + global_p_flat = ( + period_consumption_all["orig"] * pd.Series(rates) + ).sum() / 
period_consumption_all["orig"].sum() + p_flat_map[sn] = float(global_p_flat) + + direction_ok = True + direction_details = {} + for ep, row in period_consumption_all.iterrows(): + ep_rate = rates.get(int(ep), 0.0) + if ep_rate > global_p_flat: + expected_sign = "decrease" + ok = row["shift"] <= 0.01 + else: + expected_sign = "increase" + ok = row["shift"] >= -0.01 + direction_ok = direction_ok and ok + direction_details[int(ep)] = { + "rate": float(ep_rate), + "is_peak": ep_rate > global_p_flat, + "shift_kwh": float(row["shift"]), + "expected": expected_sign, + "pass": ok, + } + results["direction"][sn] = { + "pass": direction_ok, + "periods": direction_details, + } + + # Magnitude: aggregate peak reduction % + peak_eps = [ + ep + for ep, r in rates.items() + if r > global_p_flat and ep in period_consumption_all.index + ] + if peak_eps: + peak_orig = period_consumption_all.loc[peak_eps, "orig"].sum() + peak_shifted = period_consumption_all.loc[peak_eps, "shifted"].sum() + peak_red_pct = ( + (peak_orig - peak_shifted) / peak_orig * 100 if peak_orig > 0 else 0.0 + ) + else: + peak_red_pct = 0.0 + results["magnitude"][sn] = { + "peak_reduction_pct": float(peak_red_pct), + "ratio": spec.peak_offpeak_ratio, + } + + return results + + +def print_validation_report(checks: dict) -> None: + """Print a human-readable validation report.""" + print("\n" + "=" * 70) + print("DEMAND-FLEX SHIFT VALIDATION") + print("=" * 70) + + for sn in sorted(checks["energy_conservation"]): + ec = checks["energy_conservation"][sn] + status = "PASS" if ec["pass"] else "FAIL" + print(f"\n [{status}] Energy conservation ({sn}):") + print(f" max |orig - shifted| = {ec['max_abs_diff_kwh']:.6f} kWh") + if not ec["pass"]: + print(f" {ec['violations']} buildings exceeded 0.01 kWh tolerance") + + for sn in sorted(checks["direction"]): + dr = checks["direction"][sn] + status = "PASS" if dr["pass"] else "FAIL" + print(f"\n [{status}] Direction check ({sn}):") + for ep, info in 
sorted(dr["periods"].items()): + peak_label = "PEAK" if info["is_peak"] else "offpk" + ep_status = "ok" if info["pass"] else "FAIL" + print( + f" period {ep} ({peak_label}, ${info['rate']:.4f}/kWh):" + f" shift={info['shift_kwh']:+,.0f} kWh [{ep_status}]" + ) + + for sn in sorted(checks["magnitude"]): + mg = checks["magnitude"][sn] + print( + f"\n Peak reduction ({sn}): {mg['peak_reduction_pct']:.2f}%" + f" (TOU ratio={mg['ratio']:.2f})" + ) + + +# ── Diagnostic plots ────────────────────────────────────────────────────────── + + +def _get_peak_hours(spec: SeasonTouSpec) -> set[int]: + return set(spec.peak_hours) + + +def _season_spec_by_name(specs: list[SeasonTouSpec], name: str) -> SeasonTouSpec | None: + for s in specs: + if s.season.name == name: + return s + return None + + +def plot_building_daily_profile( + hourly: pd.DataFrame, + bldg_id: int, + season_specs: list[SeasonTouSpec], + utility: str, + output_dir: Path, +) -> None: + """Plot 1: Per-building daily load profile with TOU overlay for one building.""" + bldg = hourly[hourly["bldg_id"] == bldg_id] + if bldg.empty: + return + + seasons = ["summer", "winter"] + fig, axes = plt.subplots(1, 2, figsize=(14, 5), sharey=True) + + for ax, sn in zip(axes, seasons): + spec = _season_spec_by_name(season_specs, sn) + if spec is None: + continue + peak_hours = _get_peak_hours(spec) + + sdata = bldg[bldg["season"] == sn] + if sdata.empty: + ax.set_title(f"{sn.title()} — no data") + continue + + # Pick a representative weekday (middle of season) + sdata_ts = pd.to_datetime(sdata["time"]) + weekday_mask = sdata_ts.dt.dayofweek < 5 + sdata_wd = sdata[weekday_mask.values] + if sdata_wd.empty: + continue + + dates = sdata_wd["time"].dt.date.unique() + mid_date = dates[len(dates) // 2] + day = sdata_wd[sdata_wd["time"].dt.date == mid_date] + if day.empty: + continue + + hours = day["hour_of_day"].values + orig = day["electricity_net_orig"].values + shifted = day["electricity_net_shifted"].values + rates = 
day["tou_rate"].values + + # Shade peak hours + for h in range(24): + if h in peak_hours: + ax.axvspan(h - 0.5, h + 0.5, alpha=0.12, color="red", zorder=0) + + ax.plot( + hours, + orig, + "o-", + color="#2c3e50", + linewidth=1.8, + markersize=3, + label="Original", + zorder=3, + ) + ax.plot( + hours, + shifted, + "s--", + color="#e74c3c", + linewidth=1.8, + markersize=3, + label="Shifted", + zorder=3, + ) + + ax2 = ax.twinx() + ax2.step( + hours, + rates, + where="mid", + color="#7f8c8d", + linewidth=1.0, + alpha=0.6, + label="TOU rate", + ) + ax2.set_ylabel("TOU rate ($/kWh)", color="#7f8c8d") + ax2.tick_params(axis="y", labelcolor="#7f8c8d") + + shift_kwh = float(day["hourly_shift_kw"].sum()) + peak_orig_kwh = float( + day.loc[day["hour_of_day"].isin(peak_hours), "electricity_net_orig"].sum() + ) + peak_shifted_kwh = float( + day.loc[ + day["hour_of_day"].isin(peak_hours), "electricity_net_shifted" + ].sum() + ) + peak_red = ( + (peak_orig_kwh - peak_shifted_kwh) / peak_orig_kwh * 100 + if peak_orig_kwh > 0 + else 0 + ) + + ax.set_title(f"{sn.title()} ({mid_date})") + ax.set_xlabel("Hour of day") + ax.set_xlim(-0.5, 23.5) + ax.set_xticks(range(0, 24, 3)) + textstr = ( + f"Peak red: {peak_red:.1f}%\n" + f"Net shift: {shift_kwh:+.2f} kWh\n" + f"Ratio: {spec.peak_offpeak_ratio:.2f}" + ) + ax.text( + 0.02, + 0.98, + textstr, + transform=ax.transAxes, + fontsize=8, + verticalalignment="top", + bbox={"boxstyle": "round,pad=0.3", "facecolor": "wheat", "alpha": 0.5}, + ) + + axes[0].set_ylabel("Load (kWh)") + axes[0].legend(loc="upper right", fontsize=8) + + fig.suptitle( + f"{utility.upper()} — Building {bldg_id} daily profile", + fontsize=13, + fontweight="bold", + ) + fig.tight_layout() + path = output_dir / f"building_daily_profile_{bldg_id}.png" + fig.savefig(path) + plt.close(fig) + log.info("Saved %s", path) + + +def plot_aggregate_daily_profile( + hourly: pd.DataFrame, + season_specs: list[SeasonTouSpec], + utility: str, + output_dir: Path, +) -> None: + """Plot 
2: Aggregate average daily load profile by season.""" + fig, axes = plt.subplots(1, 2, figsize=(14, 5), sharey=True) + seasons = ["summer", "winter"] + + for ax, sn in zip(axes, seasons): + spec = _season_spec_by_name(season_specs, sn) + if spec is None: + continue + peak_hours = _get_peak_hours(spec) + + sdata = hourly[hourly["season"] == sn] + if sdata.empty: + continue + + profile = sdata.groupby("hour_of_day").agg( + orig=("electricity_net_orig", "mean"), + shifted=("electricity_net_shifted", "mean"), + ) + hours = profile.index.values + orig = profile["orig"].values + shifted = profile["shifted"].values + + for h in range(24): + if h in peak_hours: + ax.axvspan(h - 0.5, h + 0.5, alpha=0.10, color="red", zorder=0) + + ax.plot( + hours, + orig, + "o-", + color="#2c3e50", + linewidth=2, + markersize=3, + label="Original", + ) + ax.plot( + hours, + shifted, + "s-", + color="#e74c3c", + linewidth=2, + markersize=3, + label="Shifted", + ) + + ax.fill_between( + hours, + orig, + shifted, + where=orig > shifted, + interpolate=True, + alpha=0.25, + color="#e74c3c", + label="Load removed (peak)", + ) + ax.fill_between( + hours, + orig, + shifted, + where=shifted > orig, + interpolate=True, + alpha=0.25, + color="#3498db", + label="Load added (off-peak)", + ) + + total_orig_peak = float( + sdata.loc[ + sdata["hour_of_day"].isin(peak_hours), "electricity_net_orig" + ].sum() + ) + total_shifted_peak = float( + sdata.loc[ + sdata["hour_of_day"].isin(peak_hours), "electricity_net_shifted" + ].sum() + ) + n_bldgs = sdata["bldg_id"].nunique() + peak_red = ( + (total_orig_peak - total_shifted_peak) / total_orig_peak * 100 + if total_orig_peak > 0 + else 0 + ) + total_shift = float(sdata["hourly_shift_kw"].sum()) / n_bldgs + + ax.set_title(f"{sn.title()} (mean of {n_bldgs} HP buildings)") + ax.set_xlabel("Hour of day") + ax.set_xlim(-0.5, 23.5) + ax.set_xticks(range(0, 24, 3)) + + textstr = ( + f"Peak red: {peak_red:.1f}%\n" + f"kWh shifted/bldg: {total_shift:+,.1f}\n" + 
f"Ratio: {spec.peak_offpeak_ratio:.2f}" + ) + ax.text( + 0.02, + 0.98, + textstr, + transform=ax.transAxes, + fontsize=8, + verticalalignment="top", + bbox={"boxstyle": "round,pad=0.3", "facecolor": "wheat", "alpha": 0.5}, + ) + + axes[0].set_ylabel("Mean load (kWh)") + axes[0].legend(loc="upper right", fontsize=7) + fig.suptitle( + f"{utility.upper()} — Aggregate daily load profile", + fontsize=13, + fontweight="bold", + ) + fig.tight_layout() + path = output_dir / "aggregate_daily_profile.png" + fig.savefig(path) + plt.close(fig) + log.info("Saved %s", path) + + +def plot_net_shift_by_hour( + hourly: pd.DataFrame, + season_specs: list[SeasonTouSpec], + utility: str, + output_dir: Path, +) -> None: + """Plot 3: Net kW shift by hour-of-day, bars colored by sign.""" + fig, axes = plt.subplots(1, 2, figsize=(14, 5), sharey=True) + seasons = ["summer", "winter"] + + for ax, sn in zip(axes, seasons): + spec = _season_spec_by_name(season_specs, sn) + if spec is None: + continue + peak_hours = _get_peak_hours(spec) + + sdata = hourly[hourly["season"] == sn] + if sdata.empty: + continue + + n_bldgs = sdata["bldg_id"].nunique() + mean_shift = sdata.groupby("hour_of_day")["hourly_shift_kw"].mean() + + colors = ["#e74c3c" if v < 0 else "#3498db" for v in mean_shift.values] + ax.bar( + mean_shift.index, + mean_shift.values, + color=colors, + width=0.8, + edgecolor="white", + linewidth=0.5, + ) + + for h in range(24): + if h in peak_hours: + ax.axvspan(h - 0.5, h + 0.5, alpha=0.06, color="red", zorder=0) + + ax.axhline(0, color="black", linewidth=0.5) + ax.set_title(f"{sn.title()} (mean/building, n={n_bldgs})") + ax.set_xlabel("Hour of day") + ax.set_xlim(-0.5, 23.5) + ax.set_xticks(range(0, 24, 3)) + + net_check = mean_shift.sum() + ax.text( + 0.98, + 0.02, + f"Net: {net_check:+.4f} kWh\n(should be ~0)", + transform=ax.transAxes, + fontsize=7, + ha="right", + bbox={ + "boxstyle": "round,pad=0.3", + "facecolor": "lightyellow", + "alpha": 0.7, + }, + ) + + 
axes[0].set_ylabel("Mean shift (kWh)") + fig.suptitle( + f"{utility.upper()} — Net load shift by hour", + fontsize=13, + fontweight="bold", + ) + fig.tight_layout() + path = output_dir / "net_shift_by_hour.png" + fig.savefig(path) + plt.close(fig) + log.info("Saved %s", path) + + +def plot_shift_heatmap( + hourly: pd.DataFrame, + season_specs: list[SeasonTouSpec], + utility: str, + output_dir: Path, +) -> None: + """Plot 4: Month x hour heatmap of mean kW shift per building.""" + pivot = hourly.groupby(["month", "hour_of_day"])["hourly_shift_kw"].mean() + heatmap_data = pivot.unstack(level="hour_of_day").sort_index() + + fig, ax = plt.subplots(figsize=(14, 5)) + vmax = max(abs(heatmap_data.min().min()), abs(heatmap_data.max().max())) + im = ax.imshow( + heatmap_data.values, + aspect="auto", + cmap="RdBu", + vmin=-vmax, + vmax=vmax, + interpolation="nearest", + ) + + ax.set_yticks(range(len(heatmap_data.index))) + month_names = [ + "Jan", + "Feb", + "Mar", + "Apr", + "May", + "Jun", + "Jul", + "Aug", + "Sep", + "Oct", + "Nov", + "Dec", + ] + ax.set_yticklabels([month_names[m - 1] for m in heatmap_data.index]) + ax.set_xticks(range(24)) + ax.set_xticklabels([str(h) for h in range(24)], fontsize=8) + ax.set_xlabel("Hour of day") + ax.set_ylabel("Month") + + # Annotate peak hours per season + for spec in season_specs: + peak_hours = _get_peak_hours(spec) + for m in spec.season.months: + row_idx = ( + list(heatmap_data.index).index(m) if m in heatmap_data.index else None + ) + if row_idx is None: + continue + for h in peak_hours: + ax.plot(h, row_idx, "k.", markersize=2, alpha=0.4) + + cbar = fig.colorbar(im, ax=ax, shrink=0.8) + cbar.set_label("Mean shift (kWh/building)") + ax.set_title( + f"{utility.upper()} — Hourly load shift heatmap" + f" (red=removed, blue=added, dots=peak hours)", + fontsize=12, + fontweight="bold", + ) + fig.tight_layout() + path = output_dir / "shift_heatmap.png" + fig.savefig(path) + plt.close(fig) + log.info("Saved %s", path) + + +def 
plot_peak_reduction_distribution( + hourly: pd.DataFrame, + season_specs: list[SeasonTouSpec], + utility: str, + output_dir: Path, +) -> None: + """Plot 5: Histogram of per-building peak reduction % by season.""" + fig, axes = plt.subplots(1, 2, figsize=(14, 5)) + seasons = ["summer", "winter"] + + for ax, sn in zip(axes, seasons): + spec = _season_spec_by_name(season_specs, sn) + if spec is None: + continue + peak_hours = _get_peak_hours(spec) + + sdata = hourly[hourly["season"] == sn] + if sdata.empty: + continue + + peak_data = sdata[sdata["hour_of_day"].isin(peak_hours)] + bldg_peak = peak_data.groupby("bldg_id").agg( + orig=("electricity_net_orig", "sum"), + shifted=("electricity_net_shifted", "sum"), + ) + bldg_peak["reduction_pct"] = np.where( + bldg_peak["orig"] > 0, + (bldg_peak["orig"] - bldg_peak["shifted"]) / bldg_peak["orig"] * 100, + 0.0, + ) + + data_range = bldg_peak["reduction_pct"].max() - bldg_peak["reduction_pct"].min() + n_bins = max(5, min(30, int(data_range / 0.3))) if data_range > 0.01 else 5 + ax.hist( + bldg_peak["reduction_pct"], + bins=n_bins, + color="#3498db", + edgecolor="white", + alpha=0.8, + ) + + mean_red = bldg_peak["reduction_pct"].mean() + median_red = bldg_peak["reduction_pct"].median() + ax.axvline( + mean_red, + color="#e74c3c", + linewidth=2, + linestyle="-", + label=f"Mean: {mean_red:.1f}%", + ) + ax.axvline( + median_red, + color="#2ecc71", + linewidth=2, + linestyle="--", + label=f"Median: {median_red:.1f}%", + ) + + ax.set_title(f"{sn.title()} (n={len(bldg_peak)} buildings)") + ax.set_xlabel("Peak reduction (%)") + ax.set_ylabel("Count") + ax.legend(fontsize=8) + + textstr = ( + f"Std: {bldg_peak['reduction_pct'].std():.1f}%\n" + f"Range: [{bldg_peak['reduction_pct'].min():.1f}%," + f" {bldg_peak['reduction_pct'].max():.1f}%]" + ) + ax.text( + 0.98, + 0.98, + textstr, + transform=ax.transAxes, + fontsize=8, + ha="right", + va="top", + bbox={ + "boxstyle": "round,pad=0.3", + "facecolor": "lightyellow", + "alpha": 0.7, + 
+            },
+        )
+
+    fig.suptitle(
+        f"{utility.upper()} — Per-building peak reduction distribution",
+        fontsize=13,
+        fontweight="bold",
+    )
+    fig.tight_layout()
+    path = output_dir / "peak_reduction_distribution.png"
+    fig.savefig(path)
+    plt.close(fig)
+    log.info("Saved %s", path)
+
+
+# ── Data outputs ──────────────────────────────────────────────────────────────
+
+
+def write_data_outputs(
+    hourly: pd.DataFrame,
+    season_specs: list[SeasonTouSpec],
+    utility: str,
+    output_dir: Path,
+    n_example_buildings: int = 100,
+) -> None:
+    """Write data files for report reuse."""
+    # Aggregate hourly profile CSV
+    agg = (
+        hourly.groupby(["season", "hour_of_day"])
+        .agg(
+            mean_load_orig_kw=("electricity_net_orig", "mean"),
+            mean_load_shifted_kw=("electricity_net_shifted", "mean"),
+            mean_shift_kw=("hourly_shift_kw", "mean"),
+        )
+        .reset_index()
+    )
+    agg_path = output_dir / f"{utility}_aggregate_hourly_profile.csv"
+    agg.to_csv(agg_path, index=False)
+    log.info("Wrote %s", agg_path)
+
+    # Shift heatmap data CSV
+    hm = (
+        hourly.groupby(["month", "hour_of_day"])["hourly_shift_kw"].mean().reset_index()
+    )
+    hm.columns = ["month", "hour_of_day", "mean_shift_kw"]
+    hm_path = output_dir / f"{utility}_shift_heatmap_data.csv"
+    hm.to_csv(hm_path, index=False)
+    log.info("Wrote %s", hm_path)
+
+    # Per-building peak reduction CSV
+    reduction_rows = []
+    for spec in season_specs:
+        sn = spec.season.name
+        peak_hours = _get_peak_hours(spec)
+        sdata = hourly[hourly["season"] == sn]
+        peak_data = sdata[sdata["hour_of_day"].isin(peak_hours)]
+        bldg_peak = peak_data.groupby("bldg_id").agg(
+            orig=("electricity_net_orig", "sum"),
+            shifted=("electricity_net_shifted", "sum"),
+        )
+        bldg_peak["reduction_pct"] = np.where(
+            bldg_peak["orig"] > 0,
+            (bldg_peak["orig"] - bldg_peak["shifted"]) / bldg_peak["orig"] * 100,
+            0.0,
+        )
+        bldg_peak["season"] = sn
+        reduction_rows.append(bldg_peak.reset_index())
+    red_df = pd.concat(reduction_rows, ignore_index=True)
+    red_path = output_dir /
f"{utility}_building_peak_reduction.csv" + red_df.to_csv(red_path, index=False) + log.info("Wrote %s", red_path) + + # Per-building hourly data for a subset + bldg_ids = sorted(hourly["bldg_id"].unique()) + if len(bldg_ids) > n_example_buildings: + rng = np.random.default_rng(42) + bldg_ids = sorted(rng.choice(bldg_ids, size=n_example_buildings, replace=False)) + subset = hourly[hourly["bldg_id"].isin(bldg_ids)] + sub_path = output_dir / f"{utility}_building_hourly_shifted.parquet" + subset.to_parquet(sub_path, index=False) + log.info("Wrote %s (%d buildings)", sub_path, len(bldg_ids)) + + +# ── CAIRO tracker cross-check ──────────────────────────────────────────────── + + +def crosscheck_cairo_tracker( + hourly: pd.DataFrame, + batch: str, + utility: str, + period_rate: pd.Series, + season_specs: list[SeasonTouSpec], + output_dir: Path, +) -> None: + """Download CAIRO elasticity tracker and compare to our analytical results.""" + import subprocess + import tempfile + + s3_prefix = f"s3://data.sb/switchbox/cairo/outputs/hp_rates/ny/{utility}/{batch}/" + + # List run dirs to find the run13 tracker + result = subprocess.run( + ["aws", "s3", "ls", s3_prefix, "--recursive"], + capture_output=True, + text=True, + timeout=30, + ) + tracker_s3 = None + for line in result.stdout.splitlines(): + if "run13" in line and "demand_flex_elasticity_tracker.csv" in line: + parts = line.strip().split() + key = parts[-1] # relative to bucket root + tracker_s3 = f"s3://data.sb/{key}" + break + + if tracker_s3 is None: + log.warning("Could not find CAIRO tracker for %s/%s", utility, batch) + return + + log.info("Downloading CAIRO tracker: %s", tracker_s3) + with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp: + tmp_path = tmp.name + subprocess.run( + ["aws", "s3", "cp", tracker_s3, tmp_path], + capture_output=True, + timeout=30, + ) + cairo_tracker = pd.read_csv(tmp_path) + Path(tmp_path).unlink(missing_ok=True) + + print(f"\n CAIRO tracker cross-check ({utility}, 
run13):") + print(f" CAIRO tracker: {len(cairo_tracker)} buildings") + print(f" CAIRO columns: {list(cairo_tracker.columns)}") + + # Our analytical tracker: compute per-building per-season-period epsilon + rates = period_rate.to_dict() + our_rows = [] + for spec in season_specs: + sn = spec.season.name + sdata = hourly[hourly["season"] == sn] + if sdata.empty: + continue + + bldg_period = ( + sdata.groupby(["bldg_id", "energy_period"]) + .agg( + orig=("electricity_net_orig", "sum"), + shifted=("electricity_net_shifted", "sum"), + ) + .reset_index() + ) + bldg_period["rate"] = bldg_period["energy_period"].map(rates) + + global_p_flat = ( + sdata["electricity_net_orig"] * sdata["tou_rate"] + ).sum() / sdata["electricity_net_orig"].sum() + + valid = ( + (bldg_period["orig"] > 0) + & (bldg_period["shifted"] > 0) + & (bldg_period["rate"] != global_p_flat) + ) + bldg_period["epsilon"] = np.nan + bldg_period.loc[valid, "epsilon"] = np.log( + bldg_period.loc[valid, "shifted"] / bldg_period.loc[valid, "orig"] + ) / np.log(bldg_period.loc[valid, "rate"] / global_p_flat) + + bldg_period["season"] = sn + our_rows.append(bldg_period) + + our_tracker = pd.concat(our_rows, ignore_index=True) + + # Pivot our tracker to match CAIRO format: bldg_id, season_period columns + our_pivot = our_tracker.pivot_table( + index="bldg_id", + columns=["season", "energy_period"], + values="epsilon", + ) + our_pivot.columns = [f"{s}_period_{p}" for s, p in our_pivot.columns] + + # Compare to CAIRO + common_bldgs = set(our_pivot.index) & set(cairo_tracker["bldg_id"]) + if not common_bldgs: + print(" WARNING: no common buildings between analytical and CAIRO tracker") + return + + print(f" Common buildings: {len(common_bldgs)}") + cairo_indexed = cairo_tracker.set_index("bldg_id") + common_cols = set(our_pivot.columns) & set(cairo_indexed.columns) + if not common_cols: + print(f" Our columns: {list(our_pivot.columns)}") + print(f" CAIRO columns: {list(cairo_indexed.columns)}") + print(" WARNING: no 
matching column names")
+        return
+
+    diffs = []
+    for col in sorted(common_cols):
+        ours = our_pivot.loc[list(common_bldgs), col].dropna()
+        theirs = cairo_indexed.loc[list(common_bldgs), col].dropna()
+        common_idx = ours.index.intersection(theirs.index)
+        if common_idx.empty:
+            continue
+        diff = (ours.loc[common_idx] - theirs.loc[common_idx]).abs()
+        diffs.append(diff)
+        print(
+            f"  {col}: max|Δε|={diff.max():.6f}, "
+            f"mean|Δε|={diff.mean():.6f}, n={len(common_idx)}"
+        )
+
+    if diffs:
+        all_diffs = pd.concat(diffs)
+        print(
+            f"  Overall: max|Δε|={all_diffs.max():.6f}, "
+            f"mean|Δε|={all_diffs.mean():.6f}"
+        )
+
+
+# ── Main ─────────────────────────────────────────────────────────────────────
+
+
+def _parse_elasticity(value: str) -> float | dict[str, float]:
+    """Parse elasticity as a scalar or ``season=value,...`` dict.
+
+    Examples: ``-0.12``, ``winter=-0.12,summer=-0.14``.
+    """
+    try:
+        return float(value)
+    except ValueError:
+        pass
+    pairs = {}
+    for part in value.split(","):
+        k, _, v = part.partition("=")
+        k = k.strip()
+        v = v.strip()
+        if not k or not v:
+            raise argparse.ArgumentTypeError(
+                f"Invalid elasticity format: {value!r}. "
+                "Use a number (e.g. -0.12) or season=value pairs "
+                "(e.g. winter=-0.12,summer=-0.14)."
+            )
+        pairs[k] = float(v)
+    return pairs
+
+
+def _parse_args(argv: list[str] | None = None) -> argparse.Namespace:
+    p = argparse.ArgumentParser(description=__doc__)
+    p.add_argument("--utility", required=True)
+    p.add_argument("--state", default="ny", choices=list(STATE_CONFIGS.keys()))
+    p.add_argument("--elasticity", type=_parse_elasticity, default=-0.10)
+    p.add_argument("--batch", default=None, help="S3 batch for CAIRO cross-check")
+    p.add_argument("--output-dir", type=Path, required=True)
+    p.add_argument(
+        "--n-example-buildings",
+        type=int,
+        default=100,
+        help="Buildings to include in hourly parquet (default 100)",
+    )
+    p.add_argument(
+        "--n-profile-buildings",
+        type=int,
+        default=5,
+        help="Buildings to generate individual profile plots for (default 5)",
+    )
+    p.add_argument("-v", "--verbose", action="store_true")
+    return p.parse_args(argv)
+
+
+def main(argv: list[str] | None = None) -> None:
+    args = _parse_args(argv)
+    logging.basicConfig(
+        level=logging.DEBUG if args.verbose else logging.INFO,
+        format="%(levelname)s %(name)s: %(message)s",
+    )
+
+    cfg = STATE_CONFIGS[args.state]
+    output_dir = Path(args.output_dir)
+    output_dir.mkdir(parents=True, exist_ok=True)
+
+    project_root = Path(__file__).resolve().parents[2]
+
+    # Load metadata
+    log.info("Loading HP metadata...")
+    hp_meta = load_hp_metadata(cfg["path_metadata"], cfg["path_utility_assignment"])
+    util_hp = hp_meta.filter(pl.col("sb.electric_utility") == args.utility)
+    hp_bldg_ids = util_hp["bldg_id"].to_list()
+    log.info("%d HP buildings for %s", len(hp_bldg_ids), args.utility)
+
+    if not hp_bldg_ids:
+        log.error("No HP buildings for %s", args.utility)
+        sys.exit(1)
+
+    # Load TOU context
+    tou_deriv_dir = project_root / cfg["path_tou_derivation_dir"]
+    tariffs_dir = project_root / cfg["path_tariffs_electric_dir"]
+    specs, tou_tariff = load_tou_context(args.utility, tou_deriv_dir, tariffs_dir)
+    log.info(
+        "TOU specs: %s",
+        [(s.season.name, f"ratio={s.peak_offpeak_ratio:.2f}")
for s in specs],
+    )
+
+    rate_df = extract_tou_period_rates(tou_tariff)
+    period_rate = cast(pd.Series, rate_df.groupby("energy_period")["rate"].first())
+    log.info("Period rates:\n%s", period_rate)
+
+    # Load building loads
+    log.info("Loading HP building loads...")
+    loads_df = load_building_loads(hp_bldg_ids, cfg["path_loads_dir"])
+    time_idx = pd.DatetimeIndex(
+        loads_df.index.get_level_values("time").unique().sort_values()
+    )
+    period_map = assign_hourly_periods(time_idx, tou_tariff)
+
+    # Reproduce the shift
+    log.info("Reproducing demand-flex shift (ε=%s)...", args.elasticity)
+    hourly = reproduce_shift(loads_df, period_rate, period_map, specs, args.elasticity)
+
+    # Validation checks
+    log.info("Running validation checks...")
+    checks = run_validation_checks(hourly, specs, period_rate)
+    print_validation_report(checks)
+
+    # Diagnostic plots
+    log.info("Generating diagnostic plots...")
+
+    # Plot 2: Aggregate daily profile
+    plot_aggregate_daily_profile(hourly, specs, args.utility, output_dir)
+
+    # Plot 3: Net shift by hour
+    plot_net_shift_by_hour(hourly, specs, args.utility, output_dir)
+
+    # Plot 4: Shift heatmap
+    plot_shift_heatmap(hourly, specs, args.utility, output_dir)
+
+    # Plot 5: Peak reduction distribution
+    plot_peak_reduction_distribution(hourly, specs, args.utility, output_dir)
+
+    # Plot 1: Per-building profiles for illustrative buildings
+    annual_load = hourly.groupby("bldg_id")["electricity_net_orig"].sum()
+    quantiles = annual_load.quantile([0.10, 0.25, 0.50, 0.75, 0.90])
+    illustrative_bldgs = []
+    for q_val in quantiles.values:
+        closest = (annual_load - q_val).abs().idxmin()
+        if closest not in illustrative_bldgs:
+            illustrative_bldgs.append(closest)
+
+    if len(illustrative_bldgs) < args.n_profile_buildings:
+        remaining = [b for b in annual_load.index if b not in illustrative_bldgs]
+        rng = np.random.default_rng(42)
+        extra = rng.choice(
+            remaining,
+            size=min(
+                args.n_profile_buildings - len(illustrative_bldgs), len(remaining)
+            ),
+            replace=False,
+        )
+        illustrative_bldgs.extend(extra)
+
+    for bid in illustrative_bldgs:
+        plot_building_daily_profile(hourly, int(bid), specs, args.utility, output_dir)
+
+    # Data outputs
+    log.info("Writing data outputs...")
+    write_data_outputs(
+        hourly,
+        specs,
+        args.utility,
+        output_dir,
+        n_example_buildings=args.n_example_buildings,
+    )
+
+    # CAIRO tracker cross-check
+    if args.batch:
+        log.info("Cross-checking against CAIRO tracker...")
+        try:
+            crosscheck_cairo_tracker(
+                hourly, args.batch, args.utility, period_rate, specs, output_dir
+            )
+        except Exception as e:
+            log.warning("CAIRO cross-check failed: %s", e)
+
+    # Validation summary JSON
+    import json as json_mod
+
+    def _make_serializable(obj: object) -> object:
+        if isinstance(obj, (np.bool_, np.integer)):
+            return obj.item()
+        if isinstance(obj, np.floating):
+            return float(obj)
+        if isinstance(obj, dict):
+            return {k: _make_serializable(v) for k, v in obj.items()}
+        if isinstance(obj, list):
+            return [_make_serializable(v) for v in obj]
+        return obj
+
+    summary_path = output_dir / f"{args.utility}_validation_summary.json"
+    with open(summary_path, "w") as f:
+        json_mod.dump(
+            _make_serializable(
+                {
+                    "utility": args.utility,
+                    "elasticity": args.elasticity,
+                    "n_buildings": len(hp_bldg_ids),
+                    "checks": checks,
+                }
+            ),
+            f,
+            indent=2,
+        )
+    log.info("Wrote %s", summary_path)
+
+    all_pass = all(
+        checks["energy_conservation"][s]["pass"] and checks["direction"][s]["pass"]
+        for s in checks["energy_conservation"]
+    )
+    print(f"\n{'=' * 70}")
+    print(f"Overall: {'ALL CHECKS PASSED' if all_pass else 'SOME CHECKS FAILED'}")
+    print(f"Plots and data written to: {output_dir}")
+    print(f"{'=' * 70}")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/utils/pre/calibrate_demand_flex_elasticity.py b/utils/pre/calibrate_demand_flex_elasticity.py
new file mode 100644
index 00000000..af036714
--- /dev/null
+++ b/utils/pre/calibrate_demand_flex_elasticity.py
@@ -0,0 +1,1401 @@
+"""Per-utility
demand-flex elasticity calibration for heat-pump rate design.
+
+For each utility, loads real TOU derivation data, ResStock HP building loads,
+and marginal cost profiles to compute:
+  - Demand shift at each candidate elasticity (kWh moved, peak reduction %)
+  - Rate arbitrage savings (the primary bill savings mechanism)
+  - MC-based RR reduction savings (secondary, small)
+  - Arcturus "no enabling tech" comparison
+  - Recommended elasticity per utility
+
+Operates purely analytically (no CAIRO run required). Results can be validated
+against actual CAIRO bill outputs from batch ny_20260325b_r1-16.
+
+Usage:
+    uv run python -m utils.pre.calibrate_demand_flex_elasticity \
+        --state ny \
+        --output-dir /tmp/demand_flex_diagnostic
+
+    # Fast dev iteration with sampled buildings:
+    uv run python -m utils.pre.calibrate_demand_flex_elasticity \
+        --state ny --sample-size 50 --output-dir /tmp/demand_flex_diagnostic
+"""
+
+from __future__ import annotations
+
+import argparse
+import json
+import logging
+import math
+import subprocess
+from dataclasses import dataclass, field
+from pathlib import Path
+from typing import cast
+
+import numpy as np
+import pandas as pd
+import polars as pl
+import yaml
+
+from data.eia.hourly_loads.eia_region_config import get_aws_storage_options
+from utils.cairo import (
+    _build_period_consumption,
+    _build_period_shift_targets,
+    _compute_equivalent_flat_tariff,
+    assign_hourly_periods,
+    extract_tou_period_rates,
+)
+from utils.pre.compute_tou import SeasonTouSpec, load_season_specs
+from utils.scenario_config import get_residential_customer_count_from_utility_stats
+
+log = logging.getLogger(__name__)
+
+# ── Constants ─────────────────────────────────────────────────────────────────
+
+WINTER_MONTHS = {1, 2, 3, 10, 11, 12}
+SUMMER_MONTHS = {4, 5, 6, 7, 8, 9}
+
+ARCTURUS_NO_TECH_INTERCEPT = -0.011
+ARCTURUS_NO_TECH_SLOPE = -0.065
+
+ARCTURUS_WITH_TECH_INTERCEPT = -0.011
+ARCTURUS_WITH_TECH_SLOPE = -0.111
+
+
+def
_build_epsilon_range(start: float, end: float, step: float) -> list[float]:
+    """Build a list of epsilon values from ``start`` down to ``end`` (inclusive).
+
+    ``step`` must be negative, since elasticities sweep toward more-negative values.
+    """
+    values: list[float] = []
+    current = start
+    while current >= end - 1e-9:
+        values.append(round(current, 4))
+        current += step
+    return values
+
+
+STATE_CONFIGS: dict[str, dict] = {
+    "ny": {
+        "utilities": ("cenhud", "coned", "nimo", "nyseg", "or", "psegli", "rge"),
+        "path_metadata": Path(
+            "/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/metadata/state=NY/"
+            "upgrade=00/metadata-sb.parquet"
+        ),
+        "path_utility_assignment": Path(
+            "/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/metadata_utility/"
+            "state=NY/utility_assignment.parquet"
+        ),
+        "path_loads_dir": Path(
+            "/ebs/data/nrel/resstock/res_2024_amy2018_2_sb/load_curve_hourly/"
+            "state=NY/upgrade=00"
+        ),
+        "path_tou_derivation_dir": Path(
+            "rate_design/hp_rates/ny/config/tou_derivation"
+        ),
+        "path_tariffs_electric_dir": Path(
+            "rate_design/hp_rates/ny/config/tariffs/electric"
+        ),
+        "path_rev_requirement_dir": Path(
+            "rate_design/hp_rates/ny/config/rev_requirement"
+        ),
+        "mc_base": "s3://data.sb/switchbox/marginal_costs/ny",
+        "mc_year": 2025,
+        "path_electric_utility_stats": "s3://data.sb/eia/861/electric_utility_stats/year=2024/state=NY/data.parquet",
+        "path_periods_dir": Path("rate_design/hp_rates/ny/config/periods"),
+    },
+}
+
+
+# ── Data structures ───────────────────────────────────────────────────────────
+
+
+@dataclass(frozen=True, slots=True)
+class UtilityContext:
+    """All data needed to diagnose one utility."""
+
+    utility: str
+    season_specs: list[SeasonTouSpec]
+    tou_tariff: dict
+    hp_bldg_ids: list[int]
+    hp_weights: dict[int, float]
+    total_weighted_customers: float
+    hp_weighted_customers: float
+    delivery_rr: float = 0.0
+
+
+@dataclass(slots=True)
+class SeasonResult:
+    """Diagnostic results for one utility × season × elasticity."""
+
+    utility: str
+    season: str
+    elasticity: float
+    price_ratio: float
+    base_rate: float
+    peak_hours: list[int]
+    n_peak_hours_per_day: int
+    hp_bldg_count: int
+    hp_weighted_share: float
+    hp_peak_kwh_orig: float
+    hp_peak_kwh_shifted: float
+    peak_reduction_pct: float
+    kwh_shifted: float
+    system_peak_reduction_pct: float
+    p_flat: float
+    arcturus_peak_reduction_pct: float
+    delta_vs_arcturus_pct: float
+    rate_arbitrage_savings_per_hp: float
+    delivery_mc_savings_total: float
+    delivery_mc_savings_per_hp: float
+    delivery_mc_nonzero_hours: int
+    delivery_mc_peak_overlap_hours: int
+
+
+@dataclass(slots=True)
+class SeasonRecommendation:
+    """Per-season elasticity recommendation."""
+
+    season: str
+    recommended_elasticity: float
+    arcturus_target_pct: float
+    peak_reduction_pct: float
+    price_ratio: float
+    savings_per_hp: float
+
+
+@dataclass(slots=True)
+class UtilityResult:
+    """Aggregated diagnostic results for one utility."""
+
+    utility: str
+    season_results: list[SeasonResult] = field(default_factory=list)
+    annual_weighted_ratio: float = 0.0
+    annual_arcturus_pct: float = 0.0
+    recommended_elasticity: float = 0.0
+    recommended_annual_savings: float = 0.0
+    seasonal_recommendations: list[SeasonRecommendation] = field(default_factory=list)
+    seasonal_recommendations_with_tech: list[SeasonRecommendation] = field(
+        default_factory=list
+    )
+    delivery_rr: float = 0.0
+    rr_decrease_by_elasticity: dict[float, float] = field(default_factory=dict)
+
+
+# ── Arcturus model ────────────────────────────────────────────────────────────
+
+
+def arcturus_peak_reduction(price_ratio: float) -> float:
+    """Arcturus 2.0 'no enabling technology' peak reduction prediction.
+
+    Returns a positive fraction (e.g. 0.056 for 5.6% reduction).
+    """
+    if price_ratio <= 1.0:
+        return 0.0
+    return abs(
+        ARCTURUS_NO_TECH_INTERCEPT + ARCTURUS_NO_TECH_SLOPE * math.log(price_ratio)
+    )
+
+
+def arcturus_peak_reduction_with_tech(price_ratio: float) -> float:
+    """Arcturus 2.0 'with enabling technology' peak reduction prediction.
+
+    Returns a positive fraction (e.g.
0.13 for 13% reduction).
+    Roughly 2x the no-tech response at the same ratio.
+    """
+    if price_ratio <= 1.0:
+        return 0.0
+    return abs(
+        ARCTURUS_WITH_TECH_INTERCEPT + ARCTURUS_WITH_TECH_SLOPE * math.log(price_ratio)
+    )
+
+
+# ── Data loading ──────────────────────────────────────────────────────────────
+
+
+def load_hp_metadata(
+    path_metadata: Path,
+    path_utility_assignment: Path,
+) -> pl.DataFrame:
+    """Load metadata and return HP buildings with utility assignment and weight."""
+    meta = pl.read_parquet(path_metadata).select(
+        "bldg_id", "postprocess_group.has_hp", "weight"
+    )
+    util = pl.read_parquet(path_utility_assignment).select(
+        "bldg_id", "sb.electric_utility"
+    )
+    joined = meta.join(util, on="bldg_id", how="inner")
+    hp = joined.filter(pl.col("postprocess_group.has_hp") == True)  # noqa: E712
+    log.info(
+        "Loaded %d HP buildings out of %d total",
+        hp.height,
+        joined.height,
+    )
+    return hp
+
+
+def load_building_loads(
+    bldg_ids: list[int],
+    loads_dir: Path,
+    sample_size: int | None = None,
+) -> pd.DataFrame:
+    """Load hourly electric net load for a set of buildings from local parquet.
+
+    Returns a pandas DataFrame with MultiIndex (bldg_id, time) and column
+    'electricity_net' in kWh, matching CAIRO's expected format.
+ """ + if sample_size and sample_size < len(bldg_ids): + rng = np.random.default_rng(42) + bldg_ids = list(rng.choice(bldg_ids, size=sample_size, replace=False)) + + frames: list[pd.DataFrame] = [] + missing = 0 + for bid in bldg_ids: + path = loads_dir / f"{bid}-0.parquet" + if not path.exists(): + missing += 1 + continue + df = pl.read_parquet( + path, + columns=["timestamp", "out.electricity.net.energy_consumption"], + ).to_pandas() + df = df.rename( + columns={ + "timestamp": "time", + "out.electricity.net.energy_consumption": "electricity_net", + } + ) + df["bldg_id"] = bid + frames.append(df) + + if missing: + log.warning("Missing load files for %d of %d buildings", missing, len(bldg_ids)) + if not frames: + raise FileNotFoundError( + f"No load files found in {loads_dir} for {len(bldg_ids)} buildings" + ) + + combined = pd.concat(frames, ignore_index=True) + combined = combined.set_index(["bldg_id", "time"]).sort_index() + log.info( + "Loaded %d buildings x %d hours = %d rows", + len(frames), + 8760, + len(combined), + ) + return combined + + +def load_tou_context( + utility: str, + tou_derivation_dir: Path, + tariffs_electric_dir: Path, + *, + use_calibrated: bool = False, +) -> tuple[list[SeasonTouSpec], dict]: + """Load TOU derivation specs and the TOU tariff for one utility. + + When *use_calibrated* is True, loads the calibrated tariff + (``*_calibrated.json``) whose rate **levels** match the revenue + requirement. The pre-calibration tariff has correct peak/off-peak + ratios but absolute $/kWh set at MC levels (~1.6× higher than + revenue-neutral for the HP subclass). 
+ """ + deriv_path = tou_derivation_dir / f"{utility}_hp_seasonalTOU_derivation.json" + specs = load_season_specs(deriv_path) + + if use_calibrated: + tariff_path = ( + tariffs_electric_dir / f"{utility}_hp_seasonalTOU_flex_calibrated.json" + ) + if not tariff_path.exists(): + tariff_path = ( + tariffs_electric_dir / f"{utility}_hp_seasonalTOU_calibrated.json" + ) + if not tariff_path.exists(): + raise FileNotFoundError( + f"No calibrated tariff found for {utility}. " + f"Looked for *_flex_calibrated.json and *_calibrated.json in " + f"{tariffs_electric_dir}. Run a precalc CAIRO scenario first " + f"to generate calibrated tariffs, or use --no-calibrated." + ) + else: + tariff_path = tariffs_electric_dir / f"{utility}_hp_seasonalTOU_flex.json" + if not tariff_path.exists(): + tariff_path = tariffs_electric_dir / f"{utility}_hp_seasonalTOU.json" + + log.info(" Tariff: %s", tariff_path.name) + with open(tariff_path) as f: + tou_tariff = json.load(f) + return specs, tou_tariff + + +def load_delivery_mc( + utility: str, + mc_base: str, + mc_year: int, + storage_options: dict[str, str], +) -> pd.DataFrame: + """Load delivery-only MC (dist+sub-tx + bulk_tx) for one utility. + + Returns DataFrame with columns ['timestamp', 'mc_kwh'] in $/kWh. 
+ """ + dist_path = ( + f"{mc_base}/dist_and_sub_tx/utility={utility}/year={mc_year}/data.parquet" + ) + bulk_path = f"{mc_base}/bulk_tx/utility={utility}/year={mc_year}/data.parquet" + + dist = pl.read_parquet(dist_path, storage_options=storage_options) + if "timestamp" in dist.columns: + ts_dtype = dist.schema["timestamp"] + if isinstance(ts_dtype, pl.Datetime) and ts_dtype.time_zone is not None: + dist = dist.with_columns( + pl.col("timestamp").dt.replace_time_zone(None).alias("timestamp") + ) + dist = dist.select( + "timestamp", + pl.col("mc_total_per_kwh").alias("mc_dist"), + ) + + bulk = pl.read_parquet(bulk_path, storage_options=storage_options) + if "timestamp" in bulk.columns: + ts_dtype = bulk.schema["timestamp"] + if isinstance(ts_dtype, pl.Datetime) and ts_dtype.time_zone is not None: + bulk = bulk.with_columns( + pl.col("timestamp").dt.replace_time_zone(None).alias("timestamp") + ) + bulk = bulk.select( + "timestamp", + pl.col("bulk_tx_cost_enduse").alias("mc_bulk"), + ) + + mc = dist.join(bulk, on="timestamp", how="full", coalesce=True).sort("timestamp") + mc = mc.with_columns( + (pl.col("mc_dist").fill_null(0.0) + pl.col("mc_bulk").fill_null(0.0)).alias( + "mc_kwh" + ) + ) + return mc.select("timestamp", "mc_kwh").to_pandas() + + +# ── Core diagnostic logic ───────────────────────────────────────────────────── + + +def diagnose_season( + utility: str, + season_spec: SeasonTouSpec, + loads_df: pd.DataFrame, + period_rate: pd.Series, + period_map: pd.Series, + elasticity: float, + hp_weights: dict[int, float], + total_weighted_customers: float, + hp_weighted_customers: float, + mc_df: pd.DataFrame | None, +) -> SeasonResult: + """Run diagnostic for one utility × season × elasticity.""" + season_months = set(season_spec.season.months) + season_name = season_spec.season.name + + time_level = pd.DatetimeIndex(loads_df.index.get_level_values("time")) + season_mask = time_level.to_series().dt.month.isin(season_months).to_numpy() + season_df = 
loads_df.loc[season_mask, ["electricity_net"]].copy().reset_index()
+
+    period_lookup = period_map.reset_index()
+    period_lookup.columns = pd.Index(["time", "energy_period"])
+    season_df = season_df.merge(period_lookup, on="time", how="left")
+
+    period_consumption = _build_period_consumption(season_df)
+    p_flat = _compute_equivalent_flat_tariff(period_consumption, period_rate)
+
+    targets = _build_period_shift_targets(
+        period_consumption, period_rate, elasticity, p_flat, receiver_period=None
+    )
+
+    peak_periods = set()
+    offpeak_periods = set()
+    for _, row in targets.drop_duplicates("energy_period").iterrows():
+        ep = int(row["energy_period"])
+        rate_val = float(row["rate"])
+        if rate_val > p_flat:
+            peak_periods.add(ep)
+        else:
+            offpeak_periods.add(ep)
+
+    peak_orig = float(
+        targets.loc[targets["energy_period"].isin(peak_periods), "Q_orig"].sum()
+    )
+    peak_shifted = float(
+        targets.loc[targets["energy_period"].isin(peak_periods), "Q_target"].sum()
+    )
+    peak_reduction_kwh = peak_orig - peak_shifted
+    peak_reduction_pct = (peak_reduction_kwh / peak_orig * 100) if peak_orig > 0 else 0
+
+    total_shift = float(targets.loc[targets["load_shift"] < 0, "load_shift"].sum())
+
+    peak_rates = targets.loc[
+        targets["energy_period"].isin(peak_periods), "rate"
+    ].unique()
+    offpeak_rates = targets.loc[
+        targets["energy_period"].isin(offpeak_periods), "rate"
+    ].unique()
+    peak_rate = float(peak_rates.mean()) if len(peak_rates) > 0 else 0.0
+    offpeak_rate = float(offpeak_rates.mean()) if len(offpeak_rates) > 0 else 0.0
+    rate_spread = peak_rate - offpeak_rate
+    rate_arb_savings_total = abs(total_shift) * rate_spread
+
+    n_bldgs = len(targets["bldg_id"].unique())
+    savings_per_hp = rate_arb_savings_total / n_bldgs if n_bldgs > 0 else 0
+
+    arcturus_pct = arcturus_peak_reduction(season_spec.peak_offpeak_ratio) * 100
+    delta_vs_arcturus = peak_reduction_pct - arcturus_pct
+
+    system_peak_pct = (
+        peak_reduction_pct * (hp_weighted_customers / total_weighted_customers)
+        if total_weighted_customers > 0
+        else 0
+    )
+
+    delivery_mc_savings = 0.0
+    mc_nonzero = 0
+    mc_peak_overlap = 0
+    if mc_df is not None:
+        mc_season = mc_df[mc_df["timestamp"].dt.month.isin(season_months)].copy()
+        mc_nonzero = int((mc_season["mc_kwh"].abs() > 1e-9).sum())
+
+        peak_hours_set = set(season_spec.peak_hours)
+        mc_in_peak = mc_season[mc_season["timestamp"].dt.hour.isin(peak_hours_set)]
+        mc_peak_overlap = int((mc_in_peak["mc_kwh"].abs() > 1e-9).sum())
+
+        hourly_shift = season_df.copy()
+        hourly_shift = hourly_shift.merge(
+            targets[["bldg_id", "energy_period", "load_shift", "Q_orig"]],
+            on=["bldg_id", "energy_period"],
+            how="left",
+        )
+        period_sums = hourly_shift.groupby(["bldg_id", "energy_period"])[
+            "electricity_net"
+        ].transform("sum")
+        hour_share = np.where(
+            period_sums != 0,
+            hourly_shift["electricity_net"] / period_sums,
+            0.0,
+        )
+        hourly_shift["hourly_shift_kwh"] = hourly_shift["load_shift"] * hour_share
+
+        weights_series = hourly_shift["bldg_id"].map(hp_weights).fillna(1.0)
+        hourly_shift["weighted_shift"] = (
+            hourly_shift["hourly_shift_kwh"] * weights_series
+        )
+
+        system_hourly = (
+            hourly_shift.groupby("time")["weighted_shift"].sum().reset_index()
+        )
+        system_hourly.columns = ["timestamp", "sys_shift_kwh"]
+        system_hourly["timestamp"] = pd.to_datetime(system_hourly["timestamp"])
+
+        # At runtime, CAIRO re-indexes ResStock AMY2018 loads to the MC year
+        # (year_run, e.g. 2025) via _return_loads_combined. Our analytical
+        # loads are still in the source year, so align to MC year here.
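As an aside, the year re-alignment described in the comment above can be sketched in isolation. This is an illustrative standalone example (the index and years are made up, not the project's data), showing that subtracting the two Jan-1 anchors gives an offset that slides the whole hourly index into the MC year:

```python
import pandas as pd

# Hourly index in the ResStock source year (illustrative).
idx = pd.date_range("2018-01-01", periods=48, freq="h")

# Offset between the two Jan-1 anchors shifts the entire index to the
# MC year in one vectorized operation, preserving the hourly spacing.
mc_year = 2025
offset = pd.Timestamp(f"{mc_year}-01-01") - pd.Timestamp(f"{idx[0].year}-01-01")
aligned = idx + offset
```

Because the offset is a whole number of days between matching Jan-1 anchors, every timestamp keeps its hour-of-day after the shift.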
+        mc_year = int(mc_season["timestamp"].dt.year.iloc[0])
+        shift_year = int(system_hourly["timestamp"].dt.year.iloc[0])
+        if mc_year != shift_year:
+            year_offset = pd.Timestamp(f"{mc_year}-01-01") - pd.Timestamp(
+                f"{shift_year}-01-01"
+            )
+            system_hourly["timestamp"] = system_hourly["timestamp"] + year_offset
+
+        mc_merge = mc_season.merge(system_hourly, on="timestamp", how="left")
+        mc_merge["sys_shift_kwh"] = mc_merge["sys_shift_kwh"].fillna(0.0)
+        delivery_mc_savings = -float(
+            (mc_merge["mc_kwh"] * mc_merge["sys_shift_kwh"]).sum()
+        )
+
+    return SeasonResult(
+        utility=utility,
+        season=season_name,
+        elasticity=elasticity,
+        price_ratio=season_spec.peak_offpeak_ratio,
+        base_rate=season_spec.base_rate,
+        peak_hours=list(season_spec.peak_hours),
+        n_peak_hours_per_day=len(season_spec.peak_hours),
+        hp_bldg_count=len(targets["bldg_id"].unique()),
+        hp_weighted_share=hp_weighted_customers / total_weighted_customers * 100
+        if total_weighted_customers > 0
+        else 0,
+        hp_peak_kwh_orig=peak_orig,
+        hp_peak_kwh_shifted=peak_shifted,
+        peak_reduction_pct=peak_reduction_pct,
+        kwh_shifted=abs(total_shift),
+        system_peak_reduction_pct=system_peak_pct,
+        p_flat=p_flat,
+        arcturus_peak_reduction_pct=arcturus_pct,
+        delta_vs_arcturus_pct=delta_vs_arcturus,
+        rate_arbitrage_savings_per_hp=savings_per_hp,
+        delivery_mc_savings_total=delivery_mc_savings,
+        delivery_mc_savings_per_hp=delivery_mc_savings / n_bldgs if n_bldgs > 0 else 0,
+        delivery_mc_nonzero_hours=mc_nonzero,
+        delivery_mc_peak_overlap_hours=mc_peak_overlap,
+    )
+
+
+def diagnose_utility(
+    ctx: UtilityContext,
+    loads_df: pd.DataFrame,
+    elasticities: list[float],
+    mc_df: pd.DataFrame | None,
+) -> UtilityResult:
+    """Run full diagnostic for one utility across all seasons and elasticities."""
+    result = UtilityResult(utility=ctx.utility, delivery_rr=ctx.delivery_rr)
+
+    rate_df = extract_tou_period_rates(ctx.tou_tariff)
+    period_rate = cast(pd.Series, rate_df.groupby("energy_period")["rate"].first())
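For intuition about the diagnostic machinery, the constant-elasticity period shift (what `_build_period_shift_targets` and `_compute_equivalent_flat_tariff` are used for) can be sketched independently. The numbers below and the proportional rescaling step are illustrative assumptions, not CAIRO's exact algorithm:

```python
import pandas as pd

# Hypothetical two-period season: off-peak (row 0) and peak (row 1).
periods = pd.DataFrame({
    "energy_period": [0, 1],
    "Q_orig": [600.0, 400.0],  # kWh per period
    "rate": [0.10, 0.30],      # $/kWh
})
epsilon = -0.10

# Consumption-weighted equivalent flat price (the no-TOU reference).
p_flat = (periods["Q_orig"] * periods["rate"]).sum() / periods["Q_orig"].sum()

# Constant-elasticity response: Q_target = Q_orig * (p / p_flat) ** epsilon,
# so periods priced above p_flat shed load and cheaper periods absorb it.
periods["Q_target"] = periods["Q_orig"] * (periods["rate"] / p_flat) ** epsilon

# Rescale so total energy is conserved (a pure shift, no conservation effect).
periods["Q_target"] *= periods["Q_orig"].sum() / periods["Q_target"].sum()

peak_reduction = 1 - periods.loc[1, "Q_target"] / periods.loc[1, "Q_orig"]
```

With these illustrative inputs, a modest ε of -0.10 against a 3:1 price ratio yields a peak reduction of a few percent, the same order as the Arcturus no-tech predictions the script compares against.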
+    time_idx = pd.DatetimeIndex(
+        loads_df.index.get_level_values("time").unique().sort_values()
+    )
+    period_map = assign_hourly_periods(time_idx, ctx.tou_tariff)
+
+    for eps in elasticities:
+        for spec in ctx.season_specs:
+            sr = diagnose_season(
+                utility=ctx.utility,
+                season_spec=spec,
+                loads_df=loads_df,
+                period_rate=period_rate,
+                period_map=period_map,
+                elasticity=eps,
+                hp_weights=ctx.hp_weights,
+                total_weighted_customers=ctx.total_weighted_customers,
+                hp_weighted_customers=ctx.hp_weighted_customers,
+                mc_df=mc_df,
+            )
+            result.season_results.append(sr)
+
+    # RR decrease by elasticity: sum delivery MC savings across seasons
+    if ctx.delivery_rr > 0:
+        for eps in elasticities:
+            total_mc_savings = sum(
+                sr.delivery_mc_savings_total
+                for sr in result.season_results
+                if sr.elasticity == eps
+            )
+            result.rr_decrease_by_elasticity[eps] = total_mc_savings / ctx.delivery_rr
+
+    winter_specs = [s for s in ctx.season_specs if s.season.name == "winter"]
+    summer_specs = [s for s in ctx.season_specs if s.season.name == "summer"]
+    if winter_specs and summer_specs:
+        winter_kwh = sum(
+            sr.hp_peak_kwh_orig + sr.kwh_shifted
+            for sr in result.season_results
+            if sr.season == "winter" and sr.elasticity == elasticities[0]
+        )
+        summer_kwh = sum(
+            sr.hp_peak_kwh_orig + sr.kwh_shifted
+            for sr in result.season_results
+            if sr.season == "summer" and sr.elasticity == elasticities[0]
+        )
+        total_kwh = winter_kwh + summer_kwh
+        if total_kwh > 0:
+            w_ratio = winter_specs[0].peak_offpeak_ratio
+            s_ratio = summer_specs[0].peak_offpeak_ratio
+            result.annual_weighted_ratio = (
+                w_ratio * winter_kwh + s_ratio * summer_kwh
+            ) / total_kwh
+        else:
+            result.annual_weighted_ratio = (
+                winter_specs[0].peak_offpeak_ratio + summer_specs[0].peak_offpeak_ratio
+            ) / 2
+    elif ctx.season_specs:
+        result.annual_weighted_ratio = ctx.season_specs[0].peak_offpeak_ratio
+
+    result.annual_arcturus_pct = (
+        arcturus_peak_reduction(result.annual_weighted_ratio) * 100
+    )
+
+    # Per-season
recommendations: each season has its own TOU ratio and + # therefore its own Arcturus target. Find the elasticity that best matches + # the Arcturus peak reduction for that season's ratio. + season_names = sorted({sr.season for sr in result.season_results}) + for sn in season_names: + season_srs = [sr for sr in result.season_results if sr.season == sn] + if not season_srs: + continue + season_ratio = season_srs[0].price_ratio + season_arcturus = arcturus_peak_reduction(season_ratio) * 100 + + best_season_eps = elasticities[0] + best_season_delta = float("inf") + for eps in elasticities: + matching = [sr for sr in season_srs if sr.elasticity == eps] + if not matching: + continue + peak_red = matching[0].peak_reduction_pct + delta = abs(peak_red - season_arcturus) + if delta < best_season_delta: + best_season_delta = delta + best_season_eps = eps + + best_sr = next( + (sr for sr in season_srs if sr.elasticity == best_season_eps), None + ) + result.seasonal_recommendations.append( + SeasonRecommendation( + season=sn, + recommended_elasticity=best_season_eps, + arcturus_target_pct=season_arcturus, + peak_reduction_pct=best_sr.peak_reduction_pct if best_sr else 0.0, + price_ratio=season_ratio, + savings_per_hp=best_sr.rate_arbitrage_savings_per_hp + if best_sr + else 0.0, + ) + ) + + # Per-season recommendations: with enabling technology + for sn in season_names: + season_srs = [sr for sr in result.season_results if sr.season == sn] + if not season_srs: + continue + season_ratio = season_srs[0].price_ratio + season_arcturus_wt = arcturus_peak_reduction_with_tech(season_ratio) * 100 + + best_wt_eps = elasticities[0] + best_wt_delta = float("inf") + for eps in elasticities: + matching = [sr for sr in season_srs if sr.elasticity == eps] + if not matching: + continue + delta = abs(matching[0].peak_reduction_pct - season_arcturus_wt) + if delta < best_wt_delta: + best_wt_delta = delta + best_wt_eps = eps + + best_wt_sr = next( + (sr for sr in season_srs if sr.elasticity == 
best_wt_eps), None + ) + result.seasonal_recommendations_with_tech.append( + SeasonRecommendation( + season=sn, + recommended_elasticity=best_wt_eps, + arcturus_target_pct=season_arcturus_wt, + peak_reduction_pct=best_wt_sr.peak_reduction_pct if best_wt_sr else 0.0, + price_ratio=season_ratio, + savings_per_hp=best_wt_sr.rate_arbitrage_savings_per_hp + if best_wt_sr + else 0.0, + ) + ) + + # Annual recommendation: average peak reduction across seasons + best_eps = elasticities[0] + best_delta = float("inf") + for eps in elasticities: + annual_peak_red = np.mean( + [ + sr.peak_reduction_pct + for sr in result.season_results + if sr.elasticity == eps + ] + ) + delta = abs(annual_peak_red - result.annual_arcturus_pct) + if delta < best_delta: + best_delta = delta + best_eps = eps + + result.recommended_elasticity = best_eps + result.recommended_annual_savings = sum( + sr.rate_arbitrage_savings_per_hp + for sr in result.season_results + if sr.elasticity == best_eps + ) + return result + + +# ── Output formatting ───────────────────────────────────────────────────────── + + +def results_to_dataframe(results: list[UtilityResult]) -> pl.DataFrame: + """Convert all season-level results to a flat Polars DataFrame.""" + rows = [] + for ur in results: + for sr in ur.season_results: + rows.append( + { + "utility": sr.utility, + "season": sr.season, + "elasticity": sr.elasticity, + "price_ratio": round(sr.price_ratio, 4), + "base_rate_kwh": round(sr.base_rate, 6), + "peak_rate_kwh": round(sr.base_rate * sr.price_ratio, 6), + "n_peak_hours_per_day": sr.n_peak_hours_per_day, + "hp_bldg_count": sr.hp_bldg_count, + "hp_weighted_share_pct": round(sr.hp_weighted_share, 2), + "hp_peak_kwh_orig": round(sr.hp_peak_kwh_orig, 1), + "hp_peak_kwh_shifted": round(sr.hp_peak_kwh_shifted, 1), + "peak_reduction_pct": round(sr.peak_reduction_pct, 3), + "kwh_shifted": round(sr.kwh_shifted, 1), + "system_peak_reduction_pct": round(sr.system_peak_reduction_pct, 4), + "p_flat": round(sr.p_flat, 6), 
+ "arcturus_peak_reduction_pct": round( + sr.arcturus_peak_reduction_pct, 3 + ), + "delta_vs_arcturus_pct": round(sr.delta_vs_arcturus_pct, 3), + "rate_arbitrage_savings_per_hp": round( + sr.rate_arbitrage_savings_per_hp, 2 + ), + "delivery_mc_savings_total": round(sr.delivery_mc_savings_total, 2), + "delivery_mc_savings_per_hp": round( + sr.delivery_mc_savings_per_hp, 2 + ), + "delivery_mc_nonzero_hours": sr.delivery_mc_nonzero_hours, + "delivery_mc_peak_overlap_hours": sr.delivery_mc_peak_overlap_hours, + "annual_weighted_ratio": round(ur.annual_weighted_ratio, 4), + "annual_arcturus_pct": round(ur.annual_arcturus_pct, 3), + "recommended_elasticity": ur.recommended_elasticity, + "recommended_annual_savings": round( + ur.recommended_annual_savings, 2 + ), + "seasonal_recommended_elasticity": _seasonal_rec_field( + ur, sr.season, "recommended_elasticity" + ), + "seasonal_arcturus_target_pct": _seasonal_rec_field( + ur, sr.season, "arcturus_target_pct" + ), + "seasonal_recommended_savings": _seasonal_rec_field( + ur, sr.season, "savings_per_hp" + ), + "seasonal_recommended_elasticity_with_tech": _seasonal_rec_field( + ur, sr.season, "recommended_elasticity", with_tech=True + ), + "seasonal_arcturus_target_pct_with_tech": _seasonal_rec_field( + ur, sr.season, "arcturus_target_pct", with_tech=True + ), + "seasonal_recommended_savings_with_tech": _seasonal_rec_field( + ur, sr.season, "savings_per_hp", with_tech=True + ), + } + ) + return pl.DataFrame(rows) + + +def _seasonal_rec_field( + ur: UtilityResult, season: str, field_name: str, *, with_tech: bool = False +) -> float: + """Look up a seasonal recommendation field for the output DataFrame.""" + recs = ( + ur.seasonal_recommendations_with_tech + if with_tech + else ur.seasonal_recommendations + ) + for rec in recs: + if rec.season == season: + return round(getattr(rec, field_name), 4) + return 0.0 + + +S3_OUTPUTS_BASE = "s3://data.sb/switchbox/cairo/outputs/hp_rates" + +COMPARISON_PAIRS: list[tuple[str, str, str, 
str]] = [ + ("In-sample delivery", "run9", "run13", "precalc u00, subclasses"), + ("In-sample supply", "run10", "run14", "precalc u00, subclasses + supply"), + ("Out-of-sample delivery", "run11", "run15", "default u02, all-HP"), + ("Out-of-sample supply", "run12", "run16", "default u02, all-HP + supply"), +] + + +def _find_run_dir(batch_prefix: str, run_fragment: str) -> str | None: + """Find the timestamped run directory under a batch prefix on S3.""" + result = subprocess.run( + ["aws", "s3", "ls", batch_prefix], + capture_output=True, + text=True, + check=False, + ) + if result.returncode != 0: + return None + for line in result.stdout.strip().splitlines(): + parts = line.strip().split() + if len(parts) >= 2 and "PRE" in parts: + dirname = parts[-1].rstrip("/") + if run_fragment in dirname: + return f"{batch_prefix}{dirname}" + return None + + +def _read_annual_bills(csv_path: str) -> pl.DataFrame | None: + """Read a CAIRO bill CSV and return Annual rows only.""" + try: + return cast( + pl.DataFrame, + pl.scan_csv(csv_path).filter(pl.col("month") == "Annual").collect(), + ) + except Exception as e: + log.warning(" Failed to read %s: %s", csv_path, e) + return None + + +@dataclass(frozen=True, slots=True) +class PairResult: + """Savings stats for one comparison pair.""" + + label: str + baseline_run: str + flex_run: str + description: str + mean_savings: float + median_savings: float + n_bldgs: int + + +@dataclass(frozen=True, slots=True) +class UtilityBillComparison: + """All comparison pairs for one utility.""" + + utility: str + analytical_savings: float + analytical_savings_seasonal: float + pairs: list[PairResult] + + +def compare_batch_bills( + state: str, + batch: str, + results: list[UtilityResult], + hp_meta: pl.DataFrame, + *, + with_tech: bool = False, +) -> list[UtilityBillComparison]: + """Compare analytical savings against CAIRO bill outputs across run pairs.""" + comparisons: list[UtilityBillComparison] = [] + + for ur in results: + utility = 
ur.utility + batch_prefix = f"{S3_OUTPUTS_BASE}/{state}/{utility}/{batch}/" + + ls_result = subprocess.run( + ["aws", "s3", "ls", batch_prefix], + capture_output=True, + text=True, + check=False, + ) + if ls_result.returncode != 0: + log.warning(" %s: batch not found at %s", utility, batch_prefix) + continue + + run_dirs: dict[str, str] = {} + for line in ls_result.stdout.strip().splitlines(): + parts = line.strip().split() + if len(parts) >= 2 and "PRE" in parts: + dirname = parts[-1].rstrip("/") + run_dirs[dirname] = f"{batch_prefix}{dirname}" + + hp_bldg_ids = hp_meta.filter(pl.col("sb.electric_utility") == utility)[ + "bldg_id" + ].to_list() + + pairs: list[PairResult] = [] + for label, base_frag, flex_frag, desc in COMPARISON_PAIRS: + base_dir = next((v for k, v in run_dirs.items() if base_frag in k), None) + flex_dir = next((v for k, v in run_dirs.items() if flex_frag in k), None) + if not base_dir or not flex_dir: + log.warning( + " %s: %s or %s not found, skipping %s", + utility, + base_frag, + flex_frag, + label, + ) + continue + + base_df = _read_annual_bills(f"{base_dir}/bills/elec_bills_year_run.csv") + flex_df = _read_annual_bills(f"{flex_dir}/bills/elec_bills_year_run.csv") + if base_df is None or flex_df is None: + continue + + joined = ( + base_df.select( + pl.col("bldg_id"), + pl.col("bill_level").alias("bill_baseline"), + ) + .join( + flex_df.select( + pl.col("bldg_id"), + pl.col("bill_level").alias("bill_flex"), + ), + on="bldg_id", + how="inner", + ) + .with_columns( + (pl.col("bill_baseline") - pl.col("bill_flex")).alias("savings"), + ) + ) + + is_in_sample = "In-sample" in label + if is_in_sample: + joined = joined.filter(pl.col("bldg_id").is_in(hp_bldg_ids)) + + if joined.is_empty(): + log.warning(" %s: no buildings for %s", utility, label) + continue + + pairs.append( + PairResult( + label=label, + baseline_run=base_frag, + flex_run=flex_frag, + description=desc, + mean_savings=float(joined["savings"].mean()), # type: ignore[arg-type] + 
median_savings=float(joined["savings"].median()), # type: ignore[arg-type] + n_bldgs=joined.height, + ) + ) + + if pairs: + recs = ( + ur.seasonal_recommendations_with_tech + if with_tech + else ur.seasonal_recommendations + ) + seasonal_savings = sum(sr.savings_per_hp for sr in recs) + comparisons.append( + UtilityBillComparison( + utility=utility, + analytical_savings=ur.recommended_annual_savings, + analytical_savings_seasonal=seasonal_savings, + pairs=pairs, + ) + ) + + return comparisons + + +def print_batch_comparison( + batch: str, comparisons: list[UtilityBillComparison] +) -> None: + """Print analytical vs CAIRO bill comparison across all run pairs.""" + print("\n" + "=" * 80) + print(f"CAIRO BILL COMPARISON (batch: {batch})") + print("=" * 80) + + for uc in comparisons: + ref = uc.analytical_savings_seasonal or uc.analytical_savings + label_suffix = "seasonal" if uc.analytical_savings_seasonal else "annual" + print( + f"\n {uc.utility.upper()} " + f"(analytical: ${ref:.2f}/HP {label_suffix}" + f", annual=${uc.analytical_savings:.2f}/HP)" + ) + print( + f" {'Comparison':26s} {'Runs':>10s} {'Mean sav':>9s}" + f" {'Med sav':>8s} {'Δ(mean)':>8s} {'N':>5s} Description" + ) + print( + f" {'─' * 26} {'─' * 10} {'─' * 9}" + f" {'─' * 8} {'─' * 8} {'─' * 5} {'─' * 28}" + ) + for p in uc.pairs: + delta = p.mean_savings - ref + print( + f" {p.label:26s} {p.baseline_run:>4s}→{p.flex_run:<4s}" + f" ${p.mean_savings:>8.2f}" + f" ${p.median_savings:>7.2f}" + f" {delta:>+8.2f}" + f" {p.n_bldgs:>5d} {p.description}" + ) + print() + + +def _print_seasonal_table( + title: str, results: list[UtilityResult], variant: str +) -> None: + """Print a seasonal recommendation table (no_tech or with_tech).""" + print(f" {title}\n") + print( + f" {'Utility':8s} {'Season':8s} {'ε':>6s} {'Savings/HP':>11s}" + f" {'Ratio':>7s} {'Peak red':>8s} {'Arcturus':>8s}" + ) + print( + f" {'─' * 8} {'─' * 8} {'─' * 6} {'─' * 11}" + f" {'─' * 7} {'─' * 8} {'─' * 8}" + ) + for ur in results: + recs 
= ( + ur.seasonal_recommendations + if variant == "no_tech" + else ur.seasonal_recommendations_with_tech + ) + for rec in recs: + print( + f" {ur.utility:8s} {rec.season:8s} {rec.recommended_elasticity:>6.2f}" + f" ${rec.savings_per_hp:>9.2f}" + f" {rec.price_ratio:>7.3f}" + f" {rec.peak_reduction_pct:>7.1f}%" + f" {rec.arcturus_target_pct:>7.1f}%" + ) + print() + + +def print_summary(results: list[UtilityResult], *, verbose: bool = False) -> None: + """Print a human-readable summary to stdout.""" + print("\n" + "=" * 80) + print("DEMAND-FLEX ELASTICITY CALIBRATION") + print("=" * 80) + + # ── Annual recommendation table ─────────────────────────────────────── + print("\n RECOMMENDED ELASTICITIES (annual)\n") + print( + f" {'Utility':8s} {'ε':>6s} {'Savings/HP':>11s}" + f" {'Peak win':>8s} {'Wt ratio':>8s} {'Arcturus':>8s}" + ) + print(f" {'─' * 8} {'─' * 6} {'─' * 11} {'─' * 8} {'─' * 8} {'─' * 8}") + for ur in results: + n_peak = 0 + for sr in ur.season_results: + if sr.elasticity == ur.recommended_elasticity: + n_peak = sr.n_peak_hours_per_day + break + print( + f" {ur.utility:8s} {ur.recommended_elasticity:>6.2f}" + f" ${ur.recommended_annual_savings:>9.2f}" + f" {n_peak:>5d} hr" + f" {ur.annual_weighted_ratio:>8.3f}" + f" {ur.annual_arcturus_pct:>7.1f}%" + ) + print() + + # ── Seasonal recommendation table ───────────────────────────────────── + _print_seasonal_table( + "RECOMMENDED ELASTICITIES — no enabling technology (seasonal)", + results, + "no_tech", + ) + _print_seasonal_table( + "RECOMMENDED ELASTICITIES — with enabling technology (seasonal)", + results, + "with_tech", + ) + + # ── RR decrease table ────────────────────────────────────────────────── + has_rr = any(ur.rr_decrease_by_elasticity for ur in results) + if has_rr: + all_eps = sorted( + {e for ur in results for e in ur.rr_decrease_by_elasticity}, + ) + print(" DELIVERY RR DECREASE BY ELASTICITY (delivery MC savings only)\n") + header = f" {'Utility':8s} {'Del. 
RR':>14s}" + for eps in all_eps: + header += f" {eps:>7.2f}" + print(header) + print(f" {'─' * 8} {'─' * 14}" + "".join(f" {'─' * 7}" for _ in all_eps)) + for ur in results: + if not ur.rr_decrease_by_elasticity: + continue + row = f" {ur.utility:8s} ${ur.delivery_rr / 1e6:>10.1f}M" + for eps in all_eps: + pct = ur.rr_decrease_by_elasticity.get(eps, 0.0) * 100 + row += f" {pct:>7.4f}%" + print(row) + # Also show absolute dollar savings for the recommended elasticity + print() + print(f" {'Utility':8s} {'ε':>6s} {'Del MC savings':>14s} Note") + print(f" {'─' * 8} {'─' * 6} {'─' * 14} {'─' * 40}") + for ur in results: + if not ur.rr_decrease_by_elasticity: + continue + rec_eps = ur.recommended_elasticity + total_mc = sum( + sr.delivery_mc_savings_total + for sr in ur.season_results + if sr.elasticity == rec_eps + ) + print( + f" {ur.utility:8s} {rec_eps:>6.2f}" + f" ${total_mc:>13,.0f}" + f" delivery only; supply MC savings not included" + ) + print() + + # ── Caveats ─────────────────────────────────────────────────────────── + print(" CAVEATS:") + print(" - Arcturus uses aggregate heterogeneous response; our model applies") + print(" uniform constant elasticity. 
Recommended epsilon is approximate.") + print(" - Arcturus log-linear form differs from our power-law model;") + print(" comparison valid at specific ratios, not across full curve.") + + # ── Per-utility detail (verbose only) ──────────────────────────────── + if not verbose: + return + + for ur in results: + print(f"\n{'─' * 80}") + print(f"UTILITY: {ur.utility.upper()}") + print( + f" Annual weighted ratio: {ur.annual_weighted_ratio:.3f}" + f" │ Arcturus target: {ur.annual_arcturus_pct:.1f}% peak reduction" + ) + print( + f" RECOMMENDED ELASTICITY: {ur.recommended_elasticity}" + f" │ Annual savings/HP customer: ${ur.recommended_annual_savings:.2f}" + ) + + for eps in sorted({sr.elasticity for sr in ur.season_results}): + print(f"\n ε = {eps}:") + for sr in ur.season_results: + if sr.elasticity != eps: + continue + print( + f" {sr.season:8s} ratio={sr.price_ratio:.2f}" + f" peak_red={sr.peak_reduction_pct:5.2f}%" + f" (arcturus={sr.arcturus_peak_reduction_pct:.2f}%" + f" Δ={sr.delta_vs_arcturus_pct:+.2f}%)" + ) + print( + f" shifted={sr.kwh_shifted:,.0f} kWh" + f" arb_savings=${sr.rate_arbitrage_savings_per_hp:.2f}/hp" + f" mc_savings=${sr.delivery_mc_savings_per_hp:.2f}/hp" + ) + print( + f" mc_nonzero_hrs={sr.delivery_mc_nonzero_hours}" + f" mc_peak_overlap={sr.delivery_mc_peak_overlap_hours}" + f" sys_peak_impact={sr.system_peak_reduction_pct:.3f}%" + ) + + +# ── Main ────────────────────────────────────────────────────────────────────── + + +def _parse_args(argv: list[str] | None = None) -> argparse.Namespace: + p = argparse.ArgumentParser(description=__doc__) + p.add_argument("--state", required=True, choices=list(STATE_CONFIGS.keys())) + p.add_argument( + "--epsilon-start", + type=float, + default=-0.04, + help="Start of elasticity sweep range (default: -0.04)", + ) + p.add_argument( + "--epsilon-end", + type=float, + default=-0.50, + help="End of elasticity sweep range, inclusive (default: -0.50)", + ) + p.add_argument( + "--epsilon-step", + type=float, + 
default=-0.02, + help="Step size for elasticity sweep (default: -0.02)", + ) + p.add_argument("--output-dir", type=Path, default=None) + p.add_argument( + "--sample-size", + type=int, + default=None, + help="Limit HP buildings per utility for faster dev iteration", + ) + p.add_argument( + "--utilities", + default=None, + help="Comma-separated utility subset (default: all)", + ) + p.add_argument( + "--no-calibrated", + dest="calibrated", + action="store_false", + default=True, + help="Use pre-calibration (MC-level) tariff rates instead of calibrated", + ) + p.add_argument( + "--compare-batch", + default=None, + help="CAIRO batch name to compare analytical predictions against actual bills", + ) + p.add_argument( + "--with-tech", + action="store_true", + help="Use with-tech recommendations for batch comparison (match CAIRO runs that used elasticity_with_tech)", + ) + p.add_argument( + "--write-periods", + action="store_true", + help="Write seasonal elasticities into each utility's config/periods YAML", + ) + p.add_argument("-v", "--verbose", action="store_true") + return p.parse_args(argv) + + +def main(argv: list[str] | None = None) -> None: + args = _parse_args(argv) + logging.basicConfig( + level=logging.DEBUG if args.verbose else logging.INFO, + format="%(levelname)s %(name)s: %(message)s", + ) + + cfg = STATE_CONFIGS[args.state] + elasticities = _build_epsilon_range( + args.epsilon_start, args.epsilon_end, args.epsilon_step + ) + utilities = tuple(args.utilities.split(",")) if args.utilities else cfg["utilities"] + storage_options = get_aws_storage_options() + + project_root = Path(__file__).resolve().parents[2] + + log.info("Loading HP metadata...") + hp_meta = load_hp_metadata(cfg["path_metadata"], cfg["path_utility_assignment"]) + + all_meta = pl.read_parquet(cfg["path_utility_assignment"]).select( + "bldg_id", "weight", "sb.electric_utility" + ) + + results: list[UtilityResult] = [] + for utility in utilities: + log.info("=" * 60) + log.info("Diagnosing 
%s...", utility) + + util_hp = hp_meta.filter(pl.col("sb.electric_utility") == utility) + hp_bldg_ids = util_hp["bldg_id"].to_list() + + util_all = all_meta.filter(pl.col("sb.electric_utility") == utility) + raw_weight_sum = float(util_all["weight"].sum()) + + eia_customer_count = get_residential_customer_count_from_utility_stats( + cfg["path_electric_utility_stats"], + utility, + storage_options=storage_options, + ) + scale = eia_customer_count / raw_weight_sum if raw_weight_sum > 0 else 1.0 + log.info( + " EIA customer count: %d, raw weight sum: %.0f, scale factor: %.4f", + eia_customer_count, + raw_weight_sum, + scale, + ) + + hp_weights = { + bid: w * scale + for bid, w in zip(util_hp["bldg_id"].to_list(), util_hp["weight"].to_list()) + } + hp_weighted = float(util_hp["weight"].sum()) * scale + total_weighted = raw_weight_sum * scale # == eia_customer_count + + log.info( + " %d HP buildings (%.0f weighted, %.1f%% of %.0f total)", + len(hp_bldg_ids), + hp_weighted, + hp_weighted / total_weighted * 100 if total_weighted else 0, + total_weighted, + ) + + if not hp_bldg_ids: + log.warning(" No HP buildings for %s, skipping", utility) + continue + + tou_deriv_dir = project_root / cfg["path_tou_derivation_dir"] + tariffs_dir = project_root / cfg["path_tariffs_electric_dir"] + specs, tou_tariff = load_tou_context( + utility, tou_deriv_dir, tariffs_dir, use_calibrated=args.calibrated + ) + log.info( + " TOU specs: %s", + [(s.season.name, f"ratio={s.peak_offpeak_ratio:.2f}") for s in specs], + ) + + log.info(" Loading HP building loads...") + loads_df = load_building_loads( + hp_bldg_ids, cfg["path_loads_dir"], sample_size=args.sample_size + ) + + log.info(" Loading delivery MC data...") + try: + mc_df = load_delivery_mc( + utility, cfg["mc_base"], cfg["mc_year"], storage_options + ) + log.info(" MC data loaded: %d rows", len(mc_df)) + except Exception as e: + log.warning(" Failed to load MC data for %s: %s", utility, e) + mc_df = None + + delivery_rr = 0.0 + rr_dir = 
project_root / cfg.get("path_rev_requirement_dir", "") + rr_path = rr_dir / f"{utility}.yaml" + if rr_path.exists(): + with open(rr_path) as f: + rr_data = yaml.safe_load(f) + delivery_rr = float(rr_data.get("total_delivery_revenue_requirement", 0)) + log.info(" Delivery RR: $%.0fM", delivery_rr / 1e6) + + ctx = UtilityContext( + utility=utility, + season_specs=specs, + tou_tariff=tou_tariff, + hp_bldg_ids=hp_bldg_ids, + hp_weights=hp_weights, + total_weighted_customers=total_weighted, + hp_weighted_customers=hp_weighted, + delivery_rr=delivery_rr, + ) + + ur = diagnose_utility(ctx, loads_df, elasticities, mc_df) + results.append(ur) + log.info( + " Recommended ε=%.2f, annual savings=$%.2f/hp", + ur.recommended_elasticity, + ur.recommended_annual_savings, + ) + + print_summary(results, verbose=args.verbose) + + if args.write_periods: + periods_dir = project_root / cfg["path_periods_dir"] + for ur in results: + if not ur.seasonal_recommendations: + log.warning("No seasonal recommendations for %s, skipping", ur.utility) + continue + periods_path = periods_dir / f"{ur.utility}.yaml" + if periods_path.exists(): + with open(periods_path) as f: + periods_data = yaml.safe_load(f) or {} + else: + periods_data = {} + periods_data["elasticity"] = { + sr.season: sr.recommended_elasticity + for sr in ur.seasonal_recommendations + } + if ur.seasonal_recommendations_with_tech: + periods_data["elasticity_with_tech"] = { + sr.season: sr.recommended_elasticity + for sr in ur.seasonal_recommendations_with_tech + } + with open(periods_path, "w") as f: + yaml.dump(periods_data, f, default_flow_style=False, sort_keys=False) + log.info( + "Wrote elasticity %s and elasticity_with_tech %s to %s", + periods_data["elasticity"], + periods_data.get("elasticity_with_tech"), + periods_path, + ) + + if args.output_dir: + args.output_dir.mkdir(parents=True, exist_ok=True) + df = results_to_dataframe(results) + csv_path = args.output_dir / "demand_flex_diagnostic.csv" + df.write_csv(csv_path) + 
log.info("Wrote diagnostic CSV: %s", csv_path)
+
+        parquet_path = args.output_dir / "demand_flex_diagnostic.parquet"
+        df.write_parquet(parquet_path)
+        log.info("Wrote diagnostic parquet: %s", parquet_path)
+
+    if args.compare_batch:
+        log.info("Comparing against CAIRO batch: %s", args.compare_batch)
+        comparisons = compare_batch_bills(
+            state=args.state,
+            batch=args.compare_batch,
+            results=results,
+            hp_meta=hp_meta,
+            with_tech=args.with_tech,
+        )
+        if comparisons:
+            print_batch_comparison(args.compare_batch, comparisons)
+        else:
+            log.warning("No comparisons could be made for batch %s", args.compare_batch)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/utils/pre/create_scenario_yamls.py b/utils/pre/create_scenario_yamls.py
index cf569d66..6e9ebb99 100644
--- a/utils/pre/create_scenario_yamls.py
+++ b/utils/pre/create_scenario_yamls.py
@@ -325,7 +325,27 @@ def parse_required_float(key: str) -> float:
         except ValueError:
             run["sample_size"] = sample_size
 
-    run["elasticity"] = parse_required_float("elasticity")
+    elasticity_raw = require_non_empty("elasticity")
+    try:
+        run["elasticity"] = float(elasticity_raw)
+    except ValueError:
+        state = str(run["state"]).lower()
+        config_dir = get_project_root() / "rate_design" / "hp_rates" / state / "config"
+        periods_path = config_dir / elasticity_raw
+        if not periods_path.exists():
+            raise FileNotFoundError(
+                f"Elasticity path {elasticity_raw!r} resolved to "
+                f"{periods_path} which does not exist"
+            ) from None
+        with open(periods_path) as f:
+            periods_data = yaml.safe_load(f) or {}
+
+        enabling_tech = get_optional("enabling_tech").strip().lower()
+        use_with_tech = enabling_tech not in ("false", "no", "0")
+        yaml_key = "elasticity_with_tech" if use_with_tech else "elasticity"
+        if yaml_key not in periods_data:
+            raise ValueError(f"No '{yaml_key}' key in {periods_path}") from None
+        run["elasticity"] = periods_data[yaml_key]
 
     return run