
v4.4.55 - IBKR day lookup fixes and tearsheet metrics hooks #978

Merged
grzesir merged 12 commits into dev from version/4.4.55
Mar 16, 2026

Conversation

@grzesir (Contributor) commented Mar 16, 2026

What

This release finalizes 4.4.55 with IBKR/routed day-timestep lookup hardening, stale no-data cache refresh behavior, BACKTESTING_PARAMETERS env support, and new tearsheet metrics extensibility (tearsheet_custom_metrics + *_tearsheet_metrics.json).

Why

Backtests needed parity-safe multi-timeframe lookup behavior and reliable machine-readable tearsheet outputs for downstream automation.

Risk

  • Backtest artifact naming/path changes (*_tearsheet_metrics.json) could affect consumers expecting the old *_metrics.json filename.
  • Day-timestep lookup behavior changed for stock/index routing paths; regression coverage added.

Tests run

  • python3 -m pytest -q tests/test_tearsheet_custom_metrics_hook.py tests/test_tearsheet_metrics_json.py tests/test_trader_tearsheet_metrics_passthrough.py --tb=short (pass)
  • python3 -m pytest -m "not apitest and not downloader" --tb=short -q --durations=30 (timed out locally; release gated on green GitHub CI for same marker)

Perf evidence

No new performance claim in this PR.

Docs

  • CHANGELOG.md updated for 4.4.55
  • docsrc/lifecycle_methods.tearsheet_custom_metrics.rst added
  • docsrc/backtesting.tearsheet_html.rst updated

Summary by CodeRabbit

  • New Features

    • Added BACKTESTING_PARAMETERS environment variable for per-run parameter overrides.
    • Added machine-readable tearsheet metrics JSON artifacts alongside HTML tearsheets.
    • Added tearsheet_custom_metrics() hook to supply custom metrics to tearsheets.
  • Changed

    • Trading fees now support per_contract_fee option for futures and options.
    • Enhanced data source and backtesting configuration handling.
  • Fixed

    • Day-timestep asset lookup regression.
    • IBKR cache refresh behavior for large data windows.
    • Order processing race condition.
  • Documentation

    • Updated environment variables and tearsheet metrics guidance.
    • Enhanced trading fee configuration examples.
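The new lifecycle hook can be implemented roughly as follows. This is a hedged sketch: the keyword argument names (stats_df, returns, drawdown) come from this PR's sequence diagram, and the class shown is a stand-in rather than the real lumibot.strategies.Strategy base; confirm the exact signature against docsrc/lifecycle_methods.tearsheet_custom_metrics.rst.

```python
# Hypothetical sketch of implementing the tearsheet_custom_metrics() hook.
# In practice, MyStrategy would subclass lumibot.strategies.Strategy.
class MyStrategy:
    def tearsheet_custom_metrics(self, stats_df=None, returns=None,
                                 drawdown=None, **kwargs):
        # Return a flat dict of scalars; per this PR, the values are appended
        # to both the HTML tearsheet and the *_tearsheet_metrics.json artifact.
        n_periods = 0 if returns is None else len(returns)
        return {"num_return_periods": n_periods, "strategy_tag": "demo"}
```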

grzesir and others added 12 commits March 8, 2026 23:47
…at_fee

flat_fee charges a fixed amount per ORDER regardless of contract count.
per_contract_fee is multiplied by order quantity, correctly modeling
broker commissions (e.g. IBKR $0.65/contract). A 40-contract spread
at flat_fee=0.65 costs $0.65 total; per_contract_fee=0.65 costs $26.00.

Updated examples.rst, faq.rst, getting_started.rst, and llms.txt.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
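The commission arithmetic above can be checked with a small standalone sketch. This models only the math described in the commit message (flat_fee once per order, per_contract_fee scaled by quantity); it is not the actual TradingFee implementation.

```python
# Standalone check of the flat_fee vs per_contract_fee arithmetic above.
from decimal import Decimal

def order_commission(flat_fee, per_contract_fee, quantity):
    # flat_fee is charged once per ORDER; per_contract_fee scales with quantity.
    return Decimal(str(flat_fee)) + Decimal(str(per_contract_fee)) * quantity

# 40-contract spread at $0.65:
assert order_commission(0.65, 0, 40) == Decimal("0.65")   # flat: $0.65 total
assert order_commission(0, 0.65, 40) == Decimal("26.00")  # per-contract: $26.00
```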
Allows injecting strategy parameters via environment variable without
code changes. Parses JSON dict from BACKTESTING_PARAMETERS and merges
with highest priority on top of existing strategy parameters. Useful
for running parameter sweeps (same code, different params per backtest).

Includes: env var parsing in credentials.py, merge logic in _strategy.py,
public docs in environment_variables.rst, and 10 unit tests covering
valid JSON, nested dicts, invalid inputs, and edge cases.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
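The parse-and-merge behavior described above can be sketched as follows. The function name is hypothetical; per this commit, the real parsing lives in lumibot/credentials.py and the merge in _strategy.py.

```python
# Hypothetical sketch of BACKTESTING_PARAMETERS parsing and merging.
import json

def load_backtesting_parameters(env):
    raw = env.get("BACKTESTING_PARAMETERS")
    if not raw:
        return None
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None  # the real code logs a warning on invalid JSON
    # Only dicts are usable as parameter overrides.
    return parsed if isinstance(parsed, dict) else None

code_params = {"symbol": "SPY", "window": 20}
env = {"BACKTESTING_PARAMETERS": '{"window": 50, "extra": {"a": 1}}'}
override = load_backtesting_parameters(env) or {}
merged = {**code_params, **override}  # env override has highest priority
assert merged["window"] == 50 and merged["symbol"] == "SPY"
```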
After generating the HTML tearsheet, also calls qs.reports.metrics_json()
to produce a machine-readable metrics.json file alongside other artifacts.
The new file contains all 60+ scalar metrics, rolling time series, and
drawdown details for downstream consumption by BotSpot Node.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
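The artifact naming follows the pattern stated elsewhere in this PR (a *_tearsheet_metrics.json sibling next to *_tearsheet.html). The helper below is a hypothetical illustration of deriving that sibling path; the exact derivation inside lumibot may differ.

```python
# Hypothetical helper deriving the metrics JSON path from the HTML tearsheet path.
from pathlib import Path

def metrics_path_for(tearsheet_html):
    p = Path(tearsheet_html)
    stem = p.stem  # e.g. "MyStrategy_tearsheet"
    if stem.endswith("_tearsheet"):
        stem = stem[: -len("_tearsheet")]
    return p.with_name(f"{stem}_tearsheet_metrics.json")

out = metrics_path_for("results/MyStrategy_tearsheet.html")
assert out.name == "MyStrategy_tearsheet_metrics.json"
```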
coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

Release 4.4.55 introduces machine-readable tearsheet metrics JSON artifacts, BACKTESTING_PARAMETERS environment variable for per-run parameter overrides, a new tearsheet_custom_metrics() lifecycle hook for custom metrics injection, improved IBKR no-data cache handling to suppress redundant fetches, and documentation updates for trading fees and new features.

Changes (by cohort)

  • Release & Version (CHANGELOG.md, setup.py): Bumped version to 4.4.55 (2026-03-15) with a changelog entry documenting the new tearsheet metrics, the BACKTESTING_PARAMETERS env var, the custom metrics hook, IBKR fixes, and trading fee guidance updates.
  • Documentation (docsrc/backtesting.tearsheet_html.rst, docsrc/environment_variables.rst, docsrc/examples.rst, docsrc/faq.rst, docsrc/getting_started.rst, docsrc/lifecycle_methods.rst, docsrc/lifecycle_methods.tearsheet_custom_metrics.rst, llms.txt): Expanded docs to explain the machine-readable *_tearsheet_metrics.json artifacts, the BACKTESTING_PARAMETERS env var for parameter overrides, trading fee configuration (per_contract_fee vs. flat_fee), the new tearsheet_custom_metrics lifecycle hook with signature/behavior specifications, and examples for futures/options fee setups.
  • Environment & Configuration (lumibot/credentials.py): Added a BACKTESTING_PARAMETERS global that parses JSON from the BACKTESTING_PARAMETERS environment variable, merging dict values as high-priority parameter overrides and warning on invalid JSON.
  • Tearsheet Metrics & Custom Hooks (lumibot/strategies/strategy.py, lumibot/strategies/_strategy.py, lumibot/tools/indicators.py): Introduced the tearsheet_custom_metrics() hook in Strategy to inject user-defined metrics; added utility methods in _Strategy for extracting returns, computing drawdown, and collecting custom metrics; extended create_tearsheet to accept custom_metrics and tearsheet_metrics_file, writing machine-readable metrics JSON alongside the HTML; propagated tearsheet_metrics_file through the backtest/backtest_analysis/run_backtest signatures.
  • Trader Passthrough (lumibot/traders/trader.py): Added a tearsheet_metrics_file parameter to Trader.run_all and propagated it through backtest_analysis calls so metrics file paths reach tearsheet generation.
  • Data Source Preferences (lumibot/backtesting/interactive_brokers_rest_backtesting.py, lumibot/data_sources/pandas_data.py): Added a PREFER_NATIVE_DAY_BARS_FOR_STOCK_INDEX flag (default False) controlling whether day-timestep requests for stocks/indices strictly require native day data or may be satisfied by minute data; adjusted the PandasData day-bar lookup logic to respect the flag.
  • IBKR History & No-Data Handling (lumibot/tools/ibkr_helper.py, lumibot/tools/data_downloader_queue_client.py): Enhanced IBKR history fetching with runtime no-data window suppression to avoid redundant fetch attempts; added placeholder detection and cache-window coverage logic; introduced a fallback to smaller periods for large bar windows when chart data is unavailable; added terminal no-data error detection and an early-fail path in the queue client to prevent prolonged 202-response loops.
  • Test Coverage (tests/test_backtesting_parameters.py, tests/test_pandas_data_find_asset_timestep_match.py, tests/test_tearsheet_custom_metrics_hook.py, tests/test_tearsheet_metrics_json.py, tests/test_trader_tearsheet_metrics_passthrough.py, tests/backtest/test_ibkr_helper_stale_end_negative_cache.py): Added tests for BACKTESTING_PARAMETERS env var parsing, PandasData day-bar preference behavior, custom metrics hook collection and validation, tearsheet metrics JSON creation with degenerate/insufficient data handling, Trader metrics-file passthrough, and IBKR placeholder-window suppression after a cache restart.
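The PREFER_NATIVE_DAY_BARS_FOR_STOCK_INDEX behavior described above can be illustrated with a toy selection function. Names and data shapes here are illustrative, not the actual PandasData implementation.

```python
# Toy illustration of the day-bar preference flag (default False per this PR).
def select_day_bars(native_day_bars, minute_bars, prefer_native_only=False):
    if native_day_bars is not None:
        return native_day_bars  # native day data always wins when present
    if prefer_native_only:
        return None             # strict mode: day requests require native day data
    return minute_bars          # default: minute data may satisfy day requests

assert select_day_bars(None, "minute-data") == "minute-data"
assert select_day_bars(None, "minute-data", prefer_native_only=True) is None
```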

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Backtest as Backtest Runner
    participant Strategy as Strategy
    participant Metrics as Metrics Collector
    participant Tearsheet as Tearsheet Generator
    participant JSON as JSON Writer

    User->>Backtest: backtest(tearsheet_metrics_file="metrics.json")
    
    Backtest->>Strategy: run() with BACKTESTING_PARAMETERS override
    Strategy->>Strategy: Initialize with env params merged
    Strategy->>Backtest: Return backtest results
    
    Backtest->>Metrics: _collect_custom_tearsheet_metrics()
    Metrics->>Strategy: Call tearsheet_custom_metrics(stats_df, returns, drawdown, ...)
    Strategy-->>Metrics: Return custom metrics dict
    Metrics->>Backtest: Return collected metrics
    
    Backtest->>Tearsheet: create_tearsheet(..., custom_metrics=metrics, tearsheet_metrics_file="metrics.json")
    Tearsheet->>Tearsheet: Generate HTML with custom metrics
    Tearsheet->>JSON: Write metrics to JSON (summary_only mode)
    JSON-->>Tearsheet: metrics.json created
    Tearsheet-->>Backtest: Tearsheet complete
    
    Backtest-->>User: Returns with HTML + JSON artifacts

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes

Possibly related PRs

  • PR #966: Modifies lumibot/tools/indicators.py and lumibot/strategies/_strategy.py with overlapping tearsheet/metrics export infrastructure changes.
  • PR #972: Updates lumibot/tools/ibkr_helper.py's history-fetch paging and no-data handling logic, sharing IBKR robustness improvements.
  • PR #974: Implements per_contract_fee support for TradingFee; this PR documents and integrates that feature into examples and tests.

Poem

🐰 Whiskers twitching with glee!
Metrics now flow free and clear,
JSON files bounce and dance,
Custom hooks let strategies prance,
No-data windows disappear! 🌟

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 41.27%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed (check skipped: CodeRabbit's high-level summary is enabled).
  • Title Check — ✅ Passed: the title 'v4.4.55 - IBKR day lookup fixes and tearsheet metrics hooks' accurately summarizes the primary changes: version bump, IBKR/day-timestep fixes, and new tearsheet metrics features.



Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Pylint (4.0.5)
lumibot/data_sources/pandas_data.py

************* Module pylintrc
pylintrc:1:0: F0011: error while parsing the configuration: File contains no section headers.
file: 'pylintrc', line: 1
'known-third-party=lumibot' (config-parse-error)
[
{
"type": "convention",
"module": "lumibot.data_sources.pandas_data",
"obj": "",
"line": 18,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "lumibot/data_sources/pandas_data.py",
"symbol": "line-too-long",
"message": "Line too long (109/100)",
"message-id": "C0301"
},
{
"type": "convention",
"module": "lumibot.data_sources.pandas_data",
"obj": "",
"line": 34,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "lumibot/data_sources/pandas_data.py",
"symbol": "line-too-long",
"message": "Line too long (119/100)",
"message-id": "C0301"
},
{
"type": "convention",
"modu

... [truncated 32464 characters] ...

  "obj": "PandasData.get_historical_prices",
    "line": 661,
    "column": 4,
    "endLine": 661,
    "endColumn": 29,
    "path": "lumibot/data_sources/pandas_data.py",
    "symbol": "too-many-positional-arguments",
    "message": "Too many positional arguments (9/5)",
    "message-id": "R0917"
},
{
    "type": "refactor",
    "module": "lumibot.data_sources.pandas_data",
    "obj": "PandasData.get_historical_prices",
    "line": 689,
    "column": 8,
    "endLine": 692,
    "endColumn": 23,
    "path": "lumibot/data_sources/pandas_data.py",
    "symbol": "no-else-return",
    "message": "Unnecessary \"elif\" after \"return\", remove the leading \"el\" from \"elif\"",
    "message-id": "R1705"
}

]

lumibot/backtesting/interactive_brokers_rest_backtesting.py

************* Module pylintrc
pylintrc:1:0: F0011: error while parsing the configuration: File contains no section headers.
file: 'pylintrc', line: 1
'known-third-party=lumibot' (config-parse-error)
[
{
"type": "convention",
"module": "lumibot.backtesting.interactive_brokers_rest_backtesting",
"obj": "",
"line": 24,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "lumibot/backtesting/interactive_brokers_rest_backtesting.py",
"symbol": "line-too-long",
"message": "Line too long (104/100)",
"message-id": "C0301"
},
{
"type": "convention",
"module": "lumibot.backtesting.interactive_brokers_rest_backtesting",
"obj": "",
"line": 43,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "lumibot/backtesting/interactive_brokers_rest_backtesting.py",
"symbol": "line-too-long",
"message": "Line too long (1

... [truncated 26384 characters] ...

tween_dates",
"line": 550,
"column": 4,
"endLine": 550,
"endColumn": 43,
"path": "lumibot/backtesting/interactive_brokers_rest_backtesting.py",
"symbol": "too-many-locals",
"message": "Too many local variables (19/15)",
"message-id": "R0914"
},
{
"type": "warning",
"module": "lumibot.backtesting.interactive_brokers_rest_backtesting",
"obj": "InteractiveBrokersRESTBacktesting.get_historical_prices_between_dates",
"line": 571,
"column": 8,
"endLine": 571,
"endColumn": 11,
"path": "lumibot/backtesting/interactive_brokers_rest_backtesting.py",
"symbol": "unused-variable",
"message": "Unused variable 'qty'",
"message-id": "W0612"
}
]

lumibot/credentials.py

************* Module pylintrc
pylintrc:1:0: F0011: error while parsing the configuration: File contains no section headers.
file: 'pylintrc', line: 1
'known-third-party=lumibot' (config-parse-error)
[
{
"type": "convention",
"module": "lumibot.credentials",
"obj": "",
"line": 1,
"column": 7,
"endLine": null,
"endColumn": null,
"path": "lumibot/credentials.py",
"symbol": "trailing-whitespace",
"message": "Trailing whitespace",
"message-id": "C0303"
},
{
"type": "convention",
"module": "lumibot.credentials",
"obj": "",
"line": 2,
"column": 0,
"endLine": null,
"endColumn": null,
"path": "lumibot/credentials.py",
"symbol": "line-too-long",
"message": "Line too long (144/100)",
"message-id": "C0301"
},
{
"type": "convention",
"module": "lumibot.credentials",
"obj": "",

... [truncated 35475 characters] ...

dentials",
"obj": "",
"line": 15,
"column": 0,
"endLine": 15,
"endColumn": 27,
"path": "lumibot/credentials.py",
"symbol": "wrong-import-order",
"message": "third party import "dateutil.parser" should be placed before local import "brokers.Alpaca"",
"message-id": "C0411"
},
{
"type": "convention",
"module": "lumibot.credentials",
"obj": "",
"line": 18,
"column": 0,
"endLine": 18,
"endColumn": 51,
"path": "lumibot/credentials.py",
"symbol": "wrong-import-order",
"message": "first party import "lumibot.tools.lumibot_logger.get_logger" should be placed before local import "brokers.Alpaca"",
"message-id": "C0411"
}
]

  • 13 others


@coderabbitai bot left a comment

Actionable comments posted: 8

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CHANGELOG.md`:
- Around line 5-20: Update the 4.4.55 release block in CHANGELOG.md to follow
the required format by adding the missing headings "Deprecated", "Removed", and
"Security" (in addition to the existing Added/Changed/Fixed) and ensure the
release header uses the exact version/date format "## 4.4.55 - YYYY-MM-DD"; add
placeholder content (or explicit "None" notes) under each new heading so the
block validates against the repository changelog schema.

In `@docsrc/backtesting.tearsheet_html.rst`:
- Around line 25-30: Update the backtesting tearsheet docs to state the default
JSON filename and compatibility note: mention that the default output name is
now "*_tearsheet_metrics.json" (written alongside "*_tearsheet.html" by default)
and that callers can override the path using the tearsheet_metrics_file
parameter; also note that older consumers expecting "*_metrics.json" should
update their pipelines or explicitly set tearsheet_metrics_file for backward
compatibility, and keep the existing reference to
Strategy.tearsheet_custom_metrics(...) for appending strategy-specific fields.

In `@llms.txt`:
- Around line 139-140: The backtest call passes an undefined identifier `fee` to
MyStrategy.backtest (buy_trading_fees/sell_trading_fees); define `fee`
beforehand with the appropriate type/value (e.g., a numeric percentage or list
matching the expected fee format) and then pass it into MyStrategy.backtest so
the snippet is copy-pasteable — ensure the variable name matches exactly (`fee`)
and its type aligns with the backtest API expectations for
buy_trading_fees/sell_trading_fees.

In `@lumibot/strategies/_strategy.py`:
- Around line 582-585: The current merge uses the module-level
BACKTESTING_PARAMETERS dict directly which causes nested mutable values (e.g.,
ALLOCATION) to be shared across strategy instances; fix by deep-copying
BACKTESTING_PARAMETERS before merging into self.parameters (use copy.deepcopy on
BACKTESTING_PARAMETERS) so the merged dict is independent per instance and
mutations to self.parameters do not bleed into subsequent runs.
- Around line 1734-1737: The _extract_returns_series function currently rejects
non-pandas frames causing Polars DataFrames produced by _dump_benchmark_stats to
be treated as empty; update _extract_returns_series to accept polars.DataFrame
by detecting the polars type (e.g., isinstance(frame, pl.DataFrame)) and
converting it to a pandas.DataFrame (using frame.to_pandas()) before the
existing processing so benchmark_returns passed into tearsheet_custom_metrics is
populated; ensure you only import the polars symbol where needed or guard the
import to avoid hard dependency.

In `@lumibot/tools/data_downloader_queue_client.py`:
- Around line 821-826: The fast-fail branch in data_downloader_queue_client.py
only checks for "chart data unavailable" but should use the same
terminal-no-data logic as
lumibot/tools/ibkr_helper.py::_is_terminal_no_data_error; update the conditional
inside the status == "failed" block to call or reuse
_is_terminal_no_data_error(str(info.error or "")) (and still verify the path
contains "ibkr/iserver/marketdata/history" and attempts >= 3) so other terminal
messages like "no data available" or "asset does not exist" trigger the
fast-fail.

In `@lumibot/tools/ibkr_helper.py`:
- Around line 1793-1837: The current _window_is_placeholder_covered function
incorrectly infers coverage by pairing the nearest "missing" markers on each
side which can fuse separate gaps; instead persist explicit missing-interval
identifiers when creating placeholders (e.g., modify _record_missing_window to
write paired start/end markers with a shared interval_id or
interval_start/interval_end fields), then change _window_is_placeholder_covered
to look up markers with the same interval identifier (or matching start/end
pair) and only treat the window as covered if there exists a single persisted
interval record whose start <= start_local and end >= end_local; reference the
functions _window_is_placeholder_covered and _record_missing_window and the
"missing" marker rows when implementing this check and adding the
interval_id/interval_start/interval_end columns.
- Around line 772-802: The code records the entire requested window
(start_utc..end_utc) as missing which can overwrite cached boundary bars; change
the call to _record_missing_window to persist only the failing segment by using
seg_start and seg_end (i.e. start_dt=_to_utc(seg_start),
end_dt=_to_utc(seg_end)) while keeping the wider debounce window update in
_RUNTIME_HISTORY_NO_DATA_WINDOWS in-memory; after calling _record_missing_window
keep the existing df_cache = _read_cache_frame(cache_file) reload step unchanged
so subsequent logic sees the newly-recorded missing segment.
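The interval-based coverage check suggested for _window_is_placeholder_covered can be sketched as follows. The interval_start/interval_end pairing follows the review's suggestion; the actual persistence format is up to the implementation.

```python
# Sketch of interval-based placeholder coverage: a window counts as covered
# only if a SINGLE persisted missing interval spans it. Pairing the nearest
# "missing" markers on each side would wrongly fuse two separate gaps.
from datetime import datetime

def window_is_placeholder_covered(missing_intervals, start, end):
    # missing_intervals: list of (interval_start, interval_end) recorded when
    # each placeholder was persisted.
    return any(s <= start and end <= e for s, e in missing_intervals)

intervals = [
    (datetime(2026, 1, 1), datetime(2026, 1, 10)),
    (datetime(2026, 2, 1), datetime(2026, 2, 5)),
]
# Inside one recorded interval -> covered:
assert window_is_placeholder_covered(intervals, datetime(2026, 1, 2), datetime(2026, 1, 9))
# Spans the gap between two recorded intervals -> not covered:
assert not window_is_placeholder_covered(intervals, datetime(2026, 1, 5), datetime(2026, 2, 3))
```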

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 20aa2d34-c9fd-4832-a92b-3815877de210

📥 Commits

Reviewing files that changed from the base of the PR and between c962eb9 and bddbfb6.

⛔ Files ignored due to path filters (1)
  • tests/backtest/backtest_performance_history.csv is excluded by !**/*.csv
📒 Files selected for processing (25)
  • CHANGELOG.md
  • docsrc/backtesting.tearsheet_html.rst
  • docsrc/environment_variables.rst
  • docsrc/examples.rst
  • docsrc/faq.rst
  • docsrc/getting_started.rst
  • docsrc/lifecycle_methods.rst
  • docsrc/lifecycle_methods.tearsheet_custom_metrics.rst
  • llms.txt
  • lumibot/backtesting/interactive_brokers_rest_backtesting.py
  • lumibot/credentials.py
  • lumibot/data_sources/pandas_data.py
  • lumibot/strategies/_strategy.py
  • lumibot/strategies/strategy.py
  • lumibot/tools/data_downloader_queue_client.py
  • lumibot/tools/ibkr_helper.py
  • lumibot/tools/indicators.py
  • lumibot/traders/trader.py
  • setup.py
  • tests/backtest/test_ibkr_helper_stale_end_negative_cache.py
  • tests/test_backtesting_parameters.py
  • tests/test_pandas_data_find_asset_timestep_match.py
  • tests/test_tearsheet_custom_metrics_hook.py
  • tests/test_tearsheet_metrics_json.py
  • tests/test_trader_tearsheet_metrics_passthrough.py

Comment on lines +5 to +20
### Added
- `BACKTESTING_PARAMETERS` environment variable support for parameter injection in backtest runs.
- Machine-readable `*_tearsheet_metrics.json` artifacts (summary-first) with placeholder output on insufficient/degenerate returns.
- New strategy lifecycle hook `tearsheet_custom_metrics(...)` for appending custom metrics to tearsheet HTML and JSON artifacts.
- Regression coverage for multi-timeframe day-timestep stock lookup and tearsheet metrics/custom-hook passthrough.

### Changed
- Backtest analysis and trader APIs now accept `tearsheet_metrics_file`; default output filename is `*_tearsheet_metrics.json`.
- QuantStats `metrics_json` generation now runs in `summary_only` mode and forwards custom metrics to both HTML and JSON outputs.
- Documentation updates for tearsheet metrics/lifecycle hooks and TradingFee guidance (`per_contract_fee` usage).

### Fixed
- Day-timestep asset lookup regression for multi-timeframe stock/index backtests (including minute->day fallback paths where appropriate).
- IBKR stale no-data cache reuse now forces refresh when requested windows extend beyond cached coverage.
- ProjectX order processing race-condition and tracking hardening merged from `dev`.


⚠️ Potential issue | 🟠 Major

Add missing required changelog sections for this release block.

The 4.4.55 entry is missing Deprecated, Removed, and Security headings required by the repository changelog format.

🛠️ Suggested patch
 ### Fixed
 - Day-timestep asset lookup regression for multi-timeframe stock/index backtests (including minute->day fallback paths where appropriate).
 - IBKR stale no-data cache reuse now forces refresh when requested windows extend beyond cached coverage.
 - ProjectX order processing race-condition and tracking hardening merged from `dev`.
+
+### Deprecated
+- None.
+
+### Removed
+- None.
+
+### Security
+- None.

As per coding guidelines: "Changelog format must use sections: Added, Changed, Fixed, Deprecated, Removed, Security, with date in format ## X.Y.Z - YYYY-MM-DD".

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (original block first, replacement block after):
### Added
- `BACKTESTING_PARAMETERS` environment variable support for parameter injection in backtest runs.
- Machine-readable `*_tearsheet_metrics.json` artifacts (summary-first) with placeholder output on insufficient/degenerate returns.
- New strategy lifecycle hook `tearsheet_custom_metrics(...)` for appending custom metrics to tearsheet HTML and JSON artifacts.
- Regression coverage for multi-timeframe day-timestep stock lookup and tearsheet metrics/custom-hook passthrough.
### Changed
- Backtest analysis and trader APIs now accept `tearsheet_metrics_file`; default output filename is `*_tearsheet_metrics.json`.
- QuantStats `metrics_json` generation now runs in `summary_only` mode and forwards custom metrics to both HTML and JSON outputs.
- Documentation updates for tearsheet metrics/lifecycle hooks and TradingFee guidance (`per_contract_fee` usage).
### Fixed
- Day-timestep asset lookup regression for multi-timeframe stock/index backtests (including minute->day fallback paths where appropriate).
- IBKR stale no-data cache reuse now forces refresh when requested windows extend beyond cached coverage.
- ProjectX order processing race-condition and tracking hardening merged from `dev`.
### Added
- `BACKTESTING_PARAMETERS` environment variable support for parameter injection in backtest runs.
- Machine-readable `*_tearsheet_metrics.json` artifacts (summary-first) with placeholder output on insufficient/degenerate returns.
- New strategy lifecycle hook `tearsheet_custom_metrics(...)` for appending custom metrics to tearsheet HTML and JSON artifacts.
- Regression coverage for multi-timeframe day-timestep stock lookup and tearsheet metrics/custom-hook passthrough.
### Changed
- Backtest analysis and trader APIs now accept `tearsheet_metrics_file`; default output filename is `*_tearsheet_metrics.json`.
- QuantStats `metrics_json` generation now runs in `summary_only` mode and forwards custom metrics to both HTML and JSON outputs.
- Documentation updates for tearsheet metrics/lifecycle hooks and TradingFee guidance (`per_contract_fee` usage).
### Fixed
- Day-timestep asset lookup regression for multi-timeframe stock/index backtests (including minute->day fallback paths where appropriate).
- IBKR stale no-data cache reuse now forces refresh when requested windows extend beyond cached coverage.
- ProjectX order processing race-condition and tracking hardening merged from `dev`.
### Deprecated
- None.
### Removed
- None.
### Security
- None.

Comment on lines +25 to +30
Alongside ``*_tearsheet.html``, LumiBot also writes ``*_tearsheet_metrics.json``.

- This JSON contains summary tearsheet metrics in a machine-readable structure.
- It is intended for downstream automation (agents, dashboards, APIs).
- You can append strategy-specific metrics by implementing
``Strategy.tearsheet_custom_metrics(...)``.

⚠️ Potential issue | 🟡 Minor

Document the default filename and compatibility note here.

This reads as if the JSON is always written next to the HTML and only introduces a new summary artifact. In this release the default name changes from *_metrics.json to *_tearsheet_metrics.json, and callers can override the path via tearsheet_metrics_file, so a short note here would prevent downstream automation from making the wrong assumption.

✍️ Suggested wording
-Alongside ``*_tearsheet.html``, LumiBot also writes ``*_tearsheet_metrics.json``.
+By default, LumiBot also writes ``*_tearsheet_metrics.json`` alongside
+``*_tearsheet.html``. If you pass ``tearsheet_metrics_file``, that custom path
+is used instead.
 
-- This JSON contains summary tearsheet metrics in a machine-readable structure.
+- This JSON contains machine-readable tearsheet metrics and related details for
+  downstream automation.
 - It is intended for downstream automation (agents, dashboards, APIs).
 - You can append strategy-specific metrics by implementing
   ``Strategy.tearsheet_custom_metrics(...)``.
+- Existing consumers looking for ``*_metrics.json`` should be updated to the new
+  default filename.
Based on learnings "Always update public documentation in `docsrc/` when making user-facing changes to Strategy methods, properties, brokers, entities, or backtesting features".
📝 Committable suggestion


Suggested change (original block first, replacement block after):
Alongside ``*_tearsheet.html``, LumiBot also writes ``*_tearsheet_metrics.json``.
- This JSON contains summary tearsheet metrics in a machine-readable structure.
- It is intended for downstream automation (agents, dashboards, APIs).
- You can append strategy-specific metrics by implementing
``Strategy.tearsheet_custom_metrics(...)``.
By default, LumiBot also writes ``*_tearsheet_metrics.json`` alongside
``*_tearsheet.html``. If you pass ``tearsheet_metrics_file``, that custom path
is used instead.
- This JSON contains machine-readable tearsheet metrics and related details for
downstream automation.
- It is intended for downstream automation (agents, dashboards, APIs).
- You can append strategy-specific metrics by implementing
``Strategy.tearsheet_custom_metrics(...)``.
- Existing consumers looking for ``*_metrics.json`` should be updated to the new
default filename.

Comment on lines +139 to +140
# Pass to backtest:
result = MyStrategy.backtest(datasource, buy_trading_fees=[fee], sell_trading_fees=[fee])

⚠️ Potential issue | 🟡 Minor

Define fee before using it in the backtest call.

Line 140 passes fee into buy_trading_fees/sell_trading_fees, but none of the preceding examples bind a variable with that name. As written, this block is not copy-pasteable.

✍️ Minimal fix
-# Pass to backtest:
-result = MyStrategy.backtest(datasource, buy_trading_fees=[fee], sell_trading_fees=[fee])
+# Pass to backtest:
+fee = TradingFee(per_contract_fee=0.65)
+result = MyStrategy.backtest(datasource, buy_trading_fees=[fee], sell_trading_fees=[fee])

Comment on lines +582 to +585
# Apply BACKTESTING_PARAMETERS env var override (highest priority, wins over code-level params)
from lumibot.credentials import BACKTESTING_PARAMETERS
if BACKTESTING_PARAMETERS is not None and isinstance(BACKTESTING_PARAMETERS, dict):
self.parameters = {**self.parameters, **BACKTESTING_PARAMETERS}

⚠️ Potential issue | 🟠 Major

Deep-copy nested env overrides before merging.

Line 585 shallow-copies a module-level dict that is reused across strategy instances. Nested values like ALLOCATION stay shared, so mutating self.parameters during one backtest can bleed into later runs in the same process.

🛠️ Proposed fix
-        if BACKTESTING_PARAMETERS is not None and isinstance(BACKTESTING_PARAMETERS, dict):
-            self.parameters = {**self.parameters, **BACKTESTING_PARAMETERS}
+        if BACKTESTING_PARAMETERS is not None and isinstance(BACKTESTING_PARAMETERS, dict):
+            from copy import deepcopy
+
+            self.parameters = {**self.parameters, **deepcopy(BACKTESTING_PARAMETERS)}
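The sharing bug the fix addresses is easy to demonstrate with plain dicts: a `{**a, **b}` merge copies only the top level, so nested values stay aliased to the module-level default. A self-contained illustration (the `ALLOCATION` key is borrowed from the review comment; values are made up):

```python
from copy import deepcopy

# Stand-in for the module-level env override shared across strategy instances.
BACKTESTING_PARAMETERS = {"ALLOCATION": {"SPY": 0.6}}

# Shallow merge: the nested ALLOCATION dict is still the shared object.
shallow = {**{}, **BACKTESTING_PARAMETERS}
shallow["ALLOCATION"]["SPY"] = 0.1
assert BACKTESTING_PARAMETERS["ALLOCATION"]["SPY"] == 0.1  # bled through

# Deep copy isolates each instance's parameters from the module-level dict.
BACKTESTING_PARAMETERS["ALLOCATION"]["SPY"] = 0.6  # reset
isolated = {**{}, **deepcopy(BACKTESTING_PARAMETERS)}
isolated["ALLOCATION"]["SPY"] = 0.1
assert BACKTESTING_PARAMETERS["ALLOCATION"]["SPY"] == 0.6  # unchanged
```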

Comment on lines +1734 to +1737
def _extract_returns_series(frame, returns_col: str = "return", value_col: str | None = None) -> pd.Series:
"""Extract a clean returns series from a strategy/benchmark dataframe."""
if frame is None or not isinstance(frame, pd.DataFrame) or frame.empty:
return pd.Series(dtype=float)

⚠️ Potential issue | 🟠 Major

Accept Polars inputs in _extract_returns_series().

Line 1736 drops any non-pandas frame, but _dump_benchmark_stats() stores Polars DataFrames on the Polygon/Alpaca paths. That makes benchmark_returns empty and the new tearsheet_custom_metrics() hook silently loses benchmark-aware metrics on those data sources.

🛠️ Proposed fix
-        if frame is None or not isinstance(frame, pd.DataFrame) or frame.empty:
+        if frame is None:
+            return pd.Series(dtype=float)
+        if isinstance(frame, pl.DataFrame):
+            frame = frame.to_pandas()
+        if not isinstance(frame, pd.DataFrame) or frame.empty:
             return pd.Series(dtype=float)

Comment on lines +821 to +826
if status == "failed":
err_text = str(info.error or "").lower()
if (
"ibkr/iserver/marketdata/history" in str(info.path or "")
and "chart data unavailable" in err_text
and int(info.attempts or 0) >= 3

⚠️ Potential issue | 🟠 Major

Broaden the terminal no-data match here.

This fast-fail only recognizes "chart data unavailable", but lumibot/tools/ibkr_helper.py::_is_terminal_no_data_error treats several IBKR messages as the same terminal condition. Errors like "no data available" or "asset does not exist" will still sit in the 202 polling loop until the outer timeout, so the regression is only partially fixed.

💡 Minimal fix
-                        if (
-                            "ibkr/iserver/marketdata/history" in str(info.path or "")
-                            and "chart data unavailable" in err_text
-                            and int(info.attempts or 0) >= 3
-                        ):
+                        terminal_no_data_tokens = (
+                            "chart data unavailable",
+                            "no data available",
+                            "does not have data",
+                            "asset does not exist",
+                        )
+                        if (
+                            "ibkr/iserver/marketdata/history" in str(info.path or "").lower()
+                            and any(token in err_text for token in terminal_no_data_tokens)
+                            and int(info.attempts or 0) >= 3
+                        ):
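Factoring the token list into a shared classifier keeps the queue client and `ibkr_helper` in sync. A minimal sketch, where the token set mirrors the messages named in this review rather than the exact set in `lumibot/tools/ibkr_helper.py`:

```python
# Messages IBKR returns for conditions that will never succeed on retry.
_TERMINAL_NO_DATA_TOKENS = (
    "chart data unavailable",
    "no data available",
    "does not have data",
    "asset does not exist",
)

def is_terminal_no_data_error(message):
    """Case-insensitive substring match against known terminal IBKR messages."""
    text = (message or "").lower()
    return any(token in text for token in _TERMINAL_NO_DATA_TOKENS)

assert is_terminal_no_data_error("Chart Data Unavailable for contract")
assert not is_terminal_no_data_error("gateway timeout")
```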

Comment on lines +772 to +802
try:
# Suppress repeat fetches for the same cached series within this process.
existing_block = _RUNTIME_HISTORY_NO_DATA_WINDOWS.get(runtime_no_data_key)
if existing_block is None:
_RUNTIME_HISTORY_NO_DATA_WINDOWS[runtime_no_data_key] = (start_utc, end_utc)
else:
_RUNTIME_HISTORY_NO_DATA_WINDOWS[runtime_no_data_key] = (
min(existing_block[0], start_utc),
max(existing_block[1], end_utc),
)
_record_missing_window(
asset=asset,
quote=quote,
timestep=timestep,
exchange=effective_exchange,
source=history_source,
include_after_hours=include_after_hours,
# Mark the whole requested window for this get_price_data call so
# subsequent iterations don't re-submit near-identical failing slices.
start_dt=_to_utc(start_utc),
end_dt=_to_utc(end_utc),
)
# Reload to include the newly written missing markers.
df_cache = _read_cache_frame(cache_file)
except Exception:
pass
fetched = pd.DataFrame()
if terminal_no_data:
# No-data terminal errors are not recoverable by trying more segments in the
# same iteration/window.
break

⚠️ Potential issue | 🟠 Major

Persist only the failing segment as missing.

_record_missing_window() overwrites duplicate timestamps on merge, so recording start_utc..end_utc here can replace already-cached boundary bars with missing=True when only a later segment failed. It also suppresses fetches for untouched parts of the request because the loop breaks immediately afterward. Record seg_start..seg_end here and keep any wider debounce window in memory only.

Suggested fix
                         _record_missing_window(
                             asset=asset,
                             quote=quote,
                             timestep=timestep,
                             exchange=effective_exchange,
                             source=history_source,
                             include_after_hours=include_after_hours,
-                            # Mark the whole requested window for this get_price_data call so
-                            # subsequent iterations don't re-submit near-identical failing slices.
-                            start_dt=_to_utc(start_utc),
-                            end_dt=_to_utc(end_utc),
+                            start_dt=_to_utc(seg_start),
+                            end_dt=_to_utc(seg_end),
                         )
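The overwrite hazard can be shown with a toy cache keyed by timestamp, where merging replaces any existing row at a marker's timestamp (a simplified stand-in for `_record_missing_window()`'s behavior, not the real cache format):

```python
def record_missing_window(cache, start, end):
    # Write marker rows at the interval edges, overwriting whatever row
    # already exists at those timestamps (overwrite-on-merge semantics).
    for ts in (start, end):
        cache[ts] = {"close": None, "missing": True}

# Persisting the whole request window (9..12) when only segment 11..12
# failed flips the cached real bar at ts=9 to missing.
cache = {9: {"close": 101.0, "missing": False}}
record_missing_window(cache, 9, 12)
assert cache[9]["missing"] is True  # cached real bar clobbered

# Persisting only the failing segment leaves cached bars untouched.
cache2 = {9: {"close": 101.0, "missing": False}}
record_missing_window(cache2, 11, 12)
assert cache2[9]["missing"] is False
```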

Comment on lines +1793 to +1837
def _window_is_placeholder_covered(
df_cache: pd.DataFrame,
*,
start_local: datetime,
end_local: datetime,
) -> bool:
"""Return True when [start_local, end_local] is fully covered by placeholder markers.

IBKR uses `_record_missing_window()` to write `missing=True` marker rows at the start/end of a
known no-data interval. On a fresh process, we should still honor those persisted markers and
avoid re-submitting identical history requests for sub-windows inside that interval.
"""
if df_cache is None or df_cache.empty or "missing" not in df_cache.columns:
return False

try:
missing_mask = df_cache["missing"].fillna(False).astype(bool)
except Exception:
return False

if not bool(missing_mask.any()):
return False

missing_index = pd.DatetimeIndex(df_cache.index[missing_mask]).sort_values()
if len(missing_index) < 2:
return False

left_candidates = missing_index[missing_index <= start_local]
right_candidates = missing_index[missing_index >= end_local]
if len(left_candidates) == 0 or len(right_candidates) == 0:
return False

left = left_candidates.max()
right = right_candidates.min()
if left > right:
return False

between = df_cache.loc[(df_cache.index >= left) & (df_cache.index <= right)]
if between.empty or "missing" not in between.columns:
return False

try:
return bool(between["missing"].fillna(False).astype(bool).all())
except Exception:
return False

⚠️ Potential issue | 🟠 Major

The placeholder coverage check can fuse separate gaps.

This helper picks the nearest missing=True row on each side of the request and treats everything between them as one no-data interval. With cached gaps [a,b] and [c,d], a request inside (b,c) will be reported as covered if there are no cached real rows there yet, so valid data never gets fetched. Persist explicit interval pairs/IDs instead of inferring coverage from sorted marker timestamps.
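The gap-fusing failure mode and the interval-based alternative both fit in a few lines. A sketch assuming persisted intervals are stored as explicit `(start, end)` pairs (the schema is hypothetical; the point is that coverage requires a single interval containing the request, not nearest-marker pairing):

```python
def window_is_covered(intervals, start, end):
    """Covered only if ONE persisted no-data interval contains [start, end]."""
    return any(s <= start and end <= e for s, e in intervals)

# Two separate gaps [0,10] and [20,30]: a request between them must NOT be
# treated as covered, even though missing markers exist on both sides of it.
gaps = [(0, 10), (20, 30)]
assert window_is_covered(gaps, 2, 8)        # inside the first gap: covered
assert not window_is_covered(gaps, 12, 18)  # between gaps: still fetchable
```

Nearest-marker pairing would incorrectly report `(12, 18)` as covered here, which is exactly the fused-gap bug described above.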


@grzesir grzesir merged commit 9dcb351 into dev Mar 16, 2026
15 checks passed