feat: write batch history data to InfluxDB with original timestamps#71

Open
wxtry wants to merge 22 commits into mash2k3:main from wxtry:pr/influxdb-batch-export
Conversation

@wxtry wxtry commented Mar 8, 2026

Summary

Optional companion to #70 — writes batch historical data points to InfluxDB with their original timestamps, enabling high-resolution historical data in Grafana.

When devices report batch history (JSON Type 17, TLV CMD 0x42/0x31), the integration already imports hourly aggregates into HA Statistics (#70). This PR additionally writes the raw data points to InfluxDB, preserving per-sample timestamps and full precision.

How it works

  • Checks if the HA InfluxDB integration is active (hass.data["influxdb"])
  • If present, writes data points in the same format as HA's built-in event_to_json (measurement name = unit_of_measurement; tags = domain and entity_id)
  • If InfluxDB is not configured, this code path is silently skipped — zero impact
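A minimal sketch of this guard, using a stand-in for hass.data and the private InfluxThread attribute the PR relies on (the function name and structure here are illustrative, not the PR's actual code):

```python
def write_batch_to_influxdb(hass_data, points):
    """Write batch history points if the InfluxDB integration is loaded.

    `hass_data` stands in for hass.data; `points` are pre-formatted
    InfluxDB v1 JSON points. Returns True if a write was attempted.
    """
    influx_thread = hass_data.get("influxdb")
    if influx_thread is None:
        # InfluxDB integration not configured: silently skip, zero impact.
        return False
    # Private API: the running InfluxThread holds the influxdb client on
    # its `influx` attribute; this may break on HA updates.
    influx_thread.influx.write(points)
    return True
```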

Known limitations

  • Accesses hass.data["influxdb"] internal API (InfluxThread.influx.write) — not a public interface, may break on HA updates
  • Only supports InfluxDB v1 (matches HA's current influxdb component)

I understand this feature couples to InfluxDB internals, which may not be desirable for the project. Happy to adjust or withdraw if you'd prefer not to take on this maintenance burden.

Depends on

Test plan

  • Verified on 6 devices: batch data appears in InfluxDB with correct measurement names and entity IDs
  • Verified graceful skip when InfluxDB integration is not loaded
  • Verified data format matches HA's native InfluxDB writes (Grafana queries work seamlessly)

🤖 Generated with Claude Code

wxtry and others added 22 commits March 4, 2026 22:07
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Options were silently wiped by async_create_entry(data={})
- Now: async_update_entry for model data, async_create_entry for options
- Use config_entry.entry_id instead of private _config_entry_id
- Guard _auto_switch_report_mode_on_battery_state() with options check
- Default initial TLV config to Historic mode
- Change JSON default update_interval from 15s to 300s

- TLV Historic mode uses config_entry.options timeout (default 65 min)
- JSON devices use config_entry.options timeout (default 65 min)
- Real-time mode keeps fixed 300s timeout

- Add _import_batch_statistics() helper
- CMD 0x42 history: all points imported via async_import_statistics
- Latest point still updates entity current state
- Statistics aligned to 5-minute boundaries
- Fix test_batch_history alignment test math (1709500123 % 300 = 223, not 123)
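The corrected alignment math can be checked directly (a standalone check, not the PR's code):

```python
# Aligning a Unix timestamp to a 5-minute (300 s) boundary drops the
# ts % 300 remainder; for the test's timestamp that remainder is 223,
# not 123 as the original test assumed.
ts = 1709500123
offset = ts % 300       # seconds past the last 5-minute boundary
aligned = ts - offset   # a multiple of 300
```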

- Type 17: parse batch sensorData, update entities from latest point
- Import all batch points to HA long-term statistics
- Send ACK when need_ack=1
- Type 13 explicitly ignored (was silently dropped before)

HA's async_import_statistics requires a lowercase statistic_id, validates it
as an entity_id, and requires source="recorder". External statistics instead
need async_add_external_statistics, which accepts the colon-separated
statistic_id format.

Also adds enhanced status logging and error handling for statistics
import to aid debugging offline timeout behavior.
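The two recorder APIs named above are distinguished by statistic_id shape; a hedged sketch of that routing rule (the helper is hypothetical, only the two HA function names come from the commit):

```python
def pick_statistics_api(statistic_id):
    """Return which recorder import function a given statistic_id needs.

    Entity-format ids ("sensor.kitchen_temp") go through
    async_import_statistics with source="recorder"; colon-separated
    external ids ("my_integration:kitchen_temp") need
    async_add_external_statistics.
    """
    if ":" in statistic_id:
        return "async_add_external_statistics"
    if "." in statistic_id and statistic_id == statistic_id.lower():
        return "async_import_statistics"
    raise ValueError(f"invalid statistic_id: {statistic_id}")
```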

HA requires timestamps at the top of the hour (minutes=0, seconds=0).
Changed from 5-minute alignment to 1-hour alignment, and aggregate
multiple data points within the same hour by averaging.
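A self-contained sketch of the hour bucketing and averaging described above (helper name and structure are illustrative, not the PR's code):

```python
from collections import defaultdict

def hourly_aggregate(points):
    """Average (unix_ts, value) samples into top-of-hour buckets.

    HA statistics require start times with minutes=0 and seconds=0, so
    each sample's timestamp is floored to the hour (ts - ts % 3600) and
    samples that land in the same hour are averaged.
    """
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % 3600].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in sorted(buckets.items())}
```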

Firmware 2.0.6 sends historical data via CMD 0x31 instead of CMD 0x42.
Both carry identical sensorData[] arrays. Treat CMD 0x31 the same as
CMD 0x42 for batch statistics import.
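Since both command IDs carry identical sensorData[] payloads, the dispatch change amounts to routing them through one batch-import path; an illustrative sketch (names are hypothetical):

```python
# Firmware 2.0.6 sends batch history via CMD 0x31; older firmware uses
# CMD 0x42. Both carry the same sensorData[] array.
BATCH_HISTORY_CMDS = {0x42, 0x31}

def is_batch_history_cmd(cmd):
    """True for TLV commands that carry batch historical sensor data."""
    return cmd in BATCH_HISTORY_CMDS
```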

Keep recorder dependency addition but revert fork-specific name,
documentation URL, issue tracker, and version to upstream values.

Reuse HA's built-in InfluxDB integration connection to write historical
sensor data directly, preserving original device timestamps for Grafana.
Works for both JSON Type 17 and TLV CMD 0x42/0x31 batch data paths.
Gracefully no-ops if InfluxDB integration is not configured.

HA's InfluxDB integration uses unit_of_measurement (e.g. °C, %, ppm)
as the measurement name, not "state". Match this behavior so batch
historical data merges with normal state_changed event data in Grafana.
Also add debug logging for InfluxDB batch writes.
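A sketch of a point built to that shape (the "value" field name and tag layout follow HA's influxdb component as described above; treat them as assumptions, and the helper itself is illustrative):

```python
def batch_point(entity_id, unit, value, ts_iso):
    """Build an InfluxDB v1 JSON point matching HA's influxdb component.

    The unit_of_measurement (e.g. "°C") is the measurement name, and the
    point is tagged with domain and object id, so batch history merges
    with the live state_changed stream in Grafana.
    """
    domain, object_id = entity_id.split(".", 1)
    return {
        "measurement": unit,
        "tags": {"domain": domain, "entity_id": object_id},
        "time": ts_iso,  # original device timestamp, not "now"
        "fields": {"value": float(value)},
    }
```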
