Add endpoint system df information #160134
Conversation
Pull request overview
This PR adds Docker system disk usage information to the Portainer integration, enabling monitoring of reclaimable disk space per endpoint. The changes depend on the pyportainer library upgrade from version 1.0.19 to 1.0.21 (PR #160130), which adds the necessary API endpoints for retrieving system df data.
Key Changes
- Added five new diagnostic sensors to track disk usage metrics for containers, images, and volumes
- Integrated docker_system_df API call into the coordinator's data update cycle
- Updated dependency to pyportainer 1.0.21
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `homeassistant/components/portainer/sensor.py` | Added 5 new sensor entity descriptions for disk usage metrics (reclaimable/total size for containers and images, total size for volumes) |
| `homeassistant/components/portainer/coordinator.py` | Integrated the `docker_system_df` API call and added `DockerSystemDF` to the coordinator data model |
| `homeassistant/components/portainer/strings.json` | Added translation keys for the new disk usage sensor entities |
| `homeassistant/components/portainer/icons.json` | Added icons (`file-restore`, `harddisk`) for the new disk usage sensors |
| `homeassistant/components/portainer/manifest.json` | Updated the pyportainer requirement from 1.0.19 to 1.0.21 |
| `requirements_all.txt` | Updated the pyportainer version to 1.0.21 |
| `requirements_test_all.txt` | Updated the pyportainer version to 1.0.21 |
| `tests/components/portainer/conftest.py` | Added a `DockerSystemDF` mock return value to the test client setup |
| `tests/components/portainer/fixtures/docker_system_df.json` | Added a test fixture with sample disk usage data for images, containers, volumes, and build cache |
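For illustration only, the kind of per-endpoint values these sensors expose could be derived from a Docker system-df payload roughly as below. This is a hedged sketch, not the PR's code: the field names (`Size`, `SharedSize`, `Containers`, `SizeRw`, `UsageData`) follow the Docker Engine API's `GET /system/df` response, and the exact fields pyportainer exposes may differ.

```python
def summarize_system_df(df: dict) -> dict[str, int]:
    """Aggregate total/reclaimable byte counts from a system-df-like payload.

    Illustrative sketch; field names are assumed from the Docker Engine API.
    """
    images = df.get("Images") or []
    containers = df.get("Containers") or []
    volumes = df.get("Volumes") or []
    return {
        "images_total": sum(i.get("Size", 0) for i in images),
        # Reclaimable images: images not used by any container.
        "images_reclaimable": sum(
            i.get("Size", 0) for i in images if not i.get("Containers")
        ),
        "containers_total": sum(c.get("SizeRw", 0) for c in containers),
        # Reclaimable containers: writable layers of non-running containers.
        "containers_reclaimable": sum(
            c.get("SizeRw", 0) for c in containers if c.get("State") != "running"
        ),
        "volumes_total": sum(
            (v.get("UsageData") or {}).get("Size", 0) for v in volumes
        ),
    }


# Hypothetical payload, shaped like the docker_system_df.json fixture might be.
sample = {
    "Images": [
        {"Size": 100, "SharedSize": 20, "Containers": 1},
        {"Size": 50, "SharedSize": 0, "Containers": 0},
    ],
    "Containers": [
        {"SizeRw": 10, "State": "running"},
        {"SizeRw": 5, "State": "exited"},
    ],
    "Volumes": [
        {"UsageData": {"Size": 30, "RefCount": 1}},
        {"UsageData": {"Size": 7, "RefCount": 0}},
    ],
}
summary = summarize_system_df(sample)
```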
```python
containers = await self.portainer.get_containers(endpoint.id)
docker_version = await self.portainer.docker_version(endpoint.id)
docker_info = await self.portainer.docker_info(endpoint.id)
docker_system_df = await self.portainer.docker_system_df(endpoint.id)
```
As this integration grows, we may want to consider not downloading all of the data on every update, but only the data that is actually needed. The systemmonitor integration implements such a pattern:
`core/homeassistant/components/systemmonitor/coordinator.py`, lines 122 to 147 at `3b2a7ba`:

```python
self._initial_update: bool = True
self.update_subscribers: dict[tuple[str, str], set[str]] = (
    self.set_subscribers_tuples(arguments)
)

def set_subscribers_tuples(
    self, arguments: list[str]
) -> dict[tuple[str, str], set[str]]:
    """Set tuples in subscribers dictionary."""
    _disk_defaults: dict[tuple[str, str], set[str]] = {}
    for argument in arguments:
        _disk_defaults[("disks", argument)] = set()
    return {
        **_disk_defaults,
        ("addresses", ""): set(),
        ("battery", ""): set(),
        ("boot", ""): set(),
        ("cpu_percent", ""): set(),
        ("fan_speed", ""): set(),
        ("io_counters", ""): set(),
        ("load", ""): set(),
        ("memory", ""): set(),
        ("processes", ""): set(),
        ("swap", ""): set(),
        ("temperatures", ""): set(),
    }
```
`core/homeassistant/components/systemmonitor/coordinator.py`, lines 153 to 163 at `3b2a7ba`:

```python
_data = await self.hass.async_add_executor_job(self.update_data)
load: tuple = (None, None, None)
if self.update_subscribers[("load", "")] or self._initial_update:
    load = os.getloadavg()
    _LOGGER.debug("Load: %s", load)
cpu_percent: float | None = None
if self.update_subscribers[("cpu_percent", "")] or self._initial_update:
    cpu_percent = self._psutil.cpu_percent(interval=None)
    _LOGGER.debug("cpu_percent: %s", cpu_percent)
```
`core/homeassistant/components/systemmonitor/sensor.py`, lines 661 to 673 at `3b2a7ba`:

```python
async def async_added_to_hass(self) -> None:
    """When added to hass."""
    self.coordinator.update_subscribers[
        self.entity_description.add_to_update(self)
    ].add(self.entity_id)
    return await super().async_added_to_hass()

async def async_will_remove_from_hass(self) -> None:
    """When removed from hass."""
    self.coordinator.update_subscribers[
        self.entity_description.add_to_update(self)
    ].remove(self.entity_id)
    return await super().async_will_remove_from_hass()
```
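Applied to Portainer, the same idea might look roughly like the following self-contained sketch (hypothetical class and method names, not the actual PR code): the coordinator keeps a subscriber set per API group, fetches everything on the initial update so entities can be created, and afterwards skips any call, such as `docker_system_df`, that no entity has subscribed to.

```python
import asyncio


class FakeClient:
    """Stand-in for the pyportainer client; records which calls were made."""

    def __init__(self) -> None:
        self.calls: list[str] = []

    async def get_containers(self, endpoint_id: int) -> list:
        self.calls.append("get_containers")
        return []

    async def docker_system_df(self, endpoint_id: int) -> dict:
        self.calls.append("docker_system_df")
        return {"Images": [], "Containers": [], "Volumes": []}


class SelectiveCoordinator:
    """Sketch of systemmonitor's pattern: fetch only subscribed data groups."""

    def __init__(self, client: FakeClient) -> None:
        self._client = client
        self._initial_update = True
        # Data group -> entity_ids that need it; entities would add themselves
        # in async_added_to_hass and remove themselves on removal.
        self.update_subscribers: dict[str, set[str]] = {
            "get_containers": set(),
            "docker_system_df": set(),
        }

    async def async_update_data(self, endpoint_id: int) -> dict:
        data: dict = {}
        for group, subscribers in self.update_subscribers.items():
            if subscribers or self._initial_update:
                data[group] = await getattr(self._client, group)(endpoint_id)
        self._initial_update = False
        return data


async def _demo() -> tuple[set, set]:
    client = FakeClient()
    coordinator = SelectiveCoordinator(client)
    first = set(await coordinator.async_update_data(1))  # initial: fetch all
    # Afterwards only one sensor subscribes to the disk-usage group.
    coordinator.update_subscribers["docker_system_df"].add("sensor.disk_usage")
    second = set(await coordinator.async_update_data(1))
    return first, second


first_groups, second_groups = asyncio.run(_demo())
```

The trade-off is bookkeeping in every entity's add/remove lifecycle hooks in exchange for fewer API round-trips per update cycle.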
Breaking change
Proposed change
Adds new system df information per endpoint, including reclaimable disk space.
Needs a git rebase once #160130 is merged, so that the requirements files and manifest.json are properly updated.
Type of change
Additional information
Checklist
- The code has been formatted using Ruff (`ruff format homeassistant tests`)

If user exposed functionality or configuration variables are added/changed:
If the code communicates with devices, web services, or third-party tools:
- New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`.
- Updated and included derived files by running: `python3 -m script.hassfest`.

To help with the load of incoming pull requests: