
Conversation


@erwindouna erwindouna commented Jan 2, 2026

Breaking change

Proposed change

Adds new system df information per endpoint, including reclaimable disk space.
A git rebase is needed once #160130 is merged, so that the requirements files and manifest.json are properly updated.

Type of change

  • Dependency upgrade
  • Bugfix (non-breaking change which fixes an issue)
  • New integration (thank you!)
  • New feature (which adds functionality to an existing integration)
  • Deprecation (breaking change to happen in the future)
  • Breaking change (fix/feature causing existing functionality to break)
  • Code quality improvements to existing code or addition of tests

Additional information

  • This PR fixes or closes issue: fixes #
  • This PR is related to issue:
  • Link to documentation pull request:
  • Link to developer documentation pull request:
  • Link to frontend pull request:

Checklist

  • I understand the code I am submitting and can explain how it works.
  • The code change is tested and works locally.
  • Local tests pass. Your PR cannot be merged unless tests pass.
  • There is no commented out code in this PR.
  • I have followed the development checklist
  • I have followed the perfect PR recommendations
  • The code has been formatted using Ruff (ruff format homeassistant tests)
  • Tests have been added to verify that the new code works.
  • Any generated code has been carefully reviewed for correctness and compliance with project standards.

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:

  • The manifest file has all fields filled out correctly.
    Updated and included derived files by running: python3 -m script.hassfest.
  • New or updated dependencies have been added to requirements_all.txt.
    Updated by running python3 -m script.gen_requirements_all.
  • For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.

To help with the load of incoming pull requests:

Copilot AI left a comment


Pull request overview

This PR adds Docker system disk usage information to the Portainer integration, enabling monitoring of reclaimable disk space per endpoint. The changes depend on the pyportainer library upgrade from version 1.0.19 to 1.0.21 (PR #160130), which adds the necessary API endpoints for retrieving system df data.

Key Changes

  • Added five new diagnostic sensors to track disk usage metrics for containers, images, and volumes
  • Integrated docker_system_df API call into the coordinator's data update cycle
  • Updated dependency to pyportainer 1.0.21
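
To illustrate the kind of aggregation these sensors perform, here is a minimal standalone sketch that derives total and reclaimable byte counts from a Docker system-df payload. The field names (Size, SizeRw, UsageData, etc.) follow Docker's /system/df response shape but are an assumption here; this is not the actual integration code.

```python
def summarize_system_df(df: dict) -> dict[str, int]:
    """Compute total and reclaimable byte counts from a Docker
    system-df payload. Field names are illustrative assumptions
    mirroring Docker's /system/df response, not pyportainer's schema."""
    images = df.get("Images") or []
    containers = df.get("Containers") or []
    volumes = df.get("Volumes") or []

    images_total = sum(i.get("Size", 0) for i in images)
    # Images referenced by no container are reclaimable.
    images_reclaimable = sum(
        i.get("Size", 0) for i in images if i.get("Containers", 0) == 0
    )
    containers_total = sum(c.get("SizeRw", 0) for c in containers)
    # Writable layers of non-running containers are reclaimable.
    containers_reclaimable = sum(
        c.get("SizeRw", 0) for c in containers if c.get("State") != "running"
    )
    volumes_total = sum(
        (v.get("UsageData") or {}).get("Size", 0) for v in volumes
    )
    return {
        "images_total": images_total,
        "images_reclaimable": images_reclaimable,
        "containers_total": containers_total,
        "containers_reclaimable": containers_reclaimable,
        "volumes_total": volumes_total,
    }
```

The actual sensors expose the values per endpoint after the coordinator fetches the payload via `docker_system_df(endpoint.id)`.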

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.

Summary per file:

  • homeassistant/components/portainer/sensor.py: Added 5 new sensor entity descriptions for disk usage metrics (reclaimable/total size for containers and images, total size for volumes)
  • homeassistant/components/portainer/coordinator.py: Integrated docker_system_df API call and added DockerSystemDF to the coordinator data model
  • homeassistant/components/portainer/strings.json: Added translation keys for the new disk usage sensor entities
  • homeassistant/components/portainer/icons.json: Added icons (file-restore, harddisk) for the new disk usage sensors
  • homeassistant/components/portainer/manifest.json: Updated pyportainer requirement from 1.0.19 to 1.0.21
  • requirements_all.txt: Updated pyportainer version to 1.0.21
  • requirements_test_all.txt: Updated pyportainer version to 1.0.21
  • tests/components/portainer/conftest.py: Added DockerSystemDF mock return value to the test client setup
  • tests/components/portainer/fixtures/docker_system_df.json: Added a test fixture with sample disk usage data for images, containers, volumes, and build cache

containers = await self.portainer.get_containers(endpoint.id)
docker_version = await self.portainer.docker_version(endpoint.id)
docker_info = await self.portainer.docker_info(endpoint.id)
docker_system_df = await self.portainer.docker_system_df(endpoint.id)

As this integration grows, we may want to consider not downloading all the data on every update, but only the data that is actually needed.

systemmonitor implements such a pattern:

self._initial_update: bool = True
self.update_subscribers: dict[tuple[str, str], set[str]] = (
    self.set_subscribers_tuples(arguments)
)

def set_subscribers_tuples(
    self, arguments: list[str]
) -> dict[tuple[str, str], set[str]]:
    """Set tuples in subscribers dictionary."""
    _disk_defaults: dict[tuple[str, str], set[str]] = {}
    for argument in arguments:
        _disk_defaults[("disks", argument)] = set()
    return {
        **_disk_defaults,
        ("addresses", ""): set(),
        ("battery", ""): set(),
        ("boot", ""): set(),
        ("cpu_percent", ""): set(),
        ("fan_speed", ""): set(),
        ("io_counters", ""): set(),
        ("load", ""): set(),
        ("memory", ""): set(),
        ("processes", ""): set(),
        ("swap", ""): set(),
        ("temperatures", ""): set(),
    }

_data = await self.hass.async_add_executor_job(self.update_data)

load: tuple = (None, None, None)
if self.update_subscribers[("load", "")] or self._initial_update:
    load = os.getloadavg()
    _LOGGER.debug("Load: %s", load)
cpu_percent: float | None = None
if self.update_subscribers[("cpu_percent", "")] or self._initial_update:
    cpu_percent = self._psutil.cpu_percent(interval=None)
    _LOGGER.debug("cpu_percent: %s", cpu_percent)

async def async_added_to_hass(self) -> None:
    """When added to hass."""
    self.coordinator.update_subscribers[
        self.entity_description.add_to_update(self)
    ].add(self.entity_id)
    return await super().async_added_to_hass()

async def async_will_remove_from_hass(self) -> None:
    """When removed from hass."""
    self.coordinator.update_subscribers[
        self.entity_description.add_to_update(self)
    ].remove(self.entity_id)
    return await super().async_will_remove_from_hass()
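
Stripped of Home Assistant specifics, the pattern above boils down to: fetch a metric only when at least one entity has subscribed to its key, or on the very first refresh. A minimal standalone sketch of that idea, with made-up fetcher callables in place of psutil calls (class and method names here are illustrative, not Home Assistant APIs):

```python
from typing import Callable


class SelectiveCoordinator:
    """Fetch only the metrics that currently have subscribers.

    Hypothetical sketch of the systemmonitor subscriber pattern."""

    def __init__(self, fetchers: dict[str, Callable[[], object]]) -> None:
        self._fetchers = fetchers
        self._initial_update = True
        # Metric key -> set of subscribed entity ids.
        self.update_subscribers: dict[str, set[str]] = {
            key: set() for key in fetchers
        }

    def subscribe(self, key: str, entity_id: str) -> None:
        self.update_subscribers[key].add(entity_id)

    def unsubscribe(self, key: str, entity_id: str) -> None:
        self.update_subscribers[key].discard(entity_id)

    def refresh(self) -> dict[str, object]:
        data: dict[str, object] = {}
        for key, fetch in self._fetchers.items():
            # Fetch on the first refresh, or when someone is listening.
            if self.update_subscribers[key] or self._initial_update:
                data[key] = fetch()
        self._initial_update = False
        return data
```

Entities would call subscribe/unsubscribe from their added/removed hooks, so the coordinator skips API calls no sensor is watching.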
