@kernel-patches-daemon-bpf-rc

Pull request for series with
subject: memcg: reading memcg stats more efficiently
version: 2
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=1012038

@kernel-patches-daemon-bpf-rc
Author

Upstream branch: 48a97ff
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1012038
version: 2

@kernel-patches-review-bot

commit 141a6230950e8b781e9438b1d3031107f7ab2155
Author: JP Kobryn <[email protected]>

memcg: introduce kfuncs for fetching memcg stats

This patch introduces four BPF kfuncs for fetching memory cgroup statistics
directly from BPF programs, avoiding the overhead of reading and parsing the
memory.stat file. The kfuncs provide access to memcg stats, node stats, and
vm events.

> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 4deda3362..6547c27d4 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -871,6 +871,73 @@ unsigned long memcg_events_local(struct mem_cgroup *memcg, int event)
>  }
>  #endif
>
> +static inline struct mem_cgroup *memcg_from_cgroup(struct cgroup *cgrp)
> +{
> +	return cgrp ? mem_cgroup_from_css(cgrp->subsys[memory_cgrp_id]) : NULL;
                                              ^^^^^^
The subsys array is marked __rcu in struct cgroup (see
include/linux/cgroup-defs.h:544). Other code accessing cgrp->subsys[]
uses rcu_dereference_check() or rcu_dereference_raw() (see
kernel/cgroup/cgroup.c:531 and kernel/cgroup/cgroup.c:703).

Should this access use an RCU dereference primitive? If BPF programs
execute within an RCU read-side critical section, it would be good to
make that explicit here for consistency with the rest of the codebase.
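A minimal sketch of the rework the comment is suggesting, assuming BPF programs do run under RCU protection (this is illustrative only, not the author's patch):

```c
/* Hypothetical variant that makes the RCU-protected access to
 * cgrp->subsys[] explicit, as other cgroup code does.
 */
static inline struct mem_cgroup *memcg_from_cgroup(struct cgroup *cgrp)
{
	struct cgroup_subsys_state *css;

	if (!cgrp)
		return NULL;

	/* BPF programs execute within an RCU read-side critical section,
	 * so rcu_dereference() documents the protection being relied on.
	 */
	css = rcu_dereference(cgrp->subsys[memory_cgrp_id]);
	return css ? mem_cgroup_from_css(css) : NULL;
}
```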

> +}
> +
> +__bpf_kfunc static void memcg_flush_stats(struct cgroup *cgrp)
> +{
> +	struct mem_cgroup *memcg = memcg_from_cgroup(cgrp);
> +
> +	if (!memcg)
> +		return;
> +
> +	mem_cgroup_flush_stats(memcg);
> +}
> +
> +__bpf_kfunc static unsigned long memcg_stat_fetch(struct cgroup *cgrp,
> +		enum memcg_stat_item item)
> +{
> +	struct mem_cgroup *memcg = memcg_from_cgroup(cgrp);
> +
> +	if (!memcg)
> +		return 0;
> +
> +	return memcg_page_state_output(memcg, item);
> +}
> +
> +__bpf_kfunc static unsigned long memcg_node_stat_fetch(struct cgroup *cgrp,
> +		enum node_stat_item item)
> +{
> +	struct mem_cgroup *memcg = memcg_from_cgroup(cgrp);
> +
> +	if (!memcg)
> +		return 0;
> +
> +	return memcg_page_state_output(memcg, item);
> +}
> +
> +__bpf_kfunc static unsigned long memcg_vm_event_fetch(struct cgroup *cgrp,
> +		enum vm_event_item item)
> +{
> +	struct mem_cgroup *memcg = memcg_from_cgroup(cgrp);
> +
> +	if (!memcg)
> +		return 0;
> +
> +	return memcg_events(memcg, item);
> +}
> +
> +BTF_KFUNCS_START(bpf_memcontrol_kfunc_ids)
> +BTF_ID_FLAGS(func, memcg_flush_stats, KF_TRUSTED_ARGS | KF_SLEEPABLE)
> +BTF_ID_FLAGS(func, memcg_stat_fetch, KF_TRUSTED_ARGS)
> +BTF_ID_FLAGS(func, memcg_node_stat_fetch, KF_TRUSTED_ARGS)
> +BTF_ID_FLAGS(func, memcg_vm_event_fetch, KF_TRUSTED_ARGS)
> +BTF_KFUNCS_END(bpf_memcontrol_kfunc_ids)
> +
> +static const struct btf_kfunc_id_set bpf_memcontrol_kfunc_set = {
> +	.owner          = THIS_MODULE,
> +	.set            = &bpf_memcontrol_kfunc_ids,
> +};
> +
> +static int __init bpf_memcontrol_kfunc_init(void)
> +{
> +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC,
> +					 &bpf_memcontrol_kfunc_set);
> +}
> +late_initcall(bpf_memcontrol_kfunc_init);
> +
>  struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
>  {
>  	/*



AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: memcg: introduce kfuncs for fetching memcg stats
CI run summary: https://github.com/kernel-patches/bpf-rc/actions/runs/18539971331

@kernel-patches-daemon-bpf-rc
Author

Forwarding comment 3407981460 via email
In-Reply-To: [email protected]
Patch: https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/


Reading from the memory.stat file can be expensive because of the string
encoding/decoding and text filtering involved. Introduce three kfuncs, one
for each category of memcg stat, that can be called from a bpf program.
This allows the numeric values to be transferred directly to userspace,
eliminating the string encoding/decoding. Since specific stats can be
fetched individually, it also removes the need for text filtering.

The patch also includes a kfunc for flushing stats in order to read the
latest values. Note that this is not required for fetching stats, since the
kernel periodically flushes memcg stats. It is left up to the programmer
whether they want more recent stats or not.

Signed-off-by: JP Kobryn <[email protected]>
Acked-by: Shakeel Butt <[email protected]>

Add test coverage for the kfuncs that fetch memcg stats. Using some common
stats, test before and after scenarios, ensuring that each stat increases
by some amount. The stats selected cover the three categories represented
by the enums node_stat_item, memcg_stat_item, and vm_event_item.

Since only a subset of all stats are queried, use a static struct made up
of fields for each stat. Write to the struct with the fetched values when
the bpf program is invoked and read the fields in the user mode program for
verification.

Signed-off-by: JP Kobryn <[email protected]>
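
The pattern described above (a static struct written by the BPF program, read back from user space) might look roughly like the following BPF-side sketch. This is a hedged illustration, not the series' actual selftest: the variable names, the chosen stats, and the use of `bpf_cgroup_from_id()`/`SEC("syscall")` as a way to obtain a trusted cgroup pointer are assumptions.

```c
/* SPDX-License-Identifier: GPL-2.0 */
/* Hypothetical BPF program exercising the fetch kfuncs; only the kfunc
 * names come from the patch, everything else is invented for illustration.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

extern struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
extern void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
extern void memcg_flush_stats(struct cgroup *cgrp) __ksym;
extern unsigned long memcg_node_stat_fetch(struct cgroup *cgrp,
					   enum node_stat_item item) __ksym;
extern unsigned long memcg_vm_event_fetch(struct cgroup *cgrp,
					  enum vm_event_item item) __ksym;

u64 cgroup_id;			/* set by user space before the run */
unsigned long nr_anon_mapped;	/* read back by user space afterwards */
unsigned long nr_pgfault;

SEC("syscall")
int fetch_stats(void *ctx)
{
	struct cgroup *cgrp = bpf_cgroup_from_id(cgroup_id);

	if (!cgrp)
		return 0;

	memcg_flush_stats(cgrp);	/* optional: force the latest values */
	nr_anon_mapped = memcg_node_stat_fetch(cgrp, NR_ANON_MAPPED);
	nr_pgfault = memcg_vm_event_fetch(cgrp, PGFAULT);
	bpf_cgroup_release(cgrp);
	return 0;
}
```

User space would record the two globals before and after applying memory pressure to the cgroup, then assert that each value increased.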

@kernel-patches-daemon-bpf-rc
Author

At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=1012038 expired. Closing PR.
