
Commit 869712f

hnaz authored and torvalds committed
mm: memcontrol: fix network errors from failing __GFP_ATOMIC charges
While upgrading from 4.16 to 5.2, we noticed these allocation errors in the log of the new kernel:

    SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
      cache: tw_sock_TCPv6(960:helper-logs), object size: 232, buffer size: 240, default order: 1, min order: 0
      node 0: slabs: 5, objs: 170, free: 0

        slab_out_of_memory+1
        ___slab_alloc+969
        __slab_alloc+14
        kmem_cache_alloc+346
        inet_twsk_alloc+60
        tcp_time_wait+46
        tcp_fin+206
        tcp_data_queue+2034
        tcp_rcv_state_process+784
        tcp_v6_do_rcv+405
        __release_sock+118
        tcp_close+385
        inet_release+46
        __sock_release+55
        sock_close+17
        __fput+170
        task_work_run+127
        exit_to_usermode_loop+191
        do_syscall_64+212
        entry_SYSCALL_64_after_hwframe+68

accompanied by an increase in machines going completely radio silent under memory pressure.

One thing that changed since 4.16 is e699e2c ("net, mm: account sock objects to kmemcg"), which made these slab caches subject to cgroup memory accounting and control.

The problem with that is that cgroups, unlike the page allocator, do not maintain dedicated atomic reserves. As a cgroup's usage hovers at its limit, atomic allocations - such as those done during network rx - can fail consistently for extended periods of time. The kernel is not able to operate under these conditions.

We don't want to revert the culprit patch, because it indeed tracks a potentially substantial amount of memory used by a cgroup.

We also don't want to implement dedicated atomic reserves for cgroups. There is no point in keeping a fixed margin of unused bytes in the cgroup's memory budget to accommodate a consumer that is impossible to predict - we'd be wasting memory and get into configuration headaches, not unlike what we have going with min_free_kbytes. We do this for physical memory because we have to, but cgroups are an accounting game.

Instead, account these privileged allocations to the cgroup, but let them bypass the configured limit if they have to. This way, we get the benefits of accounting the consumed memory and have it exert pressure on the rest of the cgroup, but like with the page allocator, we shift the burden of reclaiming on behalf of atomic allocations onto the regular allocations that can block.

Link: http://lkml.kernel.org/r/[email protected]
Fixes: e699e2c ("net, mm: account sock objects to kmemcg")
Signed-off-by: Johannes Weiner <[email protected]>
Reviewed-by: Shakeel Butt <[email protected]>
Cc: Suleiman Souhlal <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: <[email protected]> [4.18+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 656d571 commit 869712f
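For readers tracing the failure mode the message describes, here is a minimal sketch of the pre-fix path, assuming the 5.2-era call chain from the slab allocator into memcg charging. It is simplified, not verbatim kernel code; tw_sock_cachep is hypothetical shorthand for the timewait-sock slab cache reached via inet_twsk_alloc().

    /*
     * Sketch of the failure mode before this patch (simplified;
     * assumes the 5.2-era slab -> memcg charge path). tw_sock_cachep
     * stands in for the real timewait-sock cache.
     */
    struct inet_timewait_sock *tw;

    /* Network rx / TCP state processing runs in atomic context. */
    tw = kmem_cache_alloc(tw_sock_cachep, GFP_ATOMIC);
    /*
     * Since e699e2c, this cache is kmemcg-accounted, so its backing
     * pages must be charged to the socket's cgroup. In try_charge(),
     * GFP_ATOMIC does not allow blocking, so no reclaim can run; if
     * the cgroup sits at its limit, the charge fails, the slab page
     * cannot be allocated, and kmem_cache_alloc() returns NULL. The
     * SLUB message quoted above is the only trace left behind.
     */
    if (!tw)
        return NULL;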

File tree

1 file changed (+9, -0)

mm/memcontrol.c

Lines changed: 9 additions & 0 deletions
@@ -2534,6 +2534,15 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 		goto retry;
 	}
 
+	/*
+	 * Memcg doesn't have a dedicated reserve for atomic
+	 * allocations. But like the global atomic pool, we need to
+	 * put the burden of reclaim on regular allocation requests
+	 * and let these go through as privileged allocations.
+	 */
+	if (gfp_mask & __GFP_ATOMIC)
+		goto force;
+
 	/*
 	 * Unlike in global OOM situations, memcg is not in a physical
 	 * memory shortage. Allow dying and OOM-killed tasks to
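For reference, the force label that the new branch jumps to sits further down in try_charge() and is not part of this hunk. In the memcontrol.c of this era it charges the page counters unconditionally, letting usage temporarily exceed the limit. Reproduced approximately below; check the tree you are on for the exact form:

    force:
    	/*
    	 * The allocation either can't fail or will lead to more memory
    	 * being freed very soon. Allow memory usage go over the limit
    	 * temporarily by force charging it.
    	 */
    	page_counter_charge(&memcg->memory, nr_pages);
    	if (do_memsw_account())
    		page_counter_charge(&memcg->memsw, nr_pages);
    	css_get_many(&memcg->css, nr_pages);

    	return 0;

This is what makes the fix an accounting change rather than a reserve: the atomic allocation always succeeds and is always charged, and the resulting excess exerts pressure on the cgroup's regular, blockable allocations, which do the reclaim.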
