
Commit 7572a47

netoptimizer authored and Martin KaFai Lau committed
bpf: cpumap: Disable page_pool direct xdp_return need larger scope
When running an XDP bpf_prog on the remote CPU in the cpumap code, we must disable the direct return optimization that xdp_return can perform for mem_type page_pool. This optimization assumes code is still executing under RX-NAPI of the original receiving CPU, which isn't true on this remote CPU.

The cpumap code already disabled this via the helpers xdp_set_return_frame_no_direct() and xdp_clear_return_frame_no_direct(), but the scope didn't include xdp_do_flush(). When doing XDP_REDIRECT towards e.g. devmap, this causes the function bq_xmit_all() to run with the direct return optimization enabled, which can lead to hard-to-find bugs. The issue only happens when bq_xmit_all() cannot ndo_xdp_xmit all frames and then frees them via xdp_return_frame_rx_napi().

Fix by expanding the scope to include xdp_do_flush().

Found-by: Dragos Tatulea <[email protected]>
Fixes: 11941f8 ("bpf: cpumap: Implement generic cpumap")
Reported-by: Chris Arges <[email protected]>
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Signed-off-by: Martin KaFai Lau <[email protected]>
Tested-by: Chris Arges <[email protected]>
Link: https://patch.msgid.link/175519587755.3008742.1088294435150406835.stgit@firesoul
1 parent 8f5ae30 commit 7572a47

1 file changed: +2 −2 lines

kernel/bpf/cpumap.c

Lines changed: 2 additions & 2 deletions
@@ -186,7 +186,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 	struct xdp_buff xdp;
 	int i, nframes = 0;
 
-	xdp_set_return_frame_no_direct();
 	xdp.rxq = &rxq;
 
 	for (i = 0; i < n; i++) {
@@ -231,7 +230,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 		}
 	}
 
-	xdp_clear_return_frame_no_direct();
 	stats->pass += nframes;
 
 	return nframes;
@@ -255,6 +253,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
 
 	rcu_read_lock();
 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
+	xdp_set_return_frame_no_direct();
 
 	ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
 	if (unlikely(ret->skb_n))
@@ -264,6 +263,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
 	if (stats->redirect)
 		xdp_do_flush();
 
+	xdp_clear_return_frame_no_direct();
 	bpf_net_ctx_clear(bpf_net_ctx);
 	rcu_read_unlock();
 }
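Taken together, the resulting ordering in cpu_map_bpf_prog_run() looks roughly like this (a condensed sketch assembled from the hunks above, with unrelated lines elided as "..."; not the full function):

    static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, ...)
    {
            ...
            rcu_read_lock();
            bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
            xdp_set_return_frame_no_direct();      /* set early: covers everything below */

            ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
            ...
            if (stats->redirect)
                    xdp_do_flush();                /* bq_xmit_all() may free frames here */

            xdp_clear_return_frame_no_direct();    /* cleared only after the flush */
            bpf_net_ctx_clear(bpf_net_ctx);
            rcu_read_unlock();
    }

The point of the move is that any xdp_return_frame_rx_napi() call reached from xdp_do_flush() (e.g. via bq_xmit_all() when redirecting to devmap) now sees the no-direct flag and takes the non-direct page_pool return path instead of assuming it still runs under the original CPU's RX-NAPI.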