
bpf: disable preemption for bpf progs attached to uprobe
Author:     Alexei Starovoitov <ast@kernel.org>
AuthorDate: Mon, 24 Feb 2020 19:27:15 +0000 (11:27 -0800)
Commit:     Alexei Starovoitov <ast@kernel.org>
CommitDate: Tue, 25 Feb 2020 00:17:14 +0000 (16:17 -0800)
trace_call_bpf() no longer disables preemption on its own.
All callers of this function have to do it explicitly.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 18d16f3..2a8e8e9 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1333,8 +1333,15 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
        int size, esize;
        int rctx;
 
-       if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
-               return;
+       if (bpf_prog_array_valid(call)) {
+               u32 ret;
+
+               preempt_disable();
+               ret = trace_call_bpf(call, regs);
+               preempt_enable();
+               if (!ret)
+                       return;
+       }
 
        esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));