From: Linus Torvalds
Date: Tue, 2 Nov 2021 03:05:19 +0000 (-0700)
Subject: Merge tag 'trace-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt...
X-Git-Url: http://git.osdn.net/view?a=commitdiff_plain;h=79ef0c00142519bc34e1341447f3797436cc48bf;p=uclinux-h8%2Flinux.git

Merge tag 'trace-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:

 - kprobes: Restructured stack unwinder to show properly on x86 when a
   stack dump happens from a kretprobe callback.

 - Fix to bootconfig parsing

 - Have tracefs allow owner and group permissions by default (only
   denying others). There's been pressure to allow non-root access to
   tracefs in a controlled fashion, and using groups is probably the
   safest.

 - Bootconfig memory management updates.

 - Bootconfig clean up to have the tools directory be less dependent on
   changes in the kernel tree.

 - Allow perf to be traced by function tracer.

 - Rewrite of function graph tracer to be a callback from the function
   tracer instead of having its own trampoline (this change will happen
   on an arch by arch basis, and currently only x86_64 implements it).

 - Allow multiple direct trampolines (bpf hooks to functions) to be
   batched together in one synchronization.

 - Allow histogram triggers to add variables that can perform
   calculations against the event's fields.

 - Use the linker to determine architecture callbacks from the ftrace
   trampoline to allow for proper parameter prototypes and prevent
   warnings from the compiler.

 - Extend histogram triggers to key off of variables.

 - Have trace recursion use bit magic to determine preempt context over
   if branches (a conceptual sketch follows the commit list below).

 - Have trace recursion disable preemption as all use cases do anyway
   (see the usage sketch after the commit list below).

 - Added testing for verification of tracing utilities.

 - Various small clean ups and fixes.

* tag 'trace-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (101 commits)
  tracing/histogram: Fix semicolon.cocci warnings
  tracing/histogram: Fix documentation inline emphasis warning
  tracing: Increase PERF_MAX_TRACE_SIZE to handle Sentinel1 and docker together
  tracing: Show size of requested perf buffer
  bootconfig: Initialize ret in xbc_parse_tree()
  ftrace: do CPU checking after preemption disabled
  ftrace: disable preemption when recursion locked
  tracing/histogram: Document expression arithmetic and constants
  tracing/histogram: Optimize division by a power of 2
  tracing/histogram: Covert expr to const if both operands are constants
  tracing/histogram: Simplify handling of .sym-offset in expressions
  tracing: Fix operator precedence for hist triggers expression
  tracing: Add division and multiplication support for hist triggers
  tracing: Add support for creating hist trigger variables from literal
  selftests/ftrace: Stop tracing while reading the trace file by default
  MAINTAINERS: Update KPROBES and TRACING entries
  test_kprobes: Move it from kernel/ to lib/
  docs, kprobes: Remove invalid URL and add new reference
  samples/kretprobes: Fix return value if register_kretprobe() failed
  lib/bootconfig: Fix the xbc_get_info kerneldoc
  ...
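The two recursion items above change the locking helpers so that taking the ftrace recursion lock also disables preemption, and releasing it re-enables preemption. Below is a minimal usage sketch from a callback's point of view, assuming the ftrace_test_recursion_trylock()/ftrace_test_recursion_unlock() wrappers from include/linux/trace_recursion.h; the callback itself (my_callback_func) is hypothetical and only illustrates the locking pattern, not any real tracer.

#include <linux/ftrace.h>

/* Hypothetical ftrace callback: only the recursion-lock usage matters here. */
static void my_callback_func(unsigned long ip, unsigned long parent_ip,
			     struct ftrace_ops *op, struct ftrace_regs *fregs)
{
	int bit;

	/*
	 * On success this returns the recursion bit (>= 0) and, with this
	 * series, also disables preemption until the matching unlock.
	 */
	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;		/* recursion detected; the event is dropped */

	/* ... do the tracing work; preemption is disabled in this window ... */

	/* Clears the recursion bit and re-enables preemption. */
	ftrace_test_recursion_unlock(bit);
}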
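On the "bit magic" item, the idea is to derive the tracing context (NMI, hard IRQ, softirq or normal) from preempt_count() by summing boolean tests rather than walking a chain of if branches. A conceptual sketch of the pattern, assuming the standard preempt_count masks from linux/preempt.h (this shows the technique, not a verbatim copy of the kernel helper):

#include <linux/preempt.h>

/*
 * Each !! contributes 1 when the corresponding section of preempt_count()
 * is non-zero, so the result is 3 in NMI, 2 in hard IRQ, 1 in softirq and
 * 0 in normal context, computed without conditional branches.
 */
static inline int trace_context_level(void)
{
	unsigned long pc = preempt_count();
	unsigned char bit = 0;

	bit += !!(pc & NMI_MASK);
	bit += !!(pc & (NMI_MASK | HARDIRQ_MASK));
	bit += !!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET));

	return bit;
}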
---
79ef0c00142519bc34e1341447f3797436cc48bf
diff --cc arch/parisc/kernel/ftrace.c
index 2a1f826b3def,b14011d3c2f1..4d392e4ed358
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@@ -93,15 -94,8 +93,9 @@@ int ftrace_disable_ftrace_graph_caller(
  #endif
  
  #ifdef CONFIG_DYNAMIC_FTRACE
- 
- int __init ftrace_dyn_arch_init(void)
- {
- 	return 0;
- }
- 
  int ftrace_update_ftrace_func(ftrace_func_t func)
  {
+ 	ftrace_func = func;
  	return 0;
  }
diff --cc include/linux/trace_recursion.h
index fe95f0922526,a13f23b04d73..c303f7a114e9
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@@ -139,8 -155,11 +135,11 @@@ extern void ftrace_record_recursion(uns
  # define do_ftrace_record_recursion(ip, pip)	do { } while (0)
  #endif
  
+ /*
+  * Preemption is promised to be disabled when return bit >= 0.
+  */
  static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsigned long pip,
- 							 int start, int max)
+ 							 int start)
  {
  	unsigned int val = READ_ONCE(current->trace_recursion);
  	int bit;
@@@ -148,10 -167,18 +147,14 @@@
  	bit = trace_get_context_bit() + start;
  	if (unlikely(val & (1 << bit))) {
  		/*
- 		 * It could be that preempt_count has not been updated during
- 		 * a switch between contexts. Allow for a single recursion.
+ 		 * If an interrupt occurs during a trace, and another trace
+ 		 * happens in that interrupt but before the preempt_count is
+ 		 * updated to reflect the new interrupt context, then this
+ 		 * will think a recursion occurred, and the event will be dropped.
+ 		 * Let a single instance happen via the TRANSITION_BIT to
+ 		 * not drop those events.
  		 */
- 		bit = TRACE_TRANSITION_BIT;
+ 		bit = TRACE_CTX_TRANSITION + start;
  		if (val & (1 << bit)) {
  			do_ftrace_record_recursion(ip, pip);
  			return -1;
@@@ -162,12 -192,22 +165,18 @@@
  	current->trace_recursion = val;
  	barrier();
  
+ 	preempt_disable_notrace();
+ 
- 	return bit + 1;
+ 	return bit;
  }
  
+ /*
+  * Preemption will be enabled (if it was previously enabled).
+  */
  static __always_inline void trace_clear_recursion(int bit)
  {
- 	if (!bit)
- 		return;
- 
+ 	preempt_enable_notrace();
  	barrier();
- 	bit--;
  	trace_recursion_clear(bit);
  }
diff --cc kernel/kprobes.c
index 9a38e7581a5c,4676627cb066..e9db0c810554
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@@ -1250,10 -1258,10 +1258,10 @@@ void kprobe_busy_end(void
  }
  
  /*
- * This function is called from finish_task_switch() when task 'tk' becomes
- * dead, so that we can recycle any kretprobe instances associated
- * with this task. These left over instances represent probed functions
- * that have been called but will never return.
+ * This function is called from delayed_put_task_struct() when a task is
- * dead and cleaned up to recycle any function-return probe instances
- * associated with this task. These left over instances represent probed
- * functions that have been called but will never return.
++ * dead and cleaned up to recycle any kretprobe instances associated with
++ * this task. These left over instances represent probed functions that
++ * have been called but will never return.
  */
  void kprobe_flush_task(struct task_struct *tk)
  {
diff --cc kernel/trace/ftrace.c
index feebf57c6458,b4ed1a301232..f3ea4e20072f
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@@ -6977,7 -7198,12 +7198,12 @@@ __ftrace_ops_list_func(unsigned long ip
  	struct ftrace_ops *op;
  	int bit;
  
+ 	/*
+ 	 * The ftrace_test_and_set_recursion() will disable preemption,
+ 	 * which is required since some of the ops may be dynamically
+ 	 * allocated, they must be freed after a synchronize_rcu().
+ 	 */
- 	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX);
+ 	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START);
  	if (bit < 0)
  		return;
diff --cc kernel/trace/trace.c
index bc677cd64224,985390cb8441..c88bbfe75d1d
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@@ -1744,15 -1721,16 +1721,15 @@@ void latency_fsnotify(struct trace_arra
  	irq_work_queue(&tr->fsnotify_irqwork);
  }
  
 -/*
 - * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
 - *  defined(CONFIG_FSNOTIFY)
 - */
 -#else
 +#elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \
 +	|| defined(CONFIG_OSNOISE_TRACER)
  #define trace_create_maxlat_file(tr, d_tracer)				\
- 	trace_create_file("tracing_max_latency", 0644, d_tracer,	\
- 			  &tr->max_latency, &tracing_max_lat_fops)
+ 	trace_create_file("tracing_max_latency", TRACE_MODE_WRITE,	\
+ 			  d_tracer, &tr->max_latency, &tracing_max_lat_fops)
 +#else
 +#define trace_create_maxlat_file(tr, d_tracer) do { } while (0)
  #endif
  
  #ifdef CONFIG_TRACER_MAX_TRACE
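The TRACE_MODE_WRITE constant in the hunk above replaces the hard-coded 0644 mode and comes from the tracefs permission change in this series (allow owner and group, deny others). A sketch of what the definitions look like, assuming 0640/0440 octal modes; the authoritative definitions live in kernel/trace/trace.h:

/* Writable tracefs files: read/write for owner, read for group, nothing for others. */
#define TRACE_MODE_WRITE	0640
/* Read-only tracefs files: read for owner and group, nothing for others. */
#define TRACE_MODE_READ		0440

trace_create_file() and helpers such as trace_create_maxlat_file() then take one of these modes instead of literal 0644/0444 values.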