android-x86/kernel.git
Wang Nan [Wed, 1 Jul 2015 02:13:57 +0000 (02:13 +0000)]
bpf tools: Collect version and license from ELF sections

Expand bpf_obj_elf_collect() to collect license and kernel version
information from an eBPF object file. An eBPF object file should have a
section named 'license', which contains a string, and a section named
'version', which contains a u32 LINUX_VERSION_CODE.

bpf_obj_validate() is introduced to validate the object file after it
is loaded. Currently it only checks for the existence of the 'version'
section.
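
For illustration only (not part of this patch): such sections are
usually emitted straight from the eBPF program source. A minimal
sketch, assuming the common SEC() helper macro:

  #include <linux/version.h>

  /* SEC() is an assumption here: the usual section-placement helper. */
  #define SEC(name) __attribute__((section(name), used))

  char _license[] SEC("license") = "GPL";
  unsigned int _version SEC("version") = LINUX_VERSION_CODE;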

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-10-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:56 +0000 (02:13 +0000)]
bpf tools: Iterate over ELF sections to collect information

bpf_obj_elf_collect() is introduced to iterate over the ELF sections of
eBPF object files and collect information from them. This function will
be further enhanced to collect license, kernel version, programs,
configs and map information.
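
For illustration, iterating over sections with libelf typically follows
this pattern (a sketch, assuming 'elf' is the Elf handle libbpf already
opened; not the exact code in this patch):

  #include <gelf.h>

  static int collect_sections(Elf *elf)
  {
          Elf_Scn *scn = NULL;
          size_t shstrndx;

          if (elf_getshdrstrndx(elf, &shstrndx))  /* section-name strtab */
                  return -1;

          while ((scn = elf_nextscn(elf, scn)) != NULL) {
                  GElf_Shdr sh;
                  char *name;

                  if (gelf_getshdr(scn, &sh) != &sh)
                          continue;
                  name = elf_strptr(elf, shstrndx, sh.sh_name);
                  if (!name)
                          continue;
                  /* dispatch on name: "license", "version", "maps", ... */
          }
          return 0;
  }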

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-9-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:55 +0000 (02:13 +0000)]
bpf tools: Check endianness and make libbpf fail early

Check endianness according to the EHDR. Code is taken from
tools/perf/util/symbol-elf.c.

Libbpf doesn't magically convert mismatched endianness. Even if we swap
eBPF instructions to the correct byte order, we are unable to deal with
endianness in the code logic generated by LLVM.

Therefore, libbpf should simply reject mismatched ELF objects and let
LLVM create good code.
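
A minimal sketch of that kind of check, assuming the ELF header was
already read into 'ehdr' (illustrative, not the exact libbpf code):

  #include <endian.h>
  #include <gelf.h>

  /* Reject objects whose data encoding doesn't match the host. */
  static int check_endianness(const GElf_Ehdr *ehdr)
  {
  #if __BYTE_ORDER == __LITTLE_ENDIAN
          if (ehdr->e_ident[EI_DATA] == ELFDATA2LSB)
                  return 0;
  #elif __BYTE_ORDER == __BIG_ENDIAN
          if (ehdr->e_ident[EI_DATA] == ELFDATA2MSB)
                  return 0;
  #endif
          return -1;      /* mismatched endianness: refuse to load */
  }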

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-8-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:54 +0000 (02:13 +0000)]
bpf tools: Read eBPF object from buffer

To support dynamic compiling, this patch allows the caller to pass an
in-memory buffer to libbpf via bpf_object__open_buffer(). libbpf calls
elf_memory() to open it as an ELF object file.

Because __bpf_object__open() collects all required data and won't need
that buffer anymore, libbpf uses the buffer directly instead of cloning
it. The caller of libbpf can free the buffer or use it for other things
after bpf_object__open_buffer() returns.
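
A hedged usage sketch, assuming the function takes the buffer and its
size (the exact signature may differ):

  #include <stdlib.h>

  /* 'buf'/'sz' come from the caller's own dynamic compilation step. */
  static struct bpf_object *open_compiled(char *buf, size_t sz)
  {
          struct bpf_object *obj = bpf_object__open_buffer(buf, sz);

          if (obj)
                  free(buf);  /* open returned: libbpf no longer needs it */
          return obj;
  }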

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-7-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:53 +0000 (02:13 +0000)]
bpf tools: Open eBPF object file and do basic validation

This patch defines the basic interface of libbpf. 'struct bpf_object'
will be the handle for each object file. Its internal structure is
hidden from the user. eBPF object files are compiled by LLVM in ELF
format. In this patch, libelf is used to open those files, read the
EHDR and do basic validation according to e_type and e_machine.

All ELF-related state is grouped together and resides in the efile
field of 'struct bpf_object'. bpf_object__elf_finish() is introduced to
clear it.

After all eBPF programs in an object file are loaded, the related ELF
information is useless, so close the object file and free that memory.

The zfree() and zclose() functions are introduced to ensure pointers
are set to NULL and file descriptors to negative values after the
resources are released.
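
A minimal sketch of what such helpers can look like (illustrative; the
real definitions may be macros with a slightly different shape):

  #include <stdlib.h>     /* free()  */
  #include <unistd.h>     /* close() */

  static void zfree(void **ptr)
  {
          free(*ptr);
          *ptr = NULL;            /* poison the pointer */
  }

  static int zclose(int *fd)
  {
          int err = 0;

          if (*fd >= 0)
                  err = close(*fd);
          *fd = -1;               /* poison the descriptor */
          return err;
  }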

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-6-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:52 +0000 (02:13 +0000)]
bpf tools: Allow caller to set printing function

With libbpf_set_print(), users of libbpf can register their own debug,
info and warning printing functions. Libbpf will use those functions to
print messages. If none are provided, the default info and warning
printing functions are fprintf(stderr, ...); the default debug printing
function is NULL.

This API is designed to be used by perf, enabling it to register its
own logging functions so that all logs are uniform, instead of having a
separate logging level control.
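
A hedged usage sketch, assuming the three-callback form described above
(warning, info, debug); the exact callback signature is an assumption:

  #include <stdarg.h>
  #include <stdio.h>

  /* A printf-like callback handed to libbpf. */
  static int my_print(const char *fmt, ...)
  {
          va_list args;
          int ret;

          va_start(args, fmt);
          ret = vfprintf(stderr, fmt, args);
          va_end(args);
          return ret;
  }

  static void setup_libbpf_logging(void)
  {
          /* warning and info printers; NULL keeps debug output off */
          libbpf_set_print(my_print, my_print, NULL);
  }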

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-5-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:51 +0000 (02:13 +0000)]
bpf tools: Introduce 'bpf' library and add bpf feature check

This is the first patch of libbpf. The goal of libbpf is to create a
standard way of accessing eBPF object files. This patch creates a
'Makefile' and 'Build' for it, allowing 'make' to build libbpf.a and
libbpf.so and 'make install' to put them into the proper directories.
Most of the Makefile is borrowed from traceevent.

Before building, the Makefile checks for the existence of libelf and
refuses to build if it is not found. Instead of throwing an error right
away when libelf is missing, the error is raised in a phony target
"elfdep". This design ensures 'make clean' still works even if libelf
is not found.

Because libbpf requires the 'kern_version' field to be set in 'union
bpf_attr' ("bpfdep" is used for that dependency), the kernel BPF API is
also checked by introducing a new feature check 'bpf' into
tools/build/feature, which checks the existence and version of
linux/bpf.h. When building libbpf, it searches for that file in
include/uapi/linux of the kernel source tree it resides in (controlled
by FEATURE_CHECK_CFLAGS-bpf). Since it searches the kernel source tree,
installing the newest kernel headers is not required, unless we are
trying to port these files to an old kernel.

To avoid checking that file when building perf, the newly introduced
'bpf' feature check is not added to FEATURE_TESTS and FEATURE_DISPLAY
by default in tools/build/Makefile.feature, but only to libbpf's own
feature list.
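
For illustration, such a feature check is typically a tiny compile
test. A minimal sketch of what the 'bpf' probe could look like (an
assumption, not necessarily the exact test file added here):

  /* Compiles only if linux/bpf.h exists and union bpf_attr has a
   * kern_version field. */
  #include <linux/bpf.h>

  int main(void)
  {
          union bpf_attr attr;

          attr.kern_version = 0;
          return (int)attr.kern_version;
  }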

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Bcc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-4-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Ingo Molnar [Fri, 7 Aug 2015 07:11:30 +0000 (09:11 +0200)]
Merge tag 'perf-core-for-mingo' of git://git./linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

User visible changes:

  - IPC and cycle accounting in 'perf annotate'. (Andi Kleen)

  - Display cycles in branch sort mode in 'perf report'. (Andi Kleen)

  - Add total time column to 'perf trace' syscall stats summary. (Milian Wolff)

Infrastructure changes:

  - PMU helpers to use in Intel PT. (Adrian Hunter)

  - Fix perf-with-kcore script not to split args with spaces. (Adrian Hunter)

  - Add empty Build files for some more architectures. (Ben Hutchings)

  - Move 'perf stat' config variables to a struct to allow using some
    of its functions in more places. (Jiri Olsa)

  - Add DWARF register names for 'xtensa' arch. (Max Filippov)

  - Implement BPF programs attached to uprobes. (Wang Nan)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Adrian Hunter [Fri, 17 Jul 2015 16:33:51 +0000 (19:33 +0300)]
perf tools: Extend the event parser maximum error index

Extend the event parser maximum error index from 10 to 13.  That allows
PMU config terms of up to 10 characters to be displayed un-truncated in
the error message.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-17-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adrian Hunter [Fri, 17 Jul 2015 16:33:50 +0000 (19:33 +0300)]
perf tools: Validate config term maximum value

Currently the value of a PMU config term is silently truncated if it is
too big. This is an impediment to validating the value against other
criteria later on, i.e. the user provides an invalid value that gets
truncated to a valid one.

The maximum value validation is only done for the parser where the error
is passed back to the user. In other cases the silent truncation
continues so as not to affect tools that perhaps rely on it.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-16-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adrian Hunter [Fri, 17 Jul 2015 16:33:49 +0000 (19:33 +0300)]
perf tools: Add perf_pmu__format_bits()

Add perf_pmu__format_bits() to get the format bits for a PMU config
term.  Intel PT will use this to validate terms and to record format
bits to enable later interpreting the config from the attribute stored
in the perf.data file.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-15-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adrian Hunter [Fri, 17 Jul 2015 16:33:47 +0000 (19:33 +0300)]
perf tools: Fix perf-with-kcore handling of arguments containing spaces

Fix the perf-with-kcore script so that it doesn't split arguments that
contain spaces.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-13-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adrian Hunter [Fri, 17 Jul 2015 16:33:46 +0000 (19:33 +0300)]
perf auxtrace: Fix period type 'i' not working

PERF_ITRACE_PERIOD_INSTRUCTIONS is zero so it got overwritten by the
default period type.

Fix by checking if the period type was set rather than if the value was
zero when applying the default.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-12-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Max Filippov [Sat, 18 Jul 2015 08:30:11 +0000 (11:30 +0300)]
perf tools xtensa: Add DWARF register names

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Marc Gauthier <marc@cadence.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: linux-xtensa@linux-xtensa.org
Link: http://lkml.kernel.org/r/1437208216-15729-9-git-send-email-jcmvbkbc@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:53 +0000 (08:24 -0700)]
perf report: Display cycles in branch sort mode

Display the cycles by default in branch sort mode.

To make enough room for the new column I removed dso_to. It is usually
redundant with dso_from.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-9-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:52 +0000 (08:24 -0700)]
perf top: Add branch annotation code to top

Now that we can process branch data in annotate it makes sense to
support enabling branch recording from top too. Most of the code needed
for this is already in shared code with report. But we need to add:

- The option parsing code (using shared code from the previous patch)
- Document the options
- Set up the IPC/cycles accounting state in the top session
- Call the accounting code in the hist iter callback

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-8-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:51 +0000 (08:24 -0700)]
perf annotate: Finally display IPC and cycle accounting

Add two new columns to the annotate display and display the average
cycles and the computed IPC if available.

When the LBR was not in any branch mode the IPC computation is
automatically disabled. We still display the cycle information.

Example output (with made up numbers):

The second column is the IPC and third average cycles.

                 │    __attribute__((noinline)) f2()
                 │    {
  5.15  0.07     │       push   %rbp
  0.01  0.07     │       mov    %rsp,%rbp
                 │            c = a / b;
  9.87  0.07     │       mov    a,%eax
        0.07     │       mov    b,%ecx
        0.07     │       cltd
  4.92  0.07  123│       idiv   %ecx
 70.79  0.07     │       mov    %eax,__TMC_END__
                 │    }
  9.25  0.07     │       pop    %rbp
  0.01  0.07  123│     ← retq

v2: Fix display problems.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-7-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:50 +0000 (08:24 -0700)]
perf annotate: Compute IPC and basic block cycles

Compute the IPC and the basic block cycles for the annotate display.

IPC is computed by counting the instructions, and then dividing the
accounted cycles by that count.

The actual IPC computation can only be done at annotate time, because we
need to parse the objdump output first to know the number of
instructions in the basic block.

The cycles/IPC are also put into the perf function annotation so that
the display code can show them.

Again basic block overlaps are not handled, with the longest winning,
but there are some heuristics to hide the IPC when the longest is not
the most common.

v2: Compute IPC correctly.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-6-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:49 +0000 (08:24 -0700)]
perf report: Add processing for cycle histograms

Call the earlier added cycle histogram infrastructure from the perf
report hist iter callback. For this we walk the branch records.

This allows using cycle histograms when browsing annotations in perf report.

v2: Rename flag

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-5-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:48 +0000 (08:24 -0700)]
perf report: Add infrastructure for a cycles histogram

This adds the basic infrastructure to keep track of cycle counts per
basic block for annotate. We allocate an array similar to the normal
accounting, and then account branch cycles there.

We handle two cases:

cycles per basic block with start, and cycles per branch (these are
later used for either IPC or just cycles per BB).

In the start case we cannot handle overlaps, so always the longest basic
block wins.

For the cycles per branch case everything is accurately accounted.

v2: Remove unnecessary checks. Slight restructure. Move
symbol__get_annotation to another patch. Move histogram allocation.
v3: Merged with current tree

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-4-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:47 +0000 (08:24 -0700)]
perf report: Add flag for non ANY branch mode

Later patches need to cheaply check that the branch mode is in ANY.  Add
a new function to check all event attrs and add a flag to the report
state, which is then initialized.

v2: Rename flag

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Sat, 18 Jul 2015 15:24:46 +0000 (08:24 -0700)]
perf tools: Add support for cycles, weight branch_info field

cycles is a new branch_info field available on some CPUs that indicates
the time deltas between branches in the LBR.

Add a sort key and output code for the cycles to allow displaying the
basic block cycles individually in perf report.

We also pass in the cycles as the weight when LBRs are processed, which
allows getting global and local weight, to get an estimate of the total
cost.

And also print the cycles information for perf report -D.  I also added
printing for the previously missing LBR flags (mispredict etc.)

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-2-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Ben Hutchings [Tue, 4 Aug 2015 16:10:27 +0000 (17:10 +0100)]
perf tools: Add empty Build files for architectures lacking them

perf currently fails to build on MIPS as there is no
tools/perf/arch/mips/Build file.  Adding an empty file fixes this as
there are no MIPS-specific sources to build.

It looks like the same is needed for Alpha and PA-RISC, though I
haven't been able to test those.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Fixes: 5e8c0fb6a957 ("perf build: Add arch x86 objects building")
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1438704627.7315.2.camel@decadent.org.uk
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Tue, 21 Jul 2015 12:31:27 +0000 (14:31 +0200)]
perf stat: Move counter processing code into stat object

Moving counter processing code into stat object as
perf_stat__process_counter.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-8-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Tue, 21 Jul 2015 12:31:26 +0000 (14:31 +0200)]
perf stat: Pass 'struct perf_stat_config' into process_counter()

Passing 'struct perf_stat_config' into process_counter(), so that we can
make process_counter() non static and use it from other places.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-7-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Tue, 21 Jul 2015 12:31:25 +0000 (14:31 +0200)]
perf stat: Move 'interval' into struct perf_stat_config

Moving 'interval' into struct perf_stat_config. The point is to
centralize the base stat config so it can be used locally together with
other stat routines in other parts of the perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-6-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Tue, 21 Jul 2015 12:31:24 +0000 (14:31 +0200)]
perf stat: Move 'output' into struct perf_stat_config

Moving 'output' into struct perf_stat_config. The point is to centralize
the base stat config so it can be used locally together with other stat
routines in other parts of the perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-5-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Tue, 21 Jul 2015 12:31:23 +0000 (14:31 +0200)]
perf stat: Move 'scale' into struct perf_stat_config

Moving 'scale' into struct perf_stat_config. The point is to centralize
the base stat config so it can be used locally together with other stat
routines in other parts of the perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-4-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Tue, 21 Jul 2015 12:31:22 +0000 (14:31 +0200)]
perf stat: Introduce struct perf_stat_config

Moving 'aggr_mode' into new struct. The point is to centralize the base
stat config so it can be used locally together with other stat routines
in other parts of the perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Fri, 19 Jun 2015 08:42:48 +0000 (08:42 +0000)]
perf tools: Add missing forward declaration of struct map to probe-event.h

Commit 7b6ff0bdbf4f7f429c2116cca92a6d171217449e ("perf probe ppc64le:
Fixup function entry if using kallsyms lookup") adds a 'struct map'
usage to probe-event.h but does not forward declare it. This patch
fixes that.
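
A minimal sketch of what the fix amounts to (illustrative; the example
prototype is hypothetical, not the actual header contents):

  /* probe-event.h: a forward declaration is enough, since the header
   * only deals in 'struct map *' pointers -- no need to include map.h. */
  struct map;

  /* prototypes taking 'struct map *' can then follow, e.g.:
   * int example_helper(struct map *map, unsigned long *addr);
   */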

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Fixes: 7b6ff0bdbf4f ("perf probe ppc64le: Fixup function entry if using kallsyms lookup")
Link: http://lkml.kernel.org/n/1436445342-1402-30-git-send-email-wangnan0@huawei.com
[ No need to include map.h, just forward declare 'struct map' ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Fri, 31 Jul 2015 13:35:33 +0000 (10:35 -0300)]
perf tools: Introduce veprintf

A va_list alternative to eprintf().
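
A hedged sketch of what such a helper looks like (the name suffix and
exact signature here are assumptions):

  #include <stdarg.h>
  #include <stdio.h>

  /* Like eprintf(), but takes an already-started va_list so other
   * varargs helpers can forward their arguments to it. */
  static int veprintf_sketch(int level, int var, const char *fmt, va_list args)
  {
          if (var < level)
                  return 0;       /* below the verbosity threshold */
          return vfprintf(stderr, fmt, args);
  }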

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-19-git-send-email-wangnan0@huawei.com
[ split from another patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:50 +0000 (02:13 +0000)]
tracing, perf: Implement BPF programs attached to uprobes

By copying the BPF-related operations to the uprobe processing path,
this patch allows users to attach BPF programs to uprobes, just like
they already can for kprobes.

After this patch, users are allowed to use PERF_EVENT_IOC_SET_BPF on a
uprobe perf event, which makes it possible to profile user space
programs and kernel events together using BPF.

Because of this patch, CONFIG_BPF_EVENTS should be selected by
CONFIG_UPROBE_EVENT to ensure trace_call_bpf() is compiled even if
KPROBE_EVENT is not set.
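
A minimal usage sketch of that ioctl, assuming a perf event fd already
opened for a uprobe and a BPF program fd already loaded via the bpf()
syscall:

  #include <sys/ioctl.h>
  #include <linux/perf_event.h>

  /* Wire an already-loaded BPF program to a (u)probe perf event. */
  static int attach_bpf_to_uprobe(int perf_event_fd, int bpf_prog_fd)
  {
          if (ioctl(perf_event_fd, PERF_EVENT_IOC_SET_BPF, bpf_prog_fd) < 0)
                  return -1;
          return 0;
  }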

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-3-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Wed, 1 Jul 2015 02:13:49 +0000 (02:13 +0000)]
bpf: Use correct #ifdef controller for trace_call_bpf()

Commit e1abf2cc8d5d80b41c4419368ec743ccadbb131e ("bpf: Fix the build on
BPF_SYSCALL=y && !CONFIG_TRACING kernels, make it more configurable")
updated the building condition of bpf_trace.o from CONFIG_BPF_SYSCALL
to CONFIG_BPF_EVENTS, but the corresponding #ifdef controller in
trace_events.h for trace_call_bpf() was not changed, which, in theory,
is incorrect.

With current Kconfigs, we can create a .config with CONFIG_BPF_SYSCALL=y
and CONFIG_BPF_EVENTS=n by unselecting CONFIG_KPROBE_EVENT and
selecting CONFIG_BPF_SYSCALL. With these options, trace_call_bpf() will
be declared as an extern function, but if anyone calls it, a missing
symbol error will be triggered since bpf_trace.o is not built.

This patch changes the #ifdef controller for trace_call_bpf() from
CONFIG_BPF_SYSCALL to CONFIG_BPF_EVENTS. I'll show its correctness:

Before this patch:

   BPF_SYSCALL   BPF_EVENTS   trace_call_bpf   bpf_trace.o
   y             y           normal           compiled
   n             n           inline           not compiled
   y             n           normal           not compiled (incorrect)
   n             y          impossible (BPF_EVENTS depends on BPF_SYSCALL)

After this patch:

   BPF_SYSCALL   BPF_EVENTS   trace_call_bpf   bpf_trace.o
   y             y           normal           compiled
   n             n           inline           not compiled
   y             n           inline           not compiled (fixed)
   n             y          impossible (BPF_EVENTS depends on BPF_SYSCALL)

So this patch doesn't break anything. QED.
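
For illustration, the guard pattern described above looks roughly like
this (a sketch; the real declaration in trace_events.h may differ in
detail):

  #ifdef CONFIG_BPF_EVENTS
  unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx);
  #else
  static inline unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
  {
          return 1;       /* "run the handler" default when BPF is off */
  }
  #endif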

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-2-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Milian Wolff [Thu, 6 Aug 2015 09:24:29 +0000 (11:24 +0200)]
perf trace: Add total time column to summary.

It is cumbersome to manually calculate the total time spent in a given
syscall by multiplying the average value with the number of calls.

Instead, we now do this directly inside perf trace.

Note that this is also done by 'strace', which even adds a column with
relative numbers - something we could do in the future.

Example:

  perf trace -s find /some/folder > /dev/null

   Summary of events:

   find (19976), 700123 events, 100.0%, 0.000 msec

     syscall            calls    total       min       avg       max      stddev
                                 (msec)    (msec)    (msec)    (msec)        (%)
     --------------- -------- --------- --------- --------- ---------     ------
     read                   4     0.006     0.001     0.002     0.003     27.42%
     write               8046     9.617     0.001     0.001     0.035      0.56%
     open               34196    40.384     0.001     0.001     0.071      0.30%
     close              68375    57.104     0.001     0.001     0.076      0.25%
     stat                   4     0.004     0.001     0.001     0.001      3.14%
     fstat              34189    27.518     0.001     0.001     0.060      0.34%
     mmap                  13     0.029     0.001     0.002     0.003     10.74%
     mprotect               6     0.018     0.002     0.003     0.005     17.04%
     munmap                 3     0.014     0.003     0.005     0.006     24.87%
     brk                   87     0.490     0.001     0.006     0.016      6.50%
     ioctl                  3     0.004     0.001     0.001     0.003     36.39%
     access                 1     0.004     0.004     0.004     0.004      0.00%
     uname                  1     0.001     0.001     0.001     0.001      0.00%
     getdents           68393   143.600     0.001     0.002     0.187      0.95%
     fchdir             68371    56.980     0.001     0.001     0.111      0.39%
     arch_prctl             1     0.001     0.001     0.001     0.001      0.00%
     openat             34184    41.737     0.001     0.001     0.102      0.41%
     newfstatat         34184    41.180     0.001     0.001     0.064      0.34%

Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LPU-Reference: 1438853069-5902-1-git-send-email-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Ingo Molnar [Thu, 6 Aug 2015 06:51:18 +0000 (08:51 +0200)]
Merge tag 'perf-core-for-mingo' of git://git./linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

New features:

  - Deref sys_enter pointer args with contents from probe:vfs_getname, showing
    pathnames instead of pointers in many syscalls in 'perf trace'. (Arnaldo Carvalho de Melo)

  - Make 'perf trace' write to stderr by default, just like 'strace'. (Milian Wolff)

Infrastructure changes:

  - color_vfprintf() fixes. (Andi Kleen, Jiri Olsa)

  - Allow enabling/disabling PERF_SAMPLE_TIME per event. (Kan Liang)

  - Fix build errors with mipsel-linux-uclibc compiler. (Petri Gynther)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Petri Gynther [Wed, 5 Aug 2015 00:38:01 +0000 (17:38 -0700)]
perf tools: Fix build errors with mipsel-linux-uclibc compiler

linux/tools$ make ARCH=mips CROSS_COMPILE=mipsel-linux- perf
...
config/Makefile:256: *** No gnu/libc-version.h found, please install
glibc-dev[el].  Stop.
make[1]: *** [all] Error 2
make: *** [perf] Error 2

...
In file included from builtin-sched.c:13:0:
util/cloexec.h:8:12: error: redundant redeclaration of ‘sched_getcpu’
 [-Werror=redundant-decls]
 extern int sched_getcpu(void) __THROW;

mipsel-buildroot-linux-uclibc/sysroot/usr/include/bits/sched.h:88:12:
 note: previous declaration of ‘sched_getcpu’ was here
 extern int sched_getcpu (void) __THROW;

uclibc info:
sysroot/usr/include/bits/uClibc_config.h
__UCLIBC_MAJOR__ 0
__UCLIBC_MINOR__ 9
__UCLIBC_SUBLEVEL__ 33

sysroot/usr/include/features.h
__UCLIBC__ 1
__GLIBC__ 2
__GLIBC_MINOR__ 2

Signed-off-by: Petri Gynther <pgynther@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1438735081-24131-1-git-send-email-pgynther@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Milian Wolff [Wed, 5 Aug 2015 19:52:23 +0000 (16:52 -0300)]
perf trace: Write to stderr by default

Without this patch, it is cumbersome to read the trace output while
ignoring the normal, potentially verbose, output of the debuggee.  One
common example is doing something like the following:

 perf trace -s find /tmp > /dev/null

Without this patch, the trace summary will be lost. Now, it will still
be printed at the end. This matches the behavior of strace.

Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/n/tip-tqnks6y2cnvm5f9g2dsfr7zl@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Andi Kleen [Tue, 4 Aug 2015 00:50:02 +0000 (17:50 -0700)]
perf tools: Do not include escape sequences in color_vfprintf return

color_vprintf was including the length of the invisible escape sequences
in its return value. Don't include them, so that the return value is
usable for indentation calculations.

v2: Add comment, rebase

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1438649408-20807-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Tue, 4 Aug 2015 00:50:01 +0000 (17:50 -0700)]
perf tools: Remove trail argument to color vsprintf

Seems like it's always '\n' through color_fprintf_ln, which is not used
at all, removing.. ;-)

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1438649408-20807-2-git-send-email-andi@firstfloor.org
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Kan Liang [Tue, 4 Aug 2015 08:30:20 +0000 (04:30 -0400)]
perf tools: Refine parse/config callchain functions

Pass the global callchain_param into parse_callchain_record_opt and
perf_evsel__config_callgraph as a parameter, so we can reuse these
functions to parse/configure a local callchain param.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438677022-34296-3-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Kan Liang [Tue, 4 Aug 2015 08:30:19 +0000 (04:30 -0400)]
perf tools: Per-event time support

This patchkit adds the ability to turn off time stamps per event.

One useful case for partial time is to work with per-event callgraph to
enable "PEBS threshold > 1" (https://lkml.org/lkml/2015/5/10/196), which
can significantly reduce the sampling overhead.

The event samples with time stamps off will not be ordered.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438677022-34296-2-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Wed, 5 Aug 2015 02:31:25 +0000 (23:31 -0300)]
perf trace: Use vfs_getname syscall arg beautifier in more syscalls

Those were covered and tested in this cset:

 access, chdir, chmod, chown, chroot, creat, getxattr,
 inotify_add_watch, lchown, lgetxattr, listxattr,
 lsetxattr, mkdir, mkdirat, mknod, rmdir, faccessat,
 newfstatat, openat, readlink, readlinkat, removexattr,
 setxattr, statfs, swapon, swapoff, truncate, unlinkat,
 utime, utimes, utimensat.

E.g.:

  # trace -e statfs,access,mkdir mkdir /tmp/bla
   0.285 (0.020 ms): mkdir/2799 access(filename: /etc/ld.so.preload, mode: R         ) = -1 ENOENT No such file or directory
   1.070 (0.032 ms): mkdir/2799 statfs(pathname: /sys/fs/selinux, buf: 0x7ffeafbdc930) = 0
   1.087 (0.013 ms): mkdir/2799 statfs(pathname: /sys/fs/selinux, buf: 0x7ffeafbdc820) = 0
   1.189 (0.014 ms): mkdir/2799 access(filename: /etc/selinux/config                 ) = 0
   1.905 (0.610 ms): mkdir/2799 mkdir(pathname: /tmp/bla, mode: 511                  ) = 0
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-wbqtnlktquun3wtpjdz3okul@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Wed, 5 Aug 2015 01:30:09 +0000 (22:30 -0300)]
perf trace: Deref sys_enter pointer args with contents from probe:vfs_getname

To work like strace and dereference syscall pointer args we need to
insert probes (or tracepoints) right after we copy those bytes from
userspace.

Since we're formatting the syscall args at raw_syscalls:sys_enter time,
we need to have a formatter that just stores the position where, later,
when we get the probe:vfs_getname, we can insert the pointer contents.

Now, if a probe:vfs_getname with this format is in place:

 # perf probe -l
  probe:vfs_getname (on getname_flags:72@/home/git/linux/fs/namei.c with pathname)

That was, in this case, put in place with:

 # perf probe 'vfs_getname=getname_flags:72 pathname=filename:string'
 Added new event:
  probe:vfs_getname    (on getname_flags:72 with pathname=filename:string)

 You can now use it in all perf tools, such as:

perf record -e probe:vfs_getname -aR sleep 1
 #

Then 'perf trace' will notice that and do the pointer -> contents
expansion:

 # trace -e open touch /tmp/bla
  0.165 (0.010 ms): touch/17752 open(filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
  0.195 (0.011 ms): touch/17752 open(filename: /lib64/libc.so.6, flags: CLOEXEC) = 3
  0.512 (0.012 ms): touch/17752 open(filename: /usr/lib/locale/locale-archive, flags: CLOEXEC) = 3
  0.582 (0.012 ms): touch/17752 open(filename: /tmp/bla, flags: CREAT|NOCTTY|NONBLOCK|WRONLY, mode: 438) = 3
 #

Roughly equivalent to strace's output:

 # strace -rT -e open touch /tmp/bla
  0.000000 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 <0.000039>
  0.000317 open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 <0.000102>
  0.001461 open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3 <0.000072>
  0.000405 open("/tmp/bla", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3 <0.000055>
  0.000641 +++ exited with 0 +++
 #

Now we need to look at all syscalls that are marked as pointers and
have some well known names ("filename", "pathname", etc.) and set the
arg formatter to the one used for the "open" syscall in this patch.

This implementation works for syscalls with just one string being
copied from userspace. For matching syscalls with more than one string
being copied via the same probe/tracepoint (vfs_getname) we need to
extend the vfs_getname probe spec to include the pointer too, but there
are some problems with that in 'perf probe' or the kernel kprobes code;
we need to investigate before considering supporting multiple strings
per syscall.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xvuwx6nuj8cf389kf9s2ue2s@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Wed, 5 Aug 2015 01:17:29 +0000 (22:17 -0300)]
perf trace: Use a constant for the syscall formatting buffer

We were using it as a magic number, 1024, fix that.

Eventually we need to stop doing it per line, and do it per
arg, traversing the args at output time, to avoid the memmove()
calls that will be used in the next cset to replace pointers
present at raw_syscalls:sys_enter time with its contents that
appear at probe:vfs_getname time, before raw_syscalls:sys_exit
time.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-4sz3wid39egay1pp8qmbur4u@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Tue, 4 Aug 2015 20:01:04 +0000 (17:01 -0300)]
perf trace: Remember if the vfs_getname tracepoint/kprobe is in place

So that we can later decide if we will store where to expand the
pathname once we are handling vfs_getname or if we should instead
just go on and straight away print the pointer.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ytxk5s5jpc50wahffmlxgxuw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Mon, 3 Aug 2015 20:12:29 +0000 (17:12 -0300)]
perf trace: Do not show syscall tracepoint filter in the --no-syscalls case

We were accessing trace->syscalls.events members even when that struct
wasn't initialized, i.e. when --no-syscalls was specified on the
command line. Fix it to show the filter, still only in debug mode, when
we have an event qualifier list, i.e. when we actually are doing subset
syscall tracing.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Fixes: 19867b6186f3 ("perf trace: Use event filters for the event qualifier list")
Link: http://lkml.kernel.org/n/tip-7980ym6vujgh3yiai0cqzc88@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Mon, 3 Aug 2015 19:27:40 +0000 (16:27 -0300)]
perf script: No tracepoints? Don't call libtraceevent.

The libtraceevent handler (session->tevent) is only initialized when
there are tracepoints in a perf.data event list, so do not call
pevent_set_function_resolve() in those cases, fixing a segfault.

Reported-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xyynkucl5p4bcs13zi4i4b1f@git.kernel.org
link: http://lkml.kernel.org/r/20150803174113.GA20282@krava.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Peter Zijlstra [Wed, 15 Jul 2015 12:35:46 +0000 (14:35 +0200)]
perf/x86/intel/pebs: Robustify PEBS buffer drain

Vince Weaver and Stephane Eranian reported warnings in the PEBS
code when running the perf fuzzer. Stephane wrote:

  > I can reproduce the problem on my HSW running the fuzzer.
  >
  > I can see why this could be happening if you are mixing PEBS and non PEBS events
  > in the bottom 4 counters. I suspect:
  >         for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) {
  >                 if ((counts[bit] == 0) && (error[bit] == 0))
  >                         continue;
  >
  > This test is not correct when you have non-PEBS events mixed with
  > PEBS events and they overflow at the same time. They will have
  > counts[i] != 0 but error[i] == 0, and thus you fall thru the loop
  > and hit the assert. Or it is something along those lines.

The only way I can make this work is if ->status only has !PEBS events
set, because if it has both set we'll take that slow path which masks
out the !PEBS bits.

After masking there are 3 options:

 - there is one bit set, and its @bit, we increment counts[bit].

 - there are multiple bits set, we increment error[] for each set bit,
   we do not increment counts[].

 - there are no bits set, we do nothing.

The intent was to never increment counts[] for !PEBS events.

Now if we start out with only a single !PEBS event set, we'll pass the
test and increment counts[] for a !PEBS and hit the warn.

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Liang, Kan [Fri, 3 Jul 2015 20:08:27 +0000 (20:08 +0000)]
perf/x86/intel/pebs: Fix event disable PEBS buffer drain

When disabling a PEBS event, we need to drain the buffer. Doing so
requires a correct cpuc->pebs_active mask.

The current code clears the pebs_active bit before draining the
buffer. Fix that.

Signed-off-by: "Liang, Kan" <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver<vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/37D7C6CF3E00A74B8858931C1DB2F07701885A65@SHSMSX103.ccr.corp.intel.com
[ Fixed the SOB. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andy Lutomirski [Mon, 20 Jul 2015 15:49:06 +0000 (11:49 -0400)]
perf/x86: Add an MSR PMU driver

This patch adds an MSR PMU to support free running MSR counters, such
as the time and freq related counters TSC, IA32_APERF, IA32_MPERF and
IA32_PPERF, but also SMI_COUNT.

The events are exposed in sysfs for use by perf stat and other tools.
The files are under /sys/devices/msr/events/

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kan Liang <kan.liang@intel.com>
[ s/freq/msr/, added SMI_COUNT, fixed bugs. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: adrian.hunter@intel.com
Cc: dsahern@gmail.com
Cc: eranian@google.com
Cc: jolsa@kernel.org
Cc: mark.rutland@arm.com
Cc: namhyung@kernel.org
Link: http://lkml.kernel.org/r/1437407346-31186-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Kan Liang [Thu, 2 Jul 2015 12:12:52 +0000 (08:12 -0400)]
perf/x86/intel/uncore: Add Broadwell-DE uncore support

The uncore subsystem for Broadwell-DE is similar to Haswell-EP.  There
are some differences in pci device IDs, box number and constraints.

Please refer to the public document:

  http://www.intel.com/content/www/us/en/processors/xeon/xeon-d-1500-uncore-performance-monitoring.html

Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1435839172-15114-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Tue, 30 Jun 2015 23:33:24 +0000 (16:33 -0700)]
perf/x86/intel: Use 0x11 as extra reg test value

The next patch adds a new perf extra register where 0x1ff is not a valid
value. Use 0x11 instead.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1435707205-6676-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Mon, 29 Jun 2015 21:22:13 +0000 (14:22 -0700)]
perf/x86: Make merge_attr() global to use from perf_event_intel

merge_attr() allows merging two sysfs attribute tables.
Export it so it can be used by other files too.

Next patch is going to use that to extend the sysfs format
attributes for a CPU.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1435612935-24425-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Thu, 28 May 2015 04:13:18 +0000 (21:13 -0700)]
perf/x86/intel/lbr: Limit LBR accesses to TOS in callstack mode

In callstack mode the LBR is not a ring buffer, but a stack that grows up
and down. This means in  this case we don't need to access all LBRs, only the
ones up to TOS. Do this optimization for the normal LBR read, and the context
switch save/restore code. For save/restore it can be done unconditionally, as
it only runs when call stack mode is active.

This recovers some of the cost of going to 32 LBRs on Skylake.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1432786398-23861-6-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andi Kleen [Thu, 28 May 2015 04:13:17 +0000 (21:13 -0700)]
perf/x86/intel/lbr: Use correct index to save/restore LBR_INFO with call stack

Use the correct index to save/restore the LBR_INFO_x MSR in
callstack mode. This is more of a cleanup, as even with the wrong
index the register was correctly saved/restored, and also
LBR callgraph mode in perf tools does not really need anything in
LBR_INFO. But it is still better to use the right index.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1432786398-23861-5-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel: Add Intel Skylake PMU support
Andi Kleen [Sun, 10 May 2015 19:22:44 +0000 (12:22 -0700)]
perf/x86/intel: Add Intel Skylake PMU support

Add perf core PMU support for future Intel Skylake CPU cores.

The code is based on Haswell/Broadwell.

There is a new cache event list, based on the updated Haswell
event list.

Skylake has removed most counter constraints on basic
events, so the basic constraints table now only has a single
entry (plus the fixed counters).

TSX support and various other setups are all shared with Haswell.

Skylake has 32 LBR entries. Add a new LBR init function
to set this up. The filters are all the same as Haswell.

It also has a new LBR format with separate LBR_INFO_* MSRs,
but that has already been added earlier.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-7-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/lbr: Optimize v4 LBR unfreezing
Andi Kleen [Sun, 10 May 2015 19:22:46 +0000 (12:22 -0700)]
perf/x86/intel/lbr: Optimize v4 LBR unfreezing

In Arch Perfmon v4 the GLOBAL_STATUS reset automatically unfreezes
the LBRs, so there is no need to do it manually in the LBR code. Add a check
to skip it.

v2: Move the test up to the beginning of the function.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-9-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel: Move PMU ACK to after LBR read
Andi Kleen [Sun, 10 May 2015 19:22:47 +0000 (12:22 -0700)]
perf/x86/intel: Move PMU ACK to after LBR read

With Arch Perfmon v4 the PMU ack unfreezes the LBRs. So we need to do
the PMU ack after the LBR reading, otherwise the LBRs would be polluted
by the PMI handler.

This is a minimal change. In principle the ACK could be moved much later.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-10-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel: Handle new arch perfmon v4 status bits
Andi Kleen [Sun, 10 May 2015 19:22:45 +0000 (12:22 -0700)]
perf/x86/intel: Handle new arch perfmon v4 status bits

Arch Perfmon v4 has some new status bits in GLOBAL_STATUS.

These need to be ignored when deciding whether an NMI came from the
PMU, to avoid eating all NMIs when they stay set, see:

    b292d7a10487 ("perf/x86/intel: ignore CondChgd bit to avoid false NMI handling")

This patch ignores the new ASIF bit, which indicates
that SGX interfered with the PMU, the new
LBR freezing bits, which are set when the LBRs get
frozen, and the existing CondChgd bit (set by JTAG
debuggers and some buggy BIOSes).
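
For illustration, the "ignore the informational bits" idea boils down to
masking them off before testing GLOBAL_STATUS for real overflows. The bit
positions below are placeholders, not the architectural definitions.

#include <stdbool.h>
#include <stdint.h>

/* Placeholder bit positions, purely for illustration. */
#define STATUS_COND_CHGD_SKETCH  (1ULL << 63)
#define STATUS_ASIF_SKETCH       (1ULL << 60)
#define STATUS_LBR_FRZ_SKETCH    (1ULL << 58)

/* Decide whether a GLOBAL_STATUS value still signals a real counter
 * overflow once the informational bits are masked off. */
static bool status_has_real_overflow(uint64_t status)
{
        status &= ~(STATUS_COND_CHGD_SKETCH |
                    STATUS_ASIF_SKETCH |
                    STATUS_LBR_FRZ_SKETCH);
        return status != 0;
}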

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-8-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/lbr: Add support for LBRv5
Andi Kleen [Sun, 10 May 2015 19:22:43 +0000 (12:22 -0700)]
perf/x86/intel/lbr: Add support for LBRv5

Add support for the new LBRv5 format used on Intel Skylake CPUs.

The flags for mispredict, abort, in_tx etc. moved to a range of separate
LBR_INFO_* MSRs. Teach the LBR code to read those. The original
LBR registers stay the same, except that they now have full sign
extension.

LBR_INFO also reports a cycle count to the last branch.
Report the cycle information using the new "cycles" branch_info
output field.

In addition we have to context switch and clear the new INFO
MSRs to avoid any information leaks.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-6-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf: Add cycles to branch_info
Andi Kleen [Sun, 10 May 2015 19:22:42 +0000 (12:22 -0700)]
perf: Add cycles to branch_info

Intel Skylake supports reporting the time in cycles a branch in the LBR
took, to give a rough indication of the basic block performance.

Export the cycle information in the branch_info structure.
This can be done by just reusing some currently zero padding.
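
A minimal sketch of the "reuse the zero padding" idea (field widths modeled
on perf_branch_entry, but treat this as illustrative rather than as the ABI
definition): the 16-bit cycles field is carved out of formerly reserved
bits, so the flags word stays 64 bits and existing fields do not move.

#include <stdio.h>
#include <stdint.h>

struct branch_flags_sketch {
        uint64_t mispred:1,
                 predicted:1,
                 in_tx:1,
                 abort:1,
                 cycles:16,     /* new: cycles since last branch, 0 if unknown */
                 reserved:44;   /* shrunk from 60 to make room */
};

int main(void)
{
        /* Still a single 64-bit word. */
        printf("flags word is %zu bytes\n", sizeof(struct branch_flags_sketch));
        return 0;
}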

This is just the generic header change. The architecture
still needs to fill it in.

There's no attempt to convert to real time, as we really
want cycles here.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-5-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agox86: Add new MSRs and MSR bits used for Intel Skylake PMU support
Andi Kleen [Sun, 10 May 2015 19:22:41 +0000 (12:22 -0700)]
x86: Add new MSRs and MSR bits used for Intel Skylake PMU support

Add new MSRs (LBR_INFO) and some new MSR bits used by the Intel Skylake
PMU driver.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-4-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/lbr: Allow time stamp for free running PEBSv3
Andi Kleen [Thu, 28 May 2015 04:13:14 +0000 (21:13 -0700)]
perf/x86/intel/lbr: Allow time stamp for free running PEBSv3

With PEBSv3 the PEBS record contains a time stamp. That means we can allow
free-running PEBS without a PMI even if the user program requested a time stamp.
This avoids the need to use -T to get free running PEBS, and also avoids
any problems with mis-identifying MMAPs later.

Move the free_running_flags state into a variable in x86_pmu and use it.
This only works when no explicit clock_id is set.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1432786398-23861-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel: Add support for PEBSv3 profiling
Andi Kleen [Sun, 10 May 2015 19:22:40 +0000 (12:22 -0700)]
perf/x86/intel: Add support for PEBSv3 profiling

PEBSv3 is the same as the existing PEBSv2 used on Haswell,
but it adds a new TSC field. Add support to the generic
PEBS handler to handle the new format, and overwrite
the perf time stamp using the new native_sched_clock_from_tsc().

Right now the time stamp is just slightly more accurate,
as it is nearer the actual event trigger point. With
the PEBS threshold > 1 patchkit it will be much more accurate,
avoiding the problems with MMAP mismatches mentioned earlier.
The accurate time stamping is only implemented for
the default trace clock for now.

v2: Use _skl prefix. Check for default clock_id.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86: Add a native_perf_sched_clock_from_tsc()
Andi Kleen [Sun, 10 May 2015 19:22:39 +0000 (12:22 -0700)]
perf/x86: Add a native_perf_sched_clock_from_tsc()

PEBSv3 has a raw TSC time stamp in its memory buffer that
later needs to be converted to perf_clock.

Add a native_sched_clock_from_tsc() that works the same
as native_sched_clock(), but starts with an already given
TSC value.
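
For illustration, the conversion such a helper performs is the usual
"cycles * mult >> shift" scaling; the sketch below is userspace-only and
its constants assume a hypothetical 2.0 GHz TSC, purely as an example.

#include <stdio.h>
#include <stdint.h>

/* Convert a raw TSC value to nanoseconds with a precomputed mult/shift
 * pair (uses the GCC/Clang __uint128_t extension for the multiply). */
static uint64_t tsc_to_ns_sketch(uint64_t tsc)
{
        const uint32_t mult  = 524288;  /* 0.5 ns/cycle * 2^20, i.e. 2.0 GHz */
        const uint32_t shift = 20;

        return (uint64_t)(((__uint128_t)tsc * mult) >> shift);
}

int main(void)
{
        /* 2,000,000,000 cycles at 2.0 GHz should be about one second. */
        printf("%llu ns\n", (unsigned long long)tsc_to_ns_sketch(2000000000ULL));
        return 0;
}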

Paravirt is ignored; it will just get the native clock.
But there isn't a paravirtualized PEBS anyway.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/pt: Add new timing packet enables
Alexander Shishkin [Thu, 30 Jul 2015 13:15:31 +0000 (16:15 +0300)]
perf/x86/intel/pt: Add new timing packet enables

The Intel PT chapter in the new Intel Architecture SDM adds several packets,
corresponding enable bits and registers that control packet generation.
Also, additional bits in the Intel PT CPUID leaf were added to enumerate the
presence and parameters of these new packets and features.

The packets and enables are:

  * CYC: cycle accurate mode, provides the number of cycles elapsed since
    previous CYC packet; its presence and available threshold values are
    enumerated via CPUID;

  * MTC: mini time counter packets, used for tracking TSC time between
    full TSC packets; its presence and available resolution options are
    enumerated via CPUID;

  * PSB packet period is now configurable, available period values are
    enumerated via CPUID.

This patch adds the corresponding bit and register definitions, pmu driver
capabilities based on CPUID enumeration, new attribute format bits for
the new features, and extends the event configuration validation function
to take these into account.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1438262131-12725-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/pt: Do not force sync packets on every schedule-in
Alexander Shishkin [Thu, 30 Jul 2015 13:48:24 +0000 (16:48 +0300)]
perf/x86/intel/pt: Do not force sync packets on every schedule-in

Currently, the PT driver zeroes out the status register every time before
starting the event. However, all the writable bits are already taken care
of in pt_handle_status() function, except the new PacketByteCnt field,
which in new versions of PT contains the number of packet bytes written
since the last sync (PSB) packet. Zeroing it out before enabling PT forces
a sync packet to be written. This means that, with the existing code, a
sync packet (PSB and PSBEND, 18 bytes in total) will be generated every
time a PT event is scheduled in.

To avoid these unnecessary syncs and save a WRMSR in the fast path, this
patch changes the default behavior to not clear PacketByteCnt field, so
that the sync packets will be generated with the period specified as
"psb_period" attribute config field. This has little impact on the trace
data as the other packets that are normally sent within PSB+ (between PSB
and PSBEND) have their own generation scenarios which do not depend on the
sync packets.

One exception is when tracing starts: there we do need to force a PSB like
this, so that the decoder has a clear sync point in the trace. For this purpose
we already have the hw::itrace_started flag, which we are currently using to
output PERF_RECORD_ITRACE_START. This patch moves the setting of itrace_started
from the perf core to pmu::start, where it should still be 0 on the very
first run.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1438264104-16189-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/hw_breakpoints: Fix check for kernel-space breakpoints
Andy Lutomirski [Fri, 31 Jul 2015 03:32:42 +0000 (20:32 -0700)]
perf/x86/hw_breakpoints: Fix check for kernel-space breakpoints

The check looked wrong, although I think it was actually safe.  TASK_SIZE
is unnecessarily small for compat tasks, and it wasn't possible to make
a range breakpoint so large it started in user space and ended in kernel
space.

Nonetheless, let's fix up the check for the benefit of future
readers.  A breakpoint is in the kernel if either end is in the
kernel.
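
A minimal sketch of that rule (placeholder boundary constant, not the
kernel's TASK_SIZE handling): the range is treated as a kernel breakpoint
if either its first or its last byte falls at or above the boundary.

#include <stdbool.h>
#include <stdint.h>

#define USER_KERNEL_BOUNDARY_SKETCH 0x00007fffffffffffULL  /* placeholder */

/* Assumes len >= 1 and that addr + len does not wrap around. */
static bool bp_touches_kernel(uint64_t addr, uint64_t len)
{
        return addr >= USER_KERNEL_BOUNDARY_SKETCH ||
               addr + len - 1 >= USER_KERNEL_BOUNDARY_SKETCH;
}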

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/136be387950e78f18cea60e9d1bef74465d0ee8f.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/hw_breakpoints: Improve range breakpoint validation
Andy Lutomirski [Fri, 31 Jul 2015 03:32:41 +0000 (20:32 -0700)]
perf/x86/hw_breakpoints: Improve range breakpoint validation

Range breakpoints will do the wrong thing if the address isn't
aligned.  While we're there, add comments about why it's safe for
instruction breakpoints.
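
As a sketch of the alignment rule being enforced (illustrative only, not
the driver's code): x86 data breakpoints of length 2, 4 or 8 bytes only
match when the address is aligned to that length, so validation should
reject misaligned ranges.

#include <stdbool.h>
#include <stdint.h>

static bool bp_range_ok(uint64_t addr, uint64_t len)
{
        if (len != 1 && len != 2 && len != 4 && len != 8)
                return false;            /* unsupported range length */
        return (addr & (len - 1)) == 0;  /* must be aligned to its length */
}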

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ae25d14d61f2f43b78e0a247e469f3072df7e201.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/hw_breakpoints: Disallow kernel breakpoints unless kprobe-safe
Andy Lutomirski [Fri, 31 Jul 2015 03:32:40 +0000 (20:32 -0700)]
perf/x86/hw_breakpoints: Disallow kernel breakpoints unless kprobe-safe

Code on the kprobe blacklist doesn't want unexpected int3
exceptions. It probably doesn't want unexpected debug exceptions
either. Be safe: disallow breakpoints in nokprobes code.

On non-CONFIG_KPROBES kernels, there is no kprobe blacklist.  In
that case, disallow kernel breakpoints entirely.

It will be particularly important to keep hw breakpoints out of the
entry and NMI code once we move debug exceptions off the IST stack.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/e14b152af99640448d895e3c2a8c2d5ee19a1325.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel: Fix SLM MSR_OFFCORE_RSP1 valid_mask
Kan Liang [Wed, 24 Jun 2015 18:23:35 +0000 (11:23 -0700)]
perf/x86/intel: Fix SLM MSR_OFFCORE_RSP1 valid_mask

AVG_LATENCY (bit 38) is only available on MSR_OFFCORE_RSP0,
so the bit should be removed from the RSP1 valid_mask.

Since RSP0 and RSP1 may have different valid_masks, intel_alt_er should
validate the config against the alternate offcore reg before replacing it.
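
A minimal sketch of the two pieces of the fix (placeholder masks and
values, not the driver's real constants): clear bit 38 from the second
register's mask, and check a config against the alternate mask before
switching to that register.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Placeholder masks, purely for illustration. */
        uint64_t rsp0_valid_mask = (1ULL << 39) - 1;             /* bits 0..38 */
        uint64_t rsp1_valid_mask = rsp0_valid_mask & ~(1ULL << 38);
        uint64_t config = 1ULL << 38;                            /* AVG_LATENCY */

        /* Validate the config against the alternate register's mask
         * before replacing the register, as described above. */
        if (config & ~rsp1_valid_mask)
                printf("config 0x%llx not valid on RSP1, keep RSP0\n",
                       (unsigned long long)config);
        return 0;
}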

Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1435170215-5017-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/lbr: Kill off intel_pmu_needs_lbr_smpl for good
Alexander Shishkin [Wed, 24 Jun 2015 10:05:49 +0000 (13:05 +0300)]
perf/x86/intel/lbr: Kill off intel_pmu_needs_lbr_smpl for good

The x86_lbr_exclusive commit (4807034248be "perf/x86: Mark Intel PT and
LBR/BTS as mutually exclusive") mistakenly moved intel_pmu_needs_lbr_smpl()
to perf_event.h, while another commit (a46a2300019 "perf: Simplify the
branch stack check") removed it in favor of needs_branch_stack().

This patch gets rid of intel_pmu_needs_lbr_smpl() for good.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1435140349-32588-3-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/bts: Drop redundant declarations
Alexander Shishkin [Wed, 24 Jun 2015 10:05:48 +0000 (13:05 +0300)]
perf/x86/intel/bts: Drop redundant declarations

Both intel_pmu_enable_bts() and intel_pmu_disable_bts() are already declared
in the perf_event.h header file, so there is no need to declare them again
in the driver.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1435140349-32588-2-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/uncore: Use Sandy Bridge client PMU on Haswell/Broadwell
Andi Kleen [Mon, 15 Jun 2015 05:57:41 +0000 (22:57 -0700)]
perf/x86/intel/uncore: Use Sandy Bridge client PMU on Haswell/Broadwell

Haswell and Broadwell have the same uncore CBOX/ARB PMU as Sandy Bridge.
Add the respective model numbers to enable the SNB uncore PMU.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1434347862-28490-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/uncore: Add support for ARB uncore PMU on Sandy/IvyBridge
Andi Kleen [Mon, 15 Jun 2015 05:57:40 +0000 (22:57 -0700)]
perf/x86/intel/uncore: Add support for ARB uncore PMU on Sandy/IvyBridge

Add a new "ARB" uncore PMU that is used to monitor the uncore queue
arbiter. This is useful to measure uncore queue occupancy and similar
statistics. The registers all have the same format as the
existing CBOX PMU.

Also move the event constraints from the CBOX to ARB. The 0x80+
events are ARB events and cannot be scheduled on a CBOX PMU.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1434347862-28490-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/uncore: Remove use of macro DEFINE_PCI_DEVICE_TABLE()
Vaishali Thakkar [Fri, 17 Jul 2015 05:27:59 +0000 (10:57 +0530)]
perf/x86/intel/uncore: Remove use of macro DEFINE_PCI_DEVICE_TABLE()

The DEFINE_PCI_DEVICE_TABLE() macro is deprecated. Use
'struct pci_device_id' instead of DEFINE_PCI_DEVICE_TABLE(),
with the goal of getting rid of this macro completely.

This Coccinelle semantic patch performs this transformation:

@@
identifier a;
declarer name DEFINE_PCI_DEVICE_TABLE;
initializer i;
@@
- DEFINE_PCI_DEVICE_TABLE(a)
+ const struct pci_device_id a[] = i;

Signed-off-by: Vaishali Thakkar <vthakkar1994@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150717052759.GA6265@vaishali-Ideapad-Z570
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf/x86/intel/rapl: Add support for Knights Landing (KNL)
Dasaratharaman Chandramouli [Tue, 26 May 2015 18:47:39 +0000 (11:47 -0700)]
perf/x86/intel/rapl: Add support for Knights Landing (KNL)

Knights Landing RAPL supports the PKG and DRAM RAPL domains.
DRAM RAPL has a different fixed energy unit (2^-16 J), similar to
that of HSW.

Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Stephane Eranian <eranian@google.com>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jacob Pan Jun <jacob.jun.pan@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nikhil Rao <nikhil.rao@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/aa63b4a3af3160152fea1a10c807f4200527280c.1432665809.git.dasaratharaman.chandramouli@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Fix the waitqueue_active() check in xol_free_insn_slot()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:36 +0000 (15:40 +0200)]
uprobes: Fix the waitqueue_active() check in xol_free_insn_slot()

The xol_free_insn_slot()->waitqueue_active() check is buggy. We
need mb() after we set the condition for wait_event(), or
xol_take_insn_slot() can miss the wakeup.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134036.GA4799@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Use vm_special_mapping to name the XOL vma
Oleg Nesterov [Tue, 21 Jul 2015 13:40:33 +0000 (15:40 +0200)]
uprobes: Use vm_special_mapping to name the XOL vma

Change xol_add_vma() to use _install_special_mapping(); this way
we can name the vma installed by uprobes. Currently it looks
like a private anonymous mapping, which is confusing and
complicates debugging. With this change /proc/$pid/maps
reports "[uprobes]".

As a side effect this will cause core dumps to include the XOL vma,
and I think this is good; it can help to debug the problem if
the app crashed because it was probed.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134033.GA4796@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Fix the usage of install_special_mapping()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:31 +0000 (15:40 +0200)]
uprobes: Fix the usage of install_special_mapping()

install_special_mapping(pages) expects "pages" to be a zero-
terminated array, while xol_add_vma() passes &area->page; this
means that special_mapping_fault() can wrongly use the next
member in xol_area (vaddr) as a "struct page *".

Fortunately, this area is not expandable, so pgoff != 0 isn't
possible (modulo bugs in special_mapping_vmops), but this still
does not look good.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134031.GA4789@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes/x86: Make arch_uretprobe_is_alive(RP_CHECK_CALL) more clever
Oleg Nesterov [Tue, 21 Jul 2015 13:40:28 +0000 (15:40 +0200)]
uprobes/x86: Make arch_uretprobe_is_alive(RP_CHECK_CALL) more clever

The previous change documents that cleanup_return_instances()
can't always detect the dead frames, since the stack can grow. But
there is one special case which is imho worth fixing:
arch_uretprobe_is_alive() can return true when the stack didn't
actually grow, but the next "call" insn uses the already
invalidated frame.

Test-case:

#include <stdio.h>
#include <setjmp.h>

jmp_buf jmp;
int nr = 1024;

void func_2(void)
{
        if (--nr == 0)
                return;
        longjmp(jmp, 1);
}

void func_1(void)
{
        setjmp(jmp);
        func_2();
}

int main(void)
{
        func_1();
        return 0;
}

If you ret-probe func_1() and func_2(), prepare_uretprobe() hits
the MAX_URETPROBE_DEPTH limit and the "return" from func_2() is not
reported.

When we know that the new call is not chained, we can do a
stricter check. In this case "sp" points to the new ret-addr,
so every frame which uses the same "sp" must be dead. The only
complication is that arch_uretprobe_is_alive() needs to know whether
it was chained or not, so we add the new RP_CHECK_CHAIN_CALL enum
and change prepare_uretprobe() to pass RP_CHECK_CALL only if
!chained.

Note: arch_uretprobe_is_alive() could also re-read *sp and check
whether this word is still trampoline_vaddr. This could obviously
improve the logic, but I would like to avoid another
copy_from_user(), especially in the case when we can't avoid the
false "alive == T" positives.

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134028.GA4786@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Add the "enum rp_check ctx" arg to arch_uretprobe_is_alive()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:26 +0000 (15:40 +0200)]
uprobes: Add the "enum rp_check ctx" arg to arch_uretprobe_is_alive()

arch/x86 doesn't care (so far), but as Pratyush Anand pointed
out, other architectures might want to know why arch_uretprobe_is_alive()
was called and use different checks depending on the context.
Add the new argument to distinguish the two callers.

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134026.GA4779@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Change prepare_uretprobe() to (try to) flush the dead frames
Oleg Nesterov [Tue, 21 Jul 2015 13:40:23 +0000 (15:40 +0200)]
uprobes: Change prepare_uretprobe() to (try to) flush the dead frames

Change prepare_uretprobe() to flush the !arch_uretprobe_is_alive()
return_instances. This is not needed correctness-wise, but it can help
to avoid the failure caused by MAX_URETPROBE_DEPTH.

Note: in this case arch_uretprobe_is_alive() can give a false
positive, since the stack can grow after longjmp(). Unfortunately, the
kernel can't 100% solve this problem, but see the next patch.

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134023.GA4776@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Change handle_trampoline() to flush the frames invalidated by longjmp()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:21 +0000 (15:40 +0200)]
uprobes: Change handle_trampoline() to flush the frames invalidated by longjmp()

Test-case:

#include <stdio.h>
#include <setjmp.h>

jmp_buf jmp;

void func_2(void)
{
        longjmp(jmp, 1);
}

void func_1(void)
{
        if (setjmp(jmp))
                return;
        func_2();
        printf("ERR!! I am running on the caller's stack\n");
}

int main(void)
{
        func_1();
        return 0;
}

fails if you probe func_1() and func_2(), because
handle_trampoline() assumes that the probed function must
return and hit the bp installed by prepare_uretprobe(). But in
this case func_2() does not return, so when func_1() returns the
kernel uses the no longer valid return_instance of func_2().

Change handle_trampoline() to unwind ->return_instances until we
know that the next chain is alive or NULL; this ensures that the
current chain is the last we need to report and free.

Alternatively, every return_instance could use a unique
trampoline_vaddr; in this case we could use it as a key. And
this could solve the problem with sigaltstack() automatically.

But this approach needs more changes, and it puts a "hard"
limit on MAX_URETPROBE_DEPTH. Plus it cannot solve another
problem partially fixed by the next patch.

Note: this change has no effect on !x86; the arch-agnostic
version of arch_uretprobe_is_alive() just returns "true".

TODO: as documented by the previous change, arch_uretprobe_is_alive()
      can be fooled by sigaltstack/etc.

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134021.GA4773@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes/x86: Reimplement arch_uretprobe_is_alive()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:18 +0000 (15:40 +0200)]
uprobes/x86: Reimplement arch_uretprobe_is_alive()

Add the x86-specific version of the arch_uretprobe_is_alive()
helper. It returns true if the stack frame mangled by
prepare_uretprobe() is still on the stack. So if it returns false,
we know that the probed function has already returned.

We add the new return_instance->stack member and change the
generic code to initialize it in prepare_uretprobe, but it
should be equally useful for other architectures.

TODO: this assumes that the probed application can't use
      multiple stacks (say sigaltstack). We will try to improve
      this logic later.
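
As a minimal sketch of the liveness idea (not the kernel's code; names are
placeholders): on x86 the stack grows down, so the frame whose mangled
return address was recorded at some stack address is gone once the current
stack pointer has moved above that address.

#include <stdbool.h>
#include <stdint.h>

/* The recorded address is where prepare_uretprobe() stored the mangled
 * return address; the frame is considered alive while the current stack
 * pointer has not risen above it (the stack grows down on x86). */
static bool frame_still_on_stack(uint64_t recorded_stack, uint64_t current_sp)
{
        return current_sp <= recorded_stack;
}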

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134018.GA4766@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Export 'struct return_instance', introduce arch_uretprobe_is_alive()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:16 +0000 (15:40 +0200)]
uprobes: Export 'struct return_instance', introduce arch_uretprobe_is_alive()

Add the new "weak" helper, arch_uretprobe_is_alive(), used by
the next patches. It should return true if this return_instance
is still valid. The arch agnostic version just always returns
true.

The patch exports "struct return_instance" for the architectures
which want to override this hook. We can also cleanup
prepare_uretprobe() if we pass the new return_instance to
arch_uretprobe_hijack_return_addr().

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134016.GA4762@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Change handle_trampoline() to find the next chain beforehand
Oleg Nesterov [Tue, 21 Jul 2015 13:40:13 +0000 (15:40 +0200)]
uprobes: Change handle_trampoline() to find the next chain beforehand

No functional changes, preparation.

Add the new helper, find_next_ret_chain(), which finds the first
!chained entry and returns its ->next. Yes, it is suboptimal. We
probably want to turn ->chained into a ->start_of_this_chain
pointer and avoid another loop. But this needs the boring
changes in dup_utask(), so let's do this later.

Change the main loop in handle_trampoline() to unwind the stack
until ri is equal to the pointer returned by this new helper.
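
A toy sketch of what "find the first !chained entry and return its ->next"
means, using a simplified stand-in for the return_instance list (not the
kernel structures):

#include <stdbool.h>

/* Simplified stand-in for the uprobes return_instance list. */
struct ri_sketch {
        bool chained;              /* true for the inner entries of a chain */
        struct ri_sketch *next;
};

/* Skip the chained entries of the current chain; the first !chained
 * entry closes the chain, and its ->next is where the next chain (or
 * NULL) begins.  Assumes every chain ends with a !chained entry. */
static struct ri_sketch *find_next_chain_sketch(struct ri_sketch *ri)
{
        while (ri->chained)
                ri = ri->next;
        return ri->next;
}

The main loop can then unwind and free entries until it reaches the pointer
this helper returned.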

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134013.GA4755@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Change prepare_uretprobe() to use uprobe_warn()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:10 +0000 (15:40 +0200)]
uprobes: Change prepare_uretprobe() to use uprobe_warn()

Turn the last pr_warn() in uprobes.c into uprobe_warn().

While at it:

   - s/kzalloc/kmalloc, we initialize every member of 'ri'

   - remove the pointless comment above the obvious code

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134010.GA4752@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Send SIGILL if handle_trampoline() fails
Oleg Nesterov [Tue, 21 Jul 2015 13:40:08 +0000 (15:40 +0200)]
uprobes: Send SIGILL if handle_trampoline() fails

1. It doesn't make sense to continue if handle_trampoline()
   fails, so change handle_swbp() to always return after this call.

2. Turn pr_warn() into uprobe_warn(), and change
   handle_trampoline() to send SIGILL on failure. It is pointless to
   return to user mode with the corrupted instruction_pointer() which
   we can't restore.

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134008.GA4745@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Introduce free_ret_instance()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:06 +0000 (15:40 +0200)]
uprobes: Introduce free_ret_instance()

We can simplify uprobe_free_utask() and handle_uretprobe_chain()
if we add a simple helper which does put_uprobe/kfree and
returns the ->next return_instance.
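
A toy illustration of the shape of such a helper (userspace stand-in, not
the kernel code): release the current node and hand back its successor so
the callers can keep walking the list.

#include <stdlib.h>

/* Simplified stand-in for a return_instance node. */
struct ri_node {
        struct ri_node *next;
        /* ... payload ... */
};

/* Free one node and return the next one; the kernel helper additionally
 * drops the uprobe reference (put_uprobe) before freeing. */
static struct ri_node *free_ri_node(struct ri_node *ri)
{
        struct ri_node *next = ri->next;

        free(ri);
        return next;
}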

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134006.GA4740@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agouprobes: Introduce get_uprobe()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:03 +0000 (15:40 +0200)]
uprobes: Introduce get_uprobe()

Cosmetic. Add the new trivial helper, get_uprobe(). It matches
the put_uprobe() we already have, and it lets us simplify a couple of
its users.

Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134003.GA4736@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoMerge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git...
Ingo Molnar [Fri, 31 Jul 2015 07:59:50 +0000 (09:59 +0200)]
Merge tag 'perf-core-for-mingo' of git://git./linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

User visible changes:

  - Force period term to overload global settings, i.e. previously this
    command line:

     $ perf record -e 'cpu/instructions,period=20000/',cycles -c 1000 sleep 1

    would result in both events having a period equal to 1000, with the fix we
    get something saner:

     $ perf evlist -v | grep period
     cpu/instructions,period=20000/: ... { sample_period, sample_freq }: 20000, ...
     cycles: ... { sample_period, sample_freq }: 1000 ...
     $

   (Jiri Olsa)

Infrastructure changes:

  - Use the dummy software event with freq=0 in the twatch.py python
    binding example, to avoid disabling nohz. (Arnaldo Carvalho de Melo)

  - Add some missing constants to the python binding. (Arnaldo Carvalho de Melo)

  - Fix mismatched declarations for elf_getphdrnum, which happens
    only in the corner case where this function is not found on
    the system.  (Arnaldo Carvalho de Melo)

  - Add build test for having ending double slash. (Jiri Olsa)

  - Introduce callgraph_set for callgraph option. (Kan Liang)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoMerge branch 'perf/urgent' into perf/core, to merge fixes before pulling more changes
Ingo Molnar [Fri, 31 Jul 2015 07:59:28 +0000 (09:59 +0200)]
Merge branch 'perf/urgent' into perf/core, to merge fixes before pulling more changes

Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoMerge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git...
Ingo Molnar [Fri, 31 Jul 2015 07:56:48 +0000 (09:56 +0200)]
Merge tag 'perf-urgent-for-mingo' of git://git./linux/kernel/git/acme/linux into perf/urgent

Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

  - Fix 'perf stat' transaction length metrics. (Andi Kleen)

  - Fix test build error when bindir contains double slash. (Pawel Moll)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
8 years agoperf tests: Adding build test for having ending double slash
Jiri Olsa [Mon, 27 Jul 2015 18:24:17 +0000 (20:24 +0200)]
perf tests: Adding build test for having ending double slash

Pawel Moll reported a build issue caused by having an extra slash (/) at the
end of the prefix variable.

  $ make prefix=/usr/local/

    CC       tests/attr.o
  tests/attr.c: In function ‘test__attr’:
  tests/attr.c:168:50: error: expected ‘)’ before ‘;’ token
    snprintf(path_perf, PATH_MAX, "%s/perf", BINDIR);
                                                ^
  tests/attr.c:176:1: error: expected ‘;’ before ‘}’ token
   }
   ^
  tests/attr.c:176:1: error: control reaches end of non-void function [-Werror=return-type]
   }
   ^
  cc1: all warnings being treated as errors

Add an automated test case for this.

Reported-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20150727182417.GD20509@krava.brq.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
8 years agoperf tools: Introduce callgraph_set for callgraph option
Kan Liang [Wed, 29 Jul 2015 09:42:12 +0000 (05:42 -0400)]
perf tools: Introduce callgraph_set for callgraph option

Introduce callgraph_set to indicate whether the callgraph option was set
by the user.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438162936-59698-4-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
8 years agoperf tools: Force period term to overload global settings
Jiri Olsa [Wed, 29 Jul 2015 09:42:11 +0000 (05:42 -0400)]
perf tools: Force period term to overload global settings

Currently the command line option settings beat the per-event period
settings:

With no global settings, we get per-event configuration:

  $ perf record -e 'cpu/instructions,period=20000/' sleep 1
  $ perf evlist -v
  ... { sample_period, sample_freq }: 20000 ...

With 'c' option period setup, we get 'c' option value:
  $ perf record -e 'cpu/instructions,period=20000/' -c 1000 sleep 1
  $ perf evlist -v
  ... { sample_period, sample_freq }: 1000 ...

This patch makes the per-event settings overload the global 'c' option
setup:

  $ perf record -e 'cpu/instructions,period=20000/' -c 1000 sleep 1
  $ perf evlist -v
  ... { sample_period, sample_freq }: 20000 ...

I think that making the per-event settings overload any other config
makes more sense than the current state. However, it breaks the current
'period' term handling, which might cause some noise.. so let's see ;-).

Also fix the parse event tests for the new behaviour.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438162936-59698-3-git-send-email-kan.liang@intel.com
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
8 years agoperf tools: Add support for event post configuration
Jiri Olsa [Wed, 29 Jul 2015 09:42:10 +0000 (05:42 -0400)]
perf tools: Add support for event post configuration

Add support for overloading any global settings for an event and forcing the
user-specified term value. It will be useful for the new time and backtrace
terms.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438162936-59698-2-git-send-email-kan.liang@intel.com
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
8 years agoperf session env: Rename exit method
Arnaldo Carvalho de Melo [Wed, 29 Jul 2015 15:18:24 +0000 (12:18 -0300)]
perf session env: Rename exit method

The semantic associated in tools/perf/ with foo__delete(instance) is to
release all resources referenced by 'instance' members and then release
the memory for 'instance' itself.

The perf_session_env__delete() function isn't doing this; it just does
the first part, but the space used by 'instance' itself isn't freed, as
it is embedded in a larger structure that will be freed at another stage.

For these cases we use foo__exit(), i.e. the usage is:

 void foo__delete(foo)
 {
         if (foo) {
                 foo__exit(foo);
                 free(foo);
         }
 }

But when we have something like:

 struct bar {
         struct foo foo;
         . . .
 }

Then we can't really call foo__delete(&bar.foo); we must have this
instead:

 void bar__exit(bar)
 {
         foo__exit(&bar.foo);
         /* free other bar-> resources */
 }

 void bar__delete(bar)
 {
         if (bar) {
                 bar__exit(bar);
                free(bar);
         }
 }

So just rename perf_session_env__delete() to perf_session_env__exit().

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-djbgpcfo5udqptx3q0flwtmk@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
8 years agoperf symbols: Fix mismatched declarations for elf_getphdrnum
Arnaldo Carvalho de Melo [Tue, 28 Jul 2015 15:01:33 +0000 (12:01 -0300)]
perf symbols: Fix mismatched declarations for elf_getphdrnum

When HAVE_ELF_GETPHDRNUM_SUPPORT is false we trip on this problem:

    CC       /tmp/build/perf/util/symbol-elf.o
  util/symbol-elf.c:41:12: error: static declaration of ‘elf_getphdrnum’ follows non-static declaration
   static int elf_getphdrnum(Elf *elf, size_t *dst)
            ^
  In file included from util/symbol.h:19:0,
                   from util/symbol-elf.c:8:
  /usr/include/libelf.h:206:12: note: previous declaration of ‘elf_getphdrnum’ was here
   extern int elf_getphdrnum (Elf *__elf, size_t *__dst);
            ^
    MKDIR    /tmp/build/perf/bench/
  /home/git/linux/tools/build/Makefile.build:68: recipe for target '/tmp/build/perf/util/symbol-elf.o' failed
  make[3]: *** [/tmp/build/perf/util/symbol-elf.o] Error 1

Fix it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qcmekyfedmov4sxr0wahcikr@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>