<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/trace/ftrace.c, branch v3.14</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v3.14</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v3.14'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2014-01-23T00:35:21Z</updated>
<entry>
<title>Merge tag 'trace-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace</title>
<updated>2014-01-23T00:35:21Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2014-01-23T00:35:21Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=60eaa0190f6b39dce18eb1975d9773ed8bc9a534'/>
<id>urn:sha1:60eaa0190f6b39dce18eb1975d9773ed8bc9a534</id>
<content type='text'>
Pull tracing updates from Steven Rostedt:
 "This pull request has a new feature to ftrace, namely the trace event
  triggers by Tom Zanussi.  A trigger is a way to enable an action when
  an event is hit.  The actions are:

   o  trace on/off - enable or disable tracing
   o  snapshot     - save the current trace buffer in the snapshot
   o  stacktrace   - dump the current stack trace to the ringbuffer
   o  enable/disable events - enable or disable another event
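
  The actions above are driven by writing to a per-event 'trigger' file.
  A minimal sketch of the interface (paths assume debugfs is mounted at
  /sys/kernel/debug; the chosen events are only examples):

```shell
cd /sys/kernel/debug/tracing

# Take one snapshot the first time sched_switch fires (":1" = at most once)
echo 'snapshot:1' &gt; events/sched/sched_switch/trigger

# Dump a stack trace to the ring buffer whenever kmalloc is hit
echo 'stacktrace' &gt; events/kmem/kmalloc/trigger

# Stop tracing when a chosen event fires
echo 'traceoff' &gt; events/sched/sched_process_exit/trigger
```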

  Namhyung Kim added updates to the tracing uprobes code, having the
  uprobes add support for fetch methods.

  The rest are various bug fixes with the new code, and minor ones for
  the old code"

* tag 'trace-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (38 commits)
  tracing: Fix buggered tee(2) on tracing_pipe
  tracing: Have trace buffer point back to trace_array
  ftrace: Fix synchronization location disabling and freeing ftrace_ops
  ftrace: Have function graph only trace based on global_ops filters
  ftrace: Synchronize setting function_trace_op with ftrace_trace_function
  tracing: Show available event triggers when no trigger is set
  tracing: Consolidate event trigger code
  tracing: Fix counter for traceon/off event triggers
  tracing: Remove double-underscore naming in syscall trigger invocations
  tracing/kprobes: Add trace event trigger invocations
  tracing/probes: Fix build break on !CONFIG_KPROBE_EVENT
  tracing/uprobes: Add @+file_offset fetch method
  uprobes: Allocate -&gt;utask before handler_chain() for tracing handlers
  tracing/uprobes: Add support for full argument access methods
  tracing/uprobes: Fetch args before reserving a ring buffer
  tracing/uprobes: Pass 'is_return' to traceprobe_parse_probe_arg()
  tracing/probes: Implement 'memory' fetch method for uprobes
  tracing/probes: Add fetch{,_size} member into deref fetch method
  tracing/probes: Move 'symbol' fetch method to kprobes
  tracing/probes: Implement 'stack' fetch method for uprobes
  ...
</content>
</entry>
<entry>
<title>ftrace: Fix synchronization location disabling and freeing ftrace_ops</title>
<updated>2014-01-13T17:56:21Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2014-01-13T17:56:21Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=a4c35ed241129dd142be4cadb1e5a474a56d5464'/>
<id>urn:sha1:a4c35ed241129dd142be4cadb1e5a474a56d5464</id>
<content type='text'>
The synchronization needed after ftrace_ops are unregistered must happen
after the callback is disabled from being called by functions.

The current location happens after the function is removed from the
internal lists, but not after the function callbacks are disabled, leaving
the functions susceptible to being called after their callbacks are freed.

This affects perf and any external users of function tracing (LTTng and
SystemTap).

Cc: stable@vger.kernel.org # 3.0+
Fixes: cdbe61bfe704 "ftrace: Allow dynamically allocated function tracers"
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Have function graph only trace based on global_ops filters</title>
<updated>2014-01-13T15:52:58Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2014-01-13T15:30:23Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=23a8e8441a0a74dd612edf81dc89d1600bc0a3d1'/>
<id>urn:sha1:23a8e8441a0a74dd612edf81dc89d1600bc0a3d1</id>
<content type='text'>
Doing some different tests, I discovered that function graph tracing, when
filtered via the set_ftrace_filter and set_ftrace_notrace files, does
not always honor those filters if another function ftrace_ops is registered
to trace functions.

The reason is that function graph just happens to trace all functions
that the function tracer enables. When there was only one user of
function tracing, the function graph tracer did not need to worry about
being called by functions that it did not want to trace. But now that there
are other users, this becomes a problem.

For example, one just needs to do the following:

 # cd /sys/kernel/debug/tracing
 # echo schedule &gt; set_ftrace_filter
 # echo function_graph &gt; current_tracer
 # cat trace
[..]
 0)               |  schedule() {
 ------------------------------------------
 0)    &lt;idle&gt;-0    =&gt;   rcu_pre-7
 ------------------------------------------

 0) ! 2980.314 us |  }
 0)               |  schedule() {
 ------------------------------------------
 0)   rcu_pre-7    =&gt;    &lt;idle&gt;-0
 ------------------------------------------

 0) + 20.701 us   |  }

 # echo 1 &gt; /proc/sys/kernel/stack_tracer_enabled
 # cat trace
[..]
 1) + 20.825 us   |      }
 1) + 21.651 us   |    }
 1) + 30.924 us   |  } /* SyS_ioctl */
 1)               |  do_page_fault() {
 1)               |    __do_page_fault() {
 1)   0.274 us    |      down_read_trylock();
 1)   0.098 us    |      find_vma();
 1)               |      handle_mm_fault() {
 1)               |        _raw_spin_lock() {
 1)   0.102 us    |          preempt_count_add();
 1)   0.097 us    |          do_raw_spin_lock();
 1)   2.173 us    |        }
 1)               |        do_wp_page() {
 1)   0.079 us    |          vm_normal_page();
 1)   0.086 us    |          reuse_swap_page();
 1)   0.076 us    |          page_move_anon_rmap();
 1)               |          unlock_page() {
 1)   0.082 us    |            page_waitqueue();
 1)   0.086 us    |            __wake_up_bit();
 1)   1.801 us    |          }
 1)   0.075 us    |          ptep_set_access_flags();
 1)               |          _raw_spin_unlock() {
 1)   0.098 us    |            do_raw_spin_unlock();
 1)   0.105 us    |            preempt_count_sub();
 1)   1.884 us    |          }
 1)   9.149 us    |        }
 1) + 13.083 us   |      }
 1)   0.146 us    |      up_read();

When the stack tracer was enabled, it enabled all functions to be traced,
and the function graph tracer then traced them as well. This is a side
effect that should not occur.

To fix this, a test is added when function tracing is changed, as well as
when the graph tracer is enabled, to see if anything other than the ftrace
global_ops function tracer is enabled. If so, the graph tracer calls a test
trampoline that looks at the function being traced and compares it with the
filters defined by the global_ops.

As an optimization, if there are no other function tracers registered, or if
the only registered function tracers also use the global ops, the function
graph infrastructure will call the registered function graph callback directly
and not go through the test trampoline.
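
The idea of the test trampoline can be sketched roughly as follows
(simplified signatures; a sketch only, not the exact committed code):

```c
/* Sketch: gate the graph entry callback on the global_ops filters.
 * Simplified signatures; not the exact committed code. */
static int ftrace_graph_entry_test(struct ftrace_graph_ent *trace)
{
	/* Only trace functions that match the global_ops filter hashes */
	if (!ftrace_ops_test(&amp;global_ops, trace-&gt;func, NULL))
		return 0;

	return __ftrace_graph_entry(trace);
}
```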

Cc: stable@vger.kernel.org # 3.3+
Fixes: d2d45c7a03a2 "tracing: Have stack_tracer use a separate list of functions"
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Synchronize setting function_trace_op with ftrace_trace_function</title>
<updated>2014-01-10T03:00:25Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-11-08T19:17:30Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=405e1d834807e51b2ebd3dea81cb51e53fb61504'/>
<id>urn:sha1:405e1d834807e51b2ebd3dea81cb51e53fb61504</id>
<content type='text'>
ftrace_trace_function is a variable that holds what function will be called
directly by the assembly code (mcount). If just a single function is
registered and it handles recursion itself, then the assembly will call that
function directly without any helper function. It also passes in the
ftrace_op that was registered with the callback. The ftrace_op to send is
stored in the function_trace_op variable.

The ftrace_trace_function and function_trace_op need to be coordinated so
that the callback won't be called with the wrong ftrace_op; otherwise bad
things can happen if it expected a different op. Luckily, there is currently
no callback that skips the helper functions and also requires this. But
there soon will be, and this needs to be fixed.

Use a set_function_trace_op variable to store the ftrace_op, and set
function_trace_op to it when it is safe to do so (during the update function
within the breakpoint or stop machine calls). If dynamic ftrace is not
being used (static tracing), we have to do a bit more synchronization
when the ftrace_trace_function is set, as that takes effect immediately
(as opposed to dynamic ftrace doing it with the modification of the trampoline).
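
The dynamic-ftrace ordering described above can be sketched as follows
(a sketch only; not the actual kernel code):

```c
/* Sketch of the dynamic-ftrace update ordering; not the actual code. */
static void update_ftrace_function(struct ftrace_ops *ops)
{
	/* 1. Stage the new op; nothing consumes it yet. */
	set_function_trace_op = ops;

	/* 2. At a safe point (inside the breakpoint or stop_machine
	 *    update), publish the op the callback will be passed. */
	function_trace_op = set_function_trace_op;

	/* 3. Only then switch the function the assembly calls, so the
	 *    callback can never observe a stale op. */
	ftrace_trace_function = ops-&gt;func;
}
```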

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Add generic tracing_lseek() function</title>
<updated>2014-01-02T21:17:12Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-12-21T22:39:40Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=098c879e1f2d6ee7afbfe959f6b04070065cec90'/>
<id>urn:sha1:098c879e1f2d6ee7afbfe959f6b04070065cec90</id>
<content type='text'>
Trace event triggers added an lseek that uses the ftrace_filter_lseek()
function. Unfortunately, when function tracing is not configured in,
that function is not defined and the kernel fails to build.

This is the second time that function was added to a file ops and
it broke the build due to requiring special config dependencies.

Make a generic tracing_lseek() that all the tracing utilities may
use.
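
Such a generic helper might look roughly like this (a sketch based on the
description above and the seq_file helpers; not necessarily the exact
committed code):

```c
/* Sketch: a config-independent lseek for tracing file_operations.
 * Readable files delegate to seq_lseek(); write-only files simply
 * reset the position to 0. Not necessarily the exact committed code. */
loff_t tracing_lseek(struct file *file, loff_t offset, int whence)
{
	int ret;

	if (file-&gt;f_mode &amp; FMODE_READ)
		ret = seq_lseek(file, offset, whence);
	else
		file-&gt;f_pos = ret = 0;

	return ret;
}
```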

Also, modify the old ftrace_filter_lseek() to return 0 instead of
1 on WRONLY. Not sure why it was 1, as that does not make sense.

This also changes the old tracing_seek() to modify the file pos
pointer on WRONLY as well.

Reported-by: kbuild test robot &lt;fengguang.wu@intel.com&gt;
Tested-by: Tom Zanussi &lt;tom.zanussi@linux.intel.com&gt;
Acked-by: Tom Zanussi &lt;tom.zanussi@linux.intel.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Initialize the ftrace profiler for each possible cpu</title>
<updated>2013-12-16T15:53:46Z</updated>
<author>
<name>Miao Xie</name>
<email>miaox@cn.fujitsu.com</email>
</author>
<published>2013-12-16T07:20:01Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c4602c1c818bd6626178d6d3fcc152d9f2f48ac0'/>
<id>urn:sha1:c4602c1c818bd6626178d6d3fcc152d9f2f48ac0</id>
<content type='text'>
Ftrace currently initializes only the online CPUs. This implementation has
two problems:
- If we online a CPU after we enable the function profile, and then run the
  test, we will lose the trace information on that CPU.
  Steps to reproduce:
  # echo 0 &gt; /sys/devices/system/cpu/cpu1/online
  # cd &lt;debugfs&gt;/tracing/
  # echo &lt;some function name&gt; &gt;&gt; set_ftrace_filter
  # echo 1 &gt; function_profile_enabled
  # echo 1 &gt; /sys/devices/system/cpu/cpu1/online
  # run test
- If we offline a CPU before we enable the function profile, we will not clear
  the trace information when we enable the function profile. This is confusing
  to users.
  Steps to reproduce:
  # cd &lt;debugfs&gt;/tracing/
  # echo &lt;some function name&gt; &gt;&gt; set_ftrace_filter
  # echo 1 &gt; function_profile_enabled
  # run test
  # cat trace_stat/function*
  # echo 0 &gt; /sys/devices/system/cpu/cpu1/online
  # echo 0 &gt; function_profile_enabled
  # echo 1 &gt; function_profile_enabled
  # cat trace_stat/function*
  # run test
  # cat trace_stat/function*

So it is better to initialize the ftrace profiler for each possible cpu
every time we enable the function profile, instead of just the online ones.
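
The fix amounts to iterating over possible rather than online CPUs when
setting up the per-cpu profile state, roughly (a sketch; helper name
assumed from context, not the exact committed code):

```c
/* Sketch: set up profiler state for every possible CPU so later
 * hotplug events cannot leave a CPU uninitialized or stale. */
static int ftrace_profile_init(void)
{
	int cpu;
	int ret = 0;

	for_each_possible_cpu(cpu) {	/* was: for_each_online_cpu() */
		ret = ftrace_profile_init_cpu(cpu);
		if (ret)
			break;
	}

	return ret;
}
```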

Link: http://lkml.kernel.org/r/1387178401-10619-1-git-send-email-miaox@cn.fujitsu.com

Cc: stable@vger.kernel.org # 2.6.31+
Signed-off-by: Miao Xie &lt;miaox@cn.fujitsu.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Fix function graph with loading of modules</title>
<updated>2013-11-26T15:36:50Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-11-26T01:59:46Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8a56d7761d2d041ae5e8215d20b4167d8aa93f51'/>
<id>urn:sha1:8a56d7761d2d041ae5e8215d20b4167d8aa93f51</id>
<content type='text'>
Commit 8c4f3c3fa9681 "ftrace: Check module functions being traced on reload"
fixed module loading and unloading with respect to function tracing, but
it missed the function graph tracer. If you perform the following:

 # cd /sys/kernel/debug/tracing
 # echo function_graph &gt; current_tracer
 # modprobe nfsd
 # echo nop &gt; current_tracer

You'll get the following oops message:

 ------------[ cut here ]------------
 WARNING: CPU: 2 PID: 2910 at /linux.git/kernel/trace/ftrace.c:1640 __ftrace_hash_rec_update.part.35+0x168/0x1b9()
 Modules linked in: nfsd exportfs nfs_acl lockd ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables uinput snd_hda_codec_idt
 CPU: 2 PID: 2910 Comm: bash Not tainted 3.13.0-rc1-test #7
 Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
  0000000000000668 ffff8800787efcf8 ffffffff814fe193 ffff88007d500000
  0000000000000000 ffff8800787efd38 ffffffff8103b80a 0000000000000668
  ffffffff810b2b9a ffffffff81a48370 0000000000000001 ffff880037aea000
 Call Trace:
  [&lt;ffffffff814fe193&gt;] dump_stack+0x4f/0x7c
  [&lt;ffffffff8103b80a&gt;] warn_slowpath_common+0x81/0x9b
  [&lt;ffffffff810b2b9a&gt;] ? __ftrace_hash_rec_update.part.35+0x168/0x1b9
  [&lt;ffffffff8103b83e&gt;] warn_slowpath_null+0x1a/0x1c
  [&lt;ffffffff810b2b9a&gt;] __ftrace_hash_rec_update.part.35+0x168/0x1b9
  [&lt;ffffffff81502f89&gt;] ? __mutex_lock_slowpath+0x364/0x364
  [&lt;ffffffff810b2cc2&gt;] ftrace_shutdown+0xd7/0x12b
  [&lt;ffffffff810b47f0&gt;] unregister_ftrace_graph+0x49/0x78
  [&lt;ffffffff810c4b30&gt;] graph_trace_reset+0xe/0x10
  [&lt;ffffffff810bf393&gt;] tracing_set_tracer+0xa7/0x26a
  [&lt;ffffffff810bf5e1&gt;] tracing_set_trace_write+0x8b/0xbd
  [&lt;ffffffff810c501c&gt;] ? ftrace_return_to_handler+0xb2/0xde
  [&lt;ffffffff811240a8&gt;] ? __sb_end_write+0x5e/0x5e
  [&lt;ffffffff81122aed&gt;] vfs_write+0xab/0xf6
  [&lt;ffffffff8150a185&gt;] ftrace_graph_caller+0x85/0x85
  [&lt;ffffffff81122dbd&gt;] SyS_write+0x59/0x82
  [&lt;ffffffff8150a185&gt;] ftrace_graph_caller+0x85/0x85
  [&lt;ffffffff8150a2d2&gt;] system_call_fastpath+0x16/0x1b
 ---[ end trace 940358030751eafb ]---

The above mentioned commit didn't go far enough. It covered the function
tracer by adding checks in __register_ftrace_function(), but the function
graph tracer circumvents that path (for a slight efficiency gain when the
function graph tracer runs alongside a function tracer; the gain was not
worth it).

The problem is that ftrace_startup() should always be called after
__register_ftrace_function() if this bug is to be completely fixed.

This solution therefore moves __register_ftrace_function() inside of
ftrace_startup() and removes the need to call them both.

Reported-by: Dave Wysochanski &lt;dwysocha@redhat.com&gt;
Fixes: ed926f9b35cd ("ftrace: Use counters to enable functions to trace")
Cc: stable@vger.kernel.org # 3.0+
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Make register/unregister_ftrace_command __init</title>
<updated>2013-11-05T22:43:40Z</updated>
<author>
<name>Tom Zanussi</name>
<email>tom.zanussi@linux.intel.com</email>
</author>
<published>2013-10-24T13:34:18Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=38de93abec8d8acd8d6dbbe9b0d92d6d5cdb3090'/>
<id>urn:sha1:38de93abec8d8acd8d6dbbe9b0d92d6d5cdb3090</id>
<content type='text'>
register/unregister_ftrace_command() are only ever called from __init
functions, so can themselves be made __init.

Also make register_snapshot_cmd() __init for the same reason.

Link: http://lkml.kernel.org/r/d4042c8cadb7ae6f843ac9a89a24e1c6a3099727.1382620672.git.tom.zanussi@linux.intel.com

Signed-off-by: Tom Zanussi &lt;tom.zanussi@linux.intel.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Have control op function callback only trace when RCU is watching</title>
<updated>2013-11-05T21:04:26Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-11-04T23:34:44Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=b5aa3a472b6d13d57a7521a663290dea2fb483a7'/>
<id>urn:sha1:b5aa3a472b6d13d57a7521a663290dea2fb483a7</id>
<content type='text'>
Dave Jones reported that trinity would be able to trigger the following
back trace:

 ===============================
 [ INFO: suspicious RCU usage. ]
 3.10.0-rc2+ #38 Not tainted
 -------------------------------
 include/linux/rcupdate.h:771 rcu_read_lock() used illegally while idle!
 other info that might help us debug this:

 RCU used illegally from idle CPU!  rcu_scheduler_active = 1, debug_locks = 0
 RCU used illegally from extended quiescent state!
 1 lock held by trinity-child1/18786:
  #0:  (rcu_read_lock){.+.+..}, at: [&lt;ffffffff8113dd48&gt;] __perf_event_overflow+0x108/0x310
 stack backtrace:
 CPU: 3 PID: 18786 Comm: trinity-child1 Not tainted 3.10.0-rc2+ #38
  0000000000000000 ffff88020767bac8 ffffffff816e2f6b ffff88020767baf8
  ffffffff810b5897 ffff88021de92520 0000000000000000 ffff88020767bbf8
  0000000000000000 ffff88020767bb78 ffffffff8113ded4 ffffffff8113dd48
 Call Trace:
  [&lt;ffffffff816e2f6b&gt;] dump_stack+0x19/0x1b
  [&lt;ffffffff810b5897&gt;] lockdep_rcu_suspicious+0xe7/0x120
  [&lt;ffffffff8113ded4&gt;] __perf_event_overflow+0x294/0x310
  [&lt;ffffffff8113dd48&gt;] ? __perf_event_overflow+0x108/0x310
  [&lt;ffffffff81309289&gt;] ? __const_udelay+0x29/0x30
  [&lt;ffffffff81076054&gt;] ? __rcu_read_unlock+0x54/0xa0
  [&lt;ffffffff816f4000&gt;] ? ftrace_call+0x5/0x2f
  [&lt;ffffffff8113dfa1&gt;] perf_swevent_overflow+0x51/0xe0
  [&lt;ffffffff8113e08f&gt;] perf_swevent_event+0x5f/0x90
  [&lt;ffffffff8113e1c9&gt;] perf_tp_event+0x109/0x4f0
  [&lt;ffffffff8113e36f&gt;] ? perf_tp_event+0x2af/0x4f0
  [&lt;ffffffff81074630&gt;] ? __rcu_read_lock+0x20/0x20
  [&lt;ffffffff8112d79f&gt;] perf_ftrace_function_call+0xbf/0xd0
  [&lt;ffffffff8110e1e1&gt;] ? ftrace_ops_control_func+0x181/0x210
  [&lt;ffffffff81074630&gt;] ? __rcu_read_lock+0x20/0x20
  [&lt;ffffffff81100cae&gt;] ? rcu_eqs_enter_common+0x5e/0x470
  [&lt;ffffffff8110e1e1&gt;] ftrace_ops_control_func+0x181/0x210
  [&lt;ffffffff816f4000&gt;] ftrace_call+0x5/0x2f
  [&lt;ffffffff8110e229&gt;] ? ftrace_ops_control_func+0x1c9/0x210
  [&lt;ffffffff816f4000&gt;] ? ftrace_call+0x5/0x2f
  [&lt;ffffffff81074635&gt;] ? debug_lockdep_rcu_enabled+0x5/0x40
  [&lt;ffffffff81074635&gt;] ? debug_lockdep_rcu_enabled+0x5/0x40
  [&lt;ffffffff81100cae&gt;] ? rcu_eqs_enter_common+0x5e/0x470
  [&lt;ffffffff8110112a&gt;] rcu_eqs_enter+0x6a/0xb0
  [&lt;ffffffff81103673&gt;] rcu_user_enter+0x13/0x20
  [&lt;ffffffff8114541a&gt;] user_enter+0x6a/0xd0
  [&lt;ffffffff8100f6d8&gt;] syscall_trace_leave+0x78/0x140
  [&lt;ffffffff816f46af&gt;] int_check_syscall_exit_work+0x34/0x3d
 ------------[ cut here ]------------

Perf uses rcu_read_lock(), but as the function tracer can trace functions
even when RCU is not currently active, the rcu_read_lock() used by perf
is rendered ineffective.

As perf is currently the only user of the ftrace_ops_control_func() and
perf is also the only function callback that actively uses rcu_read_lock(),
the quick fix is to prevent the ftrace_ops_control_func() from calling
its callbacks if RCU is not active.

With Paul's new "rcu_is_watching()" we can tell if RCU is active or not.

Reported-by: Dave Jones &lt;davej@redhat.com&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Jiri Olsa &lt;jolsa@redhat.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Add set_graph_notrace filter</title>
<updated>2013-10-19T02:23:16Z</updated>
<author>
<name>Namhyung Kim</name>
<email>namhyung.kim@lge.com</email>
</author>
<published>2013-10-14T08:24:26Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=29ad23b00474c34e3b5040dda508c78d33a1a3eb'/>
<id>urn:sha1:29ad23b00474c34e3b5040dda508c78d33a1a3eb</id>
<content type='text'>
The set_graph_notrace filter is analogous to set_ftrace_notrace and
can be used for eliminating uninteresting part of function graph trace
output.  It also works with set_graph_function nicely.

  # cd /sys/kernel/debug/tracing/
  # echo do_page_fault &gt; set_graph_function
  # perf ftrace live true
   2)               |  do_page_fault() {
   2)               |    __do_page_fault() {
   2)   0.381 us    |      down_read_trylock();
   2)   0.055 us    |      __might_sleep();
   2)   0.696 us    |      find_vma();
   2)               |      handle_mm_fault() {
   2)               |        handle_pte_fault() {
   2)               |          __do_fault() {
   2)               |            filemap_fault() {
   2)               |              find_get_page() {
   2)   0.033 us    |                __rcu_read_lock();
   2)   0.035 us    |                __rcu_read_unlock();
   2)   1.696 us    |              }
   2)   0.031 us    |              __might_sleep();
   2)   2.831 us    |            }
   2)               |            _raw_spin_lock() {
   2)   0.046 us    |              add_preempt_count();
   2)   0.841 us    |            }
   2)   0.033 us    |            page_add_file_rmap();
   2)               |            _raw_spin_unlock() {
   2)   0.057 us    |              sub_preempt_count();
   2)   0.568 us    |            }
   2)               |            unlock_page() {
   2)   0.084 us    |              page_waitqueue();
   2)   0.126 us    |              __wake_up_bit();
   2)   1.117 us    |            }
   2)   7.729 us    |          }
   2)   8.397 us    |        }
   2)   8.956 us    |      }
   2)   0.085 us    |      up_read();
   2) + 12.745 us   |    }
   2) + 13.401 us   |  }
  ...

  # echo handle_mm_fault &gt; set_graph_notrace
  # perf ftrace live true
   1)               |  do_page_fault() {
   1)               |    __do_page_fault() {
   1)   0.205 us    |      down_read_trylock();
   1)   0.041 us    |      __might_sleep();
   1)   0.344 us    |      find_vma();
   1)   0.069 us    |      up_read();
   1)   4.692 us    |    }
   1)   5.311 us    |  }
  ...

Link: http://lkml.kernel.org/r/1381739066-7531-5-git-send-email-namhyung@kernel.org

Signed-off-by: Namhyung Kim &lt;namhyung@kernel.org&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
</feed>
