<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/trace/trace.c, branch v4.8</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.8</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.8'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2016-09-26T01:40:13Z</updated>
<entry>
<title>Merge tag 'trace-v4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace</title>
<updated>2016-09-26T01:40:13Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2016-09-26T01:40:13Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=4c04b4b534cbe8c0cc0661e232bbb9708e212bc2'/>
<id>urn:sha1:4c04b4b534cbe8c0cc0661e232bbb9708e212bc2</id>
<content type='text'>
Pull tracefs fixes from Steven Rostedt:
 "Al Viro has been looking at the tracefs code, and has pointed out some
  issues.  This contains one fix by me and one by Al.  I'm sure that
  he'll come up with more but for now I tested these patches and they
  don't appear to have any negative impact on tracing"

* tag 'trace-v4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  fix memory leaks in tracing_buffers_splice_read()
  tracing: Move mutex to protect against resetting of seq data
</content>
</entry>
<entry>
<title>fix memory leaks in tracing_buffers_splice_read()</title>
<updated>2016-09-25T17:30:13Z</updated>
<author>
<name>Al Viro</name>
<email>viro@zeniv.linux.org.uk</email>
</author>
<published>2016-09-17T22:31:46Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1ae2293dd6d2f5c823cf97e60b70d03631cd622f'/>
<id>urn:sha1:1ae2293dd6d2f5c823cf97e60b70d03631cd622f</id>
<content type='text'>
Cc: stable@vger.kernel.org
Signed-off-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>tracing: Move mutex to protect against resetting of seq data</title>
<updated>2016-09-25T14:27:08Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2016-09-24T02:57:13Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1245800c0f96eb6ebb368593e251d66c01e61022'/>
<id>urn:sha1:1245800c0f96eb6ebb368593e251d66c01e61022</id>
<content type='text'>
The iter-&gt;seq can be reset outside the protection of the mutex, and so can
the reading of the user data. Move the mutex up to the beginning of the function.

Fixes: d7350c3f45694 ("tracing/core: make the read callbacks reentrants")
Cc: stable@vger.kernel.org # 2.6.30+
Reported-by: Al Viro &lt;viro@ZenIV.linux.org.uk&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Using for_each_set_bit() to simplify trace_pid_write()</title>
<updated>2016-07-05T15:22:40Z</updated>
<author>
<name>Wei Yongjun</name>
<email>yongjun_wei@trendmicro.com.cn</email>
</author>
<published>2016-07-04T15:10:04Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=67f20b084574def586ecba68508acd5d054ccc88'/>
<id>urn:sha1:67f20b084574def586ecba68508acd5d054ccc88</id>
<content type='text'>
Using for_each_set_bit() to simplify the code.

Link: http://lkml.kernel.org/r/1467645004-11169-1-git-send-email-weiyj_lk@163.com

Signed-off-by: Wei Yongjun &lt;yongjun_wei@trendmicro.com.cn&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Move toplevel init out of ftrace_init_tracefs()</title>
<updated>2016-07-05T14:47:03Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2016-07-05T14:04:34Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=501c2375253c0795048f48368e0b3e8b2f6646dc'/>
<id>urn:sha1:501c2375253c0795048f48368e0b3e8b2f6646dc</id>
<content type='text'>
Commit 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like events
do") placed ftrace_init_tracefs into the instance creation, and encapsulated
the top level updating with an if conditional, as the top level only gets
updated at boot up. Unfortunately, this triggers section mismatch errors as
the init functions are called from a function that can be called later, and
the section mismatch logic is unaware of the if conditional that would
prevent it from happening at run time.

To make everyone happy, create a separate ftrace_init_tracefs_toplevel()
routine that is only called from init functions, and have it call the other
init functions for the toplevel directory.

Link: http://lkml.kernel.org/r/20160704102139.19cbc0d9@gandalf.local.home

Reported-by: kbuild test robot &lt;fengguang.wu@intel.com&gt;
Reported-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Fixes: 345ddcc882d8 ("ftrace: Have set_ftrace_pid use the bitmap like events do")
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Skip more functions when doing stack tracing of events</title>
<updated>2016-06-23T22:48:56Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2016-06-23T18:03:47Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=be54f69c26193de31053190761e521903b89d098'/>
<id>urn:sha1:be54f69c26193de31053190761e521903b89d098</id>
<content type='text'>
 # echo 1 &gt; options/stacktrace
 # echo 1 &gt; events/sched/sched_switch/enable
 # cat trace
          &lt;idle&gt;-0     [002] d..2  1982.525169: &lt;stack trace&gt;
 =&gt; save_stack_trace
 =&gt; __ftrace_trace_stack
 =&gt; trace_buffer_unlock_commit_regs
 =&gt; event_trigger_unlock_commit
 =&gt; trace_event_buffer_commit
 =&gt; trace_event_raw_event_sched_switch
 =&gt; __schedule
 =&gt; schedule
 =&gt; schedule_preempt_disabled
 =&gt; cpu_startup_entry
 =&gt; start_secondary

The above shows that we are seeing 6 functions before ever making it to the
caller of the sched_switch event.

 # echo stacktrace &gt; events/sched/sched_switch/trigger
 # cat trace
          &lt;idle&gt;-0     [002] d..3  2146.335208: &lt;stack trace&gt;
 =&gt; trace_event_buffer_commit
 =&gt; trace_event_raw_event_sched_switch
 =&gt; __schedule
 =&gt; schedule
 =&gt; schedule_preempt_disabled
 =&gt; cpu_startup_entry
 =&gt; start_secondary

The stacktrace trigger isn't as bad, because it adds its own skip to the
stacktracing, but it still shows two extra functions.

One issue is that if the caller passes in its own "regs" then nothing should
be added to the skip, as the regs snapshot will not include the internal trace
functions being called. This was an issue that was fixed by commit 7717c6be6999
("tracing: Fix stacktrace skip depth in trace_buffer_unlock_commit_regs()"),
as adding the skip number for kprobes left the probes with no stack at all.

But since this is only an issue when regs is being used, a skip should be
added if regs is NULL. Now we have:

 # echo 1 &gt; options/stacktrace
 # echo 1 &gt; events/sched/sched_switch/enable
 # cat trace
          &lt;idle&gt;-0     [000] d..2  1297.676333: &lt;stack trace&gt;
 =&gt; __schedule
 =&gt; schedule
 =&gt; schedule_preempt_disabled
 =&gt; cpu_startup_entry
 =&gt; rest_init
 =&gt; start_kernel
 =&gt; x86_64_start_reservations
 =&gt; x86_64_start_kernel

 # echo stacktrace &gt; events/sched/sched_switch/trigger
 # cat trace
          &lt;idle&gt;-0     [002] d..3  1370.759745: &lt;stack trace&gt;
 =&gt; __schedule
 =&gt; schedule
 =&gt; schedule_preempt_disabled
 =&gt; cpu_startup_entry
 =&gt; start_secondary

And kprobes are not touched.

Reported-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Choose static tp_printk buffer by explicit nesting count</title>
<updated>2016-06-20T13:54:20Z</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@kernel.org</email>
</author>
<published>2016-05-26T19:00:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=e2ace001176dc9745a472fe8bda1f0b28a4d7351'/>
<id>urn:sha1:e2ace001176dc9745a472fe8bda1f0b28a4d7351</id>
<content type='text'>
Currently, the trace_printk code chooses which static buffer to use based
on what type of atomic context (NMI, IRQ, etc) it's in.  Simplify the
code and make it more robust: simply count the nesting depth and choose
a buffer based on the current nesting depth.

The new code only drops an event if we nest more than four levels deep,
whereas the old code was guaranteed to malfunction if that happened.

Link: http://lkml.kernel.org/r/07ab03aecfba25fcce8f9a211b14c9c5e2865c58.1464289095.git.luto@kernel.org

Acked-by: Namhyung Kim &lt;namhyung@kernel.org&gt;
Signed-off-by: Andy Lutomirski &lt;luto@kernel.org&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>ftrace: Have set_ftrace_pid use the bitmap like events do</title>
<updated>2016-06-20T13:54:19Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2016-04-22T22:11:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=345ddcc882d8896dcbdcb3e0ee4a415fc23ec8b0'/>
<id>urn:sha1:345ddcc882d8896dcbdcb3e0ee4a415fc23ec8b0</id>
<content type='text'>
Convert set_ftrace_pid to use the bitmap like set_event_pid does. This
allows instances to use the pid filtering as well, and will allow a
function-fork option to set whether the children of a traced function should
be traced or not.

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Move pid_list write processing into its own function</title>
<updated>2016-06-20T13:54:18Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2016-04-21T15:35:30Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=76c813e26606d35ea9d8d6f96e646b3944c730a9'/>
<id>urn:sha1:76c813e26606d35ea9d8d6f96e646b3944c730a9</id>
<content type='text'>
The addition of PIDs into a pid_list via the write operation of
set_event_pid is a bit complex. The same operation will be needed for
function tracing pids. Move the code into its own generic function in
trace.c, so that we can avoid duplication of this code.

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Move the pid_list seq_file functions to be global</title>
<updated>2016-06-20T13:54:17Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2016-04-20T19:19:54Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=5cc8976bd52153678ca37cc1e3000833b20276f3'/>
<id>urn:sha1:5cc8976bd52153678ca37cc1e3000833b20276f3</id>
<content type='text'>
To allow other aspects of ftrace to use the pid_list logic, we need to reuse
the seq_file functions. Turning the generic parts into functions that can be
called from other files helps in this regard.

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
</content>
</entry>
</feed>
