<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/trace/bpf_trace.c, branch v5.4</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v5.4</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v5.4'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2019-09-29T00:47:33Z</updated>
<entry>
<title>Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net</title>
<updated>2019-09-29T00:47:33Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2019-09-29T00:47:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=02dc96ef6c25f990452c114c59d75c368a1f4c8f'/>
<id>urn:sha1:02dc96ef6c25f990452c114c59d75c368a1f4c8f</id>
<content type='text'>
Pull networking fixes from David Miller:

 1) Sanity check URB networking device parameters to avoid divide by
    zero, from Oliver Neukum.

 2) Disable global multicast filter in NCSI, otherwise LLDP and IPv6
    don't work properly. Longer term this needs a better fix, though.
    From Vijay Khemka.

 3) Small fixes to selftests (use ping when ping6 is not present, etc.)
    from David Ahern.

 4) Bring back the rt_uses_gateway member of struct rtable; its
    semantics were not well understood, and trying to remove it broke
    things. From David Ahern.

 5) Move usbnet sanity checking, ignore endpoints with invalid
    wMaxPacketSize. From Bjørn Mork.

 6) Missing Kconfig deps for sja1105 driver, from Mao Wenan.

 7) Various small fixes to the mlx5 DR steering code, from Alaa Hleihel,
    Alex Vesker, and Yevgeny Kliteynik.

 8) Missing CAP_NET_RAW checks in various places, from Ori Nimron.

 9) Fix crash when removing sch_cbs entry while offloading is enabled,
    from Vinicius Costa Gomes.

10) Signedness bug fixes, generally in checking the result given by
    of_get_phy_mode() and friends. From Dan Carpenter.

11) Disable preemption around BPF_PROG_RUN() calls, from Eric Dumazet.

12) Don't create VRF ipv6 rules if ipv6 is disabled, from David Ahern.

13) Fix quantization code in tcp_bbr, from Kevin Yang.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (127 commits)
  net: tap: clean up an indentation issue
  nfp: abm: fix memory leak in nfp_abm_u32_knode_replace
  tcp: better handle TCP_USER_TIMEOUT in SYN_SENT state
  sk_buff: drop all skb extensions on free and skb scrubbing
  tcp_bbr: fix quantization code to not raise cwnd if not probing bandwidth
  mlxsw: spectrum_flower: Fail in case user specifies multiple mirror actions
  Documentation: Clarify trap's description
  mlxsw: spectrum: Clear VLAN filters during port initialization
  net: ena: clean up indentation issue
  NFC: st95hf: clean up indentation issue
  net: phy: micrel: add Asym Pause workaround for KSZ9021
  net: socionext: ave: Avoid using netdev_err() before calling register_netdev()
  ptp: correctly disable flags on old ioctls
  lib: dimlib: fix help text typos
  net: dsa: microchip: Always set regmap stride to 1
  nfp: flower: fix memory leak in nfp_flower_spawn_vnic_reprs
  nfp: flower: prevent memory leak in nfp_flower_spawn_phy_reprs
  net/sched: Set default of CONFIG_NET_TC_SKB_EXT to N
  vrf: Do not attempt to create IPv6 mcast rule if IPv6 is disabled
  net: sched: sch_sfb: don't call qdisc_put() while holding tree lock
  ...
</content>
</entry>
<entry>
<title>Merge branch 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security</title>
<updated>2019-09-28T15:14:15Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2019-09-28T15:14:15Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=aefcf2f4b58155d27340ba5f9ddbe9513da8286d'/>
<id>urn:sha1:aefcf2f4b58155d27340ba5f9ddbe9513da8286d</id>
<content type='text'>
Pull kernel lockdown mode from James Morris:
 "This is the latest iteration of the kernel lockdown patchset, from
  Matthew Garrett, David Howells and others.

  From the original description:

    This patchset introduces an optional kernel lockdown feature,
    intended to strengthen the boundary between UID 0 and the kernel.
    When enabled, various pieces of kernel functionality are restricted.
    Applications that rely on low-level access to either hardware or the
    kernel may cease working as a result - therefore this should not be
    enabled without appropriate evaluation beforehand.

    The majority of mainstream distributions have been carrying variants
    of this patchset for many years now, so there's value in providing an
    upstream implementation. It doesn't meet every distribution
    requirement, but gets us much closer to not requiring external
    patches.

  There are two major changes since this was last proposed for mainline:

   - Separating lockdown from EFI secure boot. Background discussion is
     covered here: https://lwn.net/Articles/751061/

   - Implementation as an LSM, with a default stackable lockdown LSM
     module. This allows the lockdown feature to be policy-driven,
     rather than encoding an implicit policy within the mechanism.

  The new locked_down LSM hook is provided to allow LSMs to make a
  policy decision around whether kernel functionality that would allow
  tampering with or examining the runtime state of the kernel should be
  permitted.

  The included lockdown LSM provides an implementation with a simple
  policy intended for general purpose use. This policy provides a coarse
  level of granularity, controllable via the kernel command line:

    lockdown={integrity|confidentiality}

  Enable the kernel lockdown feature. If set to integrity, kernel features
  that allow userland to modify the running kernel are disabled. If set to
  confidentiality, kernel features that allow userland to extract
  confidential information from the kernel are also disabled.

  This may also be controlled via /sys/kernel/security/lockdown and
  overridden by kernel configuration.

  New or existing LSMs may implement finer-grained controls of the
  lockdown features. Refer to the lockdown_reason documentation in
  include/linux/security.h for details.

  The lockdown feature has had significant design feedback and review
  across many subsystems. This code has been in linux-next for some
  weeks, with a few fixes applied along the way.

  Stephen Rothwell noted that commit 9d1f8be5cf42 ("bpf: Restrict bpf
  when kernel lockdown is in confidentiality mode") is missing a
  Signed-off-by from its author. Matthew responded that he is providing
  this under category (c) of the DCO"

* 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (31 commits)
  kexec: Fix file verification on S390
  security: constify some arrays in lockdown LSM
  lockdown: Print current-&gt;comm in restriction messages
  efi: Restrict efivar_ssdt_load when the kernel is locked down
  tracefs: Restrict tracefs when the kernel is locked down
  debugfs: Restrict debugfs when the kernel is locked down
  kexec: Allow kexec_file() with appropriate IMA policy when locked down
  lockdown: Lock down perf when in confidentiality mode
  bpf: Restrict bpf when kernel lockdown is in confidentiality mode
  lockdown: Lock down tracing and perf kprobes when in confidentiality mode
  lockdown: Lock down /proc/kcore
  x86/mmiotrace: Lock down the testmmiotrace module
  lockdown: Lock down module params that specify hardware parameters (eg. ioport)
  lockdown: Lock down TIOCSSERIAL
  lockdown: Prohibit PCMCIA CIS storage when the kernel is locked down
  acpi: Disable ACPI table override if the kernel is locked down
  acpi: Ignore acpi_rsdp kernel param when the kernel has been locked down
  ACPI: Limit access to custom_method when the kernel is locked down
  x86/msr: Restrict MSR access when the kernel is locked down
  x86: Lock down IO port access when the kernel is locked down
  ...
</content>
</entry>
<entry>
<title>bpf: Fix bpf_event_output re-entry issue</title>
<updated>2019-09-27T09:24:29Z</updated>
<author>
<name>Allan Zhang</name>
<email>allanzhang@google.com</email>
</author>
<published>2019-09-25T23:43:12Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=768fb61fcc13b2acaca758275d54c09a65e2968b'/>
<id>urn:sha1:768fb61fcc13b2acaca758275d54c09a65e2968b</id>
<content type='text'>
A BPF_PROG_TYPE_SOCK_OPS program can re-enter bpf_event_output because
it can be called from both atomic and non-atomic contexts, and we don't
have bpf_prog_active to prevent that from happening.

This patch enables 3 levels of nesting to support normal, irq and nmi
contexts.

We can easily reproduce the issue by running netperf crr mode with 100
flows and 10 threads from the netperf client side.

Here is the whole stack dump:

[  515.228898] WARNING: CPU: 20 PID: 14686 at kernel/trace/bpf_trace.c:549 bpf_event_output+0x1f9/0x220
[  515.228903] CPU: 20 PID: 14686 Comm: tcp_crr Tainted: G        W        4.15.0-smp-fixpanic #44
[  515.228904] Hardware name: Intel TBG,ICH10/Ikaria_QC_1b, BIOS 1.22.0 06/04/2018
[  515.228905] RIP: 0010:bpf_event_output+0x1f9/0x220
[  515.228906] RSP: 0018:ffff9a57ffc03938 EFLAGS: 00010246
[  515.228907] RAX: 0000000000000012 RBX: 0000000000000001 RCX: 0000000000000000
[  515.228907] RDX: 0000000000000000 RSI: 0000000000000096 RDI: ffffffff836b0f80
[  515.228908] RBP: ffff9a57ffc039c8 R08: 0000000000000004 R09: 0000000000000012
[  515.228908] R10: ffff9a57ffc1de40 R11: 0000000000000000 R12: 0000000000000002
[  515.228909] R13: ffff9a57e13bae00 R14: 00000000ffffffff R15: ffff9a57ffc1e2c0
[  515.228910] FS:  00007f5a3e6ec700(0000) GS:ffff9a57ffc00000(0000) knlGS:0000000000000000
[  515.228910] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  515.228911] CR2: 0000537082664fff CR3: 000000061fed6002 CR4: 00000000000226f0
[  515.228911] Call Trace:
[  515.228913]  &lt;IRQ&gt;
[  515.228919]  [&lt;ffffffff82c6c6cb&gt;] bpf_sockopt_event_output+0x3b/0x50
[  515.228923]  [&lt;ffffffff8265daee&gt;] ? bpf_ktime_get_ns+0xe/0x10
[  515.228927]  [&lt;ffffffff8266fda5&gt;] ? __cgroup_bpf_run_filter_sock_ops+0x85/0x100
[  515.228930]  [&lt;ffffffff82cf90a5&gt;] ? tcp_init_transfer+0x125/0x150
[  515.228933]  [&lt;ffffffff82cf9159&gt;] ? tcp_finish_connect+0x89/0x110
[  515.228936]  [&lt;ffffffff82cf98e4&gt;] ? tcp_rcv_state_process+0x704/0x1010
[  515.228939]  [&lt;ffffffff82c6e263&gt;] ? sk_filter_trim_cap+0x53/0x2a0
[  515.228942]  [&lt;ffffffff82d90d1f&gt;] ? tcp_v6_inbound_md5_hash+0x6f/0x1d0
[  515.228945]  [&lt;ffffffff82d92160&gt;] ? tcp_v6_do_rcv+0x1c0/0x460
[  515.228947]  [&lt;ffffffff82d93558&gt;] ? tcp_v6_rcv+0x9f8/0xb30
[  515.228951]  [&lt;ffffffff82d737c0&gt;] ? ip6_route_input+0x190/0x220
[  515.228955]  [&lt;ffffffff82d5f7ad&gt;] ? ip6_protocol_deliver_rcu+0x6d/0x450
[  515.228958]  [&lt;ffffffff82d60246&gt;] ? ip6_rcv_finish+0xb6/0x170
[  515.228961]  [&lt;ffffffff82d5fb90&gt;] ? ip6_protocol_deliver_rcu+0x450/0x450
[  515.228963]  [&lt;ffffffff82d60361&gt;] ? ipv6_rcv+0x61/0xe0
[  515.228966]  [&lt;ffffffff82d60190&gt;] ? ipv6_list_rcv+0x330/0x330
[  515.228969]  [&lt;ffffffff82c4976b&gt;] ? __netif_receive_skb_one_core+0x5b/0xa0
[  515.228972]  [&lt;ffffffff82c497d1&gt;] ? __netif_receive_skb+0x21/0x70
[  515.228975]  [&lt;ffffffff82c4a8d2&gt;] ? process_backlog+0xb2/0x150
[  515.228978]  [&lt;ffffffff82c4aadf&gt;] ? net_rx_action+0x16f/0x410
[  515.228982]  [&lt;ffffffff830000dd&gt;] ? __do_softirq+0xdd/0x305
[  515.228986]  [&lt;ffffffff8252cfdc&gt;] ? irq_exit+0x9c/0xb0
[  515.228989]  [&lt;ffffffff82e02de5&gt;] ? smp_call_function_single_interrupt+0x65/0x120
[  515.228991]  [&lt;ffffffff82e020e1&gt;] ? call_function_single_interrupt+0x81/0x90
[  515.228992]  &lt;/IRQ&gt;
[  515.228996]  [&lt;ffffffff82a11ff0&gt;] ? io_serial_in+0x20/0x20
[  515.229000]  [&lt;ffffffff8259c040&gt;] ? console_unlock+0x230/0x490
[  515.229003]  [&lt;ffffffff8259cbaa&gt;] ? vprintk_emit+0x26a/0x2a0
[  515.229006]  [&lt;ffffffff8259cbff&gt;] ? vprintk_default+0x1f/0x30
[  515.229008]  [&lt;ffffffff8259d9f5&gt;] ? vprintk_func+0x35/0x70
[  515.229011]  [&lt;ffffffff8259d4bb&gt;] ? printk+0x50/0x66
[  515.229013]  [&lt;ffffffff82637637&gt;] ? bpf_event_output+0xb7/0x220
[  515.229016]  [&lt;ffffffff82c6c6cb&gt;] ? bpf_sockopt_event_output+0x3b/0x50
[  515.229019]  [&lt;ffffffff8265daee&gt;] ? bpf_ktime_get_ns+0xe/0x10
[  515.229023]  [&lt;ffffffff82c29e87&gt;] ? release_sock+0x97/0xb0
[  515.229026]  [&lt;ffffffff82ce9d6a&gt;] ? tcp_recvmsg+0x31a/0xda0
[  515.229029]  [&lt;ffffffff8266fda5&gt;] ? __cgroup_bpf_run_filter_sock_ops+0x85/0x100
[  515.229032]  [&lt;ffffffff82ce77c1&gt;] ? tcp_set_state+0x191/0x1b0
[  515.229035]  [&lt;ffffffff82ced10e&gt;] ? tcp_disconnect+0x2e/0x600
[  515.229038]  [&lt;ffffffff82cecbbb&gt;] ? tcp_close+0x3eb/0x460
[  515.229040]  [&lt;ffffffff82d21082&gt;] ? inet_release+0x42/0x70
[  515.229043]  [&lt;ffffffff82d58809&gt;] ? inet6_release+0x39/0x50
[  515.229046]  [&lt;ffffffff82c1f32d&gt;] ? __sock_release+0x4d/0xd0
[  515.229049]  [&lt;ffffffff82c1f3e5&gt;] ? sock_close+0x15/0x20
[  515.229052]  [&lt;ffffffff8273b517&gt;] ? __fput+0xe7/0x1f0
[  515.229055]  [&lt;ffffffff8273b66e&gt;] ? ____fput+0xe/0x10
[  515.229058]  [&lt;ffffffff82547bf2&gt;] ? task_work_run+0x82/0xb0
[  515.229061]  [&lt;ffffffff824086df&gt;] ? exit_to_usermode_loop+0x7e/0x11f
[  515.229064]  [&lt;ffffffff82408171&gt;] ? do_syscall_64+0x111/0x130
[  515.229067]  [&lt;ffffffff82e0007c&gt;] ? entry_SYSCALL_64_after_hwframe+0x3d/0xa2

Fixes: a5a3a828cd00 ("bpf: add perf event notificaton support for sock_ops")
Signed-off-by: Allan Zhang &lt;allanzhang@google.com&gt;
Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Reviewed-by: Stanislav Fomichev &lt;sdf@google.com&gt;
Reviewed-by: Eric Dumazet &lt;edumazet@google.com&gt;
Acked-by: John Fastabend &lt;john.fastabend@gmail.com&gt;
Link: https://lore.kernel.org/bpf/20190925234312.94063-2-allanzhang@google.com
</content>
</entry>
<entry>
<title>bpf: Restrict bpf when kernel lockdown is in confidentiality mode</title>
<updated>2019-08-20T04:54:16Z</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2019-08-20T00:17:59Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=9d1f8be5cf42b497a3bddf1d523f2bb142e9318c'/>
<id>urn:sha1:9d1f8be5cf42b497a3bddf1d523f2bb142e9318c</id>
<content type='text'>
bpf_read() and bpf_read_str() could potentially be abused to (eg) allow
private keys in kernel memory to be leaked. Disable them if the kernel
has been locked down in confidentiality mode.

Suggested-by: Alexei Starovoitov &lt;alexei.starovoitov@gmail.com&gt;
Signed-off-by: Matthew Garrett &lt;mjg59@google.com&gt;
Reviewed-by: Kees Cook &lt;keescook@chromium.org&gt;
cc: netdev@vger.kernel.org
cc: Chun-Yi Lee &lt;jlee@suse.com&gt;
cc: Alexei Starovoitov &lt;alexei.starovoitov@gmail.com&gt;
Cc: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Signed-off-by: James Morris &lt;jmorris@namei.org&gt;
</content>
</entry>
<entry>
<title>bpf: fix compiler warning with CONFIG_MODULES=n</title>
<updated>2019-06-26T12:44:07Z</updated>
<author>
<name>Yonghong Song</name>
<email>yhs@fb.com</email>
</author>
<published>2019-06-26T00:35:03Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=9db1ff0a415c7de8eb67df5b2c56ac409ccefc37'/>
<id>urn:sha1:9db1ff0a415c7de8eb67df5b2c56ac409ccefc37</id>
<content type='text'>
With CONFIG_MODULES=n, the following compiler warning occurs:
  /data/users/yhs/work/net-next/kernel/trace/bpf_trace.c:605:13: warning:
      ‘do_bpf_send_signal’ defined but not used [-Wunused-function]
  static void do_bpf_send_signal(struct irq_work *entry)

The __init function send_signal_irq_work_init(), which calls
do_bpf_send_signal(), is defined under CONFIG_MODULES. Hence, when
CONFIG_MODULES=n, nothing calls the static function
do_bpf_send_signal(), and the compiler warns.

The init function send_signal_irq_work_init() should work without
CONFIG_MODULES. Moving it out of the CONFIG_MODULES code section fixes
the compiler warning, and also makes the bpf_send_signal() helper work
without CONFIG_MODULES.

Fixes: 8b401f9ed244 ("bpf: implement bpf_send_signal() helper")
Reported-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Signed-off-by: Yonghong Song &lt;yhs@fb.com&gt;
Acked-by: Song Liu &lt;songliubraving@fb.com&gt;
Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
</content>
</entry>
<entry>
<title>Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net</title>
<updated>2019-06-18T03:20:36Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@davemloft.net</email>
</author>
<published>2019-06-18T02:48:13Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=13091aa30535b719e269f20a7bc34002bf5afae5'/>
<id>urn:sha1:13091aa30535b719e269f20a7bc34002bf5afae5</id>
<content type='text'>
Honestly all the conflicts were simple overlapping changes,
nothing really interesting to report.

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>bpf: fix nested bpf tracepoints with per-cpu data</title>
<updated>2019-06-15T23:33:35Z</updated>
<author>
<name>Matt Mullins</name>
<email>mmullins@fb.com</email>
</author>
<published>2019-06-11T21:53:04Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=9594dc3c7e71b9f52bee1d7852eb3d4e3aea9e99'/>
<id>urn:sha1:9594dc3c7e71b9f52bee1d7852eb3d4e3aea9e99</id>
<content type='text'>
BPF_PROG_TYPE_RAW_TRACEPOINTs can be executed nested on the same CPU, as
they do not increment bpf_prog_active while executing.

This enables three levels of nesting, to support
  - a kprobe or raw tp or perf event,
  - another one of the above that irq context happens to call, and
  - another one in nmi context
(at most one of which may be a kprobe or perf event).

Fixes: 20b9d7ac4852 ("bpf: avoid excessive stack usage for perf_sample_data")
Signed-off-by: Matt Mullins &lt;mmullins@fb.com&gt;
Acked-by: Andrii Nakryiko &lt;andriin@fb.com&gt;
Acked-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;
</content>
</entry>
<entry>
<title>bpf: tracing: properly use bpf_prog_array api</title>
<updated>2019-05-29T13:17:35Z</updated>
<author>
<name>Stanislav Fomichev</name>
<email>sdf@google.com</email>
</author>
<published>2019-05-28T21:14:44Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=e672db03ab0e43e41ab6f8b2156a10d6e40f243d'/>
<id>urn:sha1:e672db03ab0e43e41ab6f8b2156a10d6e40f243d</id>
<content type='text'>
Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected to obtain array pointer
under mutex.

Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Signed-off-by: Stanislav Fomichev &lt;sdf@google.com&gt;
Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
</content>
</entry>
<entry>
<title>bpf: check signal validity in nmi for bpf_send_signal() helper</title>
<updated>2019-05-28T08:51:33Z</updated>
<author>
<name>Yonghong Song</name>
<email>yhs@fb.com</email>
</author>
<published>2019-05-25T18:57:53Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=e1afb70252a8614e1ef7aec05ff1b84fd324b782'/>
<id>urn:sha1:e1afb70252a8614e1ef7aec05ff1b84fd324b782</id>
<content type='text'>
Commit 8b401f9ed244 ("bpf: implement bpf_send_signal() helper")
introduced the bpf_send_signal() helper. If the context is nmi,
the signal-sending work needs to be deferred to irq_work. If the
signal is invalid, the error will surface in irq_work and won't be
propagated to the user.

This patch adds an early check in the helper itself to notify the
user of an invalid signal, as suggested by Daniel.

Suggested-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Signed-off-by: Yonghong Song &lt;yhs@fb.com&gt;
Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
</content>
</entry>
<entry>
<title>bpf: implement bpf_send_signal() helper</title>
<updated>2019-05-24T21:26:47Z</updated>
<author>
<name>Yonghong Song</name>
<email>yhs@fb.com</email>
</author>
<published>2019-05-23T21:47:45Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8b401f9ed2441ad9e219953927a842d24ed051fc'/>
<id>urn:sha1:8b401f9ed2441ad9e219953927a842d24ed051fc</id>
<content type='text'>
This patch tries to solve the following specific use case.

Currently, a bpf program can already collect stack traces
through the kernel function get_perf_callchain()
when certain events happen (e.g., a cache miss counter or
cpu clock counter overflows). But such stack traces are
not enough for jitted programs, e.g., hhvm (jitted PHP).
To get the real stack trace, the jit engine's internal data
structures need to be traversed to find the real user functions.

The bpf program itself may not be the best place to traverse
the jit engine, as the traversal logic could be complex and
it is not a stable interface either.

Instead, hhvm implements a signal handler, e.g. for SIGALRM,
and a set of program locations at which it can dump stack
traces. When it receives a signal, it will dump the stack at
the next such program location.

Such a mechanism can be implemented in the following way:
  . a perf ring buffer is created between bpf program
    and tracing app.
  . once a particular event happens, bpf program writes
    to the ring buffer and the tracing app gets notified.
  . the tracing app sends a signal SIGALARM to the hhvm.

But this method could introduce large delays and skew the
profiling results.

This patch implements the bpf_send_signal() helper to send a
signal to hhvm in real time, resulting in the intended stack traces.

Acked-by: Andrii Nakryiko &lt;andriin@fb.com&gt;
Signed-off-by: Yonghong Song &lt;yhs@fb.com&gt;
Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
</content>
</entry>
</feed>
