path: root/kernel/trace
author	Alexei Starovoitov <ast@kernel.org>	2024-04-04 13:08:01 -0700
committer	Alexei Starovoitov <ast@kernel.org>	2024-04-04 13:08:01 -0700
commit	d82c045f9dfde6b9ea220d7f8310c98210dfc8cb (patch)
tree	05376a14c790f914df2197a0431c3608ac025e66 /kernel/trace
parent	bpf, riscv: Implement bpf_addr_space_cast instruction (diff)
parent	bpf: inline bpf_get_branch_snapshot() helper (diff)
Merge branch 'inline-bpf_get_branch_snapshot-bpf-helper'
Andrii Nakryiko says:

====================
Inline bpf_get_branch_snapshot() BPF helper

Implement inlining of the bpf_get_branch_snapshot() BPF helper using a
generic BPF assembly approach. This makes it possible to reduce LBR record
usage right before LBR records are captured from inside the BPF program.

See the v1 cover letter ([0]) for some visual examples. I dropped them from
v2 because multiple independent changes are landing and being reviewed, all
of which remove different parts of LBR record waste, so presenting the final
state of LBR "waste" gets complicated until all of the pieces land.

  [0] https://lore.kernel.org/bpf/20240321180501.734779-1-andrii@kernel.org/

v2->v3:
- fix the BPF_MUL instruction definition;

v1->v2:
- split the inlining of bpf_get_smp_processor_id() out into a separate
  patch set implementing an internal per-CPU BPF instruction;
- add efficient divide-by-24-through-multiplication logic, with comments
  explaining the idea behind it; this way the inlined version of
  bpf_get_branch_snapshot() has no compromises compared to the non-inlined
  version of the helper (Alexei).
====================

Link: https://lore.kernel.org/r/20240404002640.1774210-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
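The "divide-by-24 through multiplication" mentioned in the v1->v2 notes is
worth a standalone illustration: each struct perf_branch_entry is 24 bytes,
so converting a buffer size into an entry count needs a division by 24, and
the inlined BPF assembly avoids an actual division instruction. Below is a
minimal userspace C sketch of one standard reciprocal-multiplication scheme
(the constant and shift follow the classic unsigned divide-by-3
construction, with 3 extra shift bits for the factor of 8; the test harness
is illustrative and not part of the patch set):

#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical demo of divide-by-24 via multiplication, not the patch
 * itself.  For any 32-bit n, n / 3 == (n * 0xaaaaaaabULL) >> 33, because
 * 0xaaaaaaab == (2^33 + 1) / 3; dividing by the remaining factor of 8 is
 * a further right shift by 3, i.e. >> 36 in total.
 */
static uint32_t div24(uint32_t n)
{
	return (uint32_t)(((uint64_t)n * 0xaaaaaaabULL) >> 36);
}

int main(void)
{
	/* Spot-check the identity across the 32-bit range. */
	for (uint64_t n = 0; n <= UINT32_MAX; n += 24251)
		assert(div24((uint32_t)n) == (uint32_t)(n / 24));
	return 0;
}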
Diffstat (limited to 'kernel/trace')
-rw-r--r--	kernel/trace/bpf_trace.c	4
1 file changed, 0 insertions(+), 4 deletions(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 6d0c95638e1b..afb232b1d7c2 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1188,9 +1188,6 @@ static const struct bpf_func_proto bpf_get_attach_cookie_proto_tracing = {
 BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
 {
-#ifndef CONFIG_X86
-	return -ENOENT;
-#else
 	static const u32 br_entry_size = sizeof(struct perf_branch_entry);
 	u32 entry_cnt = size / br_entry_size;
@@ -1203,7 +1200,6 @@ BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
 		return -ENOENT;
 	return entry_cnt * br_entry_size;
-#endif
 }
 static const struct bpf_func_proto bpf_get_branch_snapshot_proto = {
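For context on what is being inlined, here is roughly how a BPF program
consumes the helper. This is a hypothetical libbpf-style sketch; the attach
point, buffer size, and program name are assumptions, not taken from this
commit:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define MAX_LBR_ENTRIES 32

/* Global scratch buffer for captured LBR records. */
static struct perf_branch_entry entries[MAX_LBR_ENTRIES];

SEC("kprobe/do_nanosleep")
int snapshot_lbr(void *ctx)
{
	long sz;

	/* Returns bytes written (a multiple of 24), or -ENOENT/-EINVAL. */
	sz = bpf_get_branch_snapshot(entries, sizeof(entries), 0);
	if (sz < 0)
		return 0;

	bpf_printk("captured %ld branch entries",
		   sz / (long)sizeof(struct perf_branch_entry));
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

The helper returns the number of bytes written into the buffer, which is why
the kernel code in the diff above multiplies entry_cnt back by br_entry_size.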