From: Masami Hiramatsu <mhiramat@kernel.org>
To: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>,
mark.rutland@arm.com, broonie@kernel.org, jthierry@redhat.com,
catalin.marinas@arm.com, will@kernel.org,
linux-arm-kernel@lists.infradead.org,
live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
Masami Hiramatsu <mhiramat@kernel.org>
Subject: Re: [RFC PATCH v1 0/4] arm64: Implement stack trace reliability checks
Date: Mon, 5 Apr 2021 22:24:36 +0900 [thread overview]
Message-ID: <20210405222436.4fda9a930676d95e862744af@kernel.org> (raw)
In-Reply-To: <bd13a433-c060-c501-8e76-5e856d77a225@linux.microsoft.com>
Hi Madhavan,
On Sat, 3 Apr 2021 22:29:12 -0500
"Madhavan T. Venkataraman" <madvenka@linux.microsoft.com> wrote:
> >> Check for kretprobe
> >> ===================
> >>
> >> For functions with a kretprobe set up, probe code executes on entry
> >> to the function and replaces the return address in the stack frame with a
> >> kretprobe trampoline. Whenever the function returns, control is
> >> transferred to the trampoline. The trampoline eventually returns to the
> >> original return address.
> >>
> >> A stack trace taken while executing in the function (or in functions that
> >> get called from the function) will not show the original return address.
> >> Similarly, a stack trace taken while executing in the trampoline itself
> >> (and functions that get called from the trampoline) will not show the
> >> original return address. This means that the caller of the probed function
> >> will not show. This makes the stack trace unreliable.
> >>
> >> Add the kretprobe trampoline to special_functions[].
> >>
> >> FYI, each task contains a task->kretprobe_instances list that can
> >> theoretically be consulted to find the original return address. But I am
> >> not entirely sure how to safely traverse that list for stack traces
> >> not on the current process. So, I have taken the easy way out.
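The special_functions[] check described above can be illustrated with a small userspace sketch. The names and addresses here are hypothetical stand-ins, not the kernel's actual tables; it only shows the shape of the range test the unwinder would apply to each frame PC:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for special_functions[]: address ranges
 * whose presence in a stack trace makes it unreliable. */
struct special_function {
	unsigned long start;
	unsigned long end;
};

static const struct special_function special_functions[] = {
	{ 0x1000, 0x1040 },	/* e.g. kretprobe_trampoline */
	{ 0x2000, 0x2080 },	/* e.g. another special entry */
};

/* Return true if pc falls inside any special function, in which
 * case the unwinder should mark the whole trace unreliable. */
static bool pc_is_special(unsigned long pc)
{
	size_t i;

	for (i = 0; i < sizeof(special_functions) /
			sizeof(special_functions[0]); i++) {
		if (pc >= special_functions[i].start &&
		    pc < special_functions[i].end)
			return true;
	}
	return false;
}
```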
> >
> > For kretprobes, unwinding from the trampoline or kretprobe handler
> > shouldn't be a reliability concern for live patching, for similar
> > reasons as above.
> >
>
> Please see previous answer.
>
> > Otherwise, when unwinding from a blocked task which has
> > 'kretprobe_trampoline' on the stack, the unwinder needs a way to get the
> > original return address. Masami has been working on an interface to
> > make that possible for x86. I assume something similar could be done
> > for arm64.
> >
>
> OK. Until that is available, this case needs to be addressed.
Actually, I've done that on arm64 :) See the patch below.
(I also have similar code for arm32. What I'm considering now is how
to unify the x86/arm/arm64 implementations of kretprobe_find_ret_addr(),
since they are very similar.)
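For reference, the fp-ordering walk used by the patch below can be sketched in plain C. This is a simplified userspace model with hypothetical names: the real per-task kretprobe_instances list is an llist chain, but the ordering check is the same:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of a kretprobe instance: the stack pointer saved
 * at function entry (ri->fp) and the original return address that
 * the trampoline replaced on the stack. */
struct kr_instance {
	unsigned long fp;		/* callee's entry stack pointer */
	unsigned long ret_addr;		/* original return address */
	struct kr_instance *next;	/* most-recent-first list */
};

/*
 * Find the original return address for a frame whose real frame
 * pointer is @fp.  Instances with recorded fp <= the real fp belong
 * to deeper frames and are skipped; the first instance with
 * ri->fp > fp is the match.  @pos is a cursor so successive frames
 * resume the walk where the previous one stopped, mirroring
 * frame->kr_cur in the patch.
 */
static unsigned long find_ret_addr(struct kr_instance *head,
				   unsigned long fp,
				   struct kr_instance **pos)
{
	struct kr_instance *ri;

	for (ri = *pos ? *pos : head; ri; ri = ri->next) {
		if (ri->fp > fp) {
			*pos = ri->next;
			return ri->ret_addr;
		}
	}
	return 0;	/* no instance matches this frame */
}
```

In the kernel version the cursor lives in struct stackframe and the list hangs off task_struct, but the `ri->fp <= fp` skip condition is the same one in the do/while loop below.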
This applies on top of my x86 series v5:
https://lore.kernel.org/bpf/161676170650.330141.6214727134265514123.stgit@devnote2/
Thank you,
From 947cf6cf1fd4154edd5533d18c2f8dfedc8d993d Mon Sep 17 00:00:00 2001
From: Masami Hiramatsu <mhiramat@kernel.org>
Date: Sat, 20 Mar 2021 00:14:29 +0900
Subject: [PATCH] arm64: Recover kretprobe modified return address in
stacktrace
Since a kretprobe replaces the function's return address on the
stack with the address of kretprobe_trampoline, the arm64 unwinder
shows the trampoline instead of the correct return address.

Find the correct return address in the per-task kretprobe_instances
list and verify that it lies between the caller's fp and the
callee's fp.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
arch/arm64/include/asm/stacktrace.h | 2 ++
arch/arm64/kernel/probes/kprobes.c | 28 ++++++++++++++++++++++++++++
arch/arm64/kernel/stacktrace.c | 3 +++
kernel/kprobes.c | 8 ++++----
4 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index eb29b1fe8255..50ebc9e9dba9 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -9,6 +9,7 @@
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/types.h>
+#include <linux/llist.h>
#include <asm/memory.h>
#include <asm/ptrace.h>
@@ -59,6 +60,7 @@ struct stackframe {
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
int graph;
#endif
+ struct llist_node *kr_cur;
};
extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index fce681fdfce6..204e475cbff3 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -410,6 +410,34 @@ int __init arch_populate_kprobe_blacklist(void)
return ret;
}
+unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
+ struct llist_node **cur);
+
+unsigned long kretprobe_find_ret_addr(struct task_struct *tsk,
+ void *fp, struct llist_node **cur)
+{
+ struct kretprobe_instance *ri;
+ unsigned long ret;
+
+ do {
+ ret = __kretprobe_find_ret_addr(tsk, cur);
+ if (!ret)
+ return ret;
+ ri = container_of(*cur, struct kretprobe_instance, llist);
+ /*
+ * Since arm64 stores the stack pointer of the entry of target
+ * function (callee) to ri->fp, the given real @fp must be
+ * smaller than ri->fp, but bigger than the previous ri->fp.
+ *
+ * callee sp (prev ri->fp)
+ * fp (and *saved_lr)
+ * caller sp (ri->fp)
+ */
+ } while (ri->fp <= fp);
+
+ return ret;
+}
+
void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
{
return (void *)kretprobe_trampoline_handler(regs, (void *)kernel_stack_pointer(regs));
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ad20981dfda4..956486d4ac10 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -105,6 +105,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
frame->pc = ret_stack->ret;
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+ if (is_kretprobe_trampoline(frame->pc))
+ frame->pc = kretprobe_find_ret_addr(tsk, (void *)fp, &frame->kr_cur);
frame->pc = ptrauth_strip_insn_pac(frame->pc);
@@ -199,6 +201,7 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
{
struct stackframe frame;
+ memset(&frame, 0, sizeof(frame));
if (regs)
start_backtrace(&frame, regs->regs[29], regs->pc);
else if (task == current)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 4ce3e6f5d28d..12677a463561 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1859,8 +1859,8 @@ static struct notifier_block kprobe_exceptions_nb = {
#ifdef CONFIG_KRETPROBES
/* This assumes the tsk is current or the task which is not running. */
-static unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
- struct llist_node **cur)
+unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
+ struct llist_node **cur)
{
struct kretprobe_instance *ri = NULL;
struct llist_node *node = *cur;
@@ -1882,8 +1882,8 @@ static unsigned long __kretprobe_find_ret_addr(struct task_struct *tsk,
}
NOKPROBE_SYMBOL(__kretprobe_find_ret_addr);
-unsigned long kretprobe_find_ret_addr(struct task_struct *tsk, void *fp,
- struct llist_node **cur)
+unsigned long __weak kretprobe_find_ret_addr(struct task_struct *tsk,
+ void *fp, struct llist_node **cur)
{
struct kretprobe_instance *ri = NULL;
unsigned long ret;
--
2.25.1
--
Masami Hiramatsu <mhiramat@kernel.org>