From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.9 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, NICE_REPLY_A,SPF_HELO_NONE,SPF_PASS,USER_AGENT_SANE_1,USER_IN_DEF_DKIM_WL autolearn=no autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27257C4338F for ; Thu, 29 Jul 2021 17:09:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0FF5760C40 for ; Thu, 29 Jul 2021 17:09:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229874AbhG2RJ2 (ORCPT ); Thu, 29 Jul 2021 13:09:28 -0400 Received: from linux.microsoft.com ([13.77.154.182]:33970 "EHLO linux.microsoft.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229556AbhG2RJZ (ORCPT ); Thu, 29 Jul 2021 13:09:25 -0400 Received: from [192.168.254.32] (unknown [47.187.212.181]) by linux.microsoft.com (Postfix) with ESMTPSA id 6377220B36E8; Thu, 29 Jul 2021 10:09:21 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 6377220B36E8 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1627578562; bh=14nH6SQktOnjxmNI3AnyHw2tOfTbGWH5RglX6lPMTP4=; h=Subject:To:Cc:References:From:Date:In-Reply-To:From; b=ioDYcPWXlMj9NO1PLenA1nVcqxxBEoj7elW5y3VLz0nS9e2BfvsjCEOzDiRYiFkef w2P/pzYBU4xb7+lXEYmaKYJhNxkzQ/GNqcPgv95y+7O07oECWuDuSiyd4E7RyczwXA KEPKKVQrkk6J2lfKhyHE1rt/PxX8RrwxHqMsZ4SM= Subject: Re: [RFC PATCH v6 3/3] arm64: Create a list of SYM_CODE functions, check return PC against list To: Mark Rutland Cc: broonie@kernel.org, jpoimboe@redhat.com, ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com, catalin.marinas@arm.com, will@kernel.org, 
jmorris@namei.org, pasha.tatashin@soleen.com, jthierry@redhat.com,
 linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <3f2aab69a35c243c5e97f47c4ad84046355f5b90>
 <20210630223356.58714-1-madvenka@linux.microsoft.com>
 <20210630223356.58714-4-madvenka@linux.microsoft.com>
 <20210728172523.GB47345@C02TD0UTHF1T.local>
 <20210729154804.GA59940@C02TD0UTHF1T.local>
From: "Madhavan T. Venkataraman"
Message-ID:
Date: Thu, 29 Jul 2021 12:09:20 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <20210729154804.GA59940@C02TD0UTHF1T.local>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 7/29/21 10:48 AM, Mark Rutland wrote:
> On Thu, Jul 29, 2021 at 09:06:26AM -0500, Madhavan T. Venkataraman wrote:
>> Responses inline...
>>
>> On 7/28/21 12:25 PM, Mark Rutland wrote:
>>> On Wed, Jun 30, 2021 at 05:33:56PM -0500, madvenka@linux.microsoft.com wrote:
>>>> From: "Madhavan T. Venkataraman"
>>>>
>>>> ... ...
>>>>
>>>> +static struct code_range *sym_code_functions;
>>>> +static int num_sym_code_functions;
>>>> +
>>>> +int __init init_sym_code_functions(void)
>>>> +{
>>>> +	size_t size;
>>>> +
>>>> +	size = (unsigned long)__sym_code_functions_end -
>>>> +	       (unsigned long)__sym_code_functions_start;
>>>> +
>>>> +	sym_code_functions = kmalloc(size, GFP_KERNEL);
>>>> +	if (!sym_code_functions)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	memcpy(sym_code_functions, __sym_code_functions_start, size);
>>>> +	/* Update num_sym_code_functions after copying sym_code_functions. */
>>>> +	smp_mb();
>>>> +	num_sym_code_functions = size / sizeof(struct code_range);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +early_initcall(init_sym_code_functions);
>>>
>>> What's the point of copying this, given we don't even sort it?
>>>
>>> If we need to keep it around, it would be nicer to leave it where the
>>> linker put it, but make it rodata or ro_after_init.
>>>
>>
>> I was planning to sort it for performance. I have a comment to that effect.
>> But I can remove the copy and retain the info in linker data.
>
> I think for now it's better to place it in .rodata. If we need to sort
> this, we can rework that later, preferably sorting at compile time as
> with extable entries.
>
> That way this is *always* in a usable state, and there's a much lower
> risk of this being corrupted by a stray write.
>

OK.

>>>> +	/*
>>>> +	 * Check the return PC against sym_code_functions[]. If there is a
>>>> +	 * match, then consider the stack frame unreliable. These functions
>>>> +	 * contain low-level code where the frame pointer and/or the return
>>>> +	 * address register cannot be relied upon. This addresses the following
>>>> +	 * situations:
>>>> +	 *
>>>> +	 * - Exception handlers and entry assembly
>>>> +	 * - Trampoline assembly (e.g., ftrace, kprobes)
>>>> +	 * - Hypervisor-related assembly
>>>> +	 * - Hibernation-related assembly
>>>> +	 * - CPU start-stop, suspend-resume assembly
>>>> +	 * - Kernel relocation assembly
>>>> +	 *
>>>> +	 * Some special cases covered by sym_code_functions[] deserve a mention
>>>> +	 * here:
>>>> +	 *
>>>> +	 * - All EL1 interrupt and exception stack traces will be considered
>>>> +	 *   unreliable. This is the correct behavior as interrupts and
>>>> +	 *   exceptions can happen on any instruction including ones in the
>>>> +	 *   frame pointer prolog and epilog. Unless stack metadata is
>>>> +	 *   available so the unwinder can unwind through these special
>>>> +	 *   cases, such stack traces will be considered unreliable.
>>>
>>> As mentioned previously, we *can* reliably unwind precisely one step
>>> across an exception boundary, as we can be certain of the PC value at
>>> the time the exception was taken, but past this we can't be certain
>>> whether the LR is legitimate.
>>>
>>> I'd like that we capture that precisely in the unwinder, and I'm
>>> currently reworking the entry assembly to make that possible.
>>>
>>>> +	 *
>>>> +	 * - A task can get preempted at the end of an interrupt. Stack
>>>> +	 *   traces of preempted tasks will show the interrupt frame in the
>>>> +	 *   stack trace and will be considered unreliable.
>>>> +	 *
>>>> +	 * - Breakpoints are exceptions. So, all stack traces in the break
>>>> +	 *   point handler (including probes) will be considered unreliable.
>>>> +	 *
>>>> +	 * - All of the ftrace entry trampolines are considered unreliable.
>>>> +	 *   So, all stack traces taken from tracer functions will be
>>>> +	 *   considered unreliable.
>>>> +	 *
>>>> +	 * - The Function Graph Tracer return trampoline (return_to_handler)
>>>> +	 *   and the Kretprobe return trampoline (kretprobe_trampoline) are
>>>> +	 *   also considered unreliable.
>>>
>>> We should be able to unwind these reliably if we specifically identify
>>> them. I think we need a two-step check here; we should assume that
>>> SYM_CODE() is unreliable by default, but in specific cases we should
>>> unwind that reliably.
>>>
>>>> +	 * Some of the special cases above can be unwound through using
>>>> +	 * special logic in unwind_frame().
>>>> +	 *
>>>> +	 * - return_to_handler() is handled by the unwinder by attempting
>>>> +	 *   to retrieve the original return address from the per-task
>>>> +	 *   return address stack.
>>>> +	 *
>>>> +	 * - kretprobe_trampoline() can be handled in a similar fashion by
>>>> +	 *   attempting to retrieve the original return address from the
>>>> +	 *   per-task kretprobe instance list.
>>>
>>> We don't do this today,
>>>
>>>> +	 *
>>>> +	 * - I reckon optprobes can be handled in a similar fashion in the
>>>> +	 *   future?
>>>> +	 *
>>>> +	 * - Stack traces taken from the FTrace tracer functions can be
>>>> +	 *   handled as well. ftrace_call is an inner label defined in the
>>>> +	 *   Ftrace entry trampoline. This is the location where the call
>>>> +	 *   to a tracer function is patched. So, if the return PC equals
>>>> +	 *   ftrace_call+4, it is reliable. At that point, proper stack
>>>> +	 *   frames have already been set up for the traced function and
>>>> +	 *   its caller.
>>>> +	 *
>>>> +	 * NOTE:
>>>> +	 * If sym_code_functions[] were sorted, a binary search could be
>>>> +	 * done to make this more performant.
>>>> +	 */
>>>
>>> Since some of the above is speculative (e.g. the bit about optprobes),
>>> and as code will change over time, I think we should have a much terser
>>> comment, e.g.
>>>
>>> 	/*
>>> 	 * As SYM_CODE functions don't follow the usual calling
>>> 	 * conventions, we assume by default that any SYM_CODE function
>>> 	 * cannot be unwound reliably.
>>> 	 *
>>> 	 * Note that this includes exception entry/return sequences and
>>> 	 * trampoline for ftrace and kprobes.
>>> 	 */
>>>
>>> ... and then if/when we try to unwind a specific SYM_CODE function
>>> reliably, we add the comment for that specifically.
>>>
>>
>> Just to confirm, are you suggesting that I remove the entire large comment
>> detailing the various cases and replace the whole thing with the terse comment?
>
> Yes.
>
> For clarity, let's take your bullet-point list above as a list of
> examples, and make that:
>
> 	/*
> 	 * As SYM_CODE functions don't follow the usual calling
> 	 * conventions, we assume by default that any SYM_CODE function
> 	 * cannot be unwound reliably.
> 	 *
> 	 * Note that this includes:
> 	 *
> 	 * - Exception handlers and entry assembly
> 	 * - Trampoline assembly (e.g., ftrace, kprobes)
> 	 * - Hypervisor-related assembly
> 	 * - Hibernation-related assembly
> 	 * - CPU start-stop, suspend-resume assembly
> 	 * - Kernel relocation assembly
> 	 */
>

OK.

>> I did the large comment because of Mark Brown's input that we must be
>> verbose about all the cases so that it is clear in the future what the
>> different cases are and how we handle them in this code. As the code
>> evolves, the comments would evolve.
>
> The bulk of the comment just enumerates cases and says we treat them as
> unreliable, which I think is already clear from the terser comment with
> the list. The cases which mention special treatment (e.g. for unwinding
> through return_to_handler) aren't actually handled here (and the
> kretprobes case isn't handled at all today), so this isn't the right
> place for those -- they'll inevitably drift from the implementation.
>
>> I can replace the comment if you want. Please confirm.
>
> Yes please. If you can use the wording I've suggested immediately above
> (with your list folded in), that would be great.
>

OK. I will use your suggested text.

Thanks.

Madhavan