From: Arjun Roy
Date: Tue, 13 Apr 2021 15:03:54 -0700
Subject: Re: [PATCH v2 3/3] rseq: optimise rseq_get_rseq_cs() and clear_rseq_cs()
To: David Laight
Cc: Eric Dumazet, Mathieu Desnoyers, Eric Dumazet, Ingo Molnar,
    Peter Zijlstra, paulmck,
    Boqun Feng, linux-kernel

On Tue, Apr 13, 2021 at 2:19 PM David Laight wrote:
>
> > If we're special-casing 64-bit architectures anyways - unrolling the
> > 32B copy_from_user() for struct rseq_cs appears to be roughly 5-10%
> > savings on x86-64 when I measured it (well, in a microbenchmark, not
> > in rseq_get_rseq_cs() directly). Perhaps that could be an additional
> > avenue for improvement here.
>
> The killer is usually 'user copy hardening'.
> It significantly slows down sendmsg() and recvmsg().
> I've got measurable performance improvements by
> using __copy_from_user() when the buffer size has
> already been checked - but isn't a compile-time constant.
>
> There is also scope for using __get_user() when reading
> iovec[] (instead of copy_from_user()) and doing all the
> bounds checks (etc.) in the loop.
> That gives a measurable improvement for writev("/dev/null").
> I must sort those patches out again.
>
>       David
>

In this case I mean replacing copy_from_user(rseq_cs, urseq_cs,
sizeof(*rseq_cs)) with four 8-byte unsafe_get_user() calls (32 bytes
total), wrapped in user_read_access_begin()/user_read_access_end(),
which I think would simply bypass the user copy hardening checks (as
far as I can tell). A rough sketch of what I have in mind is appended
at the end of this message.

-Arjun

> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
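
For concreteness, here is a rough, untested sketch of the replacement
described above. The helper name rseq_get_cs_unsafe() is invented for
this sketch (it is not from the patch series), and the field comments
assume the 32-byte struct rseq_cs layout from the rseq UAPI header
(version, flags, start_ip, post_commit_offset, abort_ip):

static int rseq_get_cs_unsafe(struct rseq_cs *rseq_cs,
			      struct rseq_cs __user *urseq_cs)
{
	u64 *dst = (u64 *)rseq_cs;
	u64 __user *src = (u64 __user *)urseq_cs;

	if (!user_read_access_begin(urseq_cs, sizeof(*urseq_cs)))
		return -EFAULT;
	/* Four 64-bit reads cover the 32-byte struct rseq_cs. */
	unsafe_get_user(dst[0], &src[0], efault);  /* version + flags */
	unsafe_get_user(dst[1], &src[1], efault);  /* start_ip */
	unsafe_get_user(dst[2], &src[2], efault);  /* post_commit_offset */
	unsafe_get_user(dst[3], &src[3], efault);  /* abort_ip */
	user_read_access_end();
	return 0;

efault:
	user_read_access_end();
	return -EFAULT;
}

Since the reads are 8 bytes each, this only works as written on 64-bit,
which fits the 64-bit special-casing discussed above; a 32-bit build
would need a different split or would just keep copy_from_user().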