From mboxrd@z Thu Jan 1 00:00:00 1970
From: Linus Torvalds
Date: Thu, 16 Dec 2021 09:42:41 -0800
Subject: Re: [PATCH v2 00/13] Unify asm/unaligned.h around struct helper
To: Ard Biesheuvel
Cc: Arnd Bergmann, "Jason A. Donenfeld", Johannes Berg, Kees Cook,
 Nick Desaulniers, linux-arch, Vineet Gupta, Arnd Bergmann,
 Amitkumar Karwar, Benjamin Herrenschmidt, Borislav Petkov,
 Eric Dumazet, Florian Fainelli, Ganapathi Bhat, Geert Uytterhoeven,
 "H. Peter Anvin", Ingo Molnar, Jakub Kicinski, James Morris,
 Jens Axboe, John Johansen, Jonas Bonn, Kalle Valo, Michael Ellerman,
 Paul Mackerras, Rich Felker, "Richard Russon (FlatCap)", Russell King,
 "Serge E. Hallyn", Sharvari Harisangam, Stafford Horne,
 Stefan Kristiansson, Thomas Gleixner, Vladimir Oltean, Xinming Hu,
 Yoshinori Sato, X86 ML, Linux Kernel Mailing List, Linux ARM,
 linux-m68k, Linux Crypto Mailing List, openrisc@lists.librecores.org,
 "open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)", Linux-sh list,
 "open list:SPARC + UltraSPARC (sparc/sparc64)",
 linux-ntfs-dev@lists.sourceforge.net, linux-block, linux-wireless,
 "open list:BPF JIT for MIPS (32-BIT AND 64-BIT)", LSM List
References: <20210514100106.3404011-1-arnd@kernel.org>
Content-Type: text/plain; charset="UTF-8"
List-ID: X-Mailing-List: linux-arch@vger.kernel.org

On Thu, Dec 16, 2021 at 9:29 AM Ard Biesheuvel wrote:
>
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is used in many places to
> conditionally emit code that violates C alignment rules. E.g., there
> is this example in Documentation/core-api/unaligned-memory-access.rst:
>
> bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
> {
> #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>         u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
>                    ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));
>         return fold == 0;
> #else

It probably works fine in practice - the one case we had was really
pretty special, and about the vectorizer doing odd things.

But I think we should strive to convert these to use "get_unaligned()",
since code generation is fine.

It still often makes sense to have that test for the config variable,
simply because the approach might be different if we know unaligned
accesses are slow.

So I'll happily take patches that do obvious conversions to
get_unaligned() where they make sense, but I don't think we should
consider this some huge hard requirement.

              Linus