* [PATCH bpf-next] bpf: Shrink size of struct bpf_map/bpf_array.
From: Alexei Starovoitov @ 2024-02-20 23:50 UTC
  To: bpf; +Cc: daniel, andrii, martin.lau, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Back in 2018 the commit be95a845cc44 ("bpf: avoid false sharing of map refcount with max_entries")
added ____cacheline_aligned to "struct bpf_map" to make sure that fields like
refcnt don't share a cache line with max_entries, which is used to bounds-check
map accesses. That was done to make Spectre-style attacks harder. The main
mitigation is done via code similar to array_index_nospec(); the cache line
separation was an additional precaution.
It increased the size of "struct bpf_map" a little, but its effect
on all other maps (like array) is significant, since "struct bpf_map" is
typically the first member in other map types.
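
For illustration, a minimal sketch of that primary mitigation, loosely
modeled on an array map lookup (simplified; field names follow
"struct bpf_array", but this is not the exact kernel code):

#include <linux/nospec.h>

/* Sketch: clamp the index with a data dependency so the CPU cannot
 * use an out-of-bounds value under speculation, even when the branch
 * below is mispredicted.
 */
static void *array_lookup_sketch(struct bpf_array *array, u32 index)
{
	if (index >= array->map.max_entries)
		return NULL;
	/* Under speculation, index becomes 0 if it was out of bounds. */
	index = array_index_nospec(index, array->map.max_entries);
	return array->value + (u64)array->elem_size * index;
}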

Undo this ____cacheline_aligned tag. Instead, move the freeze_mutex field
so that refcnt and max_entries still land in different cache lines.
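
A compile-time check of the preserved invariant could look like this
(hypothetical, not part of the patch; assumes 64-byte cache lines):

#include <linux/build_bug.h>
#include <linux/stddef.h>

/* Hypothetical assertion: after the reordering, refcnt and max_entries
 * must still end up on different cache lines.
 */
#define CACHELINE_OF(type, member) (offsetof(type, member) / 64)

static_assert(CACHELINE_OF(struct bpf_map, refcnt) !=
	      CACHELINE_OF(struct bpf_map, max_entries),
	      "refcnt must not share a cache line with max_entries");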

The main effect is seen in sizeof(struct bpf_array), which shrinks from 320 to 248 bytes.
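
The layout dumps below are in pahole's output format; a command along
these lines (given a vmlinux built with debug info) reproduces them:

  $ pahole -C bpf_map vmlinux
  $ pahole -C bpf_array vmlinux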

BEFORE:

struct bpf_map {
	const struct bpf_map_ops  * ops;                 /*     0     8 */
	...
	char                       name[16];             /*    96    16 */

	/* XXX 16 bytes hole, try to pack */

	/* --- cacheline 2 boundary (128 bytes) --- */
	atomic64_t refcnt __attribute__((__aligned__(64))); /*   128     8 */
	...
	/* size: 256, cachelines: 4, members: 30 */
	/* sum members: 232, holes: 1, sum holes: 16 */
	/* padding: 8 */
	/* paddings: 1, sum paddings: 2 */
} __attribute__((__aligned__(64)));

struct bpf_array {
	struct bpf_map             map;                  /*     0   256 */
	...
	/* size: 320, cachelines: 5, members: 5 */
	/* padding: 48 */
	/* paddings: 1, sum paddings: 8 */
} __attribute__((__aligned__(64)));

AFTER:

struct bpf_map {
	/* size: 232, cachelines: 4, members: 30 */
	/* paddings: 1, sum paddings: 2 */
	/* last cacheline: 40 bytes */
};
struct bpf_array {
	/* size: 248, cachelines: 4, members: 5 */
	/* last cacheline: 56 bytes */
};

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c7aa99b44dbd..814dc913a968 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -251,10 +251,7 @@ struct bpf_list_node_kern {
 } __attribute__((aligned(8)));
 
 struct bpf_map {
-	/* The first two cachelines with read-mostly members of which some
-	 * are also accessed in fast-path (e.g. ops, max_entries).
-	 */
-	const struct bpf_map_ops *ops ____cacheline_aligned;
+	const struct bpf_map_ops *ops;
 	struct bpf_map *inner_map_meta;
 #ifdef CONFIG_SECURITY
 	void *security;
@@ -276,17 +273,14 @@ struct bpf_map {
 	struct obj_cgroup *objcg;
 #endif
 	char name[BPF_OBJ_NAME_LEN];
-	/* The 3rd and 4th cacheline with misc members to avoid false sharing
-	 * particularly with refcounting.
-	 */
-	atomic64_t refcnt ____cacheline_aligned;
+	struct mutex freeze_mutex;
+	atomic64_t refcnt;
 	atomic64_t usercnt;
 	/* rcu is used before freeing and work is only used during freeing */
 	union {
 		struct work_struct work;
 		struct rcu_head rcu;
 	};
-	struct mutex freeze_mutex;
 	atomic64_t writecnt;
 	/* 'Ownership' of program-containing map is claimed by the first program
 	 * that is going to use this map or by the first program which FD is
-- 
2.34.1
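
For context: ____cacheline_aligned comes from include/linux/cache.h and
expands, roughly, to an alignment attribute of SMP_CACHE_BYTES (64 bytes
on x86-64, matching the __aligned__(64) seen in the dumps above):

#define ____cacheline_aligned __aligned(SMP_CACHE_BYTES)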



* Re: [PATCH bpf-next] bpf: Shrink size of struct bpf_map/bpf_array.
From: Yonghong Song @ 2024-02-21 16:40 UTC
  To: Alexei Starovoitov, bpf; +Cc: daniel, andrii, martin.lau, kernel-team


On 2/20/24 3:50 PM, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Back in 2018 the commit be95a845cc44 ("bpf: avoid false sharing of map refcount with max_entries")
> added ____cacheline_aligned to "struct bpf_map" to make sure that fields like
> refcnt don't share a cache line with max_entries, which is used to bounds-check
> map accesses.
>
> [...]
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Yonghong Song <yonghong.song@linux.dev>



* Re: [PATCH bpf-next] bpf: Shrink size of struct bpf_map/bpf_array.
From: patchwork-bot+netdevbpf @ 2024-02-21 17:10 UTC
  To: Alexei Starovoitov; +Cc: bpf, daniel, andrii, martin.lau, kernel-team

Hello:

This patch was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Tue, 20 Feb 2024 15:50:01 -0800 you wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> Back in 2018 the commit be95a845cc44 ("bpf: avoid false sharing of map refcount with max_entries")
> added ____cacheline_aligned to "struct bpf_map" to make sure that fields like
> refcnt don't share a cache line with max_entries, which is used to bounds-check
> map accesses. That was done to make Spectre-style attacks harder. The main
> mitigation is done via code similar to array_index_nospec(); the cache line
> separation was an additional precaution.
> It increased the size of "struct bpf_map" a little, but its effect
> on all other maps (like array) is significant, since "struct bpf_map" is
> typically the first member in other map types.
> 
> [...]

Here is the summary with links:
  - [bpf-next] bpf: Shrink size of struct bpf_map/bpf_array.
    https://git.kernel.org/bpf/bpf-next/c/f86783991809

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



