BPF Archive mirror
* [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head.
@ 2024-05-10  1:13 Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 1/9] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
                   ` (8 more replies)
  0 siblings, 9 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee

Some types, such as kptr, bpf_rb_root, and bpf_list_head, are
treated in a special way. Previously, these types could not be the
type of a field in a struct type used as the type of a global
variable, nor the type of a field in a struct type used as the type
of a field in the value type of a map. They could not be the type of
array elements either. In other words, they could only be the type
of global variables or of direct fields in the value type of a map.

This patch set enables the use of these special types in arrays and
struct fields, providing more flexibility. It recursively examines
the types of global variables and the value types of maps, including
array and struct types, to identify the special types and generate
field information for them.

For example,

  ...
  struct task_struct __kptr *ptr[3];
  ...

it will create 3 instances of "struct btf_field" in the "btf_record" of
the data section.

 [...,
  btf_field(offset=0x100, type=BPF_KPTR_REF),
  btf_field(offset=0x108, type=BPF_KPTR_REF),
  btf_field(offset=0x110, type=BPF_KPTR_REF),
  ...
 ]

It creates one record for each of the three elements. The three
records are identical except for their offsets.

Another example is

  ...
  struct A {
    ...
    struct task_struct __kptr *task;
    struct bpf_rb_root root;
    ...
  };

  struct A foo[2];

it will create 4 records.

 [...,
  btf_field(offset=0x7100, type=BPF_KPTR_REF),
  btf_field(offset=0x7108, type=BPF_RB_ROOT),
  btf_field(offset=0x7200, type=BPF_KPTR_REF),
  btf_field(offset=0x7208, type=BPF_RB_ROOT),
  ...
 ]

Assuming that the size of an element/struct A is 0x100 and "foo"
starts at 0x7000, it includes two kptr records at 0x7100 and 0x7200,
and two rbtree root records at 0x7108 and 0x7208.

All this field information is flattened for struct types and
repeated for arrays.
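In user-space terms, repeating a record for an array amounts to copying the first field record and shifting each copy by the element size. A minimal C sketch of that arithmetic (simplified, illustrative types; not the kernel's actual struct btf_field layout):

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the kernel's btf_field_info. */
struct field_info {
	uint32_t off;
	int type;
};

/* Duplicate info[0] repeat_cnt times, shifting copy i by
 * (i + 1) * elem_size, so each array element gets its own record.
 * The caller must provide room for repeat_cnt + 1 records.
 */
static void repeat_field(struct field_info *info, uint32_t repeat_cnt,
			 uint32_t elem_size)
{
	uint32_t i, cur = 1;

	for (i = 0; i < repeat_cnt; i++) {
		memcpy(&info[cur], &info[0], sizeof(info[0]));
		info[cur++].off += (i + 1) * elem_size;
	}
}
```

For the three-element kptr array above (first record at 0x100, 8-byte pointers), repeat_field(info, 2, 8) yields records at 0x100, 0x108, and 0x110, matching the btf_record contents shown.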

---
Changes from v4:

 - Return -E2BIG for i == MAX_RESOLVE_DEPTH.

Changes from v3:

 - Refactor the common code of btf_find_struct_field() and
   btf_find_datasec_var().

 - Limit the number of levels when looking into struct types.

Changes from v2:

 - Support fields in nested struct type.

 - Remove nelems and duplicate field information with offset
   adjustments for arrays.

Changes from v1:

 - Move the check of element alignment out of btf_field_cmp() to
   btf_record_find().

 - Change the order of the previous patch 4 "bpf:
   check_map_kptr_access() compute the offset from the reg state" as
   the patch 7 now.

 - Reject BPF_RB_NODE and BPF_LIST_NODE with nelems > 1.

 - Rephrase the commit log of the patch "bpf: check_map_access() with
   the knowledge of arrays" to clarify the alignment on elements.

v4: https://lore.kernel.org/all/20240508063218.2806447-1-thinker.li@gmail.com/
v3: https://lore.kernel.org/all/20240501204729.484085-1-thinker.li@gmail.com/
v2: https://lore.kernel.org/all/20240412210814.603377-1-thinker.li@gmail.com/
v1: https://lore.kernel.org/bpf/20240410004150.2917641-1-thinker.li@gmail.com/

Kui-Feng Lee (9):
  bpf: Remove unnecessary checks on the offset of btf_field.
  bpf: Remove unnecessary call to btf_field_type_size().
  bpf: refactor btf_find_struct_field() and btf_find_datasec_var().
  bpf: create repeated fields for arrays.
  bpf: look into the types of the fields of a struct type recursively.
  bpf: limit the number of levels of a nested struct type.
  selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  selftests/bpf: Test global bpf_rb_root arrays and fields in nested
    struct types.
  selftests/bpf: Test global bpf_list_head arrays.

 kernel/bpf/btf.c                              | 305 ++++++++++++------
 kernel/bpf/verifier.c                         |   4 +-
 .../selftests/bpf/prog_tests/cpumask.c        |   5 +
 .../selftests/bpf/prog_tests/linked_list.c    |  12 +
 .../testing/selftests/bpf/prog_tests/rbtree.c |  47 +++
 .../selftests/bpf/progs/cpumask_success.c     | 133 ++++++++
 .../testing/selftests/bpf/progs/linked_list.c |  42 +++
 tools/testing/selftests/bpf/progs/rbtree.c    |  77 +++++
 8 files changed, 517 insertions(+), 108 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH bpf-next v5 1/9] bpf: Remove unnecessary checks on the offset of btf_field.
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 2/9] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee, Eduard Zingerman

reg_find_field_offset() always returns a btf_field with a matching
offset value, so checking the offset of the returned btf_field is
unnecessary.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9e3aba08984e..3cc8e68b5b73 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11640,7 +11640,7 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
 
 	node_off = reg->off + reg->var_off.value;
 	field = reg_find_field_offset(reg, node_off, node_field_type);
-	if (!field || field->offset != node_off) {
+	if (!field) {
 		verbose(env, "%s not found at offset=%u\n", node_type_name, node_off);
 		return -EINVAL;
 	}
-- 
2.34.1



* [PATCH bpf-next v5 2/9] bpf: Remove unnecessary call to btf_field_type_size().
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 1/9] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 3/9] bpf: refactor btf_find_struct_field() and btf_find_datasec_var() Kui-Feng Lee
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee, Eduard Zingerman

field->size has been initialized by btf_parse_fields() with the value
returned by btf_field_type_size(). Use it instead of calling
btf_field_type_size() again.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 kernel/bpf/btf.c      | 2 +-
 kernel/bpf/verifier.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 821063660d9f..226138bd139a 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6693,7 +6693,7 @@ int btf_struct_access(struct bpf_verifier_log *log,
 		for (i = 0; i < rec->cnt; i++) {
 			struct btf_field *field = &rec->fields[i];
 			u32 offset = field->offset;
-			if (off < offset + btf_field_type_size(field->type) && offset < off + size) {
+			if (off < offset + field->size && offset < off + size) {
 				bpf_log(log,
 					"direct access to %s is disallowed\n",
 					btf_field_type_name(field->type));
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3cc8e68b5b73..7cfbc06d8d1b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5448,7 +5448,7 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
 		 * this program. To check that [x1, x2) overlaps with [y1, y2),
 		 * it is sufficient to check x1 < y2 && y1 < x2.
 		 */
-		if (reg->smin_value + off < p + btf_field_type_size(field->type) &&
+		if (reg->smin_value + off < p + field->size &&
 		    p < reg->umax_value + off + size) {
 			switch (field->type) {
 			case BPF_KPTR_UNREF:
-- 
2.34.1



* [PATCH bpf-next v5 3/9] bpf: refactor btf_find_struct_field() and btf_find_datasec_var().
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 1/9] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 2/9] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 4/9] bpf: create repeated fields for arrays Kui-Feng Lee
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee, Eduard Zingerman

Move common code of the two functions to btf_find_field_one().

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 kernel/bpf/btf.c | 180 +++++++++++++++++++++--------------------------
 1 file changed, 79 insertions(+), 101 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 226138bd139a..2ce61c3a7e28 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3494,72 +3494,95 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
 
 #undef field_mask_test_name
 
+static int btf_find_field_one(const struct btf *btf,
+			      const struct btf_type *var,
+			      const struct btf_type *var_type,
+			      int var_idx,
+			      u32 off, u32 expected_size,
+			      u32 field_mask, u32 *seen_mask,
+			      struct btf_field_info *info, int info_cnt)
+{
+	int ret, align, sz, field_type;
+	struct btf_field_info tmp;
+
+	field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
+					field_mask, seen_mask, &align, &sz);
+	if (field_type == 0)
+		return 0;
+	if (field_type < 0)
+		return field_type;
+
+	if (expected_size && expected_size != sz)
+		return 0;
+	if (off % align)
+		return 0;
+
+	switch (field_type) {
+	case BPF_SPIN_LOCK:
+	case BPF_TIMER:
+	case BPF_WORKQUEUE:
+	case BPF_LIST_NODE:
+	case BPF_RB_NODE:
+	case BPF_REFCOUNT:
+		ret = btf_find_struct(btf, var_type, off, sz, field_type,
+				      info_cnt ? &info[0] : &tmp);
+		if (ret < 0)
+			return ret;
+		break;
+	case BPF_KPTR_UNREF:
+	case BPF_KPTR_REF:
+	case BPF_KPTR_PERCPU:
+		ret = btf_find_kptr(btf, var_type, off, sz,
+				    info_cnt ? &info[0] : &tmp);
+		if (ret < 0)
+			return ret;
+		break;
+	case BPF_LIST_HEAD:
+	case BPF_RB_ROOT:
+		ret = btf_find_graph_root(btf, var, var_type,
+					  var_idx, off, sz,
+					  info_cnt ? &info[0] : &tmp,
+					  field_type);
+		if (ret < 0)
+			return ret;
+		break;
+	default:
+		return -EFAULT;
+	}
+
+	if (ret == BTF_FIELD_IGNORE)
+		return 0;
+	if (!info_cnt)
+		return -E2BIG;
+
+	return 1;
+}
+
 static int btf_find_struct_field(const struct btf *btf,
 				 const struct btf_type *t, u32 field_mask,
 				 struct btf_field_info *info, int info_cnt)
 {
-	int ret, idx = 0, align, sz, field_type;
+	int ret, idx = 0;
 	const struct btf_member *member;
-	struct btf_field_info tmp;
 	u32 i, off, seen_mask = 0;
 
 	for_each_member(i, t, member) {
 		const struct btf_type *member_type = btf_type_by_id(btf,
 								    member->type);
 
-		field_type = btf_get_field_type(__btf_name_by_offset(btf, member_type->name_off),
-						field_mask, &seen_mask, &align, &sz);
-		if (field_type == 0)
-			continue;
-		if (field_type < 0)
-			return field_type;
-
 		off = __btf_member_bit_offset(t, member);
 		if (off % 8)
 			/* valid C code cannot generate such BTF */
 			return -EINVAL;
 		off /= 8;
-		if (off % align)
-			continue;
-
-		switch (field_type) {
-		case BPF_SPIN_LOCK:
-		case BPF_TIMER:
-		case BPF_WORKQUEUE:
-		case BPF_LIST_NODE:
-		case BPF_RB_NODE:
-		case BPF_REFCOUNT:
-			ret = btf_find_struct(btf, member_type, off, sz, field_type,
-					      idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_KPTR_UNREF:
-		case BPF_KPTR_REF:
-		case BPF_KPTR_PERCPU:
-			ret = btf_find_kptr(btf, member_type, off, sz,
-					    idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_LIST_HEAD:
-		case BPF_RB_ROOT:
-			ret = btf_find_graph_root(btf, t, member_type,
-						  i, off, sz,
-						  idx < info_cnt ? &info[idx] : &tmp,
-						  field_type);
-			if (ret < 0)
-				return ret;
-			break;
-		default:
-			return -EFAULT;
-		}
 
-		if (ret == BTF_FIELD_IGNORE)
-			continue;
-		if (idx >= info_cnt)
-			return -E2BIG;
-		++idx;
+		ret = btf_find_field_one(btf, t, member_type, i,
+					 off, 0,
+					 field_mask, &seen_mask,
+					 &info[idx], info_cnt - idx);
+		if (ret < 0)
+			return ret;
+		idx += ret;
 	}
 	return idx;
 }
@@ -3568,66 +3591,21 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
 				u32 field_mask, struct btf_field_info *info,
 				int info_cnt)
 {
-	int ret, idx = 0, align, sz, field_type;
+	int ret, idx = 0;
 	const struct btf_var_secinfo *vsi;
-	struct btf_field_info tmp;
 	u32 i, off, seen_mask = 0;
 
 	for_each_vsi(i, t, vsi) {
 		const struct btf_type *var = btf_type_by_id(btf, vsi->type);
 		const struct btf_type *var_type = btf_type_by_id(btf, var->type);
 
-		field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
-						field_mask, &seen_mask, &align, &sz);
-		if (field_type == 0)
-			continue;
-		if (field_type < 0)
-			return field_type;
-
 		off = vsi->offset;
-		if (vsi->size != sz)
-			continue;
-		if (off % align)
-			continue;
-
-		switch (field_type) {
-		case BPF_SPIN_LOCK:
-		case BPF_TIMER:
-		case BPF_WORKQUEUE:
-		case BPF_LIST_NODE:
-		case BPF_RB_NODE:
-		case BPF_REFCOUNT:
-			ret = btf_find_struct(btf, var_type, off, sz, field_type,
-					      idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_KPTR_UNREF:
-		case BPF_KPTR_REF:
-		case BPF_KPTR_PERCPU:
-			ret = btf_find_kptr(btf, var_type, off, sz,
-					    idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_LIST_HEAD:
-		case BPF_RB_ROOT:
-			ret = btf_find_graph_root(btf, var, var_type,
-						  -1, off, sz,
-						  idx < info_cnt ? &info[idx] : &tmp,
-						  field_type);
-			if (ret < 0)
-				return ret;
-			break;
-		default:
-			return -EFAULT;
-		}
-
-		if (ret == BTF_FIELD_IGNORE)
-			continue;
-		if (idx >= info_cnt)
-			return -E2BIG;
-		++idx;
+		ret = btf_find_field_one(btf, var, var_type, -1, off, vsi->size,
+					 field_mask, &seen_mask,
+					 &info[idx], info_cnt - idx);
+		if (ret < 0)
+			return ret;
+		idx += ret;
 	}
 	return idx;
 }
-- 
2.34.1



* [PATCH bpf-next v5 4/9] bpf: create repeated fields for arrays.
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
                   ` (2 preceding siblings ...)
  2024-05-10  1:13 ` [PATCH bpf-next v5 3/9] bpf: refactor btf_find_struct_field() and btf_find_datasec_var() Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 5/9] bpf: look into the types of the fields of a struct type recursively Kui-Feng Lee
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee, Eduard Zingerman

The verifier uses field information for certain special types, such as
kptr, rbtree root, and list head, which are treated
differently. However, these types were not previously supported in
arrays. This patch examines arrays and, if the element type is one of
the special types, duplicates the field information as many times as
the length of the array.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 kernel/bpf/btf.c | 62 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 58 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 2ce61c3a7e28..4fefa27d5aea 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3494,6 +3494,41 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
 
 #undef field_mask_test_name
 
+/* Repeat a field for a specified number of times.
+ *
+ * Copy and repeat the first field for repeat_cnt
+ * times. The field is repeated by adding the offset of each field
+ * with
+ *   (i + 1) * elem_size
+ * where i is the repeat index and elem_size is the size of an element.
+ */
+static int btf_repeat_field(struct btf_field_info *info,
+			    u32 repeat_cnt, u32 elem_size)
+{
+	u32 i;
+	u32 cur;
+
+	/* Ensure not repeating fields that should not be repeated. */
+	switch (info[0].type) {
+	case BPF_KPTR_UNREF:
+	case BPF_KPTR_REF:
+	case BPF_KPTR_PERCPU:
+	case BPF_LIST_HEAD:
+	case BPF_RB_ROOT:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	cur = 1;
+	for (i = 0; i < repeat_cnt; i++) {
+		memcpy(&info[cur], &info[0], sizeof(info[0]));
+		info[cur++].off += (i + 1) * elem_size;
+	}
+
+	return 0;
+}
+
 static int btf_find_field_one(const struct btf *btf,
 			      const struct btf_type *var,
 			      const struct btf_type *var_type,
@@ -3504,6 +3539,21 @@ static int btf_find_field_one(const struct btf *btf,
 {
 	int ret, align, sz, field_type;
 	struct btf_field_info tmp;
+	const struct btf_array *array;
+	u32 i, nelems = 1;
+
+	/* Walk into array types to find the element type and the number of
+	 * elements in the (flattened) array.
+	 */
+	for (i = 0; i < MAX_RESOLVE_DEPTH && btf_type_is_array(var_type); i++) {
+		array = btf_array(var_type);
+		nelems *= array->nelems;
+		var_type = btf_type_by_id(btf, array->type);
+	}
+	if (i == MAX_RESOLVE_DEPTH)
+		return -E2BIG;
+	if (nelems == 0)
+		return 0;
 
 	field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
 					field_mask, seen_mask, &align, &sz);
@@ -3512,7 +3562,7 @@ static int btf_find_field_one(const struct btf *btf,
 	if (field_type < 0)
 		return field_type;
 
-	if (expected_size && expected_size != sz)
+	if (expected_size && expected_size != sz * nelems)
 		return 0;
 	if (off % align)
 		return 0;
@@ -3552,10 +3602,14 @@ static int btf_find_field_one(const struct btf *btf,
 
 	if (ret == BTF_FIELD_IGNORE)
 		return 0;
-	if (!info_cnt)
+	if (nelems > info_cnt)
 		return -E2BIG;
-
-	return 1;
+	if (nelems > 1) {
+		ret = btf_repeat_field(info, nelems - 1, sz);
+		if (ret < 0)
+			return ret;
+	}
+	return nelems;
 }
 
 static int btf_find_struct_field(const struct btf *btf,
-- 
2.34.1



* [PATCH bpf-next v5 5/9] bpf: look into the types of the fields of a struct type recursively.
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
                   ` (3 preceding siblings ...)
  2024-05-10  1:13 ` [PATCH bpf-next v5 4/9] bpf: create repeated fields for arrays Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 6/9] bpf: limit the number of levels of a nested struct type Kui-Feng Lee
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee

The verifier keeps field information for certain special types, such as
kptr, rbtree root, and list head, which are handled
differently. However, the types of the fields of a struct type variable
were not previously examined, so no field information records were
generated for kptrs, rbtree roots, and linked_list heads that are not
located in the outermost struct type of a variable.

For example,

  struct A {
    struct task_struct __kptr * task;
  };

  struct B {
    struct A mem_a;
  };

  struct B var_b;

Previously, "struct A" was not examined, so no field information was
generated for the kptr in "struct A" on behalf of "var_b".

This patch enables BPF programs to define fields of these special types in
a struct type other than the direct type of a variable or in a struct type
that is the type of a field in the value type of a map.
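As a rough user-space model of the offset arithmetic (hypothetical helper name and simplified record type; the kernel walks BTF rather than C declarations): each field found inside the nested struct is shifted by the offset of the containing member, then duplicated per array element when the container is an array.

```c
#include <stdint.h>

/* Simplified stand-in for the kernel's btf_field_info. */
struct field_info {
	uint32_t off;
};

/* Shift the cnt records found inside a nested struct by the
 * container's offset, then duplicate them for each additional
 * array element of size elem_size. Returns the total record count.
 * The caller must provide room for cnt * nelems records.
 */
static int flatten_nested(struct field_info *info, int cnt,
			  uint32_t container_off, uint32_t nelems,
			  uint32_t elem_size)
{
	uint32_t j;
	int i, cur = cnt;

	for (i = 0; i < cnt; i++)
		info[i].off += container_off;

	for (j = 1; j < nelems; j++)
		for (i = 0; i < cnt; i++) {
			info[cur] = info[i];
			info[cur++].off += j * elem_size;
		}
	return cnt * nelems;
}
```

For instance, an inner record at offset 0 in each element of a two-element array of 0x100-byte structs placed at 0x7000 yields records at 0x7000 and 0x7100.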

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 kernel/bpf/btf.c | 93 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 73 insertions(+), 20 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4fefa27d5aea..e78e2e41467d 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3494,41 +3494,83 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
 
 #undef field_mask_test_name
 
-/* Repeat a field for a specified number of times.
+/* Repeat a number of fields for a specified number of times.
  *
- * Copy and repeat the first field for repeat_cnt
- * times. The field is repeated by adding the offset of each field
- * with
+ * Copy the fields starting from the first field and repeat them for
+ * repeat_cnt times. The fields are repeated by adding the offset of each
+ * field with
  *   (i + 1) * elem_size
  * where i is the repeat index and elem_size is the size of an element.
  */
-static int btf_repeat_field(struct btf_field_info *info,
-			    u32 repeat_cnt, u32 elem_size)
+static int btf_repeat_fields(struct btf_field_info *info,
+			     u32 field_cnt, u32 repeat_cnt, u32 elem_size)
 {
-	u32 i;
+	u32 i, j;
 	u32 cur;
 
 	/* Ensure not repeating fields that should not be repeated. */
-	switch (info[0].type) {
-	case BPF_KPTR_UNREF:
-	case BPF_KPTR_REF:
-	case BPF_KPTR_PERCPU:
-	case BPF_LIST_HEAD:
-	case BPF_RB_ROOT:
-		break;
-	default:
-		return -EINVAL;
+	for (i = 0; i < field_cnt; i++) {
+		switch (info[i].type) {
+		case BPF_KPTR_UNREF:
+		case BPF_KPTR_REF:
+		case BPF_KPTR_PERCPU:
+		case BPF_LIST_HEAD:
+		case BPF_RB_ROOT:
+			break;
+		default:
+			return -EINVAL;
+		}
 	}
 
-	cur = 1;
+	cur = field_cnt;
 	for (i = 0; i < repeat_cnt; i++) {
-		memcpy(&info[cur], &info[0], sizeof(info[0]));
-		info[cur++].off += (i + 1) * elem_size;
+		memcpy(&info[cur], &info[0], field_cnt * sizeof(info[0]));
+		for (j = 0; j < field_cnt; j++)
+			info[cur++].off += (i + 1) * elem_size;
 	}
 
 	return 0;
 }
 
+static int btf_find_struct_field(const struct btf *btf,
+				 const struct btf_type *t, u32 field_mask,
+				 struct btf_field_info *info, int info_cnt);
+
+/* Find special fields in the struct type of a field.
+ *
+ * This function is used to find fields of special types that is not a
+ * global variable or a direct field of a struct type. It also handles the
+ * repetition if it is the element type of an array.
+ */
+static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *t,
+				  u32 off, u32 nelems,
+				  u32 field_mask, struct btf_field_info *info,
+				  int info_cnt)
+{
+	int ret, err, i;
+
+	ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt);
+
+	if (ret <= 0)
+		return ret;
+
+	/* Shift the offsets of the nested struct fields to the offsets
+	 * related to the container.
+	 */
+	for (i = 0; i < ret; i++)
+		info[i].off += off;
+
+	if (nelems > 1) {
+		err = btf_repeat_fields(info, ret, nelems - 1, t->size);
+		if (err == 0)
+			ret *= nelems;
+		else
+			ret = err;
+	}
+
+	return ret;
+}
+
 static int btf_find_field_one(const struct btf *btf,
 			      const struct btf_type *var,
 			      const struct btf_type *var_type,
@@ -3557,6 +3599,17 @@ static int btf_find_field_one(const struct btf *btf,
 
 	field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
 					field_mask, seen_mask, &align, &sz);
+	/* Look into variables of struct types */
+	if ((field_type == BPF_KPTR_REF || !field_type) &&
+	    __btf_type_is_struct(var_type)) {
+		sz = var_type->size;
+		if (expected_size && expected_size != sz * nelems)
+			return 0;
+		ret = btf_find_nested_struct(btf, var_type, off, nelems, field_mask,
+					     &info[0], info_cnt);
+		return ret;
+	}
+
 	if (field_type == 0)
 		return 0;
 	if (field_type < 0)
@@ -3605,7 +3658,7 @@ static int btf_find_field_one(const struct btf *btf,
 	if (nelems > info_cnt)
 		return -E2BIG;
 	if (nelems > 1) {
-		ret = btf_repeat_field(info, nelems - 1, sz);
+		ret = btf_repeat_fields(info, 1, nelems - 1, sz);
 		if (ret < 0)
 			return ret;
 	}
-- 
2.34.1



* [PATCH bpf-next v5 6/9] bpf: limit the number of levels of a nested struct type.
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
                   ` (4 preceding siblings ...)
  2024-05-10  1:13 ` [PATCH bpf-next v5 5/9] bpf: look into the types of the fields of a struct type recursively Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10  2:37   ` Eduard Zingerman
  2024-05-10  1:13 ` [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields Kui-Feng Lee
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee

Limit the number of nested struct levels to look into, in order to
avoid running out of stack space.
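The guard is the usual depth-limited recursion pattern, sketched below with a hypothetical function name (MAX_RESOLVE_DEPTH is the kernel's existing limit of 32, already used for the array walk in the previous patch):

```c
#include <errno.h>

#define MAX_RESOLVE_DEPTH 32	/* same bound the kernel uses */

/* Descend one struct nesting level per call; return -E2BIG once
 * the nesting depth reaches the limit, rather than recursing until
 * the fixed-size kernel stack is exhausted.
 */
static int find_nested(int remaining_levels, int level)
{
	if (++level >= MAX_RESOLVE_DEPTH)
		return -E2BIG;
	if (remaining_levels == 0)
		return 0;	/* innermost struct reached */
	return find_nested(remaining_levels - 1, level);
}
```

A shallow nesting succeeds, while anything at or beyond the limit is rejected instead of deepening the stack further.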

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 kernel/bpf/btf.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index e78e2e41467d..e122e30f8cf5 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3534,7 +3534,8 @@ static int btf_repeat_fields(struct btf_field_info *info,
 
 static int btf_find_struct_field(const struct btf *btf,
 				 const struct btf_type *t, u32 field_mask,
-				 struct btf_field_info *info, int info_cnt);
+				 struct btf_field_info *info, int info_cnt,
+				 u32 level);
 
 /* Find special fields in the struct type of a field.
  *
@@ -3545,11 +3546,15 @@ static int btf_find_struct_field(const struct btf *btf,
 static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *t,
 				  u32 off, u32 nelems,
 				  u32 field_mask, struct btf_field_info *info,
-				  int info_cnt)
+				  int info_cnt, u32 level)
 {
 	int ret, err, i;
 
-	ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt);
+	level++;
+	if (level >= MAX_RESOLVE_DEPTH)
+		return -E2BIG;
+
+	ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt, level);
 
 	if (ret <= 0)
 		return ret;
@@ -3577,7 +3582,8 @@ static int btf_find_field_one(const struct btf *btf,
 			      int var_idx,
 			      u32 off, u32 expected_size,
 			      u32 field_mask, u32 *seen_mask,
-			      struct btf_field_info *info, int info_cnt)
+			      struct btf_field_info *info, int info_cnt,
+			      u32 level)
 {
 	int ret, align, sz, field_type;
 	struct btf_field_info tmp;
@@ -3606,7 +3612,7 @@ static int btf_find_field_one(const struct btf *btf,
 		if (expected_size && expected_size != sz * nelems)
 			return 0;
 		ret = btf_find_nested_struct(btf, var_type, off, nelems, field_mask,
-					     &info[0], info_cnt);
+					     &info[0], info_cnt, level);
 		return ret;
 	}
 
@@ -3667,7 +3673,8 @@ static int btf_find_field_one(const struct btf *btf,
 
 static int btf_find_struct_field(const struct btf *btf,
 				 const struct btf_type *t, u32 field_mask,
-				 struct btf_field_info *info, int info_cnt)
+				 struct btf_field_info *info, int info_cnt,
+				 u32 level)
 {
 	int ret, idx = 0;
 	const struct btf_member *member;
@@ -3686,7 +3693,7 @@ static int btf_find_struct_field(const struct btf *btf,
 		ret = btf_find_field_one(btf, t, member_type, i,
 					 off, 0,
 					 field_mask, &seen_mask,
-					 &info[idx], info_cnt - idx);
+					 &info[idx], info_cnt - idx, level);
 		if (ret < 0)
 			return ret;
 		idx += ret;
@@ -3696,7 +3703,7 @@ static int btf_find_struct_field(const struct btf *btf,
 
 static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
 				u32 field_mask, struct btf_field_info *info,
-				int info_cnt)
+				int info_cnt, u32 level)
 {
 	int ret, idx = 0;
 	const struct btf_var_secinfo *vsi;
@@ -3709,7 +3716,8 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
 		off = vsi->offset;
 		ret = btf_find_field_one(btf, var, var_type, -1, off, vsi->size,
 					 field_mask, &seen_mask,
-					 &info[idx], info_cnt - idx);
+					 &info[idx], info_cnt - idx,
+					 level);
 		if (ret < 0)
 			return ret;
 		idx += ret;
@@ -3722,9 +3730,9 @@ static int btf_find_field(const struct btf *btf, const struct btf_type *t,
 			  int info_cnt)
 {
 	if (__btf_type_is_struct(t))
-		return btf_find_struct_field(btf, t, field_mask, info, info_cnt);
+		return btf_find_struct_field(btf, t, field_mask, info, info_cnt, 0);
 	else if (btf_type_is_datasec(t))
-		return btf_find_datasec_var(btf, t, field_mask, info, info_cnt);
+		return btf_find_datasec_var(btf, t, field_mask, info, info_cnt, 0);
 	return -EINVAL;
 }
 
-- 
2.34.1



* [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
                   ` (5 preceding siblings ...)
  2024-05-10  1:13 ` [PATCH bpf-next v5 6/9] bpf: limit the number of levels of a nested struct type Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10 10:03   ` Eduard Zingerman
  2024-05-10  1:13 ` [PATCH bpf-next v5 8/9] selftests/bpf: Test global bpf_rb_root arrays and fields in nested struct types Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 9/9] selftests/bpf: Test global bpf_list_head arrays Kui-Feng Lee
  8 siblings, 1 reply; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee

Make sure that BPF programs can declare global kptr arrays and kptr
fields in struct types that are the type of a global variable or the
type of a nested descendant field in a global variable.

An array with only one element is a special case: its element is
treated like a non-array kptr field. Nested arrays are also tested to
ensure they are handled properly.

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 .../selftests/bpf/prog_tests/cpumask.c        |   5 +
 .../selftests/bpf/progs/cpumask_success.c     | 133 ++++++++++++++++++
 2 files changed, 138 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/cpumask.c b/tools/testing/selftests/bpf/prog_tests/cpumask.c
index ecf89df78109..2570bd4b0cb2 100644
--- a/tools/testing/selftests/bpf/prog_tests/cpumask.c
+++ b/tools/testing/selftests/bpf/prog_tests/cpumask.c
@@ -18,6 +18,11 @@ static const char * const cpumask_success_testcases[] = {
 	"test_insert_leave",
 	"test_insert_remove_release",
 	"test_global_mask_rcu",
+	"test_global_mask_array_one_rcu",
+	"test_global_mask_array_rcu",
+	"test_global_mask_array_l2_rcu",
+	"test_global_mask_nested_rcu",
+	"test_global_mask_nested_deep_rcu",
 	"test_cpumask_weight",
 };
 
diff --git a/tools/testing/selftests/bpf/progs/cpumask_success.c b/tools/testing/selftests/bpf/progs/cpumask_success.c
index 7a1e64c6c065..0b6383fa9958 100644
--- a/tools/testing/selftests/bpf/progs/cpumask_success.c
+++ b/tools/testing/selftests/bpf/progs/cpumask_success.c
@@ -12,6 +12,25 @@ char _license[] SEC("license") = "GPL";
 
 int pid, nr_cpus;
 
+struct kptr_nested {
+	struct bpf_cpumask __kptr * mask;
+};
+
+struct kptr_nested_mid {
+	int dummy;
+	struct kptr_nested m;
+};
+
+struct kptr_nested_deep {
+	struct kptr_nested_mid ptrs[2];
+};
+
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array[2];
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array_l2[2][1];
+private(MASK) static struct bpf_cpumask __kptr * global_mask_array_one[1];
+private(MASK) static struct kptr_nested global_mask_nested[2];
+private(MASK) static struct kptr_nested_deep global_mask_nested_deep;
+
 static bool is_test_task(void)
 {
 	int cur_pid = bpf_get_current_pid_tgid() >> 32;
@@ -460,6 +479,120 @@ int BPF_PROG(test_global_mask_rcu, struct task_struct *task, u64 clone_flags)
 	return 0;
 }
 
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_one_rcu, struct task_struct *task, u64 clone_flags)
+{
+	struct bpf_cpumask *local, *prev;
+
+	if (!is_test_task())
+		return 0;
+
+	/* Kptr arrays with one element are special cased, being treated
+	 * just like a single pointer.
+	 */
+
+	local = create_cpumask();
+	if (!local)
+		return 0;
+
+	prev = bpf_kptr_xchg(&global_mask_array_one[0], local);
+	if (prev) {
+		bpf_cpumask_release(prev);
+		err = 3;
+		return 0;
+	}
+
+	bpf_rcu_read_lock();
+	local = global_mask_array_one[0];
+	if (!local) {
+		err = 4;
+		bpf_rcu_read_unlock();
+		return 0;
+	}
+
+	bpf_rcu_read_unlock();
+
+	return 0;
+}
+
+static int _global_mask_array_rcu(struct bpf_cpumask **mask0,
+				  struct bpf_cpumask **mask1)
+{
+	struct bpf_cpumask *local;
+
+	if (!is_test_task())
+		return 0;
+
+	/* Check that the two kptrs in the array work independently */
+
+	local = create_cpumask();
+	if (!local)
+		return 0;
+
+	bpf_rcu_read_lock();
+
+	local = bpf_kptr_xchg(mask0, local);
+	if (local) {
+		err = 1;
+		goto err_exit;
+	}
+
+	/* [<mask 0>, NULL] */
+	if (!*mask0 || *mask1) {
+		err = 2;
+		goto err_exit;
+	}
+
+	local = create_cpumask();
+	if (!local) {
+		err = 9;
+		goto err_exit;
+	}
+
+	local = bpf_kptr_xchg(mask1, local);
+	if (local) {
+		err = 10;
+		goto err_exit;
+	}
+
+	/* [<mask 0>, <mask 1>] */
+	if (!*mask0 || !*mask1 || *mask0 == *mask1) {
+		err = 11;
+		goto err_exit;
+	}
+
+err_exit:
+	if (local)
+		bpf_cpumask_release(local);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_rcu, struct task_struct *task, u64 clone_flags)
+{
+	return _global_mask_array_rcu(&global_mask_array[0], &global_mask_array[1]);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_array_l2_rcu, struct task_struct *task, u64 clone_flags)
+{
+	return _global_mask_array_rcu(&global_mask_array_l2[0][0], &global_mask_array_l2[1][0]);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_nested_rcu, struct task_struct *task, u64 clone_flags)
+{
+	return _global_mask_array_rcu(&global_mask_nested[0].mask, &global_mask_nested[1].mask);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_global_mask_nested_deep_rcu, struct task_struct *task, u64 clone_flags)
+{
+	return _global_mask_array_rcu(&global_mask_nested_deep.ptrs[0].m.mask,
+				      &global_mask_nested_deep.ptrs[1].m.mask);
+}
+
 SEC("tp_btf/task_newtask")
 int BPF_PROG(test_cpumask_weight, struct task_struct *task, u64 clone_flags)
 {
-- 
2.34.1


* [PATCH bpf-next v5 8/9] selftests/bpf: Test global bpf_rb_root arrays and fields in nested struct types.
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
                   ` (6 preceding siblings ...)
  2024-05-10  1:13 ` [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  2024-05-10  1:13 ` [PATCH bpf-next v5 9/9] selftests/bpf: Test global bpf_list_head arrays Kui-Feng Lee
  8 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee

Make sure global arrays of bpf_rb_root and fields of bpf_rb_root in nested
struct types work correctly.

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 .../testing/selftests/bpf/prog_tests/rbtree.c | 47 +++++++++++
 tools/testing/selftests/bpf/progs/rbtree.c    | 77 +++++++++++++++++++
 2 files changed, 124 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/rbtree.c b/tools/testing/selftests/bpf/prog_tests/rbtree.c
index e9300c96607d..9818f06c97c5 100644
--- a/tools/testing/selftests/bpf/prog_tests/rbtree.c
+++ b/tools/testing/selftests/bpf/prog_tests/rbtree.c
@@ -31,6 +31,28 @@ static void test_rbtree_add_nodes(void)
 	rbtree__destroy(skel);
 }
 
+static void test_rbtree_add_nodes_nested(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+		    .repeat = 1,
+	);
+	struct rbtree *skel;
+	int ret;
+
+	skel = rbtree__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "rbtree__open_and_load"))
+		return;
+
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.rbtree_add_nodes_nested), &opts);
+	ASSERT_OK(ret, "rbtree_add_nodes_nested run");
+	ASSERT_OK(opts.retval, "rbtree_add_nodes_nested retval");
+	ASSERT_EQ(skel->data->less_callback_ran, 1, "rbtree_add_nodes_nested less_callback_ran");
+
+	rbtree__destroy(skel);
+}
+
 static void test_rbtree_add_and_remove(void)
 {
 	LIBBPF_OPTS(bpf_test_run_opts, opts,
@@ -53,6 +75,27 @@ static void test_rbtree_add_and_remove(void)
 	rbtree__destroy(skel);
 }
 
+static void test_rbtree_add_and_remove_array(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+		    .repeat = 1,
+	);
+	struct rbtree *skel;
+	int ret;
+
+	skel = rbtree__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "rbtree__open_and_load"))
+		return;
+
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.rbtree_add_and_remove_array), &opts);
+	ASSERT_OK(ret, "rbtree_add_and_remove_array");
+	ASSERT_OK(opts.retval, "rbtree_add_and_remove_array retval");
+
+	rbtree__destroy(skel);
+}
+
 static void test_rbtree_first_and_remove(void)
 {
 	LIBBPF_OPTS(bpf_test_run_opts, opts,
@@ -104,8 +147,12 @@ void test_rbtree_success(void)
 {
 	if (test__start_subtest("rbtree_add_nodes"))
 		test_rbtree_add_nodes();
+	if (test__start_subtest("rbtree_add_nodes_nested"))
+		test_rbtree_add_nodes_nested();
 	if (test__start_subtest("rbtree_add_and_remove"))
 		test_rbtree_add_and_remove();
+	if (test__start_subtest("rbtree_add_and_remove_array"))
+		test_rbtree_add_and_remove_array();
 	if (test__start_subtest("rbtree_first_and_remove"))
 		test_rbtree_first_and_remove();
 	if (test__start_subtest("rbtree_api_release_aliasing"))
diff --git a/tools/testing/selftests/bpf/progs/rbtree.c b/tools/testing/selftests/bpf/progs/rbtree.c
index b09f4fffe57c..a3620c15c136 100644
--- a/tools/testing/selftests/bpf/progs/rbtree.c
+++ b/tools/testing/selftests/bpf/progs/rbtree.c
@@ -13,6 +13,15 @@ struct node_data {
 	struct bpf_rb_node node;
 };
 
+struct root_nested_inner {
+	struct bpf_spin_lock glock;
+	struct bpf_rb_root root __contains(node_data, node);
+};
+
+struct root_nested {
+	struct root_nested_inner inner;
+};
+
 long less_callback_ran = -1;
 long removed_key = -1;
 long first_data[2] = {-1, -1};
@@ -20,6 +29,9 @@ long first_data[2] = {-1, -1};
 #define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
 private(A) struct bpf_spin_lock glock;
 private(A) struct bpf_rb_root groot __contains(node_data, node);
+private(A) struct bpf_rb_root groot_array[2] __contains(node_data, node);
+private(A) struct bpf_rb_root groot_array_one[1] __contains(node_data, node);
+private(B) struct root_nested groot_nested;
 
 static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
 {
@@ -71,6 +83,12 @@ long rbtree_add_nodes(void *ctx)
 	return __add_three(&groot, &glock);
 }
 
+SEC("tc")
+long rbtree_add_nodes_nested(void *ctx)
+{
+	return __add_three(&groot_nested.inner.root, &groot_nested.inner.glock);
+}
+
 SEC("tc")
 long rbtree_add_and_remove(void *ctx)
 {
@@ -109,6 +127,65 @@ long rbtree_add_and_remove(void *ctx)
 	return 1;
 }
 
+SEC("tc")
+long rbtree_add_and_remove_array(void *ctx)
+{
+	struct bpf_rb_node *res1 = NULL, *res2 = NULL, *res3 = NULL;
+	struct node_data *nodes[3][2] = {{NULL, NULL}, {NULL, NULL}, {NULL, NULL}};
+	struct node_data *n;
+	long k1 = -1, k2 = -1, k3 = -1;
+	int i, j;
+
+	for (i = 0; i < 3; i++) {
+		for (j = 0; j < 2; j++) {
+			nodes[i][j] = bpf_obj_new(typeof(*nodes[i][j]));
+			if (!nodes[i][j])
+				goto err_out;
+			nodes[i][j]->key = i * 2 + j;
+		}
+	}
+
+	bpf_spin_lock(&glock);
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < 2; j++)
+			bpf_rbtree_add(&groot_array[i], &nodes[i][j]->node, less);
+	for (j = 0; j < 2; j++)
+		bpf_rbtree_add(&groot_array_one[0], &nodes[2][j]->node, less);
+	res1 = bpf_rbtree_remove(&groot_array[0], &nodes[0][0]->node);
+	res2 = bpf_rbtree_remove(&groot_array[1], &nodes[1][0]->node);
+	res3 = bpf_rbtree_remove(&groot_array_one[0], &nodes[2][0]->node);
+	bpf_spin_unlock(&glock);
+
+	if (res1) {
+		n = container_of(res1, struct node_data, node);
+		k1 = n->key;
+		bpf_obj_drop(n);
+	}
+	if (res2) {
+		n = container_of(res2, struct node_data, node);
+		k2 = n->key;
+		bpf_obj_drop(n);
+	}
+	if (res3) {
+		n = container_of(res3, struct node_data, node);
+		k3 = n->key;
+		bpf_obj_drop(n);
+	}
+	if (k1 != 0 || k2 != 2 || k3 != 4)
+		return 2;
+
+	return 0;
+
+err_out:
+	for (i = 0; i < 3; i++) {
+		for (j = 0; j < 2; j++) {
+			if (nodes[i][j])
+				bpf_obj_drop(nodes[i][j]);
+		}
+	}
+	return 1;
+}
+
 SEC("tc")
 long rbtree_first_and_remove(void *ctx)
 {
-- 
2.34.1


* [PATCH bpf-next v5 9/9] selftests/bpf: Test global bpf_list_head arrays.
  2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
                   ` (7 preceding siblings ...)
  2024-05-10  1:13 ` [PATCH bpf-next v5 8/9] selftests/bpf: Test global bpf_rb_root arrays and fields in nested struct types Kui-Feng Lee
@ 2024-05-10  1:13 ` Kui-Feng Lee
  8 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10  1:13 UTC (permalink / raw
  To: bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng, Kui-Feng Lee

Make sure global arrays of bpf_list_head and bpf_list_head fields in
nested struct types work correctly.

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
---
 .../selftests/bpf/prog_tests/linked_list.c    | 12 ++++++
 .../testing/selftests/bpf/progs/linked_list.c | 42 +++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 2fb89de63bd2..77d07e0a4a55 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -183,6 +183,18 @@ static void test_linked_list_success(int mode, bool leave_in_map)
 	if (!leave_in_map)
 		clear_fields(skel->maps.bss_A);
 
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_push_pop_nested), &opts);
+	ASSERT_OK(ret, "global_list_push_pop_nested");
+	ASSERT_OK(opts.retval, "global_list_push_pop_nested retval");
+	if (!leave_in_map)
+		clear_fields(skel->maps.bss_A);
+
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_array_push_pop), &opts);
+	ASSERT_OK(ret, "global_list_array_push_pop");
+	ASSERT_OK(opts.retval, "global_list_array_push_pop retval");
+	if (!leave_in_map)
+		clear_fields(skel->maps.bss_A);
+
 	if (mode == PUSH_POP)
 		goto end;
 
diff --git a/tools/testing/selftests/bpf/progs/linked_list.c b/tools/testing/selftests/bpf/progs/linked_list.c
index 26205ca80679..f69bf3e30321 100644
--- a/tools/testing/selftests/bpf/progs/linked_list.c
+++ b/tools/testing/selftests/bpf/progs/linked_list.c
@@ -11,6 +11,22 @@
 
 #include "linked_list.h"
 
+struct head_nested_inner {
+	struct bpf_spin_lock lock;
+	struct bpf_list_head head __contains(foo, node2);
+};
+
+struct head_nested {
+	int dummy;
+	struct head_nested_inner inner;
+};
+
+private(C) struct bpf_spin_lock glock_c;
+private(C) struct bpf_list_head ghead_array[2] __contains(foo, node2);
+private(C) struct bpf_list_head ghead_array_one[1] __contains(foo, node2);
+
+private(D) struct head_nested ghead_nested;
+
 static __always_inline
 int list_push_pop(struct bpf_spin_lock *lock, struct bpf_list_head *head, bool leave_in_map)
 {
@@ -309,6 +325,32 @@ int global_list_push_pop(void *ctx)
 	return test_list_push_pop(&glock, &ghead);
 }
 
+SEC("tc")
+int global_list_push_pop_nested(void *ctx)
+{
+	return test_list_push_pop(&ghead_nested.inner.lock, &ghead_nested.inner.head);
+}
+
+SEC("tc")
+int global_list_array_push_pop(void *ctx)
+{
+	int r;
+
+	r = test_list_push_pop(&glock_c, &ghead_array[0]);
+	if (r)
+		return r;
+
+	r = test_list_push_pop(&glock_c, &ghead_array[1]);
+	if (r)
+		return r;
+
+	/* An array with only one element is a special case: the verifier
+	 * treats it just like a plain bpf_list_head variable, not an
+	 * array.
+	 */
+	return test_list_push_pop(&glock_c, &ghead_array_one[0]);
+}
+
 SEC("tc")
 int map_list_push_pop_multiple(void *ctx)
 {
-- 
2.34.1


* Re: [PATCH bpf-next v5 6/9] bpf: limit the number of levels of a nested struct type.
  2024-05-10  1:13 ` [PATCH bpf-next v5 6/9] bpf: limit the number of levels of a nested struct type Kui-Feng Lee
@ 2024-05-10  2:37   ` Eduard Zingerman
  0 siblings, 0 replies; 22+ messages in thread
From: Eduard Zingerman @ 2024-05-10  2:37 UTC (permalink / raw
  To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng

On Thu, 2024-05-09 at 18:13 -0700, Kui-Feng Lee wrote:
> Limit the number of levels looking into struct types to avoid running out
> of stack space.
> 
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10  1:13 ` [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields Kui-Feng Lee
@ 2024-05-10 10:03   ` Eduard Zingerman
  2024-05-10 21:59     ` Kui-Feng Lee
  0 siblings, 1 reply; 22+ messages in thread
From: Eduard Zingerman @ 2024-05-10 10:03 UTC (permalink / raw
  To: Kui-Feng Lee, bpf, ast, martin.lau, song, kernel-team, andrii
  Cc: sinquersw, kuifeng

On Thu, 2024-05-09 at 18:13 -0700, Kui-Feng Lee wrote:
> Make sure that BPF programs can declare global kptr arrays and kptr fields
> in struct types that is the type of a global variable or the type of a
> nested descendant field in a global variable.
> 
> An array with only one element is special case, that it treats the element
> like a non-array kptr field. Nested arrays are also tested to ensure they
> are handled properly.
> 
> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
> ---
>  .../selftests/bpf/prog_tests/cpumask.c        |   5 +
>  .../selftests/bpf/progs/cpumask_success.c     | 133 ++++++++++++++++++
>  2 files changed, 138 insertions(+)
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/cpumask.c b/tools/testing/selftests/bpf/prog_tests/cpumask.c
> index ecf89df78109..2570bd4b0cb2 100644
> --- a/tools/testing/selftests/bpf/prog_tests/cpumask.c
> +++ b/tools/testing/selftests/bpf/prog_tests/cpumask.c
> @@ -18,6 +18,11 @@ static const char * const cpumask_success_testcases[] = {
>  	"test_insert_leave",
>  	"test_insert_remove_release",
>  	"test_global_mask_rcu",
> +	"test_global_mask_array_one_rcu",
> +	"test_global_mask_array_rcu",
> +	"test_global_mask_array_l2_rcu",
> +	"test_global_mask_nested_rcu",
> +	"test_global_mask_nested_deep_rcu",
>  	"test_cpumask_weight",
>  };
>  
> diff --git a/tools/testing/selftests/bpf/progs/cpumask_success.c b/tools/testing/selftests/bpf/progs/cpumask_success.c
> index 7a1e64c6c065..0b6383fa9958 100644
> --- a/tools/testing/selftests/bpf/progs/cpumask_success.c
> +++ b/tools/testing/selftests/bpf/progs/cpumask_success.c
> @@ -12,6 +12,25 @@ char _license[] SEC("license") = "GPL";
>  
>  int pid, nr_cpus;
>  
> +struct kptr_nested {
> +	struct bpf_cpumask __kptr * mask;
> +};
> +
> +struct kptr_nested_mid {
> +	int dummy;
> +	struct kptr_nested m;
> +};
> +
> +struct kptr_nested_deep {
> +	struct kptr_nested_mid ptrs[2];
> +};

For the sake of completeness, would it be possible to create a test
case where there are several struct arrays following each other?
E.g. as below:

struct foo {
  ... __kptr *a;
  ... __kptr *b;
}

struct bar {
  ... __kptr *c;
}

struct {
  struct foo foos[3];
  struct bar bars[2];
}

Just to check that offset is propagated correctly.

Also, in the tests below you check that a pointer to some object could
be put into an array at different indexes. Tbh, I find it not very
interesting if we want to check that offsets are correct.
Would it be possible to create an array of object kptrs,
put specific references at specific indexes and somehow check which
object ended up where? (not necessarily 'bpf_cpumask').

> +
> +private(MASK) static struct bpf_cpumask __kptr * global_mask_array[2];
> +private(MASK) static struct bpf_cpumask __kptr * global_mask_array_l2[2][1];
> +private(MASK) static struct bpf_cpumask __kptr * global_mask_array_one[1];
> +private(MASK) static struct kptr_nested global_mask_nested[2];
> +private(MASK) static struct kptr_nested_deep global_mask_nested_deep;
> +
>  static bool is_test_task(void)
>  {
>  	int cur_pid = bpf_get_current_pid_tgid() >> 32;
> @@ -460,6 +479,120 @@ int BPF_PROG(test_global_mask_rcu, struct task_struct *task, u64 clone_flags)
>  	return 0;
>  }
>  
> +SEC("tp_btf/task_newtask")
> +int BPF_PROG(test_global_mask_array_one_rcu, struct task_struct *task, u64 clone_flags)
> +{
> +	struct bpf_cpumask *local, *prev;
> +
> +	if (!is_test_task())
> +		return 0;
> +
> +	/* Kptr arrays with one element are special cased, being treated
> +	 * just like a single pointer.
> +	 */
> +
> +	local = create_cpumask();
> +	if (!local)
> +		return 0;
> +
> +	prev = bpf_kptr_xchg(&global_mask_array_one[0], local);
> +	if (prev) {
> +		bpf_cpumask_release(prev);
> +		err = 3;
> +		return 0;
> +	}
> +
> +	bpf_rcu_read_lock();
> +	local = global_mask_array_one[0];
> +	if (!local) {
> +		err = 4;
> +		bpf_rcu_read_unlock();
> +		return 0;
> +	}
> +
> +	bpf_rcu_read_unlock();
> +
> +	return 0;
> +}
> +
> +static int _global_mask_array_rcu(struct bpf_cpumask **mask0,
> +				  struct bpf_cpumask **mask1)
> +{
> +	struct bpf_cpumask *local;
> +
> +	if (!is_test_task())
> +		return 0;
> +
> +	/* Check if two kptrs in the array work and independently */
> +
> +	local = create_cpumask();
> +	if (!local)
> +		return 0;
> +
> +	bpf_rcu_read_lock();
> +
> +	local = bpf_kptr_xchg(mask0, local);
> +	if (local) {
> +		err = 1;
> +		goto err_exit;
> +	}
> +
> +	/* [<mask 0>, NULL] */
> +	if (!*mask0 || *mask1) {
> +		err = 2;
> +		goto err_exit;
> +	}
> +
> +	local = create_cpumask();
> +	if (!local) {
> +		err = 9;
> +		goto err_exit;
> +	}
> +
> +	local = bpf_kptr_xchg(mask1, local);
> +	if (local) {
> +		err = 10;
> +		goto err_exit;
> +	}
> +
> +	/* [<mask 0>, <mask 1>] */
> +	if (!*mask0 || !*mask1 || *mask0 == *mask1) {
> +		err = 11;
> +		goto err_exit;
> +	}
> +
> +err_exit:
> +	if (local)
> +		bpf_cpumask_release(local);
> +	bpf_rcu_read_unlock();
> +	return 0;
> +}
> +
> +SEC("tp_btf/task_newtask")
> +int BPF_PROG(test_global_mask_array_rcu, struct task_struct *task, u64 clone_flags)
> +{
> +	return _global_mask_array_rcu(&global_mask_array[0], &global_mask_array[1]);
> +}
> +
> +SEC("tp_btf/task_newtask")
> +int BPF_PROG(test_global_mask_array_l2_rcu, struct task_struct *task, u64 clone_flags)
> +{
> +	return _global_mask_array_rcu(&global_mask_array_l2[0][0], &global_mask_array_l2[1][0]);
> +}
> +
> +SEC("tp_btf/task_newtask")
> +int BPF_PROG(test_global_mask_nested_rcu, struct task_struct *task, u64 clone_flags)
> +{
> +	return _global_mask_array_rcu(&global_mask_nested[0].mask, &global_mask_nested[1].mask);
> +}
> +
> +SEC("tp_btf/task_newtask")
> +int BPF_PROG(test_global_mask_nested_deep_rcu, struct task_struct *task, u64 clone_flags)
> +{
> +	return _global_mask_array_rcu(&global_mask_nested_deep.ptrs[0].m.mask,
> +				      &global_mask_nested_deep.ptrs[1].m.mask);
> +}
> +
>  SEC("tp_btf/task_newtask")
>  int BPF_PROG(test_cpumask_weight, struct task_struct *task, u64 clone_flags)
>  {


* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 10:03   ` Eduard Zingerman
@ 2024-05-10 21:59     ` Kui-Feng Lee
  2024-05-10 22:08       ` Eduard Zingerman
  0 siblings, 1 reply; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10 21:59 UTC (permalink / raw
  To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng



On 5/10/24 03:03, Eduard Zingerman wrote:
> On Thu, 2024-05-09 at 18:13 -0700, Kui-Feng Lee wrote:
>> Make sure that BPF programs can declare global kptr arrays and kptr fields
>> in struct types that is the type of a global variable or the type of a
>> nested descendant field in a global variable.
>>
>> An array with only one element is special case, that it treats the element
>> like a non-array kptr field. Nested arrays are also tested to ensure they
>> are handled properly.
>>
>> Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
>> ---
>>   .../selftests/bpf/prog_tests/cpumask.c        |   5 +
>>   .../selftests/bpf/progs/cpumask_success.c     | 133 ++++++++++++++++++
>>   2 files changed, 138 insertions(+)
>>
>> diff --git a/tools/testing/selftests/bpf/prog_tests/cpumask.c b/tools/testing/selftests/bpf/prog_tests/cpumask.c
>> index ecf89df78109..2570bd4b0cb2 100644
>> --- a/tools/testing/selftests/bpf/prog_tests/cpumask.c
>> +++ b/tools/testing/selftests/bpf/prog_tests/cpumask.c
>> @@ -18,6 +18,11 @@ static const char * const cpumask_success_testcases[] = {
>>   	"test_insert_leave",
>>   	"test_insert_remove_release",
>>   	"test_global_mask_rcu",
>> +	"test_global_mask_array_one_rcu",
>> +	"test_global_mask_array_rcu",
>> +	"test_global_mask_array_l2_rcu",
>> +	"test_global_mask_nested_rcu",
>> +	"test_global_mask_nested_deep_rcu",
>>   	"test_cpumask_weight",
>>   };
>>   
>> diff --git a/tools/testing/selftests/bpf/progs/cpumask_success.c b/tools/testing/selftests/bpf/progs/cpumask_success.c
>> index 7a1e64c6c065..0b6383fa9958 100644
>> --- a/tools/testing/selftests/bpf/progs/cpumask_success.c
>> +++ b/tools/testing/selftests/bpf/progs/cpumask_success.c
>> @@ -12,6 +12,25 @@ char _license[] SEC("license") = "GPL";
>>   
>>   int pid, nr_cpus;
>>   
>> +struct kptr_nested {
>> +	struct bpf_cpumask __kptr * mask;
>> +};
>> +
>> +struct kptr_nested_mid {
>> +	int dummy;
>> +	struct kptr_nested m;
>> +};
>> +
>> +struct kptr_nested_deep {
>> +	struct kptr_nested_mid ptrs[2];
>> +};
> 
> For the sake of completeness, would it be possible to create a test
> case where there are several struct arrays following each other?
> E.g. as below:
> 
> struct foo {
>    ... __kptr *a;
>    ... __kptr *b;
> }
> 
> struct bar {
>    ... __kptr *c;
> }
> 
> struct {
>    struct foo foos[3];
>    struct bar bars[2];
> }
> 
> Just to check that offset is propagated correctly.

Sure!

> 
> Also, in the tests below you check that a pointer to some object could
> be put into an array at different indexes. Tbh, I find it not very
> interesting if we want to check that offsets are correct.
> Would it be possible to create an array of object kptrs,
> put specific references at specific indexes and somehow check which
> object ended up where? (not necessarily 'bpf_cpumask').

Do you mean checking the indexes with code like the following?

  if (array[0] != ref0 || array[1] != ref1 || array[2] != ref2 ....)
    return err;

> 
>> +
>> +private(MASK) static struct bpf_cpumask __kptr * global_mask_array[2];
>> +private(MASK) static struct bpf_cpumask __kptr * global_mask_array_l2[2][1];
>> +private(MASK) static struct bpf_cpumask __kptr * global_mask_array_one[1];
>> +private(MASK) static struct kptr_nested global_mask_nested[2];
>> +private(MASK) static struct kptr_nested_deep global_mask_nested_deep;
>> +
>>   static bool is_test_task(void)
>>   {
>>   	int cur_pid = bpf_get_current_pid_tgid() >> 32;
>> @@ -460,6 +479,120 @@ int BPF_PROG(test_global_mask_rcu, struct task_struct *task, u64 clone_flags)
>>   	return 0;
>>   }
>>   
>> +SEC("tp_btf/task_newtask")
>> +int BPF_PROG(test_global_mask_array_one_rcu, struct task_struct *task, u64 clone_flags)
>> +{
>> +	struct bpf_cpumask *local, *prev;
>> +
>> +	if (!is_test_task())
>> +		return 0;
>> +
>> +	/* Kptr arrays with one element are special cased, being treated
>> +	 * just like a single pointer.
>> +	 */
>> +
>> +	local = create_cpumask();
>> +	if (!local)
>> +		return 0;
>> +
>> +	prev = bpf_kptr_xchg(&global_mask_array_one[0], local);
>> +	if (prev) {
>> +		bpf_cpumask_release(prev);
>> +		err = 3;
>> +		return 0;
>> +	}
>> +
>> +	bpf_rcu_read_lock();
>> +	local = global_mask_array_one[0];
>> +	if (!local) {
>> +		err = 4;
>> +		bpf_rcu_read_unlock();
>> +		return 0;
>> +	}
>> +
>> +	bpf_rcu_read_unlock();
>> +
>> +	return 0;
>> +}
>> +
>> +static int _global_mask_array_rcu(struct bpf_cpumask **mask0,
>> +				  struct bpf_cpumask **mask1)
>> +{
>> +	struct bpf_cpumask *local;
>> +
>> +	if (!is_test_task())
>> +		return 0;
>> +
>> +	/* Check if two kptrs in the array work and independently */
>> +
>> +	local = create_cpumask();
>> +	if (!local)
>> +		return 0;
>> +
>> +	bpf_rcu_read_lock();
>> +
>> +	local = bpf_kptr_xchg(mask0, local);
>> +	if (local) {
>> +		err = 1;
>> +		goto err_exit;
>> +	}
>> +
>> +	/* [<mask 0>, NULL] */
>> +	if (!*mask0 || *mask1) {
>> +		err = 2;
>> +		goto err_exit;
>> +	}
>> +
>> +	local = create_cpumask();
>> +	if (!local) {
>> +		err = 9;
>> +		goto err_exit;
>> +	}
>> +
>> +	local = bpf_kptr_xchg(mask1, local);
>> +	if (local) {
>> +		err = 10;
>> +		goto err_exit;
>> +	}
>> +
>> +	/* [<mask 0>, <mask 1>] */
>> +	if (!*mask0 || !*mask1 || *mask0 == *mask1) {
>> +		err = 11;
>> +		goto err_exit;
>> +	}
>> +
>> +err_exit:
>> +	if (local)
>> +		bpf_cpumask_release(local);
>> +	bpf_rcu_read_unlock();
>> +	return 0;
>> +}
>> +
>> +SEC("tp_btf/task_newtask")
>> +int BPF_PROG(test_global_mask_array_rcu, struct task_struct *task, u64 clone_flags)
>> +{
>> +	return _global_mask_array_rcu(&global_mask_array[0], &global_mask_array[1]);
>> +}
>> +
>> +SEC("tp_btf/task_newtask")
>> +int BPF_PROG(test_global_mask_array_l2_rcu, struct task_struct *task, u64 clone_flags)
>> +{
>> +	return _global_mask_array_rcu(&global_mask_array_l2[0][0], &global_mask_array_l2[1][0]);
>> +}
>> +
>> +SEC("tp_btf/task_newtask")
>> +int BPF_PROG(test_global_mask_nested_rcu, struct task_struct *task, u64 clone_flags)
>> +{
>> +	return _global_mask_array_rcu(&global_mask_nested[0].mask, &global_mask_nested[1].mask);
>> +}
>> +
>> +SEC("tp_btf/task_newtask")
>> +int BPF_PROG(test_global_mask_nested_deep_rcu, struct task_struct *task, u64 clone_flags)
>> +{
>> +	return _global_mask_array_rcu(&global_mask_nested_deep.ptrs[0].m.mask,
>> +				      &global_mask_nested_deep.ptrs[1].m.mask);
>> +}
>> +
>>   SEC("tp_btf/task_newtask")
>>   int BPF_PROG(test_cpumask_weight, struct task_struct *task, u64 clone_flags)
>>   {
> 

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 21:59     ` Kui-Feng Lee
@ 2024-05-10 22:08       ` Eduard Zingerman
  2024-05-10 22:25         ` Kui-Feng Lee
  0 siblings, 1 reply; 22+ messages in thread
From: Eduard Zingerman @ 2024-05-10 22:08 UTC (permalink / raw
  To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng

On Fri, 2024-05-10 at 14:59 -0700, Kui-Feng Lee wrote:
> 
> > For the sake of completeness, would it be possible to create a test
> > case where there are several struct arrays following each other?
> > E.g. as below:
> > 
> > struct foo {
> >    ... __kptr *a;
> >    ... __kptr *b;
> > }
> > 
> > struct bar {
> >    ... __kptr *c;
> > }
> > 
> > struct {
> >    struct foo foos[3];
> >    struct bar bars[2];
> > }
> > 
> > Just to check that offset is propagated correctly.
> 
> Sure!

Great, thank you

> > Also, in the tests below you check that a pointer to some object could
> > be put into an array at different indexes. Tbh, I find it not very
> > interesting if we want to check that offsets are correct.
> > Would it be possible to create an array of object kptrs,
> > put specific references at specific indexes and somehow check which
> > object ended up where? (not necessarily 'bpf_cpumask').
> 
> Do you mean checking index in the way like the following code?
> 
>   if (array[0] != ref0 || array[1] != ref1 || array[2] != ref2 ....)
>     return err;

Probably, but I'd need your help here.
The goal is to verify that the offsets of the __kptr's in the 'info'
array have been set correctly. Where is this information used later on?
E.g. I'd like to trigger some action that "touches" the __kptr at index N
and verify that all the others have not been "touched".
But this "touch" action has to use the offset stored in the 'info'.

[...]

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 22:08       ` Eduard Zingerman
@ 2024-05-10 22:25         ` Kui-Feng Lee
  2024-05-10 22:31           ` Eduard Zingerman
  0 siblings, 1 reply; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10 22:25 UTC (permalink / raw
  To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng



On 5/10/24 15:08, Eduard Zingerman wrote:
> On Fri, 2024-05-10 at 14:59 -0700, Kui-Feng Lee wrote:
>>
>>> For the sake of completeness, would it be possible to create a test
>>> case where there are several struct arrays following each other?
>>> E.g. as below:
>>>
>>> struct foo {
>>>     ... __kptr *a;
>>>     ... __kptr *b;
>>> }
>>>
>>> struct bar {
>>>     ... __kptr *c;
>>> }
>>>
>>> struct {
>>>     struct foo foos[3];
>>>     struct bar bars[2];
>>> }
>>>
>>> Just to check that offset is propagated correctly.
>>
>> Sure!
> 
> Great, thank you
> 
>>> Also, in the tests below you check that a pointer to some object could
>>> be put into an array at different indexes. Tbh, I find it not very
>>> interesting if we want to check that offsets are correct.
>>> Would it be possible to create an array of object kptrs,
>>> put specific references at specific indexes and somehow check which
>>> object ended up where? (not necessarily 'bpf_cpumask').
>>
>> Do you mean checking the indexes in a way like the following code?
>>
>>    if (array[0] != ref0 || array[1] != ref1 || array[2] != ref2 ....)
>>      return err;
> 
> Probably, but I'd need your help here.
> The goal is to verify that the offsets of __kptr's in the 'info' array
> have been set correctly. Where is this information used later on?
> E.g. I'd like to trigger some action that "touches" the __kptr at index N
> and verify that all the others have not been "touched".
> But this "touch" action has to use the offset stored in the 'info'.


They are used for verifying the offsets of instructions.
Let's assume we have an array of size 10.
Then we have 10 infos with 10 different offsets.
And we have a program that includes one instruction for each element,
10 in total, to access the corresponding element.
Each instruction has an offset different from the others, generated by
the compiler. That means the verifier will fail to find an info for
some of the instructions if one or more infos have a wrong offset.



> 
> [...]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 22:25         ` Kui-Feng Lee
@ 2024-05-10 22:31           ` Eduard Zingerman
  2024-05-10 22:53             ` Kui-Feng Lee
  0 siblings, 1 reply; 22+ messages in thread
From: Eduard Zingerman @ 2024-05-10 22:31 UTC (permalink / raw
  To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng

On Fri, 2024-05-10 at 15:25 -0700, Kui-Feng Lee wrote:

> > > > Also, in the tests below you check that a pointer to some object could
> > > > be put into an array at different indexes. Tbh, I find it not very
> > > > interesting if we want to check that offsets are correct.
> > > > Would it be possible to create an array of object kptrs,
> > > > put specific references at specific indexes and somehow check which
> > > > object ended up where? (not necessarily 'bpf_cpumask').
> > > 
> > > Do you mean checking the indexes in a way like the following code?
> > > 
> > >    if (array[0] != ref0 || array[1] != ref1 || array[2] != ref2 ....)
> > >      return err;
> > 
> > Probably, but I'd need your help here.
> > The goal is to verify that the offsets of __kptr's in the 'info' array
> > have been set correctly. Where is this information used later on?
> > E.g. I'd like to trigger some action that "touches" the __kptr at index N
> > and verify that all the others have not been "touched".
> > But this "touch" action has to use the offset stored in the 'info'.
> 
> They are used for verifying the offsets of instructions.
> Let's assume we have an array of size 10.
> Then we have 10 infos with 10 different offsets.
> And we have a program that includes one instruction for each element,
> 10 in total, to access the corresponding element.
> Each instruction has an offset different from the others, generated by
> the compiler. That means the verifier will fail to find an info for
> some of the instructions if one or more infos have a wrong offset.

That's a bit depressing, as there would be no way to check if e.g. all
10 refer to the same offset. Is it possible to trigger printing of the
'info.offset' to verifier log? E.g. via some 'illegal' action.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 22:31           ` Eduard Zingerman
@ 2024-05-10 22:53             ` Kui-Feng Lee
  2024-05-10 22:57               ` Eduard Zingerman
  0 siblings, 1 reply; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10 22:53 UTC (permalink / raw
  To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng



On 5/10/24 15:31, Eduard Zingerman wrote:
> On Fri, 2024-05-10 at 15:25 -0700, Kui-Feng Lee wrote:
> 
>>>>> Also, in the tests below you check that a pointer to some object could
>>>>> be put into an array at different indexes. Tbh, I find it not very
>>>>> interesting if we want to check that offsets are correct.
>>>>> Would it be possible to create an array of object kptrs,
>>>>> put specific references at specific indexes and somehow check which
>>>>> object ended up where? (not necessarily 'bpf_cpumask').
>>>>
>>>> Do you mean checking the indexes in a way like the following code?
>>>>
>>>>     if (array[0] != ref0 || array[1] != ref1 || array[2] != ref2 ....)
>>>>       return err;
>>>
>>> Probably, but I'd need your help here.
>>> The goal is to verify that the offsets of __kptr's in the 'info' array
>>> have been set correctly. Where is this information used later on?
>>> E.g. I'd like to trigger some action that "touches" the __kptr at index N
>>> and verify that all the others have not been "touched".
>>> But this "touch" action has to use the offset stored in the 'info'.
>>
>> They are used for verifying the offsets of instructions.
>> Let's assume we have an array of size 10.
>> Then we have 10 infos with 10 different offsets.
>> And we have a program that includes one instruction for each element,
>> 10 in total, to access the corresponding element.
>> Each instruction has an offset different from the others, generated by
>> the compiler. That means the verifier will fail to find an info for
>> some of the instructions if one or more infos have a wrong offset.
> 
> That's a bit depressing, as there would be no way to check if e.g. all
> 10 refer to the same offset. Is it possible to trigger printing of the
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
How can that happen? Do you mean the compiler does it wrong?


> 'info.offset' to verifier log? E.g. via some 'illegal' action.
Yes if necessary!

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 22:53             ` Kui-Feng Lee
@ 2024-05-10 22:57               ` Eduard Zingerman
  2024-05-10 23:04                 ` Kui-Feng Lee
  0 siblings, 1 reply; 22+ messages in thread
From: Eduard Zingerman @ 2024-05-10 22:57 UTC (permalink / raw
  To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng

On Fri, 2024-05-10 at 15:53 -0700, Kui-Feng Lee wrote:

[...]

> > > > > Do you mean checking the indexes in a way like the following code?
> > > > > 
> > > > >     if (array[0] != ref0 || array[1] != ref1 || array[2] != ref2 ....)
> > > > >       return err;
> > > > 
> > > > Probably, but I'd need your help here.
> > > > The goal is to verify that the offsets of __kptr's in the 'info' array
> > > > have been set correctly. Where is this information used later on?
> > > > E.g. I'd like to trigger some action that "touches" the __kptr at index N
> > > > and verify that all the others have not been "touched".
> > > > But this "touch" action has to use the offset stored in the 'info'.
> > > 
> > > They are used for verifying the offsets of instructions.
> > > Let's assume we have an array of size 10.
> > > Then we have 10 infos with 10 different offsets.
> > > And we have a program that includes one instruction for each element,
> > > 10 in total, to access the corresponding element.
> > > Each instruction has an offset different from the others, generated by
> > > the compiler. That means the verifier will fail to find an info for
> > > some of the instructions if one or more infos have a wrong offset.
> > 
> > That's a bit depressing, as there would be no way to check if e.g. all
> > 10 refer to the same offset. Is it possible to trigger printing of the
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> How can that happen? Do you mean the compiler does it wrong?

No, suppose that 'info.offset' is computed incorrectly because of some
bug in array handling. E.g. all .off fields in the infos have the
same value.

What is the shape of the test that could catch such a bug?

> > 'info.offset' to verifier log? E.g. via some 'illegal' action.
> Yes if necessary!


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 22:57               ` Eduard Zingerman
@ 2024-05-10 23:04                 ` Kui-Feng Lee
  2024-05-10 23:17                   ` Eduard Zingerman
  0 siblings, 1 reply; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-10 23:04 UTC (permalink / raw
  To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng



On 5/10/24 15:57, Eduard Zingerman wrote:
> On Fri, 2024-05-10 at 15:53 -0700, Kui-Feng Lee wrote:
> 
> [...]
> 
>>>>>> Do you mean checking the indexes in a way like the following code?
>>>>>>
>>>>>>      if (array[0] != ref0 || array[1] != ref1 || array[2] != ref2 ....)
>>>>>>        return err;
>>>>>
>>>>> Probably, but I'd need your help here.
>>>>> The goal is to verify that the offsets of __kptr's in the 'info' array
>>>>> have been set correctly. Where is this information used later on?
>>>>> E.g. I'd like to trigger some action that "touches" the __kptr at index N
>>>>> and verify that all the others have not been "touched".
>>>>> But this "touch" action has to use the offset stored in the 'info'.
>>>>
>>>> They are used for verifying the offsets of instructions.
>>>> Let's assume we have an array of size 10.
>>>> Then we have 10 infos with 10 different offsets.
>>>> And we have a program that includes one instruction for each element,
>>>> 10 in total, to access the corresponding element.
>>>> Each instruction has an offset different from the others, generated by
>>>> the compiler. That means the verifier will fail to find an info for
>>>> some of the instructions if one or more infos have a wrong offset.
>>>
>>> That's a bit depressing, as there would be no way to check if e.g. all
>>> 10 refer to the same offset. Is it possible to trigger printing of the
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> How can that happen? Do you mean the compiler does it wrong?
> 
> No, suppose that 'info.offset' is computed incorrectly because of some
> bug in array handling. E.g. all .off fields in the infos have the
> same value.
> 
> What is the shape of the test that could catch such a bug?
> 

I am not sure if I read your question correctly.

For example, we have 3 correct infos:

  [info(offset=0x8), info(offset=0x10), info(offset=0x18)]

And we have a program that includes 3 instructions accessing offsets
0x8, 0x10, and 0x18. (Let's assume these load instructions would be
checked against the infos.)

  load r1, [0x8]
  load r1, [0x10]
  load r1, [0x18]

If everything works as expected, the verifier would accept the program.

Otherwise, like you said, all 3 infos point to the same offset:

  [info(offset=0x8), info(offset=0x8), info(offset=0x8)]

Then the latter two instructions should fail the check.


>>> 'info.offset' to verifier log? E.g. via some 'illegal' action.
>> Yes if necessary!
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 23:04                 ` Kui-Feng Lee
@ 2024-05-10 23:17                   ` Eduard Zingerman
  2024-05-10 23:29                     ` Eduard Zingerman
  0 siblings, 1 reply; 22+ messages in thread
From: Eduard Zingerman @ 2024-05-10 23:17 UTC (permalink / raw
  To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng

On Fri, 2024-05-10 at 16:04 -0700, Kui-Feng Lee wrote:

[...]


> I am not sure if I read your question correctly.
> 
> For example, we have 3 correct infos:
> 
>   [info(offset=0x8), info(offset=0x10), info(offset=0x18)]
> 
> And we have a program that includes 3 instructions accessing offsets
> 0x8, 0x10, and 0x18. (Let's assume these load instructions would be
> checked against the infos.)
> 
>   load r1, [0x8]
>   load r1, [0x10]
>   load r1, [0x18]
> 
> If everything works as expected, the verifier would accept the program.
> 
> Otherwise, like you said, all 3 infos point to the same offset:
> 
>   [info(offset=0x8), info(offset=0x8), info(offset=0x8)]
> 
> Then the latter two instructions should fail the check.

I think it would be the reverse.
If there is no record of special semantics for some offset, the
verifier would treat the load as a regular memory access.

However, there is btf.c:btf_struct_access(), which reports an
error if an offset within a special field is accessed directly:

int btf_struct_access(struct bpf_verifier_log *log,
		      const struct bpf_reg_state *reg,
		      int off, int size, enum bpf_access_type atype __maybe_unused,
		      u32 *next_btf_id, enum bpf_type_flag *flag,
		      const char **field_name)
{
	...
	struct btf_struct_meta *meta;
	struct btf_record *rec;
	int i;

	meta = btf_find_struct_meta(btf, id);
	if (!meta)
		break;
	rec = meta->record;
	for (i = 0; i < rec->cnt; i++) {
		struct btf_field *field = &rec->fields[i];
		u32 offset = field->offset;
		if (off < offset + btf_field_type_size(field->type) && offset < off + size) {
			bpf_log(log,
				"direct access to %s is disallowed\n",
				btf_field_type_name(field->type));
			return -EACCES;
		}
	}
	break;
}

So it looks like we need a test with the following structure:

- a global definition using an array, e.g. of size 3;
- program #1 does a direct access at the offset of element #1, expecting a load-time error message;
- program #2 does a direct access at the offset of element #2, expecting a load-time error message;
- program #3 does a direct access at the offset of element #3, expecting a load-time error message.

If any of the offsets is computed incorrectly, the corresponding error
message will not be printed.

(These could be packed as progs/verifier_*.c tests.)
Plus some similar tests with different levels of nested arrays and structures.
But this looks a bit ugly/bulky.
Wdyt?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 23:17                   ` Eduard Zingerman
@ 2024-05-10 23:29                     ` Eduard Zingerman
  2024-05-20 15:55                       ` Kui-Feng Lee
  0 siblings, 1 reply; 22+ messages in thread
From: Eduard Zingerman @ 2024-05-10 23:29 UTC (permalink / raw
  To: Kui-Feng Lee, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng

On Fri, 2024-05-10 at 16:17 -0700, Eduard Zingerman wrote:
> On Fri, 2024-05-10 at 16:04 -0700, Kui-Feng Lee wrote:
> 
> [...]
> 
> 
> > I am not sure if I read your question correctly.
> > 
> > For example, we have 3 correct infos:
> > 
> >   [info(offset=0x8), info(offset=0x10), info(offset=0x18)]
> > 
> > And we have a program that includes 3 instructions accessing offsets
> > 0x8, 0x10, and 0x18. (Let's assume these load instructions would be
> > checked against the infos.)
> > 
> >   load r1, [0x8]
> >   load r1, [0x10]
> >   load r1, [0x18]
> > 
> > If everything works as expected, the verifier would accept the program.
> > 
> > Otherwise, like you said, all 3 infos point to the same offset:
> > 
> >   [info(offset=0x8), info(offset=0x8), info(offset=0x8)]
> > 
> > Then the latter two instructions should fail the check.

Ok, what you are saying is possible not with a load but with some kfunc
that accepts a special pointer. E.g. when verifier.c:check_kfunc_args()
expects an argument of type KF_ARG_PTR_TO_LIST_HEAD, it reports an
error if the special field is not found.

So the structure of the test would be:
- define a nested data structure with list heads at some of the leaves;
- in the BPF program, call a kfunc accessing each of the list heads;
- if all offsets are computed correctly, there is no load-time error;
- this is a load-time test, no need to actually run the BPF program.

[...]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  2024-05-10 23:29                     ` Eduard Zingerman
@ 2024-05-20 15:55                       ` Kui-Feng Lee
  0 siblings, 0 replies; 22+ messages in thread
From: Kui-Feng Lee @ 2024-05-20 15:55 UTC (permalink / raw
  To: Eduard Zingerman, Kui-Feng Lee, bpf, ast, martin.lau, song,
	kernel-team, andrii
  Cc: kuifeng



On 5/10/24 16:29, Eduard Zingerman wrote:
> On Fri, 2024-05-10 at 16:17 -0700, Eduard Zingerman wrote:
>> On Fri, 2024-05-10 at 16:04 -0700, Kui-Feng Lee wrote:
>>
>> [...]
>>
>>
>>> I am not sure if I read your question correctly.
>>>
>>> For example, we have 3 correct infos:
>>>
>>>    [info(offset=0x8), info(offset=0x10), info(offset=0x18)]
>>>
>>> And we have a program that includes 3 instructions accessing offsets
>>> 0x8, 0x10, and 0x18. (Let's assume these load instructions would be
>>> checked against the infos.)
>>>
>>>    load r1, [0x8]
>>>    load r1, [0x10]
>>>    load r1, [0x18]
>>>
>>> If everything works as expected, the verifier would accept the program.
>>>
>>> Otherwise, like you said, all 3 infos point to the same offset:
>>>
>>>    [info(offset=0x8), info(offset=0x8), info(offset=0x8)]
>>>
>>> Then the latter two instructions should fail the check.
> 
> Ok, what you are saying is possible not with load but with some kfunc
> that accepts a special pointer. E.g. when verifier.c:check_kfunc_args()
> expects an argument of KF_ARG_PTR_TO_LIST_HEAD type it would report an
> error if special field is not found.
> 
> So the structure of the test would be:
> - define a nested data structure with list head at some leafs;
> - in the BPF program call a kfunc accessing each of the list heads;
> - if all offsets are computed correctly there would be no load time error;
> - this is a load time test, no need to actually run the BPF program.
> 
> [...]

Yes, that is what I meant.
Sorry for the late reply.

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2024-05-20 15:55 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-10  1:13 [PATCH bpf-next v5 0/9] Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 1/9] bpf: Remove unnecessary checks on the offset of btf_field Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 2/9] bpf: Remove unnecessary call to btf_field_type_size() Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 3/9] bpf: refactor btf_find_struct_field() and btf_find_datasec_var() Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 4/9] bpf: create repeated fields for arrays Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 5/9] bpf: look into the types of the fields of a struct type recursively Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 6/9] bpf: limit the number of levels of a nested struct type Kui-Feng Lee
2024-05-10  2:37   ` Eduard Zingerman
2024-05-10  1:13 ` [PATCH bpf-next v5 7/9] selftests/bpf: Test kptr arrays and kptrs in nested struct fields Kui-Feng Lee
2024-05-10 10:03   ` Eduard Zingerman
2024-05-10 21:59     ` Kui-Feng Lee
2024-05-10 22:08       ` Eduard Zingerman
2024-05-10 22:25         ` Kui-Feng Lee
2024-05-10 22:31           ` Eduard Zingerman
2024-05-10 22:53             ` Kui-Feng Lee
2024-05-10 22:57               ` Eduard Zingerman
2024-05-10 23:04                 ` Kui-Feng Lee
2024-05-10 23:17                   ` Eduard Zingerman
2024-05-10 23:29                     ` Eduard Zingerman
2024-05-20 15:55                       ` Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 8/9] selftests/bpf: Test global bpf_rb_root arrays and fields in nested struct types Kui-Feng Lee
2024-05-10  1:13 ` [PATCH bpf-next v5 9/9] selftests/bpf: Test global bpf_list_head arrays Kui-Feng Lee

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).