LKML Archive mirror
* [PATCH v2 0/2] md: simplify md_seq_ops
@ 2023-09-26  2:58 Yu Kuai
  2023-09-26  2:58 ` [PATCH v2 1/2] md: factor out a new helper to put mddev Yu Kuai
  2023-09-26  2:58 ` [PATCH v2 2/2] md: simplify md_seq_ops Yu Kuai
  0 siblings, 2 replies; 9+ messages in thread
From: Yu Kuai @ 2023-09-26  2:58 UTC
  To: mariusz.tkaczyk, xni, song
  Cc: linux-raid, linux-kernel, yukuai3, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

Changes in v2:
 - don't hold the lock while showing mddev in md_seq_show
 - add patch 1
 - add commit message

Yu Kuai (2):
  md: factor out a new helper to put mddev
  md: simplify md_seq_ops

 drivers/md/md.c | 117 +++++++++++++++---------------------------------
 1 file changed, 36 insertions(+), 81 deletions(-)

-- 
2.39.2



* [PATCH v2 1/2] md: factor out a new helper to put mddev
  2023-09-26  2:58 [PATCH v2 0/2] md: simplify md_seq_ops Yu Kuai
@ 2023-09-26  2:58 ` Yu Kuai
  2023-09-26 12:45   ` Mariusz Tkaczyk
  2023-09-27  0:15   ` Song Liu
  2023-09-26  2:58 ` [PATCH v2 2/2] md: simplify md_seq_ops Yu Kuai
  1 sibling, 2 replies; 9+ messages in thread
From: Yu Kuai @ 2023-09-26  2:58 UTC
  To: mariusz.tkaczyk, xni, song
  Cc: linux-raid, linux-kernel, yukuai3, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

There are no functional changes. The new helper will still hold
'all_mddevs_lock' after putting mddev, and it will be used to simplify
md_seq_ops.
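
Roughly, the intended calling pattern looks like this (an illustrative
sketch of how patch 2 will use the helper, not the actual code):

	spin_lock(&all_mddevs_lock);
	/* ... pick an mddev from all_mddevs ... */
	if (!mddev_get(mddev))
		return 0;
	spin_unlock(&all_mddevs_lock);	/* drop the lock while using mddev */

	/* ... show this mddev ... */

	__mddev_put(mddev, true);	/* put the reference; returns with
					 * all_mddevs_lock held again, so the
					 * list walk can continue safely */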

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 10cb4dfbf4ae..a5ef6f7da8ec 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
 
 static void mddev_delayed_delete(struct work_struct *ws);
 
-void mddev_put(struct mddev *mddev)
+static void __mddev_put(struct mddev *mddev, bool locked)
 {
-	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
+	if (locked) {
+		spin_lock(&all_mddevs_lock);
+		if (!atomic_dec_and_test(&mddev->active))
+			return;
+	} else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
 		return;
+
 	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
 	    mddev->ctime == 0 && !mddev->hold_active) {
 		/* Array is not configured at all, and not held active,
@@ -633,7 +638,14 @@ void mddev_put(struct mddev *mddev)
 		 */
 		queue_work(md_misc_wq, &mddev->del_work);
 	}
-	spin_unlock(&all_mddevs_lock);
+
+	if (!locked)
+		spin_unlock(&all_mddevs_lock);
+}
+
+void mddev_put(struct mddev *mddev)
+{
+	__mddev_put(mddev, false);
 }
 
 static void md_safemode_timeout(struct timer_list *t);
-- 
2.39.2



* [PATCH v2 2/2] md: simplify md_seq_ops
  2023-09-26  2:58 [PATCH v2 0/2] md: simplify md_seq_ops Yu Kuai
  2023-09-26  2:58 ` [PATCH v2 1/2] md: factor out a new helper to put mddev Yu Kuai
@ 2023-09-26  2:58 ` Yu Kuai
  2023-09-26 14:09   ` Mariusz Tkaczyk
  1 sibling, 1 reply; 9+ messages in thread
From: Yu Kuai @ 2023-09-26  2:58 UTC
  To: mariusz.tkaczyk, xni, song
  Cc: linux-raid, linux-kernel, yukuai3, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

Before this patch, the implementation is hacky and hard to understand:

1) md_seq_start sets pos to 1;
2) md_seq_show finds pos is 1, then prints Personalities;
3) md_seq_next finds pos is 1, then updates pos to the first mddev;
4) md_seq_show finds pos is not 1 or 2, shows the mddev;
5) md_seq_next finds pos is not 1 or 2, updates pos to the next mddev;
6) loop 4-5 until the last mddev, then md_seq_next updates pos to 2;
7) md_seq_show finds pos is 2, then prints unused devices;
8) md_seq_next finds pos is 2, stops;

This patch removes the magic values and uses seq_list_start/next/stop()
directly, moving the printing of "Personalities" to md_sep_start() and
of "unused devices" to md_seq_stop():

1) md_seq_start prints Personalities, then sets pos to the first mddev;
2) md_seq_show shows the mddev;
3) md_seq_next updates pos to the next mddev;
4) loop 2-3 until the last mddev;
5) md_seq_stop prints unused devices;
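
For reference, the seq_file core drives these callbacks roughly like
this (a simplified sketch of the fs/seq_file.c read path, not the exact
code):

	void *v = ops->start(seq, &pos);

	while (v) {
		if (ops->show(seq, v) < 0)
			break;
		v = ops->next(seq, v, &pos);
	}
	ops->stop(seq, v);

so after this patch md_seq_start() returns with 'all_mddevs_lock' held,
md_seq_next() walks the list under it, and md_seq_stop() prints the
unused devices and drops it.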

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 99 +++++++++++--------------------------------------
 1 file changed, 21 insertions(+), 78 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index a5ef6f7da8ec..44635b6ee26f 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8220,105 +8220,46 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
 }
 
 static void *md_seq_start(struct seq_file *seq, loff_t *pos)
+	__acquires(&all_mddevs_lock)
 {
-	struct list_head *tmp;
-	loff_t l = *pos;
-	struct mddev *mddev;
+	struct md_personality *pers;
 
-	if (l == 0x10000) {
-		++*pos;
-		return (void *)2;
-	}
-	if (l > 0x10000)
-		return NULL;
-	if (!l--)
-		/* header */
-		return (void*)1;
+	seq_puts(seq, "Personalities : ");
+	spin_lock(&pers_lock);
+	list_for_each_entry(pers, &pers_list, list)
+		seq_printf(seq, "[%s] ", pers->name);
+
+	spin_unlock(&pers_lock);
+	seq_puts(seq, "\n");
+	seq->poll_event = atomic_read(&md_event_count);
 
 	spin_lock(&all_mddevs_lock);
-	list_for_each(tmp,&all_mddevs)
-		if (!l--) {
-			mddev = list_entry(tmp, struct mddev, all_mddevs);
-			if (!mddev_get(mddev))
-				continue;
-			spin_unlock(&all_mddevs_lock);
-			return mddev;
-		}
-	spin_unlock(&all_mddevs_lock);
-	if (!l--)
-		return (void*)2;/* tail */
-	return NULL;
+
+	return seq_list_start(&all_mddevs, *pos);
 }
 
 static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 {
-	struct list_head *tmp;
-	struct mddev *next_mddev, *mddev = v;
-	struct mddev *to_put = NULL;
-
-	++*pos;
-	if (v == (void*)2)
-		return NULL;
-
-	spin_lock(&all_mddevs_lock);
-	if (v == (void*)1) {
-		tmp = all_mddevs.next;
-	} else {
-		to_put = mddev;
-		tmp = mddev->all_mddevs.next;
-	}
-
-	for (;;) {
-		if (tmp == &all_mddevs) {
-			next_mddev = (void*)2;
-			*pos = 0x10000;
-			break;
-		}
-		next_mddev = list_entry(tmp, struct mddev, all_mddevs);
-		if (mddev_get(next_mddev))
-			break;
-		mddev = next_mddev;
-		tmp = mddev->all_mddevs.next;
-	}
-	spin_unlock(&all_mddevs_lock);
-
-	if (to_put)
-		mddev_put(to_put);
-	return next_mddev;
-
+	return seq_list_next(v, &all_mddevs, pos);
 }
 
 static void md_seq_stop(struct seq_file *seq, void *v)
+	__releases(&all_mddevs_lock)
 {
-	struct mddev *mddev = v;
-
-	if (mddev && v != (void*)1 && v != (void*)2)
-		mddev_put(mddev);
+	status_unused(seq);
+	spin_unlock(&all_mddevs_lock);
 }
 
 static int md_seq_show(struct seq_file *seq, void *v)
 {
-	struct mddev *mddev = v;
+	struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
 	sector_t sectors;
 	struct md_rdev *rdev;
 
-	if (v == (void*)1) {
-		struct md_personality *pers;
-		seq_printf(seq, "Personalities : ");
-		spin_lock(&pers_lock);
-		list_for_each_entry(pers, &pers_list, list)
-			seq_printf(seq, "[%s] ", pers->name);
-
-		spin_unlock(&pers_lock);
-		seq_printf(seq, "\n");
-		seq->poll_event = atomic_read(&md_event_count);
+	if (!mddev_get(mddev))
 		return 0;
-	}
-	if (v == (void*)2) {
-		status_unused(seq);
-		return 0;
-	}
 
+	spin_unlock(&all_mddevs_lock);
 	spin_lock(&mddev->lock);
 	if (mddev->pers || mddev->raid_disks || !list_empty(&mddev->disks)) {
 		seq_printf(seq, "%s : %sactive", mdname(mddev),
@@ -8390,6 +8331,8 @@ static int md_seq_show(struct seq_file *seq, void *v)
 	}
 	spin_unlock(&mddev->lock);
 
+	__mddev_put(mddev, true);
+
 	return 0;
 }
 
-- 
2.39.2



* Re: [PATCH v2 1/2] md: factor out a new helper to put mddev
  2023-09-26  2:58 ` [PATCH v2 1/2] md: factor out a new helper to put mddev Yu Kuai
@ 2023-09-26 12:45   ` Mariusz Tkaczyk
  2023-09-26 12:54     ` Yu Kuai
  2023-09-27  0:15   ` Song Liu
  1 sibling, 1 reply; 9+ messages in thread
From: Mariusz Tkaczyk @ 2023-09-26 12:45 UTC
  To: Yu Kuai; +Cc: xni, song, linux-raid, linux-kernel, yukuai3, yi.zhang, yangerkun

On Tue, 26 Sep 2023 10:58:26 +0800
Yu Kuai <yukuai1@huaweicloud.com> wrote:

> From: Yu Kuai <yukuai3@huawei.com>
> 
> There are no functional changes. The new helper will still hold
> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
> md_seq_ops.
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  drivers/md/md.c | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev
> *mddev) 
>  static void mddev_delayed_delete(struct work_struct *ws);
>  
> -void mddev_put(struct mddev *mddev)
> +static void __mddev_put(struct mddev *mddev, bool locked)
>  {
> -	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> +	if (locked) {
> +		spin_lock(&all_mddevs_lock);
> +		if (!atomic_dec_and_test(&mddev->active))
> +			return;

It is "locked" and we are taking lock? It seems weird to me. Perhaps "do_lock"
would be better? Do you meant "lockdep_assert_held(&all_mddevs_lock);"
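
E.g. something like this (rough sketch, assuming the caller already
holds the lock):

	/* hypothetical variant: caller must hold all_mddevs_lock */
	static void __mddev_put(struct mddev *mddev)
	{
		lockdep_assert_held(&all_mddevs_lock);
		...
	}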

Something seems wrong here: we have two paths, and in both cases we are
taking the lock.

> +	} else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>  		return;
> +
>  	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
>  	    mddev->ctime == 0 && !mddev->hold_active) {
>  		/* Array is not configured at all, and not held active,
> @@ -633,7 +638,14 @@ void mddev_put(struct mddev *mddev)
>  		 */
>  		queue_work(md_misc_wq, &mddev->del_work);
>  	}
> -	spin_unlock(&all_mddevs_lock);
> +
> +	if (!locked)
> +		spin_unlock(&all_mddevs_lock);
As above, I'm not sure if it is correct.

Thanks,
Mariusz


* Re: [PATCH v2 1/2] md: factor out a new helper to put mddev
  2023-09-26 12:45   ` Mariusz Tkaczyk
@ 2023-09-26 12:54     ` Yu Kuai
  2023-09-26 13:19       ` Mariusz Tkaczyk
  0 siblings, 1 reply; 9+ messages in thread
From: Yu Kuai @ 2023-09-26 12:54 UTC
  To: Mariusz Tkaczyk, Yu Kuai
  Cc: xni, song, linux-raid, linux-kernel, yi.zhang, yangerkun,
	yukuai (C)

Hi,

On 2023/09/26 20:45, Mariusz Tkaczyk wrote:
> On Tue, 26 Sep 2023 10:58:26 +0800
> Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> There are no functional changes. The new helper will still hold
>> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
>> md_seq_ops.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>>   drivers/md/md.c | 18 +++++++++++++++---
>>   1 file changed, 15 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev
>> *mddev)
>>   static void mddev_delayed_delete(struct work_struct *ws);
>>   
>> -void mddev_put(struct mddev *mddev)
>> +static void __mddev_put(struct mddev *mddev, bool locked)
>>   {
>> -	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>> +	if (locked) {
>> +		spin_lock(&all_mddevs_lock);
>> +		if (!atomic_dec_and_test(&mddev->active))
>> +			return;
> 
> It is "locked" and we are taking lock? It seems weird to me. Perhaps "do_lock"
> would be better? Do you meant "lockdep_assert_held(&all_mddevs_lock);"

Yes, do_lock is a better name; true means this function will return with
the lock held.
> 
> Something seems wrong here: we have two paths, and in both cases we are
> taking the lock.

No. In the first path, the lock is held unconditionally, which is what
md_seq_show() expects; in the second path, the lock will only be held if
'active' drops to 0.
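
To illustrate the asymmetry (simplified from the patch):

	/* do_lock == true: return with the lock always held */
	spin_lock(&all_mddevs_lock);
	if (!atomic_dec_and_test(&mddev->active))
		return;		/* lock is still held */

	/* do_lock == false: the lock is taken only when active hits 0 */
	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
		return;		/* lock was never taken */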

Thanks,
Kuai

> 
>> +	} else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>>   		return;
>> +
>>   	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
>>   	    mddev->ctime == 0 && !mddev->hold_active) {
>>   		/* Array is not configured at all, and not held active,
>> @@ -633,7 +638,14 @@ void mddev_put(struct mddev *mddev)
>>   		 */
>>   		queue_work(md_misc_wq, &mddev->del_work);
>>   	}
>> -	spin_unlock(&all_mddevs_lock);
>> +
>> +	if (!locked)
>> +		spin_unlock(&all_mddevs_lock);
> As above, I'm not sure if it is correct.
> 
> Thanks,
> Mariusz
> 
> .
> 



* Re: [PATCH v2 1/2] md: factor out a new helper to put mddev
  2023-09-26 12:54     ` Yu Kuai
@ 2023-09-26 13:19       ` Mariusz Tkaczyk
  0 siblings, 0 replies; 9+ messages in thread
From: Mariusz Tkaczyk @ 2023-09-26 13:19 UTC
  To: Yu Kuai
  Cc: xni, song, linux-raid, linux-kernel, yi.zhang, yangerkun,
	yukuai (C)

On Tue, 26 Sep 2023 20:54:01 +0800
Yu Kuai <yukuai1@huaweicloud.com> wrote:

> Hi,
> 
> On 2023/09/26 20:45, Mariusz Tkaczyk wrote:
> > On Tue, 26 Sep 2023 10:58:26 +0800
> > Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >   
> >> From: Yu Kuai <yukuai3@huawei.com>
> >>
> >> There are no functional changes. The new helper will still hold
> >> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
> >> md_seq_ops.
> >>
> >> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> >> ---
> >>   drivers/md/md.c | 18 +++++++++++++++---
> >>   1 file changed, 15 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/drivers/md/md.c b/drivers/md/md.c
> >> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
> >> --- a/drivers/md/md.c
> >> +++ b/drivers/md/md.c
> >> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev
> >> *mddev)
> >>   static void mddev_delayed_delete(struct work_struct *ws);
> >>   
> >> -void mddev_put(struct mddev *mddev)
> >> +static void __mddev_put(struct mddev *mddev, bool locked)
> >>   {
> >> -	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> >> +	if (locked) {
> >> +		spin_lock(&all_mddevs_lock);
> >> +		if (!atomic_dec_and_test(&mddev->active))
> >> +			return;  
> > 
> > It is "locked" and we are taking lock? It seems weird to me. Perhaps
> > "do_lock" would be better? Do you meant
> > "lockdep_assert_held(&all_mddevs_lock);"  
> 
> Yes, do_lock is a better name; true means this function will return with
> the lock held.
> > 
> > Something seems wrong here: we have two paths, and in both cases we are
> > taking the lock.
> 
> No. In the first path, the lock is held unconditionally, which is what
> md_seq_show() expects; in the second path, the lock will only be held if
> 'active' drops to 0.
> 

Ok, I see, you described it in the commit message.
IMO it is bad practice to return with a locked resource and not highlight it
in the function name. In this case, I would prefer to respect that the lock
is already held, not take it here:

(assuming the bool means "locked")
spin_lock(&all_mddevs_lock);
__mddev_put(mddev, true); <- function knows that the lock is held
spin_unlock(&all_mddevs_lock);

your "do_lock" approach:
__mddev_put(mddev, true); <- lock is taken here and we return with it held
spin_unlock(&all_mddevs_lock);

You could change the name to something like "all_mddev_lock_and_put(mddev)" to
indicate that we are locking all_mddevs. That works for me too.
Note: it is just my preference, feel free to ignore :)

Mariusz


* Re: [PATCH v2 2/2] md: simplify md_seq_ops
  2023-09-26  2:58 ` [PATCH v2 2/2] md: simplify md_seq_ops Yu Kuai
@ 2023-09-26 14:09   ` Mariusz Tkaczyk
  0 siblings, 0 replies; 9+ messages in thread
From: Mariusz Tkaczyk @ 2023-09-26 14:09 UTC
  To: Yu Kuai; +Cc: xni, song, linux-raid, linux-kernel, yukuai3, yi.zhang, yangerkun

On Tue, 26 Sep 2023 10:58:27 +0800
Yu Kuai <yukuai1@huaweicloud.com> wrote:

> From: Yu Kuai <yukuai3@huawei.com>
> 
> Before this patch, the implementation is hacky and hard to understand:
> 
> 1) md_seq_start sets pos to 1;
> 2) md_seq_show finds pos is 1, then prints Personalities;
> 3) md_seq_next finds pos is 1, then updates pos to the first mddev;
> 4) md_seq_show finds pos is not 1 or 2, shows the mddev;
> 5) md_seq_next finds pos is not 1 or 2, updates pos to the next mddev;
> 6) loop 4-5 until the last mddev, then md_seq_next updates pos to 2;
> 7) md_seq_show finds pos is 2, then prints unused devices;
> 8) md_seq_next finds pos is 2, stops;
> 
> This patch removes the magic values and uses seq_list_start/next/stop()
> directly, moving the printing of "Personalities" to md_sep_start() and
> of "unused devices" to md_seq_stop():

Typo: md_sep_start()
> 
> 1) md_seq_start prints Personalities, then sets pos to the first mddev;
> 2) md_seq_show shows the mddev;
> 3) md_seq_next updates pos to the next mddev;
> 4) loop 2-3 until the last mddev;
> 5) md_seq_stop prints unused devices;
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
LGTM. Nice one. Code looks much better now.

Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>


* Re: [PATCH v2 1/2] md: factor out a new helper to put mddev
  2023-09-26  2:58 ` [PATCH v2 1/2] md: factor out a new helper to put mddev Yu Kuai
  2023-09-26 12:45   ` Mariusz Tkaczyk
@ 2023-09-27  0:15   ` Song Liu
  2023-09-27  0:54     ` Yu Kuai
  1 sibling, 1 reply; 9+ messages in thread
From: Song Liu @ 2023-09-27  0:15 UTC
  To: Yu Kuai
  Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yukuai3, yi.zhang,
	yangerkun

On Mon, Sep 25, 2023 at 8:04 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> There are no functional changes. The new helper will still hold
> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
> md_seq_ops.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  drivers/md/md.c | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
>
>  static void mddev_delayed_delete(struct work_struct *ws);
>
> -void mddev_put(struct mddev *mddev)
> +static void __mddev_put(struct mddev *mddev, bool locked)
>  {
> -       if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> +       if (locked) {
> +               spin_lock(&all_mddevs_lock);
> +               if (!atomic_dec_and_test(&mddev->active))
> +                       return;
> +       } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>                 return;
> +

This condition is indeed very confusing. No matter whether we call the
flag "locked" or "do_lock", it is not really accurate.

How about we factor out a helper with the following logic:

        if (!mddev->raid_disks && list_empty(&mddev->disks) &&
            mddev->ctime == 0 && !mddev->hold_active) {
                /* Array is not configured at all, and not held active,
                 * so destroy it */
                set_bit(MD_DELETED, &mddev->flags);

                /*
                 * Call queue_work inside the spinlock so that
                 * flush_workqueue() after mddev_find will succeed in waiting
                 * for the work to be done.
                 */
                queue_work(md_misc_wq, &mddev->del_work);
        }

and then use it at the two callers?
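
I.e. something along these lines (a rough, untested sketch; keeping the
__mddev_put() name for the lock-held variant is just one option):

static void __mddev_put(struct mddev *mddev)
{
	lockdep_assert_held(&all_mddevs_lock);

	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
	    mddev->ctime == 0 && !mddev->hold_active) {
		/* Array is not configured at all, and not held active,
		 * so destroy it */
		set_bit(MD_DELETED, &mddev->flags);

		/*
		 * Call queue_work inside the spinlock so that
		 * flush_workqueue() after mddev_find will succeed in waiting
		 * for the work to be done.
		 */
		queue_work(md_misc_wq, &mddev->del_work);
	}
}

void mddev_put(struct mddev *mddev)
{
	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
		return;

	__mddev_put(mddev);
	spin_unlock(&all_mddevs_lock);
}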

Does this make sense?

Thanks,
Song


* Re: [PATCH v2 1/2] md: factor out a new helper to put mddev
  2023-09-27  0:15   ` Song Liu
@ 2023-09-27  0:54     ` Yu Kuai
  0 siblings, 0 replies; 9+ messages in thread
From: Yu Kuai @ 2023-09-27  0:54 UTC
  To: Song Liu, Yu Kuai
  Cc: mariusz.tkaczyk, xni, linux-raid, linux-kernel, yi.zhang,
	yangerkun, yukuai (C)

Hi,

On 2023/09/27 8:15, Song Liu wrote:
> On Mon, Sep 25, 2023 at 8:04 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> There are no functional changes. The new helper will still hold
>> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
>> md_seq_ops.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>>   drivers/md/md.c | 18 +++++++++++++++---
>>   1 file changed, 15 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
>>
>>   static void mddev_delayed_delete(struct work_struct *ws);
>>
>> -void mddev_put(struct mddev *mddev)
>> +static void __mddev_put(struct mddev *mddev, bool locked)
>>   {
>> -       if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>> +       if (locked) {
>> +               spin_lock(&all_mddevs_lock);
>> +               if (!atomic_dec_and_test(&mddev->active))
>> +                       return;
>> +       } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>>                  return;
>> +
> 
> This condition is indeed very confusing. No matter whether we call the
> flag "locked" or "do_lock", it is not really accurate.
> 
> How about we factor out a helper with the following logic:
> 
>          if (!mddev->raid_disks && list_empty(&mddev->disks) &&
>              mddev->ctime == 0 && !mddev->hold_active) {
>                  /* Array is not configured at all, and not held active,
>                   * so destroy it */
>                  set_bit(MD_DELETED, &mddev->flags);
> 
>                  /*
>                   * Call queue_work inside the spinlock so that
>                   * flush_workqueue() after mddev_find will succeed in waiting
>                   * for the work to be done.
>                   */
>                  queue_work(md_misc_wq, &mddev->del_work);
>          }
> 
> and then use it at the two callers?
> 
> Does this make sense?

Yes, that sounds great. I'll do this in v3.

Thanks,
Kuai

> 
> Thanks,
> Song
> .
> 



end of thread

Thread overview: 9+ messages
2023-09-26  2:58 [PATCH v2 0/2] md: simplify md_seq_ops Yu Kuai
2023-09-26  2:58 ` [PATCH v2 1/2] md: factor out a new helper to put mddev Yu Kuai
2023-09-26 12:45   ` Mariusz Tkaczyk
2023-09-26 12:54     ` Yu Kuai
2023-09-26 13:19       ` Mariusz Tkaczyk
2023-09-27  0:15   ` Song Liu
2023-09-27  0:54     ` Yu Kuai
2023-09-26  2:58 ` [PATCH v2 2/2] md: simplify md_seq_ops Yu Kuai
2023-09-26 14:09   ` Mariusz Tkaczyk
