Linux-Block Archive mirror
* [PATCH 0/2] blk-cgroup: two fixes on list corruption
@ 2024-05-15  1:31 Ming Lei
  2024-05-15  1:31 ` [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat Ming Lei
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Ming Lei @ 2024-05-15  1:31 UTC
  To: Jens Axboe; +Cc: linux-block, Tejun Heo, Waiman Long, Ming Lei

Hello,

The 1st patch fixes list corruption when resetting io stats (cgroup
v1).

The 2nd patch fixes potential list corruption caused by reordering of
the write to ->lqueued and the read from the iostat instance.


Ming Lei (2):
  blk-cgroup: fix list corruption from resetting io stat
  blk-cgroup: fix list corruption from reorder of WRITE ->lqueued

 block/blk-cgroup.c | 68 ++++++++++++++++++++++++++++++----------------
 1 file changed, 45 insertions(+), 23 deletions(-)

-- 
2.44.0



* [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat
  2024-05-15  1:31 [PATCH 0/2] blk-cgroup: two fixes on list corruption Ming Lei
@ 2024-05-15  1:31 ` Ming Lei
  2024-05-15 13:59   ` Waiman Long
  2024-05-15 16:49   ` Tejun Heo
  2024-05-15  1:31 ` [PATCH 2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued Ming Lei
  2024-05-16  2:21 ` [PATCH 0/2] blk-cgroup: two fixes on list corruption Jens Axboe
  2 siblings, 2 replies; 9+ messages in thread
From: Ming Lei @ 2024-05-15  1:31 UTC
  To: Jens Axboe; +Cc: linux-block, Tejun Heo, Waiman Long, Ming Lei, Jay Shin

Since commit 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()"),
each iostat instance is added to the blkcg per-cpu llist, so
blkcg_reset_stats() must not reset a stat instance with memset():
zeroing the whole structure also clears the llist node, which may
corrupt the list.

Fix the issue by resetting only the counter part.
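
For reference, a sketch of why memset() is unsafe here (fields abridged
from the blkg_iostat_set definition in block/blk-cgroup.h, not the
verbatim declaration):

    struct blkg_iostat_set {
        struct u64_stats_sync   sync;
        struct blkcg_gq         *blkg;
        struct llist_node       lnode;    /* may be linked on blkcg->lhead */
        int                     lqueued;  /* queued in llist */
        struct blkg_iostat      cur;
        struct blkg_iostat      last;
    };

    memset(bis, 0, sizeof(*bis));  /* also zeroes lnode.next and lqueued,
                                      wiping a possibly still-queued node */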

Cc: Tejun Heo <tj@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Jay Shin <jaeshin@redhat.com>
Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-cgroup.c | 58 ++++++++++++++++++++++++++++------------------
 1 file changed, 35 insertions(+), 23 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 059467086b13..86752b1652b5 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -619,12 +619,45 @@ static void blkg_destroy_all(struct gendisk *disk)
 	spin_unlock_irq(&q->queue_lock);
 }
 
+static void blkg_iostat_set(struct blkg_iostat *dst, struct blkg_iostat *src)
+{
+	int i;
+
+	for (i = 0; i < BLKG_IOSTAT_NR; i++) {
+		dst->bytes[i] = src->bytes[i];
+		dst->ios[i] = src->ios[i];
+	}
+}
+
+static void __blkg_clear_stat(struct blkg_iostat_set *bis)
+{
+	struct blkg_iostat cur = {0};
+	unsigned long flags;
+
+	flags = u64_stats_update_begin_irqsave(&bis->sync);
+	blkg_iostat_set(&bis->cur, &cur);
+	blkg_iostat_set(&bis->last, &cur);
+	u64_stats_update_end_irqrestore(&bis->sync, flags);
+}
+
+static void blkg_clear_stat(struct blkcg_gq *blkg)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct blkg_iostat_set *s = per_cpu_ptr(blkg->iostat_cpu, cpu);
+
+		__blkg_clear_stat(s);
+	}
+	__blkg_clear_stat(&blkg->iostat);
+}
+
 static int blkcg_reset_stats(struct cgroup_subsys_state *css,
 			     struct cftype *cftype, u64 val)
 {
 	struct blkcg *blkcg = css_to_blkcg(css);
 	struct blkcg_gq *blkg;
-	int i, cpu;
+	int i;
 
 	mutex_lock(&blkcg_pol_mutex);
 	spin_lock_irq(&blkcg->lock);
@@ -635,18 +668,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
 	 * anyway.  If you get hit by a race, retry.
 	 */
 	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
-		for_each_possible_cpu(cpu) {
-			struct blkg_iostat_set *bis =
-				per_cpu_ptr(blkg->iostat_cpu, cpu);
-			memset(bis, 0, sizeof(*bis));
-
-			/* Re-initialize the cleared blkg_iostat_set */
-			u64_stats_init(&bis->sync);
-			bis->blkg = blkg;
-		}
-		memset(&blkg->iostat, 0, sizeof(blkg->iostat));
-		u64_stats_init(&blkg->iostat.sync);
-
+		blkg_clear_stat(blkg);
 		for (i = 0; i < BLKCG_MAX_POLS; i++) {
 			struct blkcg_policy *pol = blkcg_policy[i];
 
@@ -949,16 +971,6 @@ void blkg_conf_exit(struct blkg_conf_ctx *ctx)
 }
 EXPORT_SYMBOL_GPL(blkg_conf_exit);
 
-static void blkg_iostat_set(struct blkg_iostat *dst, struct blkg_iostat *src)
-{
-	int i;
-
-	for (i = 0; i < BLKG_IOSTAT_NR; i++) {
-		dst->bytes[i] = src->bytes[i];
-		dst->ios[i] = src->ios[i];
-	}
-}
-
 static void blkg_iostat_add(struct blkg_iostat *dst, struct blkg_iostat *src)
 {
 	int i;
-- 
2.44.0



* [PATCH 2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued
  2024-05-15  1:31 [PATCH 0/2] blk-cgroup: two fixes on list corruption Ming Lei
  2024-05-15  1:31 ` [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat Ming Lei
@ 2024-05-15  1:31 ` Ming Lei
  2024-05-15 14:09   ` Waiman Long
  2024-05-16  2:21 ` [PATCH 0/2] blk-cgroup: two fixes on list corruption Jens Axboe
  2 siblings, 1 reply; 9+ messages in thread
From: Ming Lei @ 2024-05-15  1:31 UTC
  To: Jens Axboe; +Cc: linux-block, Tejun Heo, Waiman Long, Ming Lei

__blkcg_rstat_flush() can run at any time, in particular while
blk_cgroup_bio_start() is executing.

If the WRITE to `->lqueued` is reordered with the READ of
`bisc->lnode.next` in the loop of __blkcg_rstat_flush(), `next_bisc` can
be assigned a stat instance that is concurrently being added in
blk_cgroup_bio_start(), and then the local list in __blkcg_rstat_flush()
could be corrupted.

Fix the issue by adding a memory barrier.
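
A sketch of the race being closed (interleaving is illustrative; lhead
is the blkcg per-cpu llist head):

    CPU0: __blkcg_rstat_flush()          CPU1: blk_cgroup_bio_start()

    next_bisc = bisc->lnode.next
      (this READ may be reordered
       after the WRITE below)
    WRITE_ONCE(bisc->lqueued, false)
                                         READ_ONCE(bis->lqueued) == false
                                         llist_add(&bis->lnode, lhead)
                                           bisc->lnode.next now points
                                           into the live per-cpu llist
    the reordered READ observes the
    new next pointer and the flusher
    walks off its local list into the
    live one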

Cc: Tejun Heo <tj@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-cgroup.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 86752b1652b5..b36ba1d40ba1 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1036,6 +1036,16 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
 		struct blkg_iostat cur;
 		unsigned int seq;
 
+		/*
+		 * Order the assignment of `next_bisc` from `bisc->lnode.next` in
+		 * llist_for_each_entry_safe against the clearing of `bisc->lqueued`,
+		 * so that `next_bisc` cannot pick up a new next pointer added by
+		 * blk_cgroup_bio_start() in case of reordering.
+		 *
+		 * The paired barrier is implied by llist_add() in blk_cgroup_bio_start().
+		 */
+		smp_mb();
+
 		WRITE_ONCE(bisc->lqueued, false);
 
 		/* fetch the current per-cpu values */
-- 
2.44.0



* Re: [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat
  2024-05-15  1:31 ` [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat Ming Lei
@ 2024-05-15 13:59   ` Waiman Long
  2024-05-15 16:49   ` Tejun Heo
  1 sibling, 0 replies; 9+ messages in thread
From: Waiman Long @ 2024-05-15 13:59 UTC
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Tejun Heo, Jay Shin


On 5/14/24 21:31, Ming Lei wrote:
> Since commit 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()"),
> each iostat instance is added to the blkcg per-cpu llist, so
> blkcg_reset_stats() must not reset a stat instance with memset():
> zeroing the whole structure also clears the llist node, which may
> corrupt the list.
>
> Fix the issue by resetting only the counter part.
>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Cc: Jay Shin <jaeshin@redhat.com>
> Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   block/blk-cgroup.c | 58 ++++++++++++++++++++++++++++------------------
>   1 file changed, 35 insertions(+), 23 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 059467086b13..86752b1652b5 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -619,12 +619,45 @@ static void blkg_destroy_all(struct gendisk *disk)
>   	spin_unlock_irq(&q->queue_lock);
>   }
>   
> +static void blkg_iostat_set(struct blkg_iostat *dst, struct blkg_iostat *src)
> +{
> +	int i;
> +
> +	for (i = 0; i < BLKG_IOSTAT_NR; i++) {
> +		dst->bytes[i] = src->bytes[i];
> +		dst->ios[i] = src->ios[i];
> +	}
> +}
> +
> +static void __blkg_clear_stat(struct blkg_iostat_set *bis)
> +{
> +	struct blkg_iostat cur = {0};
> +	unsigned long flags;
> +
> +	flags = u64_stats_update_begin_irqsave(&bis->sync);
> +	blkg_iostat_set(&bis->cur, &cur);
> +	blkg_iostat_set(&bis->last, &cur);
> +	u64_stats_update_end_irqrestore(&bis->sync, flags);
> +}
> +
> +static void blkg_clear_stat(struct blkcg_gq *blkg)
> +{
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		struct blkg_iostat_set *s = per_cpu_ptr(blkg->iostat_cpu, cpu);
> +
> +		__blkg_clear_stat(s);
> +	}
> +	__blkg_clear_stat(&blkg->iostat);
> +}
> +
>   static int blkcg_reset_stats(struct cgroup_subsys_state *css,
>   			     struct cftype *cftype, u64 val)
>   {
>   	struct blkcg *blkcg = css_to_blkcg(css);
>   	struct blkcg_gq *blkg;
> -	int i, cpu;
> +	int i;
>   
>   	mutex_lock(&blkcg_pol_mutex);
>   	spin_lock_irq(&blkcg->lock);
> @@ -635,18 +668,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
>   	 * anyway.  If you get hit by a race, retry.
>   	 */
>   	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
> -		for_each_possible_cpu(cpu) {
> -			struct blkg_iostat_set *bis =
> -				per_cpu_ptr(blkg->iostat_cpu, cpu);
> -			memset(bis, 0, sizeof(*bis));
> -
> -			/* Re-initialize the cleared blkg_iostat_set */
> -			u64_stats_init(&bis->sync);
> -			bis->blkg = blkg;
> -		}
> -		memset(&blkg->iostat, 0, sizeof(blkg->iostat));
> -		u64_stats_init(&blkg->iostat.sync);
> -
> +		blkg_clear_stat(blkg);
>   		for (i = 0; i < BLKCG_MAX_POLS; i++) {
>   			struct blkcg_policy *pol = blkcg_policy[i];
>   
> @@ -949,16 +971,6 @@ void blkg_conf_exit(struct blkg_conf_ctx *ctx)
>   }
>   EXPORT_SYMBOL_GPL(blkg_conf_exit);
>   
> -static void blkg_iostat_set(struct blkg_iostat *dst, struct blkg_iostat *src)
> -{
> -	int i;
> -
> -	for (i = 0; i < BLKG_IOSTAT_NR; i++) {
> -		dst->bytes[i] = src->bytes[i];
> -		dst->ios[i] = src->ios[i];
> -	}
> -}
> -
>   static void blkg_iostat_add(struct blkg_iostat *dst, struct blkg_iostat *src)
>   {
>   	int i;
Reviewed-by: Waiman Long <longman@redhat.com>



* Re: [PATCH 2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued
  2024-05-15  1:31 ` [PATCH 2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued Ming Lei
@ 2024-05-15 14:09   ` Waiman Long
  2024-05-15 14:46     ` Waiman Long
  2024-05-16  0:58     ` Ming Lei
  0 siblings, 2 replies; 9+ messages in thread
From: Waiman Long @ 2024-05-15 14:09 UTC
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Tejun Heo


On 5/14/24 21:31, Ming Lei wrote:
> __blkcg_rstat_flush() can run at any time, in particular while
> blk_cgroup_bio_start() is executing.
>
> If the WRITE to `->lqueued` is reordered with the READ of
> `bisc->lnode.next` in the loop of __blkcg_rstat_flush(), `next_bisc` can
> be assigned a stat instance that is concurrently being added in
> blk_cgroup_bio_start(), and then the local list in __blkcg_rstat_flush()
> could be corrupted.
>
> Fix the issue by adding a memory barrier.
>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   block/blk-cgroup.c | 10 ++++++++++
>   1 file changed, 10 insertions(+)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 86752b1652b5..b36ba1d40ba1 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -1036,6 +1036,16 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
>   		struct blkg_iostat cur;
>   		unsigned int seq;
>   
> +		/*
> +		 * Order the assignment of `next_bisc` from `bisc->lnode.next` in
> +		 * llist_for_each_entry_safe against the clearing of `bisc->lqueued`,
> +		 * so that `next_bisc` cannot pick up a new next pointer added by
> +		 * blk_cgroup_bio_start() in case of reordering.
> +		 *
> +		 * The paired barrier is implied by llist_add() in blk_cgroup_bio_start().
> +		 */
> +		smp_mb();
> +
>   		WRITE_ONCE(bisc->lqueued, false);

I believe replacing the WRITE_ONCE() with smp_store_release() and the
READ_ONCE() in blk_cgroup_bio_start() with smp_load_acquire() should
provide enough of a barrier to prevent the unexpected reordering, as
the subsequent u64_stats_fetch_begin() also provides a read barrier for
the subsequent reads.
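
A sketch of that alternative (hypothetical, based on the current shape
of the two functions, not a tested patch):

    /* __blkcg_rstat_flush(): the release keeps the earlier read of
     * bisc->lnode.next ordered before this store */
    smp_store_release(&bisc->lqueued, false);

    /* blk_cgroup_bio_start(): the acquire pairs with the release above
     * and orders the llist_add() after the load */
    if (!smp_load_acquire(&bis->lqueued)) {
        llist_add(&bis->lnode, this_cpu_ptr(blkcg->lhead));
        WRITE_ONCE(bis->lqueued, true);
    }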

Cheers,
Longman

>   
>   		/* fetch the current per-cpu values */



* Re: [PATCH 2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued
  2024-05-15 14:09   ` Waiman Long
@ 2024-05-15 14:46     ` Waiman Long
  2024-05-16  0:58     ` Ming Lei
  1 sibling, 0 replies; 9+ messages in thread
From: Waiman Long @ 2024-05-15 14:46 UTC
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Tejun Heo

On 5/15/24 10:09, Waiman Long wrote:
>
> On 5/14/24 21:31, Ming Lei wrote:
>> __blkcg_rstat_flush() can run at any time, in particular while
>> blk_cgroup_bio_start() is executing.
>>
>> If the WRITE to `->lqueued` is reordered with the READ of
>> `bisc->lnode.next` in the loop of __blkcg_rstat_flush(), `next_bisc` can
>> be assigned a stat instance that is concurrently being added in
>> blk_cgroup_bio_start(), and then the local list in __blkcg_rstat_flush()
>> could be corrupted.
>>
>> Fix the issue by adding a memory barrier.
>>
>> Cc: Tejun Heo <tj@kernel.org>
>> Cc: Waiman Long <longman@redhat.com>
>> Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
>> Signed-off-by: Ming Lei <ming.lei@redhat.com>
>> ---
>>   block/blk-cgroup.c | 10 ++++++++++
>>   1 file changed, 10 insertions(+)
>>
>> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
>> index 86752b1652b5..b36ba1d40ba1 100644
>> --- a/block/blk-cgroup.c
>> +++ b/block/blk-cgroup.c
>> @@ -1036,6 +1036,16 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
>>           struct blkg_iostat cur;
>>           unsigned int seq;
>> +        /*
>> +         * Order the assignment of `next_bisc` from `bisc->lnode.next` in
>> +         * llist_for_each_entry_safe against the clearing of `bisc->lqueued`,
>> +         * so that `next_bisc` cannot pick up a new next pointer added by
>> +         * blk_cgroup_bio_start() in case of reordering.
>> +         *
>> +         * The paired barrier is implied by llist_add() in blk_cgroup_bio_start().
>> +         */
>> +        smp_mb();
>> +
>>           WRITE_ONCE(bisc->lqueued, false);
>
> I believe replacing the WRITE_ONCE() with smp_store_release() and the
> READ_ONCE() in blk_cgroup_bio_start() with smp_load_acquire() should
> provide enough of a barrier to prevent the unexpected reordering, as
> the subsequent u64_stats_fetch_begin() also provides a read barrier
> for the subsequent reads.

We can also use smp_acquire__after_ctrl_dep() after the READ_ONCE() in
blk_cgroup_bio_start(), instead of a full smp_load_acquire(), to
optimize it a bit more.
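
Roughly (again hypothetical):

    /* blk_cgroup_bio_start(): the branch on lqueued already gives a
     * control dependency; upgrade it to acquire ordering before the
     * llist_add() */
    if (!READ_ONCE(bis->lqueued)) {
        smp_acquire__after_ctrl_dep();
        llist_add(&bis->lnode, this_cpu_ptr(blkcg->lhead));
        WRITE_ONCE(bis->lqueued, true);
    }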

Cheers,
Longman



* Re: [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat
  2024-05-15  1:31 ` [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat Ming Lei
  2024-05-15 13:59   ` Waiman Long
@ 2024-05-15 16:49   ` Tejun Heo
  1 sibling, 0 replies; 9+ messages in thread
From: Tejun Heo @ 2024-05-15 16:49 UTC
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Waiman Long, Jay Shin

On Wed, May 15, 2024 at 09:31:56AM +0800, Ming Lei wrote:
> Since commit 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()"),
> each iostat instance is added to the blkcg per-cpu llist, so
> blkcg_reset_stats() must not reset a stat instance with memset():
> zeroing the whole structure also clears the llist node, which may
> corrupt the list.
> 
> Fix the issue by resetting only the counter part.
> 
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Cc: Jay Shin <jaeshin@redhat.com>
> Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
> Signed-off-by: Ming Lei <ming.lei@redhat.com>

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun


* Re: [PATCH 2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued
  2024-05-15 14:09   ` Waiman Long
  2024-05-15 14:46     ` Waiman Long
@ 2024-05-16  0:58     ` Ming Lei
  1 sibling, 0 replies; 9+ messages in thread
From: Ming Lei @ 2024-05-16  0:58 UTC
  To: Waiman Long; +Cc: Jens Axboe, linux-block, Tejun Heo, ming.lei

On Wed, May 15, 2024 at 10:09:30AM -0400, Waiman Long wrote:
> 
> On 5/14/24 21:31, Ming Lei wrote:
> > __blkcg_rstat_flush() can run at any time, in particular while
> > blk_cgroup_bio_start() is executing.
> > 
> > If the WRITE to `->lqueued` is reordered with the READ of
> > `bisc->lnode.next` in the loop of __blkcg_rstat_flush(), `next_bisc` can
> > be assigned a stat instance that is concurrently being added in
> > blk_cgroup_bio_start(), and then the local list in __blkcg_rstat_flush()
> > could be corrupted.
> > 
> > Fix the issue by adding a memory barrier.
> > 
> > Cc: Tejun Heo <tj@kernel.org>
> > Cc: Waiman Long <longman@redhat.com>
> > Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >   block/blk-cgroup.c | 10 ++++++++++
> >   1 file changed, 10 insertions(+)
> > 
> > diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> > index 86752b1652b5..b36ba1d40ba1 100644
> > --- a/block/blk-cgroup.c
> > +++ b/block/blk-cgroup.c
> > @@ -1036,6 +1036,16 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
> >   		struct blkg_iostat cur;
> >   		unsigned int seq;
> > +		/*
> > +		 * Order the assignment of `next_bisc` from `bisc->lnode.next` in
> > +		 * llist_for_each_entry_safe against the clearing of `bisc->lqueued`,
> > +		 * so that `next_bisc` cannot pick up a new next pointer added by
> > +		 * blk_cgroup_bio_start() in case of reordering.
> > +		 *
> > +		 * The paired barrier is implied by llist_add() in blk_cgroup_bio_start().
> > +		 */
> > +		smp_mb();
> > +
> >   		WRITE_ONCE(bisc->lqueued, false);
> 
> I believe replacing the WRITE_ONCE() with smp_store_release() and the READ_ONCE()
> in blk_cgroup_bio_start() with smp_load_acquire() should provide enough
> of a barrier to prevent the unexpected reordering, as

Yeah, an smp_load_acquire()/smp_store_release() pair works too, but it
adds extra ordering cost around llist_add(), which already implies a
full barrier for the smp_mb() here to pair with.

So I prefer this patch.
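
For reference, llist_add() boils down to llist_add_batch() (lib/llist.c,
trimmed here), and a successful try_cmpxchg() implies a full barrier:

    bool llist_add_batch(struct llist_node *new_first,
                         struct llist_node *new_last,
                         struct llist_head *head)
    {
        struct llist_node *first = READ_ONCE(head->first);

        do {
            new_last->next = first;
        } while (!try_cmpxchg(&head->first, &first, new_first));

        return !first;
    }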

> the subsequent
> u64_stats_fetch_begin() also provides a read barrier for the
> subsequent reads.

u64_stats_fetch_begin() is a nop when `BITS_PER_LONG == 64`, so it does
not provide a read barrier there.
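
i.e. on 64-bit it reduces to something like (simplified from
include/linux/u64_stats_sync.h):

    static inline unsigned int
    u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
    {
        return 0;    /* no seqcount on 64-bit, hence no read barrier */
    }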


Thanks,
Ming



* Re: [PATCH 0/2] blk-cgroup: two fixes on list corruption
  2024-05-15  1:31 [PATCH 0/2] blk-cgroup: two fixes on list corruption Ming Lei
  2024-05-15  1:31 ` [PATCH 1/2] blk-cgroup: fix list corruption from resetting io stat Ming Lei
  2024-05-15  1:31 ` [PATCH 2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued Ming Lei
@ 2024-05-16  2:21 ` Jens Axboe
  2 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2024-05-16  2:21 UTC
  To: Ming Lei; +Cc: linux-block, Tejun Heo, Waiman Long


On Wed, 15 May 2024 09:31:55 +0800, Ming Lei wrote:
> The 1st patch fixes list corruption when resetting io stats (cgroup
> v1).
> 
> The 2nd patch fixes potential list corruption caused by reordering of
> the write to ->lqueued and the read from the iostat instance.
> 
> 
> [...]

Applied, thanks!

[1/2] blk-cgroup: fix list corruption from resetting io stat
      commit: 6da6680632792709cecf2b006f2fe3ca7857e791
[2/2] blk-cgroup: fix list corruption from reorder of WRITE ->lqueued
      commit: d0aac2363549e12cc79b8e285f13d5a9f42fd08e

Best regards,
-- 
Jens Axboe




