From: Peter Xu <peterx@redhat.com>
To: Alistair Popple <apopple@nvidia.com>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org,
	rcampbell@nvidia.com, linux-doc@vger.kernel.org,
	nouveau@lists.freedesktop.org, hughd@google.com,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	hch@infradead.org, bskeggs@redhat.com, jgg@nvidia.com,
	shakeelb@google.com, jhubbard@nvidia.com, willy@infradead.org
Subject: Re: [PATCH v10 06/10] mm/memory.c: Allow different return codes for copy_nonpresent_pte()
Date: Tue, 8 Jun 2021 11:19:41 -0400	[thread overview]
Message-ID: <YL+KjaB4fCt/xodJ@t490s> (raw)
In-Reply-To: <20210607075855.5084-7-apopple@nvidia.com>

On Mon, Jun 07, 2021 at 05:58:51PM +1000, Alistair Popple wrote:
> Currently, if copy_nonpresent_pte() returns a non-zero value it is
> assumed to be a swap entry which requires further processing outside the
> loop in copy_pte_range() after dropping locks. This prevents other
> values from being returned to signal conditions such as failure, which a
> subsequent change requires.
> 
> Instead make copy_nonpresent_pte() return an error code if further
> processing is required and read the value for the swap entry in the main
> loop under the ptl.
> 
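The new convention reads sensibly to me.  Summarizing it for my own
benefit (a rough sketch only, not copied from the tree, so please
double-check the names against the hunks below):

      /*
       * copy_nonpresent_pte() / copy_pte_range() after this patch,
       * as I read it:
       *
       *   ret == 0        entry copied, keep walking the range
       *   ret == -EIO     swap_duplicate() wants a continuation; the
       *                   swap entry is re-read under the ptl and the
       *                   continuation is added after dropping the locks
       *   ret == -EAGAIN  copy_present_pte() needs a preallocated page;
       *                   allocate it outside the lock and retry
       *   anything else   unexpected, VM_WARN_ON_ONCE() and carry on
       */
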
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> 
> ---
> 
> v10:
> 
> Use a unique error code and only check return codes for handling.
> 
> v9:
> 
> New for v9 to allow device exclusive handling to occur in
> copy_nonpresent_pte().
> ---
>  mm/memory.c | 26 ++++++++++++++++----------
>  1 file changed, 16 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 2fb455c365c2..0982cab37ecb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -718,7 +718,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  
>  	if (likely(!non_swap_entry(entry))) {
>  		if (swap_duplicate(entry) < 0)
> -			return entry.val;
> +			return -EIO;
>  
>  		/* make sure dst_mm is on swapoff's mmlist. */
>  		if (unlikely(list_empty(&dst_mm->mmlist))) {
> @@ -974,11 +974,13 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  			continue;
>  		}
>  		if (unlikely(!pte_present(*src_pte))) {
> -			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
> -							dst_pte, src_pte,
> -							src_vma, addr, rss);
> -			if (entry.val)
> +			ret = copy_nonpresent_pte(dst_mm, src_mm,
> +						dst_pte, src_pte,
> +						src_vma, addr, rss);
> +			if (ret == -EIO) {
> +				entry = pte_to_swp_entry(*src_pte);
>  				break;
> +			}
>  			progress += 8;
>  			continue;
>  		}
> @@ -1011,20 +1013,24 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  	pte_unmap_unlock(orig_dst_pte, dst_ptl);
>  	cond_resched();
>  
> -	if (entry.val) {
> +	if (ret == -EIO) {
> +		VM_WARN_ON_ONCE(!entry.val);
>  		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
>  			ret = -ENOMEM;
>  			goto out;
>  		}
>  		entry.val = 0;
> -	} else if (ret) {
> -		WARN_ON_ONCE(ret != -EAGAIN);
> +	} else if (ret ==  -EAGAIN) {
                          ^
                          |----------------------------- one more space here

>  		prealloc = page_copy_prealloc(src_mm, src_vma, addr);
>  		if (!prealloc)
>  			return -ENOMEM;
> -		/* We've captured and resolved the error. Reset, try again. */
> -		ret = 0;
> +	} else if (ret) {
> +		VM_WARN_ON_ONCE(1);
>  	}
> +
> +	/* We've captured and resolved the error. Reset, try again. */

Maybe better as:

      /*
       * We've resolved all errors, if there were any; reset the error
       * code and try again if necessary.
       */

as it also covers the no-error path.  But I guess it's not a big deal.
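
In full, the tail could then read something like this (just a sketch on
top of your v10, untested):

	} else if (ret) {
		VM_WARN_ON_ONCE(1);
	}

	/*
	 * We've resolved all errors, if there were any; reset the error
	 * code and try again if necessary.
	 */
	ret = 0;

	if (addr != end)
		goto again;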

Reviewed-by: Peter Xu <peterx@redhat.com>

Thanks,

> +	ret = 0;
> +
>  	if (addr != end)
>  		goto again;
>  out:
> -- 
> 2.20.1
> 

-- 
Peter Xu

