From: "Huang, Ying" <ying.huang@intel.com>
To: Barry Song <21cnbao@gmail.com>, Khalid Aziz <khalid.aziz@oracle.com>
Cc: akpm@linux-foundation.org,  linux-mm@kvack.org,
	baolin.wang@linux.alibaba.com,  chrisl@kernel.org,
	 david@redhat.com, hanchuanhua@oppo.com,  hannes@cmpxchg.org,
	 hughd@google.com, kasong@tencent.com,  ryan.roberts@arm.com,
	 surenb@google.com, v-songbaohua@oppo.com,  willy@infradead.org,
	 xiang@kernel.org, yosryahmed@google.com,  yuzhao@google.com,
	 ziy@nvidia.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 4/5] mm: swap: entirely map large folios found in swapcache
Date: Tue, 16 Apr 2024 10:25:28 +0800
Message-ID: <8734rm2gdj.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <CAGsJ_4x_bOchG=bJjR8WE=vQxu3ke8fkxcDOFhqX5FS_a-0heA@mail.gmail.com> (Barry Song's message of "Mon, 15 Apr 2024 20:53:53 +1200")


Added Khalid to Cc for arch_do_swap_page().

Barry Song <21cnbao@gmail.com> writes:

> On Mon, Apr 15, 2024 at 8:39 PM Huang, Ying <ying.huang@intel.com> wrote:
>>
>> Barry Song <21cnbao@gmail.com> writes:

[snip]

>>
>> > +     bool any_swap_shared = false;
>> >
>> >       if (!pte_unmap_same(vmf))
>> >               goto out;
>> > @@ -4137,6 +4141,35 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>> >        */
>> >       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
>> >                       &vmf->ptl);
>>
>> We should move the pte check here.  That is,
>>
>>         if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
>>                 goto out_nomap;
>>
>> This will simplify the situation for large folios.
>
> The plan is to move the whole code block
>
> if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio))
>
> after
>         if (unlikely(!folio_test_uptodate(folio))) {
>                 ret = VM_FAULT_SIGBUS;
>                 goto out_nomap;
>         }
>
> Though the folio can't be !folio_test_uptodate() when we hit the
> swapcache, the new ordering seems logically better for future use.

LGTM, Thanks!
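
To confirm we mean the same thing, the resulting order would roughly be
(a sketch of the plan above, not actual code):

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
			&vmf->ptl);
	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
		goto out_nomap;

	...

	if (unlikely(!folio_test_uptodate(folio))) {
		ret = VM_FAULT_SIGBUS;
		goto out_nomap;
	}

	/* batch-map large folios found in the swapcache */
	if (folio_test_large(folio) && folio_test_swapcache(folio)) {
		...
	}

With the pte_same() check done up front, the "start_pte &&" part of the
condition presumably becomes unnecessary, which is the simplification
mentioned above.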

>>
>> > +
>> > +     /* We hit large folios in swapcache */
>>
>> The comment seems unnecessary because the code already says that.
>>
>> > +     if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {
>> > +             int nr = folio_nr_pages(folio);
>> > +             int idx = folio_page_idx(folio, page);
>> > +             unsigned long folio_start = vmf->address - idx * PAGE_SIZE;
>> > +             unsigned long folio_end = folio_start + nr * PAGE_SIZE;
>> > +             pte_t *folio_ptep;
>> > +             pte_t folio_pte;
>> > +
>> > +             if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
>> > +                     goto check_pte;
>> > +             if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
>> > +                     goto check_pte;
>> > +
>> > +             folio_ptep = vmf->pte - idx;
>> > +             folio_pte = ptep_get(folio_ptep);
>>
>> It's better to construct the PTE based on the fault PTE by
>> generalizing pte_next_swp_offset() (perhaps as pte_move_swp_offset()).
>> Then we can find inconsistent PTEs more quickly.
>
> It seems your point is to get the PTE of page 0 via pte_next_swp_offset().
> Unfortunately, pte_next_swp_offset() can't go backwards. On the other
> hand, we have to check the real PTE value of the 0th entry right now,
> because swap_pte_batch() only reads PTEs starting from the 1st entry;
> it assumes its pte argument is the real value of the 0th PTE entry.
>
> static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
> {
>         pte_t expected_pte = pte_next_swp_offset(pte);
>         const pte_t *end_ptep = start_ptep + max_nr;
>         pte_t *ptep = start_ptep + 1;
>
>         VM_WARN_ON(max_nr < 1);
>         VM_WARN_ON(!is_swap_pte(pte));
>         VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
>
>         while (ptep < end_ptep) {
>                 pte = ptep_get(ptep);
>
>                 if (!pte_same(pte, expected_pte))
>                         break;
>
>                 expected_pte = pte_next_swp_offset(expected_pte);
>                 ptep++;
>         }
>
>         return ptep - start_ptep;
> }

Yes.  You are right.

But we may check whether the PTE of page 0 is the same as
"vmf->orig_pte - folio_page_idx()" (pseudocode).

You need to check the pte of page 0 anyway.
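
For example, we could generalize pte_next_swp_offset() into something
like the following (an untested sketch; the name pte_move_swp_offset()
and the details are only my assumption):

	static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
	{
		swp_entry_t entry = pte_to_swp_entry(pte);
		pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
							   swp_offset(entry) + delta));

		/* carry over the software bits of the swap PTE */
		if (pte_swp_soft_dirty(pte))
			new = pte_swp_mksoft_dirty(new);
		if (pte_swp_exclusive(pte))
			new = pte_swp_mkexclusive(new);
		if (pte_swp_uffd_wp(pte))
			new = pte_swp_mkuffd_wp(new);

		return new;
	}

Then the expected PTE of page 0 is just
pte_move_swp_offset(vmf->orig_pte, -folio_page_idx(folio, page)), which
can be compared against ptep_get(folio_ptep).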

>>
>> > +             if (!is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte)) ||
>> > +                 swap_pte_batch(folio_ptep, nr, folio_pte, &any_swap_shared) != nr)
>> > +                     goto check_pte;
>> > +
>> > +             start_address = folio_start;
>> > +             start_pte = folio_ptep;
>> > +             nr_pages = nr;
>> > +             entry = folio->swap;
>> > +             page = &folio->page;
>> > +     }
>> > +
>> > +check_pte:
>> >       if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
>> >               goto out_nomap;
>> >
>> > @@ -4190,6 +4223,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>> >                        */
>> >                       exclusive = false;
>> >               }
>> > +
>> > +             /* Reuse the whole large folio iff all entries are exclusive */
>> > +             if (nr_pages > 1 && any_swap_shared)
>> > +                     exclusive = false;
>> >       }
>> >
>> >       /*
>> > @@ -4204,12 +4241,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>> >        * We're already holding a reference on the page but haven't mapped it
>> >        * yet.
>> >        */
>> > -     swap_free(entry);
>> > +     swap_free_nr(entry, nr_pages);
>> >       if (should_try_to_free_swap(folio, vma, vmf->flags))
>> >               folio_free_swap(folio);
>> >
>> > -     inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>> > -     dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
>> > +     folio_ref_add(folio, nr_pages - 1);
>> > +     add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
>> > +     add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
>> > +
>> >       pte = mk_pte(page, vma->vm_page_prot);
>> >
>> >       /*
>> > @@ -4219,33 +4258,34 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>> >        * exclusivity.
>> >        */
>> >       if (!folio_test_ksm(folio) &&
>> > -         (exclusive || folio_ref_count(folio) == 1)) {
>> > +         (exclusive || (folio_ref_count(folio) == nr_pages &&
>> > +                        folio_nr_pages(folio) == nr_pages))) {
>> >               if (vmf->flags & FAULT_FLAG_WRITE) {
>> >                       pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>> >                       vmf->flags &= ~FAULT_FLAG_WRITE;
>> >               }
>> >               rmap_flags |= RMAP_EXCLUSIVE;
>> >       }
>> > -     flush_icache_page(vma, page);
>> > +     flush_icache_pages(vma, page, nr_pages);
>> >       if (pte_swp_soft_dirty(vmf->orig_pte))
>> >               pte = pte_mksoft_dirty(pte);
>> >       if (pte_swp_uffd_wp(vmf->orig_pte))
>> >               pte = pte_mkuffd_wp(pte);
>> > -     vmf->orig_pte = pte;
>> >
>> >       /* ksm created a completely new copy */
>> >       if (unlikely(folio != swapcache && swapcache)) {
>> > -             folio_add_new_anon_rmap(folio, vma, vmf->address);
>> > +             folio_add_new_anon_rmap(folio, vma, start_address);
>> >               folio_add_lru_vma(folio, vma);
>> >       } else {
>> > -             folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
>> > -                                     rmap_flags);
>> > +             folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
>> > +                                      rmap_flags);
>> >       }
>> >
>> >       VM_BUG_ON(!folio_test_anon(folio) ||
>> >                       (pte_write(pte) && !PageAnonExclusive(page)));
>> > -     set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
>> > -     arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>> > +     set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
>> > +     vmf->orig_pte = ptep_get(vmf->pte);
>> > +     arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);
>>
>> Do we need to call arch_do_swap_page() for each subpage?  IIUC, the
>> corresponding arch_unmap_one() will be called for each subpage.
>
> I actually thought about this very carefully. Right now, the only
> architecture that needs this is SPARC, and it doesn't support
> THP_SWAPOUT at all. There is also no proof that restoring the
> subpages one by one won't break SPARC, so I'd like to defer this
> until SPARC really needs THP_SWAPOUT.

Let's ask SPARC developer (Cced) for this.

IMHO, even if we cannot get help, we need to change the code based on
our understanding instead of deferring it.
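
For example (an untested sketch; it reuses the hypothetical
pte_move_swp_offset() above, and "idx" is folio_page_idx(folio, page)
from the patch):

	/* run the arch hook once per subpage, with per-subpage PTEs */
	for (i = 0; i < nr_pages; i++)
		arch_do_swap_page(vma->vm_mm, vma,
				  start_address + i * PAGE_SIZE,
				  pte_advance_pfn(pte, i),
				  pte_move_swp_offset(vmf->orig_pte, i - idx));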

> On the other hand, it seems really bad that we have both
> arch_swap_restore() (for which arm64 has moved to using folios) and
> arch_do_swap_page(); we should somehow unify them later if SPARC
> wants THP_SWPOUT.
>
>>
>> >       folio_unlock(folio);
>> >       if (folio != swapcache && swapcache) {
>> > @@ -4269,7 +4309,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>> >       }
>> >
>> >       /* No need to invalidate - it was non-present before */
>> > -     update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>> > +     update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
>> >  unlock:
>> >       if (vmf->pte)
>> >               pte_unmap_unlock(vmf->pte, vmf->ptl);

--
Best Regards,
Huang, Ying

