From: Khalid Aziz <khalid.aziz@oracle.com>
To: David Hildenbrand <david@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	John Hubbard <jhubbard@nvidia.com>
Cc: Steven Sistare <steven.sistare@oracle.com>,
	akpm@linux-foundation.org, ying.huang@intel.com,
	mgorman@techsingularity.net, baolin.wang@linux.alibaba.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Khalid Aziz <khalid@kernel.org>
Subject: Re: [PATCH v4] mm, compaction: Skip all non-migratable pages during scan
Date: Tue, 30 May 2023 09:42:33 -0600	[thread overview]
Message-ID: <3c2ac70b-31a2-4e85-0ade-cb9f701525b3@oracle.com> (raw)
In-Reply-To: <4d035744-271d-1ca3-a440-f8b1573eec96@redhat.com>

On 5/29/23 03:25, David Hildenbrand wrote:
> On 29.05.23 02:31, Matthew Wilcox wrote:
>> On Sun, May 28, 2023 at 04:49:52PM -0700, John Hubbard wrote:
>>> On 5/26/23 20:18, Matthew Wilcox wrote:
>>>> On Fri, May 26, 2023 at 07:11:05PM -0700, John Hubbard wrote:
>>>>>> So any user with 1024 processes can fragment physical memory? :/
>>>>>>
>>>>>> Sorry, I'd like to minimize the usage of folio_maybe_dma_pinned().
>>>>>
>>>>> I was actually thinking that we should minimize any more cases of
>>>>> fragile mapcount and refcount comparison, which then leads to
>>>>> Matthew's approach here!
>>>>
>>>> I was wondering if we shouldn't make folio_maybe_dma_pinned() a little
>>>> more accurate.  eg:
>>>>
>>>>         if (folio_test_large(folio))
>>>>                 return atomic_read(&folio->_pincount) > 0;
>>>>         return (unsigned)(folio_ref_count(folio) - folio_mapcount(folio)) >=
>>>>                         GUP_PIN_COUNTING_BIAS;
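
(Spelled out as a complete function, Matthew's tweak would read roughly as in the sketch below. This is an
illustration, not a patch: the signature and the large-folio branch match the existing folio_maybe_dma_pinned()
helper, and only the final comparison, which subtracts the mapcount before checking against the GUP pin bias, is
the proposed change.)

    static inline bool folio_maybe_dma_pinned(struct folio *folio)
    {
            if (folio_test_large(folio))
                    return atomic_read(&folio->_pincount) > 0;

            /*
             * Subtract the references that come from page table mappings
             * before comparing against the GUP pin bias, so heavily mapped
             * order-0 folios are less likely to look pinned.
             */
            return (unsigned int)(folio_ref_count(folio) - folio_mapcount(folio)) >=
                    GUP_PIN_COUNTING_BIAS;
    }
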
>>>
>>> I'm trying to figure out what might be wrong with that, but it seems
>>> OK. We must have talked about this earlier, but I recall vaguely that
>>> there was not a lot of concern about the case of a page being mapped
>>> > 1024 times. Because pinned or not, it's likely to be effectively
>>> locked into memory due to LRU effects. As mentioned here, too.
>>
>> That was my point of view, but David convinced me that a hostile process
>> can effectively lock its own memory into place.
>>
> 
> 1) My opinion on this optimization
> 
> Before I start going into detail, let me first phrase my opinion so we are on the same page:
> 
> "a tiny fraction of Linux installations makes heavy use of long-term pinning -- the *single* mechanism that completely 
> *destroys* the whole purpose of memory compaction -- and the users complain about memory compaction overhead. So we are 
> talking about optimizing for that by eventually harming *everybody else*."
> 
> Ehm ... I'm all for reasonable optimization, but not at any price.
> 
> We don't care about a handful of long-term pinned pages in the system, this is really about vfio long-term pinning a 
> significant amount of system RAM, and we only care about shmem here.
> 
> 
> *maybe* there is an issue with page migration when we have many page mappings, but (a) it's a separate issue and to be 
> dealt with separately, not buried into such checks (b) it's unclear how many page mappings are too many, the magic 
> number 1024 is just a random number (c) it needs much finer control (hostile processes).
> 
> 
> 2) mapcount vs. pagecount
> 
> Now, doing these mapcount vs. pagecount checks is perfectly reasonable (see mm/ksm.c) as long as we know what we're doing. 
> For example, we have to make sure that a possible compound page cannot get split concurrently (e.g., hold a reference). 
> It's been used forever, I consider it stable.
> 
> I completely agree that we should be careful with such mapcount vs. pagecount checks, and if we can use something 
> better, let's use something *better*.

When we have a reliable folio_maybe_dma_longterm_pinned() function, it will be better to call that instead of doing the
refcount vs mapcount check. Until such a function is in place, may I propose that the current patch, while not optimal,
fixes a customer problem and is a good enough working solution. Once a better check is available, page_has_extra_refs()
can be updated to rely on it instead of the refcount vs mapcount comparison.
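
As a rough illustration of that layering (a sketch only, not the code in the patch: the helper's shape and the
expected-reference accounting below are assumptions), the idea is that the comparison lives in a single helper
that can later be switched over to a better predicate:

    static bool page_has_extra_refs(struct folio *folio, struct address_space *mapping)
    {
            /*
             * References we expect even without any pin: one per mapping,
             * plus the page cache / swap cache references if present.
             */
            int expected = folio_mapcount(folio);

            if (mapping || folio_test_swapcache(folio))
                    expected += folio_nr_pages(folio);

            /*
             * Anything beyond that (GUP pins, O_DIRECT, ...) means migration
             * would most likely fail, so compaction can skip the folio.  Once
             * a reliable folio_maybe_dma_longterm_pinned() exists, this
             * comparison is the single place to replace.
             */
            return folio_ref_count(folio) - expected > 0;
    }
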

Thanks,
Khalid


> 
> 
> 3) page_maybe_dma_pinned()
> 
> Now, why do I dislike bringing up page_maybe_dma_pinned() [IOW, why is it not better]? Besides the fact that it ignores
> FOLL_GET for now (that might be fixed at some point):
> 
> I think we agree that order-0 pages are the problem, because we get guaranteed false positives with many mappings (not 
> just on speculative page pinnings). For these order-0 pages, it's perfectly reasonable to check page_maybe_dma_pinned() 
> *as long as* we know the number of mappings is very small.
> 
> I don't consider anon pages the issue here, we barely get 1024 mappings (not even with KSM), and it's much harder to 
> exploit because you cannot simply get new mappings via mmap(), only via fork().
> 
> In fact, we could optimize easily for order-0 anon pages if we'd need to: I have a patch lying around, it just wasn't 
> really worth it for now, because there is only a single relevant page_maybe_dma_pinned() call in vmscan that could benefit:
> 
> https://github.com/davidhildenbrand/linux/commit/0575860d064694d4e2f307b2c20a880a6a7b59ab
> 
> We cannot do the same for pagecache pages, so we would possibly introduce harm by carelessly checking 
> page_maybe_dma_pinned() on pages
> with many mappings.
> 
> 
> 4) folio_maybe_dma_longterm_pinned() ?
> 
> I thought yesterday if we'd want something like folio_maybe_dma_longterm_pinned() here. Essentially using what we 
> learned about long-term pinning of fs pages:
> 
> (1) ZONE_MOVABLE, MIGRATE_CMA -> "return false;"
> (2) If !anon, !hugetlb, !shmem -> "return false;"
> (3) "return folio_maybe_dma_pinned()"
> 
> Yes, above would easily create false-positives for short-term pinned pages (anon/hugetlb/shmem), but would never create 
> false-positives for any other page (shared library ...).
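
To make (1)-(3) concrete, a minimal sketch could look like the following (the helper name comes from this
discussion and does not exist in the kernel; the exact checks, e.g. using is_migrate_cma_page() for the CMA
test, are assumptions):

    static bool folio_maybe_dma_longterm_pinned(struct folio *folio)
    {
            struct address_space *mapping = folio_mapping(folio);

            /*
             * (1) Long-term pins are never allowed on ZONE_MOVABLE or CMA
             * memory, so any pin observed there must be a short-term one.
             */
            if (folio_zonenum(folio) == ZONE_MOVABLE ||
                is_migrate_cma_page(&folio->page))
                    return false;

            /* (2) Only anon, hugetlb and shmem folios can be long-term pinned. */
            if (!folio_test_anon(folio) && !folio_test_hugetlb(folio) &&
                !(mapping && shmem_mapping(mapping)))
                    return false;

            /* (3) Otherwise fall back to the existing pin heuristic. */
            return folio_maybe_dma_pinned(folio);
    }
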
> 
> 
> We would use it in the following way:
> 
> bool skip_folio_in_isolation()
> {
>     /*
>      * Avoid skipping pages that are short-term pinned, the pin
>      * might go away any moment and we'll succeed to migrate.
>      *
>      * We get false positives for short-term pinned anon, shmem and
>      * hugetlb pages for now, but such short-term pins are transient.
>      */
>     if (!folio_maybe_dma_longterm_pinned())
>         return false;
> 
>     /*
>      * order-0 pages with many mappings can easily be confused
>      * for pinned pages and this could be exploited by
>      * malicious user-space to cause fragmentation. This is only
>      * an optimization, so if a page (especially shmem) is mapped
>      * many times, we'll rather try migrating it instead of
>      * accidentally skipping it all the time.
>      */
>     return folio_order(folio) != 0 || total_mappings <= 32;
> }
> 
> Someone long-term pins a shmem page with many mappings? Too bad, we don't optimize for that and still try migrating it.
> 
> 
> BUT, I am still confused if we want to check here for "any additional references", which is what mapcount vs. refcount 
> is, or folio_maybe_dma_longterm_pinned().
> 
> Of course, we could similarly write a variant of skip_folio_in_isolation:
> 
> bool skip_folio_in_isolation()
> {
>     /*
>      * If a page is not pinned, try migrating it. Note that this
>      * does not consider any FOLL_GET used for DMA yet.
>      */
>     if (!folio_maybe_dma_pinned())
>         return false;
> 
>     /*
>      * order-0 pages with many mappings can easily be confused
>      * for pinned pages and this could be exploited by
>      * malicious user-space to cause fragmentation. This is only
>      * an optimization, so if a page is mapped
>      * many times, we'll rather try migrating it instead of
>      * accidentally skipping it all the time.
>      */
>     return folio_order(folio) != 0 || total_mappings <= 32;
> }
> 
> 
> As long as FOLL_GET is still used for DMA, the mapcount vs. pagecount checks might be better ... but it depends on if we 
> care about short-term or long-term pinned pages here.
> 
>>> Anyway, sure.
>>>
>>> A detail:
>>>
>>> The unsigned cast, I'm not sure that helps or solves anything, right?
>>> That is, other than bugs, is it possible to get refcount < mapcount?
> 
> BUG IMHO.
> 
>>>
>>> And if it's only due to bugs, then the casting, again, isn't likely to
>>> going to mitigate the fallout from whatever mess the bug caused.
>>
>> I wasn't thinking too hard about the cast.  If the caller has the folio
>> lock, I don't think it's possible for refcount < mapcount.  This caller
>> has a refcount, but doesn't hold the lock, so it is possible for them
>> to read mapcount first, then have both mapcount and refcount decremented
>> and see refcount < mapcount.
>>
>> I don't think it matters too much.  We don't hold the folio lock, so
>> it might transition from pinned to unpinned as much as a refcount might
>> be decremented or a mapcount incremented.  What's important is that a
>> hostile process can't prevent memory from being moved indefinitely.
>>
>> David, have I missed something else?
> 
> 
> What I learned a while ago from staring at the code in mm/ksm.c:write_protect_page() for too long is that:
> 
> (1) Mapping a page first increments the refcount, then the mapcount
> (2) Unmapping a page first decrements the mapcount, then the refcount
> 
> So the refcount is supposed to always be at least as large as the mapcount, especially if you take a snapshot of both
> (read the refcount first, then the mapcount).
> 
> A hostile process wouldn't be able to block compaction here forever, even if we would accidentally make the wrong call
> once when deciding whether to isolate a page. It would work on the next attempt.
> 
> That's the difference to page_maybe_dma_pinned(), which can be made to consistently block compaction.
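
As a small editorial illustration of the snapshot order described above (not code from the kernel or from the patch):

    static int folio_refs_minus_mapcount(struct folio *folio)
    {
            int refs = folio_ref_count(folio);      /* read the refcount first ... */
            int maps = folio_mapcount(folio);       /* ... then the mapcount */

            /*
             * With the ordering in (1) and (2) above, a racing unmap can only
             * make this difference look larger for one scan; the caller compares
             * it against the references it expects (page cache, its own, ...).
             */
            return refs - maps;
    }
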
> 
> 
> [sorry for the lengthy mail]
> 


Thread overview: 20+ messages
2023-05-25 19:15 [PATCH v4] mm, compaction: Skip all non-migratable pages during scan Khalid Aziz
2023-05-25 19:58 ` Matthew Wilcox
2023-05-25 20:15   ` Steven Sistare
2023-05-25 20:45     ` Matthew Wilcox
2023-05-25 21:31       ` Matthew Wilcox
2023-05-26 15:44         ` Khalid Aziz
2023-05-26 16:44           ` Matthew Wilcox
2023-05-26 16:46             ` David Hildenbrand
2023-05-26 18:46               ` Matthew Wilcox
2023-05-26 18:50                 ` David Hildenbrand
2023-05-27  2:11                   ` John Hubbard
2023-05-27  3:18                     ` Matthew Wilcox
2023-05-28 23:49                       ` John Hubbard
2023-05-29  0:31                         ` Matthew Wilcox
2023-05-29  9:25                           ` David Hildenbrand
2023-05-30 15:42                             ` Khalid Aziz [this message]
2023-06-09 22:11                               ` Andrew Morton
2023-06-09 23:28                                 ` Khalid Aziz
2023-05-25 20:41   ` Khalid Aziz
2023-05-29  3:01 ` Huang, Ying
