Linux-mm Archive mirror
 help / color / mirror / Atom feed
From: Michal Hocko <mhocko@suse.com>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Muchun Song <muchun.song@linux.dev>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Matthew Wilcox <willy@infradead.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH rfc 6/9] mm: memcg: move cgroup v1 oom handling code into memcontrol-v1.c
Date: Fri, 10 May 2024 15:26:35 +0200	[thread overview]
Message-ID: <Zj4gi-vOxLZi2van@tiehlicka> (raw)
In-Reply-To: <20240509034138.2207186-7-roman.gushchin@linux.dev>

On Wed 08-05-24 20:41:35, Roman Gushchin wrote:
[...]
> @@ -1747,106 +1623,14 @@ static bool mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
>  
>  	memcg_memory_event(memcg, MEMCG_OOM);
>  
> -	/*
> -	 * We are in the middle of the charge context here, so we
> -	 * don't want to block when potentially sitting on a callstack
> -	 * that holds all kinds of filesystem and mm locks.
> -	 *
> -	 * cgroup1 allows disabling the OOM killer and waiting for outside
> -	 * handling until the charge can succeed; remember the context and put
> -	 * the task to sleep at the end of the page fault when all locks are
> -	 * released.
> -	 *
> -	 * On the other hand, in-kernel OOM killer allows for an async victim
> -	 * memory reclaim (oom_reaper) and that means that we are not solely
> -	 * relying on the oom victim to make a forward progress and we can
> -	 * invoke the oom killer here.
> -	 *
> -	 * Please note that mem_cgroup_out_of_memory might fail to find a
> -	 * victim and then we have to bail out from the charge path.
> -	 */
> -	if (READ_ONCE(memcg->oom_kill_disable)) {
> -		if (current->in_user_fault) {
> -			css_get(&memcg->css);
> -			current->memcg_in_oom = memcg;
> -			current->memcg_oom_gfp_mask = mask;
> -			current->memcg_oom_order = order;
> -		}
> +	if (!mem_cgroup_v1_oom_prepare(memcg, mask, order, &locked))
>  		return false;
> -	}
> -
> -	mem_cgroup_mark_under_oom(memcg);
> -
> -	locked = mem_cgroup_oom_trylock(memcg);

This really confused me because this looks like the oom locking is
removed for v2 but this is not the case because
mem_cgroup_v1_oom_prepare is not really v1 only code - in other words
this is not going to be just return false for CONFIG_MEMCG_V1=n.

It makes sense to move the userspace oom handling out to the v1 file. I
would keep mem_cgroup_mark_under_oom here. I am not sure about the oom
locking thing because I think we can make it v1 only. For v2 I guess we
can go without this locking as the oom path is already locked and it
implements overkilling prevention (oom_evaluate_task) as it walks all
processes in the oom hierarchy. 

> -
> -	if (locked)
> -		mem_cgroup_oom_notify(memcg);
> -
> -	mem_cgroup_unmark_under_oom(memcg);
>  	ret = mem_cgroup_out_of_memory(memcg, mask, order);
> -
> -	if (locked)
> -		mem_cgroup_oom_unlock(memcg);
> +	mem_cgroup_v1_oom_finish(memcg, &locked);
>  
>  	return ret;
>  }

-- 
Michal Hocko
SUSE Labs


  reply	other threads:[~2024-05-10 13:26 UTC|newest]

Thread overview: 32+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2024-05-09  3:41 [PATCH rfc 0/9] mm: memcg: separate legacy cgroup v1 code and put under config option Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 1/9] mm: memcg: introduce memcontrol-v1.c Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 2/9] mm: memcg: move soft limit reclaim code to memcontrol-v1.c Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 3/9] mm: memcg: move charge migration " Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 4/9] mm: memcg: move legacy memcg event code into memcontrol-v1.c Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 5/9] mm: memcg: move cgroup v1 interface files to memcontrol-v1.c Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 6/9] mm: memcg: move cgroup v1 oom handling code into memcontrol-v1.c Roman Gushchin
2024-05-10 13:26   ` Michal Hocko [this message]
2024-05-25  1:03     ` Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 7/9] mm: memcg: put cgroup v1-specific code under a config option Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 8/9] mm: memcg: put corresponding struct mem_cgroup members under CONFIG_MEMCG_V1 Roman Gushchin
2024-05-09  3:41 ` [PATCH rfc 9/9] mm: memcg: put cgroup v1-related members of task_struct under config option Roman Gushchin
2024-05-09  6:33 ` [PATCH rfc 0/9] mm: memcg: separate legacy cgroup v1 code and put " Shakeel Butt
2024-05-09 17:30   ` Roman Gushchin
2024-05-10  2:59   ` David Rientjes
2024-05-10  7:10     ` Chris Li
2024-05-10  8:10     ` Michal Hocko
2024-05-16  3:35   ` Yafang Shao
2024-05-16 17:29     ` Roman Gushchin
2024-05-17  2:21       ` Yafang Shao
2024-05-18  2:13         ` Roman Gushchin
2024-05-18  7:32     ` Shakeel Butt
2024-05-20  2:14       ` Yafang Shao
2024-05-22 17:58   ` Kairui Song
2024-05-23 19:55     ` Roman Gushchin
2024-05-23 20:26       ` Chris Li
2024-05-28 17:20       ` Kairui Song
2024-05-09 14:22 ` Johannes Weiner
2024-05-09 14:36   ` Johannes Weiner
2024-05-09 14:57     ` Roman Gushchin
2024-05-10 14:18       ` Johannes Weiner
2024-05-10 13:33 ` Michal Hocko

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=Zj4gi-vOxLZi2van@tiehlicka \
    --to=mhocko@suse.com \
    --cc=akpm@linux-foundation.org \
    --cc=hannes@cmpxchg.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=muchun.song@linux.dev \
    --cc=roman.gushchin@linux.dev \
    --cc=shakeel.butt@linux.dev \
    --cc=willy@infradead.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).