From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, cl@linux.com, guro@fb.com,
	hannes@cmpxchg.org, linux-mm@kvack.org, mhocko@kernel.org,
	mm-commits@vger.kernel.org, shakeelb@google.com, tj@kernel.org,
	torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 073/163] mm: memcg/slab: use a single set of kmem_caches for all accounted allocations
Date: Thu, 06 Aug 2020 23:21:10 -0700
Message-ID: <20200807062110.OsXinNSO_%akpm@linux-foundation.org>
In-Reply-To: <20200806231643.a2711a608dd0f18bff2caf2b@linux-foundation.org>

From: Roman Gushchin <guro@fb.com>
Subject: mm: memcg/slab: use a single set of kmem_caches for all accounted allocations

This is a fairly big but mostly red patch: it makes all accounted slab
allocations use a single set of kmem_caches instead of creating a
separate set for each memory cgroup.

Because the number of non-root kmem_caches is now capped by the number of
root kmem_caches, there is no need to shrink or destroy them prematurely.
They can simply be destroyed together with their root counterparts. This
makes it possible to dramatically simplify the management of non-root
kmem_caches and to delete a ton of code.
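
To illustrate, the root-cache metadata ends up looking like this (copied
from the mm/slab.h hunk below, with comments added; the old union of
root- and child-specific fields collapses into two pointers and a list
node):

struct memcg_cache_params {
        /* NULL for a root cache, pointer to the root cache for children */
        struct kmem_cache *root_cache;

        /* root caches only: the single shared memcg cache, or NULL */
        struct kmem_cache *memcg_cache;
        /* root caches only: node in the slab_root_caches list */
        struct list_head __root_caches_node;
};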

This patch performs the following changes:
1) introduces a memcg_params.memcg_cache pointer to represent the
   kmem_cache which will be used for all non-root allocations
2) reuses the existing memcg kmem_cache creation mechanism
   to create the memcg kmem_cache on the first allocation attempt
3) memcg kmem_caches are named <kmemcache_name>-memcg,
   e.g. dentry-memcg
4) simplifies memcg_kmem_get_cache() to just return the memcg
   kmem_cache or schedule its creation and return the root cache
   (see the sketch below)
5) removes almost all non-root kmem_cache management code
   (separate refcounter, reparenting, shrinking, etc)
6) makes the slab debugfs interface display the root_mem_cgroup css id
   and never show the :dead and :deact flags in the memcg_slabinfo
   attribute.
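
For reference, here is the resulting hot path, condensed from the
mm/memcontrol.c hunk below (the memcg_kmem_bypass() check moves to the
caller, and all refcounting and locking on this path is gone):

struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
{
        struct kmem_cache *memcg_cachep;

        /* Pairs with the smp_wmb() in memcg_create_kmem_cache(). */
        memcg_cachep = READ_ONCE(cachep->memcg_params.memcg_cache);
        if (unlikely(!memcg_cachep)) {
                /*
                 * First accounted allocation: schedule asynchronous
                 * creation and fall back to the root cache for now.
                 */
                memcg_schedule_kmem_cache_create(cachep);
                return cachep;
        }

        return memcg_cachep;
}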

Following patches in the series will simplify kmem_cache creation
further.

Link: http://lkml.kernel.org/r/20200623174037.3951353-13-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/memcontrol.h |    5 
 include/linux/slab.h       |    5 
 mm/memcontrol.c            |  165 ++----------
 mm/slab.c                  |   16 -
 mm/slab.h                  |  146 +++--------
 mm/slab_common.c           |  461 +++--------------------------------
 mm/slub.c                  |   38 --
 7 files changed, 136 insertions(+), 700 deletions(-)

--- a/include/linux/memcontrol.h~mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-accounted-allocations
+++ a/include/linux/memcontrol.h
@@ -317,7 +317,6 @@ struct mem_cgroup {
         /* Index in the kmem_cache->memcg_params.memcg_caches array */
 	int kmemcg_id;
 	enum memcg_kmem_state kmem_state;
-	struct list_head kmem_caches;
 	struct obj_cgroup __rcu *objcg;
 	struct list_head objcg_list; /* list of inherited objcgs */
 #endif
@@ -1404,9 +1403,7 @@ static inline void memcg_set_shrinker_bi
 }
 #endif
 
-struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep,
-					struct obj_cgroup **objcgp);
-void memcg_kmem_put_cache(struct kmem_cache *cachep);
+struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
 
 #ifdef CONFIG_MEMCG_KMEM
 int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
--- a/include/linux/slab.h~mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-accounted-allocations
+++ a/include/linux/slab.h
@@ -155,8 +155,7 @@ struct kmem_cache *kmem_cache_create_use
 void kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);
 
-void memcg_create_kmem_cache(struct mem_cgroup *, struct kmem_cache *);
-void memcg_deactivate_kmem_caches(struct mem_cgroup *, struct mem_cgroup *);
+void memcg_create_kmem_cache(struct kmem_cache *cachep);
 
 /*
  * Please use this macro to create slab caches. Simply specify the
@@ -580,8 +579,6 @@ static __always_inline void *kmalloc_nod
 	return __kmalloc_node(size, flags, node);
 }
 
-int memcg_update_all_caches(int num_memcgs);
-
 /**
  * kmalloc_array - allocate memory for an array.
  * @n: number of elements.
--- a/mm/memcontrol.c~mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-accounted-allocations
+++ a/mm/memcontrol.c
@@ -350,7 +350,7 @@ static void memcg_reparent_objcgs(struct
 }
 
 /*
- * This will be the memcg's index in each cache's ->memcg_params.memcg_caches.
+ * This will be used as a shrinker list's index.
  * The main reason for not using cgroup id for this:
  *  this works better in sparse environments, where we have a lot of memcgs,
  *  but only a few kmem-limited. Or also, if we have, for instance, 200
@@ -569,20 +569,16 @@ ino_t page_cgroup_ino(struct page *page)
 	unsigned long ino = 0;
 
 	rcu_read_lock();
-	if (PageSlab(page) && !PageTail(page)) {
-		memcg = memcg_from_slab_page(page);
-	} else {
-		memcg = page->mem_cgroup;
+	memcg = page->mem_cgroup;
 
-		/*
-		 * The lowest bit set means that memcg isn't a valid
-		 * memcg pointer, but a obj_cgroups pointer.
-		 * In this case the page is shared and doesn't belong
-		 * to any specific memory cgroup.
-		 */
-		if ((unsigned long) memcg & 0x1UL)
-			memcg = NULL;
-	}
+	/*
+	 * The lowest bit set means that memcg isn't a valid
+	 * memcg pointer, but a obj_cgroups pointer.
+	 * In this case the page is shared and doesn't belong
+	 * to any specific memory cgroup.
+	 */
+	if ((unsigned long) memcg & 0x1UL)
+		memcg = NULL;
 
 	while (memcg && !(memcg->css.flags & CSS_ONLINE))
 		memcg = parent_mem_cgroup(memcg);
@@ -2822,12 +2818,18 @@ struct mem_cgroup *mem_cgroup_from_obj(v
 	page = virt_to_head_page(p);
 
 	/*
-	 * Slab pages don't have page->mem_cgroup set because corresponding
-	 * kmem caches can be reparented during the lifetime. That's why
-	 * memcg_from_slab_page() should be used instead.
-	 */
-	if (PageSlab(page))
-		return memcg_from_slab_page(page);
+	 * Slab objects are accounted individually, not per-page.
+	 * Memcg membership data for each individual object is saved in
+	 * the page->obj_cgroups.
+	 */
+	if (page_has_obj_cgroups(page)) {
+		struct obj_cgroup *objcg;
+		unsigned int off;
+
+		off = obj_to_index(page->slab_cache, page, p);
+		objcg = page_obj_cgroups(page)[off];
+		return obj_cgroup_memcg(objcg);
+	}
 
 	/* All other pages use page->mem_cgroup */
 	return page->mem_cgroup;
@@ -2882,9 +2884,7 @@ static int memcg_alloc_cache_id(void)
 	else if (size > MEMCG_CACHES_MAX_SIZE)
 		size = MEMCG_CACHES_MAX_SIZE;
 
-	err = memcg_update_all_caches(size);
-	if (!err)
-		err = memcg_update_all_list_lrus(size);
+	err = memcg_update_all_list_lrus(size);
 	if (!err)
 		memcg_nr_cache_ids = size;
 
@@ -2903,7 +2903,6 @@ static void memcg_free_cache_id(int id)
 }
 
 struct memcg_kmem_cache_create_work {
-	struct mem_cgroup *memcg;
 	struct kmem_cache *cachep;
 	struct work_struct work;
 };
@@ -2912,33 +2911,24 @@ static void memcg_kmem_cache_create_func
 {
 	struct memcg_kmem_cache_create_work *cw =
 		container_of(w, struct memcg_kmem_cache_create_work, work);
-	struct mem_cgroup *memcg = cw->memcg;
 	struct kmem_cache *cachep = cw->cachep;
 
-	memcg_create_kmem_cache(memcg, cachep);
+	memcg_create_kmem_cache(cachep);
 
-	css_put(&memcg->css);
 	kfree(cw);
 }
 
 /*
  * Enqueue the creation of a per-memcg kmem_cache.
  */
-static void memcg_schedule_kmem_cache_create(struct mem_cgroup *memcg,
-					       struct kmem_cache *cachep)
+static void memcg_schedule_kmem_cache_create(struct kmem_cache *cachep)
 {
 	struct memcg_kmem_cache_create_work *cw;
 
-	if (!css_tryget_online(&memcg->css))
-		return;
-
 	cw = kmalloc(sizeof(*cw), GFP_NOWAIT | __GFP_NOWARN);
-	if (!cw) {
-		css_put(&memcg->css);
+	if (!cw)
 		return;
-	}
 
-	cw->memcg = memcg;
 	cw->cachep = cachep;
 	INIT_WORK(&cw->work, memcg_kmem_cache_create_func);
 
@@ -2946,102 +2936,26 @@ static void memcg_schedule_kmem_cache_cr
 }
 
 /**
- * memcg_kmem_get_cache: select the correct per-memcg cache for allocation
+ * memcg_kmem_get_cache: select memcg or root cache for allocation
  * @cachep: the original global kmem cache
  *
  * Return the kmem_cache we're supposed to use for a slab allocation.
- * We try to use the current memcg's version of the cache.
  *
  * If the cache does not exist yet, if we are the first user of it, we
  * create it asynchronously in a workqueue and let the current allocation
  * go through with the original cache.
- *
- * This function takes a reference to the cache it returns to assure it
- * won't get destroyed while we are working with it. Once the caller is
- * done with it, memcg_kmem_put_cache() must be called to release the
- * reference.
  */
-struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep,
-					struct obj_cgroup **objcgp)
+struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
 {
-	struct mem_cgroup *memcg;
 	struct kmem_cache *memcg_cachep;
-	struct memcg_cache_array *arr;
-	int kmemcg_id;
-
-	VM_BUG_ON(!is_root_cache(cachep));
 
-	if (memcg_kmem_bypass())
+	memcg_cachep = READ_ONCE(cachep->memcg_params.memcg_cache);
+	if (unlikely(!memcg_cachep)) {
+		memcg_schedule_kmem_cache_create(cachep);
 		return cachep;
-
-	rcu_read_lock();
-
-	if (unlikely(current->active_memcg))
-		memcg = current->active_memcg;
-	else
-		memcg = mem_cgroup_from_task(current);
-
-	if (!memcg || memcg == root_mem_cgroup)
-		goto out_unlock;
-
-	kmemcg_id = READ_ONCE(memcg->kmemcg_id);
-	if (kmemcg_id < 0)
-		goto out_unlock;
-
-	arr = rcu_dereference(cachep->memcg_params.memcg_caches);
-
-	/*
-	 * Make sure we will access the up-to-date value. The code updating
-	 * memcg_caches issues a write barrier to match the data dependency
-	 * barrier inside READ_ONCE() (see memcg_create_kmem_cache()).
-	 */
-	memcg_cachep = READ_ONCE(arr->entries[kmemcg_id]);
-
-	/*
-	 * If we are in a safe context (can wait, and not in interrupt
-	 * context), we could be be predictable and return right away.
-	 * This would guarantee that the allocation being performed
-	 * already belongs in the new cache.
-	 *
-	 * However, there are some clashes that can arrive from locking.
-	 * For instance, because we acquire the slab_mutex while doing
-	 * memcg_create_kmem_cache, this means no further allocation
-	 * could happen with the slab_mutex held. So it's better to
-	 * defer everything.
-	 *
-	 * If the memcg is dying or memcg_cache is about to be released,
-	 * don't bother creating new kmem_caches. Because memcg_cachep
-	 * is ZEROed as the fist step of kmem offlining, we don't need
-	 * percpu_ref_tryget_live() here. css_tryget_online() check in
-	 * memcg_schedule_kmem_cache_create() will prevent us from
-	 * creation of a new kmem_cache.
-	 */
-	if (unlikely(!memcg_cachep))
-		memcg_schedule_kmem_cache_create(memcg, cachep);
-	else if (percpu_ref_tryget(&memcg_cachep->memcg_params.refcnt)) {
-		struct obj_cgroup *objcg = rcu_dereference(memcg->objcg);
-
-		if (!objcg || !obj_cgroup_tryget(objcg)) {
-			percpu_ref_put(&memcg_cachep->memcg_params.refcnt);
-			goto out_unlock;
-		}
-
-		*objcgp = objcg;
-		cachep = memcg_cachep;
 	}
-out_unlock:
-	rcu_read_unlock();
-	return cachep;
-}
 
-/**
- * memcg_kmem_put_cache: drop reference taken by memcg_kmem_get_cache
- * @cachep: the cache returned by memcg_kmem_get_cache
- */
-void memcg_kmem_put_cache(struct kmem_cache *cachep)
-{
-	if (!is_root_cache(cachep))
-		percpu_ref_put(&cachep->memcg_params.refcnt);
+	return memcg_cachep;
 }
 
 /**
@@ -3731,7 +3645,6 @@ static int memcg_online_kmem(struct mem_
 	 */
 	memcg->kmemcg_id = memcg_id;
 	memcg->kmem_state = KMEM_ONLINE;
-	INIT_LIST_HEAD(&memcg->kmem_caches);
 
 	return 0;
 }
@@ -3744,22 +3657,13 @@ static void memcg_offline_kmem(struct me
 
 	if (memcg->kmem_state != KMEM_ONLINE)
 		return;
-	/*
-	 * Clear the online state before clearing memcg_caches array
-	 * entries. The slab_mutex in memcg_deactivate_kmem_caches()
-	 * guarantees that no cache will be created for this cgroup
-	 * after we are done (see memcg_create_kmem_cache()).
-	 */
+
 	memcg->kmem_state = KMEM_ALLOCATED;
 
 	parent = parent_mem_cgroup(memcg);
 	if (!parent)
 		parent = root_mem_cgroup;
 
-	/*
-	 * Deactivate and reparent kmem_caches and objcgs.
-	 */
-	memcg_deactivate_kmem_caches(memcg, parent);
 	memcg_reparent_objcgs(memcg, parent);
 
 	kmemcg_id = memcg->kmemcg_id;
@@ -5384,9 +5288,6 @@ mem_cgroup_css_alloc(struct cgroup_subsy
 
 	/* The following stuff does not apply to the root */
 	if (!parent) {
-#ifdef CONFIG_MEMCG_KMEM
-		INIT_LIST_HEAD(&memcg->kmem_caches);
-#endif
 		root_mem_cgroup = memcg;
 		return &memcg->css;
 	}
--- a/mm/slab.c~mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-accounted-allocations
+++ a/mm/slab.c
@@ -1249,7 +1249,7 @@ void __init kmem_cache_init(void)
 				  nr_node_ids * sizeof(struct kmem_cache_node *),
 				  SLAB_HWCACHE_ALIGN, 0, 0);
 	list_add(&kmem_cache->list, &slab_caches);
-	memcg_link_cache(kmem_cache, NULL);
+	memcg_link_cache(kmem_cache);
 	slab_state = PARTIAL;
 
 	/*
@@ -2253,17 +2253,6 @@ int __kmem_cache_shrink(struct kmem_cach
 	return (ret ? 1 : 0);
 }
 
-#ifdef CONFIG_MEMCG
-void __kmemcg_cache_deactivate(struct kmem_cache *cachep)
-{
-	__kmem_cache_shrink(cachep);
-}
-
-void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s)
-{
-}
-#endif
-
 int __kmem_cache_shutdown(struct kmem_cache *cachep)
 {
 	return __kmem_cache_shrink(cachep);
@@ -3872,7 +3861,8 @@ static int do_tune_cpucache(struct kmem_
 		return ret;
 
 	lockdep_assert_held(&slab_mutex);
-	for_each_memcg_cache(c, cachep) {
+	c = memcg_cache(cachep);
+	if (c) {
 		/* return value determined by the root cache only */
 		__do_tune_cpucache(c, limit, batchcount, shared, gfp);
 	}
--- a/mm/slab_common.c~mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-accounted-allocations
+++ a/mm/slab_common.c
@@ -133,141 +133,36 @@ int __kmem_cache_alloc_bulk(struct kmem_
 #ifdef CONFIG_MEMCG_KMEM
 
 LIST_HEAD(slab_root_caches);
-static DEFINE_SPINLOCK(memcg_kmem_wq_lock);
-
-static void kmemcg_cache_shutdown(struct percpu_ref *percpu_ref);
 
 void slab_init_memcg_params(struct kmem_cache *s)
 {
 	s->memcg_params.root_cache = NULL;
-	RCU_INIT_POINTER(s->memcg_params.memcg_caches, NULL);
-	INIT_LIST_HEAD(&s->memcg_params.children);
-	s->memcg_params.dying = false;
-}
-
-static int init_memcg_params(struct kmem_cache *s,
-			     struct kmem_cache *root_cache)
-{
-	struct memcg_cache_array *arr;
-
-	if (root_cache) {
-		int ret = percpu_ref_init(&s->memcg_params.refcnt,
-					  kmemcg_cache_shutdown,
-					  0, GFP_KERNEL);
-		if (ret)
-			return ret;
-
-		s->memcg_params.root_cache = root_cache;
-		INIT_LIST_HEAD(&s->memcg_params.children_node);
-		INIT_LIST_HEAD(&s->memcg_params.kmem_caches_node);
-		return 0;
-	}
-
-	slab_init_memcg_params(s);
-
-	if (!memcg_nr_cache_ids)
-		return 0;
-
-	arr = kvzalloc(sizeof(struct memcg_cache_array) +
-		       memcg_nr_cache_ids * sizeof(void *),
-		       GFP_KERNEL);
-	if (!arr)
-		return -ENOMEM;
-
-	RCU_INIT_POINTER(s->memcg_params.memcg_caches, arr);
-	return 0;
-}
-
-static void destroy_memcg_params(struct kmem_cache *s)
-{
-	if (is_root_cache(s)) {
-		kvfree(rcu_access_pointer(s->memcg_params.memcg_caches));
-	} else {
-		mem_cgroup_put(s->memcg_params.memcg);
-		WRITE_ONCE(s->memcg_params.memcg, NULL);
-		percpu_ref_exit(&s->memcg_params.refcnt);
-	}
+	s->memcg_params.memcg_cache = NULL;
 }
 
-static void free_memcg_params(struct rcu_head *rcu)
+static void init_memcg_params(struct kmem_cache *s,
+			      struct kmem_cache *root_cache)
 {
-	struct memcg_cache_array *old;
-
-	old = container_of(rcu, struct memcg_cache_array, rcu);
-	kvfree(old);
-}
-
-static int update_memcg_params(struct kmem_cache *s, int new_array_size)
-{
-	struct memcg_cache_array *old, *new;
-
-	new = kvzalloc(sizeof(struct memcg_cache_array) +
-		       new_array_size * sizeof(void *), GFP_KERNEL);
-	if (!new)
-		return -ENOMEM;
-
-	old = rcu_dereference_protected(s->memcg_params.memcg_caches,
-					lockdep_is_held(&slab_mutex));
-	if (old)
-		memcpy(new->entries, old->entries,
-		       memcg_nr_cache_ids * sizeof(void *));
-
-	rcu_assign_pointer(s->memcg_params.memcg_caches, new);
-	if (old)
-		call_rcu(&old->rcu, free_memcg_params);
-	return 0;
-}
-
-int memcg_update_all_caches(int num_memcgs)
-{
-	struct kmem_cache *s;
-	int ret = 0;
-
-	mutex_lock(&slab_mutex);
-	list_for_each_entry(s, &slab_root_caches, root_caches_node) {
-		ret = update_memcg_params(s, num_memcgs);
-		/*
-		 * Instead of freeing the memory, we'll just leave the caches
-		 * up to this point in an updated state.
-		 */
-		if (ret)
-			break;
-	}
-	mutex_unlock(&slab_mutex);
-	return ret;
+	if (root_cache)
+		s->memcg_params.root_cache = root_cache;
+	else
+		slab_init_memcg_params(s);
 }
 
-void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg)
+void memcg_link_cache(struct kmem_cache *s)
 {
-	if (is_root_cache(s)) {
+	if (is_root_cache(s))
 		list_add(&s->root_caches_node, &slab_root_caches);
-	} else {
-		css_get(&memcg->css);
-		s->memcg_params.memcg = memcg;
-		list_add(&s->memcg_params.children_node,
-			 &s->memcg_params.root_cache->memcg_params.children);
-		list_add(&s->memcg_params.kmem_caches_node,
-			 &s->memcg_params.memcg->kmem_caches);
-	}
 }
 
 static void memcg_unlink_cache(struct kmem_cache *s)
 {
-	if (is_root_cache(s)) {
+	if (is_root_cache(s))
 		list_del(&s->root_caches_node);
-	} else {
-		list_del(&s->memcg_params.children_node);
-		list_del(&s->memcg_params.kmem_caches_node);
-	}
 }
 #else
-static inline int init_memcg_params(struct kmem_cache *s,
-				    struct kmem_cache *root_cache)
-{
-	return 0;
-}
-
-static inline void destroy_memcg_params(struct kmem_cache *s)
+static inline void init_memcg_params(struct kmem_cache *s,
+				     struct kmem_cache *root_cache)
 {
 }
 
@@ -328,14 +223,6 @@ int slab_unmergeable(struct kmem_cache *
 	if (s->refcount < 0)
 		return 1;
 
-#ifdef CONFIG_MEMCG_KMEM
-	/*
-	 * Skip the dying kmem_cache.
-	 */
-	if (s->memcg_params.dying)
-		return 1;
-#endif
-
 	return 0;
 }
 
@@ -390,7 +277,7 @@ static struct kmem_cache *create_cache(c
 		unsigned int object_size, unsigned int align,
 		slab_flags_t flags, unsigned int useroffset,
 		unsigned int usersize, void (*ctor)(void *),
-		struct mem_cgroup *memcg, struct kmem_cache *root_cache)
+		struct kmem_cache *root_cache)
 {
 	struct kmem_cache *s;
 	int err;
@@ -410,24 +297,20 @@ static struct kmem_cache *create_cache(c
 	s->useroffset = useroffset;
 	s->usersize = usersize;
 
-	err = init_memcg_params(s, root_cache);
-	if (err)
-		goto out_free_cache;
-
+	init_memcg_params(s, root_cache);
 	err = __kmem_cache_create(s, flags);
 	if (err)
 		goto out_free_cache;
 
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
-	memcg_link_cache(s, memcg);
+	memcg_link_cache(s);
 out:
 	if (err)
 		return ERR_PTR(err);
 	return s;
 
 out_free_cache:
-	destroy_memcg_params(s);
 	kmem_cache_free(kmem_cache, s);
 	goto out;
 }
@@ -514,7 +397,7 @@ kmem_cache_create_usercopy(const char *n
 
 	s = create_cache(cache_name, size,
 			 calculate_alignment(flags, align, size),
-			 flags, useroffset, usersize, ctor, NULL, NULL);
+			 flags, useroffset, usersize, ctor, NULL);
 	if (IS_ERR(s)) {
 		err = PTR_ERR(s);
 		kfree_const(cache_name);
@@ -639,51 +522,27 @@ static int shutdown_cache(struct kmem_ca
 
 #ifdef CONFIG_MEMCG_KMEM
 /*
- * memcg_create_kmem_cache - Create a cache for a memory cgroup.
- * @memcg: The memory cgroup the new cache is for.
+ * memcg_create_kmem_cache - Create a cache for non-root memory cgroups.
  * @root_cache: The parent of the new cache.
  *
  * This function attempts to create a kmem cache that will serve allocation
- * requests going from @memcg to @root_cache. The new cache inherits properties
- * from its parent.
+ * requests going from all non-root memory cgroups to @root_cache. The new
+ * inherits properties from its parent.
  */
-void memcg_create_kmem_cache(struct mem_cgroup *memcg,
-			     struct kmem_cache *root_cache)
+void memcg_create_kmem_cache(struct kmem_cache *root_cache)
 {
-	static char memcg_name_buf[NAME_MAX + 1]; /* protected by slab_mutex */
-	struct cgroup_subsys_state *css = &memcg->css;
-	struct memcg_cache_array *arr;
 	struct kmem_cache *s = NULL;
 	char *cache_name;
-	int idx;
 
 	get_online_cpus();
 	get_online_mems();
 
 	mutex_lock(&slab_mutex);
 
-	/*
-	 * The memory cgroup could have been offlined while the cache
-	 * creation work was pending.
-	 */
-	if (memcg->kmem_state != KMEM_ONLINE)
+	if (root_cache->memcg_params.memcg_cache)
 		goto out_unlock;
 
-	idx = memcg_cache_id(memcg);
-	arr = rcu_dereference_protected(root_cache->memcg_params.memcg_caches,
-					lockdep_is_held(&slab_mutex));
-
-	/*
-	 * Since per-memcg caches are created asynchronously on first
-	 * allocation (see memcg_kmem_get_cache()), several threads can try to
-	 * create the same cache, but only one of them may succeed.
-	 */
-	if (arr->entries[idx])
-		goto out_unlock;
-
-	cgroup_name(css->cgroup, memcg_name_buf, sizeof(memcg_name_buf));
-	cache_name = kasprintf(GFP_KERNEL, "%s(%llu:%s)", root_cache->name,
-			       css->serial_nr, memcg_name_buf);
+	cache_name = kasprintf(GFP_KERNEL, "%s-memcg", root_cache->name);
 	if (!cache_name)
 		goto out_unlock;
 
@@ -691,7 +550,7 @@ void memcg_create_kmem_cache(struct mem_
 			 root_cache->align,
 			 root_cache->flags & CACHE_CREATE_MASK,
 			 root_cache->useroffset, root_cache->usersize,
-			 root_cache->ctor, memcg, root_cache);
+			 root_cache->ctor, root_cache);
 	/*
 	 * If we could not create a memcg cache, do not complain, because
 	 * that's not critical at all as we can always proceed with the root
@@ -708,7 +567,7 @@ void memcg_create_kmem_cache(struct mem_
 	 * initialized.
 	 */
 	smp_wmb();
-	arr->entries[idx] = s;
+	root_cache->memcg_params.memcg_cache = s;
 
 out_unlock:
 	mutex_unlock(&slab_mutex);
@@ -717,231 +576,40 @@ out_unlock:
 	put_online_cpus();
 }
 
-static void kmemcg_workfn(struct work_struct *work)
-{
-	struct kmem_cache *s = container_of(work, struct kmem_cache,
-					    memcg_params.work);
-
-	get_online_cpus();
-	get_online_mems();
-
-	mutex_lock(&slab_mutex);
-	s->memcg_params.work_fn(s);
-	mutex_unlock(&slab_mutex);
-
-	put_online_mems();
-	put_online_cpus();
-}
-
-static void kmemcg_rcufn(struct rcu_head *head)
-{
-	struct kmem_cache *s = container_of(head, struct kmem_cache,
-					    memcg_params.rcu_head);
-
-	/*
-	 * We need to grab blocking locks.  Bounce to ->work.  The
-	 * work item shares the space with the RCU head and can't be
-	 * initialized earlier.
-	 */
-	INIT_WORK(&s->memcg_params.work, kmemcg_workfn);
-	queue_work(memcg_kmem_cache_wq, &s->memcg_params.work);
-}
-
-static void kmemcg_cache_shutdown_fn(struct kmem_cache *s)
-{
-	WARN_ON(shutdown_cache(s));
-}
-
-static void kmemcg_cache_shutdown(struct percpu_ref *percpu_ref)
-{
-	struct kmem_cache *s = container_of(percpu_ref, struct kmem_cache,
-					    memcg_params.refcnt);
-	unsigned long flags;
-
-	spin_lock_irqsave(&memcg_kmem_wq_lock, flags);
-	if (s->memcg_params.root_cache->memcg_params.dying)
-		goto unlock;
-
-	s->memcg_params.work_fn = kmemcg_cache_shutdown_fn;
-	INIT_WORK(&s->memcg_params.work, kmemcg_workfn);
-	queue_work(memcg_kmem_cache_wq, &s->memcg_params.work);
-
-unlock:
-	spin_unlock_irqrestore(&memcg_kmem_wq_lock, flags);
-}
-
-static void kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s)
-{
-	__kmemcg_cache_deactivate_after_rcu(s);
-	percpu_ref_kill(&s->memcg_params.refcnt);
-}
-
-static void kmemcg_cache_deactivate(struct kmem_cache *s)
-{
-	if (WARN_ON_ONCE(is_root_cache(s)))
-		return;
-
-	__kmemcg_cache_deactivate(s);
-	s->flags |= SLAB_DEACTIVATED;
-
-	/*
-	 * memcg_kmem_wq_lock is used to synchronize memcg_params.dying
-	 * flag and make sure that no new kmem_cache deactivation tasks
-	 * are queued (see flush_memcg_workqueue() ).
-	 */
-	spin_lock_irq(&memcg_kmem_wq_lock);
-	if (s->memcg_params.root_cache->memcg_params.dying)
-		goto unlock;
-
-	s->memcg_params.work_fn = kmemcg_cache_deactivate_after_rcu;
-	call_rcu(&s->memcg_params.rcu_head, kmemcg_rcufn);
-unlock:
-	spin_unlock_irq(&memcg_kmem_wq_lock);
-}
-
-void memcg_deactivate_kmem_caches(struct mem_cgroup *memcg,
-				  struct mem_cgroup *parent)
-{
-	int idx;
-	struct memcg_cache_array *arr;
-	struct kmem_cache *s, *c;
-	unsigned int nr_reparented;
-
-	idx = memcg_cache_id(memcg);
-
-	get_online_cpus();
-	get_online_mems();
-
-	mutex_lock(&slab_mutex);
-	list_for_each_entry(s, &slab_root_caches, root_caches_node) {
-		arr = rcu_dereference_protected(s->memcg_params.memcg_caches,
-						lockdep_is_held(&slab_mutex));
-		c = arr->entries[idx];
-		if (!c)
-			continue;
-
-		kmemcg_cache_deactivate(c);
-		arr->entries[idx] = NULL;
-	}
-	nr_reparented = 0;
-	list_for_each_entry(s, &memcg->kmem_caches,
-			    memcg_params.kmem_caches_node) {
-		WRITE_ONCE(s->memcg_params.memcg, parent);
-		css_put(&memcg->css);
-		nr_reparented++;
-	}
-	if (nr_reparented) {
-		list_splice_init(&memcg->kmem_caches,
-				 &parent->kmem_caches);
-		css_get_many(&parent->css, nr_reparented);
-	}
-	mutex_unlock(&slab_mutex);
-
-	put_online_mems();
-	put_online_cpus();
-}
-
 static int shutdown_memcg_caches(struct kmem_cache *s)
 {
-	struct memcg_cache_array *arr;
-	struct kmem_cache *c, *c2;
-	LIST_HEAD(busy);
-	int i;
-
 	BUG_ON(!is_root_cache(s));
 
-	/*
-	 * First, shutdown active caches, i.e. caches that belong to online
-	 * memory cgroups.
-	 */
-	arr = rcu_dereference_protected(s->memcg_params.memcg_caches,
-					lockdep_is_held(&slab_mutex));
-	for_each_memcg_cache_index(i) {
-		c = arr->entries[i];
-		if (!c)
-			continue;
-		if (shutdown_cache(c))
-			/*
-			 * The cache still has objects. Move it to a temporary
-			 * list so as not to try to destroy it for a second
-			 * time while iterating over inactive caches below.
-			 */
-			list_move(&c->memcg_params.children_node, &busy);
-		else
-			/*
-			 * The cache is empty and will be destroyed soon. Clear
-			 * the pointer to it in the memcg_caches array so that
-			 * it will never be accessed even if the root cache
-			 * stays alive.
-			 */
-			arr->entries[i] = NULL;
-	}
-
-	/*
-	 * Second, shutdown all caches left from memory cgroups that are now
-	 * offline.
-	 */
-	list_for_each_entry_safe(c, c2, &s->memcg_params.children,
-				 memcg_params.children_node)
-		shutdown_cache(c);
-
-	list_splice(&busy, &s->memcg_params.children);
+	if (s->memcg_params.memcg_cache)
+		WARN_ON(shutdown_cache(s->memcg_params.memcg_cache));
 
-	/*
-	 * A cache being destroyed must be empty. In particular, this means
-	 * that all per memcg caches attached to it must be empty too.
-	 */
-	if (!list_empty(&s->memcg_params.children))
-		return -EBUSY;
 	return 0;
 }
 
-static void memcg_set_kmem_cache_dying(struct kmem_cache *s)
-{
-	spin_lock_irq(&memcg_kmem_wq_lock);
-	s->memcg_params.dying = true;
-	spin_unlock_irq(&memcg_kmem_wq_lock);
-}
-
 static void flush_memcg_workqueue(struct kmem_cache *s)
 {
 	/*
-	 * SLAB and SLUB deactivate the kmem_caches through call_rcu. Make
-	 * sure all registered rcu callbacks have been invoked.
-	 */
-	rcu_barrier();
-
-	/*
 	 * SLAB and SLUB create memcg kmem_caches through workqueue and SLUB
 	 * deactivates the memcg kmem_caches through workqueue. Make sure all
 	 * previous workitems on workqueue are processed.
 	 */
 	if (likely(memcg_kmem_cache_wq))
 		flush_workqueue(memcg_kmem_cache_wq);
-
-	/*
-	 * If we're racing with children kmem_cache deactivation, it might
-	 * take another rcu grace period to complete their destruction.
-	 * At this moment the corresponding percpu_ref_kill() call should be
-	 * done, but it might take another rcu grace period to complete
-	 * switching to the atomic mode.
-	 * Please, note that we check without grabbing the slab_mutex. It's safe
-	 * because at this moment the children list can't grow.
-	 */
-	if (!list_empty(&s->memcg_params.children))
-		rcu_barrier();
 }
 #else
 static inline int shutdown_memcg_caches(struct kmem_cache *s)
 {
 	return 0;
 }
+
+static inline void flush_memcg_workqueue(struct kmem_cache *s)
+{
+}
 #endif /* CONFIG_MEMCG_KMEM */
 
 void slab_kmem_cache_release(struct kmem_cache *s)
 {
 	__kmem_cache_release(s);
-	destroy_memcg_params(s);
 	kfree_const(s->name);
 	kmem_cache_free(kmem_cache, s);
 }
@@ -953,6 +621,8 @@ void kmem_cache_destroy(struct kmem_cach
 	if (unlikely(!s))
 		return;
 
+	flush_memcg_workqueue(s);
+
 	get_online_cpus();
 	get_online_mems();
 
@@ -962,22 +632,6 @@ void kmem_cache_destroy(struct kmem_cach
 	if (s->refcount)
 		goto out_unlock;
 
-#ifdef CONFIG_MEMCG_KMEM
-	memcg_set_kmem_cache_dying(s);
-
-	mutex_unlock(&slab_mutex);
-
-	put_online_mems();
-	put_online_cpus();
-
-	flush_memcg_workqueue(s);
-
-	get_online_cpus();
-	get_online_mems();
-
-	mutex_lock(&slab_mutex);
-#endif
-
 	err = shutdown_memcg_caches(s);
 	if (!err)
 		err = shutdown_cache(s);
@@ -1019,7 +673,7 @@ int kmem_cache_shrink(struct kmem_cache
 EXPORT_SYMBOL(kmem_cache_shrink);
 
 /**
- * kmem_cache_shrink_all - shrink a cache and all memcg caches for root cache
+ * kmem_cache_shrink_all - shrink root and memcg caches
  * @s: The cache pointer
  */
 void kmem_cache_shrink_all(struct kmem_cache *s)
@@ -1036,21 +690,11 @@ void kmem_cache_shrink_all(struct kmem_c
 	kasan_cache_shrink(s);
 	__kmem_cache_shrink(s);
 
-	/*
-	 * We have to take the slab_mutex to protect from the memcg list
-	 * modification.
-	 */
-	mutex_lock(&slab_mutex);
-	for_each_memcg_cache(c, s) {
-		/*
-		 * Don't need to shrink deactivated memcg caches.
-		 */
-		if (s->flags & SLAB_DEACTIVATED)
-			continue;
+	c = memcg_cache(s);
+	if (c) {
 		kasan_cache_shrink(c);
 		__kmem_cache_shrink(c);
 	}
-	mutex_unlock(&slab_mutex);
 	put_online_mems();
 	put_online_cpus();
 }
@@ -1105,7 +749,7 @@ struct kmem_cache *__init create_kmalloc
 
 	create_boot_cache(s, name, size, flags, useroffset, usersize);
 	list_add(&s->list, &slab_caches);
-	memcg_link_cache(s, NULL);
+	memcg_link_cache(s);
 	s->refcount = 1;
 	return s;
 }
@@ -1483,7 +1127,8 @@ memcg_accumulate_slabinfo(struct kmem_ca
 	if (!is_root_cache(s))
 		return;
 
-	for_each_memcg_cache(c, s) {
+	c = memcg_cache(s);
+	if (c) {
 		memset(&sinfo, 0, sizeof(sinfo));
 		get_slabinfo(c, &sinfo);
 
@@ -1614,7 +1259,7 @@ module_init(slab_proc_init);
 
 #if defined(CONFIG_DEBUG_FS) && defined(CONFIG_MEMCG_KMEM)
 /*
- * Display information about kmem caches that have child memcg caches.
+ * Display information about kmem caches that have memcg cache.
  */
 static int memcg_slabinfo_show(struct seq_file *m, void *unused)
 {
@@ -1626,9 +1271,9 @@ static int memcg_slabinfo_show(struct se
 	seq_puts(m, " <active_slabs> <num_slabs>\n");
 	list_for_each_entry(s, &slab_root_caches, root_caches_node) {
 		/*
-		 * Skip kmem caches that don't have any memcg children.
+		 * Skip kmem caches that don't have the memcg cache.
 		 */
-		if (list_empty(&s->memcg_params.children))
+		if (!s->memcg_params.memcg_cache)
 			continue;
 
 		memset(&sinfo, 0, sizeof(sinfo));
@@ -1637,23 +1282,13 @@ static int memcg_slabinfo_show(struct se
 			   cache_name(s), sinfo.active_objs, sinfo.num_objs,
 			   sinfo.active_slabs, sinfo.num_slabs);
 
-		for_each_memcg_cache(c, s) {
-			struct cgroup_subsys_state *css;
-			char *status = "";
-
-			css = &c->memcg_params.memcg->css;
-			if (!(css->flags & CSS_ONLINE))
-				status = ":dead";
-			else if (c->flags & SLAB_DEACTIVATED)
-				status = ":deact";
-
-			memset(&sinfo, 0, sizeof(sinfo));
-			get_slabinfo(c, &sinfo);
-			seq_printf(m, "%-17s %4d%-6s %6lu %6lu %6lu %6lu\n",
-				   cache_name(c), css->id, status,
-				   sinfo.active_objs, sinfo.num_objs,
-				   sinfo.active_slabs, sinfo.num_slabs);
-		}
+		c = s->memcg_params.memcg_cache;
+		memset(&sinfo, 0, sizeof(sinfo));
+		get_slabinfo(c, &sinfo);
+		seq_printf(m, "%-17s %4d %6lu %6lu %6lu %6lu\n",
+			   cache_name(c), root_mem_cgroup->css.id,
+			   sinfo.active_objs, sinfo.num_objs,
+			   sinfo.active_slabs, sinfo.num_slabs);
 	}
 	mutex_unlock(&slab_mutex);
 	return 0;
--- a/mm/slab.h~mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-accounted-allocations
+++ a/mm/slab.h
@@ -32,66 +32,25 @@ struct kmem_cache {
 
 #else /* !CONFIG_SLOB */
 
-struct memcg_cache_array {
-	struct rcu_head rcu;
-	struct kmem_cache *entries[0];
-};
-
 /*
  * This is the main placeholder for memcg-related information in kmem caches.
- * Both the root cache and the child caches will have it. For the root cache,
- * this will hold a dynamically allocated array large enough to hold
- * information about the currently limited memcgs in the system. To allow the
- * array to be accessed without taking any locks, on relocation we free the old
- * version only after a grace period.
- *
- * Root and child caches hold different metadata.
+ * Both the root cache and the child cache will have it. Some fields are used
+ * in both cases, others are specific to root caches.
  *
  * @root_cache:	Common to root and child caches.  NULL for root, pointer to
  *		the root cache for children.
  *
  * The following fields are specific to root caches.
  *
- * @memcg_caches: kmemcg ID indexed table of child caches.  This table is
- *		used to index child cachces during allocation and cleared
- *		early during shutdown.
- *
- * @root_caches_node: List node for slab_root_caches list.
- *
- * @children:	List of all child caches.  While the child caches are also
- *		reachable through @memcg_caches, a child cache remains on
- *		this list until it is actually destroyed.
- *
- * The following fields are specific to child caches.
- *
- * @memcg:	Pointer to the memcg this cache belongs to.
- *
- * @children_node: List node for @root_cache->children list.
- *
- * @kmem_caches_node: List node for @memcg->kmem_caches list.
+ * @memcg_cache: pointer to memcg kmem cache, used by all non-root memory
+ *		cgroups.
+ * @root_caches_node: list node for slab_root_caches list.
  */
 struct memcg_cache_params {
 	struct kmem_cache *root_cache;
-	union {
-		struct {
-			struct memcg_cache_array __rcu *memcg_caches;
-			struct list_head __root_caches_node;
-			struct list_head children;
-			bool dying;
-		};
-		struct {
-			struct mem_cgroup *memcg;
-			struct list_head children_node;
-			struct list_head kmem_caches_node;
-			struct percpu_ref refcnt;
-
-			void (*work_fn)(struct kmem_cache *);
-			union {
-				struct rcu_head rcu_head;
-				struct work_struct work;
-			};
-		};
-	};
+
+	struct kmem_cache *memcg_cache;
+	struct list_head __root_caches_node;
 };
 #endif /* CONFIG_SLOB */
 
@@ -236,8 +195,6 @@ bool __kmem_cache_empty(struct kmem_cach
 int __kmem_cache_shutdown(struct kmem_cache *);
 void __kmem_cache_release(struct kmem_cache *);
 int __kmem_cache_shrink(struct kmem_cache *);
-void __kmemcg_cache_deactivate(struct kmem_cache *s);
-void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s);
 void slab_kmem_cache_release(struct kmem_cache *);
 void kmem_cache_shrink_all(struct kmem_cache *s);
 
@@ -311,14 +268,6 @@ static inline bool kmem_cache_debug_flag
 extern struct list_head		slab_root_caches;
 #define root_caches_node	memcg_params.__root_caches_node
 
-/*
- * Iterate over all memcg caches of the given root cache. The caller must hold
- * slab_mutex.
- */
-#define for_each_memcg_cache(iter, root) \
-	list_for_each_entry(iter, &(root)->memcg_params.children, \
-			    memcg_params.children_node)
-
 static inline bool is_root_cache(struct kmem_cache *s)
 {
 	return !s->memcg_params.root_cache;
@@ -349,6 +298,13 @@ static inline struct kmem_cache *memcg_r
 	return s->memcg_params.root_cache;
 }
 
+static inline struct kmem_cache *memcg_cache(struct kmem_cache *s)
+{
+	if (is_root_cache(s))
+		return s->memcg_params.memcg_cache;
+	return NULL;
+}
+
 static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
 {
 	/*
@@ -361,25 +317,9 @@ static inline struct obj_cgroup **page_o
 		((unsigned long)page->obj_cgroups & ~0x1UL);
 }
 
-/*
- * Expects a pointer to a slab page. Please note, that PageSlab() check
- * isn't sufficient, as it returns true also for tail compound slab pages,
- * which do not have slab_cache pointer set.
- * So this function assumes that the page can pass PageSlab() && !PageTail()
- * check.
- *
- * The kmem_cache can be reparented asynchronously. The caller must ensure
- * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
- */
-static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+static inline bool page_has_obj_cgroups(struct page *page)
 {
-	struct kmem_cache *s;
-
-	s = READ_ONCE(page->slab_cache);
-	if (s && !is_root_cache(s))
-		return READ_ONCE(s->memcg_params.memcg);
-
-	return NULL;
+	return ((unsigned long)page->obj_cgroups & 0x1UL);
 }
 
 static inline int memcg_alloc_page_obj_cgroups(struct page *page,
@@ -418,17 +358,25 @@ static inline struct kmem_cache *memcg_s
 						size_t objects, gfp_t flags)
 {
 	struct kmem_cache *cachep;
+	struct obj_cgroup *objcg;
+
+	if (memcg_kmem_bypass())
+		return s;
 
-	cachep = memcg_kmem_get_cache(s, objcgp);
+	cachep = memcg_kmem_get_cache(s);
 	if (is_root_cache(cachep))
 		return s;
 
-	if (obj_cgroup_charge(*objcgp, flags, objects * obj_full_size(s))) {
-		obj_cgroup_put(*objcgp);
-		memcg_kmem_put_cache(cachep);
+	objcg = get_obj_cgroup_from_current();
+	if (!objcg)
+		return s;
+
+	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s))) {
+		obj_cgroup_put(objcg);
 		cachep = NULL;
 	}
 
+	*objcgp = objcg;
 	return cachep;
 }
 
@@ -467,7 +415,6 @@ static inline void memcg_slab_post_alloc
 		}
 	}
 	obj_cgroup_put(objcg);
-	memcg_kmem_put_cache(s);
 }
 
 static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
@@ -491,7 +438,7 @@ static inline void memcg_slab_free_hook(
 }
 
 extern void slab_init_memcg_params(struct kmem_cache *);
-extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
+extern void memcg_link_cache(struct kmem_cache *s);
 
 #else /* CONFIG_MEMCG_KMEM */
 
@@ -499,9 +446,6 @@ extern void memcg_link_cache(struct kmem
 #define slab_root_caches	slab_caches
 #define root_caches_node	list
 
-#define for_each_memcg_cache(iter, root) \
-	for ((void)(iter), (void)(root); 0; )
-
 static inline bool is_root_cache(struct kmem_cache *s)
 {
 	return true;
@@ -523,7 +467,17 @@ static inline struct kmem_cache *memcg_r
 	return s;
 }
 
-static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+static inline struct kmem_cache *memcg_cache(struct kmem_cache *s)
+{
+	return NULL;
+}
+
+static inline bool page_has_obj_cgroups(struct page *page)
+{
+	return false;
+}
+
+static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr)
 {
 	return NULL;
 }
@@ -560,8 +514,7 @@ static inline void slab_init_memcg_param
 {
 }
 
-static inline void memcg_link_cache(struct kmem_cache *s,
-				    struct mem_cgroup *memcg)
+static inline void memcg_link_cache(struct kmem_cache *s)
 {
 }
 
@@ -582,17 +535,14 @@ static __always_inline int charge_slab_p
 					    gfp_t gfp, int order,
 					    struct kmem_cache *s)
 {
-#ifdef CONFIG_MEMCG_KMEM
 	if (memcg_kmem_enabled() && !is_root_cache(s)) {
 		int ret;
 
 		ret = memcg_alloc_page_obj_cgroups(page, s, gfp);
 		if (ret)
 			return ret;
-
-		percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
 	}
-#endif
+
 	mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
 			    PAGE_SIZE << order);
 	return 0;
@@ -601,12 +551,9 @@ static __always_inline int charge_slab_p
 static __always_inline void uncharge_slab_page(struct page *page, int order,
 					       struct kmem_cache *s)
 {
-#ifdef CONFIG_MEMCG_KMEM
-	if (memcg_kmem_enabled() && !is_root_cache(s)) {
+	if (memcg_kmem_enabled() && !is_root_cache(s))
 		memcg_free_page_obj_cgroups(page);
-		percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
-	}
-#endif
+
 	mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
 			    -(PAGE_SIZE << order));
 }
@@ -749,9 +696,6 @@ static inline struct kmem_cache_node *ge
 void *slab_start(struct seq_file *m, loff_t *pos);
 void *slab_next(struct seq_file *m, void *p, loff_t *pos);
 void slab_stop(struct seq_file *m, void *p);
-void *memcg_slab_start(struct seq_file *m, loff_t *pos);
-void *memcg_slab_next(struct seq_file *m, void *p, loff_t *pos);
-void memcg_slab_stop(struct seq_file *m, void *p);
 int memcg_slab_show(struct seq_file *m, void *p);
 
 #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
--- a/mm/slub.c~mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-accounted-allocations
+++ a/mm/slub.c
@@ -4204,36 +4204,6 @@ int __kmem_cache_shrink(struct kmem_cach
 	return ret;
 }
 
-#ifdef CONFIG_MEMCG
-void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s)
-{
-	/*
-	 * Called with all the locks held after a sched RCU grace period.
-	 * Even if @s becomes empty after shrinking, we can't know that @s
-	 * doesn't have allocations already in-flight and thus can't
-	 * destroy @s until the associated memcg is released.
-	 *
-	 * However, let's remove the sysfs files for empty caches here.
-	 * Each cache has a lot of interface files which aren't
-	 * particularly useful for empty draining caches; otherwise, we can
-	 * easily end up with millions of unnecessary sysfs files on
-	 * systems which have a lot of memory and transient cgroups.
-	 */
-	if (!__kmem_cache_shrink(s))
-		sysfs_slab_remove(s);
-}
-
-void __kmemcg_cache_deactivate(struct kmem_cache *s)
-{
-	/*
-	 * Disable empty slabs caching. Used to avoid pinning offline
-	 * memory cgroups by kmem pages that can be freed.
-	 */
-	slub_set_cpu_partial(s, 0);
-	s->min_partial = 0;
-}
-#endif	/* CONFIG_MEMCG */
-
 static int slab_mem_going_offline_callback(void *arg)
 {
 	struct kmem_cache *s;
@@ -4390,7 +4360,7 @@ static struct kmem_cache * __init bootst
 	}
 	slab_init_memcg_params(s);
 	list_add(&s->list, &slab_caches);
-	memcg_link_cache(s, NULL);
+	memcg_link_cache(s);
 	return s;
 }
 
@@ -4458,7 +4428,8 @@ __kmem_cache_alias(const char *name, uns
 		s->object_size = max(s->object_size, size);
 		s->inuse = max(s->inuse, ALIGN(size, sizeof(void *)));
 
-		for_each_memcg_cache(c, s) {
+		c = memcg_cache(s);
+		if (c) {
 			c->object_size = s->object_size;
 			c->inuse = max(c->inuse, ALIGN(size, sizeof(void *)));
 		}
@@ -5591,7 +5562,8 @@ static ssize_t slab_attr_store(struct ko
 		 * directly either failed or succeeded, in which case we loop
 		 * through the descendants with best-effort propagation.
 		 */
-		for_each_memcg_cache(c, s)
+		c = memcg_cache(s);
+		if (c)
 			attribute->store(c, buf, len);
 		mutex_unlock(&slab_mutex);
 	}
_

