* [PATCH dlm/next] dlm: add rcu_barrier before destroy kmem cache
@ 2024-06-13 17:06 Alexander Aring
From: Alexander Aring @ 2024-06-13 17:06 UTC
  To: teigland; +Cc: gfs2, aahringo

When dlm_free_rsb() is triggered, it issues a call_rcu() whose callback
does a kfree() of res_lvbptr and a kmem_cache_free() of the rsb pointer.
We need to wait until these pending operations are done before calling
kmem_cache_destroy(). We do that by using rcu_barrier(), which waits
until all pending call_rcu() callbacks have completed. This avoids
kmem_cache_destroy() complaining about active objects that have not yet
been freed by the call_rcu() callback.
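
For reference, the rsb free path that defers into RCU looks roughly
like this (a sketch based on the description above; the exact helper
names in fs/dlm/memory.c may differ):

	static void dlm_free_rsb_rcu(struct rcu_head *rcu)
	{
		struct dlm_rsb *r = container_of(rcu, struct dlm_rsb, rcu);

		/* both frees run only after the grace period, so they
		 * can still be pending at module unload time
		 */
		if (r->res_lvbptr)
			dlm_free_lvb(r->res_lvbptr);
		kmem_cache_free(rsb_cache, r);
	}

	void dlm_free_rsb(struct dlm_rsb *r)
	{
		call_rcu(&r->rcu, dlm_free_rsb_rcu);
	}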

There are ongoing discussions about improving this behaviour, see:

https://lore.kernel.org/netdev/20240609082726.32742-1-Julia.Lawall@inria.fr/

However, that work only covers call_rcu() users whose callback does
nothing but a kmem_cache_free(), replacing them with a kfree_rcu()
call, which currently has some issues. That is not our case, because
we also free res_lvbptr if it is set.
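
To illustrate, the conversion discussed there only fits when the
callback is a plain free of the object itself (hypothetical obj/cb
names, for illustration only):

	/* before: cb() does nothing but kmem_cache_free(cache, obj) */
	call_rcu(&obj->rcu, cb);

	/* after: same effect, no callback needed */
	kfree_rcu(obj, rcu);

Our callback frees two allocations (res_lvbptr and the rsb itself),
so it cannot be collapsed into a single kfree_rcu().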

For our case, to avoid the above race, rcu_barrier() should be used
before calling kmem_cache_destroy() to make sure that no active
objects remain. This is exactly what net/batman-adv does before
calling its kmem_cache_destroy() on module unload.
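
Paraphrased from net/batman-adv's module exit (not an exact copy),
the pattern is:

	static void __exit batadv_exit(void)
	{
		...
		/* flush all outstanding call_rcu() callbacks before
		 * destroying the caches they free into
		 */
		rcu_barrier();
		batadv_tt_cache_destroy();
	}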

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/memory.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/dlm/memory.c b/fs/dlm/memory.c
index 105a79978706..8c44b954c166 100644
--- a/fs/dlm/memory.c
+++ b/fs/dlm/memory.c
@@ -72,6 +72,8 @@ int __init dlm_memory_init(void)
 
 void dlm_memory_exit(void)
 {
+	rcu_barrier();
+
 	kmem_cache_destroy(writequeue_cache);
 	kmem_cache_destroy(mhandle_cache);
 	kmem_cache_destroy(msg_cache);
-- 
2.43.0



* Re: [PATCH dlm/next] dlm: add rcu_barrier before destroy kmem cache
@ 2024-06-13 17:24 Alexander Aring
From: Alexander Aring @ 2024-06-13 17:24 UTC
  To: teigland; +Cc: gfs2

Hi,

On Thu, Jun 13, 2024 at 1:06 PM Alexander Aring <aahringo@redhat.com> wrote:
>
> [...]
> Signed-off-by: Alexander Aring <aahringo@redhat.com>

missing "Fixes" tag:

Fixes: 01fdeca1cc2d ("dlm: use rcu to avoid an extra rsb struct lookup")

- Alex


