* [PATCH] locking/lockdep: Avoid potential access of invalid memory in lock_class
@ 2022-01-03 2:35 Waiman Long
2022-01-03 17:47 ` Bart Van Assche
2022-01-26 13:30 ` [tip: locking/core] " tip-bot2 for Waiman Long
0 siblings, 2 replies; 5+ messages in thread
From: Waiman Long @ 2022-01-03 2:35 UTC
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
Cc: linux-kernel, Bart Van Assche, Tetsuo Handa, Waiman Long
It was found that reading /proc/lockdep after a lockdep splat may
access freed memory if lockdep_unregister_key() is called after the
splat but before /proc/lockdep is read [1]. This happens because the
graph_lock() call in lockdep_unregister_key() fails once debug_locks
has been cleared by the splat.
After lockdep_unregister_key() returns, the lock name may be freed
while the corresponding lock_class structure still holds a reference to
it. That stale pointer is then dereferenced when a user reads
/proc/lockdep, and a use-after-free (UAF) error is reported if KASAN
is enabled.
To fix this problem, lockdep_unregister_key() is now modified to always
search for a matching key irrespective of the debug_locks state and
zap the corresponding lock class if a matching one is found.
[1] https://lore.kernel.org/lkml/77f05c15-81b6-bddd-9650-80d5f23fe330@i-love.sakura.ne.jp/
Fixes: 8b39adbee805 ("locking/lockdep: Make lockdep_unregister_key() honor 'debug_locks' again")
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/locking/lockdep.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 2270ec68f10a..d4252b5c9863 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -6288,7 +6288,13 @@ void lockdep_reset_lock(struct lockdep_map *lock)
lockdep_reset_lock_reg(lock);
}
-/* Unregister a dynamically allocated key. */
+/*
+ * Unregister a dynamically allocated key.
+ *
+ * Unlike lockdep_register_key(), a search is always done to find a matching
+ * key irrespective of debug_locks to avoid potential invalid access to freed
+ * memory in lock_class entry.
+ */
void lockdep_unregister_key(struct lock_class_key *key)
{
struct hlist_head *hash_head = keyhashentry(key);
@@ -6303,10 +6309,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
return;
raw_local_irq_save(flags);
- if (!graph_lock())
- goto out_irq;
+ lockdep_lock();
- pf = get_pending_free();
hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
if (k == key) {
hlist_del_rcu(&k->hash_entry);
@@ -6314,11 +6318,13 @@ void lockdep_unregister_key(struct lock_class_key *key)
break;
}
}
- WARN_ON_ONCE(!found);
- __lockdep_free_key_range(pf, key, 1);
- call_rcu_zapped(pf);
- graph_unlock();
-out_irq:
+ WARN_ON_ONCE(!found && debug_locks);
+ if (found) {
+ pf = get_pending_free();
+ __lockdep_free_key_range(pf, key, 1);
+ call_rcu_zapped(pf);
+ }
+ lockdep_unlock();
raw_local_irq_restore(flags);
/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
--
2.27.0
* Re: [PATCH] locking/lockdep: Avoid potential access of invalid memory in lock_class
2022-01-03 2:35 [PATCH] locking/lockdep: Avoid potential access of invalid memory in lock_class Waiman Long
@ 2022-01-03 17:47 ` Bart Van Assche
2022-01-03 18:17 ` Waiman Long
2022-01-26 13:30 ` [tip: locking/core] " tip-bot2 for Waiman Long
1 sibling, 1 reply; 5+ messages in thread
From: Bart Van Assche @ 2022-01-03 17:47 UTC
To: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
Cc: linux-kernel, Tetsuo Handa
On 1/2/22 18:35, Waiman Long wrote:
> - WARN_ON_ONCE(!found);
> - __lockdep_free_key_range(pf, key, 1);
> - call_rcu_zapped(pf);
> - graph_unlock();
> -out_irq:
> + WARN_ON_ONCE(!found && debug_locks);
lockdep_unregister_key() should only be called for a registered key, so I'd
like to keep the WARN_ON_ONCE(!found) here instead of changing it to
WARN_ON_ONCE(!found && debug_locks). Otherwise this patch looks good to me.
Thanks,
Bart.
* Re: [PATCH] locking/lockdep: Avoid potential access of invalid memory in lock_class
2022-01-03 17:47 ` Bart Van Assche
@ 2022-01-03 18:17 ` Waiman Long
2022-01-03 21:26 ` Bart Van Assche
0 siblings, 1 reply; 5+ messages in thread
From: Waiman Long @ 2022-01-03 18:17 UTC
To: Bart Van Assche, Peter Zijlstra, Ingo Molnar, Will Deacon,
Boqun Feng
Cc: linux-kernel, Tetsuo Handa
On 1/3/22 12:47, Bart Van Assche wrote:
> On 1/2/22 18:35, Waiman Long wrote:
>> - WARN_ON_ONCE(!found);
>> - __lockdep_free_key_range(pf, key, 1);
>> - call_rcu_zapped(pf);
>> - graph_unlock();
>> -out_irq:
>> + WARN_ON_ONCE(!found && debug_locks);
>
> lockdep_unregister_key() should only be called for a registered key so
> I'd
> like to keep the WARN_ON_ONCE(!found) here instead of changing it into
> WARN_ON_ONCE(!found && debug_locks). Otherwise this patch looks good
> to me.
The reason for this change is to handle the case where a
lockdep_register_key()/lockdep_unregister_key() pair is called after
debug_locks has been turned off. Without that change, we would trigger
the warning in lockdep_unregister_key(). That was the motivation for your
original commit 8b39adbee805 ("locking/lockdep: Make
lockdep_unregister_key() honor 'debug_locks' again").
Cheers,
Longman
* Re: [PATCH] locking/lockdep: Avoid potential access of invalid memory in lock_class
2022-01-03 18:17 ` Waiman Long
@ 2022-01-03 21:26 ` Bart Van Assche
0 siblings, 0 replies; 5+ messages in thread
From: Bart Van Assche @ 2022-01-03 21:26 UTC
To: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
Cc: linux-kernel, Tetsuo Handa
On 1/3/22 10:17, Waiman Long wrote:
> On 1/3/22 12:47, Bart Van Assche wrote:
>> On 1/2/22 18:35, Waiman Long wrote:
>>> - WARN_ON_ONCE(!found);
>>> - __lockdep_free_key_range(pf, key, 1);
>>> - call_rcu_zapped(pf);
>>> - graph_unlock();
>>> -out_irq:
>>> + WARN_ON_ONCE(!found && debug_locks);
>>
>> lockdep_unregister_key() should only be called for a registered key so
>> I'd
>> like to keep the WARN_ON_ONCE(!found) here instead of changing it into
>> WARN_ON_ONCE(!found && debug_locks). Otherwise this patch looks good
>> to me.
>
> The reason for this change is to handle the case that a
> lockdep_register_key()/lockdep_unregister_key() pair may be called after
> debug_locks is turned off. Without that change, we are going to get the
> warning in lockdep_unregister_key(). That is the motivation of your
> original commit 8b39adbee805 ("locking/lockdep: Make
> lockdep_unregister_key() honor 'debug_locks' again").
Ah, that's right. Hence:
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
* [tip: locking/core] locking/lockdep: Avoid potential access of invalid memory in lock_class
2022-01-03 2:35 [PATCH] locking/lockdep: Avoid potential access of invalid memory in lock_class Waiman Long
2022-01-03 17:47 ` Bart Van Assche
@ 2022-01-26 13:30 ` tip-bot2 for Waiman Long
1 sibling, 0 replies; 5+ messages in thread
From: tip-bot2 for Waiman Long @ 2022-01-26 13:30 UTC
To: linux-tip-commits
Cc: Tetsuo Handa, Waiman Long, Peter Zijlstra (Intel),
Bart Van Assche, x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 61cc4534b6550997c97a03759ab46b29d44c0017
Gitweb: https://git.kernel.org/tip/61cc4534b6550997c97a03759ab46b29d44c0017
Author: Waiman Long <longman@redhat.com>
AuthorDate: Sun, 02 Jan 2022 21:35:58 -05:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 25 Jan 2022 22:30:28 +01:00
locking/lockdep: Avoid potential access of invalid memory in lock_class
It was found that reading /proc/lockdep after a lockdep splat may
access freed memory if lockdep_unregister_key() is called after the
splat but before /proc/lockdep is read [1]. This happens because the
graph_lock() call in lockdep_unregister_key() fails once debug_locks
has been cleared by the splat.
After lockdep_unregister_key() returns, the lock name may be freed
while the corresponding lock_class structure still holds a reference to
it. That stale pointer is then dereferenced when a user reads
/proc/lockdep, and a use-after-free (UAF) error is reported if KASAN
is enabled.
To fix this problem, lockdep_unregister_key() is now modified to always
search for a matching key irrespective of the debug_locks state and
zap the corresponding lock class if a matching one is found.
[1] https://lore.kernel.org/lkml/77f05c15-81b6-bddd-9650-80d5f23fe330@i-love.sakura.ne.jp/
Fixes: 8b39adbee805 ("locking/lockdep: Make lockdep_unregister_key() honor 'debug_locks' again")
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lkml.kernel.org/r/20220103023558.1377055-1-longman@redhat.com
---
kernel/locking/lockdep.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 89b3df5..2e6892e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -6287,7 +6287,13 @@ void lockdep_reset_lock(struct lockdep_map *lock)
lockdep_reset_lock_reg(lock);
}
-/* Unregister a dynamically allocated key. */
+/*
+ * Unregister a dynamically allocated key.
+ *
+ * Unlike lockdep_register_key(), a search is always done to find a matching
+ * key irrespective of debug_locks to avoid potential invalid access to freed
+ * memory in lock_class entry.
+ */
void lockdep_unregister_key(struct lock_class_key *key)
{
struct hlist_head *hash_head = keyhashentry(key);
@@ -6302,10 +6308,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
return;
raw_local_irq_save(flags);
- if (!graph_lock())
- goto out_irq;
+ lockdep_lock();
- pf = get_pending_free();
hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
if (k == key) {
hlist_del_rcu(&k->hash_entry);
@@ -6313,11 +6317,13 @@ void lockdep_unregister_key(struct lock_class_key *key)
break;
}
}
- WARN_ON_ONCE(!found);
- __lockdep_free_key_range(pf, key, 1);
- call_rcu_zapped(pf);
- graph_unlock();
-out_irq:
+ WARN_ON_ONCE(!found && debug_locks);
+ if (found) {
+ pf = get_pending_free();
+ __lockdep_free_key_range(pf, key, 1);
+ call_rcu_zapped(pf);
+ }
+ lockdep_unlock();
raw_local_irq_restore(flags);
/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */