From: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
To: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Cc: Paolo Valente <paolo.valente@linaro.org>, Jan Kara <jack@suse.cz>,
Yu Kuai <yukuai3@huawei.com>
Subject: [bug report] BUG: KASAN: use-after-free in bic_set_bfqq
Date: Thu, 12 Jan 2023 11:38:34 +0000
Message-ID: <20230112113833.6zkuoxshdcuctlnw@shindev>
I observed another KASAN use-after-free (uaf) related to bfq, and would like to ask
the bfq experts to take a look at it. The whole KASAN message is attached below. It
looks different from the uaf fixed by 246cf66e300b ("block, bfq: fix uaf for bfqq in
bfq_exit_icq_bfqq").
It was first observed during a blktests block/027 run on kernel v6.2-rc3. Depending
on the test machine, it is also recreated occasionally during system boot or ssh
login. When I repeat the system-reboot-and-ssh-login cycle twice, the uaf is
recreated.
I guess 64dc8c732f5c ("block, bfq: fix possible uaf for 'bfqq->bic'") could be the
trigger commit. I cherry-picked the two commits 64dc8c732f5c and 246cf66e300b on
top of v6.1; with that kernel, I observed the KASAN uaf in bic_set_bfqq.
BUG: KASAN: use-after-free in bic_set_bfqq+0x15f/0x190
device offline error, dev sdr, sector 245352968 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
Read of size 8 at addr ffff88811de85f88 by task in:imjournal/815
CPU: 5 PID: 815 Comm: in:imjournal Not tainted 6.2.0-rc3-kts+ #1
Hardware name: Supermicro Super Server/X10SRL-F, BIOS 3.2 11/22/2019
Call Trace:
<TASK>
dump_stack_lvl+0x5b/0x77
print_report+0x182/0x47e
? bic_set_bfqq+0x15f/0x190
? bic_set_bfqq+0x15f/0x190
kasan_report+0xbb/0xf0
? bic_set_bfqq+0x15f/0x190
bic_set_bfqq+0x15f/0x190
bfq_bic_update_cgroup+0x386/0x950
bfq_bio_merge+0x132/0x2c0
? __pfx_bfq_bio_merge+0x10/0x10
blk_mq_submit_bio+0xc5c/0x1b40
? __pfx_blk_mq_submit_bio+0x10/0x10
? find_held_lock+0x2d/0x110
__submit_bio+0x24d/0x2c0
? __pfx___submit_bio+0x10/0x10
submit_bio_noacct_nocheck+0x5b1/0x820
? __pfx_submit_bio_noacct_nocheck+0x10/0x10
? rcu_read_lock_sched_held+0x3f/0x80
ext4_io_submit+0x86/0x110
ext4_do_writepages+0xb97/0x2f70
? __pfx_ext4_do_writepages+0x10/0x10
? lock_is_held_type+0xe3/0x140
ext4_writepages+0x21c/0x4b0
? __pfx_ext4_writepages+0x10/0x10
? __lock_acquire+0xc75/0x5520
do_writepages+0x166/0x630
? __pfx_do_writepages+0x10/0x10
? lock_release+0x365/0x730
? wbc_attach_and_unlock_inode+0x3a3/0x780
? __pfx_lock_release+0x10/0x10
? __pfx_lock_release+0x10/0x10
? __pfx_lock_acquire+0x10/0x10
? do_raw_spin_unlock+0x54/0x1f0
? _raw_spin_unlock+0x29/0x50
? wbc_attach_and_unlock_inode+0x3a3/0x780
filemap_fdatawrite_wbc+0x111/0x170
? kfree+0x115/0x190
__filemap_fdatawrite_range+0x9a/0xc0
? __pfx___filemap_fdatawrite_range+0x10/0x10
? __pfx_ext4_find_entry+0x10/0x10
? __pfx___dquot_initialize+0x10/0x10
? rcu_read_lock_sched_held+0x3f/0x80
? ext4_alloc_da_blocks+0x177/0x210
ext4_rename+0x1123/0x23d0
? __pfx_ext4_rename+0x10/0x10
? __pfx___lock_acquire+0x10/0x10
? lock_acquire+0x1a4/0x4f0
? down_write_nested+0x141/0x200
? ext4_rename2+0x88/0x200
vfs_rename+0xa6e/0x14f0
? __pfx_lock_release+0x10/0x10
? hook_file_open+0x780/0x790
? __pfx_vfs_rename+0x10/0x10
? __d_lookup+0x1fd/0x330
? d_lookup+0x37/0x50
? security_path_rename+0x111/0x1e0
do_renameat2+0x81c/0xa00
? __pfx_do_renameat2+0x10/0x10
? lock_release+0x365/0x730
? __might_fault+0xbc/0x160
? __pfx_lock_release+0x10/0x10
? getname_flags.part.0+0x8d/0x430
? lockdep_hardirqs_on_prepare+0x17b/0x410
__x64_sys_rename+0x7d/0xa0
do_syscall_64+0x5b/0x80
? lockdep_hardirqs_on+0x7d/0x100
? do_syscall_64+0x67/0x80
? do_syscall_64+0x67/0x80
? lockdep_hardirqs_on+0x7d/0x100
? do_syscall_64+0x67/0x80
? do_syscall_64+0x67/0x80
? lockdep_hardirqs_on+0x7d/0x100
? do_syscall_64+0x67/0x80
? do_syscall_64+0x67/0x80
? do_syscall_64+0x67/0x80
? lockdep_hardirqs_on+0x7d/0x100
entry_SYSCALL_64_after_hwframe+0x72/0xdc
RIP: 0033:0x7f8a2a5e3eab
Code: e8 ba 2a 0a 00 f7 d8 19 c0 5b c3 0f 1f 40 00 b8 ff ff ff ff 5b c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 52 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 51 8f 17 00 f7 d8
RSP: 002b:00007f8a213fcc28 EFLAGS: 00000206 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f8a1c00c640 RCX: 00007f8a2a5e3eab
RDX: 0000000000000001 RSI: 000055d94c238820 RDI: 00007f8a213fcc30
RBP: 00007f8a213fcc30 R08: 0000000000000000 R09: 00007f8a1c000130
R10: 0000000000000000 R11: 0000000000000206 R12: 00007f8a1c00b480
R13: 0000000000000067 R14: 00007f8a213fdce0 R15: 00007f8a1c00b180
</TASK>
Allocated by task 815:
kasan_save_stack+0x1c/0x40
kasan_set_track+0x21/0x30
__kasan_slab_alloc+0x88/0x90
kmem_cache_alloc_node+0x175/0x420
--
Shin'ichiro Kawasaki
Thread overview: 7+ messages
2023-01-12 11:38 Shinichiro Kawasaki [this message]
2023-01-12 11:47 ` [bug report] BUG: KASAN: use-after-free in bic_set_bfqq Yu Kuai
2023-01-12 11:53 ` Yu Kuai
2023-01-12 13:18 ` Shinichiro Kawasaki
2023-01-13 1:04 ` Shinichiro Kawasaki
2023-01-13 1:11 ` Yu Kuai
2023-01-12 13:14 ` Shinichiro Kawasaki