Linux-NVME Archive mirror
From: Ming Lei <ming.lei@redhat.com>
To: Mikulas Patocka <mpatocka@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, Keith Busch <kbusch@kernel.org>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Mike Snitzer <snitzer@kernel.org>,
	Milan Broz <gmazyland@gmail.com>,
	linux-block@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-nvme@lists.infradead.org
Subject: Re: [RFC PATCH 1/2] block: change rq_integrity_vec to respect the iterator
Date: Thu, 16 May 2024 16:14:54 +0800	[thread overview]
Message-ID: <ZkXAfmbtXzKGaouC@fedora> (raw)
In-Reply-To: <c366231-e146-5a2b-1d8a-5936fb2047ca@redhat.com>

On Wed, May 15, 2024 at 03:28:11PM +0200, Mikulas Patocka wrote:
> If we allocate a bio that is larger than NVMe maximum request size, attach
> integrity metadata to it and send it to the NVMe subsystem, the integrity
> metadata will be corrupted.
> 
> Splitting the bio works correctly. The function bio_split will clone the
> bio, trim the iterator of the first bio and advance the iterator of the
> second bio.
> 
> However, the function rq_integrity_vec has a bug - it returns the first
> vector of the bio's metadata and completely disregards the metadata
> iterator that was advanced when the bio was split. Thus, the second bio
> uses the same metadata as the first bio and this leads to metadata
> corruption.

Wrt. NVMe, inside blk_mq_submit_bio(), bio_integrity_prep() is called after
the bio is split, so ->bi_integrity is actually allocated for every split
bio. I am therefore not sure the issue is related to bio splitting. Or is it
related to DM over NVMe?
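
A simplified sketch of that ordering in block/blk-mq.c (heavily abridged;
error handling and the plug/cached-request paths elided):

	void blk_mq_submit_bio(struct bio *bio)
	{
		...
		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
		...
		/* runs after the split, so each split bio gets its own
		 * freshly allocated ->bi_integrity */
		if (!bio_integrity_prep(bio))
			return;
		...
	}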

However, rq_integrity_vec() may not work correctly in the case of bio merge.
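
Roughly, assuming a back merge chains bios via bi_next, each carrying its
own ->bi_integrity:

	req->bio -> bio A (A's ->bi_integrity) -> bio B (B's ->bi_integrity)

	rq_integrity_vec(req)	/* resolves metadata from bio A only,
				 * yet the driver applies it to the
				 * whole merged request */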


Thanks, 
Ming




Thread overview: 18+ messages
2024-05-15 13:27 [RFC PATCH 0/2] dm-crypt support for per-sector NVMe metadata Mikulas Patocka
2024-05-15 13:28 ` [RFC PATCH 1/2] block: change rq_integrity_vec to respect the iterator Mikulas Patocka
2024-05-16  2:30   ` Jens Axboe
2024-05-20 12:53     ` Mikulas Patocka
2024-05-23 14:58     ` [PATCH v2] " Mikulas Patocka
2024-05-23 15:01       ` Jens Axboe
2024-05-23 15:11         ` Mikulas Patocka
2024-05-23 15:22           ` Anuj gupta
2024-05-23 15:33           ` Jens Axboe
2024-05-23 15:48             ` Mikulas Patocka
2024-05-16  8:14   ` Ming Lei [this message]
2024-05-20 12:42     ` [RFC PATCH 1/2] " Mikulas Patocka
2024-05-20 13:19       ` Ming Lei
2024-05-15 13:30 ` [RFC PATCH 2/2] dm-crypt: support per-sector NVMe metadata Mikulas Patocka
2024-05-27 22:12 ` [RFC PATCH 0/2] dm-crypt support for " Eric Wheeler
2024-05-28  7:25   ` Milan Broz
2024-05-28 23:55     ` Eric Wheeler
2024-05-28 11:16   ` Mikulas Patocka
