From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 12 Jan 2021 14:20:59 -0700
From: Alex Williamson
To: Keqian Zhu
Cc: Mark Rutland, jiangkunkun@huawei.com, kvm@vger.kernel.org,
 Catalin Marinas, Kirti Wankhede, Will Deacon, kvmarm@lists.cs.columbia.edu,
 Marc Zyngier, Daniel Lezcano, wanghaibin.wang@huawei.com, Julien Thierry,
 Suzuki K Poulose, Alexios Zavras, Thomas Gleixner,
 linux-arm-kernel@lists.infradead.org, Cornelia Huck,
 linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
 James Morse, Andrew Morton, Robin Murphy
Subject: Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
Message-ID: <20210112142059.074c1b0f@omen.home.shazbot.org>
In-Reply-To: <20210107092901.19712-2-zhukeqian1@huawei.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
 <20210107092901.19712-2-zhukeqian1@huawei.com>
List-Id: Development issues for Linux IOMMU support
Content-Type: text/plain; charset="us-ascii"

On Thu, 7 Jan 2021 17:28:57 +0800
Keqian Zhu wrote:

> Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap
> is easy to lose dirty log. For example, after promoting pinned_scope of
> vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose
> dirty log that occurs before vfio_iommu is promoted.
>
> The key point is that pinned-dirty is not a real dirty tracking way, it
> can't continuously track dirty pages, but just restrict dirty scope. It
> is essentially the same as fully-dirty. Fully-dirty is of full-scope and
> pinned-dirty is of pinned-scope.
>
> So we must mark pinned-dirty or fully-dirty after we start dirty tracking
> or clear dirty bitmap, to ensure that dirty log is marked right away.
I was initially convinced by these first three patches, but upon further review, I think the premise is wrong. AIUI, the concern across these patches is that our dirty bitmap is only populated with pages dirtied by pinning, and we only take the pinned-page dirty scope into account at the time the bitmap is retrieved by the user. You suppose this presents a gap: if a vendor driver has not yet identified itself with a page-pinning scope, the entire bitmap should be considered dirty, regardless of whether that driver later pins pages before the user retrieves the dirty bitmap.

I don't think this is how we intended the cooperation between the iommu driver and vendor driver to work. By pinning pages, a vendor driver is not declaring that its future dirty page scope is limited only to pinned pages; it is declaring itself a participant in dirty page tracking and taking responsibility for pinning any necessary pages. For example, we might extend VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking notification to groups, not only to begin dirty tracking, but also to synchronously register their current device DMA footprint. This patch would require a vendor driver to perform a possibly gratuitous page pinning in order to set the scope before dirty logging is enabled, or else the initial bitmap will be fully dirty.

Therefore, I don't see that this series is necessary or correct. Kirti, does this match your thinking?

Thinking about these semantics, it seems there might still be an issue if a group with non-pinned-page dirty scope is detached while dirty logging is enabled. In that case we should in fact fully populate the dirty bitmaps at the time the group is removed, since we don't know the extent of its previous DMA, nor will the group be present to trigger the full bitmap when the user retrieves the dirty bitmap. Creating fully populated bitmaps at the time tracking is enabled, though, negates our ability to take advantage of later enlightenment.
Thanks,
Alex

> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> Signed-off-by: Keqian Zhu
> ---
>  drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
>  1 file changed, 22 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index bceda5e8baaa..b0a26e8e0adf 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
>  	dma->bitmap = NULL;
>  }
>  
> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
>  {
>  	struct rb_node *p;
>  	unsigned long pgshift = __ffs(pgsize);
> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>  	}
>  }
>  
> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
> +{
> +	unsigned long pgshift = __ffs(pgsize);
> +	unsigned long nbits = dma->size >> pgshift;
> +
> +	bitmap_set(dma->bitmap, 0, nbits);
> +}
> +
> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
> +				     struct vfio_dma *dma)
> +{
> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> +
> +	if (iommu->pinned_page_dirty_scope)
> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
> +	else if (dma->iommu_mapped)
> +		vfio_dma_populate_bitmap_full(dma, pgsize);
> +}
> +
>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>  {
>  	struct rb_node *n;
> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>  			}
>  			return ret;
>  		}
> -		vfio_dma_populate_bitmap(dma, pgsize);
> +		vfio_dma_populate_bitmap(iommu, dma);
>  	}
>  	return 0;
>  }
> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>  	unsigned long shift = bit_offset % BITS_PER_LONG;
>  	unsigned long leftover;
>  
> -	/*
> -	 * mark all pages dirty if any IOMMU capable device is not able
> -	 * to report dirty pages and all pages are pinned and mapped.
> -	 */
> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> -		bitmap_set(dma->bitmap, 0, nbits);
> -
>  	if (shift) {
>  		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
>  				  nbits + shift);
> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>  	struct vfio_dma *dma;
>  	struct rb_node *n;
>  	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> -	size_t pgsize = (size_t)1 << pgshift;
>  	int ret;
>  
>  	/*
> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>  		 * pages which are marked dirty by vfio_dma_rw()
>  		 */
>  		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> -		vfio_dma_populate_bitmap(dma, pgsize);
> +		vfio_dma_populate_bitmap(iommu, dma);
>  	}
>  	return 0;
>  }

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
smtp.mailfrom=kvmarm-bounces@lists.cs.columbia.edu Received: from localhost (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id 7DD204B1F8; Tue, 12 Jan 2021 16:21:13 -0500 (EST) X-Virus-Scanned: at lists.cs.columbia.edu Authentication-Results: mm01.cs.columbia.edu (amavisd-new); dkim=softfail (fail, message has been altered) header.i=@redhat.com Received: from mm01.cs.columbia.edu ([127.0.0.1]) by localhost (mm01.cs.columbia.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id BEI4p2PMzdpH; Tue, 12 Jan 2021 16:21:12 -0500 (EST) Received: from mm01.cs.columbia.edu (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id 108C44B227; Tue, 12 Jan 2021 16:21:12 -0500 (EST) Received: from localhost (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id E93C84B222 for ; Tue, 12 Jan 2021 16:21:10 -0500 (EST) X-Virus-Scanned: at lists.cs.columbia.edu Received: from mm01.cs.columbia.edu ([127.0.0.1]) by localhost (mm01.cs.columbia.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JZhvI0Zyqu2P for ; Tue, 12 Jan 2021 16:21:09 -0500 (EST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [216.205.24.124]) by mm01.cs.columbia.edu (Postfix) with ESMTP id 93C6C4B1EA for ; Tue, 12 Jan 2021 16:21:09 -0500 (EST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1610486469; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XLFSIDj1yb2peaSE/tJnoIhEfZLTdfvc3qPuS09xnb4=; b=aOA6mbXx8q22kdfZubzGFTER47drgElw8/KWqedpi/UgCb76Rdcn8EISnapG+8e8pEFdxH GduGKhN9O+cLEvTkbXmbb/K1OOzM36NwH+AsJ2Yabns+MWeAcQvcaTkAhZ0HKWjUcstCYa pOqZsjNgLScEXtWQIMstjjYL2VxOPaY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with 
ESMTP id us-mta-129-Zwx1QS4kPn2XRswLQlixLA-1; Tue, 12 Jan 2021 16:21:05 -0500 X-MC-Unique: Zwx1QS4kPn2XRswLQlixLA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E9A3B107ACF7; Tue, 12 Jan 2021 21:21:01 +0000 (UTC) Received: from omen.home.shazbot.org (ovpn-112-255.phx2.redhat.com [10.3.112.255]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3D0D29CA0; Tue, 12 Jan 2021 21:20:59 +0000 (UTC) Date: Tue, 12 Jan 2021 14:20:59 -0700 From: Alex Williamson To: Keqian Zhu Subject: Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose Message-ID: <20210112142059.074c1b0f@omen.home.shazbot.org> In-Reply-To: <20210107092901.19712-2-zhukeqian1@huawei.com> References: <20210107092901.19712-1-zhukeqian1@huawei.com> <20210107092901.19712-2-zhukeqian1@huawei.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Cc: kvm@vger.kernel.org, Catalin Marinas , Kirti Wankhede , Will Deacon , kvmarm@lists.cs.columbia.edu, Marc Zyngier , Joerg Roedel , Daniel Lezcano , Alexios Zavras , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, Cornelia Huck , linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org, Andrew Morton , Robin Murphy X-BeenThere: kvmarm@lists.cs.columbia.edu X-Mailman-Version: 2.1.14 Precedence: list List-Id: Where KVM/ARM decisions are made List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: kvmarm-bounces@lists.cs.columbia.edu Sender: kvmarm-bounces@lists.cs.columbia.edu On Thu, 7 Jan 2021 17:28:57 +0800 Keqian Zhu wrote: > Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap > is easy to lose dirty log. 
For example, after promoting pinned_scope of > vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose > dirty log that occurs before vfio_iommu is promoted. > > The key point is that pinned-dirty is not a real dirty tracking way, it > can't continuously track dirty pages, but just restrict dirty scope. It > is essentially the same as fully-dirty. Fully-dirty is of full-scope and > pinned-dirty is of pinned-scope. > > So we must mark pinned-dirty or fully-dirty after we start dirty tracking > or clear dirty bitmap, to ensure that dirty log is marked right away. I was initially convinced by these first three patches, but upon further review, I think the premise is wrong. AIUI, the concern across these patches is that our dirty bitmap is only populated with pages dirtied by pinning and we only take into account the pinned page dirty scope at the time the bitmap is retrieved by the user. You suppose this presents a gap where if a vendor driver has not yet identified with a page pinning scope that the entire bitmap should be considered dirty regardless of whether that driver later pins pages prior to the user retrieving the dirty bitmap. I don't think this is how we intended the cooperation between the iommu driver and vendor driver to work. By pinning pages a vendor driver is not declaring that only their future dirty page scope is limited to pinned pages, instead they're declaring themselves as a participant in dirty page tracking and take responsibility for pinning any necessary pages. For example we might extend VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking notification to groups to not only begin dirty tracking, but also to synchronously register their current device DMA footprint. This patch would require a vendor driver to possibly perform a gratuitous page pinning in order to set the scope prior to dirty logging being enabled, or else the initial bitmap will be fully dirty. 
Therefore, I don't see that this series is necessary or correct. Kirti, does this match your thinking? Thinking about these semantics, it seems there might still be an issue if a group with non-pinned-page dirty scope is detached with dirty logging enabled. It seems this should in fact fully populate the dirty bitmaps at the time it's removed since we don't know the extent of its previous DMA, nor will the group be present to trigger the full bitmap when the user retrieves the dirty bitmap. Creating fully populated bitmaps at the time tracking is enabled negates our ability to take advantage of later enlightenment though. Thanks, Alex > Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking") > Signed-off-by: Keqian Zhu > --- > drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++----------- > 1 file changed, 22 insertions(+), 11 deletions(-) > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c > index bceda5e8baaa..b0a26e8e0adf 100644 > --- a/drivers/vfio/vfio_iommu_type1.c > +++ b/drivers/vfio/vfio_iommu_type1.c > @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma) > dma->bitmap = NULL; > } > > -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize) > +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize) > { > struct rb_node *p; > unsigned long pgshift = __ffs(pgsize); > @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize) > } > } > > +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize) > +{ > + unsigned long pgshift = __ffs(pgsize); > + unsigned long nbits = dma->size >> pgshift; > + > + bitmap_set(dma->bitmap, 0, nbits); > +} > + > +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu, > + struct vfio_dma *dma) > +{ > + size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap); > + > + if (iommu->pinned_page_dirty_scope) > + 
vfio_dma_populate_bitmap_pinned(dma, pgsize); > + else if (dma->iommu_mapped) > + vfio_dma_populate_bitmap_full(dma, pgsize); > +} > + > static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu) > { > struct rb_node *n; > @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu) > } > return ret; > } > - vfio_dma_populate_bitmap(dma, pgsize); > + vfio_dma_populate_bitmap(iommu, dma); > } > return 0; > } > @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu, > unsigned long shift = bit_offset % BITS_PER_LONG; > unsigned long leftover; > > - /* > - * mark all pages dirty if any IOMMU capable device is not able > - * to report dirty pages and all pages are pinned and mapped. > - */ > - if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped) > - bitmap_set(dma->bitmap, 0, nbits); > - > if (shift) { > bitmap_shift_left(dma->bitmap, dma->bitmap, shift, > nbits + shift); > @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu, > struct vfio_dma *dma; > struct rb_node *n; > unsigned long pgshift = __ffs(iommu->pgsize_bitmap); > - size_t pgsize = (size_t)1 << pgshift; > int ret; > > /* > @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu, > * pages which are marked dirty by vfio_dma_rw() > */ > bitmap_clear(dma->bitmap, 0, dma->size >> pgshift); > - vfio_dma_populate_bitmap(dma, pgsize); > + vfio_dma_populate_bitmap(iommu, dma); > } > return 0; > } _______________________________________________ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, 
DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59B02C433E0 for ; Tue, 12 Jan 2021 21:23:26 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id EF8EA2311F for ; Tue, 12 Jan 2021 21:23:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EF8EA2311F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID: Subject:To:From:Date:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=FAPLShAibZZMVxio0jFVv2xZY7/65QoF8wbcn9/UxSI=; b=mXijncOlX9ROfw0jcYT3vaMzx OJ5RT+M0GRqliDGciGDe6/gD7VirKs41qgUl99qSp7TkrXqBXVIHzDyeJpY0iMT3R38PZYyHer0E+ FD5ZIQ7gRAgwH3W3bMH3N2HY8f/Hcwvj8z4b8/+akmHN1h3z4Et5yAzqva9EbJIS9Hg0DMoGb68q+ rvzGtjmK3ccjqj2Vg61XyW6n1w1KAk00k0+OTmzUeHgtk2SVf6J2bKyMyrBlUmmwtJupGslUUlb5/ 5IavVwwG9Vfh2rQzPbuc20TvzQ4a7I7LS4VsHE/8UNjTn4ge4eEPqHlbr/q5dA361h5WQed2XfaL7 +J6x/oBbw==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1kzR6L-0007dH-0Z; Tue, 12 Jan 2021 21:21:17 +0000 Received: from 
us-smtp-delivery-124.mimecast.com ([216.205.24.124]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1kzR6E-0007bW-7A for linux-arm-kernel@lists.infradead.org; Tue, 12 Jan 2021 21:21:11 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1610486469; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XLFSIDj1yb2peaSE/tJnoIhEfZLTdfvc3qPuS09xnb4=; b=aOA6mbXx8q22kdfZubzGFTER47drgElw8/KWqedpi/UgCb76Rdcn8EISnapG+8e8pEFdxH GduGKhN9O+cLEvTkbXmbb/K1OOzM36NwH+AsJ2Yabns+MWeAcQvcaTkAhZ0HKWjUcstCYa pOqZsjNgLScEXtWQIMstjjYL2VxOPaY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-129-Zwx1QS4kPn2XRswLQlixLA-1; Tue, 12 Jan 2021 16:21:05 -0500 X-MC-Unique: Zwx1QS4kPn2XRswLQlixLA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E9A3B107ACF7; Tue, 12 Jan 2021 21:21:01 +0000 (UTC) Received: from omen.home.shazbot.org (ovpn-112-255.phx2.redhat.com [10.3.112.255]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3D0D29CA0; Tue, 12 Jan 2021 21:20:59 +0000 (UTC) Date: Tue, 12 Jan 2021 14:20:59 -0700 From: Alex Williamson To: Keqian Zhu Subject: Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose Message-ID: <20210112142059.074c1b0f@omen.home.shazbot.org> In-Reply-To: <20210107092901.19712-2-zhukeqian1@huawei.com> References: <20210107092901.19712-1-zhukeqian1@huawei.com> <20210107092901.19712-2-zhukeqian1@huawei.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 
(BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210112_162110_389983_504BE761 X-CRM114-Status: GOOD ( 35.89 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , jiangkunkun@huawei.com, kvm@vger.kernel.org, Catalin Marinas , Kirti Wankhede , Will Deacon , kvmarm@lists.cs.columbia.edu, Marc Zyngier , Joerg Roedel , Daniel Lezcano , wanghaibin.wang@huawei.com, Julien Thierry , Suzuki K Poulose , Alexios Zavras , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, Cornelia Huck , linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org, James Morse , Andrew Morton , Robin Murphy Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org On Thu, 7 Jan 2021 17:28:57 +0800 Keqian Zhu wrote: > Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap > is easy to lose dirty log. For example, after promoting pinned_scope of > vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose > dirty log that occurs before vfio_iommu is promoted. > > The key point is that pinned-dirty is not a real dirty tracking way, it > can't continuously track dirty pages, but just restrict dirty scope. It > is essentially the same as fully-dirty. Fully-dirty is of full-scope and > pinned-dirty is of pinned-scope. > > So we must mark pinned-dirty or fully-dirty after we start dirty tracking > or clear dirty bitmap, to ensure that dirty log is marked right away. I was initially convinced by these first three patches, but upon further review, I think the premise is wrong. AIUI, the concern across these patches is that our dirty bitmap is only populated with pages dirtied by pinning and we only take into account the pinned page dirty scope at the time the bitmap is retrieved by the user. 
You suppose this presents a gap where if a vendor driver has not yet identified with a page pinning scope that the entire bitmap should be considered dirty regardless of whether that driver later pins pages prior to the user retrieving the dirty bitmap. I don't think this is how we intended the cooperation between the iommu driver and vendor driver to work. By pinning pages a vendor driver is not declaring that only their future dirty page scope is limited to pinned pages, instead they're declaring themselves as a participant in dirty page tracking and take responsibility for pinning any necessary pages. For example we might extend VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking notification to groups to not only begin dirty tracking, but also to synchronously register their current device DMA footprint. This patch would require a vendor driver to possibly perform a gratuitous page pinning in order to set the scope prior to dirty logging being enabled, or else the initial bitmap will be fully dirty. Therefore, I don't see that this series is necessary or correct. Kirti, does this match your thinking? Thinking about these semantics, it seems there might still be an issue if a group with non-pinned-page dirty scope is detached with dirty logging enabled. It seems this should in fact fully populate the dirty bitmaps at the time it's removed since we don't know the extent of its previous DMA, nor will the group be present to trigger the full bitmap when the user retrieves the dirty bitmap. Creating fully populated bitmaps at the time tracking is enabled negates our ability to take advantage of later enlightenment though. 
Thanks, Alex > Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking") > Signed-off-by: Keqian Zhu > --- > drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++----------- > 1 file changed, 22 insertions(+), 11 deletions(-) > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c > index bceda5e8baaa..b0a26e8e0adf 100644 > --- a/drivers/vfio/vfio_iommu_type1.c > +++ b/drivers/vfio/vfio_iommu_type1.c > @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma) > dma->bitmap = NULL; > } > > -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize) > +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize) > { > struct rb_node *p; > unsigned long pgshift = __ffs(pgsize); > @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize) > } > } > > +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize) > +{ > + unsigned long pgshift = __ffs(pgsize); > + unsigned long nbits = dma->size >> pgshift; > + > + bitmap_set(dma->bitmap, 0, nbits); > +} > + > +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu, > + struct vfio_dma *dma) > +{ > + size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap); > + > + if (iommu->pinned_page_dirty_scope) > + vfio_dma_populate_bitmap_pinned(dma, pgsize); > + else if (dma->iommu_mapped) > + vfio_dma_populate_bitmap_full(dma, pgsize); > +} > + > static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu) > { > struct rb_node *n; > @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu) > } > return ret; > } > - vfio_dma_populate_bitmap(dma, pgsize); > + vfio_dma_populate_bitmap(iommu, dma); > } > return 0; > } > @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu, > unsigned long shift = bit_offset % BITS_PER_LONG; > unsigned long leftover; > > - /* > - * mark all pages dirty if any 
IOMMU capable device is not able > - * to report dirty pages and all pages are pinned and mapped. > - */ > - if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped) > - bitmap_set(dma->bitmap, 0, nbits); > - > if (shift) { > bitmap_shift_left(dma->bitmap, dma->bitmap, shift, > nbits + shift); > @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu, > struct vfio_dma *dma; > struct rb_node *n; > unsigned long pgshift = __ffs(iommu->pgsize_bitmap); > - size_t pgsize = (size_t)1 << pgshift; > int ret; > > /* > @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu, > * pages which are marked dirty by vfio_dma_rw() > */ > bitmap_clear(dma->bitmap, 0, dma->size >> pgshift); > - vfio_dma_populate_bitmap(dma, pgsize); > + vfio_dma_populate_bitmap(iommu, dma); > } > return 0; > } _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9EF0AC4321A for ; Tue, 12 Jan 2021 21:48:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6789C23122 for ; Tue, 12 Jan 2021 21:48:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2393346AbhALVg1 (ORCPT ); Tue, 12 Jan 2021 16:36:27 -0500 Received: from 
Date: Tue, 12 Jan 2021 14:20:59 -0700
From: Alex Williamson
To: Keqian Zhu
Cc: Kirti Wankhede, Cornelia Huck, Will Deacon, Marc Zyngier,
 Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
 Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
 Julien Thierry, Andrew Morton, Alexios Zavras
Subject: Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
Message-ID: <20210112142059.074c1b0f@omen.home.shazbot.org>
In-Reply-To:
 <20210107092901.19712-2-zhukeqian1@huawei.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
 <20210107092901.19712-2-zhukeqian1@huawei.com>

On Thu, 7 Jan 2021 17:28:57 +0800
Keqian Zhu wrote:

> Deferring the fully-dirty check on a vfio_dma to update_user_bitmap()
> makes it easy to lose the dirty log. For example, after the
> pinned_scope of the vfio_iommu is promoted, a vfio_dma is no longer
> considered fully dirty, so we may lose dirty log that was generated
> before the vfio_iommu was promoted.
>
> The key point is that pinned-dirty is not a real dirty tracking
> mechanism; it cannot continuously track dirty pages, it only restricts
> the dirty scope. It is essentially the same as fully-dirty: fully-dirty
> covers the full scope and pinned-dirty covers the pinned scope.
>
> So we must mark pinned-dirty or fully-dirty right after we start dirty
> tracking or clear the dirty bitmap, to ensure that the dirty log is
> recorded immediately.

I was initially convinced by these first three patches, but upon
further review, I think the premise is wrong. AIUI, the concern across
these patches is that our dirty bitmap is only populated with pages
dirtied by pinning, and that we only take the pinned-page dirty scope
into account at the time the bitmap is retrieved by the user. You
suppose this presents a gap: if a vendor driver has not yet identified
itself with a page-pinning scope, the entire bitmap should be
considered dirty, regardless of whether that driver later pins pages
prior to the user retrieving the dirty bitmap.

I don't think this is how we intended the cooperation between the iommu
driver and vendor driver to work.
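For readers skimming the thread, the bitmap-population split the patch proposes (only pinned pages dirty under pinned-page scope, everything dirty for an iommu-mapped range otherwise) can be sketched in plain userspace C. This is a toy model: the names loosely mirror the patch, but the struct and bitmap helpers here are simplified single-word stand-ins, not the kernel's `<linux/bitmap.h>` implementation.

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_NPAGES 16UL /* toy DMA range: 16 pages, fits in one word */

/* Single-word stand-in for the kernel's bitmap_set(). */
static void toy_bitmap_set(unsigned long *map, unsigned long start,
			   unsigned long nbits)
{
	for (unsigned long i = start; i < start + nbits; i++)
		map[0] |= 1UL << i;
}

static bool toy_test_bit(const unsigned long *map, unsigned long bit)
{
	return (map[0] >> bit) & 1UL;
}

struct toy_dma {
	unsigned long bitmap[1];
	bool iommu_mapped;
	unsigned long pinned[4]; /* page indices pinned by a vendor driver */
	unsigned long npinned;
};

/* Pinned scope: mark only the pages a vendor driver actually pinned. */
static void toy_populate_pinned(struct toy_dma *dma)
{
	for (unsigned long i = 0; i < dma->npinned; i++)
		toy_bitmap_set(dma->bitmap, dma->pinned[i], 1);
}

/*
 * Mirrors the shape of the patched vfio_dma_populate_bitmap(): with
 * pinned-page dirty scope only pinned pages are marked dirty; otherwise
 * an iommu-mapped range must be treated as fully dirty, since device DMA
 * could have touched any page in it.
 */
static void toy_populate(struct toy_dma *dma, bool pinned_page_dirty_scope)
{
	if (pinned_page_dirty_scope)
		toy_populate_pinned(dma);
	else if (dma->iommu_mapped)
		toy_bitmap_set(dma->bitmap, 0, TOY_NPAGES);
}
```

The dispute in this reply is not about this mechanism itself but about *when* the full-scope population should happen, so that no dirty state is lost across a scope promotion or a group detach.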
By pinning pages, a vendor driver is not declaring that its future dirty
page scope is limited to pinned pages; it is declaring itself a
participant in dirty page tracking and taking responsibility for pinning
any necessary pages. For example, we might extend
VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking notification to
groups, not only to begin dirty tracking but also to synchronously
register their current device DMA footprint. This patch would require a
vendor driver to possibly perform a gratuitous page pinning in order to
set the scope prior to dirty logging being enabled, or else the initial
bitmap will be fully dirty.

Therefore, I don't see that this series is necessary or correct. Kirti,
does this match your thinking?

Thinking about these semantics, it seems there might still be an issue
if a group with non-pinned-page dirty scope is detached while dirty
logging is enabled. It seems this should in fact fully populate the
dirty bitmaps at the time the group is removed, since we don't know the
extent of its previous DMA, nor will the group be present to trigger the
full bitmap when the user retrieves the dirty bitmap. Creating fully
populated bitmaps at the time tracking is enabled negates our ability to
take advantage of later enlightenment, though.
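As an aside for readers, the context lines of the update_user_bitmap() hunk in the quoted patch (shift = bit_offset % BITS_PER_LONG, bitmap_shift_left()) hint at how the per-vfio_dma bitmap is merged into the user's bitmap at an arbitrary bit offset. The kernel handles arbitrary lengths with bitmap_shift_left() and copy_to_user(); the toy, single-spill userspace version below only illustrates the core alignment idea.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Merge the low 'nbits' of 'dirty' into the user's bitmap starting at
 * 'bit_offset': shift the dirty bits so bit 0 lines up with bit_offset,
 * OR the result into the containing 64-bit word, and OR any overflow
 * into the next word. Toy version: 'dirty' is at most one word.
 */
static void toy_merge_dirty(uint64_t *user, uint64_t dirty,
			    unsigned int nbits, unsigned int bit_offset)
{
	unsigned int word = bit_offset / 64;
	unsigned int shift = bit_offset % 64;
	uint64_t mask = (nbits < 64) ? ((1ULL << nbits) - 1) : ~0ULL;
	uint64_t bits = dirty & mask;

	user[word] |= bits << shift;
	if (shift && nbits + shift > 64)	/* dirty bits spill over */
		user[word + 1] |= bits >> (64 - shift);
}
```

For instance, merging four dirty bits at bit_offset 62 lands two bits in the first word and the rest in the second, which is exactly the unaligned case the `if (shift)` branch in update_user_bitmap() exists to handle.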
Thanks,
Alex