From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 13 Jan 2021 08:13:05 -0700
From: Alex Williamson
To: Kirti Wankhede
Subject: Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
Message-ID: <20210113081305.1df7de8d@omen.home.shazbot.org>
In-Reply-To: <3f4f9a82-0934-b114-8bd8-452e9e56712f@nvidia.com>
References: <20210107092901.19712-1-zhukeqian1@huawei.com>
 <20210107092901.19712-2-zhukeqian1@huawei.com>
 <20210112142059.074c1b0f@omen.home.shazbot.org>
 <3f4f9a82-0934-b114-8bd8-452e9e56712f@nvidia.com>
Cc: Mark Rutland, jiangkunkun@huawei.com, kvm@vger.kernel.org,
 Catalin Marinas, Will Deacon, Keqian Zhu, Marc Zyngier, Daniel Lezcano,
 kvmarm@lists.cs.columbia.edu, wanghaibin.wang@huawei.com, Julien Thierry,
 Suzuki K Poulose, Alexios Zavras, Thomas Gleixner,
 linux-arm-kernel@lists.infradead.org, Cornelia Huck,
 linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
 James Morse, Andrew Morton, Robin Murphy
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On Wed, 13 Jan 2021 18:05:43 +0530
Kirti Wankhede wrote:

> On 1/13/2021 2:50 AM, Alex Williamson wrote:
> > On Thu, 7 Jan 2021 17:28:57 +0800
> > Keqian Zhu wrote:
> >
> >> Deferring the check of whether a vfio_dma is fully dirty until
> >> update_user_bitmap() makes it easy to lose dirty log. For example,
> >> after the pinned_scope of the vfio_iommu is promoted, the vfio_dma
> >> is no longer considered fully dirty, so we may lose dirty log that
> >> was generated before the vfio_iommu was promoted.
> >>
> >> The key point is that pinned-dirty is not a real dirty tracking
> >> mechanism; it cannot continuously track dirty pages, it only
> >> restricts the dirty scope. It is essentially the same as
> >> fully-dirty: fully-dirty covers the full scope, and pinned-dirty
> >> covers the pinned scope.
> >>
> >> So we must mark pinned-dirty or fully-dirty after we start dirty
> >> tracking or clear the dirty bitmap, to ensure that the dirty log is
> >> marked right away.
> >
> > I was initially convinced by these first three patches, but upon
> > further review, I think the premise is wrong. AIUI, the concern
> > across these patches is that our dirty bitmap is only populated with
> > pages dirtied by pinning, and we only take into account the
> > pinned-page dirty scope at the time the bitmap is retrieved by the
> > user. You suppose this presents a gap: if a vendor driver has not
> > yet identified itself with a page-pinning scope, the entire bitmap
> > should be considered dirty, regardless of whether that driver later
> > pins pages prior to the user retrieving the dirty bitmap.
> >
> > I don't think this is how we intended the cooperation between the
> > iommu driver and vendor driver to work. By pinning pages a vendor
> > driver is not declaring that only their future dirty page scope is
> > limited to pinned pages; instead they're declaring themselves a
> > participant in dirty page tracking and taking responsibility for
> > pinning any necessary pages. For example, we might extend
> > VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking notification
> > to groups to not only begin dirty tracking, but also to
> > synchronously register their current device DMA footprint. This
> > patch would require a vendor driver to possibly perform a gratuitous
> > page pinning in order to set the scope prior to dirty logging being
> > enabled, or else the initial bitmap will be fully dirty.
> >
> > Therefore, I don't see that this series is necessary or correct.
> > Kirti, does this match your thinking?
> >
>
> That's correct, Alex, and I agree with you.
>
> > Thinking about these semantics, it seems there might still be an
> > issue if a group with non-pinned-page dirty scope is detached with
> > dirty logging enabled.
>
> Hot-unplug a device while the migration process has started - is this
> scenario supported?

It's not prevented, it would rely on a userspace policy, right?  The
kernel should do the right thing regardless.  Thanks,

Alex

> > It seems this should in fact fully populate the dirty bitmaps at the
> > time it's removed, since we don't know the extent of its previous
> > DMA, nor will the group be present to trigger the full bitmap when
> > the user retrieves the dirty bitmap. Creating fully populated
> > bitmaps at the time tracking is enabled negates our ability to take
> > advantage of later enlightenment, though. Thanks,
> >
> > Alex
> >
> >> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> >> Signed-off-by: Keqian Zhu
> >> ---
> >>  drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
> >>  1 file changed, 22 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index bceda5e8baaa..b0a26e8e0adf 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
> >>  	dma->bitmap = NULL;
> >>  }
> >>  
> >> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> >> +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
> >>  {
> >>  	struct rb_node *p;
> >>  	unsigned long pgshift = __ffs(pgsize);
> >> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> >>  	}
> >>  }
> >>  
> >> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
> >> +{
> >> +	unsigned long pgshift = __ffs(pgsize);
> >> +	unsigned long nbits = dma->size >> pgshift;
> >> +
> >> +	bitmap_set(dma->bitmap, 0, nbits);
> >> +}
> >> +
> >> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
> >> +				     struct vfio_dma *dma)
> >> +{
> >> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> >> +
> >> +	if (iommu->pinned_page_dirty_scope)
> >> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
> >> +	else if (dma->iommu_mapped)
> >> +		vfio_dma_populate_bitmap_full(dma, pgsize);
> >> +}
> >> +
> >>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
> >>  {
> >>  	struct rb_node *n;
> >> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
> >>  			}
> >>  			return ret;
> >>  		}
> >> -		vfio_dma_populate_bitmap(dma, pgsize);
> >> +		vfio_dma_populate_bitmap(iommu, dma);
> >>  	}
> >>  	return 0;
> >>  }
> >> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>  	unsigned long shift = bit_offset % BITS_PER_LONG;
> >>  	unsigned long leftover;
> >>  
> >> -	/*
> >> -	 * mark all pages dirty if any IOMMU capable device is not able
> >> -	 * to report dirty pages and all pages are pinned and mapped.
> >> -	 */
> >> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> >> -		bitmap_set(dma->bitmap, 0, nbits);
> >> -
> >>  	if (shift) {
> >>  		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
> >>  				  nbits + shift);
> >> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>  	struct vfio_dma *dma;
> >>  	struct rb_node *n;
> >>  	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> >> -	size_t pgsize = (size_t)1 << pgshift;
> >>  	int ret;
> >>  
> >>  	/*
> >> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>  		 * pages which are marked dirty by vfio_dma_rw()
> >>  		 */
> >>  		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> >> -		vfio_dma_populate_bitmap(dma, pgsize);
> >> +		vfio_dma_populate_bitmap(iommu, dma);
> >>  	}
> >>  	return 0;
> >>  }

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
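The populate-on-clear logic in the quoted patch can be sketched in userspace as follows. This is a simplified model, not the kernel code: `struct dma_model`, `bitmap_set_model` and `populate_bitmap` are hypothetical stand-ins for `struct vfio_dma`, `bitmap_set()` and `vfio_dma_populate_bitmap()`, and the pinned-page tree walk is elided.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

#define BITS_PER_LONG_MODEL (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_WORDS 4

/* Hypothetical stand-in for struct vfio_dma: one dirty bit per page. */
struct dma_model {
	unsigned long bitmap[BITMAP_WORDS];
	size_t size;            /* bytes covered by this mapping */
	bool iommu_mapped;
};

/* Minimal stand-in for the kernel's bitmap_set(): set nbits bits
 * starting at bit index start. */
void bitmap_set_model(unsigned long *map, unsigned start, unsigned nbits)
{
	for (unsigned i = start; i < start + nbits; i++)
		map[i / BITS_PER_LONG_MODEL] |= 1UL << (i % BITS_PER_LONG_MODEL);
}

/* The patch's dispatch: with pinned-page dirty scope, only pinned pages
 * would be marked (the rb-tree walk is elided here); otherwise an
 * iommu-mapped range must be treated as fully dirty. */
void populate_bitmap(struct dma_model *dma, bool pinned_scope, size_t pgsize)
{
	size_t nbits = dma->size / pgsize;

	if (pinned_scope)
		;	/* would walk the pinned-page tree and set those bits only */
	else if (dma->iommu_mapped)
		bitmap_set_model(dma->bitmap, 0, (unsigned)nbits);
}
```

Calling `populate_bitmap` right after each bitmap clear, as the patch does, bakes the scope decision into the bitmap at clear time rather than deferring it to retrieval time.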
 Catalin Marinas, Will Deacon, Keqian Zhu, Marc Zyngier, Joerg Roedel,
 Daniel Lezcano, kvmarm@lists.cs.columbia.edu, wanghaibin.wang@huawei.com,
 Julien Thierry, Suzuki K Poulose, Alexios Zavras, Thomas Gleixner,
 linux-arm-kernel@lists.infradead.org, Cornelia Huck,
 linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
 James Morse, Andrew Morton, Robin Murphy
Content-Type: text/plain; charset="us-ascii"

On Wed, 13 Jan 2021 18:05:43 +0530
Kirti Wankhede wrote:

> On 1/13/2021 2:50 AM, Alex Williamson wrote:
> > On Thu, 7 Jan 2021 17:28:57 +0800
> > Keqian Zhu wrote:
> >
> >> Deferring the check of whether a vfio_dma is fully dirty to
> >> update_user_bitmap makes it easy to lose dirty log. For example,
> >> after the pinned_scope of the vfio_iommu is promoted, a vfio_dma is
> >> no longer considered fully dirty, so we may lose dirty log that was
> >> generated before the vfio_iommu was promoted.
> >>
> >> The key point is that pinned-dirty is not a real dirty tracking
> >> method: it cannot continuously track dirty pages, it only restricts
> >> the dirty scope. It is essentially the same as fully-dirty;
> >> fully-dirty covers the full scope and pinned-dirty covers the
> >> pinned scope.
> >>
> >> So we must mark pages pinned-dirty or fully-dirty right after we
> >> start dirty tracking or clear the dirty bitmap, to ensure that the
> >> dirty log is recorded right away.
> >
> > I was initially convinced by these first three patches, but upon
> > further review, I think the premise is wrong. AIUI, the concern
> > across these patches is that our dirty bitmap is only populated with
> > pages dirtied by pinning, and we only take into account the
> > pinned-page dirty scope at the time the bitmap is retrieved by the
> > user.
> > You suppose this presents a gap: if a vendor driver has not yet
> > identified itself with a page-pinning scope, the entire bitmap
> > should be considered dirty, regardless of whether that driver later
> > pins pages before the user retrieves the dirty bitmap.
> >
> > I don't think this is how we intended the cooperation between the
> > iommu driver and vendor driver to work. By pinning pages, a vendor
> > driver is not declaring that its future dirty-page scope is limited
> > to pinned pages; rather, it is declaring itself a participant in
> > dirty-page tracking and taking responsibility for pinning any
> > necessary pages. For example, we might extend
> > VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking notification
> > to groups, not only to begin dirty tracking, but also to
> > synchronously register their current device DMA footprint. This
> > patch would require a vendor driver to perform a possibly gratuitous
> > page pinning in order to set the scope prior to dirty logging being
> > enabled, or else the initial bitmap will be fully dirty.
> >
> > Therefore, I don't see that this series is necessary or correct.
> > Kirti, does this match your thinking?

> That's correct, Alex, and I agree with you.

> > Thinking about these semantics, it seems there might still be an
> > issue if a group with non-pinned-page dirty scope is detached with
> > dirty logging enabled.

> Hot-unplugging a device while the migration process has started - is
> this scenario supported?

It's not prevented; it would rely on a userspace policy, right? The
kernel should do the right thing regardless. Thanks,

Alex

> > It seems this should in fact fully populate the dirty bitmaps at the
> > time it's removed, since we don't know the extent of its previous
> > DMA, nor will the group be present to trigger the full bitmap when
> > the user retrieves the dirty bitmap.
> > Creating fully populated bitmaps at the time tracking is enabled
> > negates our ability to take advantage of later enlightenment,
> > though. Thanks,
> >
> > Alex
> >
> >> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> >> Signed-off-by: Keqian Zhu
> >> ---
> >>  drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
> >>  1 file changed, 22 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index bceda5e8baaa..b0a26e8e0adf 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
> >>  	dma->bitmap = NULL;
> >>  }
> >>
> >> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> >> +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
> >>  {
> >>  	struct rb_node *p;
> >>  	unsigned long pgshift = __ffs(pgsize);
> >> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> >>  	}
> >>  }
> >>
> >> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
> >> +{
> >> +	unsigned long pgshift = __ffs(pgsize);
> >> +	unsigned long nbits = dma->size >> pgshift;
> >> +
> >> +	bitmap_set(dma->bitmap, 0, nbits);
> >> +}
> >> +
> >> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
> >> +				     struct vfio_dma *dma)
> >> +{
> >> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> >> +
> >> +	if (iommu->pinned_page_dirty_scope)
> >> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
> >> +	else if (dma->iommu_mapped)
> >> +		vfio_dma_populate_bitmap_full(dma, pgsize);
> >> +}
> >> +
> >>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
> >>  {
> >>  	struct rb_node *n;
> >> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
> >>  			}
> >>  			return ret;
> >>  		}
> >> -		vfio_dma_populate_bitmap(dma, pgsize);
> >> +		vfio_dma_populate_bitmap(iommu, dma);
> >>  	}
> >>  	return 0;
> >>  }
> >> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>  	unsigned long shift = bit_offset % BITS_PER_LONG;
> >>  	unsigned long leftover;
> >>
> >> -	/*
> >> -	 * mark all pages dirty if any IOMMU capable device is not able
> >> -	 * to report dirty pages and all pages are pinned and mapped.
> >> -	 */
> >> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> >> -		bitmap_set(dma->bitmap, 0, nbits);
> >> -
> >>  	if (shift) {
> >>  		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
> >>  				  nbits + shift);
> >> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>  	struct vfio_dma *dma;
> >>  	struct rb_node *n;
> >>  	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> >> -	size_t pgsize = (size_t)1 << pgshift;
> >>  	int ret;
> >>
> >>  	/*
> >> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>  	 * pages which are marked dirty by vfio_dma_rw()
> >>  	 */
> >>  	bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> >> -	vfio_dma_populate_bitmap(dma, pgsize);
> >> +	vfio_dma_populate_bitmap(iommu, dma);
> >>  	}
> >>  	return 0;
> >>  }