From: Alexey Kardashevskiy
Date: Tue, 14 Jul 2015 00:56:21 +1000
Message-Id: <1436799381-16150-6-git-send-email-aik@ozlabs.ru>
In-Reply-To: <1436799381-16150-1-git-send-email-aik@ozlabs.ru>
References: <1436799381-16150-1-git-send-email-aik@ozlabs.ru>
Subject: [Qemu-devel] [RFC PATCH qemu v2 5/5] vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
To: qemu-devel@nongnu.org
Cc: Alexey Kardashevskiy, Alex Williamson, qemu-ppc@nongnu.org, Michael Roth, David Gibson

This makes use of the new "memory registering" feature. The idea is
to give userspace the ability to notify the host kernel about pages
which are going to be used for DMA. With this information, the host
kernel can pin them all once per user process and do the locked-pages
accounting once, instead of spending time on it at DMA map time, where
failures cannot always be handled nicely.

This adds a guest RAM memory listener which notifies the VFIO container
about memory which needs to be pinned/unpinned. VFIO MMIO regions
(i.e. "skip dump" regions) are skipped.

This does not reuse the existing (unfiltered) listener: at the time of
listener registration we know exactly what we want to listen on, while
doing the filtering from inside the listener is tricky as it will also
be called for IBM VIO address spaces (in addition to PCI and RAM).

The feature is only enabled for SPAPR IOMMU v2; host kernel changes are
required. Since v2 does not need/support VFIO_IOMMU_ENABLE, this does
not call it when v2 is detected and enabled.

This does not change the guest-visible interface.
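For reference, the userspace side of the preregistration boils down to a
single VFIO_IOMMU_SPAPR_REGISTER_MEMORY ioctl per chunk of RAM. A rough
sketch, not part of this patch: it assumes the pending host kernel
headers, a container fd already set up for VFIO_SPAPR_TCE_v2_IOMMU, and
uses illustrative names (spapr_prereg, buf, len):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Preregister one chunk of RAM so the host kernel can pin it and do the
 * locked pages accounting once, before any DMA mapping happens. */
static int spapr_prereg(int container, void *buf, uint64_t len)
{
    struct vfio_iommu_spapr_register_memory reg = {
        .argsz = sizeof(reg),
        .flags = 0,
        .vaddr = (uint64_t)(uintptr_t)buf,
        .size = len,
    };

    /* 0 on success; VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY undoes this */
    return ioctl(container, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
}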
Signed-off-by: Alexey Kardashevskiy
---
Changes:
v2:
* added another listener for RAM

Changelog from the times when this was a part of the DDW patchset:

v11:
* merged register_listener into vfio_memory_listener

v9:
* since there is no more SPAPR-specific data in container::iommu_data,
  the memory preregistration fields are common and potentially can be
  used by other architectures

v7:
* in vfio_spapr_ram_listener_region_del(), do unref() after ioctl()
* s'ramlistener'register_listener'

v6:
* fixed commit log (s/guest/userspace/), added note about no guest
  visible change
* fixed error checking if ram registration failed
* added alignment check for section->offset_within_region

v5:
* simplified the patch
* added trace points
* added round_up() for the size
* SPAPR IOMMU v2 used
---
 hw/vfio/common.c              | 92 +++++++++++++++++++++++++++++++++++++++----
 include/hw/vfio/vfio-common.h |  1 +
 trace-events                  |  2 +
 3 files changed, 87 insertions(+), 8 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index daff0d9..d8ce04a 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -409,6 +409,19 @@ static void vfio_listener_region_do_add(VFIOContainer *container,
             goto error_exit;
         }
         break;
+
+    case VFIO_SPAPR_TCE_v2_IOMMU: {
+        struct vfio_iommu_spapr_register_memory reg = {
+            .argsz = sizeof(reg),
+            .flags = 0,
+            .vaddr = (uint64_t) vaddr,
+            .size = end - iova
+        };
+
+        ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
+        trace_vfio_ram_register(reg.vaddr, reg.size, ret ? -errno : 0);
+        break;
+    }
     }
 
     return;
@@ -491,6 +504,26 @@ static void vfio_listener_region_do_del(VFIOContainer *container,
                      "0x%"HWADDR_PRIx") = %d (%m)",
                      container, iova, end - iova, ret);
     }
+
+    switch (container->iommu_data.type) {
+    case VFIO_SPAPR_TCE_v2_IOMMU:
+        if (!is_iommu) {
+            void *vaddr = memory_region_get_ram_ptr(section->mr) +
+                section->offset_within_region +
+                (iova - section->offset_within_address_space);
+            struct vfio_iommu_spapr_register_memory reg = {
+                .argsz = sizeof(reg),
+                .flags = 0,
+                .vaddr = (uint64_t) vaddr,
+                .size = end - iova
+            };
+
+            ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY,
+                        &reg);
+            trace_vfio_ram_unregister(reg.vaddr, reg.size, ret ? -errno : 0);
+        }
+        break;
+    }
 }
 
 static void vfio_listener_region_add(MemoryListener *listener,
@@ -517,8 +550,35 @@ static const MemoryListener vfio_memory_listener = {
     .region_del = vfio_listener_region_del,
 };
 
+static void vfio_spapr_ram_listener_region_add(MemoryListener *listener,
+                                               MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            iommu_data.spapr.ram_listener);
+
+    vfio_listener_region_do_add(container, listener, section);
+}
+
+
+static void vfio_spapr_ram_listener_region_del(MemoryListener *listener,
+                                               MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            iommu_data.spapr.ram_listener);
+
+    vfio_listener_region_do_del(container, listener, section);
+}
+
+static const MemoryListener vfio_spapr_ram_listener = {
+    .region_add = vfio_spapr_ram_listener_region_add,
+    .region_del = vfio_spapr_ram_listener_region_del,
+};
+
 static void vfio_listener_release(VFIOContainer *container)
 {
+    if (container->iommu_data.type == VFIO_SPAPR_TCE_v2_IOMMU) {
+        memory_listener_unregister(&container->iommu_data.spapr.ram_listener);
+    }
     memory_listener_unregister(&container->iommu_data.type1.listener);
 }
 
@@ -732,14 +792,18 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
         container->iommu_data.type1.initialized = true;
 
-    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
+    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU) ||
+               ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU)) {
+        bool v2 = !!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU);
+
         ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
         if (ret) {
             error_report("vfio: failed to set group container: %m");
             ret = -errno;
             goto free_container_exit;
         }
 
-        container->iommu_data.type = VFIO_SPAPR_TCE_IOMMU;
+        container->iommu_data.type =
+            v2 ? VFIO_SPAPR_TCE_v2_IOMMU : VFIO_SPAPR_TCE_IOMMU;
         ret = ioctl(fd, VFIO_SET_IOMMU, container->iommu_data.type);
         if (ret) {
             error_report("vfio: failed to set iommu for container: %m");
@@ -752,18 +816,30 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
          * when container fd is closed so we do not call it explicitly
          * in this file.
          */
-        ret = ioctl(fd, VFIO_IOMMU_ENABLE);
-        if (ret) {
-            error_report("vfio: failed to enable container: %m");
-            ret = -errno;
-            goto free_container_exit;
+        if (!v2) {
+            ret = ioctl(fd, VFIO_IOMMU_ENABLE);
+            if (ret) {
+                error_report("vfio: failed to enable container: %m");
+                ret = -errno;
+                goto free_container_exit;
+            }
         }
 
         container->iommu_data.spapr.common.listener = vfio_memory_listener;
         container->iommu_data.release = vfio_listener_release;
-        memory_listener_register(&container->iommu_data.spapr.common.listener, container->space->as);
+        if (v2) {
+            container->iommu_data.spapr.ram_listener = vfio_spapr_ram_listener;
+            memory_listener_register(&container->iommu_data.spapr.ram_listener,
+                                     &address_space_memory);
+            if (container->iommu_data.spapr.common.error) {
+                error_report("vfio: RAM memory listener initialization failed for container");
+                goto listener_release_exit;
+            }
+
+            container->iommu_data.spapr.common.initialized = true;
+        }
     } else {
         error_report("vfio: No available IOMMU models");
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 135ea64..8edd572 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -72,6 +72,7 @@ typedef struct VFIOType1 {
 
 typedef struct VFIOSPAPR {
     VFIOType1 common;
+    MemoryListener ram_listener;
 } VFIOSPAPR;
 
 typedef struct VFIOContainer {
diff --git a/trace-events b/trace-events
index d24d80a..f859ad0 100644
--- a/trace-events
+++ b/trace-events
@@ -1582,6 +1582,8 @@ vfio_disconnect_container(int fd) "close container->fd=%d"
 vfio_put_group(int fd) "close group->fd=%d"
 vfio_get_device(const char * name, unsigned int flags, unsigned int num_regions, unsigned int num_irqs) "Device %s flags: %u, regions: %u, irqs: %u"
 vfio_put_base_device(int fd) "close vdev->fd=%d"
+vfio_ram_register(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
+vfio_ram_unregister(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
 
 # hw/vfio/platform.c
 vfio_platform_populate_regions(int region_index, unsigned long flag, unsigned long size, int fd, unsigned long offset) "- region %d flags = 0x%lx, size = 0x%lx, fd= %d, offset = 0x%lx"
-- 
2.4.0.rc3.8.gfb3e7d5