From: Igor Mammedov <imammedo@redhat.com>
Date: Thu, 9 Jul 2015 13:47:17 +0200
Message-Id: <1436442444-132020-1-git-send-email-imammedo@redhat.com>
Subject: [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, mst@redhat.com

Changelog:
v3->v4:
  * drop the patch extending memory_region_subregion_add() with an error
    argument and add a memory_region_add_subregion_to_hva() API instead
  * madvise(DONTNEED) the range when returning it to the HVA container
v2->v3:
  * fixed (worked around) unmapping issues: the memory subsystem now keeps
    track of HVA-mapped regions and doesn't allow mapping a new region at
    an address where a previous one was mapped until that region is gone
  * fixed offset calculations in memory_region_find_hva_range() in 2/8
  * redid MemorySection folding into an HVA range for vhost: the compacted
    memory map is now temporary, passed only to the vhost backend, and
    doesn't touch the original memory map used by QEMU
v1->v2:
  * took Paolo's review comments into account
  * do not overload ram_addr
  * ifdef Linux-specific code
  * reserve HVA using an API from exec.c instead of calling mmap() directly
    from memory.c
  * support unmapping of the HVA-remapped region

When more than ~50 pc-dimm devices are hotplugged with vhost enabled, QEMU
asserts in vhost_commit() because the backend refuses to accept that many
memory ranges. This series introduces a reserved-HVA MemoryRegion container
into which all hotplugged memory is remapped, and passes that single
container range to vhost instead of a separate memory range for each
hotplugged pc-dimm device. It's an alternative approach to raising the
backend's supported memory regions limit.

Tested it a bit more, so now:
  - migration from current master to the patched version seems to work
  - memory is returned to the host after a device_del+object_del sequence,
    though I can't promise that cgroups won't still charge for it
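To make the approach concrete, below is a rough standalone sketch of the
reserved-HVA idea: reserve one large virtual range up front, map each
hotplugged DIMM's RAM at a fixed offset inside it (so vhost only ever sees
the one container range), and release pages with madvise(MADV_DONTNEED) on
unplug. The function names (hva_reserve, hva_map_block, hva_unmap_block)
are invented for illustration; they are not the APIs this series adds.

  /* Illustration only; Linux-specific, like the series itself. */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  /* Reserve a virtual address range without backing it with memory. */
  static void *hva_reserve(size_t size)
  {
      void *base = mmap(NULL, size, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
      return base == MAP_FAILED ? NULL : base;
  }

  /* Map anonymous RAM for a hotplugged DIMM at a fixed offset inside
   * the reserved range, keeping the container contiguous in HVA space. */
  static void *hva_map_block(void *base, size_t offset, size_t size)
  {
      void *p = mmap((char *)base + offset, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
      return p == MAP_FAILED ? NULL : p;
  }

  /* On unplug: drop the pages and make the range inaccessible again. */
  static int hva_unmap_block(void *base, size_t offset, size_t size)
  {
      void *p = (char *)base + offset;
      if (madvise(p, size, MADV_DONTNEED) < 0) {
          return -1;
      }
      return mprotect(p, size, PROT_NONE);
  }

  int main(void)
  {
      size_t container = (size_t)1 << 32;  /* e.g. 4G hotpluggable space */
      size_t dimm = (size_t)128 << 20;     /* one 128M pc-dimm */
      void *base = hva_reserve(container);

      if (!base || !hva_map_block(base, 0, dimm)) {
          perror("mmap");
          return EXIT_FAILURE;
      }
      /* vhost would be told about [base, base + container) exactly once,
       * no matter how many DIMMs get plugged in later. */
      hva_unmap_block(base, 0, dimm);
      munmap(base, container);
      return EXIT_SUCCESS;
  }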
git branch for testing:
  https://github.com/imammedo/qemu/commits/vhost_one_hp_range_v4

Igor Mammedov (7):
  memory: get rid of memory_region_destructor_ram_from_ptr()
  memory: introduce MemoryRegion container with reserved HVA range
  pc: reserve hotpluggable memory range with memory_region_init_hva_range()
  pc: fix QEMU crashing when more than ~50 memory hotplugged
  exec: make sure that RAMBlock descriptor won't be leaked
  exec: add qemu_ram_unmap_hva() API for unmapping memory from HVA area
  memory: add support for deleting HVA mapped MemoryRegion

 exec.c                    |  71 +++++++++++++++++++----------
 hw/i386/pc.c              |   4 +-
 hw/mem/pc-dimm.c          |   6 ++-
 hw/virtio/vhost.c         |  47 ++++++++++++++++++--
 include/exec/cpu-common.h |   3 ++
 include/exec/memory.h     |  67 +++++++++++++++++++++++++++-
 include/exec/ram_addr.h   |   1 -
 include/hw/virtio/vhost.h |   1 +
 memory.c                  | 111 +++++++++++++++++++++++++++++++++++++++++++---
 9 files changed, 272 insertions(+), 39 deletions(-)

-- 
1.8.3.1