Date: Fri, 10 Jul 2015 12:12:36 +0200
From: Igor Mammedov
To: "Michael S. Tsirkin"
Cc: Paolo Bonzini, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v4 4/7] pc: fix QEMU crashing when more than ~50 memory hotplugged
Message-ID: <20150710121236.172d59e9@nial.brq.redhat.com>
In-Reply-To: <20150709164336-mutt-send-email-mst@redhat.com>

On Thu, 9 Jul 2015 16:46:43 +0300
"Michael S. Tsirkin" wrote:

> On Thu, Jul 09, 2015 at 03:43:01PM +0200, Paolo Bonzini wrote:
> >
> > On 09/07/2015 15:06, Michael S. Tsirkin wrote:
> > > > QEMU asserts in vhost due to hitting vhost backend limit
> > > > on number of supported memory regions.
> > > >
> > > > Describe all hotplugged memory as one continuous range
> > > > to vhost with linear 1:1 HVA->GPA mapping in backend.
> > > >
> > > > Signed-off-by: Igor Mammedov
> > >
> > > Hmm - a bunch of work here to recombine MRs that memory listener
> > > interface breaks up. In particular KVM could benefit from this too (on
> > > workloads that change the table a lot). Can't we teach memory core to
> > > pass hva range as a single continuous range to memory listeners?
> >
> > Memory listeners are based on memory regions, not HVA ranges.
> >
> > Paolo
>
> Many listeners care about HVA ranges. I know KVM and vhost do.
I'm not sure about KVM; it works just fine with fragmented memory
regions, and the same will apply to vhost once the module parameter to
increase the region limit is merged.

But changing the generic memory listener interface to replace
HVA-mapped regions with a single HVA container would mean listeners no
longer see the exact layout, which some of them might need.

In addition, vhost itself would suffer from working with one big HVA
range, since it allocates its log depending on the size of memory =>
a bigger log.

That's one of the reasons why in this patch HVA ranges in the memory
map are compacted only for backend consumption; QEMU's side of vhost
uses the exact map for internal purposes. The other reason is that I
don't know vhost well enough to rewrite it to use a big HVA range for
everything.

> I guess we could create dummy MRs to fill in the holes left by
> memory hotplug?
That looks like a nice thing from vhost's point of view, but it
complicates the other side, hence I dislike the idea of inventing dummy
MRs just for vhost's convenience.

> vhost already has logic to recombine
> consecutive chunks created by memory core.
That logic looks a bit complicated, and I was thinking about
simplifying it some time in the future.
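
For illustration only, the recombination being discussed amounts to
something like the sketch below. This is not the actual QEMU/vhost
code: struct mem_region mirrors the layout of the vhost UAPI's
struct vhost_memory_region (minus the padding field),
merge_contiguous() is a hypothetical helper name, and the input array
is assumed to be sorted by guest physical address.

#include <stdint.h>
#include <stddef.h>

struct mem_region {
    uint64_t guest_phys_addr;  /* GPA where the range starts */
    uint64_t memory_size;      /* length of the range in bytes */
    uint64_t userspace_addr;   /* HVA the GPA range maps to */
};

/*
 * Merge neighbouring entries whose ranges are adjacent in both
 * guest-physical and host-virtual address space, i.e. where the
 * combined range still has a linear 1:1 HVA->GPA mapping.
 * Returns the new number of entries.
 */
static size_t merge_contiguous(struct mem_region *regions, size_t n)
{
    size_t out = 0;

    for (size_t i = 1; i < n; i++) {
        struct mem_region *prev = &regions[out];
        struct mem_region *cur = &regions[i];

        if (prev->guest_phys_addr + prev->memory_size ==
                cur->guest_phys_addr &&
            prev->userspace_addr + prev->memory_size ==
                cur->userspace_addr) {
            /* still linear: extend the previous entry, drop this one */
            prev->memory_size += cur->memory_size;
        } else {
            /* gap or non-linear mapping: keep as a separate entry */
            regions[++out] = *cur;
        }
    }
    return n ? out + 1 : 0;
}

The trade-off mentioned above also falls out of this picture: the more
aggressively entries are compacted, the bigger the resulting range, and
anything sized from it (the vhost log, per the point above) grows with
it.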