From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 13 Jul 2015 09:55:18 +0300
From: "Michael S. Tsirkin"
To: Igor Mammedov
Cc: Paolo Bonzini, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v4 4/7] pc: fix QEMU crashing when more than ~50 memory hotplugged
Message-ID: <20150713095252-mutt-send-email-mst@redhat.com>
In-Reply-To: <20150710121236.172d59e9@nial.brq.redhat.com>
References: <1436442444-132020-1-git-send-email-imammedo@redhat.com>
 <1436442444-132020-5-git-send-email-imammedo@redhat.com>
 <20150709155919-mutt-send-email-mst@redhat.com>
 <559E7A65.6080908@redhat.com>
 <20150709164336-mutt-send-email-mst@redhat.com>
 <20150710121236.172d59e9@nial.brq.redhat.com>

On Fri, Jul 10, 2015 at 12:12:36PM +0200, Igor Mammedov wrote:
> On Thu, 9 Jul 2015 16:46:43 +0300
> "Michael S. Tsirkin" wrote:
>
> > On Thu, Jul 09, 2015 at 03:43:01PM +0200, Paolo Bonzini wrote:
> > >
> > > On 09/07/2015 15:06, Michael S. Tsirkin wrote:
> > > > > QEMU asserts in vhost due to hitting the vhost backend limit
> > > > > on the number of supported memory regions.
> > > > >
> > > > > Describe all hotplugged memory as one continuous range
> > > > > to vhost, with a linear 1:1 HVA->GPA mapping in the backend.
> > > > >
> > > > > Signed-off-by: Igor Mammedov
> > > >
> > > > Hmm - a bunch of work here to recombine MRs that the memory listener
> > > > interface breaks up. In particular KVM could benefit from this too (on
> > > > workloads that change the table a lot). Can't we teach the memory core to
> > > > pass an HVA range as a single continuous range to memory listeners?
> > >
> > > Memory listeners are based on memory regions, not HVA ranges.
> > >
> > > Paolo
> >
> > Many listeners care about HVA ranges. I know KVM and vhost do.
> I'm not sure about KVM; it works just fine with fragmented memory regions,
> and the same will apply to vhost once the module parameter to increase the
> limit is merged.
>
> But changing the generic memory listener interface to replace HVA-mapped
> regions with an HVA container would lead to a case where listeners
> won't see the exact layout that they might need.

I don't think they care, really.

> In addition, vhost itself will suffer from working with a big HVA,
> since it allocates the log depending on the size of memory => bigger log.

Not really - it allocates the log depending on the PA range.
Leaving unused holes doesn't reduce its size.
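To put the same point in code - a rough sketch of the idea behind the log
sizing, not the actual hw/virtio/vhost.c logic (the names here are made up):
the log is sized by the highest guest-physical address any region reaches,
so sparse layouts pay for their holes.

#include <stdint.h>
#include <stddef.h>

#define LOG_PAGE_SIZE 0x1000ULL   /* one log bit covers one 4 KiB page */
#define BITS_PER_U64  64

struct region {
    uint64_t gpa;    /* guest-physical start */
    uint64_t size;   /* length in bytes */
};

/* Log size in uint64_t words: driven by the highest GPA any region
 * reaches, so holes between regions still consume log bits. */
static uint64_t log_size_u64s(const struct region *r, size_t n)
{
    uint64_t last = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t end = r[i].gpa + r[i].size;
        if (end > last)
            last = end;
    }
    return (last / LOG_PAGE_SIZE + BITS_PER_U64 - 1) / BITS_PER_U64;
}

E.g. two regions at [0, 1G) and [255G, 256G) need the same size log as one
solid [0, 256G) region.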
> That's one of the reasons that in this patch HVA ranges in the
> memory map are compacted only for backend consumption;
> QEMU's side of vhost uses the exact map for internal purposes.
> And the other reason is that I don't know vhost well enough to rewrite
> it to use a big HVA for everything.
>
> > I guess we could create dummy MRs to fill in the holes left by
> > memory hotplug?
> It looks like a nice thing from the vhost POV, but it complicates the
> other side,

What other side do you have in mind?

> hence I dislike the idea of inventing dummy MRs for vhost's convenience.
>
> > vhost already has logic to recombine
> > consecutive chunks created by the memory core.
> which looks a bit complicated, and I was thinking about simplifying
> it some time in the future.
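For reference, the recombining in question boils down to merging neighbours
that are contiguous in both address spaces - a simplified sketch with
hypothetical types, not the actual vhost code:

#include <stdint.h>
#include <stdbool.h>

struct chunk {
    uint64_t gpa;   /* guest-physical start */
    uint64_t hva;   /* host-virtual start */
    uint64_t size;
};

/* Two chunks can be folded into one only if they are adjacent in BOTH
 * address spaces, i.e. the linear GPA->HVA mapping continues across
 * the boundary. */
static bool try_merge(struct chunk *a, const struct chunk *b)
{
    if (a->gpa + a->size != b->gpa || a->hva + a->size != b->hva)
        return false;
    a->size += b->size;
    return true;
}

The real code also has to handle removals and partial overlaps, which is
where the complexity Igor mentions comes from.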