From: Igor Mammedov <imammedo@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: pbonzini@redhat.com, qemu-devel@nongnu.org, peter.maydell@linaro.org
Subject: Re: [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on
Date: Thu, 16 Jul 2015 11:42:36 +0200	[thread overview]
Message-ID: <20150716114236.67615189@nial.brq.redhat.com> (raw)
In-Reply-To: <20150716103150-mutt-send-email-mst@redhat.com>

On Thu, 16 Jul 2015 10:35:33 +0300
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Jul 16, 2015 at 09:26:21AM +0200, Igor Mammedov wrote:
> > On Wed, 15 Jul 2015 19:32:31 +0300
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Wed, Jul 15, 2015 at 05:12:01PM +0200, Igor Mammedov wrote:
> > > > On Thu,  9 Jul 2015 13:47:17 +0200
> > > > Igor Mammedov <imammedo@redhat.com> wrote:
> > > > 
> > > > There is also yet another issue with vhost-user: it has a
> > > > very low limit on the number of memory regions (8, if I recall
> > > > correctly), and it's possible to trigger it even without memory
> > > > hotplug. One just needs to start QEMU with several -numa memdev=
> > > > options to create enough memory regions to hit the limit.
> > > > 
> > > > A low-risk option to fix it would be increasing the limit in
> > > > the vhost-user backend.
> > > > 
> > > > Another option is disabling vhost and falling back to virtio,
> > > > but I don't know enough about vhost to say whether it's possible
> > > > to switch it off without losing packets the guest was sending
> > > > at that moment, or whether it would work at all.
> > > 
> > > With vhost-user you can't fall back to virtio: it's
> > > not an accelerator, it's the backend.
> > > 
> > > Updating the protocol to support a bigger table
> > > is possible but old remotes won't be able to support it.
> > > 
> > It looks like increasing the limit is the only option left.
> > 
> > It's not ideal that old remotes (with a hardcoded limit)
> > might not be able to handle a bigger table, but at least
> > new ones, and ones that handle the VhostUserMsg payload
> > dynamically, would be able to work without crashing.
> 
> I think we need a way for hotplug to fail gracefully.  As long as we
> don't implement the hva trick, it's needed for old kernels with vhost in
> kernel, too.
I don't see a reliable way to fail hotplug gracefully, though.

In the hotplug case the failure path comes from a memory listener,
which by design cannot fail; yet in the vhost case it does fail,
i.e. the vhost side doesn't follow the protocol.

We have already considered the idea of querying vhost for its limit
from the memory hotplug handler, before mapping the memory region,
but it has drawbacks:
 1. The number of memory ranges changes during the guest's lifecycle
    as it initializes different devices, which leads to a case where
    we can hotplug more pc-dimms than we can cold-plug. That in turn
    makes it impossible to migrate a guest with hotplugged pc-dimms,
    since the target QEMU won't start with that number of dimms from
    the source, due to hitting the limit.
 2. From a modeling point of view it's an ugly hack to query a random
    'vhost' entity when plugging a dimm device, but we can live with
    it if it helps QEMU not to crash.

If it's acceptable to break/ignore issue #1, I can post the related
QEMU patches that I have; at least QEMU won't crash with old
vhost backends.
