From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, pbonzini@redhat.com, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on
Date: Wed, 15 Jul 2015 17:12:01 +0200	[thread overview]
Message-ID: <20150715171201.05f6bc0d@nial.brq.redhat.com> (raw)
In-Reply-To: <1436442444-132020-1-git-send-email-imammedo@redhat.com>

On Thu,  9 Jul 2015 13:47:17 +0200
Igor Mammedov <imammedo@redhat.com> wrote:

there is also yet another issue with vhost-user: it has a
very low limit on the number of memory regions (8, if I recall
correctly), and it's possible to trigger it even without memory
hotplug. One just needs to start QEMU with several -numa memdev=
options to create enough memory regions to hit the limit.
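As a rough illustration of the failure mode (the constant name and the
value 8 are recalled from the vhost-user code and may differ; the
function below is illustrative, not QEMU's actual API), the backend
simply refuses a memory map with more entries than its fixed-size table:

```c
#include <stddef.h>

/* Assumed limit: vhost-user historically used a small fixed region
 * table (VHOST_MEMORY_MAX_NREGIONS, reportedly 8). */
#define MAX_NREGIONS 8

/* Returns 0 if the backend would accept the memory map,
 * -1 if it would refuse it for having too many regions. */
static int check_mem_map(size_t nregions)
{
    return nregions <= MAX_NREGIONS ? 0 : -1;
}
```

So boot RAM plus a handful of memdev-backed NUMA nodes already brings
the region count right up against such a limit.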

A low-risk option to fix it would be increasing the limit in the
vhost-user backend.

Another option is disabling vhost and falling back to virtio,
but I don't know vhost well enough to say whether it's possible
to switch it off without losing packets the guest was sending
at that moment, or whether that would work at all.



> Changelog:
>  v3->v4:
>    * drop patch extending memory_region_subregion_add()
>      with error argument
>    * and add memory_region_add_subregion_to_hva() API instead
>    * add madvise(DONTNEED) when returning range to HVA container
>  v2->v3:
>    * fixed (worked around) unmapping issues:
>      the memory subsystem now keeps track of HVA-mapped
>      regions and doesn't allow mapping a new region
>      at an address where a previous one has been mapped
>      until the previous region is gone
>    * fixed offset calculations in memory_region_find_hva_range()
>      in 2/8
>    * redone MemorySection folding into HVA range for VHOST,
>      now compacted memory map is temporary and passed only to vhost
>      backend and doesn't touch original memory map used by QEMU
>  v1->v2:
>    * take into account Paolo's review comments
>      * do not overload ram_addr
>      * ifdef linux specific code
>    * reserve HVA using API from exec.c instead of calling
>      mmap() directly from memory.c
>    * support unmapping of HVA remapped region
> 
> When more than ~50 pc-dimm devices are hotplugged with
> vhost enabled, QEMU will assert in vhost_commit()
> due to the backend refusing to accept too many memory ranges.
> 
> The series introduces a reserved HVA MemoryRegion container
> into which all hotplugged memory is remapped, and passes
> that single container range to vhost instead of a separate
> memory range for each hotplugged pc-dimm device.
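The core idea can be sketched as follows (types and names are
illustrative, not QEMU's actual API): instead of reporting one
(gpa, hva, size) triple per DIMM, fold every region that lives inside
the reserved HVA container into a single triple covering them all.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t gpa;   /* guest physical address */
    uint64_t hva;   /* host virtual address   */
    uint64_t size;
} MemRange;

/* Fold all regions whose HVA falls inside the reserved container
 * [base, base+csize) into one range spanning them. Regions outside
 * the container are ignored here; a real memory map would keep them
 * as separate entries. */
static MemRange fold_into_container(const MemRange *r, size_t n,
                                    uint64_t base, uint64_t csize)
{
    MemRange out = { 0, 0, 0 };
    uint64_t lo = UINT64_MAX, hi = 0;

    for (size_t i = 0; i < n; i++) {
        if (r[i].hva >= base && r[i].hva + r[i].size <= base + csize) {
            if (r[i].hva < lo) {
                lo = r[i].hva;
                out.gpa = r[i].gpa;   /* gpa of the lowest-HVA region */
            }
            if (r[i].hva + r[i].size > hi)
                hi = r[i].hva + r[i].size;
        }
    }
    if (hi > lo) {
        out.hva = lo;
        out.size = hi - lo;
    }
    return out;
}
```

With this folding, the backend sees one region for the whole hotplug
area no matter how many DIMMs are plugged, which is what keeps the
region count below the backend's limit.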
> 
> It's an alternative approach to increasing backend supported
> memory regions limit. 
> 
> Tested it a bit more, so now:
>  - migration from current master to the patched version seems to work
>  - memory is returned to the host after a device_del+object_del sequence,
>    but I can't guarantee that cgroups won't still charge for it.
> 
> git branch for testing:
>   https://github.com/imammedo/qemu/commits/vhost_one_hp_range_v4
> 
> 
> Igor Mammedov (7):
>   memory: get rid of memory_region_destructor_ram_from_ptr()
>   memory: introduce MemoryRegion container with reserved HVA range
>   pc: reserve hotpluggable memory range with
>     memory_region_init_hva_range()
>   pc: fix QEMU crashing when more than ~50 memory hotplugged
>   exec: make sure that RAMBlock descriptor won't be leaked
>   exec: add qemu_ram_unmap_hva() API for unmapping memory from HVA area
>   memory: add support for deleting HVA mapped MemoryRegion
> 
>  exec.c                    |  71 +++++++++++++++++++----------
>  hw/i386/pc.c              |   4 +-
>  hw/mem/pc-dimm.c          |   6 ++-
>  hw/virtio/vhost.c         |  47 ++++++++++++++++++--
>  include/exec/cpu-common.h |   3 ++
>  include/exec/memory.h     |  67 +++++++++++++++++++++++++++-
>  include/exec/ram_addr.h   |   1 -
>  include/hw/virtio/vhost.h |   1 +
>  memory.c                  | 111 +++++++++++++++++++++++++++++++++++++++++++---
>  9 files changed, 272 insertions(+), 39 deletions(-)
> 


Thread overview: 24+ messages
2015-07-09 11:47 [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 1/7] memory: get rid of memory_region_destructor_ram_from_ptr() Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 2/7] memory: introduce MemoryRegion container with reserved HVA range Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 3/7] pc: reserve hotpluggable memory range with memory_region_init_hva_range() Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 4/7] pc: fix QEMU crashing when more than ~50 memory hotplugged Igor Mammedov
2015-07-09 13:06   ` Michael S. Tsirkin
2015-07-09 13:43     ` Paolo Bonzini
2015-07-09 13:46       ` Michael S. Tsirkin
2015-07-10 10:12         ` Igor Mammedov
2015-07-13  6:55           ` Michael S. Tsirkin
2015-07-13 18:55             ` Igor Mammedov
2015-07-13 20:14               ` Michael S. Tsirkin
2015-07-14 13:02                 ` Igor Mammedov
2015-07-14 13:14                   ` Michael S. Tsirkin
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 5/7] exec: make sure that RAMBlock descriptor won't be leaked Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 6/7] exec: add qemu_ram_unmap_hva() API for unmapping memory from HVA area Igor Mammedov
2015-07-09 11:47 ` [Qemu-devel] [PATCH v4 7/7] memory: add support for deleting HVA mapped MemoryRegion Igor Mammedov
2015-07-15 15:12 ` Igor Mammedov [this message]
2015-07-15 16:32   ` [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on Michael S. Tsirkin
2015-07-16  7:26     ` Igor Mammedov
2015-07-16  7:35       ` Michael S. Tsirkin
2015-07-16  9:42         ` Igor Mammedov
2015-07-16 10:24           ` Michael S. Tsirkin
2015-07-16 11:11             ` Igor Mammedov
