From: "Philippe Mathieu-Daudé" <philmd@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	QEMU Developers <qemu-devel@nongnu.org>,
	Bug 1880355 <1880355@bugs.launchpad.net>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Laszlo Ersek <lersek@redhat.com>
Subject: Re: [Bug 1880355] [NEW] Length restrictions for fw_cfg_dma_transfer?
Date: Sun, 24 May 2020 16:27:48 +0200
Message-ID: <c8326e26-c529-18c1-4bd2-63a5aec071fb@redhat.com>
In-Reply-To: <CAFEAcA83E33xNjhXvbZr9oe7TO9kMa0nArroCA_mY3zy+0bq2g@mail.gmail.com>

On 5/24/20 3:40 PM, Peter Maydell wrote:
> On Sun, 24 May 2020 at 11:30, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>> It looks to me like normal behavior for a DMA device. DMA devices have a
>> different address space view than the CPUs.
>> Also note that fw_cfg is a generic device, not restricted to the x86 arch.
> 
> In an ideal world all our DMA devices would use some kind of common
> framework or design pattern so they didn't hog all the CPU
> and/or spend minutes with the BQL held if the guest requests
> an enormous DMA. In practice many of them just have
> a simple "loop until the DMA transfer is complete" implementation...

Is this framework already implemented in the (somewhat hidden) dma-helpers.c?

Apparently that file was written for BlockBackend, but the code seems
rather generic.
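
For reference, a minimal self-contained sketch of the chunked-transfer
pattern discussed above: split a guest-controlled length into bounded
pieces instead of one long synchronous loop. Every name in it
(DMA_CHUNK_LEN, guest_memory_set, dma_fill_chunked) is an illustrative
assumption, not QEMU's actual dma-helpers.c API:

/*
 * Hypothetical chunked DMA fill. In a real device model each
 * iteration would be a point to drop the BQL, check for cancellation,
 * or reschedule, so a huge guest request cannot hog the CPU.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DMA_CHUNK_LEN 4096u            /* arbitrary per-iteration bound */

/* Stand-in for dma_memory_set(): fill 'len' bytes of "guest memory". */
static void guest_memory_set(uint8_t *guest_ram, uint64_t addr,
                             uint8_t c, uint64_t len)
{
    memset(guest_ram + addr, c, len);
}

static void dma_fill_chunked(uint8_t *guest_ram, uint64_t addr,
                             uint64_t len)
{
    while (len > 0) {
        uint64_t n = len < DMA_CHUNK_LEN ? len : DMA_CHUNK_LEN;
        guest_memory_set(guest_ram, addr, 0, n);
        addr += n;
        len -= n;
        /* yield / poll / re-check for cancellation here */
    }
}

int main(void)
{
    static uint8_t ram[1 << 20];       /* 1 MiB of pretend guest RAM */
    dma_fill_chunked(ram, 0, sizeof(ram));
    printf("filled %zu bytes in %u-byte chunks\n",
           sizeof(ram), DMA_CHUNK_LEN);
    return 0;
}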



https://bugs.launchpad.net/bugs/1880355

Title:
  Length restrictions for fw_cfg_dma_transfer?

Status in QEMU:
  New

Bug description:
  For me, this takes close to 3 minutes at 100% CPU:
  echo "outl 0x518 0x9596ffff" | ./i386-softmmu/qemu-system-i386 -M q35 -m 32 -nographic -accel qtest -monitor none -serial none -qtest stdio

  #0  phys_page_find (d=0x606000035d80, addr=136728041144404) at /exec.c:338
  #1  address_space_lookup_region (d=0x606000035d80, addr=136728041144404, resolve_subpage=true) at /exec.c:363
  #2  address_space_translate_internal (d=0x606000035d80, addr=136728041144404, xlat=0x7fff1fc0d070, plen=0x7fff1fc0d090, resolve_subpage=true) at /exec.c:382
  #3  flatview_do_translate (fv=0x606000035d20, addr=136728041144404, xlat=0x7fff1fc0d070, plen_out=0x7fff1fc0d090, page_mask_out=0x0, is_write=true, is_mmio=true, target_as=0x7fff1fc0ce10, attrs=...) at /exec.c:520
  #4  flatview_translate (fv=0x606000035d20, addr=136728041144404, xlat=0x7fff1fc0d070, plen=0x7fff1fc0d090, is_write=true, attrs=...) at /exec.c:586
  #5  flatview_write_continue (fv=0x606000035d20, addr=136728041144404, attrs=..., ptr=0x7fff1fc0d660, len=172, addr1=136728041144400, l=172, mr=0x557fd54e77e0 <io_mem_unassigned>) at /exec.c:3160
  #6  flatview_write (fv=0x606000035d20, addr=136728041144064, attrs=..., buf=0x7fff1fc0d660, len=512) at /exec.c:3177
  #7  address_space_write (as=0x557fd54e7a00 <address_space_memory>, addr=136728041144064, attrs=..., buf=0x7fff1fc0d660, len=512) at /exec.c:3271
  #8  dma_memory_set (as=0x557fd54e7a00 <address_space_memory>, addr=136728041144064, c=0 '\000', len=1378422272) at /dma-helpers.c:31
  #9  fw_cfg_dma_transfer (s=0x61a000001e80) at /hw/nvram/fw_cfg.c:400
  #10 fw_cfg_dma_mem_write (opaque=0x61a000001e80, addr=4, value=4294940309, size=4) at /hw/nvram/fw_cfg.c:467
  #11 memory_region_write_accessor (mr=0x61a000002200, addr=4, value=0x7fff1fc0e3d0, size=4, shift=0, mask=4294967295, attrs=...) at /memory.c:483
  #12 access_with_adjusted_size (addr=4, value=0x7fff1fc0e3d0, size=4, access_size_min=1, access_size_max=8, access_fn=0x557fd2288c80 <memory_region_write_accessor>, mr=0x61a000002200, attrs=...) at /memory.c:539
  #13 memory_region_dispatch_write (mr=0x61a000002200, addr=4, data=4294940309, op=MO_32, attrs=...) at /memory.c:1476
  #14 flatview_write_continue (fv=0x606000035f00, addr=1304, attrs=..., ptr=0x7fff1fc0ec40, len=4, addr1=4, l=4, mr=0x61a000002200) at /exec.c:3137
  #15 flatview_write (fv=0x606000035f00, addr=1304, attrs=..., buf=0x7fff1fc0ec40, len=4) at /exec.c:3177
  #16 address_space_write (as=0x557fd54e7bc0 <address_space_io>, addr=1304, attrs=..., buf=0x7fff1fc0ec40, len=4) at /exec.c:3271

  
  It looks like fw_cfg_dma_transfer gets the address (136728041144064) and length (1378422272) for the read from the value provided as input, 4294940309 (0xFFFF9695), which lands in pcbios. Should there be any limit on the length of guest memory that fw_cfg should populate?
  Found by libFuzzer.
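
To make the decode concrete: per docs/specs/fw_cfg.txt, the fw_cfg DMA
address register is big-endian, so the 32-bit value written to port
0x518 is byte-swapped into the guest-physical address of a 16-byte
FWCfgDmaAccess struct (control, length, address, all big-endian). The
sketch below is illustrative, not QEMU's code; the ROM contents are
made up, with the actual values from this report noted in comments:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Big-endian field decoding, as the device model must do. */
static uint32_t be32(const uint8_t *p)
{
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
           (uint32_t)p[2] << 8 | p[3];
}

static uint64_t be64(const uint8_t *p)
{
    return (uint64_t)be32(p) << 32 | be32(p + 4);
}

int main(void)
{
    /* "outl 0x518 0x9596ffff" puts these bytes on ports 0x518..0x51b;
     * read back big-endian they give 0xffff9695 (4294940309), the
     * value seen in frame #10 of the backtrace: */
    const uint8_t port_bytes[4] = { 0xff, 0xff, 0x96, 0x95 };
    uint32_t struct_addr = be32(port_bytes);
    printf("FWCfgDmaAccess fetched from 0x%08" PRIx32 "\n", struct_addr);

    /* Whatever 16 bytes of BIOS ROM sit at that address are then
     * decoded as the request; hypothetical zero bytes here stand in
     * for the ROM data that yielded length=1378422272 and
     * address=136728041144064 in the report: */
    const uint8_t rom[16] = { 0 };
    printf("control=%" PRIu32 " length=%" PRIu32 " address=%" PRIu64 "\n",
           be32(rom), be32(rom + 4), be64(rom + 8));
    return 0;
}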




Thread overview:
2020-05-24  4:12 [Bug 1880355] [NEW] Length restrictions for fw_cfg_dma_transfer? Alexander Bulekov
2020-05-24 10:30 ` Philippe Mathieu-Daudé
2020-05-24 13:40   ` Peter Maydell
2020-05-24 14:27     ` Philippe Mathieu-Daudé [this message]
2020-05-25 13:59       ` Paolo Bonzini
2020-05-25  9:14     ` Mark Cave-Ayland
2021-06-10 15:50 ` [Bug 1880355] " Thomas Huth
2021-06-14 23:35 ` Alexander Bulekov
2021-06-15 18:17 ` Thomas Huth
