All the mail mirrored from lore.kernel.org
From: Li Qiang <liq3ea@gmail.com>
To: Jason Wang <jasowang@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Bug 1886362 <1886362@bugs.launchpad.net>,
	Qemu Developers <qemu-devel@nongnu.org>
Subject: Re: [Bug 1886362] [NEW] Heap use-after-free in lduw_he_p through e1000e_write_to_rx_buffers
Date: Tue, 14 Jul 2020 18:48:33 +0800	[thread overview]
Message-ID: <CAKXe6SJb=1YLLGF+TP1fMd95k_WzZd73JeUJv5Sn4EE4m2zP4w@mail.gmail.com> (raw)
In-Reply-To: <e4a34525-dbd1-1f85-475b-b5004885215b@redhat.com>

Jason Wang <jasowang@redhat.com> wrote on Tue, Jul 14, 2020 at 4:56 PM:
>
>
> > On 2020/7/10 6:37 PM, Li Qiang wrote:
> > > Paolo Bonzini <pbonzini@redhat.com> wrote on Fri, Jul 10, 2020 at 1:36 AM:
> >> On 09/07/20 17:51, Li Qiang wrote:
> >>> Maybe we should check whether the address is a RAM address in 'dma_memory_rw'?
> >>> But it is a hot path. I'm not sure it is right. Hope more discussion.
> >> Half of the purpose of dma-helpers.c (as opposed to address_space_*
> >> functions in exec.c) is exactly to support writes to MMIO.  This is
> > Hi Paolo,
> >
> > Could you please explain more about this(to support writes to MMIO).
> > I can just see the dma helpers with sg DMA, not related with MMIO.
>
>
> Please refer to docs/devel/memory.rst.
>
> The motivation of the memory API is to support modeling different
> kinds of memory regions. DMA to MMIO is allowed in hardware, so QEMU
> should emulate this behaviour.
>

I just read the code again.
So dma_blk_io is used for devices that may need DMA to MMIO (perhaps
depending on the device spec), but for most devices (a network card,
for example) there is no need for DMA to MMIO, so they just use
dma_memory_rw. Is this understanding right?

Then another question: though the DMA helpers use a bounce buffer, they
finally write to the device address space in 'address_space_unmap'.
Is there any possibility that we can again write to MMIO, as in this issue?


>
> >
> >
> >> especially true of dma_blk_io, which takes care of doing the DMA via a
> >> bounce buffer, possibly in multiple steps and even blocking due to
> >> cpu_register_map_client.
> >>
> >> For dma_memory_rw this is not needed, so it only needs to handle
> >> QEMUSGList, but I think the design should be the same.
> >>
> >> However, this is indeed a nightmare for re-entrancy.  The easiest
> >> solution is to delay processing of descriptors to a bottom half whenever
> >> MMIO is doing something complicated.  This is also better for latency
> >> because it will free the vCPU thread more quickly and leave the work to
> >> the I/O thread.
> > Do you mean we define a per-e1000e bottom half, and trigger this bh
> > in the MMIO write or packet-send path?
>
>
> Probably a TX bh.
>

I will try to write this TX bh to strengthen my understanding of this part.
Maybe I will reference the virtio-net implementation.



Thanks,
Li Qiang

>
> > So even if we again trigger the MMIO write, then
> > second bh will not be executed?
>
>
> Bhs are serialized, so there is no re-entrancy issue.
>
> Thanks
>
>
> >
> >
> > Thanks,
> > Li Qiang
> >
> >> Paolo
> >>
>

