From: Jason Wang <jasowang@redhat.com>
To: Srujana Challa <schalla@marvell.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Cc: "virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	 "xuanzhuo@linux.alibaba.com" <xuanzhuo@linux.alibaba.com>,
	Vamsi Krishna Attunuru <vattunuru@marvell.com>,
	 Shijith Thotton <sthotton@marvell.com>,
	Nithin Kumar Dabilpuram <ndabilpuram@marvell.com>,
	 Jerin Jacob <jerinj@marvell.com>, eperezma <eperezma@redhat.com>
Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver for Marvell OCTEON DPU devices
Date: Tue, 16 Apr 2024 11:17:48 +0800
Message-ID: <CACGkMEtMvb6HOebVxjn8070+PvtPp=9FAvuNA6uKd4NK+L996Q@mail.gmail.com>
In-Reply-To: <PH7PR18MB535426965C489176FA9D8281A0092@PH7PR18MB5354.namprd18.prod.outlook.com>

On Mon, Apr 15, 2024 at 8:42 PM Srujana Challa <schalla@marvell.com> wrote:
>
> > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver for Marvell
> > OCTEON DPU devices
> >
> > On Fri, Apr 12, 2024 at 5:49 PM Srujana Challa <schalla@marvell.com> wrote:
> > >
> > > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver for
> > > > Marvell OCTEON DPU devices
> > > >
> > > > On Fri, Apr 12, 2024 at 1:13 PM Srujana Challa <schalla@marvell.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jason Wang <jasowang@redhat.com>
> > > > > > Sent: Thursday, April 11, 2024 11:32 AM
> > > > > > To: Srujana Challa <schalla@marvell.com>
> > > > > > Cc: Michael S. Tsirkin <mst@redhat.com>;
> > > > > > virtualization@lists.linux.dev; xuanzhuo@linux.alibaba.com;
> > > > > > Vamsi Krishna Attunuru <vattunuru@marvell.com>; Shijith Thotton
> > > > > > <sthotton@marvell.com>; Nithin Kumar Dabilpuram
> > > > > > <ndabilpuram@marvell.com>; Jerin Jacob <jerinj@marvell.com>;
> > > > > > eperezma <eperezma@redhat.com>
> > > > > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver
> > > > > > for Marvell OCTEON DPU devices
> > > > > >
> > > > > > On Wed, Apr 10, 2024 at 8:35 PM Srujana Challa <schalla@marvell.com> wrote:
> > > > > > >
> > > > > > > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA
> > > > > > > > driver for Marvell OCTEON DPU devices
> > > > > > > >
> > > > > > > > On Wed, Apr 10, 2024 at 10:15:37AM +0000, Srujana Challa wrote:
> > > > > > > > > > > > > +
> > > > > > > > > > > > > +       domain = iommu_get_domain_for_dev(dev);
> > > > > > > > > > > > > +       if (!domain || domain->type == IOMMU_DOMAIN_IDENTITY) {
> > > > > > > > > > > > > +               dev_info(dev, "NO-IOMMU\n");
> > > > > > > > > > > > > +               octep_vdpa_ops.set_map = octep_vdpa_set_map;
> > > > > > > > > > > >
> > > > > > > > > > > > Is this a shortcut to get better performance?
> > > > > > > > > > > > The DMA API should handle those gracefully, I think.
> > > > > > > > > > > When the IOMMU is disabled on the host and
> > > > > > > > > > > set_map/dma_map is not set, vhost-vdpa reports the
> > > > > > > > > > > error "Failed to allocate domain, device is not IOMMU
> > > > > > > > > > > cache coherent capable\n". Hence we are doing it this
> > > > > > > > > > > way to get better performance.
> > > > > > > > > >
> > > > > > > > > > The problem is, assuming the device does not have any
> > > > > > > > > > internal IOMMU:
> > > > > > > > > >
> > > > > > > > > > 1) If we allow it to run without an IOMMU, it opens a
> > > > > > > > > > window for the guest to attack the host.
> > > > > > > > > > 2) If you see a performance issue with
> > > > > > > > > > IOMMU_DOMAIN_IDENTITY, let's report it to the DMA/IOMMU
> > > > > > > > > > maintainers to fix that.
> > > > > > > > > It will be helpful for the host networking case when the
> > > > > > > > > IOMMU is disabled. Can we take the VFIO PCI driver approach
> > > > > > > > > as a reference, where the user explicitly sets
> > > > > > > > > "enable_unsafe_noiommu_mode" via a module parameter?
> > > > > > > >
> > > > > > > > vfio is a userspace driver so it's userspace's responsibility.
> > > > > > > > what exactly ensures correctness here? does the device have
> > > > > > > > an on-chip iommu?
> > > > > > > >
> > > > > > > Our device features an on-chip IOMMU, although it is not
> > > > > > > utilized for host-side targeted DMA operations. We included
> > > > > > > no-IOMMU mode in our driver to ensure that host applications,
> > > > > > > such as the DPDK virtio user PMD, continue to function even
> > > > > > > when operating in no-IOMMU mode.
> > > > > >
> > > > > > I may be missing something, but set_map() is empty in this
> > > > > > driver. How could such isolation be done?
> > > > >
> > > > > In the no-IOMMU case, there would be no domain, and the user of
> > > > > vhost-vdpa (the DPDK virtio user PMD) would create the mapping
> > > > > and pass the PA (= IOVA) to the device directly, so the device
> > > > > can DMA straight to the PA.
> > > >
> > > > Yes, but this doesn't differ too much from the case where DMA API is
> > > > used with IOMMU disabled.
> > > >
> > > > Are you saying DMA API introduces overheads in this case?
> > > No, actually the current vhost-vdpa code does not allow
> > > IOMMU-disabled mode if the set_map/dma_map op is not set. Hence, we
> > > are setting set_map to a dummy API to allow IOMMU-disabled mode.
> > >
> > > Following is the code snippet from drivers/vhost/vdpa.c:
> > >
> > >         /* Device want to do DMA by itself */
> > >         if (ops->set_map || ops->dma_map)
> > >                 return 0;
> > >
> > >         bus = dma_dev->bus;
> > >         if (!bus)
> > >                 return -EFAULT;
> > >
> > >         if (!device_iommu_capable(dma_dev, IOMMU_CAP_CACHE_COHERENCY))
> > >                 return -ENOTSUPP;
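The check quoted above from drivers/vhost/vdpa.c can be modeled in a few lines. This is a simplified userspace sketch with illustrative names, not the actual kernel symbols; -95 stands in for -ENOTSUPP. It shows why a dummy set_map makes the bind succeed: the first branch fires and the coherency requirement is never reached.

```c
#include <stddef.h>

/* Simplified model of the vhost-vdpa bind check (illustrative names). */
struct vdpa_map_ops {
	void *set_map;	/* non-NULL: device claims to handle DMA mapping itself */
	void *dma_map;
};

int vhost_vdpa_can_bind(const struct vdpa_map_ops *ops, int iommu_cache_coherent)
{
	/* "Device want to do DMA by itself": no host IOMMU domain needed,
	 * so a dummy set_map is enough to pass this check. */
	if (ops->set_map || ops->dma_map)
		return 0;

	/* Otherwise vhost-vdpa requires a cache-coherent IOMMU domain. */
	if (!iommu_cache_coherent)
		return -95; /* stand-in for -ENOTSUPP */
	return 0;
}
```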
> >
> > Right, so here's the question.
> >
> > When the IOMMU is disabled and there's no isolation from the device's
> > on-chip IOMMU, it might have security implications. For example, if
> > we're using PAs, userspace could attack the kernel.
> >
> > So there should be some logic in set_map() to program the on-chip
> > IOMMU to isolate DMA in that case, but I don't see such an
> > implementation in set_map().
>
> Our chip lacks support for an on-chip IOMMU for host-side targeted DMA
> operations. When using the DPDK virtio user PMD, we've noticed a
> significant 80% performance improvement when the IOMMU is disabled on
> certain x86 machines. This performance improvement can be leveraged by
> embedded platforms where applications run in a controlled environment.
> Maybe it's a trade-off between security and performance.
>
> We can disable the no-IOMMU support by default and enable it through a
> module parameter, and taint the kernel similar to the VFIO driver
> (enable_unsafe_noiommu_mode), right?

Could be one way.
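A rough sketch of what such an opt-in gate could look like, modeled as plain C with hypothetical names; a real driver would declare the flag with module_param() and call add_taint() on the unsafe path, as VFIO does for enable_unsafe_noiommu_mode.

```c
#include <stdbool.h>

/* Hypothetical opt-in flag; in a kernel driver this would be a
 * module parameter, off by default. */
bool enable_unsafe_noiommu = false;

/* Returns 0 for the normal IOMMU-backed path, 1 when unsafe no-IOMMU
 * mode is explicitly enabled, and -1 when no-IOMMU is refused. */
int octep_check_noiommu(bool have_iommu_domain)
{
	if (have_iommu_domain)
		return 0;
	if (!enable_unsafe_noiommu)
		return -1;
	/* kernel code would taint here, e.g. add_taint(TAINT_USER, ...) */
	return 1;
}
```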

Michael, any thoughts on this?

Thanks

> >
> > >
> > > The performance degradation when the IOMMU is enabled is not due to
> > > the DMA API but to the x86 HW IOMMU translation performance on
> > > certain low-end x86 machines.
> >
> > This might be true but it's not specific to vDPA I think?
> >
> > Thanks
> >
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > >
> > > > > > > We observed performance impacts on certain low-end x86
> > > > > > > machines when IOMMU mode was enabled.
> > > > > > > I think correctness is the host userspace application's
> > > > > > > responsibility in this case, when vhost-vdpa is used with a
> > > > > > > host application such as the DPDK virtio user PMD.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > >
> > > > >
> > >
>


Thread overview: 28+ messages
2024-03-27 11:21 [PATCH] virtio: vdpa: vDPA driver for Marvell OCTEON DPU devices Srujana Challa
2024-03-29  4:28 ` Jason Wang
2024-03-29 12:34   ` [EXTERNAL] " Srujana Challa
2024-03-31 11:31     ` Michael S. Tsirkin
2024-04-07  3:24       ` Jason Wang
2024-04-07  3:34     ` Jason Wang
2024-04-10 10:15       ` Srujana Challa
2024-04-10 11:19         ` Michael S. Tsirkin
2024-04-10 12:34           ` Srujana Challa
2024-04-11  6:01             ` Jason Wang
2024-04-12  5:12               ` Srujana Challa
2024-04-12  6:41                 ` Jason Wang
2024-04-12  9:48                   ` Srujana Challa
2024-04-15  6:49                     ` Jason Wang
2024-04-15 12:42                       ` Srujana Challa
2024-04-16  3:17                         ` Jason Wang [this message]
2024-04-22 20:42                           ` Michael S. Tsirkin
2024-04-24 13:05                             ` Srujana Challa
2024-04-23  5:40                           ` Srujana Challa
2024-04-11  5:59         ` Jason Wang
2024-03-29 11:37 ` Stefano Garzarella
2024-03-29 13:02   ` [EXTERNAL] " Srujana Challa
2024-03-29 13:19     ` Stefano Garzarella
2024-04-10 12:40       ` Srujana Challa
2024-04-22 20:44 ` Michael S. Tsirkin
2024-04-24 19:08 ` [PATCH v2] " Srujana Challa
2024-04-25  1:57   ` Jason Wang
2024-04-25  9:36   ` Michael S. Tsirkin
