All the mail mirrored from lore.kernel.org
From: Srujana Challa <schalla@marvell.com>
To: "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>
Cc: "virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xuanzhuo@linux.alibaba.com" <xuanzhuo@linux.alibaba.com>,
	Vamsi Krishna Attunuru <vattunuru@marvell.com>,
	Shijith Thotton <sthotton@marvell.com>,
	Nithin Kumar Dabilpuram <ndabilpuram@marvell.com>,
	Jerin Jacob <jerinj@marvell.com>, eperezma <eperezma@redhat.com>
Subject: RE: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver for Marvell OCTEON DPU devices
Date: Wed, 24 Apr 2024 13:05:21 +0000	[thread overview]
Message-ID: <DS0PR18MB53689F93A92A48928582BF95A0102@DS0PR18MB5368.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20240422164108-mutt-send-email-mst@kernel.org>

> > > > > > > > > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA
> > > > > > > > > > driver for Marvell OCTEON DPU devices
> > > > > > > > > >
> > > > > > > > > > On Wed, Apr 10, 2024 at 10:15:37AM +0000, Srujana Challa
> wrote:
> > > > > > > > > > > > > > > +       domain = iommu_get_domain_for_dev(dev);
> > > > > > > > > > > > > > > +       if (!domain || domain->type == IOMMU_DOMAIN_IDENTITY) {
> > > > > > > > > > > > > > > +               dev_info(dev, "NO-IOMMU\n");
> > > > > > > > > > > > > > > +               octep_vdpa_ops.set_map = octep_vdpa_set_map;
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Is this a shortcut to get better performance?
> > > > > > > > > > > > > > The DMA API should handle those gracefully, I think.
> > > > > > > > > > > > > When the IOMMU is disabled on the host and
> > > > > > > > > > > > > set_map/dma_map is not set, vhost-vdpa reports the
> > > > > > > > > > > > > error "Failed to allocate domain, device is not
> > > > > > > > > > > > > IOMMU cache coherent capable\n".
> > > > > > > > > > > > > Hence we are doing it this way to get better performance.
> > > > > > > > > > > >
> > > > > > > > > > > > The problems are, assuming the device does not
> > > > > > > > > > > > have any internal IOMMU:
> > > > > > > > > > > >
> > > > > > > > > > > > 1) If we allow it to run without an IOMMU, it opens
> > > > > > > > > > > > a window for the guest to attack the host.
> > > > > > > > > > > > 2) If you see a performance issue with
> > > > > > > > > > > > IOMMU_DOMAIN_IDENTITY, let's report it to the
> > > > > > > > > > > > DMA/IOMMU maintainers to fix it.
> > > > > > > > > > > It will be helpful for the host networking case when
> > > > > > > > > > > the IOMMU is disabled.
> > > > > > > > > > > Can we take the VFIO PCI driver approach as a
> > > > > > > > > > > reference, where the user explicitly sets
> > > > > > > > > > > "enable_unsafe_noiommu_mode" using a module param?
> > > > > > > > > >
> > > > > > > > > > VFIO is a userspace driver, so it's userspace's responsibility.
> > > > > > > > > > What exactly ensures correctness here? Does the device
> > > > > > > > > > have an on-chip IOMMU?
> > > > > > > > > >
> > > > > > > > > Our device features an on-chip IOMMU, although it is not
> > > > > > > > > utilized for host-side targeted DMA operations. We
> > > > > > > > > included no-IOMMU mode in our driver to ensure that host
> > > > > > > > > applications, such as the DPDK virtio-user PMD, continue
> > > > > > > > > to function even when operating in no-IOMMU mode.
> > > > > > > >
> > > > > > > > I may be missing something, but set_map() is empty in this
> > > > > > > > driver. How could such isolation be done?
> > > > > > >
> > > > > > > In the no-IOMMU case there would be no domain, and the
> > > > > > > user of vhost-vdpa (the DPDK virtio-user PMD) would create
> > > > > > > the mapping and pass the PA (= IOVA) to the device directly,
> > > > > > > so that the device can DMA straight to the PA.
> > > > > >
> > > > > > Yes, but this doesn't differ much from the case where the DMA
> > > > > > API is used with the IOMMU disabled.
> > > > > >
> > > > > > Are you saying the DMA API introduces overhead in this case?
> > > > > No, actually, the current vhost-vdpa code does not allow
> > > > > IOMMU-disabled mode if the set_map/dma_map op is not set. Hence
> > > > > we are setting set_map to a dummy API to allow IOMMU-disabled mode.
> > > > >
> > > > > Following is the code snippet from drivers/vhost/vdpa.c
> > > > >
> > > > >         /* Device want to do DMA by itself */
> > > > >         if (ops->set_map || ops->dma_map)
> > > > >                 return 0;
> > > > >
> > > > >         bus = dma_dev->bus;
> > > > >         if (!bus)
> > > > >                 return -EFAULT;
> > > > >
> > > > >         if (!device_iommu_capable(dma_dev, IOMMU_CAP_CACHE_COHERENCY))
> > > > >                 return -ENOTSUPP;
> > > >
> > > > Right, so here's the question.
> > > >
> > > > When the IOMMU is disabled and there's no isolation from the
> > > > device's on-chip IOMMU, it might have security implications. For
> > > > example, if we're using PAs, userspace could attack the kernel.
> > > >
> > > > So there should be some logic in set_map() to program the
> > > > on-chip IOMMU to isolate DMA in that case, but I don't see such
> > > > an implementation in set_map().
> > >
> > > Our chip lacks on-chip IOMMU support for host-side targeted DMA
> > > operations.
> > > When using the DPDK virtio-user PMD, we've noticed a significant
> > > 80% performance improvement when the IOMMU is disabled on certain
> > > x86 machines. This improvement can be leveraged by embedded
> > > platforms where applications run in a controlled environment.
> > > Maybe it's a trade-off between security and performance.
> > >
> > > We can disable no-IOMMU support by default and enable it through
> > > a module parameter, and taint the kernel similar to the VFIO
> > > driver (enable_unsafe_noiommu_mode), right?
> >
> > Could be one way.
> >
> > Michael, any thoughts on this?
> >
> > Thanks
> 
> My thought is there's nothing special about the Marvell chip here.
> Merge it normally. Then, if you like, work on a no-IOMMU mode in vdpa.
For now, we will remove the no-IOMMU code from this patch and work
with Jason on adding no-IOMMU mode to vdpa.
@Jason Wang Can you confirm this sounds fine?
> 
> 
> > > >
> > > > >
> > > > > The performance degradation when the IOMMU is enabled is not in
> > > > > the DMA API but in the x86 HW IOMMU translation performance on
> > > > > certain low-end x86 machines.
> > > >
> > > > This might be true, but it's not specific to vDPA, I think?
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > > We observed performance impacts on certain low-end x86
> > > > > > > > > machines when IOMMU mode was enabled.
> > > > > > > > > I think correctness is the host userspace application's
> > > > > > > > > responsibility in this case, i.e. when vhost-vdpa is used
> > > > > > > > > with a host application such as the DPDK virtio-user PMD.
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > > > Thanks.



Thread overview: 28+ messages
2024-03-27 11:21 [PATCH] virtio: vdpa: vDPA driver for Marvell OCTEON DPU devices Srujana Challa
2024-03-29  4:28 ` Jason Wang
2024-03-29 12:34   ` [EXTERNAL] " Srujana Challa
2024-03-31 11:31     ` Michael S. Tsirkin
2024-04-07  3:24       ` Jason Wang
2024-04-07  3:34     ` Jason Wang
2024-04-10 10:15       ` Srujana Challa
2024-04-10 11:19         ` Michael S. Tsirkin
2024-04-10 12:34           ` Srujana Challa
2024-04-11  6:01             ` Jason Wang
2024-04-12  5:12               ` Srujana Challa
2024-04-12  6:41                 ` Jason Wang
2024-04-12  9:48                   ` Srujana Challa
2024-04-15  6:49                     ` Jason Wang
2024-04-15 12:42                       ` Srujana Challa
2024-04-16  3:17                         ` Jason Wang
2024-04-22 20:42                           ` Michael S. Tsirkin
2024-04-24 13:05                             ` Srujana Challa [this message]
2024-04-23  5:40                           ` Srujana Challa
2024-04-11  5:59         ` Jason Wang
2024-03-29 11:37 ` Stefano Garzarella
2024-03-29 13:02   ` [EXTERNAL] " Srujana Challa
2024-03-29 13:19     ` Stefano Garzarella
2024-04-10 12:40       ` Srujana Challa
2024-04-22 20:44 ` Michael S. Tsirkin
2024-04-24 19:08 ` [PATCH v2] " Srujana Challa
2024-04-25  1:57   ` Jason Wang
2024-04-25  9:36   ` Michael S. Tsirkin
