From: Nicolin Chen <nicolinc@nvidia.com>
To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	Linuxarm <linuxarm@huawei.com>,
	Zhangfei Gao <zhangfei.gao@linaro.org>,
	Michael Shavit <mshavit@google.com>,
	Eric Auger <eric.auger@redhat.com>,
	Moritz Fischer <mdf@kernel.org>,
	"baolu.lu@linux.intel.com" <baolu.lu@linux.intel.com>
Subject: Re: Query on ARM SMMUv3 nested support
Date: Mon, 8 Apr 2024 23:12:57 -0700	[thread overview]
Message-ID: <ZhTcaSGOvsIkZDy1@Asurada-Nvidia> (raw)
In-Reply-To: <02f3fbc5145d4449b3313eb802ecfa2c@huawei.com>

Sorry for my belated reply.

On Tue, Apr 02, 2024 at 07:25:56AM +0000, Shameerali Kolothum Thodi wrote:

> Here we have specified nested-smmuv3 and have a PCIe root port (ioh3420) and
> a virtio-pci dev (for virtfs). Once this VM is up and running, both the ioh3420
> and virtio-9p-pci will not work as expected, because we return the sysmem
> address space for them.
> 
> And then later when you try to hotplug a vfio dev to the PCIe root port,
> device_add vfio-pci,host=0000:7d:02.1,bus=rp1,id=net1,iommufd=iommufd0
> 
> it won't work either because the address space for ioh3420 is wrong.
> 
> Taking another look at this, I think we can replace my earlier attempt to fix
> this by detecting whether the dev is vfio or not and adding that to the check below:
> 
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -619,8 +619,15 @@ static AddressSpace *smmu_find_add_as(PCIBus
> *bus, void *opaque, int devfn)
>      SMMUState *s = opaque;
>      SMMUPciBus *sbus = smmu_get_sbus(s, bus);
>      SMMUDevice *sdev = smmu_get_sdev(s, sbus, bus, devfn);
> +    bool is_vfio = false;
> +    PCIDevice *pdev;
> 
> -    if (s->nested && !s->s2_hwpt) {
> +    pdev = pci_find_device(bus, pci_bus_num(bus), devfn);
> +    if (object_dynamic_cast(OBJECT(pdev), "vfio-pci")) {
> +        is_vfio = true;
> +    }
> +
> +    if (is_vfio && s->nested && !s->s2_hwpt) {
>          return &sdev->as_sysmem;
>      } else {
>          return &sdev->as;
> 
> Please let me know your thoughts on this.

Hmm, checking for "vfio-pci" here feels like a bit of a layering
violation... But I understand how you got there.

I think the current way of enabling the "nested" flag for
a device is a bit messy:
    if -machine has "iommu=nested-smmuv3" &&
       -device is "vfio-pci" &&
       -device has "iommufd"

Any other device that doesn't meet the latter two conditions
is silently added to a soft smmuv3 instead, which now feels
wrong to me.

Also, the callbacks from hw/vfio/pci.c into the iommu code are
tricky as well:
    vfio_realize() {
        pci_device_iommu_address_space() {
            smmu_find_add_as();   // Can only assume vfio-pci is behind
                                  // a nested SMMU. So returns system AS
        }
        vfio_attach_device() {
            ioctl(VFIO_DEVICE_BIND_IOMMUFD);
            ioctl(VFIO_DEVICE_ATTACH_IOMMUFD_PT);
            listener_add_address_space() {
                if (memory_region_is_iommu(section->mr)) { // Skip
                    iommu_notifier_init(IOMMU_NOTIFIER_IOTLB_EVENTS);
                    memory_region_iommu_replay();
                }
            }
        }
    }

In my view, we should first sort this out with a cleaner
enabling command. And then, as Jason pointed out, other
(non-vfio-pci) devices should probably not be added to the
SMMU at all, i.e. they might need to be placed on a separate
PCI bus/bridge without an iommu. In that case, the vSMMU
module would not need an iommu AS at all when
"iommu=nested-smmuv3".

Thoughts?

Thanks
Nicolin


Thread overview: 9+ messages
2024-03-13 10:13 Query on ARM SMMUv3 nested support Shameerali Kolothum Thodi
2024-03-13 23:50 ` Nicolin Chen
2024-03-18 16:22   ` Jason Gunthorpe
2024-03-22 15:04   ` Shameerali Kolothum Thodi
2024-03-29  7:13     ` Nicolin Chen
2024-04-02  7:25       ` Shameerali Kolothum Thodi
2024-04-02 11:28         ` Jason Gunthorpe
2024-04-09  6:12         ` Nicolin Chen [this message]
2024-04-09 19:47           ` Nicolin Chen
