From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
To: Nicolin Chen <nicolinc@nvidia.com>, Jason Gunthorpe <jgg@nvidia.com>
Cc: "iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	Linuxarm <linuxarm@huawei.com>,
	Zhangfei Gao <zhangfei.gao@linaro.org>,
	Michael Shavit <mshavit@google.com>,
	Eric Auger <eric.auger@redhat.com>,
	Moritz Fischer <mdf@kernel.org>
Subject: Query on ARM SMMUv3 nested support
Date: Wed, 13 Mar 2024 10:13:58 +0000	[thread overview]
Message-ID: <b8c1b2fb281246da94b35ceb5e27b538@huawei.com> (raw)

Hi Nicolin,

Thanks for the latest repos with basic SMMUv3 nested support enabled[1].
I did some basic sanity runs on a HiSilicon platform and they seem to work as
expected. The only problem is that we can't assign two devices to the VM if
they are on different physical SMMUs.

qemu-system-aarch64: -device
vfio-pci,host=0000:75:00.1,iommufd=iommufd0: [iommufd=29] error attach
0000:75:00.1 (36) to id=4: Invalid argument
qemu-system-aarch64: -device
vfio-pci,host=0000:75:00.1,iommufd=iommufd0: Unable to attach dev to
stage-2 HW pagetable: -1
Segmentation fault (core dumped)
...

I see that on the QEMU side we now allocate a single S2 HWPT and attach it to
all the devices. But currently this only works if all the assigned devices
are under the same physical SMMUv3, as the kernel checks whether the
domains belong to the same SMMUv3 instance. I remember Jason mentioning
that he is planning to relax that. So are the QEMU-side changes based on that
assumption? And any idea how we are planning to relax that restriction? Do we
check the physical SMMUv3s for compatibility and then allow/reject in the kernel?
Can QEMU then try allocating a separate S2 HWPT for those devices and attach again?
Sorry if this was already discussed elsewhere and I missed it.

Thanks,
Shameer
[1]:
https://github.com/nicolinc/iommufd/commits/wip/iommufd_nesting-03112024/
https://github.com/nicolinc/qemu/commits/wip/iommufd_vsmmu-02292024/

Thread overview: 9+ messages
2024-03-13 10:13 Shameerali Kolothum Thodi [this message]
2024-03-13 23:50 ` Query on ARM SMMUv3 nested support Nicolin Chen
2024-03-18 16:22   ` Jason Gunthorpe
2024-03-22 15:04   ` Shameerali Kolothum Thodi
2024-03-29  7:13     ` Nicolin Chen
2024-04-02  7:25       ` Shameerali Kolothum Thodi
2024-04-02 11:28         ` Jason Gunthorpe
2024-04-09  6:12         ` Nicolin Chen
2024-04-09 19:47           ` Nicolin Chen
