From: Mukesh Ojha <quic_mojha@quicinc.com>
To: <neil.armstrong@linaro.org>
Cc: Bjorn Andersson <andersson@kernel.org>,
Mathieu Poirier <mathieu.poirier@linaro.org>,
Rob Herring <robh@kernel.org>,
Krzysztof Kozlowski <krzk+dt@kernel.org>,
Conor Dooley <conor+dt@kernel.org>,
Konrad Dybcio <konradybcio@kernel.org>,
Bartosz Golaszewski <bartosz.golaszewski@linaro.org>,
Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>,
<linux-arm-msm@vger.kernel.org>,
<linux-remoteproc@vger.kernel.org>, <devicetree@vger.kernel.org>,
<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 6/6] remoteproc: qcom: Enable map/unmap and SHM bridge support
Date: Tue, 8 Oct 2024 11:51:54 +0530
Message-ID: <ZwTPghV36CSIpkE4@hu-mojha-hyd.qualcomm.com>
In-Reply-To: <ZwP1t45ni/gk754B@hu-mojha-hyd.qualcomm.com>
On Mon, Oct 07, 2024 at 08:22:39PM +0530, Mukesh Ojha wrote:
> On Mon, Oct 07, 2024 at 10:05:08AM +0200, neil.armstrong@linaro.org wrote:
> > On 04/10/2024 23:23, Mukesh Ojha wrote:
> > > For Qualcomm SoCs running with the Qualcomm EL2 hypervisor (QHEE),
> > > IOMMU translation for remote processors is managed by QHEE. If the
> > > same SoC runs under KVM instead, the remoteproc carveout and devmem
> > > regions must be IOMMU-mapped by the Linux PAS driver before the
> > > remoteproc is brought up, and unmapped once it is torn down. In
> > > addition, an SHM bridge must be set up to enable memory protection
> > > for both the remoteproc metadata memory and the carveout region.
> > >
> > > Enable the support required to run Qualcomm remoteprocs on non-QHEE
> > > hypervisors.
> > >
> > > Signed-off-by: Mukesh Ojha <quic_mojha@quicinc.com>
> > > ---
> > > drivers/remoteproc/qcom_q6v5_pas.c | 41 +++++++++++++++++++++++++++++-
> > > 1 file changed, 40 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
> > > index ac339145e072..13bd13f1b989 100644
> > > --- a/drivers/remoteproc/qcom_q6v5_pas.c
> > > +++ b/drivers/remoteproc/qcom_q6v5_pas.c
> > > @@ -122,6 +122,7 @@ struct qcom_adsp {
> > > struct qcom_devmem_table *devmem;
> > > struct qcom_tzmem_area *tzmem;
> > > + unsigned long sid;
> > > };
> > > static void adsp_segment_dump(struct rproc *rproc, struct rproc_dump_segment *segment,
> > > @@ -310,9 +311,21 @@ static int adsp_start(struct rproc *rproc)
> > > if (ret)
> > > return ret;
> > > + ret = qcom_map_unmap_carveout(rproc, adsp->mem_phys, adsp->mem_size, true, true, adsp->sid);
> > > + if (ret) {
> > > + dev_err(adsp->dev, "iommu mapping failed, ret: %d\n", ret);
> > > + goto disable_irqs;
> > > + }
> > > +
> > > + ret = qcom_map_devmem(rproc, adsp->devmem, true, adsp->sid);
> > > + if (ret) {
> > > + dev_err(adsp->dev, "devmem iommu mapping failed, ret: %d\n", ret);
> > > + goto unmap_carveout;
> > > + }
> > > +
> > > ret = adsp_pds_enable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
> > > if (ret < 0)
> > > - goto disable_irqs;
> > > + goto unmap_devmem;
> > > ret = clk_prepare_enable(adsp->xo);
> > > if (ret)
> > > @@ -400,6 +413,10 @@ static int adsp_start(struct rproc *rproc)
> > > clk_disable_unprepare(adsp->xo);
> > > disable_proxy_pds:
> > > adsp_pds_disable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
> > > +unmap_devmem:
> > > + qcom_unmap_devmem(rproc, adsp->devmem, adsp->sid);
> > > +unmap_carveout:
> > > + qcom_map_unmap_carveout(rproc, adsp->mem_phys, adsp->mem_size, false, true, adsp->sid);
> > > disable_irqs:
> > > qcom_q6v5_unprepare(&adsp->q6v5);
> > > @@ -445,6 +462,9 @@ static int adsp_stop(struct rproc *rproc)
> > > dev_err(adsp->dev, "failed to shutdown dtb: %d\n", ret);
> > > }
> > > + qcom_unmap_devmem(rproc, adsp->devmem, adsp->sid);
> > > + qcom_map_unmap_carveout(rproc, adsp->mem_phys, adsp->mem_size, false, true, adsp->sid);
> > > +
> > > handover = qcom_q6v5_unprepare(&adsp->q6v5);
> > > if (handover)
> > > qcom_pas_handover(&adsp->q6v5);
> > > @@ -844,6 +864,25 @@ static int adsp_probe(struct platform_device *pdev)
> > > }
> > > platform_set_drvdata(pdev, adsp);
> > > + if (of_property_present(pdev->dev.of_node, "iommus")) {
> > > + struct of_phandle_args args;
> > > +
> > > + ret = of_parse_phandle_with_args(pdev->dev.of_node, "iommus", "#iommu-cells", 0, &args);
> > > + if (ret < 0)
> > > + return ret;
> > > +
> > > + rproc->has_iommu = true;
> > > + adsp->sid = args.args[0];
> > > + of_node_put(args.np);
> > > + ret = adsp_devmem_init(adsp);
> > > + if (ret)
> > > + return ret;
> >
> > Why don't you get this table from the firmware, like presumably QHEE does?
>
> Well, AFAIK, QHEE (EL2) has this information statically present and does
> not get it from anywhere, but I will double-check.
Confirmed: the device memory regions required by the remoteproc are
statically present in QHEE.
-Mukesh
>
> -Mukesh
>