* Question regarding optimisation of RPMsg round trip times on Xilinx ZynqMP hardware
@ 2024-10-26 10:57 Felix Kuhlmann
From: Felix Kuhlmann @ 2024-10-26 10:57 UTC (permalink / raw)
To: linux-remoteproc
Hello everybody,
I need your help concerning an error that was returned while trying to
use the AMD Xilinx implementation of remoteproc. I hope that this is
the right place to ask for help.
I'm currently working on a project that requires Remoteproc and
RPMsg. The hardware I am working with is a Trenz SoM containing an
AMD Zynq UltraScale+ MPSoC (CG variant), DDR3 external RAM, and a few
additional components.
One of the targets of the project is that the communication between
the RPU and the APU should happen under soft real-time conditions. The
issue with the communication examples provided by Xilinx is that they
use external RAM for the RPMsg buffers. This results in highly
non-deterministic communication latency jitter, most likely because
DDR RAM is not well suited to such applications.
Given that the SoC already has an On-Chip Memory that is designed for such
applications, I am curious whether changing the shared memory location
for RPMsg to reside inside the OCM of the SoC would result in a performance
boost. Do you have any experience with such performance benefits?
I'm currently developing a solution, trying to adapt the examples AMD
provides, but when trying to boot the firmware image, Remoteproc complains
that it is unable to allocate the memory, saying the firmware image size
doesn't fit the length request. This results in Remoteproc returning
error -12 (ENOMEM), which simply indicates that booting the RPU failed.
More information isn't logged.
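My understanding of the check that fails is roughly the following. This is
a simplified sketch of the kind of segment-fits-in-carveout test the
remoteproc ELF loader performs, not the actual kernel code; the struct and
function names here are my own:

```c
#include <stddef.h>

/* Simplified model of a remoteproc carveout: a region of device
 * memory described by a start device address and a length. */
struct carveout {
    unsigned long da;   /* device address the region starts at */
    size_t len;         /* length of the region in bytes */
};

/* Returns 0 if an ELF segment [da, da + memsz) fits inside the
 * carveout, or -12 (-ENOMEM) if it does not -- the same errno
 * value the boot failure reports. */
int check_segment_fits(const struct carveout *c,
                       unsigned long da, size_t memsz)
{
    if (da < c->da || da + memsz > c->da + c->len)
        return -12; /* -ENOMEM */
    return 0;
}
```

So a firmware segment even one byte larger than the OCM carveout it maps
to would be enough to trigger this, if I read the error correctly.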
I have tried to read the documentation, but I can't really work out
which aspects I need to bear in mind when adapting my code to use a
different memory region altogether.
My previous attempts at circumventing this issue failed, resulting in
the error above.
A few of the things I've tried are:
- Changing the shared memory and the vring addresses to be inside of
the OCM
- Adding the OCM and the remoteproc buffers to the device tree
- Attempting to increase the requested carveout for the firmware
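For reference, the reserved-memory change I'm experimenting with looks
roughly like the sketch below. The node names follow the AMD examples;
0xFFFC0000 is the ZynqMP OCM base, but the offsets and sizes are guesses
on my part and not validated against the firmware's actual footprint:

```dts
/* Hypothetical OCM layout -- everything must fit in the 256 KiB OCM
 * window (0xFFFC0000-0xFFFFFFFF) together with whatever else is
 * placed there. */
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    rpu0vdev0vring0: vdev0vring0@ffff0000 {
        no-map;
        reg = <0x0 0xffff0000 0x0 0x4000>;
    };
    rpu0vdev0vring1: vdev0vring1@ffff4000 {
        no-map;
        reg = <0x0 0xffff4000 0x0 0x4000>;
    };
    rpu0vdev0buffer: vdev0buffer@ffff8000 {
        no-map;
        reg = <0x0 0xffff8000 0x0 0x8000>;
    };
};
```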
I hope this provides a sufficient overview of my situation. If you need
further information or logs in order to figure out what went wrong,
feel free to ask for that.
Thank you in advance and best regards,
Felix Kuhlmann
* Re: Question regarding optimisation of RPMsg round trip times on Xilinx ZynqMP hardware
From: Mathieu Poirier @ 2024-10-28 15:17 UTC (permalink / raw)
To: Felix Kuhlmann; +Cc: linux-remoteproc
Hi Felix,
On Sat, 26 Oct 2024 at 04:57, Felix Kuhlmann <felix-kuhlmann@gmx.de> wrote:
>
> Hello everybody,
>
> I need your help concerning an error that was returned while trying to
> use the AMD Xilinx implementation of remoteproc. I hope that this is
> the right place to ask for help.
>
> I'm currently working on a project that requires Remoteproc and
> RPMsg. The hardware I am working with is a Trenz SoM containing an
> AMD Zynq UltraScale+ MPSoC (CG variant), DDR3 external RAM, and a few
> additional components.
>
> One of the targets of the project is that the communication between
> the RPU and the APU should happen under soft real-time conditions. The
> issue with the communication examples provided by Xilinx is that they
> use external RAM for the RPMsg buffers. This results in highly
> non-deterministic communication latency jitter, most likely because
> DDR RAM is not well suited to such applications.
>
> Given that the SoC already has an On-Chip Memory that is designed for such
> applications, I am curious whether changing the shared memory location
> for RPMsg to reside inside the OCM of the SoC would result in a performance
> boost. Do you have any experience with such performance benefits?
>
I do not have direct experience with this kind of configuration, but I
would think both the vring and vdev buffers would need to be located in
the OCM. That said, the OCM may not be big enough for that, or it may
not be accessible by the main processor. Another option could be to use
non-cached device memory, but there may be constraints at that level as
well. This is really HW specific, and I do not have enough details to
provide further guidance.
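As a rough illustration of what placing the vrings in OCM means at the
resource-table level: the struct below is a simplified mirror of the
kernel's fw_rsc_vdev_vring entry (see include/linux/remoteproc.h), with
the fields trimmed to what matters for placement, and the addresses are
assumptions to be adapted, not a known-working layout:

```c
#include <stdint.h>

/* Simplified mirror of the resource-table vring descriptor; only
 * the placement-relevant fields are kept. */
struct vring_rsc {
    uint32_t da;       /* device address where the vring lives */
    uint32_t align;    /* vring alignment */
    uint32_t num;      /* number of buffers (must be a power of two) */
    uint32_t notifyid;
};

/* ZynqMP OCM spans 0xFFFC0000-0xFFFFFFFF (256 KiB). */
#define OCM_BASE 0xFFFC0000u
#define OCM_END  0xFFFFFFFFu

/* Hypothetical placement of the two vrings inside the OCM window. */
static const struct vring_rsc vring0 = { 0xFFFF0000u, 0x1000, 16, 0 };
static const struct vring_rsc vring1 = { 0xFFFF4000u, 0x1000, 16, 1 };

/* Returns 1 if a vring descriptor looks sane for OCM placement:
 * device address inside the OCM window and a power-of-two buffer
 * count. This is only a sanity check, not a guarantee the layout
 * works on real hardware. */
int vring_in_ocm(const struct vring_rsc *v)
{
    return v->da >= OCM_BASE && v->da <= OCM_END &&
           v->num != 0 && (v->num & (v->num - 1)) == 0;
}
```

Whether the APU can actually reach those addresses is exactly the HW
question I can't answer from here.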
Thanks,
Mathieu
> I'm currently developing a solution, trying to adapt the examples AMD
> provides, but when trying to boot the firmware image, Remoteproc complains
> that it is unable to allocate the memory, saying the firmware image size
> doesn't fit the length request. This results in Remoteproc returning
> error -12 (ENOMEM), which simply indicates that booting the RPU failed.
> More information isn't logged.
>
> I have tried to read the documentation, but I can't really work out
> which aspects I need to bear in mind when adapting my code to use a
> different memory region altogether.
>
> My previous attempts at circumventing this issue failed, resulting in
> the error above.
> A few of the things I've tried are:
> - Changing the shared memory and the vring addresses to be inside of
> the OCM
> - Adding the OCM and the remoteproc buffers to the device tree
> - Attempting to increase the requested carveout for the firmware
>
> I hope this provides a sufficient overview of my situation. If you need
> further information or logs in order to figure out what went wrong,
> feel free to ask for that.
>
> Thank you in advance and best regards,
>
> Felix Kuhlmann
>
>