From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
To: Umang Jain <umang.jain@ideasonboard.com>
Cc: linux-staging@lists.linux.dev, Dan Carpenter <error27@gmail.com>,
Kieran Bingham <kieran.bingham@ideasonboard.com>,
Dave Stevenson <dave.stevenson@raspberrypi.com>,
Phil Elwell <phil@raspberrypi.com>, Greg KH <greg@kroah.com>,
Stefan Wahren <wahrenst@gmx.net>
Subject: Re: [PATCH v4 06/11] staging: vc04_services: Move global variables tracking allocated pages
Date: Fri, 29 Mar 2024 01:04:08 +0200
Message-ID: <20240328230408.GN11463@pendragon.ideasonboard.com>
In-Reply-To: <20240328225958.GK11463@pendragon.ideasonboard.com>
On Fri, Mar 29, 2024 at 12:59:59AM +0200, Laurent Pinchart wrote:
> Hi Umang,
>
> Thank you for the patch.
>
> On Thu, Mar 28, 2024 at 11:41:28PM +0530, Umang Jain wrote:
> > The variables tracking allocated page fragments in vchiq_arm.c
> > can be easily moved to struct vchiq_drv_mgmt instead of being global.
> > This helps us to drop the non-essential global variables in the vchiq
> > interface.
> >
> > No functional changes intended in this patch.
> >
> > Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
>
> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>
> > ---
> > .../interface/vchiq_arm/vchiq_arm.c | 53 +++++++++----------
> > .../interface/vchiq_arm/vchiq_arm.h | 6 +++
> > 2 files changed, 30 insertions(+), 29 deletions(-)
> >
> > diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
> > index 2005d392d0b7..b335948f4b0c 100644
> > --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
> > +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
> > @@ -130,12 +130,6 @@ struct vchiq_pagelist_info {
> > };
> >
> > static void __iomem *g_regs;
> > -static unsigned int g_fragments_size;
> > -static char *g_fragments_base;
> > -static char *g_free_fragments;
> > -static struct semaphore g_free_fragments_sema;
> > -
> > -static DEFINE_SEMAPHORE(g_free_fragments_mutex, 1);
> >
> > static int
> > vchiq_blocking_bulk_transfer(struct vchiq_instance *instance, unsigned int handle, void *data,
> > @@ -414,20 +408,20 @@ create_pagelist(struct vchiq_instance *instance, char *buf, char __user *ubuf,
> > (drv_mgmt->info->cache_line_size - 1)))) {
> > char *fragments;
> >
> > - if (down_interruptible(&g_free_fragments_sema)) {
> > + if (down_interruptible(&drv_mgmt->free_fragments_sema)) {
> > cleanup_pagelistinfo(instance, pagelistinfo);
> > return NULL;
> > }
> >
> > - WARN_ON(!g_free_fragments);
> > + WARN_ON(!drv_mgmt->free_fragments);
> >
> > - down(&g_free_fragments_mutex);
> > - fragments = g_free_fragments;
> > + down(&drv_mgmt->free_fragments_mutex);
> > + fragments = drv_mgmt->free_fragments;
> > WARN_ON(!fragments);
> > - g_free_fragments = *(char **)g_free_fragments;
> > - up(&g_free_fragments_mutex);
> > + drv_mgmt->free_fragments = *(char **)drv_mgmt->free_fragments;
> > + up(&drv_mgmt->free_fragments_mutex);
> > pagelist->type = PAGELIST_READ_WITH_FRAGMENTS +
> > - (fragments - g_fragments_base) / g_fragments_size;
> > + (fragments - drv_mgmt->fragments_base) / drv_mgmt->fragments_size;
> > }
> >
> > return pagelistinfo;
> > @@ -455,10 +449,10 @@ free_pagelist(struct vchiq_instance *instance, struct vchiq_pagelist_info *pagel
> > pagelistinfo->scatterlist_mapped = 0;
> >
> > /* Deal with any partial cache lines (fragments) */
> > - if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS && g_fragments_base) {
> > - char *fragments = g_fragments_base +
> > + if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS && drv_mgmt->fragments_base) {
> > + char *fragments = drv_mgmt->fragments_base +
> > (pagelist->type - PAGELIST_READ_WITH_FRAGMENTS) *
> > - g_fragments_size;
> > + drv_mgmt->fragments_size;
> > int head_bytes, tail_bytes;
> >
> > head_bytes = (drv_mgmt->info->cache_line_size - pagelist->offset) &
> > @@ -483,11 +477,11 @@ free_pagelist(struct vchiq_instance *instance, struct vchiq_pagelist_info *pagel
> > fragments + drv_mgmt->info->cache_line_size,
> > tail_bytes);
> >
> > - down(&g_free_fragments_mutex);
> > - *(char **)fragments = g_free_fragments;
> > - g_free_fragments = fragments;
> > - up(&g_free_fragments_mutex);
> > - up(&g_free_fragments_sema);
> > + down(&drv_mgmt->free_fragments_mutex);
> > + *(char **)fragments = drv_mgmt->free_fragments;
> > + drv_mgmt->free_fragments = fragments;
> > + up(&drv_mgmt->free_fragments_mutex);
> > + up(&drv_mgmt->free_fragments_sema);
> > }
> >
> > /* Need to mark all the pages dirty. */
> > @@ -523,11 +517,11 @@ static int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state
> > if (err < 0)
> > return err;
> >
> > - g_fragments_size = 2 * drv_mgmt->info->cache_line_size;
> > + drv_mgmt->fragments_size = 2 * drv_mgmt->info->cache_line_size;
> >
> > /* Allocate space for the channels in coherent memory */
> > slot_mem_size = PAGE_ALIGN(TOTAL_SLOTS * VCHIQ_SLOT_SIZE);
> > - frag_mem_size = PAGE_ALIGN(g_fragments_size * MAX_FRAGMENTS);
> > + frag_mem_size = PAGE_ALIGN(drv_mgmt->fragments_size * MAX_FRAGMENTS);
> >
> > slot_mem = dmam_alloc_coherent(dev, slot_mem_size + frag_mem_size,
> > &slot_phys, GFP_KERNEL);
> > @@ -547,15 +541,16 @@ static int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state
> > vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX] =
> > MAX_FRAGMENTS;
> >
> > - g_fragments_base = (char *)slot_mem + slot_mem_size;
> > + drv_mgmt->fragments_base = (char *)slot_mem + slot_mem_size;
> >
> > - g_free_fragments = g_fragments_base;
> > + drv_mgmt->free_fragments = drv_mgmt->fragments_base;
> > for (i = 0; i < (MAX_FRAGMENTS - 1); i++) {
> > - *(char **)&g_fragments_base[i * g_fragments_size] =
> > - &g_fragments_base[(i + 1) * g_fragments_size];
> > + *(char **)&drv_mgmt->fragments_base[i * drv_mgmt->fragments_size] =
> > + &drv_mgmt->fragments_base[(i + 1) * drv_mgmt->fragments_size];
> > }
> > - *(char **)&g_fragments_base[i * g_fragments_size] = NULL;
> > - sema_init(&g_free_fragments_sema, MAX_FRAGMENTS);
> > + *(char **)&drv_mgmt->fragments_base[i * drv_mgmt->fragments_size] = NULL;
> > + sema_init(&drv_mgmt->free_fragments_sema, MAX_FRAGMENTS);
> > + sema_init(&drv_mgmt->free_fragments_mutex, 1);
> >
> > err = vchiq_init_state(state, vchiq_slot_zero, dev);
> > if (err)
> > diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
> > index 4a5b5ae9625a..a3bba245bfe2 100644
> > --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
> > +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
> > @@ -59,6 +59,12 @@ struct vchiq_drv_mgmt {
> > struct mutex connected_mutex;
> >
> > void (*deferred_callback[VCHIQ_DRV_MAX_CALLBACKS])(void);
> > +
> > + struct semaphore free_fragments_sema;
> > + struct semaphore free_fragments_mutex;
Could we turn the latter into a mutex (in a separate patch)?
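
For reference, the conversion I have in mind would look roughly like this
(a sketch only, against the field and function names introduced in this
series):

```c
/* In struct vchiq_drv_mgmt (vchiq_arm.h): replace the binary semaphore
 * with a proper mutex, which documents the locking intent and gets us
 * lockdep validation and owner checking for free.
 */
struct mutex free_fragments_mutex;	/* was: struct semaphore */

/* In vchiq_platform_init(): */
mutex_init(&drv_mgmt->free_fragments_mutex);	/* was: sema_init(..., 1) */

/* In create_pagelist() and free_pagelist(): */
mutex_lock(&drv_mgmt->free_fragments_mutex);	/* was: down(...) */
/* ... manipulate the free fragments list ... */
mutex_unlock(&drv_mgmt->free_fragments_mutex);	/* was: up(...) */
```

The semaphore is only ever initialized to 1 and used for short critical
sections around the free list, so nothing should depend on
semaphore-specific semantics (such as up() from a different context).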
> > + char *fragments_base;
> > + char *free_fragments;
> > + unsigned int fragments_size;
> > };
> >
> > struct user_service {
--
Regards,
Laurent Pinchart
Thread overview: 21+ messages
2024-03-28 18:11 [PATCH v4 00/11] staging: vc04_services: Drop non-essential global members Umang Jain
2024-03-28 18:11 ` [PATCH v4 01/11] staging: vc04_services: Drop g_once_init global variable Umang Jain
2024-03-28 18:11 ` [PATCH v4 02/11] staging: vc04_services: vchiq_arm: Split driver static and runtime data Umang Jain
2024-03-28 22:33 ` Laurent Pinchart
2024-03-28 18:11 ` [PATCH v4 03/11] staging: vc04_services: vchiq_arm: Drop g_cache_line_size Umang Jain
2024-03-28 18:11 ` [PATCH v4 04/11] staging: vc04_services: Move variables for tracking connections Umang Jain
2024-03-28 22:38 ` Laurent Pinchart
2024-03-28 18:11 ` [PATCH v4 05/11] staging: vc04_services: Drop vchiq_connected.[ch] files Umang Jain
2024-03-28 22:52 ` Laurent Pinchart
2024-03-28 18:11 ` [PATCH v4 06/11] staging: vc04_services: Move global variables tracking allocated pages Umang Jain
2024-03-28 22:59 ` Laurent Pinchart
2024-03-28 23:04 ` Laurent Pinchart [this message]
2024-04-01 5:00 ` Umang Jain
2024-03-28 18:11 ` [PATCH v4 07/11] staging: vc04_services: Move global memory mapped pointer Umang Jain
2024-03-28 23:01 ` Laurent Pinchart
2024-03-28 18:11 ` [PATCH v4 08/11] staging: vc04_services: Move spinlocks to vchiq_state Umang Jain
2024-03-28 18:11 ` [PATCH v4 09/11] staging: vc04_services: vchiq_mmal: Rename service_callback() Umang Jain
2024-03-28 23:03 ` Laurent Pinchart
2024-03-28 18:11 ` [PATCH v4 10/11] staging: vc04_services: Move global g_state vchiq_state pointer Umang Jain
2024-03-28 18:11 ` [PATCH v4 11/11] staging: vc04_services: Drop completed TODO item Umang Jain
2024-04-09 15:46 ` [PATCH v4 00/11] staging: vc04_services: Drop non-essential global members Greg KH