dri-devel Archive mirror
* [PATCH v2 00/12] accel/ivpu: Changes for 6.10
@ 2024-05-13 12:04 Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 01/12] accel/ivpu: Update VPU FW API headers Jacek Lawrynowicz
                   ` (12 more replies)
  0 siblings, 13 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Jacek Lawrynowicz

There are a couple of major new features in this patchset:
  * Hardware scheduler support (disabled by default)
  * Profiling support
  * Expose NPU busy time in sysfs

Other than that, there are two small unrelated fixes.

v2: Included Jeffrey's v1 comments

v1: https://lore.kernel.org/dri-devel/20240508132106.2387464-1-jacek.lawrynowicz@linux.intel.com

Jacek Lawrynowicz (2):
  accel/ivpu: Update VPU FW API headers
  accel/ivpu: Increase reset counter when warm boot fails

Tomasz Rusinowicz (3):
  accel/ivpu: Add NPU profiling support
  accel/ivpu: Configure fw logging using debugfs
  accel/ivpu: Share NPU busy time in sysfs

Wachowski, Karol (7):
  accel/ivpu: Add sched_mode module param
  accel/ivpu: Create priority based command queues
  accel/ivpu: Implement support for preemption buffers
  accel/ivpu: Add HWS JSM messages
  accel/ivpu: Implement support for hardware scheduler
  accel/ivpu: Add resume engine support
  accel/ivpu: Add force snoop module parameter

 drivers/accel/ivpu/Makefile       |   6 +-
 drivers/accel/ivpu/ivpu_debugfs.c |  50 +++++
 drivers/accel/ivpu/ivpu_drv.c     |  44 ++++-
 drivers/accel/ivpu/ivpu_drv.h     |  23 ++-
 drivers/accel/ivpu/ivpu_fw.c      |  10 +
 drivers/accel/ivpu/ivpu_fw.h      |   2 +
 drivers/accel/ivpu/ivpu_gem.h     |  11 +-
 drivers/accel/ivpu/ivpu_hw.h      |   3 +-
 drivers/accel/ivpu/ivpu_hw_37xx.c |   7 +-
 drivers/accel/ivpu/ivpu_hw_40xx.c |   9 +-
 drivers/accel/ivpu/ivpu_job.c     | 295 ++++++++++++++++++++++------
 drivers/accel/ivpu/ivpu_job.h     |   2 +
 drivers/accel/ivpu/ivpu_jsm_msg.c | 259 ++++++++++++++++++++++++-
 drivers/accel/ivpu/ivpu_jsm_msg.h |  20 +-
 drivers/accel/ivpu/ivpu_mmu.c     |  12 +-
 drivers/accel/ivpu/ivpu_ms.c      | 309 ++++++++++++++++++++++++++++++
 drivers/accel/ivpu/ivpu_ms.h      |  36 ++++
 drivers/accel/ivpu/ivpu_pm.c      |   5 +
 drivers/accel/ivpu/ivpu_sysfs.c   |  58 ++++++
 drivers/accel/ivpu/ivpu_sysfs.h   |  13 ++
 drivers/accel/ivpu/vpu_jsm_api.h  |  14 +-
 include/uapi/drm/ivpu_accel.h     |  69 ++++++-
 22 files changed, 1173 insertions(+), 84 deletions(-)
 create mode 100644 drivers/accel/ivpu/ivpu_ms.c
 create mode 100644 drivers/accel/ivpu/ivpu_ms.h
 create mode 100644 drivers/accel/ivpu/ivpu_sysfs.c
 create mode 100644 drivers/accel/ivpu/ivpu_sysfs.h

--
2.43.2

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v2 01/12] accel/ivpu: Update VPU FW API headers
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 02/12] accel/ivpu: Add sched_mode module param Jacek Lawrynowicz
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Jacek Lawrynowicz

Update JSM API to 3.16.0.

Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
Reviewed-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
---
 drivers/accel/ivpu/vpu_jsm_api.h | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/accel/ivpu/vpu_jsm_api.h b/drivers/accel/ivpu/vpu_jsm_api.h
index e46f3531211a..33f462b1a25d 100644
--- a/drivers/accel/ivpu/vpu_jsm_api.h
+++ b/drivers/accel/ivpu/vpu_jsm_api.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: MIT */
 /*
- * Copyright (c) 2020-2023, Intel Corporation.
+ * Copyright (c) 2020-2024, Intel Corporation.
  */
 
 /**
@@ -22,12 +22,12 @@
 /*
  * Minor version changes when API backward compatibility is preserved.
  */
-#define VPU_JSM_API_VER_MINOR 15
+#define VPU_JSM_API_VER_MINOR 16
 
 /*
  * API header changed (field names, documentation, formatting) but API itself has not been changed
  */
-#define VPU_JSM_API_VER_PATCH 6
+#define VPU_JSM_API_VER_PATCH 0
 
 /*
  * Index in the API version table
@@ -868,6 +868,14 @@ struct vpu_ipc_msg_payload_hws_set_scheduling_log {
 	 * is generated when an event log is written to this index.
 	 */
 	u64 notify_index;
+	/*
+	 * Enable extra events to be output to log for debug of scheduling algorithm.
+	 * Interpreted by VPU as a boolean to enable or disable, expected values are
+	 * 0 and 1.
+	 */
+	u32 enable_extra_events;
+	/* Zero Padding */
+	u32 reserved_0;
 };
 
 /*
-- 
2.43.2



* [PATCH v2 02/12] accel/ivpu: Add sched_mode module param
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 01/12] accel/ivpu: Update VPU FW API headers Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 03/12] accel/ivpu: Create priority based command queues Jacek Lawrynowicz
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Wachowski, Karol, Jacek Lawrynowicz

From: "Wachowski, Karol" <karol.wachowski@intel.com>

This param will be used to enable/disable the HWS (hardware scheduler).
The HWS is a FW-side feature and may not be available on all
HW generations and FW versions.

Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/accel/ivpu/ivpu_drv.c     | 4 ++++
 drivers/accel/ivpu/ivpu_drv.h     | 1 +
 drivers/accel/ivpu/ivpu_hw.h      | 3 ++-
 drivers/accel/ivpu/ivpu_hw_37xx.c | 1 +
 drivers/accel/ivpu/ivpu_hw_40xx.c | 3 ++-
 5 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
index 51d3f1a55d02..8d80052182f0 100644
--- a/drivers/accel/ivpu/ivpu_drv.c
+++ b/drivers/accel/ivpu/ivpu_drv.c
@@ -51,6 +51,10 @@ u8 ivpu_pll_max_ratio = U8_MAX;
 module_param_named(pll_max_ratio, ivpu_pll_max_ratio, byte, 0644);
 MODULE_PARM_DESC(pll_max_ratio, "Maximum PLL ratio used to set NPU frequency");
 
+int ivpu_sched_mode;
+module_param_named(sched_mode, ivpu_sched_mode, int, 0444);
+MODULE_PARM_DESC(sched_mode, "Scheduler mode: 0 - Default scheduler, 1 - Force HW scheduler");
+
 bool ivpu_disable_mmu_cont_pages;
 module_param_named(disable_mmu_cont_pages, ivpu_disable_mmu_cont_pages, bool, 0644);
 MODULE_PARM_DESC(disable_mmu_cont_pages, "Disable MMU contiguous pages optimization");
diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index bb4374d0eaec..71b87455e22b 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -158,6 +158,7 @@ struct ivpu_file_priv {
 extern int ivpu_dbg_mask;
 extern u8 ivpu_pll_min_ratio;
 extern u8 ivpu_pll_max_ratio;
+extern int ivpu_sched_mode;
 extern bool ivpu_disable_mmu_cont_pages;
 
 #define IVPU_TEST_MODE_FW_TEST            BIT(0)
diff --git a/drivers/accel/ivpu/ivpu_hw.h b/drivers/accel/ivpu/ivpu_hw.h
index 094c659d2800..d247a2e99496 100644
--- a/drivers/accel/ivpu/ivpu_hw.h
+++ b/drivers/accel/ivpu/ivpu_hw.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (C) 2020-2023 Intel Corporation
+ * Copyright (C) 2020-2024 Intel Corporation
  */
 
 #ifndef __IVPU_HW_H__
@@ -59,6 +59,7 @@ struct ivpu_hw_info {
 		u32 profiling_freq;
 	} pll;
 	u32 tile_fuse;
+	u32 sched_mode;
 	u32 sku;
 	u16 config;
 	int dma_bits;
diff --git a/drivers/accel/ivpu/ivpu_hw_37xx.c b/drivers/accel/ivpu/ivpu_hw_37xx.c
index bd25e2d9fb0f..ce664b6515aa 100644
--- a/drivers/accel/ivpu/ivpu_hw_37xx.c
+++ b/drivers/accel/ivpu/ivpu_hw_37xx.c
@@ -589,6 +589,7 @@ static int ivpu_hw_37xx_info_init(struct ivpu_device *vdev)
 	hw->tile_fuse = TILE_FUSE_ENABLE_BOTH;
 	hw->sku = TILE_SKU_BOTH;
 	hw->config = WP_CONFIG_2_TILE_4_3_RATIO;
+	hw->sched_mode = ivpu_sched_mode;
 
 	ivpu_pll_init_frequency_ratios(vdev);
 
diff --git a/drivers/accel/ivpu/ivpu_hw_40xx.c b/drivers/accel/ivpu/ivpu_hw_40xx.c
index b0b88d4c8926..186cd87079c2 100644
--- a/drivers/accel/ivpu/ivpu_hw_40xx.c
+++ b/drivers/accel/ivpu/ivpu_hw_40xx.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (C) 2020-2023 Intel Corporation
+ * Copyright (C) 2020-2024 Intel Corporation
  */
 
 #include "ivpu_drv.h"
@@ -724,6 +724,7 @@ static int ivpu_hw_40xx_info_init(struct ivpu_device *vdev)
 	else
 		ivpu_dbg(vdev, MISC, "Fuse: All %d tiles enabled\n", TILE_MAX_NUM);
 
+	hw->sched_mode = ivpu_sched_mode;
 	hw->tile_fuse = tile_disable;
 	hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT;
 
-- 
2.43.2



* [PATCH v2 03/12] accel/ivpu: Create priority based command queues
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 01/12] accel/ivpu: Update VPU FW API headers Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 02/12] accel/ivpu: Add sched_mode module param Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 04/12] accel/ivpu: Implement support for preemption buffers Jacek Lawrynowicz
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Wachowski, Karol, Jacek Lawrynowicz

From: "Wachowski, Karol" <karol.wachowski@intel.com>

Create multiple command queues per engine, each with a different
priority. The command queues are created on demand and support 4
priority levels. These priorities will later be used by the HWS
(hardware scheduler).

Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
Reviewed-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
---
 drivers/accel/ivpu/ivpu_drv.h |  8 +++--
 drivers/accel/ivpu/ivpu_job.c | 61 +++++++++++++++++++++++------------
 2 files changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index 71b87455e22b..aafc5c3e9041 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -39,7 +39,11 @@
 #define IVPU_MIN_DB 1
 #define IVPU_MAX_DB 255
 
-#define IVPU_NUM_ENGINES 2
+#define IVPU_NUM_ENGINES       2
+#define IVPU_NUM_PRIORITIES    4
+#define IVPU_NUM_CMDQS_PER_CTX (IVPU_NUM_ENGINES * IVPU_NUM_PRIORITIES)
+
+#define IVPU_CMDQ_INDEX(engine, priority) ((engine) * IVPU_NUM_PRIORITIES + (priority))
 
 #define IVPU_PLATFORM_SILICON 0
 #define IVPU_PLATFORM_SIMICS  2
@@ -149,7 +153,7 @@ struct ivpu_file_priv {
 	struct kref ref;
 	struct ivpu_device *vdev;
 	struct mutex lock; /* Protects cmdq */
-	struct ivpu_cmdq *cmdq[IVPU_NUM_ENGINES];
+	struct ivpu_cmdq *cmdq[IVPU_NUM_CMDQS_PER_CTX];
 	struct ivpu_mmu_context ctx;
 	bool has_mmu_faults;
 	bool bound;
diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
index a49bc9105ed0..b56035de1a59 100644
--- a/drivers/accel/ivpu/ivpu_job.c
+++ b/drivers/accel/ivpu/ivpu_job.c
@@ -79,10 +79,12 @@ static void ivpu_cmdq_free(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *c
 	kfree(cmdq);
 }
 
-static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u16 engine)
+static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u16 engine,
+					   u8 priority)
 {
+	int cmdq_idx = IVPU_CMDQ_INDEX(engine, priority);
+	struct ivpu_cmdq *cmdq = file_priv->cmdq[cmdq_idx];
 	struct ivpu_device *vdev = file_priv->vdev;
-	struct ivpu_cmdq *cmdq = file_priv->cmdq[engine];
 	int ret;
 
 	lockdep_assert_held(&file_priv->lock);
@@ -91,7 +93,7 @@ static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u16
 		cmdq = ivpu_cmdq_alloc(file_priv, engine);
 		if (!cmdq)
 			return NULL;
-		file_priv->cmdq[engine] = cmdq;
+		file_priv->cmdq[cmdq_idx] = cmdq;
 	}
 
 	if (cmdq->db_registered)
@@ -107,14 +109,15 @@ static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u16
 	return cmdq;
 }
 
-static void ivpu_cmdq_release_locked(struct ivpu_file_priv *file_priv, u16 engine)
+static void ivpu_cmdq_release_locked(struct ivpu_file_priv *file_priv, u16 engine, u8 priority)
 {
-	struct ivpu_cmdq *cmdq = file_priv->cmdq[engine];
+	int cmdq_idx = IVPU_CMDQ_INDEX(engine, priority);
+	struct ivpu_cmdq *cmdq = file_priv->cmdq[cmdq_idx];
 
 	lockdep_assert_held(&file_priv->lock);
 
 	if (cmdq) {
-		file_priv->cmdq[engine] = NULL;
+		file_priv->cmdq[cmdq_idx] = NULL;
 		if (cmdq->db_registered)
 			ivpu_jsm_unregister_db(file_priv->vdev, cmdq->db_id);
 
@@ -124,12 +127,14 @@ static void ivpu_cmdq_release_locked(struct ivpu_file_priv *file_priv, u16 engin
 
 void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv)
 {
-	int i;
+	u16 engine;
+	u8 priority;
 
 	lockdep_assert_held(&file_priv->lock);
 
-	for (i = 0; i < IVPU_NUM_ENGINES; i++)
-		ivpu_cmdq_release_locked(file_priv, i);
+	for (engine = 0; engine < IVPU_NUM_ENGINES; engine++)
+		for (priority = 0; priority < IVPU_NUM_PRIORITIES; priority++)
+			ivpu_cmdq_release_locked(file_priv, engine, priority);
 }
 
 /*
@@ -138,9 +143,10 @@ void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv)
  * and FW loses job queue state. The next time job queue is used it
  * will be registered again.
  */
-static void ivpu_cmdq_reset_locked(struct ivpu_file_priv *file_priv, u16 engine)
+static void ivpu_cmdq_reset_locked(struct ivpu_file_priv *file_priv, u16 engine, u8 priority)
 {
-	struct ivpu_cmdq *cmdq = file_priv->cmdq[engine];
+	int cmdq_idx = IVPU_CMDQ_INDEX(engine, priority);
+	struct ivpu_cmdq *cmdq = file_priv->cmdq[cmdq_idx];
 
 	lockdep_assert_held(&file_priv->lock);
 
@@ -154,12 +160,14 @@ static void ivpu_cmdq_reset_locked(struct ivpu_file_priv *file_priv, u16 engine)
 
 static void ivpu_cmdq_reset_all(struct ivpu_file_priv *file_priv)
 {
-	int i;
+	u16 engine;
+	u8 priority;
 
 	mutex_lock(&file_priv->lock);
 
-	for (i = 0; i < IVPU_NUM_ENGINES; i++)
-		ivpu_cmdq_reset_locked(file_priv, i);
+	for (engine = 0; engine < IVPU_NUM_ENGINES; engine++)
+		for (priority = 0; priority < IVPU_NUM_PRIORITIES; priority++)
+			ivpu_cmdq_reset_locked(file_priv, engine, priority);
 
 	mutex_unlock(&file_priv->lock);
 }
@@ -328,7 +336,7 @@ void ivpu_jobs_abort_all(struct ivpu_device *vdev)
 		ivpu_job_signal_and_destroy(vdev, id, DRM_IVPU_JOB_STATUS_ABORTED);
 }
 
-static int ivpu_job_submit(struct ivpu_job *job)
+static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
 {
 	struct ivpu_file_priv *file_priv = job->file_priv;
 	struct ivpu_device *vdev = job->vdev;
@@ -342,10 +350,10 @@ static int ivpu_job_submit(struct ivpu_job *job)
 
 	mutex_lock(&file_priv->lock);
 
-	cmdq = ivpu_cmdq_acquire(job->file_priv, job->engine_idx);
+	cmdq = ivpu_cmdq_acquire(job->file_priv, job->engine_idx, priority);
 	if (!cmdq) {
-		ivpu_warn_ratelimited(vdev, "Failed get job queue, ctx %d engine %d\n",
-				      file_priv->ctx.id, job->engine_idx);
+		ivpu_warn_ratelimited(vdev, "Failed to get job queue, ctx %d engine %d prio %d\n",
+				      file_priv->ctx.id, job->engine_idx, priority);
 		ret = -EINVAL;
 		goto err_unlock_file_priv;
 	}
@@ -375,8 +383,8 @@ static int ivpu_job_submit(struct ivpu_job *job)
 		ivpu_cmdq_ring_db(vdev, cmdq);
 	}
 
-	ivpu_dbg(vdev, JOB, "Job submitted: id %3u ctx %2d engine %d addr 0x%llx next %d\n",
-		 job->job_id, file_priv->ctx.id, job->engine_idx,
+	ivpu_dbg(vdev, JOB, "Job submitted: id %3u ctx %2d engine %d prio %d addr 0x%llx next %d\n",
+		 job->job_id, file_priv->ctx.id, job->engine_idx, priority,
 		 job->cmd_buf_vpu_addr, cmdq->jobq->header.tail);
 
 	xa_unlock(&vdev->submitted_jobs_xa);
@@ -464,6 +472,14 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32
 	return ret;
 }
 
+static inline u8 ivpu_job_to_hws_priority(struct ivpu_file_priv *file_priv, u8 priority)
+{
+	if (priority == DRM_IVPU_JOB_PRIORITY_DEFAULT)
+		return DRM_IVPU_JOB_PRIORITY_NORMAL;
+
+	return priority - 1;
+}
+
 int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 {
 	struct ivpu_file_priv *file_priv = file->driver_priv;
@@ -472,6 +488,7 @@ int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	struct ivpu_job *job;
 	u32 *buf_handles;
 	int idx, ret;
+	u8 priority;
 
 	if (params->engine > DRM_IVPU_ENGINE_COPY)
 		return -EINVAL;
@@ -525,8 +542,10 @@ int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto err_destroy_job;
 	}
 
+	priority = ivpu_job_to_hws_priority(file_priv, params->priority);
+
 	down_read(&vdev->pm->reset_lock);
-	ret = ivpu_job_submit(job);
+	ret = ivpu_job_submit(job, priority);
 	up_read(&vdev->pm->reset_lock);
 	if (ret)
 		goto err_signal_fence;
-- 
2.43.2



* [PATCH v2 04/12] accel/ivpu: Implement support for preemption buffers
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (2 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 03/12] accel/ivpu: Create priority based command queues Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 05/12] accel/ivpu: Add HWS JSM messages Jacek Lawrynowicz
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Wachowski, Karol, Jacek Lawrynowicz

From: "Wachowski, Karol" <karol.wachowski@intel.com>

Allocate per-context preemption buffers that are required by HWS.

There are two preemption buffers:
  * primary - allocated in user memory range (PIOVA accessible)
  * secondary - allocated in shave memory range

Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
Reviewed-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
---
 drivers/accel/ivpu/ivpu_drv.h |  1 +
 drivers/accel/ivpu/ivpu_fw.c  |  3 ++
 drivers/accel/ivpu/ivpu_fw.h  |  2 ++
 drivers/accel/ivpu/ivpu_job.c | 65 +++++++++++++++++++++++++++++++++++
 drivers/accel/ivpu/ivpu_job.h |  2 ++
 5 files changed, 73 insertions(+)

diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index aafc5c3e9041..f500b2d92452 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -170,6 +170,7 @@ extern bool ivpu_disable_mmu_cont_pages;
 #define IVPU_TEST_MODE_NULL_SUBMISSION    BIT(2)
 #define IVPU_TEST_MODE_D0I3_MSG_DISABLE   BIT(4)
 #define IVPU_TEST_MODE_D0I3_MSG_ENABLE    BIT(5)
+#define IVPU_TEST_MODE_PREEMPTION_DISABLE BIT(6)
 extern int ivpu_test_mode;
 
 struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv);
diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
index 1457300828bf..29ecf7db238b 100644
--- a/drivers/accel/ivpu/ivpu_fw.c
+++ b/drivers/accel/ivpu/ivpu_fw.c
@@ -200,6 +200,9 @@ static int ivpu_fw_parse(struct ivpu_device *vdev)
 
 	fw->dvfs_mode = 0;
 
+	fw->primary_preempt_buf_size = fw_hdr->preemption_buffer_1_size;
+	fw->secondary_preempt_buf_size = fw_hdr->preemption_buffer_2_size;
+
 	ivpu_dbg(vdev, FW_BOOT, "Size: file %lu image %u runtime %u shavenn %u\n",
 		 fw->file->size, fw->image_size, fw->runtime_size, fw->shave_nn_size);
 	ivpu_dbg(vdev, FW_BOOT, "Address: runtime 0x%llx, load 0x%llx, entry point 0x%llx\n",
diff --git a/drivers/accel/ivpu/ivpu_fw.h b/drivers/accel/ivpu/ivpu_fw.h
index 66b60fa161b5..66fc7da3ab0f 100644
--- a/drivers/accel/ivpu/ivpu_fw.h
+++ b/drivers/accel/ivpu/ivpu_fw.h
@@ -28,6 +28,8 @@ struct ivpu_fw_info {
 	u32 trace_destination_mask;
 	u64 trace_hw_component_mask;
 	u32 dvfs_mode;
+	u32 primary_preempt_buf_size;
+	u32 secondary_preempt_buf_size;
 };
 
 int ivpu_fw_init(struct ivpu_device *vdev);
diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
index b56035de1a59..3ef9d8022c9c 100644
--- a/drivers/accel/ivpu/ivpu_job.c
+++ b/drivers/accel/ivpu/ivpu_job.c
@@ -12,11 +12,13 @@
 #include <uapi/drm/ivpu_accel.h>
 
 #include "ivpu_drv.h"
+#include "ivpu_fw.h"
 #include "ivpu_hw.h"
 #include "ivpu_ipc.h"
 #include "ivpu_job.h"
 #include "ivpu_jsm_msg.h"
 #include "ivpu_pm.h"
+#include "vpu_boot_api.h"
 
 #define CMD_BUF_IDX	     0
 #define JOB_ID_JOB_MASK	     GENMASK(7, 0)
@@ -28,6 +30,53 @@ static void ivpu_cmdq_ring_db(struct ivpu_device *vdev, struct ivpu_cmdq *cmdq)
 	ivpu_hw_reg_db_set(vdev, cmdq->db_id);
 }
 
+static int ivpu_preemption_buffers_create(struct ivpu_device *vdev,
+					  struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
+{
+	u64 primary_size = ALIGN(vdev->fw->primary_preempt_buf_size, PAGE_SIZE);
+	u64 secondary_size = ALIGN(vdev->fw->secondary_preempt_buf_size, PAGE_SIZE);
+	struct ivpu_addr_range range;
+
+	if (vdev->hw->sched_mode != VPU_SCHEDULING_MODE_HW)
+		return 0;
+
+	range.start = vdev->hw->ranges.user.end - (primary_size * IVPU_NUM_CMDQS_PER_CTX);
+	range.end = vdev->hw->ranges.user.end;
+	cmdq->primary_preempt_buf = ivpu_bo_create(vdev, &file_priv->ctx, &range, primary_size,
+						   DRM_IVPU_BO_WC);
+	if (!cmdq->primary_preempt_buf) {
+		ivpu_err(vdev, "Failed to create primary preemption buffer\n");
+		return -ENOMEM;
+	}
+
+	range.start = vdev->hw->ranges.shave.end - (secondary_size * IVPU_NUM_CMDQS_PER_CTX);
+	range.end = vdev->hw->ranges.shave.end;
+	cmdq->secondary_preempt_buf = ivpu_bo_create(vdev, &file_priv->ctx, &range, secondary_size,
+						     DRM_IVPU_BO_WC);
+	if (!cmdq->secondary_preempt_buf) {
+		ivpu_err(vdev, "Failed to create secondary preemption buffer\n");
+		goto err_free_primary;
+	}
+
+	return 0;
+
+err_free_primary:
+	ivpu_bo_free(cmdq->primary_preempt_buf);
+	return -ENOMEM;
+}
+
+static void ivpu_preemption_buffers_free(struct ivpu_device *vdev,
+					 struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
+{
+	if (vdev->hw->sched_mode != VPU_SCHEDULING_MODE_HW)
+		return;
+
+	drm_WARN_ON(&vdev->drm, !cmdq->primary_preempt_buf);
+	drm_WARN_ON(&vdev->drm, !cmdq->secondary_preempt_buf);
+	ivpu_bo_free(cmdq->primary_preempt_buf);
+	ivpu_bo_free(cmdq->secondary_preempt_buf);
+}
+
 static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv, u16 engine)
 {
 	struct xa_limit db_xa_limit = {.max = IVPU_MAX_DB, .min = IVPU_MIN_DB};
@@ -50,6 +99,10 @@ static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv, u16 e
 	if (!cmdq->mem)
 		goto err_erase_xa;
 
+	ret = ivpu_preemption_buffers_create(vdev, file_priv, cmdq);
+	if (ret)
+		goto err_free_cmdq_mem;
+
 	cmdq->entry_count = (u32)((ivpu_bo_size(cmdq->mem) - sizeof(struct vpu_job_queue_header)) /
 				  sizeof(struct vpu_job_queue_entry));
 
@@ -62,6 +115,8 @@ static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv, u16 e
 
 	return cmdq;
 
+err_free_cmdq_mem:
+	ivpu_bo_free(cmdq->mem);
 err_erase_xa:
 	xa_erase(&vdev->db_xa, cmdq->db_id);
 err_free_cmdq:
@@ -74,6 +129,7 @@ static void ivpu_cmdq_free(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *c
 	if (!cmdq)
 		return;
 
+	ivpu_preemption_buffers_free(file_priv->vdev, file_priv, cmdq);
 	ivpu_bo_free(cmdq->mem);
 	xa_erase(&file_priv->vdev->db_xa, cmdq->db_id);
 	kfree(cmdq);
@@ -207,6 +263,15 @@ static int ivpu_cmdq_push_job(struct ivpu_cmdq *cmdq, struct ivpu_job *job)
 	entry->flags = 0;
 	if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_SUBMISSION))
 		entry->flags = VPU_JOB_FLAGS_NULL_SUBMISSION_MASK;
+
+	if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW &&
+	    (unlikely(!(ivpu_test_mode & IVPU_TEST_MODE_PREEMPTION_DISABLE)))) {
+		entry->primary_preempt_buf_addr = cmdq->primary_preempt_buf->vpu_addr;
+		entry->primary_preempt_buf_size = ivpu_bo_size(cmdq->primary_preempt_buf);
+		entry->secondary_preempt_buf_addr = cmdq->secondary_preempt_buf->vpu_addr;
+		entry->secondary_preempt_buf_size = ivpu_bo_size(cmdq->secondary_preempt_buf);
+	}
+
 	wmb(); /* Ensure that tail is updated after filling entry */
 	header->tail = next_entry;
 	wmb(); /* Flush WC buffer for jobq header */
diff --git a/drivers/accel/ivpu/ivpu_job.h b/drivers/accel/ivpu/ivpu_job.h
index ca4984071cc7..e50002b5788c 100644
--- a/drivers/accel/ivpu/ivpu_job.h
+++ b/drivers/accel/ivpu/ivpu_job.h
@@ -24,6 +24,8 @@ struct ivpu_file_priv;
  */
 struct ivpu_cmdq {
 	struct vpu_job_queue *jobq;
+	struct ivpu_bo *primary_preempt_buf;
+	struct ivpu_bo *secondary_preempt_buf;
 	struct ivpu_bo *mem;
 	u32 entry_count;
 	u32 db_id;
-- 
2.43.2



* [PATCH v2 05/12] accel/ivpu: Add HWS JSM messages
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (3 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 04/12] accel/ivpu: Implement support for preemption buffers Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 06/12] accel/ivpu: Implement support for hardware scheduler Jacek Lawrynowicz
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Wachowski, Karol, Jacek Lawrynowicz

From: "Wachowski, Karol" <karol.wachowski@intel.com>

Add JSM messages that will be used to implement the hardware scheduler.
Most of these messages are used to create and manage HWS-specific
command queues.

Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
Reviewed-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
---
 drivers/accel/ivpu/ivpu_drv.h     |   1 +
 drivers/accel/ivpu/ivpu_jsm_msg.c | 161 +++++++++++++++++++++++++++++-
 drivers/accel/ivpu/ivpu_jsm_msg.h |  14 ++-
 3 files changed, 174 insertions(+), 2 deletions(-)

diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index f500b2d92452..9e9d85ad78ea 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -171,6 +171,7 @@ extern bool ivpu_disable_mmu_cont_pages;
 #define IVPU_TEST_MODE_D0I3_MSG_DISABLE   BIT(4)
 #define IVPU_TEST_MODE_D0I3_MSG_ENABLE    BIT(5)
 #define IVPU_TEST_MODE_PREEMPTION_DISABLE BIT(6)
+#define IVPU_TEST_MODE_HWS_EXTRA_EVENTS	  BIT(7)
 extern int ivpu_test_mode;
 
 struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv);
diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
index 8cea0dd731b9..4b260065ad72 100644
--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
+++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (C) 2020-2023 Intel Corporation
+ * Copyright (C) 2020-2024 Intel Corporation
  */
 
 #include "ivpu_drv.h"
@@ -281,3 +281,162 @@ int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev)
 
 	return ivpu_hw_wait_for_idle(vdev);
 }
+
+int ivpu_jsm_hws_create_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_group, u32 cmdq_id,
+			     u32 pid, u32 engine, u64 cmdq_base, u32 cmdq_size)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_CREATE_CMD_QUEUE };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.hws_create_cmdq.host_ssid = ctx_id;
+	req.payload.hws_create_cmdq.process_id = pid;
+	req.payload.hws_create_cmdq.engine_idx = engine;
+	req.payload.hws_create_cmdq.cmdq_group = cmdq_group;
+	req.payload.hws_create_cmdq.cmdq_id = cmdq_id;
+	req.payload.hws_create_cmdq.cmdq_base = cmdq_base;
+	req.payload.hws_create_cmdq.cmdq_size = cmdq_size;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_CREATE_CMD_QUEUE_RSP, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_warn_ratelimited(vdev, "Failed to create command queue: %d\n", ret);
+
+	return ret;
+}
+
+int ivpu_jsm_hws_destroy_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_DESTROY_CMD_QUEUE };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.hws_destroy_cmdq.host_ssid = ctx_id;
+	req.payload.hws_destroy_cmdq.cmdq_id = cmdq_id;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_DESTROY_CMD_QUEUE_RSP, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_warn_ratelimited(vdev, "Failed to destroy command queue: %d\n", ret);
+
+	return ret;
+}
+
+int ivpu_jsm_hws_register_db(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id, u32 db_id,
+			     u64 cmdq_base, u32 cmdq_size)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_HWS_REGISTER_DB };
+	struct vpu_jsm_msg resp;
+	int ret = 0;
+
+	req.payload.hws_register_db.db_id = db_id;
+	req.payload.hws_register_db.host_ssid = ctx_id;
+	req.payload.hws_register_db.cmdq_id = cmdq_id;
+	req.payload.hws_register_db.cmdq_base = cmdq_base;
+	req.payload.hws_register_db.cmdq_size = cmdq_size;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_REGISTER_DB_DONE, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_err_ratelimited(vdev, "Failed to register doorbell %u: %d\n", db_id, ret);
+
+	return ret;
+}
+
+int ivpu_jsm_hws_resume_engine(struct ivpu_device *vdev, u32 engine)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_HWS_ENGINE_RESUME };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	if (engine >= VPU_ENGINE_NB)
+		return -EINVAL;
+
+	req.payload.hws_resume_engine.engine_idx = engine;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_HWS_RESUME_ENGINE_DONE, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_err_ratelimited(vdev, "Failed to resume engine %d: %d\n", engine, ret);
+
+	return ret;
+}
+
+int ivpu_jsm_hws_set_context_sched_properties(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id,
+					      u32 priority)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_SET_CONTEXT_SCHED_PROPERTIES };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.hws_set_context_sched_properties.host_ssid = ctx_id;
+	req.payload.hws_set_context_sched_properties.cmdq_id = cmdq_id;
+	req.payload.hws_set_context_sched_properties.priority_band = priority;
+	req.payload.hws_set_context_sched_properties.realtime_priority_level = 0;
+	req.payload.hws_set_context_sched_properties.in_process_priority = 0;
+	req.payload.hws_set_context_sched_properties.context_quantum = 20000;
+	req.payload.hws_set_context_sched_properties.grace_period_same_priority = 10000;
+	req.payload.hws_set_context_sched_properties.grace_period_lower_priority = 0;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_SET_CONTEXT_SCHED_PROPERTIES_RSP, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_warn_ratelimited(vdev, "Failed to set context sched properties: %d\n", ret);
+
+	return ret;
+}
+
+int ivpu_jsm_hws_set_scheduling_log(struct ivpu_device *vdev, u32 engine_idx, u32 host_ssid,
+				    u64 vpu_log_buffer_va)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_HWS_SET_SCHEDULING_LOG };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.hws_set_scheduling_log.engine_idx = engine_idx;
+	req.payload.hws_set_scheduling_log.host_ssid = host_ssid;
+	req.payload.hws_set_scheduling_log.vpu_log_buffer_va = vpu_log_buffer_va;
+	req.payload.hws_set_scheduling_log.notify_index = 0;
+	req.payload.hws_set_scheduling_log.enable_extra_events =
+		ivpu_test_mode & IVPU_TEST_MODE_HWS_EXTRA_EVENTS;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_HWS_SET_SCHEDULING_LOG_RSP, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_warn_ratelimited(vdev, "Failed to set scheduling log: %d\n", ret);
+
+	return ret;
+}
+
+int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	/* Idle */
+	req.payload.hws_priority_band_setup.grace_period[0] = 0;
+	req.payload.hws_priority_band_setup.process_grace_period[0] = 50000;
+	req.payload.hws_priority_band_setup.process_quantum[0] = 160000;
+	/* Normal */
+	req.payload.hws_priority_band_setup.grace_period[1] = 50000;
+	req.payload.hws_priority_band_setup.process_grace_period[1] = 50000;
+	req.payload.hws_priority_band_setup.process_quantum[1] = 300000;
+	/* Focus */
+	req.payload.hws_priority_band_setup.grace_period[2] = 50000;
+	req.payload.hws_priority_band_setup.process_grace_period[2] = 50000;
+	req.payload.hws_priority_band_setup.process_quantum[2] = 200000;
+	/* Realtime */
+	req.payload.hws_priority_band_setup.grace_period[3] = 0;
+	req.payload.hws_priority_band_setup.process_grace_period[3] = 50000;
+	req.payload.hws_priority_band_setup.process_quantum[3] = 200000;
+
+	req.payload.hws_priority_band_setup.normal_band_percentage = 10;
+
+	ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
+					   &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_warn_ratelimited(vdev, "Failed to set priority bands: %d\n", ret);
+
+	return ret;
+}
diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.h b/drivers/accel/ivpu/ivpu_jsm_msg.h
index ae75e5dbcc41..357728295fe9 100644
--- a/drivers/accel/ivpu/ivpu_jsm_msg.h
+++ b/drivers/accel/ivpu/ivpu_jsm_msg.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (C) 2020-2023 Intel Corporation
+ * Copyright (C) 2020-2024 Intel Corporation
  */
 
 #ifndef __IVPU_JSM_MSG_H__
@@ -23,4 +23,16 @@ int ivpu_jsm_trace_set_config(struct ivpu_device *vdev, u32 trace_level, u32 tra
 			      u64 trace_hw_component_mask);
 int ivpu_jsm_context_release(struct ivpu_device *vdev, u32 host_ssid);
 int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev);
+int ivpu_jsm_hws_create_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_group, u32 cmdq_id,
+			     u32 pid, u32 engine, u64 cmdq_base, u32 cmdq_size);
+int ivpu_jsm_hws_destroy_cmdq(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id);
+int ivpu_jsm_hws_register_db(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id, u32 db_id,
+			     u64 cmdq_base, u32 cmdq_size);
+int ivpu_jsm_hws_resume_engine(struct ivpu_device *vdev, u32 engine);
+int ivpu_jsm_hws_set_context_sched_properties(struct ivpu_device *vdev, u32 ctx_id, u32 cmdq_id,
+					      u32 priority);
+int ivpu_jsm_hws_set_scheduling_log(struct ivpu_device *vdev, u32 engine_idx, u32 host_ssid,
+				    u64 vpu_log_buffer_va);
+int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev);
+
 #endif
-- 
2.43.2


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 06/12] accel/ivpu: Implement support for hardware scheduler
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (4 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 05/12] accel/ivpu: Add HWS JSM messages Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 07/12] accel/ivpu: Add resume engine support Jacek Lawrynowicz
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Wachowski, Karol, Jacek Lawrynowicz

From: "Wachowski, Karol" <karol.wachowski@intel.com>

Add support for HWS (hardware scheduler). It is disabled by default.
The sched_mode module param can be used to enable it.

Each context has multiple command queues with different priorities, and
HWS enables priority-based execution on the HW/FW side.

In HWS mode, the driver has to send a couple of additional messages to
initialize HWS and describe the command queue priorities.

Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/accel/ivpu/ivpu_drv.c |  20 ++++-
 drivers/accel/ivpu/ivpu_fw.c  |   7 ++
 drivers/accel/ivpu/ivpu_job.c | 162 ++++++++++++++++++++++++----------
 3 files changed, 142 insertions(+), 47 deletions(-)

diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
index 8d80052182f0..ca4fcef7edf5 100644
--- a/drivers/accel/ivpu/ivpu_drv.c
+++ b/drivers/accel/ivpu/ivpu_drv.c
@@ -78,7 +78,6 @@ static void file_priv_unbind(struct ivpu_device *vdev, struct ivpu_file_priv *fi
 		ivpu_dbg(vdev, FILE, "file_priv unbind: ctx %u\n", file_priv->ctx.id);
 
 		ivpu_cmdq_release_all_locked(file_priv);
-		ivpu_jsm_context_release(vdev, file_priv->ctx.id);
 		ivpu_bo_unbind_all_bos_from_context(vdev, &file_priv->ctx);
 		ivpu_mmu_user_context_fini(vdev, &file_priv->ctx);
 		file_priv->bound = false;
@@ -327,6 +326,21 @@ static int ivpu_wait_for_ready(struct ivpu_device *vdev)
 	return ret;
 }
 
+static int ivpu_hw_sched_init(struct ivpu_device *vdev)
+{
+	int ret = 0;
+
+	if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW) {
+		ret = ivpu_jsm_hws_setup_priority_bands(vdev);
+		if (ret) {
+			ivpu_err(vdev, "Failed to enable hw scheduler: %d\n", ret);
+			return ret;
+		}
+	}
+
+	return ret;
+}
+
 /**
  * ivpu_boot() - Start VPU firmware
  * @vdev: VPU device
@@ -360,6 +374,10 @@ int ivpu_boot(struct ivpu_device *vdev)
 	enable_irq(vdev->irq);
 	ivpu_hw_irq_enable(vdev);
 	ivpu_ipc_enable(vdev);
+
+	if (ivpu_fw_is_cold_boot(vdev))
+		return ivpu_hw_sched_init(vdev);
+
 	return 0;
 }
 
diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
index 29ecf7db238b..427cd72bd34f 100644
--- a/drivers/accel/ivpu/ivpu_fw.c
+++ b/drivers/accel/ivpu/ivpu_fw.c
@@ -44,6 +44,8 @@
 #define IVPU_FW_CHECK_API_VER_LT(vdev, fw_hdr, name, major, minor) \
 	ivpu_fw_check_api_ver_lt(vdev, fw_hdr, #name, VPU_##name##_API_VER_INDEX, major, minor)
 
+#define IVPU_FOCUS_PRESENT_TIMER_MS 1000
+
 static char *ivpu_firmware;
 module_param_named_unsafe(firmware, ivpu_firmware, charp, 0644);
 MODULE_PARM_DESC(firmware, "NPU firmware binary in /lib/firmware/..");
@@ -467,6 +469,8 @@ static void ivpu_fw_boot_params_print(struct ivpu_device *vdev, struct vpu_boot_
 		 boot_params->punit_telemetry_sram_size);
 	ivpu_dbg(vdev, FW_BOOT, "boot_params.vpu_telemetry_enable = 0x%x\n",
 		 boot_params->vpu_telemetry_enable);
+	ivpu_dbg(vdev, FW_BOOT, "boot_params.vpu_scheduling_mode = 0x%x\n",
+		 boot_params->vpu_scheduling_mode);
 	ivpu_dbg(vdev, FW_BOOT, "boot_params.dvfs_mode = %u\n",
 		 boot_params->dvfs_mode);
 	ivpu_dbg(vdev, FW_BOOT, "boot_params.d0i3_delayed_entry = %d\n",
@@ -567,6 +571,9 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
 	boot_params->punit_telemetry_sram_base = ivpu_hw_reg_telemetry_offset_get(vdev);
 	boot_params->punit_telemetry_sram_size = ivpu_hw_reg_telemetry_size_get(vdev);
 	boot_params->vpu_telemetry_enable = ivpu_hw_reg_telemetry_enable_get(vdev);
+	boot_params->vpu_scheduling_mode = vdev->hw->sched_mode;
+	if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW)
+		boot_params->vpu_focus_present_timer_ms = IVPU_FOCUS_PRESENT_TIMER_MS;
 	boot_params->dvfs_mode = vdev->fw->dvfs_mode;
 	if (!IVPU_WA(disable_d0i3_msg))
 		boot_params->d0i3_delayed_entry = 1;
diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
index 3ef9d8022c9c..1d7b4388eb3b 100644
--- a/drivers/accel/ivpu/ivpu_job.c
+++ b/drivers/accel/ivpu/ivpu_job.c
@@ -77,11 +77,10 @@ static void ivpu_preemption_buffers_free(struct ivpu_device *vdev,
 	ivpu_bo_free(cmdq->secondary_preempt_buf);
 }
 
-static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv, u16 engine)
+static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv)
 {
 	struct xa_limit db_xa_limit = {.max = IVPU_MAX_DB, .min = IVPU_MIN_DB};
 	struct ivpu_device *vdev = file_priv->vdev;
-	struct vpu_job_queue_header *jobq_header;
 	struct ivpu_cmdq *cmdq;
 	int ret;
 
@@ -103,16 +102,6 @@ static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv, u16 e
 	if (ret)
 		goto err_free_cmdq_mem;
 
-	cmdq->entry_count = (u32)((ivpu_bo_size(cmdq->mem) - sizeof(struct vpu_job_queue_header)) /
-				  sizeof(struct vpu_job_queue_entry));
-
-	cmdq->jobq = (struct vpu_job_queue *)ivpu_bo_vaddr(cmdq->mem);
-	jobq_header = &cmdq->jobq->header;
-	jobq_header->engine_idx = engine;
-	jobq_header->head = 0;
-	jobq_header->tail = 0;
-	wmb(); /* Flush WC buffer for jobq->header */
-
 	return cmdq;
 
 err_free_cmdq_mem:
@@ -135,33 +124,126 @@ static void ivpu_cmdq_free(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *c
 	kfree(cmdq);
 }
 
+static int ivpu_hws_cmdq_init(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq, u16 engine,
+			      u8 priority)
+{
+	struct ivpu_device *vdev = file_priv->vdev;
+	int ret;
+
+	ret = ivpu_jsm_hws_create_cmdq(vdev, file_priv->ctx.id, file_priv->ctx.id, cmdq->db_id,
+				       task_pid_nr(current), engine,
+				       cmdq->mem->vpu_addr, ivpu_bo_size(cmdq->mem));
+	if (ret)
+		return ret;
+
+	ret = ivpu_jsm_hws_set_context_sched_properties(vdev, file_priv->ctx.id, cmdq->db_id,
+							priority);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int ivpu_register_db(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
+{
+	struct ivpu_device *vdev = file_priv->vdev;
+	int ret;
+
+	if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW)
+		ret = ivpu_jsm_hws_register_db(vdev, file_priv->ctx.id, cmdq->db_id, cmdq->db_id,
+					       cmdq->mem->vpu_addr, ivpu_bo_size(cmdq->mem));
+	else
+		ret = ivpu_jsm_register_db(vdev, file_priv->ctx.id, cmdq->db_id,
+					   cmdq->mem->vpu_addr, ivpu_bo_size(cmdq->mem));
+
+	if (!ret)
+		ivpu_dbg(vdev, JOB, "DB %d registered to ctx %d\n", cmdq->db_id, file_priv->ctx.id);
+
+	return ret;
+}
+
+static int
+ivpu_cmdq_init(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq, u16 engine, u8 priority)
+{
+	struct ivpu_device *vdev = file_priv->vdev;
+	struct vpu_job_queue_header *jobq_header;
+	int ret;
+
+	lockdep_assert_held(&file_priv->lock);
+
+	if (cmdq->db_registered)
+		return 0;
+
+	cmdq->entry_count = (u32)((ivpu_bo_size(cmdq->mem) - sizeof(struct vpu_job_queue_header)) /
+				  sizeof(struct vpu_job_queue_entry));
+
+	cmdq->jobq = (struct vpu_job_queue *)ivpu_bo_vaddr(cmdq->mem);
+	jobq_header = &cmdq->jobq->header;
+	jobq_header->engine_idx = engine;
+	jobq_header->head = 0;
+	jobq_header->tail = 0;
+	wmb(); /* Flush WC buffer for jobq->header */
+
+	if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW) {
+		ret = ivpu_hws_cmdq_init(file_priv, cmdq, engine, priority);
+		if (ret)
+			return ret;
+	}
+
+	ret = ivpu_register_db(file_priv, cmdq);
+	if (ret)
+		return ret;
+
+	cmdq->db_registered = true;
+
+	return 0;
+}
+
+static int ivpu_cmdq_fini(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cmdq)
+{
+	struct ivpu_device *vdev = file_priv->vdev;
+	int ret;
+
+	lockdep_assert_held(&file_priv->lock);
+
+	if (!cmdq->db_registered)
+		return 0;
+
+	cmdq->db_registered = false;
+
+	if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW) {
+		ret = ivpu_jsm_hws_destroy_cmdq(vdev, file_priv->ctx.id, cmdq->db_id);
+		if (!ret)
+			ivpu_dbg(vdev, JOB, "Command queue %d destroyed\n", cmdq->db_id);
+	}
+
+	ret = ivpu_jsm_unregister_db(vdev, cmdq->db_id);
+	if (!ret)
+		ivpu_dbg(vdev, JOB, "DB %d unregistered\n", cmdq->db_id);
+
+	return 0;
+}
+
 static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u16 engine,
 					   u8 priority)
 {
 	int cmdq_idx = IVPU_CMDQ_INDEX(engine, priority);
 	struct ivpu_cmdq *cmdq = file_priv->cmdq[cmdq_idx];
-	struct ivpu_device *vdev = file_priv->vdev;
 	int ret;
 
 	lockdep_assert_held(&file_priv->lock);
 
 	if (!cmdq) {
-		cmdq = ivpu_cmdq_alloc(file_priv, engine);
+		cmdq = ivpu_cmdq_alloc(file_priv);
 		if (!cmdq)
 			return NULL;
 		file_priv->cmdq[cmdq_idx] = cmdq;
 	}
 
-	if (cmdq->db_registered)
-		return cmdq;
-
-	ret = ivpu_jsm_register_db(vdev, file_priv->ctx.id, cmdq->db_id,
-				   cmdq->mem->vpu_addr, ivpu_bo_size(cmdq->mem));
+	ret = ivpu_cmdq_init(file_priv, cmdq, engine, priority);
 	if (ret)
 		return NULL;
 
-	cmdq->db_registered = true;
-
 	return cmdq;
 }
 
@@ -174,9 +256,7 @@ static void ivpu_cmdq_release_locked(struct ivpu_file_priv *file_priv, u16 engin
 
 	if (cmdq) {
 		file_priv->cmdq[cmdq_idx] = NULL;
-		if (cmdq->db_registered)
-			ivpu_jsm_unregister_db(file_priv->vdev, cmdq->db_id);
-
+		ivpu_cmdq_fini(file_priv, cmdq);
 		ivpu_cmdq_free(file_priv, cmdq);
 	}
 }
@@ -194,36 +274,27 @@ void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv)
 }
 
 /*
- * Mark the doorbell as unregistered and reset job queue pointers.
+ * Mark the doorbell as unregistered
  * This function needs to be called when the VPU hardware is restarted
  * and FW loses job queue state. The next time job queue is used it
  * will be registered again.
  */
-static void ivpu_cmdq_reset_locked(struct ivpu_file_priv *file_priv, u16 engine, u8 priority)
-{
-	int cmdq_idx = IVPU_CMDQ_INDEX(engine, priority);
-	struct ivpu_cmdq *cmdq = file_priv->cmdq[cmdq_idx];
-
-	lockdep_assert_held(&file_priv->lock);
-
-	if (cmdq) {
-		cmdq->db_registered = false;
-		cmdq->jobq->header.head = 0;
-		cmdq->jobq->header.tail = 0;
-		wmb(); /* Flush WC buffer for jobq header */
-	}
-}
-
-static void ivpu_cmdq_reset_all(struct ivpu_file_priv *file_priv)
+static void ivpu_cmdq_reset(struct ivpu_file_priv *file_priv)
 {
 	u16 engine;
 	u8 priority;
 
 	mutex_lock(&file_priv->lock);
 
-	for (engine = 0; engine < IVPU_NUM_ENGINES; engine++)
-		for (priority = 0; priority < IVPU_NUM_PRIORITIES; priority++)
-			ivpu_cmdq_reset_locked(file_priv, engine, priority);
+	for (engine = 0; engine < IVPU_NUM_ENGINES; engine++) {
+		for (priority = 0; priority < IVPU_NUM_PRIORITIES; priority++) {
+			int cmdq_idx = IVPU_CMDQ_INDEX(engine, priority);
+			struct ivpu_cmdq *cmdq = file_priv->cmdq[cmdq_idx];
+
+			if (cmdq)
+				cmdq->db_registered = false;
+		}
+	}
 
 	mutex_unlock(&file_priv->lock);
 }
@@ -236,10 +307,9 @@ void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev)
 	mutex_lock(&vdev->context_list_lock);
 
 	xa_for_each(&vdev->context_xa, ctx_id, file_priv)
-		ivpu_cmdq_reset_all(file_priv);
+		ivpu_cmdq_reset(file_priv);
 
 	mutex_unlock(&vdev->context_list_lock);
-
 }
 
 static int ivpu_cmdq_push_job(struct ivpu_cmdq *cmdq, struct ivpu_job *job)
-- 
2.43.2



* [PATCH v2 07/12] accel/ivpu: Add resume engine support
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (5 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 06/12] accel/ivpu: Implement support for hardware scheduler Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 08/12] accel/ivpu: Add NPU profiling support Jacek Lawrynowicz
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Wachowski, Karol, Jacek Lawrynowicz

From: "Wachowski, Karol" <karol.wachowski@intel.com>

Create a debugfs interface that triggers sending the resume engine IPC
command to the VPU. It is used to test engine resume functionality in
driver user-space tests.

Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/accel/ivpu/ivpu_debugfs.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/accel/ivpu/ivpu_debugfs.c b/drivers/accel/ivpu/ivpu_debugfs.c
index e07e447d08d1..6ff967e595cf 100644
--- a/drivers/accel/ivpu/ivpu_debugfs.c
+++ b/drivers/accel/ivpu/ivpu_debugfs.c
@@ -335,6 +335,28 @@ static const struct file_operations ivpu_reset_engine_fops = {
 	.write = ivpu_reset_engine_fn,
 };
 
+static ssize_t
+ivpu_resume_engine_fn(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
+{
+	struct ivpu_device *vdev = file->private_data;
+
+	if (!size)
+		return -EINVAL;
+
+	if (ivpu_jsm_hws_resume_engine(vdev, DRM_IVPU_ENGINE_COMPUTE))
+		return -ENODEV;
+	if (ivpu_jsm_hws_resume_engine(vdev, DRM_IVPU_ENGINE_COPY))
+		return -ENODEV;
+
+	return size;
+}
+
+static const struct file_operations ivpu_resume_engine_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.write = ivpu_resume_engine_fn,
+};
+
 void ivpu_debugfs_init(struct ivpu_device *vdev)
 {
 	struct dentry *debugfs_root = vdev->drm.debugfs_root;
@@ -358,6 +380,8 @@ void ivpu_debugfs_init(struct ivpu_device *vdev)
 
 	debugfs_create_file("reset_engine", 0200, debugfs_root, vdev,
 			    &ivpu_reset_engine_fops);
+	debugfs_create_file("resume_engine", 0200, debugfs_root, vdev,
+			    &ivpu_resume_engine_fops);
 
 	if (ivpu_hw_gen(vdev) >= IVPU_HW_40XX)
 		debugfs_create_file("fw_profiling_freq_drive", 0200,
-- 
2.43.2



* [PATCH v2 08/12] accel/ivpu: Add NPU profiling support
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (6 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 07/12] accel/ivpu: Add resume engine support Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 09/12] accel/ivpu: Add force snoop module parameter Jacek Lawrynowicz
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC (permalink / raw
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Tomasz Rusinowicz, Jacek Lawrynowicz

From: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>

Implement the time-based Metric Streamer profiling UAPI.

This is a generic mechanism that allows user-mode tools to sample
NPU metrics. The metrics are defined by the FW and are opaque to the
driver, which only transports them.

User space can check for this feature via the
DRM_IVPU_CAP_METRIC_STREAMER driver capability.

Signed-off-by: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/accel/ivpu/Makefile       |   3 +-
 drivers/accel/ivpu/ivpu_drv.c     |  14 +-
 drivers/accel/ivpu/ivpu_drv.h     |   3 +
 drivers/accel/ivpu/ivpu_jsm_msg.c |  98 ++++++++++
 drivers/accel/ivpu/ivpu_jsm_msg.h |   8 +-
 drivers/accel/ivpu/ivpu_ms.c      | 309 ++++++++++++++++++++++++++++++
 drivers/accel/ivpu/ivpu_ms.h      |  36 ++++
 drivers/accel/ivpu/ivpu_pm.c      |   4 +
 include/uapi/drm/ivpu_accel.h     |  69 ++++++-
 9 files changed, 540 insertions(+), 4 deletions(-)
 create mode 100644 drivers/accel/ivpu/ivpu_ms.c
 create mode 100644 drivers/accel/ivpu/ivpu_ms.h

diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile
index 95ff7ad16338..1c67a73cfefe 100644
--- a/drivers/accel/ivpu/Makefile
+++ b/drivers/accel/ivpu/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
-# Copyright (C) 2023 Intel Corporation
+# Copyright (C) 2023-2024 Intel Corporation
 
 intel_vpu-y := \
 	ivpu_drv.o \
@@ -13,6 +13,7 @@ intel_vpu-y := \
 	ivpu_jsm_msg.o \
 	ivpu_mmu.o \
 	ivpu_mmu_context.o \
+	ivpu_ms.o \
 	ivpu_pm.o
 
 intel_vpu-$(CONFIG_DEBUG_FS) += ivpu_debugfs.o
diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
index ca4fcef7edf5..a02a1929f5a1 100644
--- a/drivers/accel/ivpu/ivpu_drv.c
+++ b/drivers/accel/ivpu/ivpu_drv.c
@@ -26,6 +26,7 @@
 #include "ivpu_jsm_msg.h"
 #include "ivpu_mmu.h"
 #include "ivpu_mmu_context.h"
+#include "ivpu_ms.h"
 #include "ivpu_pm.h"
 
 #ifndef DRIVER_VERSION_STR
@@ -100,6 +101,7 @@ static void file_priv_release(struct kref *ref)
 	mutex_unlock(&vdev->context_list_lock);
 	pm_runtime_put_autosuspend(vdev->drm.dev);
 
+	mutex_destroy(&file_priv->ms_lock);
 	mutex_destroy(&file_priv->lock);
 	kfree(file_priv);
 }
@@ -122,7 +124,7 @@ static int ivpu_get_capabilities(struct ivpu_device *vdev, struct drm_ivpu_param
 {
 	switch (args->index) {
 	case DRM_IVPU_CAP_METRIC_STREAMER:
-		args->value = 0;
+		args->value = 1;
 		break;
 	case DRM_IVPU_CAP_DMA_MEMORY_RANGE:
 		args->value = 1;
@@ -231,10 +233,13 @@ static int ivpu_open(struct drm_device *dev, struct drm_file *file)
 		goto err_dev_exit;
 	}
 
+	INIT_LIST_HEAD(&file_priv->ms_instance_list);
+
 	file_priv->vdev = vdev;
 	file_priv->bound = true;
 	kref_init(&file_priv->ref);
 	mutex_init(&file_priv->lock);
+	mutex_init(&file_priv->ms_lock);
 
 	mutex_lock(&vdev->context_list_lock);
 
@@ -263,6 +268,7 @@ static int ivpu_open(struct drm_device *dev, struct drm_file *file)
 	xa_erase_irq(&vdev->context_xa, ctx_id);
 err_unlock:
 	mutex_unlock(&vdev->context_list_lock);
+	mutex_destroy(&file_priv->ms_lock);
 	mutex_destroy(&file_priv->lock);
 	kfree(file_priv);
 err_dev_exit:
@@ -278,6 +284,7 @@ static void ivpu_postclose(struct drm_device *dev, struct drm_file *file)
 	ivpu_dbg(vdev, FILE, "file_priv close: ctx %u process %s pid %d\n",
 		 file_priv->ctx.id, current->comm, task_pid_nr(current));
 
+	ivpu_ms_cleanup(file_priv);
 	ivpu_file_priv_put(&file_priv);
 }
 
@@ -288,6 +295,10 @@ static const struct drm_ioctl_desc ivpu_drm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(IVPU_BO_INFO, ivpu_bo_info_ioctl, 0),
 	DRM_IOCTL_DEF_DRV(IVPU_SUBMIT, ivpu_submit_ioctl, 0),
 	DRM_IOCTL_DEF_DRV(IVPU_BO_WAIT, ivpu_bo_wait_ioctl, 0),
+	DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_START, ivpu_ms_start_ioctl, 0),
+	DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_GET_DATA, ivpu_ms_get_data_ioctl, 0),
+	DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_STOP, ivpu_ms_stop_ioctl, 0),
+	DRM_IOCTL_DEF_DRV(IVPU_METRIC_STREAMER_GET_INFO, ivpu_ms_get_info_ioctl, 0),
 };
 
 static int ivpu_wait_for_ready(struct ivpu_device *vdev)
@@ -638,6 +649,7 @@ static void ivpu_dev_fini(struct ivpu_device *vdev)
 	ivpu_prepare_for_reset(vdev);
 	ivpu_shutdown(vdev);
 
+	ivpu_ms_cleanup_all(vdev);
 	ivpu_jobs_abort_all(vdev);
 	ivpu_job_done_consumer_fini(vdev);
 	ivpu_pm_cancel_recovery(vdev);
diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index 9e9d85ad78ea..55341762b9d9 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -155,6 +155,9 @@ struct ivpu_file_priv {
 	struct mutex lock; /* Protects cmdq */
 	struct ivpu_cmdq *cmdq[IVPU_NUM_CMDQS_PER_CTX];
 	struct ivpu_mmu_context ctx;
+	struct mutex ms_lock; /* Protects ms_instance_list, ms_info_bo */
+	struct list_head ms_instance_list;
+	struct ivpu_bo *ms_info_bo;
 	bool has_mmu_faults;
 	bool bound;
 };
diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
index 4b260065ad72..e8dd73d947e4 100644
--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
+++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
@@ -440,3 +440,101 @@ int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
 
 	return ret;
 }
+
+int ivpu_jsm_metric_streamer_start(struct ivpu_device *vdev, u64 metric_group_mask,
+				   u64 sampling_rate, u64 buffer_addr, u64 buffer_size)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_START };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.metric_streamer_start.metric_group_mask = metric_group_mask;
+	req.payload.metric_streamer_start.sampling_rate = sampling_rate;
+	req.payload.metric_streamer_start.buffer_addr = buffer_addr;
+	req.payload.metric_streamer_start.buffer_size = buffer_size;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_START_DONE, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret) {
+		ivpu_warn_ratelimited(vdev, "Failed to start metric streamer: ret %d\n", ret);
+		return ret;
+	}
+
+	return ret;
+}
+
+int ivpu_jsm_metric_streamer_stop(struct ivpu_device *vdev, u64 metric_group_mask)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_STOP };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.metric_streamer_stop.metric_group_mask = metric_group_mask;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_STOP_DONE, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret)
+		ivpu_warn_ratelimited(vdev, "Failed to stop metric streamer: ret %d\n", ret);
+
+	return ret;
+}
+
+int ivpu_jsm_metric_streamer_update(struct ivpu_device *vdev, u64 metric_group_mask,
+				    u64 buffer_addr, u64 buffer_size, u64 *bytes_written)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_UPDATE };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.metric_streamer_update.metric_group_mask = metric_group_mask;
+	req.payload.metric_streamer_update.buffer_addr = buffer_addr;
+	req.payload.metric_streamer_update.buffer_size = buffer_size;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_UPDATE_DONE, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret) {
+		ivpu_warn_ratelimited(vdev, "Failed to update metric streamer: ret %d\n", ret);
+		return ret;
+	}
+
+	if (buffer_size && resp.payload.metric_streamer_done.bytes_written > buffer_size) {
+		ivpu_warn_ratelimited(vdev, "MS buffer overflow: bytes_written %#llx > buffer_size %#llx\n",
+				      resp.payload.metric_streamer_done.bytes_written, buffer_size);
+		return -EOVERFLOW;
+	}
+
+	*bytes_written = resp.payload.metric_streamer_done.bytes_written;
+
+	return ret;
+}
+
+int ivpu_jsm_metric_streamer_info(struct ivpu_device *vdev, u64 metric_group_mask, u64 buffer_addr,
+				  u64 buffer_size, u32 *sample_size, u64 *info_size)
+{
+	struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_METRIC_STREAMER_INFO };
+	struct vpu_jsm_msg resp;
+	int ret;
+
+	req.payload.metric_streamer_start.metric_group_mask = metric_group_mask;
+	req.payload.metric_streamer_start.buffer_addr = buffer_addr;
+	req.payload.metric_streamer_start.buffer_size = buffer_size;
+
+	ret = ivpu_ipc_send_receive(vdev, &req, VPU_JSM_MSG_METRIC_STREAMER_INFO_DONE, &resp,
+				    VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+	if (ret) {
+		ivpu_warn_ratelimited(vdev, "Failed to get metric streamer info: ret %d\n", ret);
+		return ret;
+	}
+
+	if (!resp.payload.metric_streamer_done.sample_size) {
+		ivpu_warn_ratelimited(vdev, "Invalid sample size\n");
+		return -EBADMSG;
+	}
+
+	if (sample_size)
+		*sample_size = resp.payload.metric_streamer_done.sample_size;
+	if (info_size)
+		*info_size = resp.payload.metric_streamer_done.bytes_written;
+
+	return ret;
+}
diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.h b/drivers/accel/ivpu/ivpu_jsm_msg.h
index 357728295fe9..060363409fb3 100644
--- a/drivers/accel/ivpu/ivpu_jsm_msg.h
+++ b/drivers/accel/ivpu/ivpu_jsm_msg.h
@@ -34,5 +34,11 @@ int ivpu_jsm_hws_set_context_sched_properties(struct ivpu_device *vdev, u32 ctx_
 int ivpu_jsm_hws_set_scheduling_log(struct ivpu_device *vdev, u32 engine_idx, u32 host_ssid,
 				    u64 vpu_log_buffer_va);
 int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev);
-
+int ivpu_jsm_metric_streamer_start(struct ivpu_device *vdev, u64 metric_group_mask,
+				   u64 sampling_rate, u64 buffer_addr, u64 buffer_size);
+int ivpu_jsm_metric_streamer_stop(struct ivpu_device *vdev, u64 metric_group_mask);
+int ivpu_jsm_metric_streamer_update(struct ivpu_device *vdev, u64 metric_group_mask,
+				    u64 buffer_addr, u64 buffer_size, u64 *bytes_written);
+int ivpu_jsm_metric_streamer_info(struct ivpu_device *vdev, u64 metric_group_mask, u64 buffer_addr,
+				  u64 buffer_size, u32 *sample_size, u64 *info_size);
 #endif
diff --git a/drivers/accel/ivpu/ivpu_ms.c b/drivers/accel/ivpu/ivpu_ms.c
new file mode 100644
index 000000000000..2f9d37f5c208
--- /dev/null
+++ b/drivers/accel/ivpu/ivpu_ms.c
@@ -0,0 +1,309 @@
+// SPDX-License-Identifier: GPL-2.0-only OR MIT
+/*
+ * Copyright (C) 2020-2024 Intel Corporation
+ */
+
+#include <drm/drm_file.h>
+
+#include "ivpu_drv.h"
+#include "ivpu_gem.h"
+#include "ivpu_jsm_msg.h"
+#include "ivpu_ms.h"
+#include "ivpu_pm.h"
+
+#define MS_INFO_BUFFER_SIZE	  SZ_16K
+#define MS_NUM_BUFFERS		  2
+#define MS_READ_PERIOD_MULTIPLIER 2
+#define MS_MIN_SAMPLE_PERIOD_NS   1000000
+
+static struct ivpu_ms_instance *
+get_instance_by_mask(struct ivpu_file_priv *file_priv, u64 metric_mask)
+{
+	struct ivpu_ms_instance *ms;
+
+	lockdep_assert_held(&file_priv->ms_lock);
+
+	list_for_each_entry(ms, &file_priv->ms_instance_list, ms_instance_node)
+		if (ms->mask == metric_mask)
+			return ms;
+
+	return NULL;
+}
+
+int ivpu_ms_start_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct ivpu_file_priv *file_priv = file->driver_priv;
+	struct drm_ivpu_metric_streamer_start *args = data;
+	struct ivpu_device *vdev = file_priv->vdev;
+	struct ivpu_ms_instance *ms;
+	u64 single_buff_size;
+	u32 sample_size;
+	int ret;
+
+	if (!args->metric_group_mask || !args->read_period_samples ||
+	    args->sampling_period_ns < MS_MIN_SAMPLE_PERIOD_NS)
+		return -EINVAL;
+
+	mutex_lock(&file_priv->ms_lock);
+
+	if (get_instance_by_mask(file_priv, args->metric_group_mask)) {
+		ivpu_err(vdev, "Instance already exists (mask %#llx)\n", args->metric_group_mask);
+		ret = -EALREADY;
+		goto unlock;
+	}
+
+	ms = kzalloc(sizeof(*ms), GFP_KERNEL);
+	if (!ms) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	ms->mask = args->metric_group_mask;
+
+	ret = ivpu_jsm_metric_streamer_info(vdev, ms->mask, 0, 0, &sample_size, NULL);
+	if (ret)
+		goto err_free_ms;
+
+	single_buff_size = sample_size *
+		((u64)args->read_period_samples * MS_READ_PERIOD_MULTIPLIER);
+	ms->bo = ivpu_bo_create_global(vdev, PAGE_ALIGN(single_buff_size * MS_NUM_BUFFERS),
+				       DRM_IVPU_BO_CACHED | DRM_IVPU_BO_MAPPABLE);
+	if (!ms->bo) {
+		ivpu_err(vdev, "Failed to allocate MS buffer (size %llu)\n", single_buff_size);
+		ret = -ENOMEM;
+		goto err_free_ms;
+	}
+
+	ms->buff_size = ivpu_bo_size(ms->bo) / MS_NUM_BUFFERS;
+	ms->active_buff_vpu_addr = ms->bo->vpu_addr;
+	ms->inactive_buff_vpu_addr = ms->bo->vpu_addr + ms->buff_size;
+	ms->active_buff_ptr = ivpu_bo_vaddr(ms->bo);
+	ms->inactive_buff_ptr = ivpu_bo_vaddr(ms->bo) + ms->buff_size;
+
+	ret = ivpu_jsm_metric_streamer_start(vdev, ms->mask, args->sampling_period_ns,
+					     ms->active_buff_vpu_addr, ms->buff_size);
+	if (ret)
+		goto err_free_bo;
+
+	args->sample_size = sample_size;
+	args->max_data_size = ivpu_bo_size(ms->bo);
+	list_add_tail(&ms->ms_instance_node, &file_priv->ms_instance_list);
+	goto unlock;
+
+err_free_bo:
+	ivpu_bo_free(ms->bo);
+err_free_ms:
+	kfree(ms);
+unlock:
+	mutex_unlock(&file_priv->ms_lock);
+	return ret;
+}
+
+static int
+copy_leftover_bytes(struct ivpu_ms_instance *ms,
+		    void __user *user_ptr, u64 user_size, u64 *user_bytes_copied)
+{
+	u64 copy_bytes;
+
+	if (ms->leftover_bytes) {
+		copy_bytes = min(user_size - *user_bytes_copied, ms->leftover_bytes);
+		if (copy_to_user(user_ptr + *user_bytes_copied, ms->leftover_addr, copy_bytes))
+			return -EFAULT;
+
+		ms->leftover_bytes -= copy_bytes;
+		ms->leftover_addr += copy_bytes;
+		*user_bytes_copied += copy_bytes;
+	}
+
+	return 0;
+}
+
+static int
+copy_samples_to_user(struct ivpu_device *vdev, struct ivpu_ms_instance *ms,
+		     void __user *user_ptr, u64 user_size, u64 *user_bytes_copied)
+{
+	u64 bytes_written;
+	int ret;
+
+	*user_bytes_copied = 0;
+
+	ret = copy_leftover_bytes(ms, user_ptr, user_size, user_bytes_copied);
+	if (ret)
+		return ret;
+
+	if (*user_bytes_copied == user_size)
+		return 0;
+
+	ret = ivpu_jsm_metric_streamer_update(vdev, ms->mask, ms->inactive_buff_vpu_addr,
+					      ms->buff_size, &bytes_written);
+	if (ret)
+		return ret;
+
+	swap(ms->active_buff_vpu_addr, ms->inactive_buff_vpu_addr);
+	swap(ms->active_buff_ptr, ms->inactive_buff_ptr);
+
+	ms->leftover_bytes = bytes_written;
+	ms->leftover_addr = ms->inactive_buff_ptr;
+
+	return copy_leftover_bytes(ms, user_ptr, user_size, user_bytes_copied);
+}
+
+int ivpu_ms_get_data_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_ivpu_metric_streamer_get_data *args = data;
+	struct ivpu_file_priv *file_priv = file->driver_priv;
+	struct ivpu_device *vdev = file_priv->vdev;
+	struct ivpu_ms_instance *ms;
+	u64 bytes_written;
+	int ret;
+
+	if (!args->metric_group_mask)
+		return -EINVAL;
+
+	mutex_lock(&file_priv->ms_lock);
+
+	ms = get_instance_by_mask(file_priv, args->metric_group_mask);
+	if (!ms) {
+		ivpu_err(vdev, "Instance doesn't exist for mask: %#llx\n", args->metric_group_mask);
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	if (!args->buffer_size) {
+		ret = ivpu_jsm_metric_streamer_update(vdev, ms->mask, 0, 0, &bytes_written);
+		if (ret)
+			goto unlock;
+		args->data_size = bytes_written + ms->leftover_bytes;
+		goto unlock;
+	}
+
+	if (!args->buffer_ptr) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	ret = copy_samples_to_user(vdev, ms, u64_to_user_ptr(args->buffer_ptr),
+				   args->buffer_size, &args->data_size);
+unlock:
+	mutex_unlock(&file_priv->ms_lock);
+
+	return ret;
+}
+
+static void free_instance(struct ivpu_file_priv *file_priv, struct ivpu_ms_instance *ms)
+{
+	lockdep_assert_held(&file_priv->ms_lock);
+
+	list_del(&ms->ms_instance_node);
+	ivpu_jsm_metric_streamer_stop(file_priv->vdev, ms->mask);
+	ivpu_bo_free(ms->bo);
+	kfree(ms);
+}
+
+int ivpu_ms_stop_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct ivpu_file_priv *file_priv = file->driver_priv;
+	struct drm_ivpu_metric_streamer_stop *args = data;
+	struct ivpu_ms_instance *ms;
+
+	if (!args->metric_group_mask)
+		return -EINVAL;
+
+	mutex_lock(&file_priv->ms_lock);
+
+	ms = get_instance_by_mask(file_priv, args->metric_group_mask);
+	if (ms)
+		free_instance(file_priv, ms);
+
+	mutex_unlock(&file_priv->ms_lock);
+
+	return ms ? 0 : -EINVAL;
+}
+
+static inline struct ivpu_bo *get_ms_info_bo(struct ivpu_file_priv *file_priv)
+{
+	lockdep_assert_held(&file_priv->ms_lock);
+
+	if (file_priv->ms_info_bo)
+		return file_priv->ms_info_bo;
+
+	file_priv->ms_info_bo = ivpu_bo_create_global(file_priv->vdev, MS_INFO_BUFFER_SIZE,
+						      DRM_IVPU_BO_CACHED | DRM_IVPU_BO_MAPPABLE);
+	return file_priv->ms_info_bo;
+}
+
+int ivpu_ms_get_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_ivpu_metric_streamer_get_data *args = data;
+	struct ivpu_file_priv *file_priv = file->driver_priv;
+	struct ivpu_device *vdev = file_priv->vdev;
+	struct ivpu_bo *bo;
+	u64 info_size;
+	int ret;
+
+	if (!args->metric_group_mask)
+		return -EINVAL;
+
+	if (!args->buffer_size)
+		return ivpu_jsm_metric_streamer_info(vdev, args->metric_group_mask,
+						     0, 0, NULL, &args->data_size);
+	if (!args->buffer_ptr)
+		return -EINVAL;
+
+	mutex_lock(&file_priv->ms_lock);
+
+	bo = get_ms_info_bo(file_priv);
+	if (!bo) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	ret = ivpu_jsm_metric_streamer_info(vdev, args->metric_group_mask, bo->vpu_addr,
+					    ivpu_bo_size(bo), NULL, &info_size);
+	if (ret)
+		goto unlock;
+
+	if (args->buffer_size < info_size) {
+		ret = -ENOSPC;
+		goto unlock;
+	}
+
+	if (copy_to_user(u64_to_user_ptr(args->buffer_ptr), ivpu_bo_vaddr(bo), info_size))
+		ret = -EFAULT;
+
+	args->data_size = info_size;
+unlock:
+	mutex_unlock(&file_priv->ms_lock);
+
+	return ret;
+}
+
+void ivpu_ms_cleanup(struct ivpu_file_priv *file_priv)
+{
+	struct ivpu_ms_instance *ms, *tmp;
+
+	mutex_lock(&file_priv->ms_lock);
+
+	if (file_priv->ms_info_bo) {
+		ivpu_bo_free(file_priv->ms_info_bo);
+		file_priv->ms_info_bo = NULL;
+	}
+
+	list_for_each_entry_safe(ms, tmp, &file_priv->ms_instance_list, ms_instance_node)
+		free_instance(file_priv, ms);
+
+	mutex_unlock(&file_priv->ms_lock);
+}
+
+void ivpu_ms_cleanup_all(struct ivpu_device *vdev)
+{
+	struct ivpu_file_priv *file_priv;
+	unsigned long ctx_id;
+
+	mutex_lock(&vdev->context_list_lock);
+
+	xa_for_each(&vdev->context_xa, ctx_id, file_priv)
+		ivpu_ms_cleanup(file_priv);
+
+	mutex_unlock(&vdev->context_list_lock);
+}
diff --git a/drivers/accel/ivpu/ivpu_ms.h b/drivers/accel/ivpu/ivpu_ms.h
new file mode 100644
index 000000000000..fbd5ebebc3d9
--- /dev/null
+++ b/drivers/accel/ivpu/ivpu_ms.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0-only OR MIT */
+/*
+ * Copyright (C) 2020-2024 Intel Corporation
+ */
+#ifndef __IVPU_MS_H__
+#define __IVPU_MS_H__
+
+#include <linux/list.h>
+
+struct drm_device;
+struct drm_file;
+struct ivpu_bo;
+struct ivpu_device;
+struct ivpu_file_priv;
+
+struct ivpu_ms_instance {
+	struct ivpu_bo *bo;
+	struct list_head ms_instance_node;
+	u64 mask;
+	u64 buff_size;
+	u64 active_buff_vpu_addr;
+	u64 inactive_buff_vpu_addr;
+	void *active_buff_ptr;
+	void *inactive_buff_ptr;
+	u64 leftover_bytes;
+	void *leftover_addr;
+};
+
+int ivpu_ms_start_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int ivpu_ms_stop_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int ivpu_ms_get_data_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int ivpu_ms_get_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+void ivpu_ms_cleanup(struct ivpu_file_priv *file_priv);
+void ivpu_ms_cleanup_all(struct ivpu_device *vdev);
+
+#endif /* __IVPU_MS_H__ */
diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
index 4f5ea466731f..7b2aa205fdec 100644
--- a/drivers/accel/ivpu/ivpu_pm.c
+++ b/drivers/accel/ivpu/ivpu_pm.c
@@ -18,6 +18,7 @@
 #include "ivpu_job.h"
 #include "ivpu_jsm_msg.h"
 #include "ivpu_mmu.h"
+#include "ivpu_ms.h"
 #include "ivpu_pm.h"
 
 static bool ivpu_disable_recovery;
@@ -131,6 +132,7 @@ static void ivpu_pm_recovery_work(struct work_struct *work)
 	ivpu_suspend(vdev);
 	ivpu_pm_prepare_cold_boot(vdev);
 	ivpu_jobs_abort_all(vdev);
+	ivpu_ms_cleanup_all(vdev);
 
 	ret = ivpu_resume(vdev);
 	if (ret)
@@ -333,6 +335,8 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
 	ivpu_hw_reset(vdev);
 	ivpu_pm_prepare_cold_boot(vdev);
 	ivpu_jobs_abort_all(vdev);
+	ivpu_ms_cleanup_all(vdev);
+
 	ivpu_dbg(vdev, PM, "Pre-reset done.\n");
 }
 
diff --git a/include/uapi/drm/ivpu_accel.h b/include/uapi/drm/ivpu_accel.h
index 19a13468eca5..084fb529e1e9 100644
--- a/include/uapi/drm/ivpu_accel.h
+++ b/include/uapi/drm/ivpu_accel.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
 /*
- * Copyright (C) 2020-2023 Intel Corporation
+ * Copyright (C) 2020-2024 Intel Corporation
  */
 
 #ifndef __UAPI_IVPU_DRM_H__
@@ -21,6 +21,10 @@ extern "C" {
 #define DRM_IVPU_BO_INFO		  0x03
 #define DRM_IVPU_SUBMIT			  0x05
 #define DRM_IVPU_BO_WAIT		  0x06
+#define DRM_IVPU_METRIC_STREAMER_START	  0x07
+#define DRM_IVPU_METRIC_STREAMER_STOP	  0x08
+#define DRM_IVPU_METRIC_STREAMER_GET_DATA 0x09
+#define DRM_IVPU_METRIC_STREAMER_GET_INFO 0x0a
 
 #define DRM_IOCTL_IVPU_GET_PARAM                                               \
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_IVPU_GET_PARAM, struct drm_ivpu_param)
@@ -40,6 +44,22 @@ extern "C" {
 #define DRM_IOCTL_IVPU_BO_WAIT                                                 \
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_IVPU_BO_WAIT, struct drm_ivpu_bo_wait)
 
+#define DRM_IOCTL_IVPU_METRIC_STREAMER_START                                   \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_IVPU_METRIC_STREAMER_START,            \
+		 struct drm_ivpu_metric_streamer_start)
+
+#define DRM_IOCTL_IVPU_METRIC_STREAMER_STOP                                    \
+	DRM_IOW(DRM_COMMAND_BASE + DRM_IVPU_METRIC_STREAMER_STOP,              \
+		struct drm_ivpu_metric_streamer_stop)
+
+#define DRM_IOCTL_IVPU_METRIC_STREAMER_GET_DATA                                \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_IVPU_METRIC_STREAMER_GET_DATA,         \
+		 struct drm_ivpu_metric_streamer_get_data)
+
+#define DRM_IOCTL_IVPU_METRIC_STREAMER_GET_INFO                                \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_IVPU_METRIC_STREAMER_GET_INFO,         \
+		 struct drm_ivpu_metric_streamer_get_data)
+
 /**
  * DOC: contexts
  *
@@ -336,6 +356,53 @@ struct drm_ivpu_bo_wait {
 	__u32 pad;
 };
 
+/**
+ * struct drm_ivpu_metric_streamer_start - Start collecting metric data
+ */
+struct drm_ivpu_metric_streamer_start {
+	/** @metric_group_mask: Indicates metric streamer instance */
+	__u64 metric_group_mask;
+	/** @sampling_period_ns: Sampling period in nanoseconds */
+	__u64 sampling_period_ns;
+	/**
+	 * @read_period_samples:
+	 *
+	 * Number of samples after which user space will try to read the data.
+	 * Reading the data after significantly longer period may cause data loss.
+	 */
+	__u32 read_period_samples;
+	/** @sample_size: Returned size of a single sample in bytes */
+	__u32 sample_size;
+	/** @max_data_size: Returned max @data_size from %DRM_IOCTL_IVPU_METRIC_STREAMER_GET_DATA */
+	__u32 max_data_size;
+};
+
+/**
+ * struct drm_ivpu_metric_streamer_get_data - Copy collected metric data
+ */
+struct drm_ivpu_metric_streamer_get_data {
+	/** @metric_group_mask: Indicates metric streamer instance */
+	__u64 metric_group_mask;
+	/** @buffer_ptr: A pointer to a destination for the copied data */
+	__u64 buffer_ptr;
+	/** @buffer_size: Size of the destination buffer */
+	__u64 buffer_size;
+	/**
+	 * @data_size: Returned size of copied metric data
+	 *
+	 * If the @buffer_size is zero, returns the amount of data ready to be copied.
+	 */
+	__u64 data_size;
+};
+
+/**
+ * struct drm_ivpu_metric_streamer_stop - Stop collecting metric data
+ */
+struct drm_ivpu_metric_streamer_stop {
+	/** @metric_group_mask: Indicates metric streamer instance */
+	__u64 metric_group_mask;
+};
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.43.2


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 09/12] accel/ivpu: Add force snoop module parameter
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (7 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 08/12] accel/ivpu: Add NPU profiling support Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 10/12] accel/ivpu: Configure fw logging using debugfs Jacek Lawrynowicz
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Wachowski, Karol, Jacek Lawrynowicz

From: "Wachowski, Karol" <karol.wachowski@intel.com>

Add a module parameter that enforces snooping for all NPU accesses,
both through MMU PTE mappings and through TCU page table walk
override register bits for MMU page walks / configuration access.

Signed-off-by: Wachowski, Karol <karol.wachowski@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
Reviewed-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
---
 drivers/accel/ivpu/ivpu_drv.c     |  4 ++++
 drivers/accel/ivpu/ivpu_drv.h     |  6 ++++++
 drivers/accel/ivpu/ivpu_gem.h     | 11 +++++++----
 drivers/accel/ivpu/ivpu_hw_37xx.c |  6 +++++-
 drivers/accel/ivpu/ivpu_hw_40xx.c |  6 +++++-
 drivers/accel/ivpu/ivpu_mmu.c     | 12 ++++++++----
 6 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
index a02a1929f5a1..bd702401216c 100644
--- a/drivers/accel/ivpu/ivpu_drv.c
+++ b/drivers/accel/ivpu/ivpu_drv.c
@@ -60,6 +60,10 @@ bool ivpu_disable_mmu_cont_pages;
 module_param_named(disable_mmu_cont_pages, ivpu_disable_mmu_cont_pages, bool, 0644);
 MODULE_PARM_DESC(disable_mmu_cont_pages, "Disable MMU contiguous pages optimization");
 
+bool ivpu_force_snoop;
+module_param_named(force_snoop, ivpu_force_snoop, bool, 0644);
+MODULE_PARM_DESC(force_snoop, "Force snooping for NPU host memory access");
+
 struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv)
 {
 	struct ivpu_device *vdev = file_priv->vdev;
diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index 55341762b9d9..973f8ded23e9 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -167,6 +167,7 @@ extern u8 ivpu_pll_min_ratio;
 extern u8 ivpu_pll_max_ratio;
 extern int ivpu_sched_mode;
 extern bool ivpu_disable_mmu_cont_pages;
+extern bool ivpu_force_snoop;
 
 #define IVPU_TEST_MODE_FW_TEST            BIT(0)
 #define IVPU_TEST_MODE_NULL_HW            BIT(1)
@@ -241,4 +242,9 @@ static inline bool ivpu_is_fpga(struct ivpu_device *vdev)
 	return ivpu_get_platform(vdev) == IVPU_PLATFORM_FPGA;
 }
 
+static inline bool ivpu_is_force_snoop_enabled(struct ivpu_device *vdev)
+{
+	return ivpu_force_snoop;
+}
+
 #endif /* __IVPU_DRV_H__ */
diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h
index fb7117c13eec..d975000abd78 100644
--- a/drivers/accel/ivpu/ivpu_gem.h
+++ b/drivers/accel/ivpu/ivpu_gem.h
@@ -60,14 +60,17 @@ static inline u32 ivpu_bo_cache_mode(struct ivpu_bo *bo)
 	return bo->flags & DRM_IVPU_BO_CACHE_MASK;
 }
 
-static inline bool ivpu_bo_is_snooped(struct ivpu_bo *bo)
+static inline struct ivpu_device *ivpu_bo_to_vdev(struct ivpu_bo *bo)
 {
-	return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED;
+	return to_ivpu_device(bo->base.base.dev);
 }
 
-static inline struct ivpu_device *ivpu_bo_to_vdev(struct ivpu_bo *bo)
+static inline bool ivpu_bo_is_snooped(struct ivpu_bo *bo)
 {
-	return to_ivpu_device(bo->base.base.dev);
+	if (ivpu_is_force_snoop_enabled(ivpu_bo_to_vdev(bo)))
+		return true;
+
+	return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED;
 }
 
 static inline void *ivpu_to_cpu_addr(struct ivpu_bo *bo, u32 vpu_addr)
diff --git a/drivers/accel/ivpu/ivpu_hw_37xx.c b/drivers/accel/ivpu/ivpu_hw_37xx.c
index ce664b6515aa..250291cc1f3a 100644
--- a/drivers/accel/ivpu/ivpu_hw_37xx.c
+++ b/drivers/accel/ivpu/ivpu_hw_37xx.c
@@ -514,7 +514,11 @@ static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev)
 
 	val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, NOSNOOP_OVERRIDE_EN, val);
 	val = REG_CLR_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AW_NOSNOOP_OVERRIDE, val);
-	val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val);
+
+	if (ivpu_is_force_snoop_enabled(vdev))
+		val = REG_CLR_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val);
+	else
+		val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val);
 
 	REGV_WR32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, val);
 }
diff --git a/drivers/accel/ivpu/ivpu_hw_40xx.c b/drivers/accel/ivpu/ivpu_hw_40xx.c
index 186cd87079c2..e64ee705d00c 100644
--- a/drivers/accel/ivpu/ivpu_hw_40xx.c
+++ b/drivers/accel/ivpu/ivpu_hw_40xx.c
@@ -531,7 +531,11 @@ static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev)
 
 	val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, SNOOP_OVERRIDE_EN, val);
 	val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AW_SNOOP_OVERRIDE, val);
-	val = REG_CLR_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val);
+
+	if (ivpu_is_force_snoop_enabled(vdev))
+		val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val);
+	else
+		val = REG_CLR_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val);
 
 	REGV_WR32(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, val);
 }
diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c
index 2e46b322c450..8682e6145520 100644
--- a/drivers/accel/ivpu/ivpu_mmu.c
+++ b/drivers/accel/ivpu/ivpu_mmu.c
@@ -519,7 +519,8 @@ static int ivpu_mmu_cmdq_sync(struct ivpu_device *vdev)
 	if (ret)
 		return ret;
 
-	clflush_cache_range(q->base, IVPU_MMU_CMDQ_SIZE);
+	if (!ivpu_is_force_snoop_enabled(vdev))
+		clflush_cache_range(q->base, IVPU_MMU_CMDQ_SIZE);
 	REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, q->prod);
 
 	ret = ivpu_mmu_cmdq_wait_for_cons(vdev);
@@ -567,7 +568,8 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
 	int ret;
 
 	memset(mmu->cmdq.base, 0, IVPU_MMU_CMDQ_SIZE);
-	clflush_cache_range(mmu->cmdq.base, IVPU_MMU_CMDQ_SIZE);
+	if (!ivpu_is_force_snoop_enabled(vdev))
+		clflush_cache_range(mmu->cmdq.base, IVPU_MMU_CMDQ_SIZE);
 	mmu->cmdq.prod = 0;
 	mmu->cmdq.cons = 0;
 
@@ -661,7 +663,8 @@ static void ivpu_mmu_strtab_link_cd(struct ivpu_device *vdev, u32 sid)
 	WRITE_ONCE(entry[1], str[1]);
 	WRITE_ONCE(entry[0], str[0]);
 
-	clflush_cache_range(entry, IVPU_MMU_STRTAB_ENT_SIZE);
+	if (!ivpu_is_force_snoop_enabled(vdev))
+		clflush_cache_range(entry, IVPU_MMU_STRTAB_ENT_SIZE);
 
 	ivpu_dbg(vdev, MMU, "STRTAB write entry (SSID=%u): 0x%llx, 0x%llx\n", sid, str[0], str[1]);
 }
@@ -735,7 +738,8 @@ static int ivpu_mmu_cd_add(struct ivpu_device *vdev, u32 ssid, u64 cd_dma)
 	WRITE_ONCE(entry[3], cd[3]);
 	WRITE_ONCE(entry[0], cd[0]);
 
-	clflush_cache_range(entry, IVPU_MMU_CDTAB_ENT_SIZE);
+	if (!ivpu_is_force_snoop_enabled(vdev))
+		clflush_cache_range(entry, IVPU_MMU_CDTAB_ENT_SIZE);
 
 	ivpu_dbg(vdev, MMU, "CDTAB %s entry (SSID=%u, dma=%pad): 0x%llx, 0x%llx, 0x%llx, 0x%llx\n",
 		 cd_dma ? "write" : "clear", ssid, &cd_dma, cd[0], cd[1], cd[2], cd[3]);
-- 
2.43.2



* [PATCH v2 10/12] accel/ivpu: Configure fw logging using debugfs
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (8 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 09/12] accel/ivpu: Add force snoop module parameter Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 11/12] accel/ivpu: Increase reset counter when warm boot fails Jacek Lawrynowicz
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Tomasz Rusinowicz, Jacek Lawrynowicz

From: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>

Add a fw_dyndbg file that can be used to control FW logging.

Signed-off-by: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/accel/ivpu/ivpu_debugfs.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/accel/ivpu/ivpu_debugfs.c b/drivers/accel/ivpu/ivpu_debugfs.c
index 6ff967e595cf..b6c7d6a53c79 100644
--- a/drivers/accel/ivpu/ivpu_debugfs.c
+++ b/drivers/accel/ivpu/ivpu_debugfs.c
@@ -145,6 +145,30 @@ static const struct file_operations dvfs_mode_fops = {
 	.write = dvfs_mode_fops_write,
 };
 
+static ssize_t
+fw_dyndbg_fops_write(struct file *file, const char __user *user_buf, size_t size, loff_t *pos)
+{
+	struct ivpu_device *vdev = file->private_data;
+	char buffer[VPU_DYNDBG_CMD_MAX_LEN] = {};
+	int ret;
+
+	if (size >= VPU_DYNDBG_CMD_MAX_LEN)
+		return -EINVAL;
+
+	ret = strncpy_from_user(buffer, user_buf, size);
+	if (ret < 0)
+		return ret;
+
+	ivpu_jsm_dyndbg_control(vdev, buffer, size);
+	return size;
+}
+
+static const struct file_operations fw_dyndbg_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.write = fw_dyndbg_fops_write,
+};
+
 static int fw_log_show(struct seq_file *s, void *v)
 {
 	struct ivpu_device *vdev = s->private;
@@ -369,6 +393,8 @@ void ivpu_debugfs_init(struct ivpu_device *vdev)
 	debugfs_create_file("dvfs_mode", 0200, debugfs_root, vdev,
 			    &dvfs_mode_fops);
 
+	debugfs_create_file("fw_dyndbg", 0200, debugfs_root, vdev,
+			    &fw_dyndbg_fops);
 	debugfs_create_file("fw_log", 0644, debugfs_root, vdev,
 			    &fw_log_fops);
 	debugfs_create_file("fw_trace_destination_mask", 0200, debugfs_root, vdev,
-- 
2.43.2



* [PATCH v2 11/12] accel/ivpu: Increase reset counter when warm boot fails
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (9 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 10/12] accel/ivpu: Configure fw logging using debugfs Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-13 12:04 ` [PATCH v2 12/12] accel/ivpu: Share NPU busy time in sysfs Jacek Lawrynowicz
  2024-05-15  5:56 ` [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Jacek Lawrynowicz

A failed warm boot causes a cold boot that loses FW state and is
equivalent to a recovery or reset, so reset_counter should be
incremented in order for this failure to be detected by tests.

Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
Reviewed-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
---
 drivers/accel/ivpu/ivpu_pm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
index 7b2aa205fdec..02b4eac13f8b 100644
--- a/drivers/accel/ivpu/ivpu_pm.c
+++ b/drivers/accel/ivpu/ivpu_pm.c
@@ -264,6 +264,7 @@ int ivpu_pm_runtime_suspend_cb(struct device *dev)
 
 	if (!hw_is_idle) {
 		ivpu_err(vdev, "NPU failed to enter idle, force suspended.\n");
+		atomic_inc(&vdev->pm->reset_counter);
 		ivpu_fw_log_dump(vdev);
 		ivpu_pm_prepare_cold_boot(vdev);
 	} else {
-- 
2.43.2



* [PATCH v2 12/12] accel/ivpu: Share NPU busy time in sysfs
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (10 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 11/12] accel/ivpu: Increase reset counter when warm boot fails Jacek Lawrynowicz
@ 2024-05-13 12:04 ` Jacek Lawrynowicz
  2024-05-15  5:56 ` [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-13 12:04 UTC
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo, Tomasz Rusinowicz, Jacek Lawrynowicz

From: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>

The driver tracks the time the NPU spends executing jobs
and shares it through the sysfs `npu_busy_time_us` file.
It can then be used by user space applications to monitor device
utilization.

The NPU is considered 'busy' starting with the first job submitted
to firmware and ending when there are no more jobs pending/executing.

Signed-off-by: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/accel/ivpu/Makefile     |  3 +-
 drivers/accel/ivpu/ivpu_drv.c   |  2 ++
 drivers/accel/ivpu/ivpu_drv.h   |  3 ++
 drivers/accel/ivpu/ivpu_job.c   | 23 ++++++++++++-
 drivers/accel/ivpu/ivpu_sysfs.c | 58 +++++++++++++++++++++++++++++++++
 drivers/accel/ivpu/ivpu_sysfs.h | 13 ++++++++
 6 files changed, 100 insertions(+), 2 deletions(-)
 create mode 100644 drivers/accel/ivpu/ivpu_sysfs.c
 create mode 100644 drivers/accel/ivpu/ivpu_sysfs.h

diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile
index 1c67a73cfefe..e16a9f5c1c89 100644
--- a/drivers/accel/ivpu/Makefile
+++ b/drivers/accel/ivpu/Makefile
@@ -14,7 +14,8 @@ intel_vpu-y := \
 	ivpu_mmu.o \
 	ivpu_mmu_context.o \
 	ivpu_ms.o \
-	ivpu_pm.o
+	ivpu_pm.o \
+	ivpu_sysfs.o
 
 intel_vpu-$(CONFIG_DEBUG_FS) += ivpu_debugfs.o
 
diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
index bd702401216c..130455d39841 100644
--- a/drivers/accel/ivpu/ivpu_drv.c
+++ b/drivers/accel/ivpu/ivpu_drv.c
@@ -28,6 +28,7 @@
 #include "ivpu_mmu_context.h"
 #include "ivpu_ms.h"
 #include "ivpu_pm.h"
+#include "ivpu_sysfs.h"
 
 #ifndef DRIVER_VERSION_STR
 #define DRIVER_VERSION_STR __stringify(DRM_IVPU_DRIVER_MAJOR) "." \
@@ -696,6 +697,7 @@ static int ivpu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		return ret;
 
 	ivpu_debugfs_init(vdev);
+	ivpu_sysfs_init(vdev);
 
 	ret = drm_dev_register(&vdev->drm, 0);
 	if (ret) {
diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
index 973f8ded23e9..4de7fc0c7026 100644
--- a/drivers/accel/ivpu/ivpu_drv.h
+++ b/drivers/accel/ivpu/ivpu_drv.h
@@ -135,6 +135,9 @@ struct ivpu_device {
 
 	atomic64_t unique_id_counter;
 
+	ktime_t busy_start_ts;
+	ktime_t busy_time;
+
 	struct {
 		int boot;
 		int jsm;
diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
index 1d7b4388eb3b..845181b48b3a 100644
--- a/drivers/accel/ivpu/ivpu_job.c
+++ b/drivers/accel/ivpu/ivpu_job.c
@@ -438,11 +438,28 @@ ivpu_job_create(struct ivpu_file_priv *file_priv, u32 engine_idx, u32 bo_count)
 	return NULL;
 }
 
+static struct ivpu_job *ivpu_job_remove_from_submitted_jobs(struct ivpu_device *vdev, u32 job_id)
+{
+	struct ivpu_job *job;
+
+	xa_lock(&vdev->submitted_jobs_xa);
+	job = __xa_erase(&vdev->submitted_jobs_xa, job_id);
+
+	if (xa_empty(&vdev->submitted_jobs_xa) && job) {
+		vdev->busy_time = ktime_add(ktime_sub(ktime_get(), vdev->busy_start_ts),
+					    vdev->busy_time);
+	}
+
+	xa_unlock(&vdev->submitted_jobs_xa);
+
+	return job;
+}
+
 static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32 job_status)
 {
 	struct ivpu_job *job;
 
-	job = xa_erase(&vdev->submitted_jobs_xa, job_id);
+	job = ivpu_job_remove_from_submitted_jobs(vdev, job_id);
 	if (!job)
 		return -ENOENT;
 
@@ -477,6 +494,7 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
 	struct ivpu_device *vdev = job->vdev;
 	struct xa_limit job_id_range;
 	struct ivpu_cmdq *cmdq;
+	bool is_first_job;
 	int ret;
 
 	ret = ivpu_rpm_get(vdev);
@@ -497,6 +515,7 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
 	job_id_range.max = job_id_range.min | JOB_ID_JOB_MASK;
 
 	xa_lock(&vdev->submitted_jobs_xa);
+	is_first_job = xa_empty(&vdev->submitted_jobs_xa);
 	ret = __xa_alloc(&vdev->submitted_jobs_xa, &job->job_id, job, job_id_range, GFP_KERNEL);
 	if (ret) {
 		ivpu_dbg(vdev, JOB, "Too many active jobs in ctx %d\n",
@@ -516,6 +535,8 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
 		wmb(); /* Flush WC buffer for jobq header */
 	} else {
 		ivpu_cmdq_ring_db(vdev, cmdq);
+		if (is_first_job)
+			vdev->busy_start_ts = ktime_get();
 	}
 
 	ivpu_dbg(vdev, JOB, "Job submitted: id %3u ctx %2d engine %d prio %d addr 0x%llx next %d\n",
diff --git a/drivers/accel/ivpu/ivpu_sysfs.c b/drivers/accel/ivpu/ivpu_sysfs.c
new file mode 100644
index 000000000000..913669f1786e
--- /dev/null
+++ b/drivers/accel/ivpu/ivpu_sysfs.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2024 Intel Corporation
+ */
+
+#include <linux/device.h>
+#include <linux/err.h>
+
+#include "ivpu_hw.h"
+#include "ivpu_sysfs.h"
+
+/*
+ * npu_busy_time_us is the time that the device spent executing jobs.
+ * The time is counted when and only when there are jobs submitted to firmware.
+ *
+ * This time can be used to measure the utilization of NPU, either by calculating
+ * npu_busy_time_us difference between two timepoints (i.e. measuring the time
+ * that the NPU was active during some workload) or monitoring utilization percentage
+ * by reading npu_busy_time_us periodically.
+ *
+ * When reading the value periodically, it shouldn't be read too often as it may have
+ * an impact on job submission performance. Recommended period is 1 second.
+ */
+static ssize_t
+npu_busy_time_us_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct drm_device *drm = dev_get_drvdata(dev);
+	struct ivpu_device *vdev = to_ivpu_device(drm);
+	ktime_t total, now = 0;
+
+	xa_lock(&vdev->submitted_jobs_xa);
+	total = vdev->busy_time;
+	if (!xa_empty(&vdev->submitted_jobs_xa))
+		now = ktime_sub(ktime_get(), vdev->busy_start_ts);
+	xa_unlock(&vdev->submitted_jobs_xa);
+
+	return sysfs_emit(buf, "%lld\n", ktime_to_us(ktime_add(total, now)));
+}
+
+static DEVICE_ATTR_RO(npu_busy_time_us);
+
+static struct attribute *ivpu_dev_attrs[] = {
+	&dev_attr_npu_busy_time_us.attr,
+	NULL,
+};
+
+static struct attribute_group ivpu_dev_attr_group = {
+	.attrs = ivpu_dev_attrs,
+};
+
+void ivpu_sysfs_init(struct ivpu_device *vdev)
+{
+	int ret;
+
+	ret = devm_device_add_group(vdev->drm.dev, &ivpu_dev_attr_group);
+	if (ret)
+		ivpu_warn(vdev, "Failed to add group to device, ret %d", ret);
+}
diff --git a/drivers/accel/ivpu/ivpu_sysfs.h b/drivers/accel/ivpu/ivpu_sysfs.h
new file mode 100644
index 000000000000..9836f09b35a3
--- /dev/null
+++ b/drivers/accel/ivpu/ivpu_sysfs.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024 Intel Corporation
+ */
+
+#ifndef __IVPU_SYSFS_H__
+#define __IVPU_SYSFS_H__
+
+#include "ivpu_drv.h"
+
+void ivpu_sysfs_init(struct ivpu_device *vdev);
+
+#endif /* __IVPU_SYSFS_H__ */
-- 
2.43.2



* Re: [PATCH v2 00/12] accel/ivpu: Changes for 6.10
  2024-05-13 12:04 [PATCH v2 00/12] accel/ivpu: Changes for 6.10 Jacek Lawrynowicz
                   ` (11 preceding siblings ...)
  2024-05-13 12:04 ` [PATCH v2 12/12] accel/ivpu: Share NPU busy time in sysfs Jacek Lawrynowicz
@ 2024-05-15  5:56 ` Jacek Lawrynowicz
  12 siblings, 0 replies; 14+ messages in thread
From: Jacek Lawrynowicz @ 2024-05-15  5:56 UTC
  To: dri-devel; +Cc: oded.gabbay, quic_jhugo

Applied to drm-misc-next

On 13.05.2024 14:04, Jacek Lawrynowicz wrote:
> There are a couple of major new features in this patchset:
>   * Hardware scheduler support (disabled by default)
>   * Profiling support
>   * Expose NPU busy time in sysfs
> 
> Other than that, there are two small random fixes.
> 
> v2: Included Jeffrey's v1 comments
> 
> v1: https://lore.kernel.org/dri-devel/20240508132106.2387464-1-jacek.lawrynowicz@linux.intel.com
> 
> Jacek Lawrynowicz (2):
>   accel/ivpu: Update VPU FW API headers
>   accel/ivpu: Increase reset counter when warm boot fails
> 
> Tomasz Rusinowicz (3):
>   accel/ivpu: Add NPU profiling support
>   accel/ivpu: Configure fw logging using debugfs
>   accel/ivpu: Share NPU busy time in sysfs
> 
> Wachowski, Karol (7):
>   accel/ivpu: Add sched_mode module param
>   accel/ivpu: Create priority based command queues
>   accel/ivpu: Implement support for preemption buffers
>   accel/ivpu: Add HWS JSM messages
>   accel/ivpu: Implement support for hardware scheduler
>   accel/ivpu: Add resume engine support
>   accel/ivpu: Add force snoop module parameter
> 
>  drivers/accel/ivpu/Makefile       |   6 +-
>  drivers/accel/ivpu/ivpu_debugfs.c |  50 +++++
>  drivers/accel/ivpu/ivpu_drv.c     |  44 ++++-
>  drivers/accel/ivpu/ivpu_drv.h     |  23 ++-
>  drivers/accel/ivpu/ivpu_fw.c      |  10 +
>  drivers/accel/ivpu/ivpu_fw.h      |   2 +
>  drivers/accel/ivpu/ivpu_gem.h     |  11 +-
>  drivers/accel/ivpu/ivpu_hw.h      |   3 +-
>  drivers/accel/ivpu/ivpu_hw_37xx.c |   7 +-
>  drivers/accel/ivpu/ivpu_hw_40xx.c |   9 +-
>  drivers/accel/ivpu/ivpu_job.c     | 295 ++++++++++++++++++++++------
>  drivers/accel/ivpu/ivpu_job.h     |   2 +
>  drivers/accel/ivpu/ivpu_jsm_msg.c | 259 ++++++++++++++++++++++++-
>  drivers/accel/ivpu/ivpu_jsm_msg.h |  20 +-
>  drivers/accel/ivpu/ivpu_mmu.c     |  12 +-
>  drivers/accel/ivpu/ivpu_ms.c      | 309 ++++++++++++++++++++++++++++++
>  drivers/accel/ivpu/ivpu_ms.h      |  36 ++++
>  drivers/accel/ivpu/ivpu_pm.c      |   5 +
>  drivers/accel/ivpu/ivpu_sysfs.c   |  58 ++++++
>  drivers/accel/ivpu/ivpu_sysfs.h   |  13 ++
>  drivers/accel/ivpu/vpu_jsm_api.h  |  14 +-
>  include/uapi/drm/ivpu_accel.h     |  69 ++++++-
>  22 files changed, 1173 insertions(+), 84 deletions(-)
>  create mode 100644 drivers/accel/ivpu/ivpu_ms.c
>  create mode 100644 drivers/accel/ivpu/ivpu_ms.h
>  create mode 100644 drivers/accel/ivpu/ivpu_sysfs.c
>  create mode 100644 drivers/accel/ivpu/ivpu_sysfs.h
> 
> --
> 2.43.2

