All the mail mirrored from lore.kernel.org
* [PATCH 100/459] drm/amdgpu/discovery: add harvest info data table
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Jack Xiao, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/include/discovery.h | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/include/discovery.h b/drivers/gpu/drm/amd/include/discovery.h
index 93a8ae0aacda..e01d4cd9f2cb 100644
--- a/drivers/gpu/drm/amd/include/discovery.h
+++ b/drivers/gpu/drm/amd/include/discovery.h
@@ -33,7 +33,7 @@ typedef enum
 {
 	IP_DISCOVERY = 0,
 	GC,
-	TABLE_3,
+	HARVEST_INFO,
 	TABLE_4,
 	RESERVED_1,
 	RESERVED_2,
@@ -144,6 +144,22 @@ struct gc_info_v1_0 {
 	uint32_t gc_num_gl2a;
 };
 
+typedef struct harvest_info_header {
+	uint32_t signature; /* Table Signature */
+	uint32_t version;   /* Table Version */
+} harvest_info_header;
+
+typedef struct harvest_info {
+	uint16_t hw_id;          /* Hardware ID */
+	uint8_t number_instance; /* Instance of the IP */
+	uint8_t reserved;        /* Reserved for alignment */
+} harvest_info;
+
+typedef struct harvest_table {
+	harvest_info_header header;
+	harvest_info list[32];
+} harvest_table;
+
 #pragma pack()
 
 #endif
-- 
2.20.1
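
For illustration, a minimal sketch of how a consumer might walk the new
table once it has been located in the discovery binary (not part of the
patch; the expected signature value and the list-termination rule are
assumptions, since neither is defined in this hunk):

	/* assumes the harvest_table definition above is in scope */
	static void dump_harvest_table(const harvest_table *ht,
				       uint32_t expected_signature)
	{
		int i;

		if (ht->header.signature != expected_signature)
			return; /* no valid harvest table present */

		for (i = 0; i < 32; i++) {
			/* assumption: an all-zero entry ends the list */
			if (ht->list[i].hw_id == 0)
				break;
			DRM_DEBUG("harvested: hw_id %u instance %u\n",
				  ht->list[i].hw_id,
				  ht->list[i].number_instance);
		}
	}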


* [PATCH 101/459] drm/amdgpu/discovery: use hardcoded mmRCC_CONFIG_MEMSIZE
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

The register base offset of nbio is not known before the IP Discovery
table is parsed, so hardcode this value.

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index f61eb8542c4d..ac065ab91c4b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -28,10 +28,11 @@
 #include "nbio/nbio_2_3_offset.h"
 #include "discovery.h"
 
-#define mmMM_INDEX	0x0
-#define mmMM_INDEX_HI	0x6
-#define mmMM_DATA	0x1
-#define HW_ID_MAX	300
+#define mmRCC_CONFIG_MEMSIZE	0xde3
+#define mmMM_INDEX		0x0
+#define mmMM_INDEX_HI		0x6
+#define mmMM_DATA		0x1
+#define HW_ID_MAX		300
 
 const char *hw_id_names[HW_ID_MAX] = {
 	[MP1_HWID]		= "MP1",
@@ -134,8 +135,7 @@ static int hw_id_map[MAX_HWIP] = {
 static int amdgpu_discovery_read_binary(struct amdgpu_device *adev, uint8_t *binary)
 {
 	uint32_t *p = (uint32_t *)binary;
-	uint64_t vram_size = RREG32_SOC15(NBIO, 0,
-			mmRCC_DEV0_EPF0_RCC_CONFIG_MEMSIZE) * 1024 * 1024;
+	uint64_t vram_size = (uint64_t)RREG32(mmRCC_CONFIG_MEMSIZE) << 20;
 	uint64_t pos = vram_size - BINARY_MAX_SIZE;
 	unsigned long flags;
 
-- 
2.20.1
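
The arithmetic being bootstrapped here, as a standalone sketch (the helper
name is for illustration only): mmRCC_CONFIG_MEMSIZE reports the VRAM size
in MiB, hence the shift by 20 to convert to bytes, and the discovery
binary sits in the last BINARY_MAX_SIZE bytes of VRAM:

	static uint64_t discovery_binary_pos(struct amdgpu_device *adev)
	{
		/* MiB -> bytes */
		uint64_t vram_size = (uint64_t)RREG32(mmRCC_CONFIG_MEMSIZE) << 20;

		/* BINARY_MAX_SIZE comes from discovery.h */
		return vram_size - BINARY_MAX_SIZE;
	}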


* [PATCH 102/459] drm/amdgpu/discovery: fix hwid for nbio
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index ac065ab91c4b..ec14fd1350e2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -117,7 +117,7 @@ static int hw_id_map[MAX_HWIP] = {
 	[SDMA1_HWIP]	= SDMA1_HWID,
 	[MMHUB_HWIP]	= MMHUB_HWID,
 	[ATHUB_HWIP]	= ATHUB_HWID,
-	[NBIO_HWIP]	= DBGU_NBIO_HWID,
+	[NBIO_HWIP]	= NBIF_HWID,
 	[MP0_HWIP]	= MP0_HWID,
 	[MP1_HWIP]	= MP1_HWID,
 	[UVD_HWIP]	= UVD_HWID,
-- 
2.20.1


* [PATCH 103/459] drm/amdgpu/discovery: stop taking psp header into account
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

psp will write a header to vram, but the value exposed in
RCC_CONFIG_MEMSIZE does not include the memory that this header is
written to. Therefore, the interpretation of the table does not need to
take the psp header into account.

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index ec14fd1350e2..5f967ae8d4ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -189,7 +189,7 @@ int amdgpu_discovery_init(struct amdgpu_device *adev)
 		goto out;
 	}
 
-	bhdr = (struct binary_header *)(adev->discovery + PSP_HEADER_SIZE);
+	bhdr = (struct binary_header *)adev->discovery;
 
 	if (le32_to_cpu(bhdr->binary_signature) != BINARY_SIGNATURE) {
 		DRM_ERROR("invalid ip discovery binary signature\n");
@@ -197,8 +197,7 @@ int amdgpu_discovery_init(struct amdgpu_device *adev)
 		goto out;
 	}
 
-	offset = PSP_HEADER_SIZE +
-		offsetof(struct binary_header, binary_checksum) +
+	offset = offsetof(struct binary_header, binary_checksum) +
 		sizeof(bhdr->binary_checksum);
 	size = bhdr->binary_size - offset;
 	checksum = bhdr->binary_checksum;
@@ -275,7 +274,7 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
 		return -EINVAL;
 	}
 
-	bhdr = (struct binary_header *)(adev->discovery + PSP_HEADER_SIZE);
+	bhdr = (struct binary_header *)adev->discovery;
 	ihdr = (struct ip_discovery_header *)(adev->discovery +
 			le16_to_cpu(bhdr->table_list[IP_DISCOVERY].offset));
 	num_dies = le16_to_cpu(ihdr->num_dies);
@@ -338,7 +337,7 @@ int amdgpu_discovery_get_ip_version(struct amdgpu_device *adev, int hw_id,
 		return -EINVAL;
 	}
 
-	bhdr = (struct binary_header *)(adev->discovery + PSP_HEADER_SIZE);
+	bhdr = (struct binary_header *)adev->discovery;
 	ihdr = (struct ip_discovery_header *)(adev->discovery +
 			le16_to_cpu(bhdr->table_list[IP_DISCOVERY].offset));
 	num_dies = le16_to_cpu(ihdr->num_dies);
@@ -376,7 +375,7 @@ int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
 		return -EINVAL;
 	}
 
-	bhdr = (struct binary_header *)(adev->discovery + PSP_HEADER_SIZE);
+	bhdr = (struct binary_header *)adev->discovery;
 	gc_info = (struct gc_info_v1_0 *)(adev->discovery +
 			le16_to_cpu(bhdr->table_list[GC].offset));
 
-- 
2.20.1
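
The layout this implies, as described by the commit message (sketch, not
from the patch):

	                                  <- RCC_CONFIG_MEMSIZE reports up to here
	[ ... | binary_header | discovery tables ]  [ psp header ]
	      ^ adev->discovery maps from here      ^ above the reported size

so the buffer read back from vram_size - BINARY_MAX_SIZE starts directly
at the binary header, and the PSP_HEADER_SIZE skip (and its inclusion in
the checksummed region) can be dropped.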


* [PATCH 104/459] drm/amdgpu/discovery: update definition for struct die_header
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/include/discovery.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/include/discovery.h b/drivers/gpu/drm/amd/include/discovery.h
index e01d4cd9f2cb..5dcb776548d8 100644
--- a/drivers/gpu/drm/amd/include/discovery.h
+++ b/drivers/gpu/drm/amd/include/discovery.h
@@ -99,8 +99,8 @@ typedef struct ip
 
 typedef struct die_header
 {
-	uint32_t die_id;
-	uint32_t num_ips;
+	uint16_t die_id;
+	uint16_t num_ips;
 } die_header;
 
 typedef struct ip_structure
-- 
2.20.1
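
One consequence worth spelling out (my note, not stated in the patch):
sizeof(struct die_header) shrinks from 8 to 4 bytes, so the parser's

	ip_offset = die_offset + sizeof(*dhdr);

in amdgpu_discovery.c now lands on the first ip entry 4 bytes after the
die header, matching the 16-bit fields the table actually uses.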


* [PATCH 105/459] drm/amdgpu/discovery: stop converting the units of base addresses
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

the unit is already in dwords

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 5f967ae8d4ed..697800c4741f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -300,11 +300,11 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
 
 					for (k = 0; k < num_base_address; k++) {
 						/*
-						 * convert the endianness and unit (in dword) of base addresses in place,
+						 * convert the endianness of base addresses in place,
 						 * so that we don't need to convert them when accessing adev->reg_offset.
 						 */
-						ip->base_address[k] = le32_to_cpu(ip->base_address[k]) >> 2;
-						DRM_DEBUG("\t0x%08x\n", ip->base_address[k] << 2);
+						ip->base_address[k] = le32_to_cpu(ip->base_address[k]);
+						DRM_DEBUG("\t0x%08x\n", ip->base_address[k]);
 					}
 
 					adev->reg_offset[hw_ip][ip->number_instance] =
-- 
2.20.1
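
For context on why the stored unit matters, this is roughly how the bases
are consumed (simplified from the SOC15_REG_OFFSET macro in
soc15_common.h): the per-segment base address is added directly to a
per-register dword offset and handed to RREG32/WREG32, which also take
dword offsets, so a table already in dwords needs no conversion:

	#define SOC15_REG_OFFSET(ip, inst, reg) \
		(adev->reg_offset[ip##_HWIP][inst][reg##_BASE_IDX] + reg)

	val = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS));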


* [PATCH 106/459] drm/amdgpu/discovery: add module param for ip discovery enablement
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 10 ++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |  5 +++++
 3 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 000ef2dddd7e..b4a887e42370 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -161,6 +161,7 @@ extern int amdgpu_ras_enable;
 extern uint amdgpu_ras_mask;
 extern int amdgpu_async_gfx_ring;
 extern int amdgpu_mcbp;
+extern int amdgpu_discovery;
 
 #ifdef CONFIG_DRM_AMDGPU_SI
 extern int amdgpu_si_support;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 347a1ba0abe9..facf6ae79040 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2577,6 +2577,14 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	if (amdgpu_mcbp)
 		DRM_INFO("MCBP is enabled\n");
 
+	if (amdgpu_discovery) {
+		r = amdgpu_discovery_init(adev);
+		if (r) {
+			dev_err(adev->dev, "amdgpu_discovery_init failed\n");
+			return r;
+		}
+	}
+
 	/* early init functions */
 	r = amdgpu_device_ip_early_init(adev);
 	if (r)
@@ -2832,6 +2840,8 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
 	device_remove_file(adev->dev, &dev_attr_pcie_replay_count);
 	amdgpu_ucode_sysfs_fini(adev);
 	amdgpu_debugfs_preempt_cleanup(adev);
+	if (amdgpu_discovery)
+		amdgpu_discovery_fini(adev);
 }
 
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 00753f9b8b52..b22598a30134 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -138,6 +138,7 @@ uint amdgpu_smu_memory_pool_size = 0;
 uint amdgpu_dc_feature_mask = 0;
 int amdgpu_async_gfx_ring = 1;
 int amdgpu_mcbp = 0;
+int amdgpu_discovery = 0;
 
 struct amdgpu_mgpu_info mgpu_info = {
 	.mutex = __MUTEX_INITIALIZER(mgpu_info.mutex),
@@ -579,6 +580,10 @@ MODULE_PARM_DESC(mcbp,
 	"Enable Mid-command buffer preemption (0 = disabled (default), 1 = enabled)");
 module_param_named(mcbp, amdgpu_mcbp, int, 0444);
 
+MODULE_PARM_DESC(discovery,
+	"Allow driver to discover hardware IPs from IP Discovery table at the top of VRAM");
+module_param_named(discovery, amdgpu_discovery, int, 0444);
+
 #ifdef CONFIG_HSA_AMD
 /**
  * DOC: sched_policy (int)
-- 
2.20.1
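
As the parameter block above shows, the discovery path stays opt-in for
now: amdgpu_discovery defaults to 0, and booting with amdgpu.discovery=1
(or loading the module with discovery=1) enables reading the IP Discovery
table from the top of VRAM.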


* [PATCH 107/459] drm/amdgpu/discovery: refactor ip list traversal
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

For each IP, check whether it is needed by the amdgpu driver;
if so, record its base addresses.

v2: change some DRM_INFO to DRM_DEBUG
v3: remove unused variable (Alex)

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 71 +++++++++++--------
 1 file changed, 42 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 697800c4741f..e049ae6a76fb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -266,7 +266,6 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
 	uint16_t num_ips;
 	uint8_t num_base_address;
 	int hw_ip;
-	int hw_id;
 	int i, j, k;
 
 	if (!adev->discovery) {
@@ -279,40 +278,54 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
 			le16_to_cpu(bhdr->table_list[IP_DISCOVERY].offset));
 	num_dies = le16_to_cpu(ihdr->num_dies);
 
-	for (hw_ip = 0; hw_ip < MAX_HWIP; hw_ip++) {
-		hw_id = hw_id_map[hw_ip];
-
-		for (i = 0; i < num_dies; i++) {
-			die_offset = le16_to_cpu(ihdr->die_info[i].die_offset);
-			dhdr = (struct die_header *)(adev->discovery + die_offset);
-			num_ips = le16_to_cpu(dhdr->num_ips);
-			ip_offset = die_offset + sizeof(*dhdr);
-
-			for (j = 0; j < num_ips; j++) {
-				ip = (struct ip *)(adev->discovery + ip_offset);
-				num_base_address = ip->num_base_address;
-
-				if (le16_to_cpu(ip->hw_id) == hw_id) {
-					DRM_DEBUG("%s(%d) v%d.%d.%d:\n",
-						  hw_id_names[hw_id], hw_id,
-						  ip->major, ip->minor,
-						  ip->revision);
-
-					for (k = 0; k < num_base_address; k++) {
-						/*
-						 * convert the endianness of base addresses in place,
-						 * so that we don't need to convert them when accessing adev->reg_offset.
-						 */
-						ip->base_address[k] = le32_to_cpu(ip->base_address[k]);
-						DRM_DEBUG("\t0x%08x\n", ip->base_address[k]);
-					}
+	DRM_DEBUG("number of dies: %d\n", num_dies);
 
+	for (i = 0; i < num_dies; i++) {
+		die_offset = le16_to_cpu(ihdr->die_info[i].die_offset);
+		dhdr = (struct die_header *)(adev->discovery + die_offset);
+		num_ips = le16_to_cpu(dhdr->num_ips);
+		ip_offset = die_offset + sizeof(*dhdr);
+
+		if (le16_to_cpu(dhdr->die_id) != i) {
+			DRM_ERROR("invalid die id %d, expected %d\n",
+					le16_to_cpu(dhdr->die_id), i);
+			return -EINVAL;
+		}
+
+		DRM_DEBUG("number of hardware IPs on die%d: %d\n",
+				le16_to_cpu(dhdr->die_id), num_ips);
+
+		for (j = 0; j < num_ips; j++) {
+			ip = (struct ip *)(adev->discovery + ip_offset);
+			num_base_address = ip->num_base_address;
+
+			DRM_DEBUG("%s(%d) #%d v%d.%d.%d:\n",
+				  hw_id_names[le16_to_cpu(ip->hw_id)],
+				  le16_to_cpu(ip->hw_id),
+				  ip->number_instance,
+				  ip->major, ip->minor,
+				  ip->revision);
+
+			for (k = 0; k < num_base_address; k++) {
+				/*
+				 * convert the endianness of base addresses in place,
+				 * so that we don't need to convert them when accessing adev->reg_offset.
+				 */
+				ip->base_address[k] = le32_to_cpu(ip->base_address[k]);
+				DRM_DEBUG("\t0x%08x\n", ip->base_address[k]);
+			}
+
+			for (hw_ip = 0; hw_ip < MAX_HWIP; hw_ip++) {
+				if (hw_id_map[hw_ip] == le16_to_cpu(ip->hw_id)) {
+					DRM_INFO("set register base offset for %s\n",
+							hw_id_names[le16_to_cpu(ip->hw_id)]);
 					adev->reg_offset[hw_ip][ip->number_instance] =
 						ip->base_address;
 				}
 
-				ip_offset += sizeof(*ip) + 4 * (ip->num_base_address - 1);
 			}
+
+			ip_offset += sizeof(*ip) + 4 * (ip->num_base_address - 1);
 		}
 	}
 
-- 
2.20.1
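
The nesting this walks, pieced together from discovery.h (sketch):

	binary_header
	  table_list[IP_DISCOVERY].offset -> ip_discovery_header
	    die_info[i].die_offset -> die_header { die_id, num_ips }
	      num_ips back-to-back ip entries, each of variable length:
	        sizeof(struct ip) + 4 * (ip->num_base_address - 1)

Because each entry's size depends on its own num_base_address, the list
can only be advanced entry by entry, which is why the loop recomputes
ip_offset instead of indexing into an array.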


* [PATCH 108/459] drm/amdgpu: disable concurrent flushes for Navi10 v2
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Christian König

From: Christian König <christian.koenig@amd.com>

Navi10 has a bug in the SDMA which can theoretically cause memory
corruption with concurrent VMID flushes.

v2: explicitly check Navi10

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index df9b173c3d0b..5899d214187b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -364,8 +364,11 @@ static int amdgpu_vmid_grab_used(struct amdgpu_vm *vm,
 		if (updates && (!flushed || dma_fence_is_later(updates, flushed)))
 			needs_flush = true;
 
-		/* Concurrent flushes are only possible starting with Vega10 */
-		if (adev->asic_type < CHIP_VEGA10 && needs_flush)
+		/* Concurrent flushes are only possible starting with Vega10 and
+		 * are broken on Navi10 and Navi14.
+		 */
+		if (needs_flush && (adev->asic_type < CHIP_VEGA10 ||
+				    adev->asic_type == CHIP_NAVI10))
 			continue;
 
 		/* Good, we can use this VMID. Remember this submission as
-- 
2.20.1


* [PATCH 109/459] drm/amdgpu: add pa_sc_tile_steering_override to drm_amdgpu_info_device
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

The initial/default value of pa_sc_tile_steering_override needs to
be exposed to the user mode driver.

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 include/uapi/drm/amdgpu_drm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index 344c36e89923..61870478bc9c 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -995,6 +995,8 @@ struct drm_amdgpu_info_device {
 	__u64 high_va_offset;
 	/** The maximum high virtual address */
 	__u64 high_va_max;
+	/* gfx10 pa_sc_tile_steering_override */
+	__u32 pa_sc_tile_steering_override;
 };
 
 struct drm_amdgpu_info_hw_ip {
-- 
2.20.1
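
How a user mode driver might pick the new field up, sketched with libdrm
(assumes an already-initialized amdgpu_device_handle dev; error handling
elided):

	#include <stdio.h>
	#include <amdgpu.h>
	#include <amdgpu_drm.h>

	struct drm_amdgpu_info_device dev_info = {0};

	/* the kernel copies min(size, sizeof(dev_info)) bytes back, so an
	 * older kernel simply leaves the new trailing field zeroed */
	amdgpu_query_info(dev, AMDGPU_INFO_DEV_INFO,
			  sizeof(dev_info), &dev_info);
	printf("pa_sc_tile_steering_override = 0x%08x\n",
	       dev_info.pa_sc_tile_steering_override);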


* [PATCH 110/459] drm/amdgpu: set the default value of pa_sc_tile_steering_override
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

So userspace can access it.

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index e56704dd841b..ed051fdb509f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -762,6 +762,10 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
 		dev_info.gs_prim_buffer_depth = adev->gfx.config.gs_prim_buffer_depth;
 		dev_info.max_gs_waves_per_vgt = adev->gfx.config.max_gs_threads;
 
+		if (adev->family >= AMDGPU_FAMILY_NV)
+			dev_info.pa_sc_tile_steering_override =
+				adev->gfx.config.pa_sc_tile_steering_override;
+
 		return copy_to_user(out, &dev_info,
 				    min((size_t)size, sizeof(dev_info))) ? -EFAULT : 0;
 	}
-- 
2.20.1


* [PATCH 111/459] drm/amdgpu: add initial support for sdma v5.0 (v6)
From: Alex Deucher @ 2019-06-17 19:25 UTC
  To: amd-gfx@lists.freedesktop.org
  Cc: Alex Deucher, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

SDMA (System DMA) is a general purpose DMA engine usable
by UMDs for transfers or by the kernel for paging and GPUVM
updates.

v1: support basic functionality including rb, ib, vm,
    copy buffer and trap irq
v2: convert to use new get_vm_pde in emit_vm_flush
v3: retire amdgpu_ttm_set_active_vram_size from sdma v5
v4: retire the redundant hdp_invalidate implementation
v5: squash in updates
v6: some golden regs moved to vbios

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile    |    3 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 1685 ++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.h |   45 +
 3 files changed, 1732 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index e62f7cdf8823..0e7a402d5ef8 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -117,7 +117,8 @@ amdgpu-y += \
 	amdgpu_sdma.o \
 	sdma_v2_4.o \
 	sdma_v3_0.o \
-	sdma_v4_0.o
+	sdma_v4_0.o \
+	sdma_v5_0.o
 
 # add UVD block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
new file mode 100644
index 000000000000..083f81611e24
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -0,0 +1,1685 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/firmware.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_ucode.h"
+#include "amdgpu_trace.h"
+
+#include "gc/gc_10_1_0_offset.h"
+#include "gc/gc_10_1_0_sh_mask.h"
+#include "hdp/hdp_5_0_0_offset.h"
+#include "ivsrcid/sdma0/irqsrcs_sdma0_5_0.h"
+#include "ivsrcid/sdma1/irqsrcs_sdma1_5_0.h"
+
+#include "soc15_common.h"
+#include "soc15.h"
+#include "navi10_sdma_pkt_open.h"
+#include "nbio_v2_3.h"
+#include "sdma_v5_0.h"
+
+MODULE_FIRMWARE("amdgpu/navi10_sdma.bin");
+MODULE_FIRMWARE("amdgpu/navi10_sdma1.bin");
+
+#define SDMA1_REG_OFFSET 0x600
+#define SDMA0_HYP_DEC_REG_START 0x5880
+#define SDMA0_HYP_DEC_REG_END 0x5893
+#define SDMA1_HYP_DEC_REG_OFFSET 0x20
+
+static void sdma_v5_0_set_ring_funcs(struct amdgpu_device *adev);
+static void sdma_v5_0_set_buffer_funcs(struct amdgpu_device *adev);
+static void sdma_v5_0_set_vm_pte_funcs(struct amdgpu_device *adev);
+static void sdma_v5_0_set_irq_funcs(struct amdgpu_device *adev);
+
+static const struct soc15_reg_golden golden_settings_sdma_5[] = {
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_CHICKEN_BITS, 0xffbf1f0f, 0x03ab0107),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_GFX_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_PAGE_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC2_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC3_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC4_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC5_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC6_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC7_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_UTCL1_PAGE, 0x00ffffff, 0x000c5c00),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_CHICKEN_BITS, 0xffbf1f0f, 0x03ab0107),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_GFX_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_PAGE_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC2_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC3_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC4_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC5_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC6_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC7_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_UTCL1_PAGE, 0x00ffffff, 0x000c5c00)
+};
+
+static const struct soc15_reg_golden golden_settings_sdma_nv10[] = {
+};
+
+static u32 sdma_v5_0_get_reg_offset(struct amdgpu_device *adev, u32 instance, u32 internal_offset)
+{
+	u32 base;
+
+	if (internal_offset >= SDMA0_HYP_DEC_REG_START &&
+	    internal_offset <= SDMA0_HYP_DEC_REG_END) {
+		base = adev->reg_offset[GC_HWIP][0][1];
+		if (instance == 1)
+			internal_offset += SDMA1_HYP_DEC_REG_OFFSET;
+	} else {
+		base = adev->reg_offset[GC_HWIP][0][0];
+		if (instance == 1)
+			internal_offset += SDMA1_REG_OFFSET;
+	}
+
+	return base + internal_offset;
+}
+
+static void sdma_v5_0_init_golden_registers(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		soc15_program_register_sequence(adev,
+						golden_settings_sdma_5,
+						(const u32)ARRAY_SIZE(golden_settings_sdma_5));
+		soc15_program_register_sequence(adev,
+						golden_settings_sdma_nv10,
+						(const u32)ARRAY_SIZE(golden_settings_sdma_nv10));
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * sdma_v5_0_init_microcode - load ucode images from disk
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Use the firmware interface to load the ucode images into
+ * the driver (not loaded into hw).
+ * Returns 0 on success, error on failure.
+ */
+
+// emulation only, won't work on a real chip
+// a real navi10 chip needs to use PSP to load firmware
+static int sdma_v5_0_init_microcode(struct amdgpu_device *adev)
+{
+	const char *chip_name;
+	char fw_name[30];
+	int err = 0, i;
+	struct amdgpu_firmware_info *info = NULL;
+	const struct common_firmware_header *header = NULL;
+	const struct sdma_firmware_header_v1_0 *hdr;
+
+	DRM_DEBUG("\n");
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		chip_name = "navi10";
+		break;
+	default:
+		BUG();
+	}
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		if (i == 0)
+			snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_sdma.bin", chip_name);
+		else
+			snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_sdma1.bin", chip_name);
+		err = request_firmware(&adev->sdma.instance[i].fw, fw_name, adev->dev);
+		if (err)
+			goto out;
+		err = amdgpu_ucode_validate(adev->sdma.instance[i].fw);
+		if (err)
+			goto out;
+		hdr = (const struct sdma_firmware_header_v1_0 *)adev->sdma.instance[i].fw->data;
+		adev->sdma.instance[i].fw_version = le32_to_cpu(hdr->header.ucode_version);
+		adev->sdma.instance[i].feature_version = le32_to_cpu(hdr->ucode_feature_version);
+		if (adev->sdma.instance[i].feature_version >= 20)
+			adev->sdma.instance[i].burst_nop = true;
+		DRM_DEBUG("psp_load == '%s'\n",
+				adev->firmware.load_type == AMDGPU_FW_LOAD_PSP ? "true" : "false");
+
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SDMA0 + i];
+			info->ucode_id = AMDGPU_UCODE_ID_SDMA0 + i;
+			info->fw = adev->sdma.instance[i].fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+		}
+	}
+out:
+	if (err) {
+		DRM_ERROR("sdma_v5_0: Failed to load firmware \"%s\"\n", fw_name);
+		for (i = 0; i < adev->sdma.num_instances; i++) {
+			release_firmware(adev->sdma.instance[i].fw);
+			adev->sdma.instance[i].fw = NULL;
+		}
+	}
+	return err;
+}
+
+static unsigned sdma_v5_0_ring_init_cond_exec(struct amdgpu_ring *ring)
+{
+	unsigned ret;
+
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_COND_EXE));
+	amdgpu_ring_write(ring, lower_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, 1);
+	ret = ring->wptr & ring->buf_mask;/* this is the offset we need to patch later */
+	amdgpu_ring_write(ring, 0x55aa55aa);/* insert dummy here and patch it later */
+
+	return ret;
+}
+
+static void sdma_v5_0_ring_patch_cond_exec(struct amdgpu_ring *ring,
+					   unsigned offset)
+{
+	unsigned cur;
+
+	BUG_ON(offset > ring->buf_mask);
+	BUG_ON(ring->ring[offset] != 0x55aa55aa);
+
+	cur = (ring->wptr - 1) & ring->buf_mask;
+	if (cur > offset)
+		ring->ring[offset] = cur - offset;
+	else
+		ring->ring[offset] = (ring->buf_mask + 1) - offset + cur;
+}
+
+/**
+ * sdma_v5_0_ring_get_rptr - get the current read pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Get the current rptr from the hardware (NAVI10+).
+ */
+static uint64_t sdma_v5_0_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	u64 *rptr;
+
+	/* XXX check if swapping is necessary on BE */
+	rptr = ((u64 *)&ring->adev->wb.wb[ring->rptr_offs]);
+
+	DRM_DEBUG("rptr before shift == 0x%016llx\n", *rptr);
+	return ((*rptr) >> 2);
+}
+
+/**
+ * sdma_v5_0_ring_get_wptr - get the current write pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Get the current wptr from the hardware (NAVI10+).
+ */
+static uint64_t sdma_v5_0_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u64 *wptr = NULL;
+	uint64_t local_wptr = 0;
+
+	if (ring->use_doorbell) {
+		/* XXX check if swapping is necessary on BE */
+		wptr = ((u64 *)&adev->wb.wb[ring->wptr_offs]);
+		DRM_DEBUG("wptr/doorbell before shift == 0x%016llx\n", *wptr);
+		*wptr = (*wptr) >> 2;
+		DRM_DEBUG("wptr/doorbell after shift == 0x%016llx\n", *wptr);
+	} else {
+		u32 lowbit, highbit;
+
+		wptr = &local_wptr;
+		lowbit = RREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR)) >> 2;
+		highbit = RREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI)) >> 2;
+
+		DRM_DEBUG("wptr [%i]high== 0x%08x low==0x%08x\n",
+				ring->me, highbit, lowbit);
+		*wptr = highbit;
+		*wptr = (*wptr) << 32;
+		*wptr |= lowbit;
+	}
+
+	return *wptr;
+}
+
+/**
+ * sdma_v5_0_ring_set_wptr - commit the write pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Write the wptr back to the hardware (NAVI10+).
+ */
+static void sdma_v5_0_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	DRM_DEBUG("Setting write pointer\n");
+	if (ring->use_doorbell) {
+		DRM_DEBUG("Using doorbell -- "
+				"wptr_offs == 0x%08x "
+				"lower_32_bits(ring->wptr) << 2 == 0x%08x "
+				"upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
+				ring->wptr_offs,
+				lower_32_bits(ring->wptr << 2),
+				upper_32_bits(ring->wptr << 2));
+		/* XXX check if swapping is necessary on BE */
+		adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr << 2);
+		adev->wb.wb[ring->wptr_offs + 1] = upper_32_bits(ring->wptr << 2);
+		DRM_DEBUG("calling WDOORBELL64(0x%08x, 0x%016llx)\n",
+				ring->doorbell_index, ring->wptr << 2);
+		WDOORBELL64(ring->doorbell_index, ring->wptr << 2);
+	} else {
+		DRM_DEBUG("Not using doorbell -- "
+				"mmSDMA%i_GFX_RB_WPTR == 0x%08x "
+				"mmSDMA%i_GFX_RB_WPTR_HI == 0x%08x\n",
+				ring->me,
+				lower_32_bits(ring->wptr << 2),
+				ring->me,
+				upper_32_bits(ring->wptr << 2));
+		WREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR),
+			lower_32_bits(ring->wptr << 2));
+		WREG32(sdma_v5_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI),
+			upper_32_bits(ring->wptr << 2));
+	}
+}
+
+static void sdma_v5_0_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count)
+{
+	struct amdgpu_sdma_instance *sdma = amdgpu_sdma_get_instance_from_ring(ring);
+	int i;
+
+	for (i = 0; i < count; i++)
+		if (sdma && sdma->burst_nop && (i == 0))
+			amdgpu_ring_write(ring, ring->funcs->nop |
+				SDMA_PKT_NOP_HEADER_COUNT(count - 1));
+		else
+			amdgpu_ring_write(ring, ring->funcs->nop);
+}
+
+/**
+ * sdma_v5_0_ring_emit_ib - Schedule an IB on the DMA engine
+ *
+ * @ring: amdgpu ring pointer
+ * @ib: IB object to schedule
+ *
+ * Schedule an IB in the DMA ring (NAVI10).
+ */
+static void sdma_v5_0_ring_emit_ib(struct amdgpu_ring *ring,
+				   struct amdgpu_job *job,
+				   struct amdgpu_ib *ib,
+				   uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+	uint64_t csa_mc_addr = amdgpu_sdma_get_csa_mc_addr(ring, vmid);
+
+	/* IB packet must end on an 8 DW boundary */
+	sdma_v5_0_ring_insert_nop(ring, (10 - (lower_32_bits(ring->wptr) & 7)) % 8);
+
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_INDIRECT) |
+			  SDMA_PKT_INDIRECT_HEADER_VMID(vmid & 0xf));
+	/* base must be 32 byte aligned */
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr) & 0xffffffe0);
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, ib->length_dw);
+	amdgpu_ring_write(ring, lower_32_bits(csa_mc_addr));
+	amdgpu_ring_write(ring, upper_32_bits(csa_mc_addr));
+}
+
+/**
+ * sdma_v5_0_ring_emit_hdp_flush - emit an hdp flush on the DMA ring
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Emit an hdp flush packet on the requested DMA ring.
+ */
+static void sdma_v5_0_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u32 ref_and_mask = 0;
+	const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio_funcs->hdp_flush_reg;
+
+	if (ring->me == 0)
+		ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0;
+	else
+		ref_and_mask = nbio_hf_reg->ref_and_mask_sdma1;
+
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(1) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* == */
+	amdgpu_ring_write(ring, (adev->nbio_funcs->get_hdp_flush_done_offset(adev)) << 2);
+	amdgpu_ring_write(ring, (adev->nbio_funcs->get_hdp_flush_req_offset(adev)) << 2);
+	amdgpu_ring_write(ring, ref_and_mask); /* reference */
+	amdgpu_ring_write(ring, ref_and_mask); /* mask */
+	amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+			  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10)); /* retry count, poll interval */
+}
+
+/**
+ * sdma_v5_0_ring_emit_fence - emit a fence on the DMA ring
+ *
+ * @ring: amdgpu ring pointer
+ * @fence: amdgpu fence object
+ *
+ * Add a DMA fence packet to the ring to write
+ * the fence seq number and DMA trap packet to generate
+ * an interrupt if needed (NAVI10).
+ */
+static void sdma_v5_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+				      unsigned flags)
+{
+	struct amdgpu_device *adev = ring->adev;
+	bool write64bit = flags & AMDGPU_FENCE_FLAG_64BIT;
+	/* write the fence */
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_FENCE) |
+			  SDMA_PKT_FENCE_HEADER_MTYPE(0x3)); /* Ucached(UC) */
+	/* zero in first two bits */
+	BUG_ON(addr & 0x3);
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+
+	/* optionally write high bits as well */
+	if (write64bit) {
+		addr += 4;
+		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_FENCE) |
+				  SDMA_PKT_FENCE_HEADER_MTYPE(0x3));
+		/* zero in first two bits */
+		BUG_ON(addr & 0x3);
+		amdgpu_ring_write(ring, lower_32_bits(addr));
+		amdgpu_ring_write(ring, upper_32_bits(addr));
+		amdgpu_ring_write(ring, upper_32_bits(seq));
+	}
+
+	/* Interrupts don't work correctly on the GFX10.1 model yet; use the fallback instead */
+	if ((flags & AMDGPU_FENCE_FLAG_INT) && adev->pdev->device != 0x50) {
+		/* generate an interrupt */
+		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_TRAP));
+		amdgpu_ring_write(ring, SDMA_PKT_TRAP_INT_CONTEXT_INT_CONTEXT(0));
+	}
+}
+
+
+/**
+ * sdma_v5_0_gfx_stop - stop the gfx async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Stop the gfx async dma ring buffers (NAVI10).
+ */
+static void sdma_v5_0_gfx_stop(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
+	struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
+	u32 rb_cntl, ib_cntl;
+	int i;
+
+	if ((adev->mman.buffer_funcs_ring == sdma0) ||
+	    (adev->mman.buffer_funcs_ring == sdma1))
+		amdgpu_ttm_set_buffer_funcs_status(adev, false);
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		rb_cntl = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL));
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 0);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
+		ib_cntl = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL));
+		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
+	}
+
+	sdma0->sched.ready = false;
+	sdma1->sched.ready = false;
+}
+
+/**
+ * sdma_v5_0_rlc_stop - stop the compute async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Stop the compute async dma queues (NAVI10).
+ */
+static void sdma_v5_0_rlc_stop(struct amdgpu_device *adev)
+{
+	/* XXX todo */
+}
+
+/**
+ * sdma_v5_0_ctx_switch_enable - stop the async dma engines context switch
+ *
+ * @adev: amdgpu_device pointer
+ * @enable: enable/disable the DMA MEs context switch.
+ *
+ * Halt or unhalt the async dma engines context switch (NAVI10).
+ */
+static void sdma_v5_0_ctx_switch_enable(struct amdgpu_device *adev, bool enable)
+{
+	u32 f32_cntl, phase_quantum = 0;
+	int i;
+
+	if (amdgpu_sdma_phase_quantum) {
+		unsigned value = amdgpu_sdma_phase_quantum;
+		unsigned unit = 0;
+
+		while (value > (SDMA0_PHASE0_QUANTUM__VALUE_MASK >>
+				SDMA0_PHASE0_QUANTUM__VALUE__SHIFT)) {
+			value = (value + 1) >> 1;
+			unit++;
+		}
+		if (unit > (SDMA0_PHASE0_QUANTUM__UNIT_MASK >>
+			    SDMA0_PHASE0_QUANTUM__UNIT__SHIFT)) {
+			value = (SDMA0_PHASE0_QUANTUM__VALUE_MASK >>
+				 SDMA0_PHASE0_QUANTUM__VALUE__SHIFT);
+			unit = (SDMA0_PHASE0_QUANTUM__UNIT_MASK >>
+				SDMA0_PHASE0_QUANTUM__UNIT__SHIFT);
+			WARN_ONCE(1,
+			"clamping sdma_phase_quantum to %uK clock cycles\n",
+				  value << unit);
+		}
+		phase_quantum =
+			value << SDMA0_PHASE0_QUANTUM__VALUE__SHIFT |
+			unit  << SDMA0_PHASE0_QUANTUM__UNIT__SHIFT;
+	}
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		f32_cntl = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CNTL));
+		f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
+				AUTO_CTXSW_ENABLE, enable ? 1 : 0);
+		if (enable && amdgpu_sdma_phase_quantum) {
+			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_PHASE0_QUANTUM),
+			       phase_quantum);
+			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_PHASE1_QUANTUM),
+			       phase_quantum);
+			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_PHASE2_QUANTUM),
+			       phase_quantum);
+		}
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CNTL), f32_cntl);
+	}
+
+}
+
+/**
+ * sdma_v5_0_enable - stop the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ * @enable: enable/disable the DMA MEs.
+ *
+ * Halt or unhalt the async dma engines (NAVI10).
+ */
+static void sdma_v5_0_enable(struct amdgpu_device *adev, bool enable)
+{
+	u32 f32_cntl;
+	int i;
+
+	if (enable == false) {
+		sdma_v5_0_gfx_stop(adev);
+		sdma_v5_0_rlc_stop(adev);
+	}
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		f32_cntl = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_F32_CNTL));
+		f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_F32_CNTL, HALT, enable ? 0 : 1);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_F32_CNTL), f32_cntl);
+	}
+}
+
+/**
+ * sdma_v5_0_gfx_resume - setup and start the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Set up the gfx DMA ring buffers and enable them (NAVI10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v5_0_gfx_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	u32 rb_cntl, ib_cntl;
+	u32 rb_bufsz;
+	u32 wb_offset;
+	u32 doorbell;
+	u32 doorbell_offset;
+	u32 temp;
+	u32 wptr_gpu_addr, wptr_poll_cntl;
+	int i, r;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		ring = &adev->sdma.instance[i].ring;
+		wb_offset = (ring->rptr_offs * 4);
+
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_SEM_WAIT_FAIL_TIMER_CNTL), 0);
+
+		/* Set ring buffer size in dwords */
+		rb_bufsz = order_base_2(ring->ring_size / 4);
+		rb_cntl = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL));
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SIZE, rb_bufsz);
+#ifdef __BIG_ENDIAN
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SWAP_ENABLE, 1);
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL,
+					RPTR_WRITEBACK_SWAP_ENABLE, 1);
+#endif
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
+
+		/* Initialize the ring buffer's read and write pointers */
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR), 0);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR_HI), 0);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), 0);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI), 0);
+
+		/* setup the wptr shadow polling */
+		wptr_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_POLL_ADDR_LO),
+		       lower_32_bits(wptr_gpu_addr));
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_POLL_ADDR_HI),
+		       upper_32_bits(wptr_gpu_addr));
+		wptr_poll_cntl = RREG32(sdma_v5_0_get_reg_offset(adev, i,
+							 mmSDMA0_GFX_RB_WPTR_POLL_CNTL));
+		wptr_poll_cntl = REG_SET_FIELD(wptr_poll_cntl,
+					       SDMA0_GFX_RB_WPTR_POLL_CNTL,
+					       F32_POLL_ENABLE, 1);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_POLL_CNTL),
+		       wptr_poll_cntl);
+
+		/* set the wb address whether it's enabled or not */
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR_ADDR_HI),
+		       upper_32_bits(adev->wb.gpu_addr + wb_offset) & 0xFFFFFFFF);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR_ADDR_LO),
+		       lower_32_bits(adev->wb.gpu_addr + wb_offset) & 0xFFFFFFFC);
+
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RPTR_WRITEBACK_ENABLE, 1);
+
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_BASE), ring->gpu_addr >> 8);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_BASE_HI), ring->gpu_addr >> 40);
+
+		ring->wptr = 0;
+
+		/* before programming wptr to a smaller value, minor_ptr_update must be set first */
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_MINOR_PTR_UPDATE), 1);
+
+		if (!amdgpu_sriov_vf(adev)) { /* only bare-metal use register write for wptr */
+			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr) << 2);
+			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr) << 2);
+		}
+
+		doorbell = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL));
+		doorbell_offset = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL_OFFSET));
+
+		if (ring->use_doorbell) {
+			doorbell = REG_SET_FIELD(doorbell, SDMA0_GFX_DOORBELL, ENABLE, 1);
+			doorbell_offset = REG_SET_FIELD(doorbell_offset, SDMA0_GFX_DOORBELL_OFFSET,
+					OFFSET, ring->doorbell_index);
+		} else {
+			doorbell = REG_SET_FIELD(doorbell, SDMA0_GFX_DOORBELL, ENABLE, 0);
+		}
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL), doorbell);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL_OFFSET), doorbell_offset);
+
+		adev->nbio_funcs->sdma_doorbell_range(adev, i, ring->use_doorbell,
+						      ring->doorbell_index, 20);
+
+		if (amdgpu_sriov_vf(adev))
+			sdma_v5_0_ring_set_wptr(ring);
+
+		/* set minor_ptr_update to 0 after wptr is programmed */
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_MINOR_PTR_UPDATE), 0);
+
+		/* set utc l1 enable flag always to 1 */
+		temp = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CNTL));
+		temp = REG_SET_FIELD(temp, SDMA0_CNTL, UTC_L1_ENABLE, 1);
+
+		/* enable MCBP */
+		temp = REG_SET_FIELD(temp, SDMA0_CNTL, MIDCMD_PREEMPT_ENABLE, 1);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CNTL), temp);
+
+		/* Set up RESP_MODE to non-copy addresses */
+		temp = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_UTCL1_CNTL));
+		temp = REG_SET_FIELD(temp, SDMA0_UTCL1_CNTL, RESP_MODE, 2);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_UTCL1_CNTL), temp);
+
+		/* program default cache read and write policy */
+		temp = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_UTCL1_PAGE));
+		/* clean read policy and write policy bits */
+		temp &= 0xFF0FFF;
+		temp |= ((CACHE_READ_POLICY_L2__DEFAULT << 12) | (CACHE_WRITE_POLICY_L2__DEFAULT << 14));
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_UTCL1_PAGE), temp);
+
+		if (!amdgpu_sriov_vf(adev)) {
+			/* unhalt engine */
+			temp = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_F32_CNTL));
+			temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
+			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_F32_CNTL), temp);
+		}
+
+		/* enable DMA RB */
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 1);
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
+
+		ib_cntl = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL));
+		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 1);
+#ifdef __BIG_ENDIAN
+		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_SWAP_ENABLE, 1);
+#endif
+		/* enable DMA IBs */
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
+
+		ring->sched.ready = true;
+
+		if (amdgpu_sriov_vf(adev)) { /* bare-metal sequence doesn't need the two lines below */
+			sdma_v5_0_ctx_switch_enable(adev, true);
+			sdma_v5_0_enable(adev, true);
+		}
+
+		r = amdgpu_ring_test_ring(ring);
+		if (r) {
+			ring->sched.ready = false;
+			return r;
+		}
+
+		if (adev->mman.buffer_funcs_ring == ring)
+			amdgpu_ttm_set_buffer_funcs_status(adev, true);
+	}
+
+	return 0;
+}
+
+/**
+ * sdma_v5_0_rlc_resume - setup and start the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Set up the compute DMA queues and enable them (NAVI10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v5_0_rlc_resume(struct amdgpu_device *adev)
+{
+	return 0;
+}
+
+/**
+ * sdma_v5_0_load_microcode - load the sDMA ME ucode
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Loads the sDMA0/1 ucode.
+ * Returns 0 for success, -EINVAL if the ucode is not available.
+ */
+static int sdma_v5_0_load_microcode(struct amdgpu_device *adev)
+{
+	const struct sdma_firmware_header_v1_0 *hdr;
+	const __le32 *fw_data;
+	u32 fw_size;
+	int i, j;
+
+	/* halt the MEs */
+	sdma_v5_0_enable(adev, false);
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		if (!adev->sdma.instance[i].fw)
+			return -EINVAL;
+
+		hdr = (const struct sdma_firmware_header_v1_0 *)adev->sdma.instance[i].fw->data;
+		amdgpu_ucode_print_sdma_hdr(&hdr->header);
+		fw_size = le32_to_cpu(hdr->header.ucode_size_bytes) / 4;
+
+		fw_data = (const __le32 *)
+			(adev->sdma.instance[i].fw->data +
+				le32_to_cpu(hdr->header.ucode_array_offset_bytes));
+
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_UCODE_ADDR), 0);
+
+		for (j = 0; j < fw_size; j++) {
+			if (amdgpu_emu_mode == 1 && j % 500 == 0)
+				msleep(1);
+			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_UCODE_DATA), le32_to_cpup(fw_data++));
+		}
+
+		WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_UCODE_ADDR), adev->sdma.instance[i].fw_version);
+	}
+
+	return 0;
+}
+
+/**
+ * sdma_v5_0_start - setup and start the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Set up the DMA engines and enable them (NAVI10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v5_0_start(struct amdgpu_device *adev)
+{
+	int r = 0;
+
+	if (amdgpu_sriov_vf(adev)) {
+		sdma_v5_0_ctx_switch_enable(adev, false);
+		sdma_v5_0_enable(adev, false);
+
+		/* set RB registers */
+		r = sdma_v5_0_gfx_resume(adev);
+		return r;
+	}
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+		r = sdma_v5_0_load_microcode(adev);
+		if (r)
+			return r;
+
+		/* mmSDMA_F32_CNTL reads back an invalid value right after the fw is loaded */
+		if (amdgpu_emu_mode == 1 && adev->pdev->device == 0x4d)
+			msleep(1000);
+	}
+
+	/* unhalt the MEs */
+	sdma_v5_0_enable(adev, true);
+	/* enable sdma ring preemption */
+	sdma_v5_0_ctx_switch_enable(adev, true);
+
+	/* start the gfx rings and rlc compute queues */
+	r = sdma_v5_0_gfx_resume(adev);
+	if (r)
+		return r;
+	r = sdma_v5_0_rlc_resume(adev);
+
+	return r;
+}
+
+/**
+ * sdma_v5_0_ring_test_ring - simple async dma engine test
+ *
+ * @ring: amdgpu_ring structure holding ring information
+ *
+ * Test the DMA engine by using it to write a value
+ * to memory (NAVI10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v5_0_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	unsigned i;
+	unsigned index;
+	int r;
+	u32 tmp;
+	u64 gpu_addr;
+
+	r = amdgpu_device_wb_get(adev, &index);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to allocate wb slot\n", r);
+		return r;
+	}
+
+	gpu_addr = adev->wb.gpu_addr + (index * 4);
+	tmp = 0xCAFEDEAD;
+	adev->wb.wb[index] = cpu_to_le32(tmp);
+
+	r = amdgpu_ring_alloc(ring, 5);
+	if (r) {
+		DRM_ERROR("amdgpu: dma failed to lock ring %d (%d).\n", ring->idx, r);
+		amdgpu_device_wb_free(adev, index);
+		return r;
+	}
+
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) |
+			  SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR));
+	amdgpu_ring_write(ring, lower_32_bits(gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(gpu_addr));
+	amdgpu_ring_write(ring, SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0));
+	amdgpu_ring_write(ring, 0xDEADBEEF);
+	amdgpu_ring_commit(ring);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		tmp = le32_to_cpu(adev->wb.wb[index]);
+		if (tmp == 0xDEADBEEF)
+			break;
+		if (amdgpu_emu_mode == 1)
+			msleep(1);
+		else
+			DRM_UDELAY(1);
+	}
+
+	if (i < adev->usec_timeout) {
+		if (amdgpu_emu_mode == 1)
+			DRM_INFO("ring test on %d succeeded in %d msecs\n", ring->idx, i);
+		else
+			DRM_INFO("ring test on %d succeeded in %d usecs\n", ring->idx, i);
+	} else {
+		DRM_ERROR("amdgpu: ring %d test failed (0x%08X)\n",
+			  ring->idx, tmp);
+		r = -EINVAL;
+	}
+	amdgpu_device_wb_free(adev, index);
+
+	return r;
+}
+
+/**
+ * sdma_v5_0_ring_test_ib - test an IB on the DMA engine
+ *
+ * @ring: amdgpu_ring structure holding ring information
+ * @timeout: maximum time to wait for the fence, in jiffies
+ *
+ * Test a simple IB in the DMA ring (NAVI10).
+ * Returns 0 on success, error on failure.
+ */
+static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_ib ib;
+	struct dma_fence *f = NULL;
+	unsigned index;
+	long r;
+	u32 tmp = 0;
+	u64 gpu_addr;
+
+	r = amdgpu_device_wb_get(adev, &index);
+	if (r) {
+		dev_err(adev->dev, "(%ld) failed to allocate wb slot\n", r);
+		return r;
+	}
+
+	gpu_addr = adev->wb.gpu_addr + (index * 4);
+	tmp = 0xCAFEDEAD;
+	adev->wb.wb[index] = cpu_to_le32(tmp);
+	memset(&ib, 0, sizeof(ib));
+	r = amdgpu_ib_get(adev, NULL, 256, &ib);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r);
+		goto err0;
+	}
+
+	ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR);
+	ib.ptr[1] = lower_32_bits(gpu_addr);
+	ib.ptr[2] = upper_32_bits(gpu_addr);
+	ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0);
+	ib.ptr[4] = 0xDEADBEEF;
+	ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
+	ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
+	ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
+	ib.length_dw = 8;
+
+	r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f);
+	if (r)
+		goto err1;
+
+	r = dma_fence_wait_timeout(f, false, timeout);
+	if (r == 0) {
+		DRM_ERROR("amdgpu: IB test timed out\n");
+		r = -ETIMEDOUT;
+		goto err1;
+	} else if (r < 0) {
+		DRM_ERROR("amdgpu: fence wait failed (%ld).\n", r);
+		goto err1;
+	}
+	tmp = le32_to_cpu(adev->wb.wb[index]);
+	if (tmp == 0xDEADBEEF) {
+		DRM_INFO("ib test on ring %d succeeded\n", ring->idx);
+		r = 0;
+	} else {
+		DRM_ERROR("amdgpu: ib test failed (0x%08X)\n", tmp);
+		r = -EINVAL;
+	}
+
+err1:
+	amdgpu_ib_free(adev, &ib, NULL);
+	dma_fence_put(f);
+err0:
+	amdgpu_device_wb_free(adev, index);
+	return r;
+}
+
+
+/**
+ * sdma_v5_0_vm_copy_pte - update PTEs by copying them from the GART
+ *
+ * @ib: indirect buffer to fill with commands
+ * @pe: addr of the page entry
+ * @src: src addr to copy from
+ * @count: number of page entries to update
+ *
+ * Update PTEs by copying them from the GART using sDMA (NAVI10).
+ */
+static void sdma_v5_0_vm_copy_pte(struct amdgpu_ib *ib,
+				  uint64_t pe, uint64_t src,
+				  unsigned count)
+{
+	unsigned bytes = count * 8;
+
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_COPY) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR);
+	ib->ptr[ib->length_dw++] = bytes - 1;
+	ib->ptr[ib->length_dw++] = 0; /* src/dst endian swap */
+	ib->ptr[ib->length_dw++] = lower_32_bits(src);
+	ib->ptr[ib->length_dw++] = upper_32_bits(src);
+	ib->ptr[ib->length_dw++] = lower_32_bits(pe);
+	ib->ptr[ib->length_dw++] = upper_32_bits(pe);
+}
+
+/**
+ * sdma_v5_0_vm_write_pte - update PTEs by writing them manually
+ *
+ * @ib: indirect buffer to fill with commands
+ * @pe: addr of the page entry
+ * @value: content to write into the PTEs
+ * @count: number of page entries to update
+ * @incr: increase next PTE value by incr bytes
+ *
+ * Update PTEs by writing them manually using sDMA (NAVI10).
+ */
+static void sdma_v5_0_vm_write_pte(struct amdgpu_ib *ib, uint64_t pe,
+				   uint64_t value, unsigned count,
+				   uint32_t incr)
+{
+	unsigned ndw = count * 2;
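+	/* each 64-bit PTE occupies two dwords in the packet body */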
+
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR);
+	ib->ptr[ib->length_dw++] = lower_32_bits(pe);
+	ib->ptr[ib->length_dw++] = upper_32_bits(pe);
+	ib->ptr[ib->length_dw++] = ndw - 1;
+	for (; ndw > 0; ndw -= 2) {
+		ib->ptr[ib->length_dw++] = lower_32_bits(value);
+		ib->ptr[ib->length_dw++] = upper_32_bits(value);
+		value += incr;
+	}
+}
+
+/**
+ * sdma_v5_0_vm_set_pte_pde - update the page tables using sDMA
+ *
+ * @ib: indirect buffer to fill with commands
+ * @pe: addr of the page entry
+ * @addr: dst addr to write into pe
+ * @count: number of page entries to update
+ * @incr: increase next addr by incr bytes
+ * @flags: access flags
+ *
+ * Update the page tables using sDMA (NAVI10).
+ */
+static void sdma_v5_0_vm_set_pte_pde(struct amdgpu_ib *ib,
+				     uint64_t pe,
+				     uint64_t addr, unsigned count,
+				     uint32_t incr, uint64_t flags)
+{
+	/* for physically contiguous pages (vram) */
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_PTEPDE);
+	ib->ptr[ib->length_dw++] = lower_32_bits(pe); /* dst addr */
+	ib->ptr[ib->length_dw++] = upper_32_bits(pe);
+	ib->ptr[ib->length_dw++] = lower_32_bits(flags); /* mask */
+	ib->ptr[ib->length_dw++] = upper_32_bits(flags);
+	ib->ptr[ib->length_dw++] = lower_32_bits(addr); /* value */
+	ib->ptr[ib->length_dw++] = upper_32_bits(addr);
+	ib->ptr[ib->length_dw++] = incr; /* increment size */
+	ib->ptr[ib->length_dw++] = 0;
+	ib->ptr[ib->length_dw++] = count - 1; /* number of entries */
+}
+
+/**
+ * sdma_v5_0_ring_pad_ib - pad the IB to the required number of dw
+ *
+ * @ring: amdgpu_ring structure holding ring information
+ * @ib: indirect buffer to fill with padding
+ *
+ * Pad the IB with NOPs so its length is a multiple of 8 dwords (NAVI10).
+ */
+static void sdma_v5_0_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib)
+{
+	struct amdgpu_sdma_instance *sdma = amdgpu_sdma_get_instance_from_ring(ring);
+	u32 pad_count;
+	int i;
+
+	pad_count = (8 - (ib->length_dw & 0x7)) % 8;
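+	/* e.g. length_dw == 13: 13 & 0x7 == 5, so 3 NOP dwords pad the IB to 16 */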
+	for (i = 0; i < pad_count; i++)
+		if (sdma && sdma->burst_nop && (i == 0))
+			ib->ptr[ib->length_dw++] =
+				SDMA_PKT_HEADER_OP(SDMA_OP_NOP) |
+				SDMA_PKT_NOP_HEADER_COUNT(pad_count - 1);
+		else
+			ib->ptr[ib->length_dw++] =
+				SDMA_PKT_HEADER_OP(SDMA_OP_NOP);
+}
+
+
+/**
+ * sdma_v5_0_ring_emit_pipeline_sync - sync the pipeline
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Make sure all previous operations are completed (NAVI10).
+ */
+static void sdma_v5_0_ring_emit_pipeline_sync(struct amdgpu_ring *ring)
+{
+	uint32_t seq = ring->fence_drv.sync_seq;
+	uint64_t addr = ring->fence_drv.gpu_addr;
+
+	/* wait for idle */
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(0) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3) | /* equal */
+			  SDMA_PKT_POLL_REGMEM_HEADER_MEM_POLL(1));
+	amdgpu_ring_write(ring, addr & 0xfffffffc);
+	amdgpu_ring_write(ring, upper_32_bits(addr) & 0xffffffff);
+	amdgpu_ring_write(ring, seq); /* reference */
+	amdgpu_ring_write(ring, 0xfffffff); /* mask */
+	amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+			  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(4)); /* retry count, poll interval */
+}
+
+
+/**
+ * sdma_v5_0_ring_emit_vm_flush - vm flush using sDMA
+ *
+ * @ring: amdgpu_ring pointer
+ * @vmid: VM ID to flush
+ * @pd_addr: address of the page directory
+ *
+ * Update the page table base and flush the VM TLB
+ * using sDMA (NAVI10).
+ */
+static void sdma_v5_0_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					 unsigned vmid, uint64_t pd_addr)
+{
+	amdgpu_gmc_emit_flush_gpu_tlb(ring, vmid, pd_addr);
+}
+
+static void sdma_v5_0_ring_emit_wreg(struct amdgpu_ring *ring,
+				     uint32_t reg, uint32_t val)
+{
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_SRBM_WRITE) |
+			  SDMA_PKT_SRBM_WRITE_HEADER_BYTE_EN(0xf));
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, val);
+}
+
+static void sdma_v5_0_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
+					 uint32_t val, uint32_t mask)
+{
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(0) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* equal */
+	amdgpu_ring_write(ring, reg << 2);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, val); /* reference */
+	amdgpu_ring_write(ring, mask); /* mask */
+	amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+			  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10));
+}
+
+static int sdma_v5_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->sdma.num_instances = 2;
+
+	sdma_v5_0_set_ring_funcs(adev);
+	sdma_v5_0_set_buffer_funcs(adev);
+	sdma_v5_0_set_vm_pte_funcs(adev);
+	sdma_v5_0_set_irq_funcs(adev);
+
+	return 0;
+}
+
+
+static int sdma_v5_0_sw_init(void *handle)
+{
+	struct amdgpu_ring *ring;
+	int r, i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* SDMA trap event */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_SDMA0,
+			      SDMA0_5_0__SRCID__SDMA_TRAP,
+			      &adev->sdma.trap_irq);
+	if (r)
+		return r;
+
+	/* SDMA trap event */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_SDMA1,
+			      SDMA1_5_0__SRCID__SDMA_TRAP,
+			      &adev->sdma.trap_irq);
+	if (r)
+		return r;
+
+	r = sdma_v5_0_init_microcode(adev);
+	if (r) {
+		DRM_ERROR("Failed to load sdma firmware!\n");
+		return r;
+	}
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		ring = &adev->sdma.instance[i].ring;
+		ring->ring_obj = NULL;
+		ring->use_doorbell = true;
+
+		DRM_INFO("use_doorbell being set to: [%s]\n",
+				ring->use_doorbell?"true":"false");
+
+		ring->doorbell_index = (i == 0) ?
+			(adev->doorbell_index.sdma_engine[0] << 1) /* get DWORD offset */
+			: (adev->doorbell_index.sdma_engine[1] << 1); /* get DWORD offset */
+
+		sprintf(ring->name, "sdma%d", i);
+		r = amdgpu_ring_init(adev, ring, 1024,
+				     &adev->sdma.trap_irq,
+				     (i == 0) ?
+				     AMDGPU_SDMA_IRQ_INSTANCE0 :
+				     AMDGPU_SDMA_IRQ_INSTANCE1);
+		if (r)
+			return r;
+	}
+
+	return r;
+}
+
+static int sdma_v5_0_sw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++)
+		amdgpu_ring_fini(&adev->sdma.instance[i].ring);
+
+	return 0;
+}
+
+static int sdma_v5_0_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	sdma_v5_0_init_golden_registers(adev);
+
+	r = sdma_v5_0_start(adev);
+
+	return r;
+}
+
+static int sdma_v5_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
+	sdma_v5_0_ctx_switch_enable(adev, false);
+	sdma_v5_0_enable(adev, false);
+
+	return 0;
+}
+
+static int sdma_v5_0_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return sdma_v5_0_hw_fini(adev);
+}
+
+static int sdma_v5_0_resume(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return sdma_v5_0_hw_init(adev);
+}
+
+static bool sdma_v5_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	u32 i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		u32 tmp = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_STATUS_REG));
+
+		if (!(tmp & SDMA0_STATUS_REG__IDLE_MASK))
+			return false;
+	}
+
+	return true;
+}
+
+static int sdma_v5_0_wait_for_idle(void *handle)
+{
+	unsigned i;
+	u32 sdma0, sdma1;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		sdma0 = RREG32(sdma_v5_0_get_reg_offset(adev, 0, mmSDMA0_STATUS_REG));
+		sdma1 = RREG32(sdma_v5_0_get_reg_offset(adev, 1, mmSDMA0_STATUS_REG));
+
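+		/* AND the two status words: IDLE survives only if both engines report it */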
+		if (sdma0 & sdma1 & SDMA0_STATUS_REG__IDLE_MASK)
+			return 0;
+		udelay(1);
+	}
+	return -ETIMEDOUT;
+}
+
+static int sdma_v5_0_soft_reset(void *handle)
+{
+	/* todo */
+
+	return 0;
+}
+
+static int sdma_v5_0_ring_preempt_ib(struct amdgpu_ring *ring)
+{
+	int i, r = 0;
+	struct amdgpu_device *adev = ring->adev;
+	u32 index = 0;
+	u64 sdma_gfx_preempt;
+
+	amdgpu_sdma_get_index_from_ring(ring, &index);
+	if (index == 0)
+		sdma_gfx_preempt = mmSDMA0_GFX_PREEMPT;
+	else
+		sdma_gfx_preempt = mmSDMA1_GFX_PREEMPT;
+
+	/* assert preemption condition */
+	amdgpu_ring_set_preempt_cond_exec(ring, false);
+
+	/* emit the trailing fence */
+	ring->trail_seq += 1;
+	amdgpu_ring_alloc(ring, 10);
+	sdma_v5_0_ring_emit_fence(ring, ring->trail_fence_gpu_addr,
+				  ring->trail_seq, 0);
+	amdgpu_ring_commit(ring);
+
+	/* assert IB preemption */
+	WREG32(sdma_gfx_preempt, 1);
+
+	/* poll the trailing fence */
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (ring->trail_seq ==
+		    le32_to_cpu(*(ring->trail_fence_cpu_addr)))
+			break;
+		DRM_UDELAY(1);
+	}
+
+	if (i >= adev->usec_timeout) {
+		r = -EINVAL;
+		DRM_ERROR("ring %d failed to be preempted\n", ring->idx);
+	}
+
+	/* deassert IB preemption */
+	WREG32(sdma_gfx_preempt, 0);
+
+	/* deassert the preemption condition */
+	amdgpu_ring_set_preempt_cond_exec(ring, true);
+	return r;
+}
+
+static int sdma_v5_0_set_trap_irq_state(struct amdgpu_device *adev,
+					struct amdgpu_irq_src *source,
+					unsigned type,
+					enum amdgpu_interrupt_state state)
+{
+	u32 sdma_cntl;
+
+	u32 reg_offset = (type == AMDGPU_SDMA_IRQ_INSTANCE0) ?
+		sdma_v5_0_get_reg_offset(adev, 0, mmSDMA0_CNTL) :
+		sdma_v5_0_get_reg_offset(adev, 1, mmSDMA0_CNTL);
+
+	sdma_cntl = RREG32(reg_offset);
+	sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA0_CNTL, TRAP_ENABLE,
+		       state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
+	WREG32(reg_offset, sdma_cntl);
+
+	return 0;
+}
+
+static int sdma_v5_0_process_trap_irq(struct amdgpu_device *adev,
+				      struct amdgpu_irq_src *source,
+				      struct amdgpu_iv_entry *entry)
+{
+	DRM_DEBUG("IH: SDMA trap\n");
+	switch (entry->client_id) {
+	case SOC15_IH_CLIENTID_SDMA0:
+		switch (entry->ring_id) {
+		case 0:
+			amdgpu_fence_process(&adev->sdma.instance[0].ring);
+			break;
+		case 1:
+			/* XXX compute */
+			break;
+		case 2:
+			/* XXX compute */
+			break;
+		case 3:
+			/* XXX page queue */
+			break;
+		}
+		break;
+	case SOC15_IH_CLIENTID_SDMA1:
+		switch (entry->ring_id) {
+		case 0:
+			amdgpu_fence_process(&adev->sdma.instance[1].ring);
+			break;
+		case 1:
+			/* XXX compute */
+			break;
+		case 2:
+			/* XXX compute */
+			break;
+		case 3:
+			/* XXX page queue */
+			break;
+		}
+		break;
+	}
+	return 0;
+}
+
+static int sdma_v5_0_process_illegal_inst_irq(struct amdgpu_device *adev,
+					      struct amdgpu_irq_src *source,
+					      struct amdgpu_iv_entry *entry)
+{
+	return 0;
+}
+
+static void sdma_v5_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+						       bool enable)
+{
+	uint32_t data, def;
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		if (enable && (adev->cg_flags & AMD_CG_SUPPORT_SDMA_MGCG)) {
+			/* Enable sdma clock gating */
+			def = data = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CLK_CTRL));
+			data &= ~(SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				  SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				  SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				  SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				  SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				  SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				  SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				  SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+			if (def != data)
+				WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CLK_CTRL), data);
+		} else {
+			/* Disable sdma clock gating */
+			def = data = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CLK_CTRL));
+			data |= (SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				 SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				 SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				 SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				 SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				 SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				 SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				 SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+			if (def != data)
+				WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_CLK_CTRL), data);
+		}
+	}
+}
+
+static void sdma_v5_0_update_medium_grain_light_sleep(struct amdgpu_device *adev,
+						      bool enable)
+{
+	uint32_t data, def;
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		if (enable && (adev->cg_flags & AMD_CG_SUPPORT_SDMA_LS)) {
+			/* Enable sdma mem light sleep */
+			def = data = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_POWER_CNTL));
+			data |= SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+			if (def != data)
+				WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_POWER_CNTL), data);
+
+		} else {
+			/* Disable sdma mem light sleep */
+			def = data = RREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_POWER_CNTL));
+			data &= ~SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+			if (def != data)
+				WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_POWER_CNTL), data);
+
+		}
+	}
+}
+
+static int sdma_v5_0_set_clockgating_state(void *handle,
+					   enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		sdma_v5_0_update_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		sdma_v5_0_update_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int sdma_v5_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	return 0;
+}
+
+static void sdma_v5_0_get_clockgating_state(void *handle, u32 *flags)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int data;
+
+	if (amdgpu_sriov_vf(adev))
+		*flags = 0;
+
+	/* AMD_CG_SUPPORT_SDMA_MGCG */
+	data = RREG32(sdma_v5_0_get_reg_offset(adev, 0, mmSDMA0_CLK_CTRL));
+	if (!(data & SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK))
+		*flags |= AMD_CG_SUPPORT_SDMA_MGCG;
+
+	/* AMD_CG_SUPPORT_SDMA_LS */
+	data = RREG32(sdma_v5_0_get_reg_offset(adev, 0, mmSDMA0_POWER_CNTL));
+	if (data & SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK)
+		*flags |= AMD_CG_SUPPORT_SDMA_LS;
+}
+
+const struct amd_ip_funcs sdma_v5_0_ip_funcs = {
+	.name = "sdma_v5_0",
+	.early_init = sdma_v5_0_early_init,
+	.late_init = NULL,
+	.sw_init = sdma_v5_0_sw_init,
+	.sw_fini = sdma_v5_0_sw_fini,
+	.hw_init = sdma_v5_0_hw_init,
+	.hw_fini = sdma_v5_0_hw_fini,
+	.suspend = sdma_v5_0_suspend,
+	.resume = sdma_v5_0_resume,
+	.is_idle = sdma_v5_0_is_idle,
+	.wait_for_idle = sdma_v5_0_wait_for_idle,
+	.soft_reset = sdma_v5_0_soft_reset,
+	.set_clockgating_state = sdma_v5_0_set_clockgating_state,
+	.set_powergating_state = sdma_v5_0_set_powergating_state,
+	.get_clockgating_state = sdma_v5_0_get_clockgating_state,
+};
+
+static const struct amdgpu_ring_funcs sdma_v5_0_ring_funcs = {
+	.type = AMDGPU_RING_TYPE_SDMA,
+	.align_mask = 0xf,
+	.nop = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP),
+	.support_64bit_ptrs = true,
+	.vmhub = AMDGPU_GFXHUB,
+	.get_rptr = sdma_v5_0_ring_get_rptr,
+	.get_wptr = sdma_v5_0_ring_get_wptr,
+	.set_wptr = sdma_v5_0_ring_set_wptr,
+	.emit_frame_size =
+		5 + /* sdma_v5_0_ring_init_cond_exec */
+		6 + /* sdma_v5_0_ring_emit_hdp_flush */
+		3 + /* hdp_invalidate */
+		6 + /* sdma_v5_0_ring_emit_pipeline_sync */
+		/* sdma_v5_0_ring_emit_vm_flush */
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 6 +
+		10 + 10 + 10, /* sdma_v5_0_ring_emit_fence x3 for user fence, vm fence */
+	.emit_ib_size = 7 + 6, /* sdma_v5_0_ring_emit_ib */
+	.emit_ib = sdma_v5_0_ring_emit_ib,
+	.emit_fence = sdma_v5_0_ring_emit_fence,
+	.emit_pipeline_sync = sdma_v5_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = sdma_v5_0_ring_emit_vm_flush,
+	.emit_hdp_flush = sdma_v5_0_ring_emit_hdp_flush,
+	.test_ring = sdma_v5_0_ring_test_ring,
+	.test_ib = sdma_v5_0_ring_test_ib,
+	.insert_nop = sdma_v5_0_ring_insert_nop,
+	.pad_ib = sdma_v5_0_ring_pad_ib,
+	.emit_wreg = sdma_v5_0_ring_emit_wreg,
+	.emit_reg_wait = sdma_v5_0_ring_emit_reg_wait,
+	.init_cond_exec = sdma_v5_0_ring_init_cond_exec,
+	.patch_cond_exec = sdma_v5_0_ring_patch_cond_exec,
+	.preempt_ib = sdma_v5_0_ring_preempt_ib,
+};
+
+static void sdma_v5_0_set_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		adev->sdma.instance[i].ring.funcs = &sdma_v5_0_ring_funcs;
+		adev->sdma.instance[i].ring.me = i;
+	}
+}
+
+static const struct amdgpu_irq_src_funcs sdma_v5_0_trap_irq_funcs = {
+	.set = sdma_v5_0_set_trap_irq_state,
+	.process = sdma_v5_0_process_trap_irq,
+};
+
+static const struct amdgpu_irq_src_funcs sdma_v5_0_illegal_inst_irq_funcs = {
+	.process = sdma_v5_0_process_illegal_inst_irq,
+};
+
+static void sdma_v5_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
+	adev->sdma.trap_irq.funcs = &sdma_v5_0_trap_irq_funcs;
+	adev->sdma.illegal_inst_irq.funcs = &sdma_v5_0_illegal_inst_irq_funcs;
+}
+
+/**
+ * sdma_v5_0_emit_copy_buffer - copy buffer using the sDMA engine
+ *
+ * @ib: indirect buffer to fill with commands
+ * @src_offset: src GPU address
+ * @dst_offset: dst GPU address
+ * @byte_count: number of bytes to xfer
+ *
+ * Copy GPU buffers using the DMA engine (NAVI10).
+ * Used by the amdgpu ttm implementation to move pages if
+ * registered as the asic copy callback.
+ */
+static void sdma_v5_0_emit_copy_buffer(struct amdgpu_ib *ib,
+				       uint64_t src_offset,
+				       uint64_t dst_offset,
+				       uint32_t byte_count)
+{
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_COPY) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR);
+	ib->ptr[ib->length_dw++] = byte_count - 1;
+	ib->ptr[ib->length_dw++] = 0; /* src/dst endian swap */
+	ib->ptr[ib->length_dw++] = lower_32_bits(src_offset);
+	ib->ptr[ib->length_dw++] = upper_32_bits(src_offset);
+	ib->ptr[ib->length_dw++] = lower_32_bits(dst_offset);
+	ib->ptr[ib->length_dw++] = upper_32_bits(dst_offset);
+}
+
+/**
+ * sdma_v5_0_emit_fill_buffer - fill buffer using the sDMA engine
+ *
+ * @ib: indirect buffer to fill with commands
+ * @src_data: value to write to buffer
+ * @dst_offset: dst GPU address
+ * @byte_count: number of bytes to xfer
+ *
+ * Fill GPU buffers using the DMA engine (NAVI10).
+ */
+static void sdma_v5_0_emit_fill_buffer(struct amdgpu_ib *ib,
+				       uint32_t src_data,
+				       uint64_t dst_offset,
+				       uint32_t byte_count)
+{
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_CONST_FILL);
+	ib->ptr[ib->length_dw++] = lower_32_bits(dst_offset);
+	ib->ptr[ib->length_dw++] = upper_32_bits(dst_offset);
+	ib->ptr[ib->length_dw++] = src_data;
+	ib->ptr[ib->length_dw++] = byte_count - 1;
+}
+
+static const struct amdgpu_buffer_funcs sdma_v5_0_buffer_funcs = {
+	.copy_max_bytes = 0x400000,
+	.copy_num_dw = 7,
+	.emit_copy_buffer = sdma_v5_0_emit_copy_buffer,
+
+	.fill_max_bytes = 0x400000,
+	.fill_num_dw = 5,
+	.emit_fill_buffer = sdma_v5_0_emit_fill_buffer,
+};
+
+static void sdma_v5_0_set_buffer_funcs(struct amdgpu_device *adev)
+{
+	if (adev->mman.buffer_funcs == NULL) {
+		adev->mman.buffer_funcs = &sdma_v5_0_buffer_funcs;
+		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	}
+}
+
+static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
+	.copy_pte_num_dw = 7,
+	.copy_pte = sdma_v5_0_vm_copy_pte,
+	.write_pte = sdma_v5_0_vm_write_pte,
+	.set_pte_pde = sdma_v5_0_vm_set_pte_pde,
+};
+
+static void sdma_v5_0_set_vm_pte_funcs(struct amdgpu_device *adev)
+{
+	struct drm_gpu_scheduler *sched;
+	unsigned i;
+
+	if (adev->vm_manager.vm_pte_funcs == NULL) {
+		adev->vm_manager.vm_pte_funcs = &sdma_v5_0_vm_pte_funcs;
+		for (i = 0; i < adev->sdma.num_instances; i++) {
+			sched = &adev->sdma.instance[i].ring.sched;
+			adev->vm_manager.vm_pte_rqs[i] =
+				&sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+		}
+		adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
+	}
+}
+
+const struct amdgpu_ip_block_version sdma_v5_0_ip_block = {
+	.type = AMD_IP_BLOCK_TYPE_SDMA,
+	.major = 5,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &sdma_v5_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.h b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.h
new file mode 100644
index 000000000000..d5a94e3d181c
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.h
@@ -0,0 +1,45 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __SDMA_V5_0_H__
+#define __SDMA_V5_0_H__
+
+enum sdma_v5_0_utcl2_cache_read_policy {
+	CACHE_READ_POLICY_L2__LRU    = 0x00000000,
+	CACHE_READ_POLICY_L2__STREAM = 0x00000001,
+	CACHE_READ_POLICY_L2__NOA    = 0x00000002,
+	CACHE_READ_POLICY_L2__DEFAULT = CACHE_READ_POLICY_L2__NOA,
+};
+
+enum sdma_v5_0_utcl2_cache_write_policy {
+	CACHE_WRITE_POLICY_L2__LRU    = 0x00000000,
+	CACHE_WRITE_POLICY_L2__STREAM = 0x00000001,
+	CACHE_WRITE_POLICY_L2__NOA    = 0x00000002,
+	CACHE_WRITE_POLICY_L2__BYPASS = 0x00000003,
+	CACHE_WRITE_POLICY_L2__DEFAULT = CACHE_WRITE_POLICY_L2__BYPASS,
+};
+
+extern const struct amd_ip_funcs sdma_v5_0_ip_funcs;
+extern const struct amdgpu_ip_block_version sdma_v5_0_ip_block;
+
+#endif /* __SDMA_V5_0_H__ */
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 112/459] drm/amdgpu: add Navi10 VCN firmware support
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (10 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 111/459] drm/amdgpu: add initial support for sdma v5.0 (v6) Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 113/459] drm/amdgpu: add VCN2.0 decode ring test Alex Deucher
                     ` (80 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Leo Liu

From: Leo Liu <leo.liu@amd.com>

Add Navi10 to the VCN family.

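For reference, the fw_name chosen below is what the common VCN code
hands to request_firmware() later in this same function. A minimal
sketch of that flow (abridged; error handling elided, and the exact
error message is from memory rather than this patch):

	r = request_firmware(&adev->vcn.fw, fw_name, adev->dev);
	if (r)
		dev_err(adev->dev,
			"amdgpu_vcn: Can't load firmware \"%s\"\n", fw_name);
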
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index d786098364dd..8ece427b6019 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -45,10 +45,12 @@
 #define FIRMWARE_RAVEN		"amdgpu/raven_vcn.bin"
 #define FIRMWARE_PICASSO	"amdgpu/picasso_vcn.bin"
 #define FIRMWARE_RAVEN2		"amdgpu/raven2_vcn.bin"
+#define FIRMWARE_NAVI10 	"amdgpu/navi10_vcn.bin"
 
 MODULE_FIRMWARE(FIRMWARE_RAVEN);
 MODULE_FIRMWARE(FIRMWARE_PICASSO);
 MODULE_FIRMWARE(FIRMWARE_RAVEN2);
+MODULE_FIRMWARE(FIRMWARE_NAVI10);
 
 static void amdgpu_vcn_idle_work_handler(struct work_struct *work);
 
@@ -71,6 +73,9 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
 		else
 			fw_name = FIRMWARE_RAVEN;
 		break;
+	case CHIP_NAVI10:
+		fw_name = FIRMWARE_NAVI10;
+		break;
 	default:
 		return -EINVAL;
 	}
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 113/459] drm/amdgpu: add VCN2.0 decode ring test
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (11 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 112/459] drm/amdgpu: add Navi10 VCN firmware support Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 114/459] drm/amdgpu: add VCN2.0 decode ib test Alex Deucher
                     ` (79 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Leo Liu

From: Leo Liu <leo.liu@amd.com>

Add internal register offsets for the registers involved in ring tests

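To make the internal/external split concrete, the resulting ring test
(excerpted from the hunk below; illustrative only) pokes the MMIO alias
from the CPU while the ring packet carries the engine-internal index:

	WREG32(adev->vcn.external.scratch9, 0xCAFEDEAD);	/* CPU MMIO write */
	amdgpu_ring_write(ring, PACKET0(adev->vcn.internal.scratch9, 0));
	amdgpu_ring_write(ring, 0xDEADBEEF);			/* engine write */
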
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 8 +++-----
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 5 +++++
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c   | 3 +++
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 8ece427b6019..5dbd975bac09 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -312,17 +312,15 @@ int amdgpu_vcn_dec_ring_test_ring(struct amdgpu_ring *ring)
 	unsigned i;
 	int r;
 
-	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9), 0xCAFEDEAD);
+	WREG32(adev->vcn.external.scratch9, 0xCAFEDEAD);
 	r = amdgpu_ring_alloc(ring, 3);
 	if (r)
 		return r;
-
-	amdgpu_ring_write(ring,
-		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9), 0));
+	amdgpu_ring_write(ring, PACKET0(adev->vcn.internal.scratch9, 0));
 	amdgpu_ring_write(ring, 0xDEADBEEF);
 	amdgpu_ring_commit(ring);
 	for (i = 0; i < adev->usec_timeout; i++) {
-		tmp = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9));
+		tmp = RREG32(adev->vcn.external.scratch9);
 		if (tmp == 0xDEADBEEF)
 			break;
 		DRM_UDELAY(1);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index a1ee19251aae..b80fc139eb7b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -87,6 +87,10 @@ struct dpg_pause_state {
 	enum internal_dpg_state jpeg;
 };
 
+struct amdgpu_vcn_reg{
+	unsigned	scratch9;
+};
+
 struct amdgpu_vcn {
 	struct amdgpu_bo	*vcpu_bo;
 	void			*cpu_addr;
@@ -102,6 +106,7 @@ struct amdgpu_vcn {
 	unsigned		num_enc_rings;
 	enum amd_powergating_state cur_state;
 	struct dpg_pause_state pause_state;
+	struct amdgpu_vcn_reg	internal, external;
 	int (*pause_dpg_mode)(struct amdgpu_device *adev,
 		struct dpg_pause_state *new_state);
 };
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index bb47f5b24be5..bab900653a0b 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -128,6 +128,9 @@ static int vcn_v1_0_sw_init(void *handle)
 	if (r)
 		return r;
 
+	adev->vcn.internal.scratch9 = adev->vcn.external.scratch9 =
+		SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9);
+
 	for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
 		ring = &adev->vcn.ring_enc[i];
 		sprintf(ring->name, "vcn_enc%d", i);
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 114/459] drm/amdgpu: add VCN2.0 decode ib test
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (12 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 113/459] drm/amdgpu: add VCN2.0 decode ring test Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 115/459] drm/amdgpu: add JPEG2.0 decode ring test Alex Deucher
                     ` (78 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Leo Liu

From: Leo Liu <leo.liu@amd.com>

Add internal register offsets for the registers involved in ib tests

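The same internal/external split used for the ring test applies here;
the decode-message IB from the hunk below ends up shaped as follows
(illustrative excerpt):

	ib->ptr[0] = PACKET0(adev->vcn.internal.data0, 0);
	ib->ptr[1] = addr;
	ib->ptr[2] = PACKET0(adev->vcn.internal.data1, 0);
	ib->ptr[3] = addr >> 32;
	ib->ptr[4] = PACKET0(adev->vcn.internal.cmd, 0);
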
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 8 ++++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 6 +++++-
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c   | 8 ++++++++
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 5dbd975bac09..1d575e2e701b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -349,14 +349,14 @@ static int amdgpu_vcn_dec_send_msg(struct amdgpu_ring *ring,
 
 	ib = &job->ibs[0];
 	addr = amdgpu_bo_gpu_offset(bo);
-	ib->ptr[0] = PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA0), 0);
+	ib->ptr[0] = PACKET0(adev->vcn.internal.data0, 0);
 	ib->ptr[1] = addr;
-	ib->ptr[2] = PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA1), 0);
+	ib->ptr[2] = PACKET0(adev->vcn.internal.data1, 0);
 	ib->ptr[3] = addr >> 32;
-	ib->ptr[4] = PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_CMD), 0);
+	ib->ptr[4] = PACKET0(adev->vcn.internal.cmd, 0);
 	ib->ptr[5] = 0;
 	for (i = 6; i < 16; i += 2) {
-		ib->ptr[i] = PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP), 0);
+		ib->ptr[i] = PACKET0(adev->vcn.internal.nop, 0);
 		ib->ptr[i+1] = 0;
 	}
 	ib->length_dw = 16;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index b80fc139eb7b..b14655a0e1db 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -25,7 +25,7 @@
 #define __AMDGPU_VCN_H__
 
 #define AMDGPU_VCN_STACK_SIZE		(128*1024)
-#define AMDGPU_VCN_CONTEXT_SIZE	(512*1024)
+#define AMDGPU_VCN_CONTEXT_SIZE 	(512*1024)
 
 #define AMDGPU_VCN_FIRMWARE_OFFSET	256
 #define AMDGPU_VCN_MAX_ENC_RINGS	3
@@ -88,6 +88,10 @@ struct dpg_pause_state {
 };
 
 struct amdgpu_vcn_reg{
+	unsigned	data0;
+	unsigned	data1;
+	unsigned	cmd;
+	unsigned	nop;
 	unsigned	scratch9;
 };
 
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index bab900653a0b..2a2c40cf32c8 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -130,6 +130,14 @@ static int vcn_v1_0_sw_init(void *handle)
 
 	adev->vcn.internal.scratch9 = adev->vcn.external.scratch9 =
 		SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9);
+	adev->vcn.internal.data0 = adev->vcn.external.data0 =
+		SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA0);
+	adev->vcn.internal.data1 = adev->vcn.external.data1 =
+		SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA1);
+	adev->vcn.internal.cmd = adev->vcn.external.cmd =
+		SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_CMD);
+	adev->vcn.internal.nop = adev->vcn.external.nop =
+		SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP);
 
 	for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
 		ring = &adev->vcn.ring_enc[i];
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 115/459] drm/amdgpu: add JPEG2.0 decode ring test
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (13 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 114/459] drm/amdgpu: add VCN2.0 decode ib test Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 116/459] drm/amdgpu: add JPEG2.0 decode ring ib test Alex Deucher
                     ` (77 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Leo Liu

From: Leo Liu <leo.liu@amd.com>

Use the register from the JPEG tile; the UVD tile register won't work for JPEG

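Before/after for the ring-test write, per the hunk below (illustrative):

	/* old: scratch register in the UVD tile */
	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9), 0xCAFEDEAD);
	/* new: pitch register in the JPEG tile */
	WREG32(adev->vcn.external.jpeg_pitch, 0xCAFEDEAD);
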
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 8 +++-----
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c   | 2 ++
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 1d575e2e701b..ef2b7a9356ef 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -631,19 +631,17 @@ int amdgpu_vcn_jpeg_ring_test_ring(struct amdgpu_ring *ring)
 	unsigned i;
 	int r;
 
-	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9), 0xCAFEDEAD);
+	WREG32(adev->vcn.external.jpeg_pitch, 0xCAFEDEAD);
 	r = amdgpu_ring_alloc(ring, 3);
-
 	if (r)
 		return r;
 
-	amdgpu_ring_write(ring,
-		PACKETJ(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9), 0, 0, 0));
+	amdgpu_ring_write(ring, PACKET0(adev->vcn.internal.jpeg_pitch, 0));
 	amdgpu_ring_write(ring, 0xDEADBEEF);
 	amdgpu_ring_commit(ring);
 
 	for (i = 0; i < adev->usec_timeout; i++) {
-		tmp = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9));
+		tmp = RREG32(adev->vcn.external.jpeg_pitch);
 		if (tmp == 0xDEADBEEF)
 			break;
 		DRM_UDELAY(1);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index b14655a0e1db..cc94841f2f06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -93,6 +93,7 @@ struct amdgpu_vcn_reg{
 	unsigned	cmd;
 	unsigned	nop;
 	unsigned	scratch9;
+	unsigned	jpeg_pitch;
 };
 
 struct amdgpu_vcn {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 2a2c40cf32c8..855b1f9609e4 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -154,6 +154,8 @@ static int vcn_v1_0_sw_init(void *handle)
 		return r;
 
 	adev->vcn.pause_dpg_mode = vcn_v1_0_pause_dpg_mode;
+	adev->vcn.internal.jpeg_pitch = adev->vcn.external.jpeg_pitch =
+		SOC15_REG_OFFSET(UVD, 0, mmUVD_JPEG_PITCH);
 
 	return 0;
 }
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 116/459] drm/amdgpu: add JPEG2.0 decode ring ib test
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (14 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 115/459] drm/amdgpu: add JPEG2.0 decode ring test Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 117/459] drm/amdgpu: add initial VCN2.0 support (v2) Alex Deucher
                     ` (76 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Leo Liu

From: Leo Liu <leo.liu@amd.com>

Add internal register offsets for the registers involved in ib tests

Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index ef2b7a9356ef..6a74f5499ef7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -669,7 +669,7 @@ static int amdgpu_vcn_jpeg_set_reg(struct amdgpu_ring *ring, uint32_t handle,
 
 	ib = &job->ibs[0];
 
-	ib->ptr[0] = PACKETJ(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9), 0, 0, PACKETJ_TYPE0);
+	ib->ptr[0] = PACKETJ(adev->vcn.internal.jpeg_pitch, 0, 0, PACKETJ_TYPE0);
 	ib->ptr[1] = 0xDEADBEEF;
 	for (i = 2; i < 16; i += 2) {
 		ib->ptr[i] = PACKETJ(0, 0, 0, PACKETJ_TYPE6);
@@ -715,7 +715,7 @@ int amdgpu_vcn_jpeg_ring_test_ib(struct amdgpu_ring *ring, long timeout)
 	}
 
 	for (i = 0; i < adev->usec_timeout; i++) {
-		tmp = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9));
+		tmp = RREG32(adev->vcn.external.jpeg_pitch);
 		if (tmp == 0xDEADBEEF)
 			break;
 		DRM_UDELAY(1);
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 117/459] drm/amdgpu: add initial VCN2.0 support (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (15 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 116/459] drm/amdgpu: add JPEG2.0 decode ring ib test Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 118/459] drm/amdgpu/mes: add amdgpu_mes driver parameter Alex Deucher
                     ` (75 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, James Zhu, Leo Liu

From: Leo Liu <leo.liu@amd.com>

VCN (Video Core Next) is the video encode/decode block.

Port over the same functions from VCN 1.0.

v2: squash in updates (Alex)

Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: James Zhu <James.Zhu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile     |    3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h |    1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c   | 1865 +++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/vcn_v2_0.h   |   29 +
 4 files changed, 1897 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vcn_v2_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 0e7a402d5ef8..b7916a138239 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -136,7 +136,8 @@ amdgpu-y += \
 # add VCN block
 amdgpu-y += \
 	amdgpu_vcn.o \
-	vcn_v1_0.o
+	vcn_v1_0.o \
+	vcn_v2_0.o
 
 # add ATHUB block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index cc94841f2f06..3f6349c6f33d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -75,6 +75,7 @@ enum engine_status_constants {
 	UVD_STATUS__BUSY = 0x5,
 	UVD_POWER_STATUS__UVD_POWER_STATUS_TILES_OFF = 0x1,
 	UVD_STATUS__RBC_BUSY = 0x1,
+	UVD_PGFSM_STATUS_UVDJ_PWR_ON = 0,
 };
 
 enum internal_dpg_state {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
new file mode 100644
index 000000000000..1b9770cb650b
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
@@ -0,0 +1,1865 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/firmware.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_vcn.h"
+#include "soc15.h"
+#include "soc15d.h"
+
+#include "vcn/vcn_2_0_0_offset.h"
+#include "vcn/vcn_2_0_0_sh_mask.h"
+#include "ivsrcid/vcn/irqsrcs_vcn_2_0.h"
+
+#define mmUVD_CONTEXT_ID_INTERNAL_OFFSET			0x1fd
+#define mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET			0x503
+#define mmUVD_GPCOM_VCPU_DATA0_INTERNAL_OFFSET			0x504
+#define mmUVD_GPCOM_VCPU_DATA1_INTERNAL_OFFSET			0x505
+#define mmUVD_NO_OP_INTERNAL_OFFSET				0x53f
+#define mmUVD_GP_SCRATCH8_INTERNAL_OFFSET			0x54a
+#define mmUVD_SCRATCH9_INTERNAL_OFFSET				0xc01d
+
+#define mmUVD_LMI_RBC_IB_VMID_INTERNAL_OFFSET			0x1e1
+#define mmUVD_LMI_RBC_IB_64BIT_BAR_HIGH_INTERNAL_OFFSET 	0x5a6
+#define mmUVD_LMI_RBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET		0x5a7
+#define mmUVD_RBC_IB_SIZE_INTERNAL_OFFSET			0x1e2
+#define mmUVD_GPCOM_SYS_CMD_INTERNAL_OFFSET			0x1bF
+
+#define mmUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET 			0x1bfff
+#define mmUVD_JPEG_GPCOM_CMD_INTERNAL_OFFSET				0x4029
+#define mmUVD_JPEG_GPCOM_DATA0_INTERNAL_OFFSET				0x402a
+#define mmUVD_JPEG_GPCOM_DATA1_INTERNAL_OFFSET				0x402b
+#define mmUVD_LMI_JRBC_RB_MEM_WR_64BIT_BAR_LOW_INTERNAL_OFFSET		0x40ea
+#define mmUVD_LMI_JRBC_RB_MEM_WR_64BIT_BAR_HIGH_INTERNAL_OFFSET 	0x40eb
+#define mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET				0x40cf
+#define mmUVD_LMI_JPEG_VMID_INTERNAL_OFFSET				0x40d1
+#define mmUVD_LMI_JRBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET 		0x40e8
+#define mmUVD_LMI_JRBC_IB_64BIT_BAR_HIGH_INTERNAL_OFFSET		0x40e9
+#define mmUVD_JRBC_IB_SIZE_INTERNAL_OFFSET				0x4082
+#define mmUVD_LMI_JRBC_RB_MEM_RD_64BIT_BAR_LOW_INTERNAL_OFFSET		0x40ec
+#define mmUVD_LMI_JRBC_RB_MEM_RD_64BIT_BAR_HIGH_INTERNAL_OFFSET 	0x40ed
+#define mmUVD_JRBC_RB_COND_RD_TIMER_INTERNAL_OFFSET			0x4085
+#define mmUVD_JRBC_RB_REF_DATA_INTERNAL_OFFSET				0x4084
+#define mmUVD_JRBC_STATUS_INTERNAL_OFFSET				0x4089
+#define mmUVD_JPEG_PITCH_INTERNAL_OFFSET				0x401f
+
+#define JRBC_DEC_EXTERNAL_REG_WRITE_ADDR				0x18000
+
+static int vcn_v2_0_stop(struct amdgpu_device *adev);
+static void vcn_v2_0_set_dec_ring_funcs(struct amdgpu_device *adev);
+static void vcn_v2_0_set_enc_ring_funcs(struct amdgpu_device *adev);
+static void vcn_v2_0_set_jpeg_ring_funcs(struct amdgpu_device *adev);
+static void vcn_v2_0_set_irq_funcs(struct amdgpu_device *adev);
+static int vcn_v2_0_set_powergating_state(void *handle,
+				enum amd_powergating_state state);
+
+/**
+ * vcn_v2_0_early_init - set function pointers
+ *
+ * @handle: amdgpu_device pointer
+ *
+ * Set ring and irq function pointers
+ */
+static int vcn_v2_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->vcn.num_enc_rings = 2;
+
+	vcn_v2_0_set_dec_ring_funcs(adev);
+	vcn_v2_0_set_enc_ring_funcs(adev);
+	vcn_v2_0_set_jpeg_ring_funcs(adev);
+	vcn_v2_0_set_irq_funcs(adev);
+
+	return 0;
+}
+
+/**
+ * vcn_v2_0_sw_init - sw init for VCN block
+ *
+ * @handle: amdgpu_device pointer
+ *
+ * Load firmware and sw initialization
+ */
+static int vcn_v2_0_sw_init(void *handle)
+{
+	struct amdgpu_ring *ring;
+	int i, r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* VCN DEC TRAP */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_VCN,
+			      VCN_2_0__SRCID__UVD_SYSTEM_MESSAGE_INTERRUPT,
+			      &adev->vcn.irq);
+	if (r)
+		return r;
+
+	/* VCN ENC TRAP */
+	for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+		r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_VCN,
+				      i + VCN_2_0__SRCID__UVD_ENC_GENERAL_PURPOSE,
+				      &adev->vcn.irq);
+		if (r)
+			return r;
+	}
+
+	/* VCN JPEG TRAP */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_VCN,
+			      VCN_2_0__SRCID__JPEG_DECODE,
+			      &adev->vcn.irq);
+	if (r)
+		return r;
+
+	r = amdgpu_vcn_sw_init(adev);
+	if (r)
+		return r;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		const struct common_firmware_header *hdr;
+		hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
+		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].ucode_id = AMDGPU_UCODE_ID_VCN;
+		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+		DRM_INFO("PSP loading VCN firmware\n");
+	}
+
+	r = amdgpu_vcn_resume(adev);
+	if (r)
+		return r;
+
+	ring = &adev->vcn.ring_dec;
+
+	ring->use_doorbell = true;
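+	/* << 1: doorbell slot to DWORD offset, as with the sdma doorbells */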
+	ring->doorbell_index = adev->doorbell_index.vcn.vcn_ring0_1 << 1;
+
+	sprintf(ring->name, "vcn_dec");
+	r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.irq, 0);
+	if (r)
+		return r;
+
+	adev->vcn.internal.scratch9 = mmUVD_SCRATCH9_INTERNAL_OFFSET;
+	adev->vcn.external.scratch9 = SOC15_REG_OFFSET(UVD, 0, mmUVD_SCRATCH9);
+	adev->vcn.internal.data0 = mmUVD_GPCOM_VCPU_DATA0_INTERNAL_OFFSET;
+	adev->vcn.external.data0 = SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA0);
+	adev->vcn.internal.data1 = mmUVD_GPCOM_VCPU_DATA1_INTERNAL_OFFSET;
+	adev->vcn.external.data1 = SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA1);
+	adev->vcn.internal.cmd = mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET;
+	adev->vcn.external.cmd = SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_CMD);
+	adev->vcn.internal.nop = mmUVD_NO_OP_INTERNAL_OFFSET;
+	adev->vcn.external.nop = SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP);
+
+	for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+		ring = &adev->vcn.ring_enc[i];
+		ring->use_doorbell = true;
+		ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 2 + i;
+		sprintf(ring->name, "vcn_enc%d", i);
+		r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.irq, 0);
+		if (r)
+			return r;
+	}
+
+	ring = &adev->vcn.ring_jpeg;
+	ring->use_doorbell = true;
+	ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 1;
+	sprintf(ring->name, "vcn_jpeg");
+	r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.irq, 0);
+	if (r)
+		return r;
+
+	adev->vcn.internal.jpeg_pitch = mmUVD_JPEG_PITCH_INTERNAL_OFFSET;
+	adev->vcn.external.jpeg_pitch = SOC15_REG_OFFSET(UVD, 0, mmUVD_JPEG_PITCH);
+
+	return 0;
+}
+
+/**
+ * vcn_v2_0_sw_fini - sw fini for VCN block
+ *
+ * @handle: amdgpu_device pointer
+ *
+ * VCN suspend and free up sw allocation
+ */
+static int vcn_v2_0_sw_fini(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = amdgpu_vcn_suspend(adev);
+	if (r)
+		return r;
+
+	r = amdgpu_vcn_sw_fini(adev);
+
+	return r;
+}
+
+/**
+ * vcn_v2_0_hw_init - start and test VCN block
+ *
+ * @handle: amdgpu_device pointer
+ *
+ * Initialize the hardware, boot up the VCPU and do some testing
+ */
+static int vcn_v2_0_hw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ring *ring = &adev->vcn.ring_dec;
+	int i, r;
+
+	adev->nbio_funcs->vcn_doorbell_range(adev, ring->use_doorbell,
+		ring->doorbell_index);
+
+	ring->sched.ready = true;
+	r = amdgpu_ring_test_ring(ring);
+	if (r) {
+		ring->sched.ready = false;
+		goto done;
+	}
+
+	for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+		ring = &adev->vcn.ring_enc[i];
+		ring->sched.ready = true;
+		r = amdgpu_ring_test_ring(ring);
+		if (r) {
+			ring->sched.ready = false;
+			goto done;
+		}
+	}
+
+	ring = &adev->vcn.ring_jpeg;
+	ring->sched.ready = true;
+	r = amdgpu_ring_test_ring(ring);
+	if (r) {
+		ring->sched.ready = false;
+		goto done;
+	}
+
+done:
+	if (!r)
+		DRM_INFO("VCN decode and encode initialized successfully.\n");
+
+	return r;
+}
+
+/**
+ * vcn_v2_0_hw_fini - stop the hardware block
+ *
+ * @handle: amdgpu_device pointer
+ *
+ * Stop the VCN block, mark ring as not ready any more
+ */
+static int vcn_v2_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ring *ring = &adev->vcn.ring_dec;
+	int i;
+
+	if (RREG32_SOC15(VCN, 0, mmUVD_STATUS))
+		vcn_v2_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+
+	ring->sched.ready = false;
+
+	for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+		ring = &adev->vcn.ring_enc[i];
+		ring->sched.ready = false;
+	}
+
+	ring = &adev->vcn.ring_jpeg;
+	ring->sched.ready = false;
+
+	return 0;
+}
+
+/**
+ * vcn_v2_0_suspend - suspend VCN block
+ *
+ * @handle: amdgpu_device pointer
+ *
+ * HW fini and suspend VCN block
+ */
+static int vcn_v2_0_suspend(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = vcn_v2_0_hw_fini(adev);
+	if (r)
+		return r;
+
+	r = amdgpu_vcn_suspend(adev);
+
+	return r;
+}
+
+/**
+ * vcn_v2_0_resume - resume VCN block
+ *
+ * @handle: amdgpu_device pointer
+ *
+ * Resume firmware and hw init VCN block
+ */
+static int vcn_v2_0_resume(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = amdgpu_vcn_resume(adev);
+	if (r)
+		return r;
+
+	r = vcn_v2_0_hw_init(adev);
+
+	return r;
+}
+
+/**
+ * vcn_v2_0_mc_resume - memory controller programming
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Let the VCN memory controller know it's offsets
+ */
+static void vcn_v2_0_mc_resume(struct amdgpu_device *adev)
+{
+	uint32_t size = AMDGPU_GPU_PAGE_ALIGN(adev->vcn.fw->size + 4);
+	uint32_t offset;
+
+	/* cache window 0: fw */
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
+			(adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_lo));
+		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
+			(adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_hi));
+		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0, 0);
+		offset = 0;
+	} else {
+		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
+			lower_32_bits(adev->vcn.gpu_addr));
+		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
+			upper_32_bits(adev->vcn.gpu_addr));
+		offset = size;
+		/* No signed header for now from firmware
+		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0,
+			AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
+		*/
+		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0, 0);
+	}
+
+	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE0, size);
+
+	/* cache window 1: stack */
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW,
+		lower_32_bits(adev->vcn.gpu_addr + offset));
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH,
+		upper_32_bits(adev->vcn.gpu_addr + offset));
+	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1, 0);
+	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE1, AMDGPU_VCN_STACK_SIZE);
+
+	/* cache window 2: context */
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW,
+		lower_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_STACK_SIZE));
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH,
+		upper_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_STACK_SIZE));
+	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2, 0);
+	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE2, AMDGPU_VCN_CONTEXT_SIZE);
+
+	WREG32_SOC15(UVD, 0, mmUVD_GFX10_ADDR_CONFIG, adev->gfx.config.gb_addr_config);
+	WREG32_SOC15(UVD, 0, mmJPEG_DEC_GFX10_ADDR_CONFIG, adev->gfx.config.gb_addr_config);
+}
+
+/**
+ * vcn_v2_0_disable_clock_gating - disable VCN clock gating
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Disable clock gating for VCN block
+ */
+static void vcn_v2_0_disable_clock_gating(struct amdgpu_device *adev)
+{
+	uint32_t data;
+
+	/* UVD disable CGC */
+	data = RREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL);
+	if (adev->cg_flags & AMD_CG_SUPPORT_VCN_MGCG)
+		data |= 1 << UVD_CGC_CTRL__DYN_CLOCK_MODE__SHIFT;
+	else
+		data &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
+	data |= 1 << UVD_CGC_CTRL__CLK_GATE_DLY_TIMER__SHIFT;
+	data |= 4 << UVD_CGC_CTRL__CLK_OFF_DELAY__SHIFT;
+	WREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL, data);
+
+	data = RREG32_SOC15(VCN, 0, mmUVD_CGC_GATE);
+	data &= ~(UVD_CGC_GATE__SYS_MASK
+		| UVD_CGC_GATE__UDEC_MASK
+		| UVD_CGC_GATE__MPEG2_MASK
+		| UVD_CGC_GATE__REGS_MASK
+		| UVD_CGC_GATE__RBC_MASK
+		| UVD_CGC_GATE__LMI_MC_MASK
+		| UVD_CGC_GATE__LMI_UMC_MASK
+		| UVD_CGC_GATE__IDCT_MASK
+		| UVD_CGC_GATE__MPRD_MASK
+		| UVD_CGC_GATE__MPC_MASK
+		| UVD_CGC_GATE__LBSI_MASK
+		| UVD_CGC_GATE__LRBBM_MASK
+		| UVD_CGC_GATE__UDEC_RE_MASK
+		| UVD_CGC_GATE__UDEC_CM_MASK
+		| UVD_CGC_GATE__UDEC_IT_MASK
+		| UVD_CGC_GATE__UDEC_DB_MASK
+		| UVD_CGC_GATE__UDEC_MP_MASK
+		| UVD_CGC_GATE__WCB_MASK
+		| UVD_CGC_GATE__VCPU_MASK
+		| UVD_CGC_GATE__SCPU_MASK);
+	WREG32_SOC15(VCN, 0, mmUVD_CGC_GATE, data);
+
+	data = RREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL);
+	data &= ~(UVD_CGC_CTRL__UDEC_RE_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_CM_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_IT_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_DB_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_MP_MODE_MASK
+		| UVD_CGC_CTRL__SYS_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_MODE_MASK
+		| UVD_CGC_CTRL__MPEG2_MODE_MASK
+		| UVD_CGC_CTRL__REGS_MODE_MASK
+		| UVD_CGC_CTRL__RBC_MODE_MASK
+		| UVD_CGC_CTRL__LMI_MC_MODE_MASK
+		| UVD_CGC_CTRL__LMI_UMC_MODE_MASK
+		| UVD_CGC_CTRL__IDCT_MODE_MASK
+		| UVD_CGC_CTRL__MPRD_MODE_MASK
+		| UVD_CGC_CTRL__MPC_MODE_MASK
+		| UVD_CGC_CTRL__LBSI_MODE_MASK
+		| UVD_CGC_CTRL__LRBBM_MODE_MASK
+		| UVD_CGC_CTRL__WCB_MODE_MASK
+		| UVD_CGC_CTRL__VCPU_MODE_MASK
+		| UVD_CGC_CTRL__SCPU_MODE_MASK);
+	WREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL, data);
+
+	/* turn on */
+	data = RREG32_SOC15(VCN, 0, mmUVD_SUVD_CGC_GATE);
+	data |= (UVD_SUVD_CGC_GATE__SRE_MASK
+		| UVD_SUVD_CGC_GATE__SIT_MASK
+		| UVD_SUVD_CGC_GATE__SMP_MASK
+		| UVD_SUVD_CGC_GATE__SCM_MASK
+		| UVD_SUVD_CGC_GATE__SDB_MASK
+		| UVD_SUVD_CGC_GATE__SRE_H264_MASK
+		| UVD_SUVD_CGC_GATE__SRE_HEVC_MASK
+		| UVD_SUVD_CGC_GATE__SIT_H264_MASK
+		| UVD_SUVD_CGC_GATE__SIT_HEVC_MASK
+		| UVD_SUVD_CGC_GATE__SCM_H264_MASK
+		| UVD_SUVD_CGC_GATE__SCM_HEVC_MASK
+		| UVD_SUVD_CGC_GATE__SDB_H264_MASK
+		| UVD_SUVD_CGC_GATE__SDB_HEVC_MASK
+		| UVD_SUVD_CGC_GATE__SCLR_MASK
+		| UVD_SUVD_CGC_GATE__UVD_SC_MASK
+		| UVD_SUVD_CGC_GATE__ENT_MASK
+		| UVD_SUVD_CGC_GATE__SIT_HEVC_DEC_MASK
+		| UVD_SUVD_CGC_GATE__SIT_HEVC_ENC_MASK
+		| UVD_SUVD_CGC_GATE__SITE_MASK
+		| UVD_SUVD_CGC_GATE__SRE_VP9_MASK
+		| UVD_SUVD_CGC_GATE__SCM_VP9_MASK
+		| UVD_SUVD_CGC_GATE__SIT_VP9_DEC_MASK
+		| UVD_SUVD_CGC_GATE__SDB_VP9_MASK
+		| UVD_SUVD_CGC_GATE__IME_HEVC_MASK);
+	WREG32_SOC15(VCN, 0, mmUVD_SUVD_CGC_GATE, data);
+
+	data = RREG32_SOC15(VCN, 0, mmUVD_SUVD_CGC_CTRL);
+	data &= ~(UVD_SUVD_CGC_CTRL__SRE_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SIT_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SMP_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SCM_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SDB_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SCLR_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__UVD_SC_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__ENT_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__IME_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SITE_MODE_MASK);
+	WREG32_SOC15(VCN, 0, mmUVD_SUVD_CGC_CTRL, data);
+}
+
+/**
+ * jpeg_v2_0_start - start JPEG block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Setup and start the JPEG block
+ */
+static int jpeg_v2_0_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = &adev->vcn.ring_jpeg;
+	uint32_t tmp;
+	int r = 0;
+
+	/* disable power gating */
+	tmp = 1 << UVD_PGFSM_CONFIG__UVDJ_PWR_CONFIG__SHIFT;
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_PGFSM_CONFIG), tmp);
+
+	SOC15_WAIT_ON_RREG(VCN, 0,
+		mmUVD_PGFSM_STATUS, UVD_PGFSM_STATUS_UVDJ_PWR_ON,
+		UVD_PGFSM_STATUS__UVDJ_PWR_STATUS_MASK, r);
+
+	if (r) {
+		DRM_ERROR("amdgpu: JPEG disable power gating failed\n");
+		return r;
+	}
+
+	/* Remove the anti-hang mechanism to indicate the UVDJ tile is ON */
+	tmp = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_JPEG_POWER_STATUS)) & ~0x1;
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_JPEG_POWER_STATUS), tmp);
+
+	/* JPEG disable CGC */
+	tmp = RREG32_SOC15(VCN, 0, mmJPEG_CGC_CTRL);
+	tmp |= 1 << JPEG_CGC_CTRL__DYN_CLOCK_MODE__SHIFT;
+	tmp |= 1 << JPEG_CGC_CTRL__CLK_GATE_DLY_TIMER__SHIFT;
+	tmp |= 4 << JPEG_CGC_CTRL__CLK_OFF_DELAY__SHIFT;
+	WREG32_SOC15(VCN, 0, mmJPEG_CGC_CTRL, tmp);
+
+	tmp = RREG32_SOC15(VCN, 0, mmJPEG_CGC_GATE);
+	tmp &= ~(JPEG_CGC_GATE__JPEG_DEC_MASK
+		| JPEG_CGC_GATE__JPEG2_DEC_MASK
+		| JPEG_CGC_GATE__JPEG_ENC_MASK
+		| JPEG_CGC_GATE__JMCIF_MASK
+		| JPEG_CGC_GATE__JRBBM_MASK);
+	WREG32_SOC15(VCN, 0, mmJPEG_CGC_GATE, tmp);
+
+	/* enable JMI channel */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_JMI_CNTL), 0,
+		~UVD_JMI_CNTL__SOFT_RESET_MASK);
+
+	/* enable System Interrupt for JRBC */
+	WREG32_P(SOC15_REG_OFFSET(VCN, 0, mmJPEG_SYS_INT_EN),
+		JPEG_SYS_INT_EN__DJRBC_MASK,
+		~JPEG_SYS_INT_EN__DJRBC_MASK);
+
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_JRBC_RB_VMID, 0);
+	WREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_CNTL, (0x00000001L | 0x00000002L));
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_JRBC_RB_64BIT_BAR_LOW,
+		lower_32_bits(ring->gpu_addr));
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_JRBC_RB_64BIT_BAR_HIGH,
+		upper_32_bits(ring->gpu_addr));
+	WREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_RPTR, 0);
+	WREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_WPTR, 0);
+	WREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_CNTL, 0x00000002L);
+	WREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_SIZE, ring->ring_size / 4);
+	ring->wptr = RREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_WPTR);
+
+	return 0;
+}
+
+/**
+ * jpeg_v2_0_stop - stop JPEG block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * stop the JPEG block
+ */
+static int jpeg_v2_0_stop(struct amdgpu_device *adev)
+{
+	uint32_t tmp;
+	int r = 0;
+
+	/* reset JMI */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_JMI_CNTL),
+		UVD_JMI_CNTL__SOFT_RESET_MASK,
+		~UVD_JMI_CNTL__SOFT_RESET_MASK);
+
+	/* enable JPEG CGC */
+	tmp = RREG32_SOC15(VCN, 0, mmJPEG_CGC_CTRL);
+	tmp |= 1 << JPEG_CGC_CTRL__DYN_CLOCK_MODE__SHIFT;
+	tmp |= 1 << JPEG_CGC_CTRL__CLK_GATE_DLY_TIMER__SHIFT;
+	tmp |= 4 << JPEG_CGC_CTRL__CLK_OFF_DELAY__SHIFT;
+	WREG32_SOC15(VCN, 0, mmJPEG_CGC_CTRL, tmp);
+
+	tmp = RREG32_SOC15(VCN, 0, mmJPEG_CGC_GATE);
+	tmp |= (JPEG_CGC_GATE__JPEG_DEC_MASK
+		| JPEG_CGC_GATE__JPEG2_DEC_MASK
+		| JPEG_CGC_GATE__JPEG_ENC_MASK
+		| JPEG_CGC_GATE__JMCIF_MASK
+		| JPEG_CGC_GATE__JRBBM_MASK);
+	WREG32_SOC15(VCN, 0, mmJPEG_CGC_GATE, tmp);
+
+	/* enable power gating */
+	tmp = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_JPEG_POWER_STATUS));
+	tmp &= ~UVD_JPEG_POWER_STATUS__JPEG_POWER_STATUS_MASK;
+	tmp |= 0x1; /* UVD_JPEG_POWER_STATUS__JPEG_POWER_STATUS_TILES_OFF */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_JPEG_POWER_STATUS), tmp);
+
+	tmp = 2 << UVD_PGFSM_CONFIG__UVDJ_PWR_CONFIG__SHIFT;
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_PGFSM_CONFIG), tmp);
+
+	SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_PGFSM_STATUS,
+		(2 << UVD_PGFSM_STATUS__UVDJ_PWR_STATUS__SHIFT),
+		UVD_PGFSM_STATUS__UVDJ_PWR_STATUS_MASK, r);
+
+	if (r) {
+		DRM_ERROR("amdgpu: JPEG enable power gating failed\n");
+		return r;
+	}
+
+	return r;
+}
+
+/**
+ * vcn_v2_0_enable_clock_gating - enable VCN clock gating
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Enable clock gating for VCN block
+ */
+static void vcn_v2_0_enable_clock_gating(struct amdgpu_device *adev)
+{
+	uint32_t data = 0;
+
+	/* enable UVD CGC */
+	data = RREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL);
+	if (adev->cg_flags & AMD_CG_SUPPORT_VCN_MGCG)
+		data |= 1 << UVD_CGC_CTRL__DYN_CLOCK_MODE__SHIFT;
+	else
+		data |= 0 << UVD_CGC_CTRL__DYN_CLOCK_MODE__SHIFT;
+	data |= 1 << UVD_CGC_CTRL__CLK_GATE_DLY_TIMER__SHIFT;
+	data |= 4 << UVD_CGC_CTRL__CLK_OFF_DELAY__SHIFT;
+	WREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL, data);
+
+	data = RREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL);
+	data |= (UVD_CGC_CTRL__UDEC_RE_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_CM_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_IT_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_DB_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_MP_MODE_MASK
+		| UVD_CGC_CTRL__SYS_MODE_MASK
+		| UVD_CGC_CTRL__UDEC_MODE_MASK
+		| UVD_CGC_CTRL__MPEG2_MODE_MASK
+		| UVD_CGC_CTRL__REGS_MODE_MASK
+		| UVD_CGC_CTRL__RBC_MODE_MASK
+		| UVD_CGC_CTRL__LMI_MC_MODE_MASK
+		| UVD_CGC_CTRL__LMI_UMC_MODE_MASK
+		| UVD_CGC_CTRL__IDCT_MODE_MASK
+		| UVD_CGC_CTRL__MPRD_MODE_MASK
+		| UVD_CGC_CTRL__MPC_MODE_MASK
+		| UVD_CGC_CTRL__LBSI_MODE_MASK
+		| UVD_CGC_CTRL__LRBBM_MODE_MASK
+		| UVD_CGC_CTRL__WCB_MODE_MASK
+		| UVD_CGC_CTRL__VCPU_MODE_MASK
+		| UVD_CGC_CTRL__SCPU_MODE_MASK);
+	WREG32_SOC15(VCN, 0, mmUVD_CGC_CTRL, data);
+
+	data = RREG32_SOC15(VCN, 0, mmUVD_SUVD_CGC_CTRL);
+	data |= (UVD_SUVD_CGC_CTRL__SRE_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SIT_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SMP_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SCM_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SDB_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SCLR_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__UVD_SC_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__ENT_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__IME_MODE_MASK
+		| UVD_SUVD_CGC_CTRL__SITE_MODE_MASK);
+	WREG32_SOC15(VCN, 0, mmUVD_SUVD_CGC_CTRL, data);
+}
+
+static void vcn_v2_0_disable_static_power_gating(struct amdgpu_device *adev)
+{
+	uint32_t data = 0;
+	int ret;
+
+	if (adev->pg_flags & AMD_PG_SUPPORT_VCN) {
+		data = (1 << UVD_PGFSM_CONFIG__UVDM_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDU_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDF_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDC_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDB_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDIL_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDIR_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDTD_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDTE_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDE_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDW_PWR_CONFIG__SHIFT);
+
+		WREG32_SOC15(VCN, 0, mmUVD_PGFSM_CONFIG, data);
+		SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_PGFSM_STATUS,
+			UVD_PGFSM_STATUS__UVDM_UVDU_PWR_ON, 0xFFFFFF, ret);
+	} else {
+		data = (1 << UVD_PGFSM_CONFIG__UVDM_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDU_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDF_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDC_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDB_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDIL_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDIR_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDTD_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDTE_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDE_PWR_CONFIG__SHIFT
+			| 1 << UVD_PGFSM_CONFIG__UVDW_PWR_CONFIG__SHIFT);
+		WREG32_SOC15(VCN, 0, mmUVD_PGFSM_CONFIG, data);
+		SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_PGFSM_STATUS, 0,  0xFFFFFFFF, ret);
+	}
+
+	/* polling UVD_PGFSM_STATUS to confirm UVDM_PWR_STATUS,
+	 * UVDU_PWR_STATUS are 0 (power on) */
+
+	data = RREG32_SOC15(VCN, 0, mmUVD_POWER_STATUS);
+	data &= ~0x103;
+	if (adev->pg_flags & AMD_PG_SUPPORT_VCN)
+		data |= UVD_PGFSM_CONFIG__UVDM_UVDU_PWR_ON |
+			UVD_POWER_STATUS__UVD_PG_EN_MASK;
+
+	WREG32_SOC15(VCN, 0, mmUVD_POWER_STATUS, data);
+}
+
+static void vcn_v2_0_enable_static_power_gating(struct amdgpu_device *adev)
+{
+	uint32_t data = 0;
+	int ret;
+
+	if (adev->pg_flags & AMD_PG_SUPPORT_VCN) {
+		/* Before power off, this indicator has to be turned on */
+		data = RREG32_SOC15(VCN, 0, mmUVD_POWER_STATUS);
+		data &= ~UVD_POWER_STATUS__UVD_POWER_STATUS_MASK;
+		data |= UVD_POWER_STATUS__UVD_POWER_STATUS_TILES_OFF;
+		WREG32_SOC15(VCN, 0, mmUVD_POWER_STATUS, data);
+
+		data = (2 << UVD_PGFSM_CONFIG__UVDM_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDU_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDF_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDC_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDB_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDIL_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDIR_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDTD_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDTE_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDE_PWR_CONFIG__SHIFT
+			| 2 << UVD_PGFSM_CONFIG__UVDW_PWR_CONFIG__SHIFT);
+
+		WREG32_SOC15(VCN, 0, mmUVD_PGFSM_CONFIG, data);
+
+		data = (2 << UVD_PGFSM_STATUS__UVDM_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDU_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDF_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDC_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDB_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDIL_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDIR_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDTD_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDTE_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDE_PWR_STATUS__SHIFT
+			| 2 << UVD_PGFSM_STATUS__UVDW_PWR_STATUS__SHIFT);
+		SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_PGFSM_STATUS, data, 0xFFFFFFFF, ret);
+	}
+}
+
+static int vcn_v2_0_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = &adev->vcn.ring_dec;
+	uint32_t rb_bufsz, tmp;
+	uint32_t lmi_swap_cntl;
+	int i, j, r;
+
+	vcn_v2_0_disable_static_power_gating(adev);
+
+	/* set uvd status busy */
+	tmp = RREG32_SOC15(UVD, 0, mmUVD_STATUS) | UVD_STATUS__UVD_BUSY;
+	WREG32_SOC15(UVD, 0, mmUVD_STATUS, tmp);
+
+	/* SW clock gating */
+	vcn_v2_0_disable_clock_gating(adev);
+
+	/* enable VCPU clock */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CNTL),
+		UVD_VCPU_CNTL__CLK_EN_MASK, ~UVD_VCPU_CNTL__CLK_EN_MASK);
+
+	/* disable master interrupt */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN), 0,
+		~UVD_MASTINT_EN__VCPU_EN_MASK);
+
+	/* setup mmUVD_LMI_CTRL */
+	tmp = RREG32_SOC15(UVD, 0, mmUVD_LMI_CTRL);
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_CTRL, tmp |
+		UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK	|
+		UVD_LMI_CTRL__MASK_MC_URGENT_MASK |
+		UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK |
+		UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK);
+
+	/* setup mmUVD_MPC_CNTL */
+	tmp = RREG32_SOC15(UVD, 0, mmUVD_MPC_CNTL);
+	tmp &= ~UVD_MPC_CNTL__REPLACEMENT_MODE_MASK;
+	tmp |= 0x2 << UVD_MPC_CNTL__REPLACEMENT_MODE__SHIFT;
+	WREG32_SOC15(VCN, 0, mmUVD_MPC_CNTL, tmp);
+
+	/* setup UVD_MPC_SET_MUXA0 */
+	WREG32_SOC15(UVD, 0, mmUVD_MPC_SET_MUXA0,
+		((0x1 << UVD_MPC_SET_MUXA0__VARA_1__SHIFT) |
+		(0x2 << UVD_MPC_SET_MUXA0__VARA_2__SHIFT) |
+		(0x3 << UVD_MPC_SET_MUXA0__VARA_3__SHIFT) |
+		(0x4 << UVD_MPC_SET_MUXA0__VARA_4__SHIFT)));
+
+	/* setup UVD_MPC_SET_MUXB0 */
+	WREG32_SOC15(UVD, 0, mmUVD_MPC_SET_MUXB0,
+		((0x1 << UVD_MPC_SET_MUXB0__VARB_1__SHIFT) |
+		(0x2 << UVD_MPC_SET_MUXB0__VARB_2__SHIFT) |
+		(0x3 << UVD_MPC_SET_MUXB0__VARB_3__SHIFT) |
+		(0x4 << UVD_MPC_SET_MUXB0__VARB_4__SHIFT)));
+
+	/* setup mmUVD_MPC_SET_MUX */
+	WREG32_SOC15(UVD, 0, mmUVD_MPC_SET_MUX,
+		((0x0 << UVD_MPC_SET_MUX__SET_0__SHIFT) |
+		(0x1 << UVD_MPC_SET_MUX__SET_1__SHIFT) |
+		(0x2 << UVD_MPC_SET_MUX__SET_2__SHIFT)));
+
+	vcn_v2_0_mc_resume(adev);
+
+	/* release VCPU reset to boot */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 0,
+		~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+
+	/* enable LMI MC and UMC channels */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2), 0,
+		~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
+
+	tmp = RREG32_SOC15(VCN, 0, mmUVD_SOFT_RESET);
+	tmp &= ~UVD_SOFT_RESET__LMI_SOFT_RESET_MASK;
+	tmp &= ~UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK;
+	WREG32_SOC15(VCN, 0, mmUVD_SOFT_RESET, tmp);
+
+	/* disable byte swapping */
+	lmi_swap_cntl = 0;
+#ifdef __BIG_ENDIAN
+	/* swap (8 in 32) RB and IB */
+	lmi_swap_cntl = 0xa;
+#endif
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_SWAP_CNTL, lmi_swap_cntl);
+
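+	/* boot the VCPU: retry up to 10 times, each attempt polling
+	 * UVD_STATUS for up to 1s (100 * 10ms) and soft-resetting the
+	 * VCPU before the next attempt
+	 */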
+	for (i = 0; i < 10; ++i) {
+		uint32_t status;
+
+		for (j = 0; j < 100; ++j) {
+			status = RREG32_SOC15(UVD, 0, mmUVD_STATUS);
+			if (status & 2)
+				break;
+			mdelay(10);
+		}
+		r = 0;
+		if (status & 2)
+			break;
+
+		DRM_ERROR("VCN decode not responding, trying to reset the VCPU!!!\n");
+		WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+			UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK,
+			~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+		mdelay(10);
+		WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 0,
+			~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+		mdelay(10);
+		r = -1;
+	}
+
+	if (r) {
+		DRM_ERROR("VCN decode not responding, giving up!!!\n");
+		return r;
+	}
+
+	/* enable master interrupt */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN),
+		UVD_MASTINT_EN__VCPU_EN_MASK,
+		~UVD_MASTINT_EN__VCPU_EN_MASK);
+
+	/* clear the busy bit of VCN_STATUS */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS), 0,
+		~(2 << UVD_STATUS__VCPU_REPORT__SHIFT));
+
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_RBC_RB_VMID, 0);
+
+	/* force RBC into idle state */
+	rb_bufsz = order_base_2(ring->ring_size);
+	tmp = REG_SET_FIELD(0, UVD_RBC_RB_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_BLKSZ, 1);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_FETCH, 1);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_UPDATE, 1);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_RPTR_WR_EN, 1);
+	WREG32_SOC15(UVD, 0, mmUVD_RBC_RB_CNTL, tmp);
+
+	/* program the RB_BASE for the ring buffer */
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_LOW,
+		lower_32_bits(ring->gpu_addr));
+	WREG32_SOC15(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_HIGH,
+		upper_32_bits(ring->gpu_addr));
+
+	/* Initialize the ring buffer's read and write pointers */
+	WREG32_SOC15(UVD, 0, mmUVD_RBC_RB_RPTR, 0);
+
+	ring->wptr = RREG32_SOC15(UVD, 0, mmUVD_RBC_RB_RPTR);
+	WREG32_SOC15(UVD, 0, mmUVD_RBC_RB_WPTR,
+			lower_32_bits(ring->wptr));
+
+	ring = &adev->vcn.ring_enc[0];
+	WREG32_SOC15(UVD, 0, mmUVD_RB_RPTR, lower_32_bits(ring->wptr));
+	WREG32_SOC15(UVD, 0, mmUVD_RB_WPTR, lower_32_bits(ring->wptr));
+	WREG32_SOC15(UVD, 0, mmUVD_RB_BASE_LO, ring->gpu_addr);
+	WREG32_SOC15(UVD, 0, mmUVD_RB_BASE_HI, upper_32_bits(ring->gpu_addr));
+	WREG32_SOC15(UVD, 0, mmUVD_RB_SIZE, ring->ring_size / 4);
+
+	ring = &adev->vcn.ring_enc[1];
+	WREG32_SOC15(UVD, 0, mmUVD_RB_RPTR2, lower_32_bits(ring->wptr));
+	WREG32_SOC15(UVD, 0, mmUVD_RB_WPTR2, lower_32_bits(ring->wptr));
+	WREG32_SOC15(UVD, 0, mmUVD_RB_BASE_LO2, ring->gpu_addr);
+	WREG32_SOC15(UVD, 0, mmUVD_RB_BASE_HI2, upper_32_bits(ring->gpu_addr));
+	WREG32_SOC15(UVD, 0, mmUVD_RB_SIZE2, ring->ring_size / 4);
+
+	r = jpeg_v2_0_start(adev);
+
+	return r;
+}
+
+static int vcn_v2_0_stop(struct amdgpu_device *adev)
+{
+	uint32_t tmp;
+	int r;
+
+	r = jpeg_v2_0_stop(adev);
+	if (r)
+		return r;
+	/* wait for uvd idle */
+	SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_STATUS, UVD_STATUS__IDLE, 0x7, r);
+	if (r)
+		return r;
+
+	tmp = UVD_LMI_STATUS__VCPU_LMI_WRITE_CLEAN_MASK |
+		UVD_LMI_STATUS__READ_CLEAN_MASK |
+		UVD_LMI_STATUS__WRITE_CLEAN_MASK |
+		UVD_LMI_STATUS__WRITE_CLEAN_RAW_MASK;
+	SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_LMI_STATUS, tmp, tmp, r);
+	if (r)
+		return r;
+
+	/* stall UMC channel */
+	tmp = RREG32_SOC15(VCN, 0, mmUVD_LMI_CTRL2);
+	tmp |= UVD_LMI_CTRL2__STALL_ARB_UMC_MASK;
+	WREG32_SOC15(VCN, 0, mmUVD_LMI_CTRL2, tmp);
+
+	tmp = UVD_LMI_STATUS__UMC_READ_CLEAN_RAW_MASK |
+		UVD_LMI_STATUS__UMC_WRITE_CLEAN_RAW_MASK;
+	SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_LMI_STATUS, tmp, tmp, r);
+	if (r)
+		return r;
+
+	/* disable VCPU clock */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CNTL), 0,
+		~(UVD_VCPU_CNTL__CLK_EN_MASK));
+
+	/* reset LMI UMC */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+		UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK,
+		~UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK);
+
+	/* reset LMI */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+		UVD_SOFT_RESET__LMI_SOFT_RESET_MASK,
+		~UVD_SOFT_RESET__LMI_SOFT_RESET_MASK);
+
+	/* reset VCPU */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+		UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK,
+		~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+
+	/* clear status */
+	WREG32_SOC15(VCN, 0, mmUVD_STATUS, 0);
+
+	vcn_v2_0_enable_clock_gating(adev);
+	vcn_v2_0_enable_static_power_gating(adev);
+
+	return 0;
+}
+
+static bool vcn_v2_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return (RREG32_SOC15(VCN, 0, mmUVD_STATUS) == UVD_STATUS__IDLE);
+}
+
+static int vcn_v2_0_wait_for_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int ret = 0;
+
+	SOC15_WAIT_ON_RREG(VCN, 0, mmUVD_STATUS, UVD_STATUS__IDLE,
+		UVD_STATUS__IDLE, ret);
+
+	return ret;
+}
+
+static int vcn_v2_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	bool enable = (state == AMD_CG_STATE_GATE);
+
+	if (enable) {
+		/* wait for STATUS to clear */
+		if (!vcn_v2_0_is_idle(handle))
+			return -EBUSY;
+		vcn_v2_0_enable_clock_gating(adev);
+	} else {
+		/* disable HW gating and enable Sw gating */
+		vcn_v2_0_disable_clock_gating(adev);
+	}
+	return 0;
+}
+
+/**
+ * vcn_v2_0_dec_ring_get_rptr - get read pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware read pointer
+ */
+static uint64_t vcn_v2_0_dec_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	return RREG32_SOC15(UVD, 0, mmUVD_RBC_RB_RPTR);
+}
+
+/**
+ * vcn_v2_0_dec_ring_get_wptr - get write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware write pointer
+ */
+static uint64_t vcn_v2_0_dec_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell)
+		return adev->wb.wb[ring->wptr_offs];
+	else
+		return RREG32_SOC15(UVD, 0, mmUVD_RBC_RB_WPTR);
+}
+
+/**
+ * vcn_v2_0_dec_ring_set_wptr - set write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Commits the write pointer to the hardware
+ */
+static void vcn_v2_0_dec_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell) {
+		adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr);
+		WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr));
+	} else {
+		WREG32_SOC15(UVD, 0, mmUVD_RBC_RB_WPTR, lower_32_bits(ring->wptr));
+	}
+}
+
+/**
+ * vcn_v2_0_dec_ring_insert_start - insert a start command
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Write a start command to the ring.
+ */
+static void vcn_v2_0_dec_ring_insert_start(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA0_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, VCN_DEC_CMD_PACKET_START << 1);
+}
+
+/**
+ * vcn_v2_0_dec_ring_insert_end - insert an end command
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Write an end command to the ring.
+ */
+static void vcn_v2_0_dec_ring_insert_end(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, VCN_DEC_CMD_PACKET_END << 1);
+}
+
+/**
+ * vcn_v2_0_dec_ring_insert_nop - insert a nop command
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Write a nop command to the ring.
+ */
+static void vcn_v2_0_dec_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count)
+{
+	int i;
+
+	WARN_ON(ring->wptr % 2 || count % 2);
+
+	for (i = 0; i < count / 2; i++) {
+		amdgpu_ring_write(ring, PACKET0(mmUVD_NO_OP_INTERNAL_OFFSET, 0));
+		amdgpu_ring_write(ring, 0);
+	}
+}
+
+/**
+ * vcn_v2_0_dec_ring_emit_fence - emit a fence & trap command
+ *
+ * @ring: amdgpu_ring pointer
+ * @fence: fence to emit
+ *
+ * Write a fence and a trap command to the ring.
+ */
+static void vcn_v2_0_dec_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+				     unsigned flags)
+{
+	WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_CONTEXT_ID_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, seq);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA0_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, addr & 0xffffffff);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA1_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, upper_32_bits(addr) & 0xff);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, VCN_DEC_CMD_FENCE << 1);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA0_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA1_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET, 0));
+
+	amdgpu_ring_write(ring, VCN_DEC_CMD_TRAP << 1);
+}
+
+/**
+ * vcn_v2_0_dec_ring_emit_ib - execute indirect buffer
+ *
+ * @ring: amdgpu_ring pointer
+ * @ib: indirect buffer to execute
+ *
+ * Write ring commands to execute the indirect buffer
+ */
+static void vcn_v2_0_dec_ring_emit_ib(struct amdgpu_ring *ring,
+				      struct amdgpu_job *job,
+				      struct amdgpu_ib *ib,
+				      uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_LMI_RBC_IB_VMID_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, vmid);
+
+	amdgpu_ring_write(ring,	PACKET0(mmUVD_LMI_RBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring,	PACKET0(mmUVD_LMI_RBC_IB_64BIT_BAR_HIGH_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring,	PACKET0(mmUVD_RBC_IB_SIZE_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, ib->length_dw);
+}
+
+static void vcn_v2_0_dec_ring_emit_reg_wait(struct amdgpu_ring *ring,
+					    uint32_t reg, uint32_t val,
+					    uint32_t mask)
+{
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA0_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, reg << 2);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA1_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, val);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GP_SCRATCH8_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, mask);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET, 0));
+
+	amdgpu_ring_write(ring, VCN_DEC_CMD_REG_READ_COND_WAIT << 1);
+}
+
+static void vcn_v2_0_dec_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					    unsigned vmid, uint64_t pd_addr)
+{
+	struct amdgpu_vmhub *hub = &ring->adev->vmhub[ring->funcs->vmhub];
+	uint32_t data0, data1, mask;
+
+	pd_addr = amdgpu_gmc_emit_flush_gpu_tlb(ring, vmid, pd_addr);
+
+	/* wait for register write */
+	data0 = hub->ctx0_ptb_addr_lo32 + vmid * 2;
+	data1 = lower_32_bits(pd_addr);
+	mask = 0xffffffff;
+	vcn_v2_0_dec_ring_emit_reg_wait(ring, data0, data1, mask);
+}
+
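+/* the dec ring has no direct register-write packet: the register offset
+ * is loaded into GPCOM_VCPU_DATA0, the value into DATA1, and a WRITE_REG
+ * command is issued for the VCPU to perform the write
+ */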
+static void vcn_v2_0_dec_ring_emit_wreg(struct amdgpu_ring *ring,
+					uint32_t reg, uint32_t val)
+{
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA0_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, reg << 2);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA1_INTERNAL_OFFSET, 0));
+	amdgpu_ring_write(ring, val);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET, 0));
+
+	amdgpu_ring_write(ring, VCN_DEC_CMD_WRITE_REG << 1);
+}
+
+/**
+ * vcn_v2_0_enc_ring_get_rptr - get enc read pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware enc read pointer
+ */
+static uint64_t vcn_v2_0_enc_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->vcn.ring_enc[0])
+		return RREG32_SOC15(UVD, 0, mmUVD_RB_RPTR);
+	else
+		return RREG32_SOC15(UVD, 0, mmUVD_RB_RPTR2);
+}
+
+/**
+ * vcn_v2_0_enc_ring_get_wptr - get enc write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware enc write pointer
+ */
+static uint64_t vcn_v2_0_enc_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->vcn.ring_enc[0]) {
+		if (ring->use_doorbell)
+			return adev->wb.wb[ring->wptr_offs];
+		else
+			return RREG32_SOC15(UVD, 0, mmUVD_RB_WPTR);
+	} else {
+		if (ring->use_doorbell)
+			return adev->wb.wb[ring->wptr_offs];
+		else
+			return RREG32_SOC15(UVD, 0, mmUVD_RB_WPTR2);
+	}
+}
+
+/**
+ * vcn_v2_0_enc_ring_set_wptr - set enc write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Commits the enc write pointer to the hardware
+ */
+static void vcn_v2_0_enc_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->vcn.ring_enc[0]) {
+		if (ring->use_doorbell) {
+			adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr);
+			WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr));
+		} else {
+			WREG32_SOC15(UVD, 0, mmUVD_RB_WPTR, lower_32_bits(ring->wptr));
+		}
+	} else {
+		if (ring->use_doorbell) {
+			adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr);
+			WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr));
+		} else {
+			WREG32_SOC15(UVD, 0, mmUVD_RB_WPTR2, lower_32_bits(ring->wptr));
+		}
+	}
+}
+
+/**
+ * vcn_v2_0_enc_ring_emit_fence - emit an enc fence & trap command
+ *
+ * @ring: amdgpu_ring pointer
+ * @fence: fence to emit
+ *
+ * Write an enc fence and a trap command to the ring.
+ */
+static void vcn_v2_0_enc_ring_emit_fence(struct amdgpu_ring *ring, u64 addr,
+			u64 seq, unsigned flags)
+{
+	WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	amdgpu_ring_write(ring, VCN_ENC_CMD_FENCE);
+	amdgpu_ring_write(ring, addr);
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, seq);
+	amdgpu_ring_write(ring, VCN_ENC_CMD_TRAP);
+}
+
+static void vcn_v2_0_enc_ring_insert_end(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, VCN_ENC_CMD_END);
+}
+
+/**
+ * vcn_v2_0_enc_ring_emit_ib - enc execute indirect buffer
+ *
+ * @ring: amdgpu_ring pointer
+ * @ib: indirect buffer to execute
+ *
+ * Write enc ring commands to execute the indirect buffer
+ */
+static void vcn_v2_0_enc_ring_emit_ib(struct amdgpu_ring *ring,
+				      struct amdgpu_job *job,
+				      struct amdgpu_ib *ib,
+				      uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+
+	amdgpu_ring_write(ring, VCN_ENC_CMD_IB);
+	amdgpu_ring_write(ring, vmid);
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, ib->length_dw);
+}
+
+static void vcn_v2_0_enc_ring_emit_reg_wait(struct amdgpu_ring *ring,
+					    uint32_t reg, uint32_t val,
+					    uint32_t mask)
+{
+	amdgpu_ring_write(ring, VCN_ENC_CMD_REG_WAIT);
+	amdgpu_ring_write(ring, reg << 2);
+	amdgpu_ring_write(ring, mask);
+	amdgpu_ring_write(ring, val);
+}
+
+static void vcn_v2_0_enc_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					    unsigned int vmid, uint64_t pd_addr)
+{
+	struct amdgpu_vmhub *hub = &ring->adev->vmhub[ring->funcs->vmhub];
+
+	pd_addr = amdgpu_gmc_emit_flush_gpu_tlb(ring, vmid, pd_addr);
+
+	/* wait for reg writes */
+	vcn_v2_0_enc_ring_emit_reg_wait(ring, hub->ctx0_ptb_addr_lo32 + vmid * 2,
+					lower_32_bits(pd_addr), 0xffffffff);
+}
+
+static void vcn_v2_0_enc_ring_emit_wreg(struct amdgpu_ring *ring,
+					uint32_t reg, uint32_t val)
+{
+	amdgpu_ring_write(ring, VCN_ENC_CMD_REG_WRITE);
+	amdgpu_ring_write(ring,	reg << 2);
+	amdgpu_ring_write(ring, val);
+}
+
+/**
+ * vcn_v2_0_jpeg_ring_get_rptr - get read pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware read pointer
+ */
+static uint64_t vcn_v2_0_jpeg_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	return RREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_RPTR);
+}
+
+/**
+ * vcn_v2_0_jpeg_ring_get_wptr - get write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware write pointer
+ */
+static uint64_t vcn_v2_0_jpeg_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell)
+		return adev->wb.wb[ring->wptr_offs];
+	else
+		return RREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_WPTR);
+}
+
+/**
+ * vcn_v2_0_jpeg_ring_set_wptr - set write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Commits the write pointer to the hardware
+ */
+static void vcn_v2_0_jpeg_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell) {
+		adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr);
+		WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr));
+	} else {
+		WREG32_SOC15(UVD, 0, mmUVD_JRBC_RB_WPTR, lower_32_bits(ring->wptr));
+	}
+}
+
+/**
+ * vcn_v2_0_jpeg_ring_insert_start - insert a start command
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Write a start command to the ring.
+ */
+static void vcn_v2_0_jpeg_ring_insert_start(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x68e04);
+
+	amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x80010000);
+}
+
+/**
+ * vcn_v2_0_jpeg_ring_insert_end - insert an end command
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Write an end command to the ring.
+ */
+static void vcn_v2_0_jpeg_ring_insert_end(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x68e04);
+
+	amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x00010000);
+}
+
+/**
+ * vcn_v2_0_jpeg_ring_emit_fence - emit a fence & trap command
+ *
+ * @ring: amdgpu_ring pointer
+ * @fence: fence to emit
+ *
+ * Write a fence and a trap command to the ring.
+ */
+static void vcn_v2_0_jpeg_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+				     unsigned flags)
+{
+	WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_JPEG_GPCOM_DATA0_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, seq);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JPEG_GPCOM_DATA1_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, seq);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_RB_MEM_WR_64BIT_BAR_LOW_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_RB_MEM_WR_64BIT_BAR_HIGH_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JPEG_GPCOM_CMD_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x8);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JPEG_GPCOM_CMD_INTERNAL_OFFSET,
+		0, PACKETJ_CONDITION_CHECK0, PACKETJ_TYPE4));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x3fbc);
+
+	amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x1);
+
+	amdgpu_ring_write(ring, PACKETJ(0, 0, 0, PACKETJ_TYPE7));
+	amdgpu_ring_write(ring, 0);
+}
+
+/**
+ * vcn_v2_0_jpeg_ring_emit_ib - execute indirect buffer
+ *
+ * @ring: amdgpu_ring pointer
+ * @ib: indirect buffer to execute
+ *
+ * Write ring commands to execute the indirect buffer.
+ */
+static void vcn_v2_0_jpeg_ring_emit_ib(struct amdgpu_ring *ring,
+				       struct amdgpu_job *job,
+				       struct amdgpu_ib *ib,
+				       uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
+
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JPEG_VMID_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, (vmid | (vmid << 4)));
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr));
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_IB_64BIT_BAR_HIGH_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JRBC_IB_SIZE_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, ib->length_dw);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_RB_MEM_RD_64BIT_BAR_LOW_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, lower_32_bits(ring->gpu_addr));
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_LMI_JRBC_RB_MEM_RD_64BIT_BAR_HIGH_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, upper_32_bits(ring->gpu_addr));
+
+	amdgpu_ring_write(ring,	PACKETJ(0, 0, PACKETJ_CONDITION_CHECK0, PACKETJ_TYPE2));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JRBC_RB_COND_RD_TIMER_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x01400200);
+
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_JRBC_RB_REF_DATA_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x2);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JRBC_STATUS_INTERNAL_OFFSET,
+		0, PACKETJ_CONDITION_CHECK3, PACKETJ_TYPE3));
+	amdgpu_ring_write(ring, 0x2);
+}
+
+static void vcn_v2_0_jpeg_ring_emit_reg_wait(struct amdgpu_ring *ring,
+					    uint32_t reg, uint32_t val,
+					    uint32_t mask)
+{
+	uint32_t reg_offset = (reg << 2);
+
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_JRBC_RB_COND_RD_TIMER_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, 0x01400200);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JRBC_RB_REF_DATA_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	amdgpu_ring_write(ring, val);
+
+	amdgpu_ring_write(ring, PACKETJ(mmUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	if (reg_offset >= 0x10000 && reg_offset <= 0x105ff) {
+		amdgpu_ring_write(ring, 0);
+		amdgpu_ring_write(ring,
+			PACKETJ((reg_offset >> 2), 0, 0, PACKETJ_TYPE3));
+	} else {
+		amdgpu_ring_write(ring, reg_offset);
+		amdgpu_ring_write(ring,	PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+			0, 0, PACKETJ_TYPE3));
+	}
+	amdgpu_ring_write(ring, mask);
+}
+
+static void vcn_v2_0_jpeg_ring_emit_vm_flush(struct amdgpu_ring *ring,
+		unsigned vmid, uint64_t pd_addr)
+{
+	struct amdgpu_vmhub *hub = &ring->adev->vmhub[ring->funcs->vmhub];
+	uint32_t data0, data1, mask;
+
+	pd_addr = amdgpu_gmc_emit_flush_gpu_tlb(ring, vmid, pd_addr);
+
+	/* wait for register write */
+	data0 = hub->ctx0_ptb_addr_lo32 + vmid * 2;
+	data1 = lower_32_bits(pd_addr);
+	mask = 0xffffffff;
+	vcn_v2_0_jpeg_ring_emit_reg_wait(ring, data0, data1, mask);
+}
+
+static void vcn_v2_0_jpeg_ring_emit_wreg(struct amdgpu_ring *ring,
+					uint32_t reg, uint32_t val)
+{
+	uint32_t reg_offset = (reg << 2);
+
+	amdgpu_ring_write(ring,	PACKETJ(mmUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+		0, 0, PACKETJ_TYPE0));
+	if (reg_offset >= 0x10000 && reg_offset <= 0x105ff) {
+		amdgpu_ring_write(ring, 0);
+		amdgpu_ring_write(ring,
+			PACKETJ((reg_offset >> 2), 0, 0, PACKETJ_TYPE0));
+	} else {
+		amdgpu_ring_write(ring, reg_offset);
+		amdgpu_ring_write(ring,	PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+			0, 0, PACKETJ_TYPE0));
+	}
+	amdgpu_ring_write(ring, val);
+}
+
+static void vcn_v2_0_jpeg_ring_nop(struct amdgpu_ring *ring, uint32_t count)
+{
+	int i;
+
+	WARN_ON(ring->wptr % 2 || count % 2);
+
+	for (i = 0; i < count / 2; i++) {
+		amdgpu_ring_write(ring, PACKETJ(0, 0, 0, PACKETJ_TYPE6));
+		amdgpu_ring_write(ring, 0);
+	}
+}
+
+static int vcn_v2_0_set_interrupt_state(struct amdgpu_device *adev,
+					struct amdgpu_irq_src *source,
+					unsigned type,
+					enum amdgpu_interrupt_state state)
+{
+	return 0;
+}
+
+static int vcn_v2_0_process_interrupt(struct amdgpu_device *adev,
+				      struct amdgpu_irq_src *source,
+				      struct amdgpu_iv_entry *entry)
+{
+	DRM_DEBUG("IH: VCN TRAP\n");
+
+	switch (entry->src_id) {
+	case VCN_2_0__SRCID__UVD_SYSTEM_MESSAGE_INTERRUPT:
+		amdgpu_fence_process(&adev->vcn.ring_dec);
+		break;
+	case VCN_2_0__SRCID__UVD_ENC_GENERAL_PURPOSE:
+		amdgpu_fence_process(&adev->vcn.ring_enc[0]);
+		break;
+	case VCN_2_0__SRCID__UVD_ENC_LOW_LATENCY:
+		amdgpu_fence_process(&adev->vcn.ring_enc[1]);
+		break;
+	case VCN_2_0__SRCID__JPEG_DECODE:
+		amdgpu_fence_process(&adev->vcn.ring_jpeg);
+		break;
+	default:
+		DRM_ERROR("Unhandled interrupt: %d %d\n",
+			  entry->src_id, entry->src_data[0]);
+		break;
+	}
+
+	return 0;
+}
+
+static int vcn_v2_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	/* This doesn't actually powergate the VCN block.
+	 * That's done in the dpm code via the SMC.  This
+	 * just re-inits the block as necessary.  The actual
+	 * gating still happens in the dpm code.  We should
+	 * revisit this when there is a cleaner line between
+	 * the smc and the hw blocks
+	 */
+	int ret;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (state == adev->vcn.cur_state)
+		return 0;
+
+	if (state == AMD_PG_STATE_GATE)
+		ret = vcn_v2_0_stop(adev);
+	else
+		ret = vcn_v2_0_start(adev);
+
+	if (!ret)
+		adev->vcn.cur_state = state;
+	return ret;
+}
+
+static const struct amd_ip_funcs vcn_v2_0_ip_funcs = {
+	.name = "vcn_v2_0",
+	.early_init = vcn_v2_0_early_init,
+	.late_init = NULL,
+	.sw_init = vcn_v2_0_sw_init,
+	.sw_fini = vcn_v2_0_sw_fini,
+	.hw_init = vcn_v2_0_hw_init,
+	.hw_fini = vcn_v2_0_hw_fini,
+	.suspend = vcn_v2_0_suspend,
+	.resume = vcn_v2_0_resume,
+	.is_idle = vcn_v2_0_is_idle,
+	.wait_for_idle = vcn_v2_0_wait_for_idle,
+	.check_soft_reset = NULL,
+	.pre_soft_reset = NULL,
+	.soft_reset = NULL,
+	.post_soft_reset = NULL,
+	.set_clockgating_state = vcn_v2_0_set_clockgating_state,
+	.set_powergating_state = vcn_v2_0_set_powergating_state,
+};
+
+static const struct amdgpu_ring_funcs vcn_v2_0_dec_ring_vm_funcs = {
+	.type = AMDGPU_RING_TYPE_VCN_DEC,
+	.align_mask = 0xf,
+	.vmhub = AMDGPU_MMHUB,
+	.get_rptr = vcn_v2_0_dec_ring_get_rptr,
+	.get_wptr = vcn_v2_0_dec_ring_get_wptr,
+	.set_wptr = vcn_v2_0_dec_ring_set_wptr,
+	.emit_frame_size =
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+		8 + /* vcn_v2_0_dec_ring_emit_vm_flush */
+		14 + 14 + /* vcn_v2_0_dec_ring_emit_fence x2 vm fence */
+		6,
+	.emit_ib_size = 8, /* vcn_v2_0_dec_ring_emit_ib */
+	.emit_ib = vcn_v2_0_dec_ring_emit_ib,
+	.emit_fence = vcn_v2_0_dec_ring_emit_fence,
+	.emit_vm_flush = vcn_v2_0_dec_ring_emit_vm_flush,
+	.test_ring = amdgpu_vcn_dec_ring_test_ring,
+	.test_ib = amdgpu_vcn_dec_ring_test_ib,
+	.insert_nop = vcn_v2_0_dec_ring_insert_nop,
+	.insert_start = vcn_v2_0_dec_ring_insert_start,
+	.insert_end = vcn_v2_0_dec_ring_insert_end,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.begin_use = amdgpu_vcn_ring_begin_use,
+	.end_use = amdgpu_vcn_ring_end_use,
+	.emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
+	.emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
+	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+};
+
+static const struct amdgpu_ring_funcs vcn_v2_0_enc_ring_vm_funcs = {
+	.type = AMDGPU_RING_TYPE_VCN_ENC,
+	.align_mask = 0x3f,
+	.nop = VCN_ENC_CMD_NO_OP,
+	.vmhub = AMDGPU_MMHUB,
+	.get_rptr = vcn_v2_0_enc_ring_get_rptr,
+	.get_wptr = vcn_v2_0_enc_ring_get_wptr,
+	.set_wptr = vcn_v2_0_enc_ring_set_wptr,
+	.emit_frame_size =
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 4 +
+		4 + /* vcn_v2_0_enc_ring_emit_vm_flush */
+		5 + 5 + /* vcn_v2_0_enc_ring_emit_fence x2 vm fence */
+		1, /* vcn_v2_0_enc_ring_insert_end */
+	.emit_ib_size = 5, /* vcn_v2_0_enc_ring_emit_ib */
+	.emit_ib = vcn_v2_0_enc_ring_emit_ib,
+	.emit_fence = vcn_v2_0_enc_ring_emit_fence,
+	.emit_vm_flush = vcn_v2_0_enc_ring_emit_vm_flush,
+	.test_ring = amdgpu_vcn_enc_ring_test_ring,
+	.test_ib = amdgpu_vcn_enc_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.insert_end = vcn_v2_0_enc_ring_insert_end,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.begin_use = amdgpu_vcn_ring_begin_use,
+	.end_use = amdgpu_vcn_ring_end_use,
+	.emit_wreg = vcn_v2_0_enc_ring_emit_wreg,
+	.emit_reg_wait = vcn_v2_0_enc_ring_emit_reg_wait,
+	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+};
+
+static const struct amdgpu_ring_funcs vcn_v2_0_jpeg_ring_vm_funcs = {
+	.type = AMDGPU_RING_TYPE_VCN_JPEG,
+	.align_mask = 0xf,
+	.vmhub = AMDGPU_MMHUB,
+	.get_rptr = vcn_v2_0_jpeg_ring_get_rptr,
+	.get_wptr = vcn_v2_0_jpeg_ring_get_wptr,
+	.set_wptr = vcn_v2_0_jpeg_ring_set_wptr,
+	.emit_frame_size =
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+		8 + /* vcn_v2_0_jpeg_ring_emit_vm_flush */
+		18 + 18 + /* vcn_v2_0_jpeg_ring_emit_fence x2 vm fence */
+		8 + 16,
+	.emit_ib_size = 22, /* vcn_v2_0_jpeg_ring_emit_ib */
+	.emit_ib = vcn_v2_0_jpeg_ring_emit_ib,
+	.emit_fence = vcn_v2_0_jpeg_ring_emit_fence,
+	.emit_vm_flush = vcn_v2_0_jpeg_ring_emit_vm_flush,
+	.test_ring = amdgpu_vcn_jpeg_ring_test_ring,
+	.test_ib = amdgpu_vcn_jpeg_ring_test_ib,
+	.insert_nop = vcn_v2_0_jpeg_ring_nop,
+	.insert_start = vcn_v2_0_jpeg_ring_insert_start,
+	.insert_end = vcn_v2_0_jpeg_ring_insert_end,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.begin_use = amdgpu_vcn_ring_begin_use,
+	.end_use = amdgpu_vcn_ring_end_use,
+	.emit_wreg = vcn_v2_0_jpeg_ring_emit_wreg,
+	.emit_reg_wait = vcn_v2_0_jpeg_ring_emit_reg_wait,
+	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+};
+
+static void vcn_v2_0_set_dec_ring_funcs(struct amdgpu_device *adev)
+{
+	adev->vcn.ring_dec.funcs = &vcn_v2_0_dec_ring_vm_funcs;
+	DRM_INFO("VCN decode is enabled in VM mode\n");
+}
+
+static void vcn_v2_0_set_enc_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < adev->vcn.num_enc_rings; ++i)
+		adev->vcn.ring_enc[i].funcs = &vcn_v2_0_enc_ring_vm_funcs;
+
+	DRM_INFO("VCN encode is enabled in VM mode\n");
+}
+
+static void vcn_v2_0_set_jpeg_ring_funcs(struct amdgpu_device *adev)
+{
+	adev->vcn.ring_jpeg.funcs = &vcn_v2_0_jpeg_ring_vm_funcs;
+	DRM_INFO("VCN jpeg decode is enabled in VM mode\n");
+}
+
+static const struct amdgpu_irq_src_funcs vcn_v2_0_irq_funcs = {
+	.set = vcn_v2_0_set_interrupt_state,
+	.process = vcn_v2_0_process_interrupt,
+};
+
+static void vcn_v2_0_set_irq_funcs(struct amdgpu_device *adev)
+{
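+	/* one fence irq per enc ring, plus one each for the dec and jpeg rings */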
+	adev->vcn.irq.num_types = adev->vcn.num_enc_rings + 2;
+	adev->vcn.irq.funcs = &vcn_v2_0_irq_funcs;
+}
+
+const struct amdgpu_ip_block_version vcn_v2_0_ip_block =
+{
+		.type = AMD_IP_BLOCK_TYPE_VCN,
+		.major = 2,
+		.minor = 0,
+		.rev = 0,
+		.funcs = &vcn_v2_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.h b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.h
new file mode 100644
index 000000000000..a74227f4663b
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __VCN_V2_0_H__
+#define __VCN_V2_0_H__
+
+extern const struct amdgpu_ip_block_version vcn_v2_0_ip_block;
+
+#endif /* __VCN_V2_0_H__ */
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread
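
A note on how this block gets used: an amdgpu_ip_block_version like
vcn_v2_0_ip_block above is registered with the device during early init.
A minimal sketch of such a call site, assuming the existing
amdgpu_device_ip_block_add() helper (the actual SoC hookup for VCN 2.0 is
added elsewhere in this series, so the function below is illustrative
only):

	#include "amdgpu.h"
	#include "vcn_v2_0.h"

	/* illustrative only: register the VCN 2.0 IP block at SoC init */
	static int example_set_ip_blocks(struct amdgpu_device *adev)
	{
		/* ... common, gmc, ih, psp, gfx, sdma blocks first ... */
		amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block);
		return 0;
	}

The amd_ip_funcs hooks wired up above (hw_init, suspend,
set_powergating_state, ...) are then invoked by the generic amdgpu device
code at the matching points in the device life cycle.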

* [PATCH 118/459] drm/amdgpu/mes: add amdgpu_mes driver parameter
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (16 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 117/459] drm/amdgpu: add initial VCN2.0 support (v2) Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 119/459] drm/amdgpu/mes: add mes header file and definition Alex Deucher
                     ` (74 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

amdgpu_mes, which is a driver-scope parameter, controls
whether MES is enabled or not.

MES (Micro Engine Scheduler) is the new on-chip hw scheduling
microcontroller.  It can be used to handle queue scheduling,
preemption and priorities.

Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     | 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index b4a887e42370..a4e2ce48bd63 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -162,6 +162,7 @@ extern uint amdgpu_ras_mask;
 extern int amdgpu_async_gfx_ring;
 extern int amdgpu_mcbp;
 extern int amdgpu_discovery;
+extern int amdgpu_mes;
 
 #ifdef CONFIG_DRM_AMDGPU_SI
 extern int amdgpu_si_support;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index b22598a30134..9646de2daa02 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -139,6 +139,7 @@ uint amdgpu_dc_feature_mask = 0;
 int amdgpu_async_gfx_ring = 1;
 int amdgpu_mcbp = 0;
 int amdgpu_discovery = 0;
+int amdgpu_mes = 0;
 
 struct amdgpu_mgpu_info mgpu_info = {
 	.mutex = __MUTEX_INITIALIZER(mgpu_info.mutex),
@@ -584,6 +585,10 @@ MODULE_PARM_DESC(discovery,
 	"Allow driver to discover hardware IPs from IP Discovery table at the top of VRAM");
 module_param_named(discovery, amdgpu_discovery, int, 0444);
 
+MODULE_PARM_DESC(mes,
+	"Enable Micro Engine Scheduler (0 = disabled (default), 1 = enabled)");
+module_param_named(mes, amdgpu_mes, int, 0444);
+
 #ifdef CONFIG_HSA_AMD
 /**
  * DOC: sched_policy (int)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread
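
Usage note: the parameter is registered with mode 0444, so it is read-only
at runtime and has to be set at load time, e.g. amdgpu.mes=1 on the kernel
command line or "modprobe amdgpu mes=1". A sketch of how such a flag is
typically latched into the device follows; the actual per-device check is
added later in the series (patch 121), so treat this as illustrative:

	/* illustrative only: latch the module parameter into the device */
	if (amdgpu_mes)
		adev->enable_mes = true;

(adev->enable_mes itself is introduced in patch 119 below.)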

* [PATCH 119/459] drm/amdgpu/mes: add mes header file and definition
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (17 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 118/459] drm/amdgpu/mes: add amdgpu_mes driver parameter Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 120/459] drm/amdgpu/mes: add definitions of ip callback function Alex Deucher
                     ` (73 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

Add a dummy header file and definitions for MES.

Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     |  5 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 31 +++++++++++++++++++++++++
 2 files changed, 36 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index a4e2ce48bd63..f59a7fbf544a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -85,6 +85,7 @@
 #include "amdgpu_amdkfd.h"
 #include "amdgpu_smu.h"
 #include "amdgpu_discovery.h"
+#include "amdgpu_mes.h"
 
 #define MAX_GPU_INSTANCE		16
 
@@ -920,6 +921,10 @@ struct amdgpu_device {
 	/* discovery */
 	uint8_t				*discovery;
 
+	/* mes */
+	bool                            enable_mes;
+	struct amdgpu_mes               mes;
+
 	struct amdgpu_ip_block          ip_blocks[AMDGPU_MAX_IP_NUM];
 	int				num_ip_blocks;
 	struct mutex	mn_lock;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
new file mode 100644
index 000000000000..621ef8a7f074
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __AMDGPU_MES_H__
+#define __AMDGPU_MES_H__
+
+struct amdgpu_mes {
+
+};
+
+#endif /* __AMDGPU_MES_H__ */
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 120/459] drm/amdgpu/mes: add definitions of ip callback function
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (18 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 119/459] drm/amdgpu/mes: add mes header file and definition Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 121/459] drm/amdgpu/mes: enable mes on navi10 and later asic Alex Deucher
                     ` (72 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

Abstract an IP-independent MES function layer.
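
For illustration, a minimal sketch of how a generic MES layer could
dispatch through this callback table. The wrapper name
amdgpu_mes_add_hw_queue() is an assumption for the sketch, not
something this patch adds:

static int amdgpu_mes_add_hw_queue(struct amdgpu_mes *mes,
				   struct mes_add_queue_input *input)
{
	/* No-op until an IP backend (e.g. mes v10.1) wires up mes->funcs. */
	if (!mes->funcs || !mes->funcs->add_hw_queue)
		return -EOPNOTSUPP;

	return mes->funcs->add_hw_queue(mes, input);
}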

Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 54 +++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
index 621ef8a7f074..788084310dd5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
@@ -24,8 +24,62 @@
 #ifndef __AMDGPU_MES_H__
 #define __AMDGPU_MES_H__
 
+struct amdgpu_mes_funcs;
+
 struct amdgpu_mes {
+	struct amdgpu_device *adev;
+
+	/* ip specific functions */
+	struct amdgpu_mes_funcs *funcs;
+};
+
+struct mes_add_queue_input {
+	uint32_t	process_id;
+	uint64_t	page_table_base_addr;
+	uint64_t	process_va_start;
+	uint64_t	process_va_end;
+	uint64_t	process_quantum;
+	uint64_t	process_context_addr;
+	uint64_t	gang_quantum;
+	uint64_t	gang_context_addr;
+	uint32_t	inprocess_gang_priority;
+	uint32_t	gang_global_priority_level;
+	uint32_t	doorbell_offset;
+	uint64_t	mqd_addr;
+	uint64_t	wptr_addr;
+	uint32_t	queue_type;
+	uint32_t	paging;
+};
+
+struct mes_remove_queue_input {
+	uint32_t	doorbell_offset;
+	uint64_t	gang_context_addr;
+};
+
+struct mes_suspend_gang_input {
+	bool		suspend_all_gangs;
+	uint64_t	gang_context_addr;
+	uint64_t	suspend_fence_addr;
+	uint32_t	suspend_fence_value;
+};
+
+struct mes_resume_gang_input {
+	bool		resume_all_gangs;
+	uint64_t	gang_context_addr;
+};
+
+struct amdgpu_mes_funcs {
+	int (*add_hw_queue)(struct amdgpu_mes *mes,
+			    struct mes_add_queue_input *input);
+
+	int (*remove_hw_queue)(struct amdgpu_mes *mes,
+			       struct mes_remove_queue_input *input);
+
+	int (*suspend_gang)(struct amdgpu_mes *mes,
+			    struct mes_suspend_gang_input *input);
 
+	int (*resume_gang)(struct amdgpu_mes *mes,
+			   struct mes_resume_gang_input *input);
 };
 
 #endif /* __AMDGPU_MES_H__ */
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 121/459] drm/amdgpu/mes: enable mes on navi10 and later asic
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (19 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 120/459] drm/amdgpu/mes: add definitions of ip callback function Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 122/459] drm/amdgpu/mes10.1: add ip block mes10.1 (v2) Alex Deucher
                     ` (71 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

When the amdgpu_mes module parameter is set and the ASIC family
is Navi10 or later, enable MES for the device.
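
As a sketch only (assuming the usual amdgpu module-parameter pattern;
the actual declaration belongs in amdgpu_drv.c and is not part of this
patch), the amdgpu_mes knob gating this would look roughly like:

int amdgpu_mes;
module_param_named(mes, amdgpu_mes, int, 0444);
MODULE_PARM_DESC(mes,
	"Enable Micro Engine Scheduler (0 = disabled (default), 1 = enabled)");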

Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index facf6ae79040..182dc834f7b6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2577,6 +2577,9 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	if (amdgpu_mcbp)
 		DRM_INFO("MCBP is enabled\n");
 
+	if (amdgpu_mes && adev->asic_type >= CHIP_NAVI10)
+		adev->enable_mes = true;
+
 	if (amdgpu_discovery) {
 		r = amdgpu_discovery_init(adev);
 		if (r) {
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 122/459] drm/amdgpu/mes10.1: add ip block mes10.1 (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (20 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 121/459] drm/amdgpu/mes: enable mes on navi10 and later asic Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 123/459] drm/amdgpu: add gfx v10 implementation (v8) Alex Deucher
                     ` (70 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

MES takes over the scheduling capability of GFX and SDMA, so
add MES as a standalone IP block.

v2: squash in updates (Alex)
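
For context, a hedged sketch of how a SoC init path (e.g. nv.c) could
register this block through the existing amdgpu_device_ip_block_add()
helper; the exact call site is an assumption here:

	if (adev->enable_mes)
		amdgpu_device_ip_block_add(adev, &mes_v10_1_ip_block);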

Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile      |   4 +
 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c   | 103 +++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/mes_v10_1.h   |  29 +++++++
 drivers/gpu/drm/amd/include/amd_shared.h |   3 +-
 4 files changed, 138 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mes_v10_1.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index b7916a138239..f99865363e68 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -120,6 +120,10 @@ amdgpu-y += \
 	sdma_v4_0.o \
 	sdma_v5_0.o
 
+# add MES block
+amdgpu-y += \
+	mes_v10_1.o
+
 # add UVD block
 amdgpu-y += \
 	amdgpu_uvd.o \
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
new file mode 100644
index 000000000000..2e655736b24d
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
@@ -0,0 +1,103 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "amdgpu.h"
+
+static int mes_v10_1_add_hw_queue(struct amdgpu_mes *mes,
+				  struct mes_add_queue_input *input)
+{
+	return 0;
+}
+
+static int mes_v10_1_remove_hw_queue(struct amdgpu_mes *mes,
+				     struct mes_remove_queue_input *input)
+{
+	return 0;
+}
+
+static int mes_v10_1_suspend_gang(struct amdgpu_mes *mes,
+				  struct mes_suspend_gang_input *input)
+{
+	return 0;
+}
+
+static int mes_v10_1_resume_gang(struct amdgpu_mes *mes,
+				 struct mes_resume_gang_input *input)
+{
+	return 0;
+}
+
+static const struct amdgpu_mes_funcs mes_v10_1_funcs = {
+	.add_hw_queue = mes_v10_1_add_hw_queue,
+	.remove_hw_queue = mes_v10_1_remove_hw_queue,
+	.suspend_gang = mes_v10_1_suspend_gang,
+	.resume_gang = mes_v10_1_resume_gang,
+};
+
+static int mes_v10_1_sw_init(void *handle)
+{
+	return 0;
+}
+
+static int mes_v10_1_sw_fini(void *handle)
+{
+	return 0;
+}
+
+static int mes_v10_1_hw_init(void *handle)
+{
+	return 0;
+}
+
+static int mes_v10_1_hw_fini(void *handle)
+{
+	return 0;
+}
+
+static int mes_v10_1_suspend(void *handle)
+{
+	return 0;
+}
+
+static int mes_v10_1_resume(void *handle)
+{
+	return 0;
+}
+
+static const struct amd_ip_funcs mes_v10_1_ip_funcs = {
+	.name = "mes_v10_1",
+	.sw_init = mes_v10_1_sw_init,
+	.sw_fini = mes_v10_1_sw_fini,
+	.hw_init = mes_v10_1_hw_init,
+	.hw_fini = mes_v10_1_hw_fini,
+	.suspend = mes_v10_1_suspend,
+	.resume = mes_v10_1_resume,
+};
+
+const struct amdgpu_ip_block_version mes_v10_1_ip_block = {
+	.type = AMD_IP_BLOCK_TYPE_MES,
+	.major = 10,
+	.minor = 1,
+	.rev = 0,
+	.funcs = &mes_v10_1_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.h b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.h
new file mode 100644
index 000000000000..60ea48fe3484
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MES_V10_1_H__
+#define __MES_V10_1_H__
+
+extern const struct amdgpu_ip_block_version mes_v10_1_ip_block;
+
+#endif /* __MES_V10_1_H__ */
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 1e638357c4a3..61fe4af4bd44 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -52,7 +52,8 @@ enum amd_ip_block_type {
 	AMD_IP_BLOCK_TYPE_UVD,
 	AMD_IP_BLOCK_TYPE_VCE,
 	AMD_IP_BLOCK_TYPE_ACP,
-	AMD_IP_BLOCK_TYPE_VCN
+	AMD_IP_BLOCK_TYPE_VCN,
+	AMD_IP_BLOCK_TYPE_MES
 };
 
 enum amd_clockgating_state {
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 123/459] drm/amdgpu: add gfx v10 implementation (v8)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (21 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 122/459] drm/amdgpu/mes10.1: add ip block mes10.1 (v2) Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 124/459] drm/amdgpu: avoid to use SOC15_REG_OFFSET in static array for navi10 Alex Deucher
                     ` (69 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

GFX is the graphics and compute block on the GPU.

v1: add initial gfx v10 implementation (Ray)
v2: convert to new get_vm_pde function in emit_vm_flush (Hawking)
v3: switch to new emit ib interfaces (Hawking)
v4: squash in updates (Alex)
v5: remove unused variables (Alex)
v6: some golden regs moved to vbios (Alex)
v7: squash in some cleanups (Alex)
v8: squash in golden settings update (Alex)

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile      |    3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |    4 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 5205 ++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.h   |   29 +
 4 files changed, 5238 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index f99865363e68..49479c93fab0 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -110,7 +110,8 @@ amdgpu-y += \
 	amdgpu_gfx.o \
 	amdgpu_rlc.o \
 	gfx_v8_0.o \
-	gfx_v9_0.o
+	gfx_v9_0.o \
+	gfx_v10_0.o
 
 # add async DMA block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 529ba1bdda55..4410c97ac9b7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -29,8 +29,8 @@
 #include <drm/drm_print.h>
 
 /* max number of rings */
-#define AMDGPU_MAX_RINGS		23
-#define AMDGPU_MAX_GFX_RINGS		1
+#define AMDGPU_MAX_RINGS		24
+#define AMDGPU_MAX_GFX_RINGS		2
 #define AMDGPU_MAX_COMPUTE_RINGS	8
 #define AMDGPU_MAX_VCE_RINGS		3
 #define AMDGPU_MAX_UVD_ENC_RINGS	2
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
new file mode 100644
index 000000000000..0d5d86a5d62f
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -0,0 +1,5205 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/firmware.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_gfx.h"
+#include "amdgpu_psp.h"
+#include "amdgpu_smu.h"
+#include "nv.h"
+#include "nvd.h"
+
+#include "gc/gc_10_1_0_offset.h"
+#include "gc/gc_10_1_0_sh_mask.h"
+#include "navi10_enum.h"
+#include "hdp/hdp_5_0_0_offset.h"
+#include "ivsrcid/gfx/irqsrcs_gfx_10_1.h"
+
+#include "soc15.h"
+#include "soc15_common.h"
+#include "clearstate_gfx10.h"
+#include "v10_structs.h"
+#include "gfx_v10_0.h"
+#include "nbio_v2_3.h"
+
+/*
+ * Navi10 has two gfx rings sharing each graphics pipe:
+ * 1. Primary ring
+ * 2. Async ring
+ *
+ * During the bring-up phase only the primary ring was used, so the gfx
+ * ring count was initially set to 1.
+ */
+#define GFX10_NUM_GFX_RINGS	2
+#define GFX10_MEC_HPD_SIZE	2048
+
+#define F32_CE_PROGRAM_RAM_SIZE		65536
+#define RLCG_UCODE_LOADING_START_ADDRESS	0x00002000L
+
+MODULE_FIRMWARE("amdgpu/navi10_ce.bin");
+MODULE_FIRMWARE("amdgpu/navi10_pfp.bin");
+MODULE_FIRMWARE("amdgpu/navi10_me.bin");
+MODULE_FIRMWARE("amdgpu/navi10_mec.bin");
+MODULE_FIRMWARE("amdgpu/navi10_mec2.bin");
+MODULE_FIRMWARE("amdgpu/navi10_rlc.bin");
+
+static const struct soc15_reg_golden golden_settings_gc_10_1[] =
+{
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_4, 0xffffffff, 0x00400014),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_CPF_CLK_CTRL, 0xfcff8fff, 0xf8000100),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQ_CLK_CTRL, 0x60000ff0, 0x60000100),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQG_CLK_CTRL, 0x40000000, 0x40000100),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_VGT_CLK_CTRL, 0xffff8fff, 0xffff8100),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_WD_CLK_CTRL, 0xfeff8fff, 0xfeff8100),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCH_PIPE_STEER, 0xffffffff, 0xe4e4e4e4),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCH_VC5_ENABLE, 0x00000002, 0x00000000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_SD_CNTL, 0x000007ff, 0x000005ff),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG, 0x20000000, 0x20000000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xffffffff, 0x00000420),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0x07800000, 0x04800000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DFSM_TILES_IN_FLIGHT, 0x0000ffff, 0x0000003f),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_LAST_OF_BURST_CONFIG, 0xffffffff, 0x03860204),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff0ffff, 0x00000500),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGE_PRIV_CONTROL, 0x000007ff, 0x000001fe),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0xffffffff, 0xe4e4e4e4),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x10321032),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x02310231),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2A_ADDR_MATCH_MASK, 0xffffffff, 0xffffffcf),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_ADDR_MATCH_MASK, 0xffffffff, 0xffffffcf),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CTRL2, 0xffffffff, 0x1402002f),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CTRL3, 0xffff9fff, 0x00001188),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE, 0x3fffffff, 0x08000009),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_LINE_STIPPLE_STATE, 0x0000ff0f, 0x00000000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmRMI_SPARE, 0xffffffff, 0xffff3101),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ALU_CLK_CTRL, 0xffffffff, 0xffffffff),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ARB_CONFIG, 0x00000100, 0x00000130),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_LDS_CLK_CTRL, 0xffffffff, 0xffffffff),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CGTT_CLK_CTRL, 0x40000ff0, 0x40000100),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00000000)
+};
+
+static const struct soc15_reg_golden golden_settings_gc_10_0_nv10[] =
+{
+	/* Pending on emulation bring up */
+};
+
+static void gfx_v10_0_set_ring_funcs(struct amdgpu_device *adev);
+static void gfx_v10_0_set_irq_funcs(struct amdgpu_device *adev);
+static void gfx_v10_0_set_gds_init(struct amdgpu_device *adev);
+static void gfx_v10_0_set_rlc_funcs(struct amdgpu_device *adev);
+static int gfx_v10_0_get_cu_info(struct amdgpu_device *adev,
+				 struct amdgpu_cu_info *cu_info);
+static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev);
+static void gfx_v10_0_select_se_sh(struct amdgpu_device *adev, u32 se_num,
+				   u32 sh_num, u32 instance);
+static u32 gfx_v10_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev);
+
+static int gfx_v10_0_rlc_backdoor_autoload_buffer_init(struct amdgpu_device *adev);
+static void gfx_v10_0_rlc_backdoor_autoload_buffer_fini(struct amdgpu_device *adev);
+static int gfx_v10_0_rlc_backdoor_autoload_enable(struct amdgpu_device *adev);
+static int gfx_v10_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev);
+static void gfx_v10_0_ring_emit_ce_meta(struct amdgpu_ring *ring, bool resume);
+static void gfx_v10_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume);
+static void gfx_v10_0_ring_emit_tmz(struct amdgpu_ring *ring, bool start);
+
+static void gfx10_kiq_set_resources(struct amdgpu_ring *kiq_ring, uint64_t queue_mask)
+{
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_SET_RESOURCES, 6));
+	amdgpu_ring_write(kiq_ring, PACKET3_SET_RESOURCES_VMID_MASK(0) |
+			  PACKET3_SET_RESOURCES_QUEUE_TYPE(0));	/* vmid_mask:0 queue_type:0 (KIQ) */
+	amdgpu_ring_write(kiq_ring, lower_32_bits(queue_mask));	/* queue mask lo */
+	amdgpu_ring_write(kiq_ring, upper_32_bits(queue_mask));	/* queue mask hi */
+	amdgpu_ring_write(kiq_ring, 0);	/* gws mask lo */
+	amdgpu_ring_write(kiq_ring, 0);	/* gws mask hi */
+	amdgpu_ring_write(kiq_ring, 0);	/* oac mask */
+	amdgpu_ring_write(kiq_ring, 0);	/* gds heap base:0, gds heap size:0 */
+}
+
+static void gfx10_kiq_map_queues(struct amdgpu_ring *kiq_ring,
+					struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = kiq_ring->adev;
+	uint64_t mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
+	uint64_t wptr_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_MAP_QUEUES, 5));
+	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, vidmem: 1, engine: 0, num_Q: 1 */
+			  PACKET3_MAP_QUEUES_QUEUE_SEL(0) | /* Queue_Sel */
+			  PACKET3_MAP_QUEUES_VMID(0) | /* VMID */
+			  PACKET3_MAP_QUEUES_QUEUE(ring->queue) |
+			  PACKET3_MAP_QUEUES_PIPE(ring->pipe) |
+			  PACKET3_MAP_QUEUES_ME((ring->me == 1 ? 0 : 1)) |
+			  PACKET3_MAP_QUEUES_QUEUE_TYPE(0) | /*queue_type: normal compute queue */
+			  PACKET3_MAP_QUEUES_ALLOC_FORMAT(0) | /* alloc format: all_on_one_pipe */
+			  PACKET3_MAP_QUEUES_ENGINE_SEL(eng_sel) |
+			  PACKET3_MAP_QUEUES_NUM_QUEUES(1)); /* num_queues: must be 1 */
+	amdgpu_ring_write(kiq_ring, PACKET3_MAP_QUEUES_DOORBELL_OFFSET(ring->doorbell_index));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(mqd_addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(mqd_addr));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(wptr_addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr));
+}
+
+static void gfx10_kiq_unmap_queues(struct amdgpu_ring *kiq_ring,
+				   struct amdgpu_ring *ring,
+				   enum amdgpu_unmap_queues_action action,
+				   u64 gpu_addr, u64 seq)
+{
+	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_UNMAP_QUEUES, 4));
+	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, engine: 0, num_Q: 1 */
+			  PACKET3_UNMAP_QUEUES_ACTION(action) |
+			  PACKET3_UNMAP_QUEUES_QUEUE_SEL(0) |
+			  PACKET3_UNMAP_QUEUES_ENGINE_SEL(eng_sel) |
+			  PACKET3_UNMAP_QUEUES_NUM_QUEUES(1));
+	amdgpu_ring_write(kiq_ring,
+		  PACKET3_UNMAP_QUEUES_DOORBELL_OFFSET0(ring->doorbell_index));
+
+	if (action == PREEMPT_QUEUES_NO_UNMAP) {
+		amdgpu_ring_write(kiq_ring, lower_32_bits(gpu_addr));
+		amdgpu_ring_write(kiq_ring, upper_32_bits(gpu_addr));
+		amdgpu_ring_write(kiq_ring, seq);
+	} else {
+		amdgpu_ring_write(kiq_ring, 0);
+		amdgpu_ring_write(kiq_ring, 0);
+		amdgpu_ring_write(kiq_ring, 0);
+	}
+}
+
+static void gfx10_kiq_query_status(struct amdgpu_ring *kiq_ring,
+					struct amdgpu_ring *ring,
+					u64 addr,
+					u64 seq)
+{
+	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_QUERY_STATUS, 5));
+	amdgpu_ring_write(kiq_ring,
+				PACKET3_QUERY_STATUS_CONTEXT_ID(0) |
+				PACKET3_QUERY_STATUS_INTERRUPT_SEL(0) |
+				PACKET3_QUERY_STATUS_COMMAND(2));
+	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, engine: 0, num_Q: 1 */
+				PACKET3_QUERY_STATUS_DOORBELL_OFFSET(ring->doorbell_index) |
+				PACKET3_QUERY_STATUS_ENG_SEL(eng_sel));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(addr));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(seq));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(seq));
+}
+
+static const struct kiq_pm4_funcs gfx_v10_0_kiq_pm4_funcs = {
+	.kiq_set_resources = gfx10_kiq_set_resources,
+	.kiq_map_queues = gfx10_kiq_map_queues,
+	.kiq_unmap_queues = gfx10_kiq_unmap_queues,
+	.kiq_query_status = gfx10_kiq_query_status,
+	.set_resources_size = 8,
+	.map_queues_size = 7,
+	.unmap_queues_size = 6,
+	.query_status_size = 7,
+};
+
+static void gfx_v10_0_set_kiq_pm4_funcs(struct amdgpu_device *adev)
+{
+	adev->gfx.kiq.pmf = &gfx_v10_0_kiq_pm4_funcs;
+}
+
+static void gfx_v10_0_init_golden_registers(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		soc15_program_register_sequence(adev,
+						golden_settings_gc_10_1,
+						(const u32)ARRAY_SIZE(golden_settings_gc_10_1));
+		soc15_program_register_sequence(adev,
+						golden_settings_gc_10_0_nv10,
+						(const u32)ARRAY_SIZE(golden_settings_gc_10_0_nv10));
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v10_0_scratch_init(struct amdgpu_device *adev)
+{
+	adev->gfx.scratch.num_reg = 8;
+	adev->gfx.scratch.reg_base = SOC15_REG_OFFSET(GC, 0, mmSCRATCH_REG0);
+	adev->gfx.scratch.free_mask = (1u << adev->gfx.scratch.num_reg) - 1;
+}
+
+static void gfx_v10_0_write_data_to_reg(struct amdgpu_ring *ring, int eng_sel,
+				       bool wc, uint32_t reg, uint32_t val)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, WRITE_DATA_ENGINE_SEL(eng_sel) |
+			  WRITE_DATA_DST_SEL(0) | (wc ? WR_CONFIRM : 0));
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, val);
+}
+
+static void gfx_v10_0_wait_reg_mem(struct amdgpu_ring *ring, int eng_sel,
+				  int mem_space, int opt, uint32_t addr0,
+				  uint32_t addr1, uint32_t ref, uint32_t mask,
+				  uint32_t inv)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WAIT_REG_MEM, 5));
+	amdgpu_ring_write(ring,
+			  /* memory (1) or register (0) */
+			  (WAIT_REG_MEM_MEM_SPACE(mem_space) |
+			   WAIT_REG_MEM_OPERATION(opt) | /* wait */
+			   WAIT_REG_MEM_FUNCTION(3) |  /* equal */
+			   WAIT_REG_MEM_ENGINE(eng_sel)));
+
+	if (mem_space)
+		BUG_ON(addr0 & 0x3); /* Dword align */
+	amdgpu_ring_write(ring, addr0);
+	amdgpu_ring_write(ring, addr1);
+	amdgpu_ring_write(ring, ref);
+	amdgpu_ring_write(ring, mask);
+	amdgpu_ring_write(ring, inv); /* poll interval */
+}
+
+static int gfx_v10_0_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t scratch;
+	uint32_t tmp = 0;
+	unsigned i;
+	int r;
+
+	r = amdgpu_gfx_scratch_get(adev, &scratch);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to get scratch reg (%d).\n", r);
+		return r;
+	}
+
+	WREG32(scratch, 0xCAFEDEAD);
+
+	r = amdgpu_ring_alloc(ring, 3);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring %d (%d).\n",
+			  ring->idx, r);
+		amdgpu_gfx_scratch_free(adev, scratch);
+		return r;
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SET_UCONFIG_REG, 1));
+	amdgpu_ring_write(ring, (scratch - PACKET3_SET_UCONFIG_REG_START));
+	amdgpu_ring_write(ring, 0xDEADBEEF);
+	amdgpu_ring_commit(ring);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		tmp = RREG32(scratch);
+		if (tmp == 0xDEADBEEF)
+			break;
+		if (amdgpu_emu_mode == 1)
+			msleep(1);
+		else
+			DRM_UDELAY(1);
+	}
+	if (i < adev->usec_timeout) {
+		if (amdgpu_emu_mode == 1)
+			DRM_INFO("ring test on %d succeeded in %d msecs\n",
+				 ring->idx, i);
+		else
+			DRM_INFO("ring test on %d succeeded in %d usecs\n",
+				 ring->idx, i);
+	} else {
+		DRM_ERROR("amdgpu: ring %d test failed (scratch(0x%04X)=0x%08X)\n",
+			  ring->idx, scratch, tmp);
+		r = -EINVAL;
+	}
+	amdgpu_gfx_scratch_free(adev, scratch);
+
+	return r;
+}
+
+static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_ib ib;
+	struct dma_fence *f = NULL;
+	uint32_t scratch;
+	uint32_t tmp = 0;
+	long r;
+
+	r = amdgpu_gfx_scratch_get(adev, &scratch);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get scratch reg (%ld).\n", r);
+		return r;
+	}
+
+	WREG32(scratch, 0xCAFEDEAD);
+
+	memset(&ib, 0, sizeof(ib));
+	r = amdgpu_ib_get(adev, NULL, 256, &ib);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r);
+		goto err1;
+	}
+
+	ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1);
+	ib.ptr[1] = ((scratch - PACKET3_SET_UCONFIG_REG_START));
+	ib.ptr[2] = 0xDEADBEEF;
+	ib.length_dw = 3;
+
+	r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f);
+	if (r)
+		goto err2;
+
+	r = dma_fence_wait_timeout(f, false, timeout);
+	if (r == 0) {
+		DRM_ERROR("amdgpu: IB test timed out.\n");
+		r = -ETIMEDOUT;
+		goto err2;
+	} else if (r < 0) {
+		DRM_ERROR("amdgpu: fence wait failed (%ld).\n", r);
+		goto err2;
+	}
+
+	tmp = RREG32(scratch);
+	if (tmp == 0xDEADBEEF) {
+		DRM_INFO("ib test on ring %d succeeded\n", ring->idx);
+		r = 0;
+	} else {
+		DRM_ERROR("amdgpu: ib test failed (scratch(0x%04X)=0x%08X)\n",
+			  scratch, tmp);
+		r = -EINVAL;
+	}
+err2:
+	amdgpu_ib_free(adev, &ib, NULL);
+	dma_fence_put(f);
+err1:
+	amdgpu_gfx_scratch_free(adev, scratch);
+
+	return r;
+}
+
+static void gfx_v10_0_free_microcode(struct amdgpu_device *adev)
+{
+	release_firmware(adev->gfx.pfp_fw);
+	adev->gfx.pfp_fw = NULL;
+	release_firmware(adev->gfx.me_fw);
+	adev->gfx.me_fw = NULL;
+	release_firmware(adev->gfx.ce_fw);
+	adev->gfx.ce_fw = NULL;
+	release_firmware(adev->gfx.rlc_fw);
+	adev->gfx.rlc_fw = NULL;
+	release_firmware(adev->gfx.mec_fw);
+	adev->gfx.mec_fw = NULL;
+	release_firmware(adev->gfx.mec2_fw);
+	adev->gfx.mec2_fw = NULL;
+
+	kfree(adev->gfx.rlc.register_list_format);
+}
+
+static void gfx_v10_0_init_rlc_ext_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_1 *rlc_hdr;
+
+	rlc_hdr = (const struct rlc_firmware_header_v2_1 *)adev->gfx.rlc_fw->data;
+	adev->gfx.rlc_srlc_fw_version = le32_to_cpu(rlc_hdr->save_restore_list_cntl_ucode_ver);
+	adev->gfx.rlc_srlc_feature_version = le32_to_cpu(rlc_hdr->save_restore_list_cntl_feature_ver);
+	adev->gfx.rlc.save_restore_list_cntl_size_bytes = le32_to_cpu(rlc_hdr->save_restore_list_cntl_size_bytes);
+	adev->gfx.rlc.save_restore_list_cntl = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->save_restore_list_cntl_offset_bytes);
+	adev->gfx.rlc_srlg_fw_version = le32_to_cpu(rlc_hdr->save_restore_list_gpm_ucode_ver);
+	adev->gfx.rlc_srlg_feature_version = le32_to_cpu(rlc_hdr->save_restore_list_gpm_feature_ver);
+	adev->gfx.rlc.save_restore_list_gpm_size_bytes = le32_to_cpu(rlc_hdr->save_restore_list_gpm_size_bytes);
+	adev->gfx.rlc.save_restore_list_gpm = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->save_restore_list_gpm_offset_bytes);
+	adev->gfx.rlc_srls_fw_version = le32_to_cpu(rlc_hdr->save_restore_list_srm_ucode_ver);
+	adev->gfx.rlc_srls_feature_version = le32_to_cpu(rlc_hdr->save_restore_list_srm_feature_ver);
+	adev->gfx.rlc.save_restore_list_srm_size_bytes = le32_to_cpu(rlc_hdr->save_restore_list_srm_size_bytes);
+	adev->gfx.rlc.save_restore_list_srm = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->save_restore_list_srm_offset_bytes);
+	adev->gfx.rlc.reg_list_format_direct_reg_list_length =
+			le32_to_cpu(rlc_hdr->reg_list_format_direct_reg_list_length);
+}
+
+static void gfx_v10_0_check_gfxoff_flag(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		if ((adev->gfx.rlc_fw_version < 85) ||
+			(adev->pm.fw_version < 0x002A0C00))
+			adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
+		break;
+	default:
+		break;
+	}
+}
+
+static int gfx_v10_0_init_microcode(struct amdgpu_device *adev)
+{
+	const char *chip_name;
+	char fw_name[30];
+	int err;
+	struct amdgpu_firmware_info *info = NULL;
+	const struct common_firmware_header *header = NULL;
+	const struct gfx_firmware_header_v1_0 *cp_hdr;
+	const struct rlc_firmware_header_v2_0 *rlc_hdr;
+	unsigned int *tmp = NULL;
+	unsigned int i = 0;
+	uint16_t version_major;
+	uint16_t version_minor;
+
+	DRM_DEBUG("\n");
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		chip_name = "navi10";
+		break;
+	default:
+		BUG();
+	}
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_pfp.bin", chip_name);
+	err = request_firmware(&adev->gfx.pfp_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.pfp_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.pfp_fw->data;
+	adev->gfx.pfp_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.pfp_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_me.bin", chip_name);
+	err = request_firmware(&adev->gfx.me_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.me_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.me_fw->data;
+	adev->gfx.me_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.me_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_ce.bin", chip_name);
+	err = request_firmware(&adev->gfx.ce_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.ce_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.ce_fw->data;
+	adev->gfx.ce_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.ce_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_rlc.bin", chip_name);
+	err = request_firmware(&adev->gfx.rlc_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.rlc_fw);
+	if (err)
+		goto out;
+	rlc_hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+	version_major = le16_to_cpu(rlc_hdr->header.header_version_major);
+	version_minor = le16_to_cpu(rlc_hdr->header.header_version_minor);
+	if (version_major == 2 && version_minor == 1)
+		adev->gfx.rlc.is_rlc_v2_1 = true;
+
+	adev->gfx.rlc_fw_version = le32_to_cpu(rlc_hdr->header.ucode_version);
+	adev->gfx.rlc_feature_version = le32_to_cpu(rlc_hdr->ucode_feature_version);
+	adev->gfx.rlc.save_and_restore_offset =
+			le32_to_cpu(rlc_hdr->save_and_restore_offset);
+	adev->gfx.rlc.clear_state_descriptor_offset =
+			le32_to_cpu(rlc_hdr->clear_state_descriptor_offset);
+	adev->gfx.rlc.avail_scratch_ram_locations =
+			le32_to_cpu(rlc_hdr->avail_scratch_ram_locations);
+	adev->gfx.rlc.reg_restore_list_size =
+			le32_to_cpu(rlc_hdr->reg_restore_list_size);
+	adev->gfx.rlc.reg_list_format_start =
+			le32_to_cpu(rlc_hdr->reg_list_format_start);
+	adev->gfx.rlc.reg_list_format_separate_start =
+			le32_to_cpu(rlc_hdr->reg_list_format_separate_start);
+	adev->gfx.rlc.starting_offsets_start =
+			le32_to_cpu(rlc_hdr->starting_offsets_start);
+	adev->gfx.rlc.reg_list_format_size_bytes =
+			le32_to_cpu(rlc_hdr->reg_list_format_size_bytes);
+	adev->gfx.rlc.reg_list_size_bytes =
+			le32_to_cpu(rlc_hdr->reg_list_size_bytes);
+	adev->gfx.rlc.register_list_format =
+			kmalloc(adev->gfx.rlc.reg_list_format_size_bytes +
+				adev->gfx.rlc.reg_list_size_bytes, GFP_KERNEL);
+	if (!adev->gfx.rlc.register_list_format) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	tmp = (unsigned int *)((uintptr_t)rlc_hdr +
+			le32_to_cpu(rlc_hdr->reg_list_format_array_offset_bytes));
+	for (i = 0 ; i < (rlc_hdr->reg_list_format_size_bytes >> 2); i++)
+		adev->gfx.rlc.register_list_format[i] =	le32_to_cpu(tmp[i]);
+
+	adev->gfx.rlc.register_restore = adev->gfx.rlc.register_list_format + i;
+
+	tmp = (unsigned int *)((uintptr_t)rlc_hdr +
+			le32_to_cpu(rlc_hdr->reg_list_array_offset_bytes));
+	for (i = 0 ; i < (rlc_hdr->reg_list_size_bytes >> 2); i++)
+		adev->gfx.rlc.register_restore[i] = le32_to_cpu(tmp[i]);
+
+	if (adev->gfx.rlc.is_rlc_v2_1)
+		gfx_v10_0_init_rlc_ext_microcode(adev);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mec.bin", chip_name);
+	err = request_firmware(&adev->gfx.mec_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.mec_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+	adev->gfx.mec_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.mec_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mec2.bin", chip_name);
+	err = request_firmware(&adev->gfx.mec2_fw, fw_name, adev->dev);
+	if (!err) {
+		err = amdgpu_ucode_validate(adev->gfx.mec2_fw);
+		if (err)
+			goto out;
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+			adev->gfx.mec2_fw->data;
+		adev->gfx.mec2_fw_version =
+			le32_to_cpu(cp_hdr->header.ucode_version);
+		adev->gfx.mec2_feature_version =
+			le32_to_cpu(cp_hdr->ucode_feature_version);
+	} else {
+		err = 0;
+		adev->gfx.mec2_fw = NULL;
+	}
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_PFP];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_PFP;
+		info->fw = adev->gfx.pfp_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_ME];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_ME;
+		info->fw = adev->gfx.me_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_CE];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_CE;
+		info->fw = adev->gfx.ce_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_G];
+		info->ucode_id = AMDGPU_UCODE_ID_RLC_G;
+		info->fw = adev->gfx.rlc_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		if (adev->gfx.rlc.is_rlc_v2_1 &&
+		    adev->gfx.rlc.save_restore_list_cntl_size_bytes &&
+		    adev->gfx.rlc.save_restore_list_gpm_size_bytes &&
+		    adev->gfx.rlc.save_restore_list_srm_size_bytes) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.save_restore_list_cntl_size_bytes, PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.save_restore_list_gpm_size_bytes, PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.save_restore_list_srm_size_bytes, PAGE_SIZE);
+		}
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC1];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_MEC1;
+		info->fw = adev->gfx.mec_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes) -
+			      le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC1_JT];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_MEC1_JT;
+		info->fw = adev->gfx.mec_fw;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+
+		if (adev->gfx.mec2_fw) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC2];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_MEC2;
+			info->fw = adev->gfx.mec2_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			cp_hdr = (const struct gfx_firmware_header_v1_0 *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes) -
+				      le32_to_cpu(cp_hdr->jt_size) * 4,
+				      PAGE_SIZE);
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC2_JT];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_MEC2_JT;
+			info->fw = adev->gfx.mec2_fw;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr->jt_size) * 4,
+				      PAGE_SIZE);
+		}
+	}
+
+out:
+	if (err) {
+		dev_err(adev->dev,
+			"gfx10: Failed to load firmware \"%s\"\n",
+			fw_name);
+		release_firmware(adev->gfx.pfp_fw);
+		adev->gfx.pfp_fw = NULL;
+		release_firmware(adev->gfx.me_fw);
+		adev->gfx.me_fw = NULL;
+		release_firmware(adev->gfx.ce_fw);
+		adev->gfx.ce_fw = NULL;
+		release_firmware(adev->gfx.rlc_fw);
+		adev->gfx.rlc_fw = NULL;
+		release_firmware(adev->gfx.mec_fw);
+		adev->gfx.mec_fw = NULL;
+		release_firmware(adev->gfx.mec2_fw);
+		adev->gfx.mec2_fw = NULL;
+	}
+
+	gfx_v10_0_check_gfxoff_flag(adev);
+
+	return err;
+}
+
+static u32 gfx_v10_0_get_csb_size(struct amdgpu_device *adev)
+{
+	u32 count = 0;
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+
+	/* begin clear state */
+	count += 2;
+	/* context control state */
+	count += 3;
+
+	for (sect = gfx10_cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT)
+				count += 2 + ext->reg_count;
+			else
+				return 0;
+		}
+	}
+
+	/* set PA_SC_TILE_STEERING_OVERRIDE */
+	count += 3;
+	/* end clear state */
+	count += 2;
+	/* clear state */
+	count += 2;
+
+	return count;
+}
+
+static void gfx_v10_0_get_csb_buffer(struct amdgpu_device *adev,
+				    volatile u32 *buffer)
+{
+	u32 count = 0, i;
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+	int ctx_reg_offset;
+
+	if (adev->gfx.rlc.cs_data == NULL)
+		return;
+	if (buffer == NULL)
+		return;
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	buffer[count++] = cpu_to_le32(PACKET3_PREAMBLE_BEGIN_CLEAR_STATE);
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	buffer[count++] = cpu_to_le32(0x80000000);
+	buffer[count++] = cpu_to_le32(0x80000000);
+
+	for (sect = adev->gfx.rlc.cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT) {
+				buffer[count++] =
+					cpu_to_le32(PACKET3(PACKET3_SET_CONTEXT_REG, ext->reg_count));
+				buffer[count++] = cpu_to_le32(ext->reg_index -
+						PACKET3_SET_CONTEXT_REG_START);
+				for (i = 0; i < ext->reg_count; i++)
+					buffer[count++] = cpu_to_le32(ext->extent[i]);
+			} else {
+				return;
+			}
+		}
+	}
+
+	ctx_reg_offset =
+		SOC15_REG_OFFSET(GC, 0, mmPA_SC_TILE_STEERING_OVERRIDE) - PACKET3_SET_CONTEXT_REG_START;
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_SET_CONTEXT_REG, 1));
+	buffer[count++] = cpu_to_le32(ctx_reg_offset);
+	buffer[count++] = cpu_to_le32(adev->gfx.config.pa_sc_tile_steering_override);
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	buffer[count++] = cpu_to_le32(PACKET3_PREAMBLE_END_CLEAR_STATE);
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_CLEAR_STATE, 0));
+	buffer[count++] = cpu_to_le32(0);
+}
+
+static void gfx_v10_0_rlc_fini(struct amdgpu_device *adev)
+{
+	/* clear state block */
+	amdgpu_bo_free_kernel(&adev->gfx.rlc.clear_state_obj,
+			&adev->gfx.rlc.clear_state_gpu_addr,
+			(void **)&adev->gfx.rlc.cs_ptr);
+
+	/* jump table block */
+	amdgpu_bo_free_kernel(&adev->gfx.rlc.cp_table_obj,
+			&adev->gfx.rlc.cp_table_gpu_addr,
+			(void **)&adev->gfx.rlc.cp_table_ptr);
+}
+
+static int gfx_v10_0_rlc_init(struct amdgpu_device *adev)
+{
+	const struct cs_section_def *cs_data;
+	int r;
+
+	adev->gfx.rlc.cs_data = gfx10_cs_data;
+
+	cs_data = adev->gfx.rlc.cs_data;
+
+	if (cs_data) {
+		/* init clear state block */
+		r = amdgpu_gfx_rlc_init_csb(adev);
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static void gfx_v10_0_mec_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.mec.hpd_eop_obj, NULL, NULL);
+	amdgpu_bo_free_kernel(&adev->gfx.mec.mec_fw_obj, NULL, NULL);
+}
+
+static int gfx_v10_0_me_init(struct amdgpu_device *adev)
+{
+	int r;
+
+	bitmap_zero(adev->gfx.me.queue_bitmap, AMDGPU_MAX_GFX_QUEUES);
+
+	amdgpu_gfx_graphics_queue_acquire(adev);
+
+	r = gfx_v10_0_init_microcode(adev);
+	if (r)
+		DRM_ERROR("Failed to load gfx firmware!\n");
+
+	return r;
+}
+
+static int gfx_v10_0_mec_init(struct amdgpu_device *adev)
+{
+	int r;
+	u32 *hpd;
+	const __le32 *fw_data = NULL;
+	unsigned fw_size;
+	u32 *fw = NULL;
+	size_t mec_hpd_size;
+
+	const struct gfx_firmware_header_v1_0 *mec_hdr = NULL;
+
+	bitmap_zero(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_COMPUTE_QUEUES);
+
+	/* take ownership of the relevant compute queues */
+	amdgpu_gfx_compute_queue_acquire(adev);
+	mec_hpd_size = adev->gfx.num_compute_rings * GFX10_MEC_HPD_SIZE;
+
+	r = amdgpu_bo_create_reserved(adev, mec_hpd_size, PAGE_SIZE,
+				      AMDGPU_GEM_DOMAIN_GTT,
+				      &adev->gfx.mec.hpd_eop_obj,
+				      &adev->gfx.mec.hpd_eop_gpu_addr,
+				      (void **)&hpd);
+	if (r) {
+		dev_warn(adev->dev, "(%d) create HDP EOP bo failed\n", r);
+		gfx_v10_0_mec_fini(adev);
+		return r;
+	}
+
+	memset(hpd, 0, adev->gfx.mec.hpd_eop_obj->tbo.mem.size);
+
+	amdgpu_bo_kunmap(adev->gfx.mec.hpd_eop_obj);
+	amdgpu_bo_unreserve(adev->gfx.mec.hpd_eop_obj);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+		mec_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+
+		fw_data = (const __le32 *) (adev->gfx.mec_fw->data +
+			 le32_to_cpu(mec_hdr->header.ucode_array_offset_bytes));
+		fw_size = le32_to_cpu(mec_hdr->header.ucode_size_bytes);
+
+		r = amdgpu_bo_create_reserved(adev, mec_hdr->header.ucode_size_bytes,
+					      PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+					      &adev->gfx.mec.mec_fw_obj,
+					      &adev->gfx.mec.mec_fw_gpu_addr,
+					      (void **)&fw);
+		if (r) {
+			dev_err(adev->dev, "(%d) failed to create mec fw bo\n", r);
+			gfx_v10_0_mec_fini(adev);
+			return r;
+		}
+
+		memcpy(fw, fw_data, fw_size);
+
+		amdgpu_bo_kunmap(adev->gfx.mec.mec_fw_obj);
+		amdgpu_bo_unreserve(adev->gfx.mec.mec_fw_obj);
+	}
+
+	return 0;
+}
+
+static uint32_t wave_read_ind(struct amdgpu_device *adev, uint32_t wave, uint32_t address)
+{
+	WREG32_SOC15(GC, 0, mmSQ_IND_INDEX,
+		(wave << SQ_IND_INDEX__WAVE_ID__SHIFT) |
+		(address << SQ_IND_INDEX__INDEX__SHIFT));
+	return RREG32_SOC15(GC, 0, mmSQ_IND_DATA);
+}
+
+static void wave_read_regs(struct amdgpu_device *adev, uint32_t wave,
+			   uint32_t thread, uint32_t regno,
+			   uint32_t num, uint32_t *out)
+{
+	WREG32_SOC15(GC, 0, mmSQ_IND_INDEX,
+		(wave << SQ_IND_INDEX__WAVE_ID__SHIFT) |
+		(regno << SQ_IND_INDEX__INDEX__SHIFT) |
+		(thread << SQ_IND_INDEX__WORKITEM_ID__SHIFT) |
+		(SQ_IND_INDEX__AUTO_INCR_MASK));
+	while (num--)
+		*(out++) = RREG32_SOC15(GC, 0, mmSQ_IND_DATA);
+}
+
+static void gfx_v10_0_read_wave_data(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t *dst, int *no_fields)
+{
+	/* in gfx10 the SIMD_ID is specified as part of the INSTANCE
+	 * field when performing a select_se_sh so it should be
+	 * zero here */
+	WARN_ON(simd != 0);
+
+	/* type 2 wave data */
+	dst[(*no_fields)++] = 2;
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_STATUS);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_LO);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_HI);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_EXEC_LO);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_EXEC_HI);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_HW_ID1);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_HW_ID2);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_INST_DW0);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_GPR_ALLOC);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_LDS_ALLOC);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_TRAPSTS);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_STS);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_STS2);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_DBG1);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_M0);
+}
+
+static void gfx_v10_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
+				     uint32_t wave, uint32_t start,
+				     uint32_t size, uint32_t *dst)
+{
+	WARN_ON(simd != 0);
+
+	wave_read_regs(
+		adev, wave, 0, start + SQIND_WAVE_SGPRS_OFFSET, size,
+		dst);
+}
+
+static void gfx_v10_0_read_wave_vgprs(struct amdgpu_device *adev, uint32_t simd,
+				      uint32_t wave, uint32_t thread,
+				      uint32_t start, uint32_t size,
+				      uint32_t *dst)
+{
+	wave_read_regs(
+		adev, wave, thread,
+		start + SQIND_WAVE_VGPRS_OFFSET, size, dst);
+}
+
+
+static const struct amdgpu_gfx_funcs gfx_v10_0_gfx_funcs = {
+	.get_gpu_clock_counter = &gfx_v10_0_get_gpu_clock_counter,
+	.select_se_sh = &gfx_v10_0_select_se_sh,
+	.read_wave_data = &gfx_v10_0_read_wave_data,
+	.read_wave_sgprs = &gfx_v10_0_read_wave_sgprs,
+	.read_wave_vgprs = &gfx_v10_0_read_wave_vgprs,
+};
+
+static void gfx_v10_0_gpu_early_init(struct amdgpu_device *adev)
+{
+	u32 gb_addr_config;
+
+	adev->gfx.funcs = &gfx_v10_0_gfx_funcs;
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		adev->gfx.config.max_hw_contexts = 8;
+		adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+		adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+		adev->gfx.config.sc_hiz_tile_fifo_size = 0x30;
+		adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
+		gb_addr_config = RREG32_SOC15(GC, 0, mmGB_ADDR_CONFIG);
+		break;
+	default:
+		BUG();
+		break;
+	}
+
+	adev->gfx.config.gb_addr_config = gb_addr_config;
+
+	adev->gfx.config.gb_addr_config_fields.num_pipes = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, NUM_PIPES);
+
+	adev->gfx.config.max_tile_pipes =
+		adev->gfx.config.gb_addr_config_fields.num_pipes;
+
+	adev->gfx.config.gb_addr_config_fields.max_compress_frags = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, MAX_COMPRESSED_FRAGS);
+	adev->gfx.config.gb_addr_config_fields.num_rb_per_se = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, NUM_RB_PER_SE);
+	adev->gfx.config.gb_addr_config_fields.num_se = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, NUM_SHADER_ENGINES);
+	adev->gfx.config.gb_addr_config_fields.pipe_interleave_size = 1 << (8 +
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, PIPE_INTERLEAVE_SIZE));
+}
+
+static int gfx_v10_0_gfx_ring_init(struct amdgpu_device *adev, int ring_id,
+				   int me, int pipe, int queue)
+{
+	int r;
+	struct amdgpu_ring *ring;
+	unsigned int irq_type;
+
+	ring = &adev->gfx.gfx_ring[ring_id];
+
+	ring->me = me;
+	ring->pipe = pipe;
+	ring->queue = queue;
+
+	ring->ring_obj = NULL;
+	ring->use_doorbell = true;
+
+	if (!ring_id)
+		ring->doorbell_index = adev->doorbell_index.gfx_ring0 << 1;
+	else
+		ring->doorbell_index = adev->doorbell_index.gfx_ring1 << 1;
+	sprintf(ring->name, "gfx_%d.%d.%d", ring->me, ring->pipe, ring->queue);
+
+	irq_type = AMDGPU_CP_IRQ_GFX_ME0_PIPE0_EOP + ring->pipe;
+	r = amdgpu_ring_init(adev, ring, 1024,
+			     &adev->gfx.eop_irq, irq_type);
+	if (r)
+		return r;
+	return 0;
+}
+
+static int gfx_v10_0_compute_ring_init(struct amdgpu_device *adev, int ring_id,
+				       int mec, int pipe, int queue)
+{
+	int r;
+	unsigned irq_type;
+	struct amdgpu_ring *ring = &adev->gfx.compute_ring[ring_id];
+
+	/* mec0 is me1 */
+	ring->me = mec + 1;
+	ring->pipe = pipe;
+	ring->queue = queue;
+
+	ring->ring_obj = NULL;
+	ring->use_doorbell = true;
+	ring->doorbell_index = (adev->doorbell_index.mec_ring0 + ring_id) << 1;
+	ring->eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr
+				+ (ring_id * GFX10_MEC_HPD_SIZE);
+	sprintf(ring->name, "comp_%d.%d.%d", ring->me, ring->pipe, ring->queue);
+
+	irq_type = AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP
+		+ ((ring->me - 1) * adev->gfx.mec.num_pipe_per_mec)
+		+ ring->pipe;
+
+	/* type-2 packets are deprecated on MEC, use type-3 instead */
+	r = amdgpu_ring_init(adev, ring, 1024,
+			     &adev->gfx.eop_irq, irq_type);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static int gfx_v10_0_sw_init(void *handle)
+{
+	int i, j, k, r, ring_id = 0;
+	struct amdgpu_kiq *kiq;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		adev->gfx.me.num_me = 1;
+		adev->gfx.me.num_pipe_per_me = 2;
+		adev->gfx.me.num_queue_per_pipe = 1;
+		adev->gfx.mec.num_mec = 2;
+		adev->gfx.mec.num_pipe_per_mec = 4;
+		adev->gfx.mec.num_queue_per_pipe = 8;
+		break;
+	default:
+		adev->gfx.me.num_me = 1;
+		adev->gfx.me.num_pipe_per_me = 1;
+		adev->gfx.me.num_queue_per_pipe = 1;
+		adev->gfx.mec.num_mec = 1;
+		adev->gfx.mec.num_pipe_per_mec = 4;
+		adev->gfx.mec.num_queue_per_pipe = 8;
+		break;
+	}
+
+	/* KIQ event */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_GRBM_CP,
+			      GFX_10_1__SRCID__CP_IB2_INTERRUPT_PKT,
+			      &adev->gfx.kiq.irq);
+	if (r)
+		return r;
+
+	/* EOP Event */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_GRBM_CP,
+			      GFX_10_1__SRCID__CP_EOP_INTERRUPT,
+			      &adev->gfx.eop_irq);
+	if (r)
+		return r;
+
+	/* Privileged reg */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_GRBM_CP,
+			      GFX_10_1__SRCID__CP_PRIV_REG_FAULT,
+			      &adev->gfx.priv_reg_irq);
+	if (r)
+		return r;
+
+	/* Privileged inst */
+	r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_GRBM_CP,
+			      GFX_10_1__SRCID__CP_PRIV_INSTR_FAULT,
+			      &adev->gfx.priv_inst_irq);
+	if (r)
+		return r;
+
+	adev->gfx.gfx_current_status = AMDGPU_GFX_NORMAL_MODE;
+
+	gfx_v10_0_scratch_init(adev);
+
+	r = gfx_v10_0_me_init(adev);
+	if (r)
+		return r;
+
+	r = gfx_v10_0_rlc_init(adev);
+	if (r) {
+		DRM_ERROR("Failed to init rlc BOs!\n");
+		return r;
+	}
+
+	r = gfx_v10_0_mec_init(adev);
+	if (r) {
+		DRM_ERROR("Failed to init MEC BOs!\n");
+		return r;
+	}
+
+	/* set up the gfx ring */
+	for (i = 0; i < adev->gfx.me.num_me; i++) {
+		for (j = 0; j < adev->gfx.me.num_queue_per_pipe; j++) {
+			for (k = 0; k < adev->gfx.me.num_pipe_per_me; k++) {
+				if (!amdgpu_gfx_is_me_queue_enabled(adev, i, k, j))
+					continue;
+
+				r = gfx_v10_0_gfx_ring_init(adev, ring_id,
+							    i, k, j);
+				if (r)
+					return r;
+				ring_id++;
+			}
+		}
+	}
+
+	ring_id = 0;
+	/* set up the compute queues - allocate horizontally across pipes */
+	for (i = 0; i < adev->gfx.mec.num_mec; ++i) {
+		for (j = 0; j < adev->gfx.mec.num_queue_per_pipe; j++) {
+			for (k = 0; k < adev->gfx.mec.num_pipe_per_mec; k++) {
+				if (!amdgpu_gfx_is_mec_queue_enabled(adev, i, k,
+								     j))
+					continue;
+
+				r = gfx_v10_0_compute_ring_init(adev, ring_id,
+								i, k, j);
+				if (r)
+					return r;
+
+				ring_id++;
+			}
+		}
+	}
+
+	r = amdgpu_gfx_kiq_init(adev, GFX10_MEC_HPD_SIZE);
+	if (r) {
+		DRM_ERROR("Failed to init KIQ BOs!\n");
+		return r;
+	}
+
+	kiq = &adev->gfx.kiq;
+	r = amdgpu_gfx_kiq_init_ring(adev, &kiq->ring, &kiq->irq);
+	if (r)
+		return r;
+
+	r = amdgpu_gfx_mqd_sw_init(adev, sizeof(struct v10_compute_mqd));
+	if (r)
+		return r;
+
+	/* reserve GDS, GWS and OA resource for gfx */
+	r = amdgpu_bo_create_kernel(adev, adev->gds.mem.gfx_partition_size,
+				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_GDS,
+				    &adev->gds.gds_gfx_bo, NULL, NULL);
+	if (r)
+		return r;
+
+	r = amdgpu_bo_create_kernel(adev, adev->gds.gws.gfx_partition_size,
+				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_GWS,
+				    &adev->gds.gws_gfx_bo, NULL, NULL);
+	if (r)
+		return r;
+
+	r = amdgpu_bo_create_kernel(adev, adev->gds.oa.gfx_partition_size,
+				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_OA,
+				    &adev->gds.oa_gfx_bo, NULL, NULL);
+	if (r)
+		return r;
+
+	/* allocate visible FB for rlc auto-loading fw */
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) {
+		r = gfx_v10_0_rlc_backdoor_autoload_buffer_init(adev);
+		if (r)
+			return r;
+	}
+
+	adev->gfx.ce_ram_size = F32_CE_PROGRAM_RAM_SIZE;
+
+	gfx_v10_0_gpu_early_init(adev);
+
+	return 0;
+}
+
+static void gfx_v10_0_pfp_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.pfp.pfp_fw_obj,
+			      &adev->gfx.pfp.pfp_fw_gpu_addr,
+			      (void **)&adev->gfx.pfp.pfp_fw_ptr);
+}
+
+static void gfx_v10_0_ce_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.ce.ce_fw_obj,
+			      &adev->gfx.ce.ce_fw_gpu_addr,
+			      (void **)&adev->gfx.ce.ce_fw_ptr);
+}
+
+static void gfx_v10_0_me_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.me.me_fw_obj,
+			      &adev->gfx.me.me_fw_gpu_addr,
+			      (void **)&adev->gfx.me.me_fw_ptr);
+}
+
+static int gfx_v10_0_sw_fini(void *handle)
+{
+	int i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	amdgpu_bo_free_kernel(&adev->gds.oa_gfx_bo, NULL, NULL);
+	amdgpu_bo_free_kernel(&adev->gds.gws_gfx_bo, NULL, NULL);
+	amdgpu_bo_free_kernel(&adev->gds.gds_gfx_bo, NULL, NULL);
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		amdgpu_ring_fini(&adev->gfx.gfx_ring[i]);
+	for (i = 0; i < adev->gfx.num_compute_rings; i++)
+		amdgpu_ring_fini(&adev->gfx.compute_ring[i]);
+
+	amdgpu_gfx_mqd_sw_fini(adev);
+	amdgpu_gfx_kiq_free_ring(&adev->gfx.kiq.ring, &adev->gfx.kiq.irq);
+	amdgpu_gfx_kiq_fini(adev);
+
+	gfx_v10_0_pfp_fini(adev);
+	gfx_v10_0_ce_fini(adev);
+	gfx_v10_0_me_fini(adev);
+	gfx_v10_0_rlc_fini(adev);
+	gfx_v10_0_mec_fini(adev);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO)
+		gfx_v10_0_rlc_backdoor_autoload_buffer_fini(adev);
+
+	gfx_v10_0_free_microcode(adev);
+
+	return 0;
+}
+
+static void gfx_v10_0_tiling_mode_table_init(struct amdgpu_device *adev)
+{
+	/* TODO */
+}
+
+static void gfx_v10_0_select_se_sh(struct amdgpu_device *adev, u32 se_num,
+				   u32 sh_num, u32 instance)
+{
+	u32 data;
+
+	if (instance == 0xffffffff)
+		data = REG_SET_FIELD(0, GRBM_GFX_INDEX,
+				     INSTANCE_BROADCAST_WRITES, 1);
+	else
+		data = REG_SET_FIELD(0, GRBM_GFX_INDEX, INSTANCE_INDEX,
+				     instance);
+
+	if (se_num == 0xffffffff)
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_BROADCAST_WRITES,
+				     1);
+	else
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_INDEX, se_num);
+
+	if (sh_num == 0xffffffff)
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SA_BROADCAST_WRITES,
+				     1);
+	else
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SA_INDEX, sh_num);
+
+	WREG32_SOC15(GC, 0, mmGRBM_GFX_INDEX, data);
+}
+
+static u32 gfx_v10_0_get_rb_active_bitmap(struct amdgpu_device *adev)
+{
+	u32 data, mask;
+
+	data = RREG32_SOC15(GC, 0, mmCC_RB_BACKEND_DISABLE);
+	data |= RREG32_SOC15(GC, 0, mmGC_USER_RB_BACKEND_DISABLE);
+
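+	/* the CC_* (fused-off) and GC_USER_* (driver-disabled) fields share
+	 * the same bit layout, so a single mask/shift pair extracts the
+	 * combined backend disable bitmap */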
+	data &= CC_RB_BACKEND_DISABLE__BACKEND_DISABLE_MASK;
+	data >>= GC_USER_RB_BACKEND_DISABLE__BACKEND_DISABLE__SHIFT;
+
+	mask = amdgpu_gfx_create_bitmask(adev->gfx.config.max_backends_per_se /
+					 adev->gfx.config.max_sh_per_se);
+
+	return (~data) & mask;
+}
+
+static void gfx_v10_0_setup_rb(struct amdgpu_device *adev)
+{
+	int i, j;
+	u32 data;
+	u32 active_rbs = 0;
+	u32 rb_bitmap_width_per_sh = adev->gfx.config.max_backends_per_se /
+					adev->gfx.config.max_sh_per_se;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+			gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff);
+			data = gfx_v10_0_get_rb_active_bitmap(adev);
+			active_rbs |= data << ((i * adev->gfx.config.max_sh_per_se + j) *
+					       rb_bitmap_width_per_sh);
+		}
+	}
+	gfx_v10_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	adev->gfx.config.backend_enable_mask = active_rbs;
+	adev->gfx.config.num_rbs = hweight32(active_rbs);
+}
+
+static u32 gfx_v10_0_init_pa_sc_tile_steering_override(struct amdgpu_device *adev)
+{
+	uint32_t num_sc;
+	uint32_t enabled_rb_per_sh;
+	uint32_t active_rb_bitmap;
+	uint32_t num_rb_per_sc;
+	uint32_t num_packer_per_sc;
+	uint32_t pa_sc_tile_steering_override;
+
+	/* init num_sc */
+	num_sc = adev->gfx.config.max_shader_engines * adev->gfx.config.max_sh_per_se *
+			adev->gfx.config.num_sc_per_sh;
+	/* init num_rb_per_sc */
+	active_rb_bitmap = gfx_v10_0_get_rb_active_bitmap(adev);
+	enabled_rb_per_sh = hweight32(active_rb_bitmap);
+	num_rb_per_sc = enabled_rb_per_sh / adev->gfx.config.num_sc_per_sh;
+	/* init num_packer_per_sc */
+	num_packer_per_sc = adev->gfx.config.num_packer_per_sc;
+
+	pa_sc_tile_steering_override = 0;
+	pa_sc_tile_steering_override |=
+		(order_base_2(num_sc) << PA_SC_TILE_STEERING_OVERRIDE__NUM_SC__SHIFT) &
+		PA_SC_TILE_STEERING_OVERRIDE__NUM_SC_MASK;
+	pa_sc_tile_steering_override |=
+		(order_base_2(num_rb_per_sc) << PA_SC_TILE_STEERING_OVERRIDE__NUM_RB_PER_SC__SHIFT) &
+		PA_SC_TILE_STEERING_OVERRIDE__NUM_RB_PER_SC_MASK;
+	pa_sc_tile_steering_override |=
+		(order_base_2(num_packer_per_sc) << PA_SC_TILE_STEERING_OVERRIDE__NUM_PACKER_PER_SC__SHIFT) &
+		PA_SC_TILE_STEERING_OVERRIDE__NUM_PACKER_PER_SC_MASK;
+
+	return pa_sc_tile_steering_override;
+}
+
+#define DEFAULT_SH_MEM_BASES	(0x6000)
+#define FIRST_COMPUTE_VMID	(8)
+#define LAST_COMPUTE_VMID	(16)
+
+static void gfx_v10_0_init_compute_vmid(struct amdgpu_device *adev)
+{
+	int i;
+	uint32_t sh_mem_config;
+	uint32_t sh_mem_bases;
+
+	/*
+	 * Configure apertures:
+	 * LDS:         0x60000000'00000000 - 0x60000001'00000000 (4GB)
+	 * Scratch:     0x60000001'00000000 - 0x60000002'00000000 (4GB)
+	 * GPUVM:       0x60010000'00000000 - 0x60020000'00000000 (1TB)
+	 */
+	sh_mem_bases = DEFAULT_SH_MEM_BASES | (DEFAULT_SH_MEM_BASES << 16);
+
+	sh_mem_config = SH_MEM_ADDRESS_MODE_64 |
+			SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+			SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT;
+
+	mutex_lock(&adev->srbm_mutex);
+	for (i = FIRST_COMPUTE_VMID; i < LAST_COMPUTE_VMID; i++) {
+		nv_grbm_select(adev, 0, 0, 0, i);
+		/* CP and shaders */
+		WREG32_SOC15(GC, 0, mmSH_MEM_CONFIG, sh_mem_config);
+		WREG32_SOC15(GC, 0, mmSH_MEM_BASES, sh_mem_bases);
+	}
+	nv_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+}
+
+static void gfx_v10_0_tcp_harvest(struct amdgpu_device *adev)
+{
+	int i, j, k;
+	int max_wgp_per_sh = adev->gfx.config.max_cu_per_sh >> 1;
+	u32 tmp, wgp_active_bitmap = 0;
+	u32 gcrd_targets_disable_tcp = 0;
+	u32 utcl_invreq_disable = 0;
+	/*
+	 * GCRD_TARGETS_DISABLE field contains
+	 * for Navi10: GL1C=[18:15], SQC=[14:10], TCP=[9:0]
+	 */
+	u32 gcrd_targets_disable_mask = amdgpu_gfx_create_bitmask(
+			2 * max_wgp_per_sh + /* TCP */
+			max_wgp_per_sh + /* SQC */
+			4); /* GL1C */
+	/*
+	 * UTCL1_UTCL0_INVREQ_DISABLE field contains
+	 * for Navi10: SQG=[24], RMI=[23:20], SQC=[19:10], TCP=[9:0]
+	 */
+	u32 utcl_invreq_disable_mask = amdgpu_gfx_create_bitmask(
+			2 * max_wgp_per_sh + /* TCP */
+			2 * max_wgp_per_sh + /* SQC */
+			4 + /* RMI */
+			1); /* SQG */
+
+	if (adev->asic_type == CHIP_NAVI10) {
+		mutex_lock(&adev->grbm_idx_mutex);
+		for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+			for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+				gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff);
+				wgp_active_bitmap = gfx_v10_0_get_wgp_active_bitmap_per_sh(adev);
+				/*
+				 * Set corresponding TCP bits for the inactive WGPs in
+				 * GCRD_SA_TARGETS_DISABLE
+				 */
+				gcrd_targets_disable_tcp = 0;
+				/* Set TCP & SQC bits in UTCL1_UTCL0_INVREQ_DISABLE */
+				utcl_invreq_disable = 0;
+
+				for (k = 0; k < max_wgp_per_sh; k++) {
+					if (!(wgp_active_bitmap & (1 << k))) {
+						gcrd_targets_disable_tcp |= 3 << (2 * k);
+						utcl_invreq_disable |= (3 << (2 * k)) |
+							(3 << (2 * (max_wgp_per_sh + k)));
+					}
+				}
+
+				gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff);
+				tmp = RREG32_SOC15(GC, 0, mmUTCL1_UTCL0_INVREQ_DISABLE);
+				/* only override TCP & SQC bits */
+				tmp &= 0xffffffff << (4 * max_wgp_per_sh);
+				tmp |= (utcl_invreq_disable & utcl_invreq_disable_mask);
+				WREG32_SOC15(GC, 0, mmUTCL1_UTCL0_INVREQ_DISABLE, tmp);
+
+				tmp = RREG32_SOC15(GC, 0, mmGCRD_SA_TARGETS_DISABLE);
+				/* only override TCP bits */
+				tmp &= 0xffffffff << (2 * max_wgp_per_sh);
+				tmp |= (gcrd_targets_disable_tcp & gcrd_targets_disable_mask);
+				WREG32_SOC15(GC, 0, mmGCRD_SA_TARGETS_DISABLE, tmp);
+			}
+		}
+
+		gfx_v10_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+		mutex_unlock(&adev->grbm_idx_mutex);
+	}
+}
+
+static void gfx_v10_0_constants_init(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	int i;
+
+	WREG32_FIELD15(GC, 0, GRBM_CNTL, READ_TIMEOUT, 0xff);
+
+	gfx_v10_0_tiling_mode_table_init(adev);
+
+	gfx_v10_0_setup_rb(adev);
+	gfx_v10_0_get_cu_info(adev, &adev->gfx.cu_info);
+	adev->gfx.config.pa_sc_tile_steering_override =
+		gfx_v10_0_init_pa_sc_tile_steering_override(adev);
+
+	/* XXX SH_MEM regs */
+	/* where to put LDS, scratch, GPUVM in FSA64 space */
+	mutex_lock(&adev->srbm_mutex);
+	for (i = 0; i < adev->vm_manager.id_mgr[AMDGPU_GFXHUB].num_ids; i++) {
+		nv_grbm_select(adev, 0, 0, 0, i);
+		/* CP and shaders */
+		if (i == 0) {
+			tmp = REG_SET_FIELD(0, SH_MEM_CONFIG, ALIGNMENT_MODE,
+					    SH_MEM_ALIGNMENT_MODE_UNALIGNED);
+			tmp = REG_SET_FIELD(tmp, SH_MEM_CONFIG, RETRY_MODE, 0);
+			WREG32_SOC15(GC, 0, mmSH_MEM_CONFIG, tmp);
+			WREG32_SOC15(GC, 0, mmSH_MEM_BASES, 0);
+		} else {
+			tmp = REG_SET_FIELD(0, SH_MEM_CONFIG, ALIGNMENT_MODE,
+					    SH_MEM_ALIGNMENT_MODE_UNALIGNED);
+			tmp = REG_SET_FIELD(tmp, SH_MEM_CONFIG, RETRY_MODE, 0);
+			WREG32_SOC15(GC, 0, mmSH_MEM_CONFIG, tmp);
+			tmp = REG_SET_FIELD(0, SH_MEM_BASES, PRIVATE_BASE,
+				(adev->gmc.private_aperture_start >> 48));
+			tmp = REG_SET_FIELD(tmp, SH_MEM_BASES, SHARED_BASE,
+				(adev->gmc.shared_aperture_start >> 48));
+			WREG32_SOC15(GC, 0, mmSH_MEM_BASES, tmp);
+		}
+	}
+	nv_grbm_select(adev, 0, 0, 0, 0);
+
+	mutex_unlock(&adev->srbm_mutex);
+
+	gfx_v10_0_init_compute_vmid(adev);
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	/*
+	 * make sure that the following register writes are broadcast
+	 * to all the shaders
+	 */
+	gfx_v10_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+
+	WREG32_SOC15(GC, 0, mmPA_SC_FIFO_SIZE,
+		   (adev->gfx.config.sc_prim_fifo_size_frontend <<
+			PA_SC_FIFO_SIZE__SC_FRONTEND_PRIM_FIFO_SIZE__SHIFT) |
+		   (adev->gfx.config.sc_prim_fifo_size_backend <<
+			PA_SC_FIFO_SIZE__SC_BACKEND_PRIM_FIFO_SIZE__SHIFT) |
+		   (adev->gfx.config.sc_hiz_tile_fifo_size <<
+			PA_SC_FIFO_SIZE__SC_HIZ_TILE_FIFO_SIZE__SHIFT) |
+		   (adev->gfx.config.sc_earlyz_tile_fifo_size <<
+			PA_SC_FIFO_SIZE__SC_EARLYZ_TILE_FIFO_SIZE__SHIFT));
+	mutex_unlock(&adev->grbm_idx_mutex);
+}
+
+static void gfx_v10_0_enable_gui_idle_interrupt(struct amdgpu_device *adev,
+					       bool enable)
+{
+	u32 tmp = RREG32_SOC15(GC, 0, mmCP_INT_CNTL_RING0);
+
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE,
+			    enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE,
+			    enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE,
+			    enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE,
+			    enable ? 1 : 0);
+
+	WREG32_SOC15(GC, 0, mmCP_INT_CNTL_RING0, tmp);
+}
+
+static void gfx_v10_0_init_csb(struct amdgpu_device *adev)
+{
+	/* csib */
+	WREG32_SOC15(GC, 0, mmRLC_CSIB_ADDR_HI,
+		     adev->gfx.rlc.clear_state_gpu_addr >> 32);
+	WREG32_SOC15(GC, 0, mmRLC_CSIB_ADDR_LO,
+		     adev->gfx.rlc.clear_state_gpu_addr & 0xfffffffc);
+	WREG32_SOC15(GC, 0, mmRLC_CSIB_LENGTH, adev->gfx.rlc.clear_state_size);
+}
+
+static void gfx_v10_0_init_pg(struct amdgpu_device *adev)
+{
+	gfx_v10_0_init_csb(adev);
+
+	amdgpu_gmc_flush_gpu_tlb(adev, 0, 0);
+
+	/* TODO: init power gating */
+}
+
+void gfx_v10_0_rlc_stop(struct amdgpu_device *adev)
+{
+	u32 tmp = RREG32_SOC15(GC, 0, mmRLC_CNTL);
+
+	tmp = REG_SET_FIELD(tmp, RLC_CNTL, RLC_ENABLE_F32, 0);
+	WREG32_SOC15(GC, 0, mmRLC_CNTL, tmp);
+
+	gfx_v10_0_enable_gui_idle_interrupt(adev, false);
+}
+
+static void gfx_v10_0_rlc_reset(struct amdgpu_device *adev)
+{
+	WREG32_FIELD15(GC, 0, GRBM_SOFT_RESET, SOFT_RESET_RLC, 1);
+	udelay(50);
+	WREG32_FIELD15(GC, 0, GRBM_SOFT_RESET, SOFT_RESET_RLC, 0);
+	udelay(50);
+}
+
+static void gfx_v10_0_rlc_smu_handshake_cntl(struct amdgpu_device *adev,
+					     bool enable)
+{
+	uint32_t rlc_pg_cntl;
+
+	rlc_pg_cntl = RREG32_SOC15(GC, 0, mmRLC_PG_CNTL);
+
+	if (!enable) {
+		/* RLC_PG_CNTL[23] = 0 (default)
+		 * RLC will wait for handshake acks with SMU
+		 * GFXOFF will be enabled
+		 * RLC_PG_CNTL[23] = 1
+		 * RLC will not issue any message to SMU
+		 * hence no handshake between SMU & RLC
+		 * GFXOFF will be disabled
+		 */
+		rlc_pg_cntl |= 0x800000;
+	} else {
+		rlc_pg_cntl &= ~0x800000;
+	}
+	WREG32_SOC15(GC, 0, mmRLC_PG_CNTL, rlc_pg_cntl);
+}
+
+static void gfx_v10_0_rlc_start(struct amdgpu_device *adev)
+{
+	/* TODO: enable rlc & smu handshake once smu
+	 * and the gfxoff feature work as expected */
+	if (!(amdgpu_pp_feature_mask & PP_GFXOFF_MASK))
+		gfx_v10_0_rlc_smu_handshake_cntl(adev, false);
+
+	WREG32_FIELD15(GC, 0, RLC_CNTL, RLC_ENABLE_F32, 1);
+	udelay(50);
+}
+
+static void gfx_v10_0_rlc_enable_srm(struct amdgpu_device *adev)
+{
+	uint32_t tmp;
+
+	/* enable Save Restore Machine */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SRM_CNTL));
+	tmp |= RLC_SRM_CNTL__AUTO_INCR_ADDR_MASK;
+	tmp |= RLC_SRM_CNTL__SRM_ENABLE_MASK;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SRM_CNTL), tmp);
+}
+
+static int gfx_v10_0_rlc_load_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_0 *hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+
+	if (!adev->gfx.rlc_fw)
+		return -EINVAL;
+
+	hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+	amdgpu_ucode_print_rlc_hdr(&hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			   le32_to_cpu(hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(hdr->header.ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, mmRLC_GPM_UCODE_ADDR,
+		     RLCG_UCODE_LOADING_START_ADDRESS);
+
+	for (i = 0; i < fw_size; i++)
+		WREG32_SOC15(GC, 0, mmRLC_GPM_UCODE_DATA,
+			     le32_to_cpup(fw_data++));
+
+	WREG32_SOC15(GC, 0, mmRLC_GPM_UCODE_ADDR, adev->gfx.rlc_fw_version);
+
+	return 0;
+}
+
+static int gfx_v10_0_rlc_resume(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		r = gfx_v10_0_wait_for_rlc_autoload_complete(adev);
+		if (r)
+			return r;
+		gfx_v10_0_init_pg(adev);
+
+		/* enable RLC SRM */
+		gfx_v10_0_rlc_enable_srm(adev);
+
+	} else {
+		adev->gfx.rlc.funcs->stop(adev);
+
+		/* disable CG */
+		WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL, 0);
+
+		/* disable PG */
+		WREG32_SOC15(GC, 0, mmRLC_PG_CNTL, 0);
+
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+			/* legacy rlc firmware loading */
+			r = gfx_v10_0_rlc_load_microcode(adev);
+			if (r)
+				return r;
+		} else if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) {
+			/* rlc backdoor autoload firmware */
+			r = gfx_v10_0_rlc_backdoor_autoload_enable(adev);
+			if (r)
+				return r;
+		}
+
+		gfx_v10_0_init_pg(adev);
+		adev->gfx.rlc.funcs->start(adev);
+
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) {
+			r = gfx_v10_0_wait_for_rlc_autoload_complete(adev);
+			if (r)
+				return r;
+		}
+	}
+	return 0;
+}
+
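+/*
+ * Per-firmware-ID offset/size of each ucode image inside the RLC autoload
+ * buffer, filled in from the TOC carried in the PSP sos firmware by
+ * gfx_v10_0_parse_rlc_toc().
+ */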
+static struct {
+	FIRMWARE_ID	id;
+	unsigned int	offset;
+	unsigned int	size;
+} rlc_autoload_info[FIRMWARE_ID_MAX];
+
+static int gfx_v10_0_parse_rlc_toc(struct amdgpu_device *adev)
+{
+	int ret;
+	RLC_TABLE_OF_CONTENT *rlc_toc;
+
+	ret = amdgpu_bo_create_reserved(adev, adev->psp.toc_bin_size, PAGE_SIZE,
+					AMDGPU_GEM_DOMAIN_GTT,
+					&adev->gfx.rlc.rlc_toc_bo,
+					&adev->gfx.rlc.rlc_toc_gpu_addr,
+					(void **)&adev->gfx.rlc.rlc_toc_buf);
+	if (ret) {
+		dev_err(adev->dev, "(%d) failed to create rlc toc bo\n", ret);
+		return ret;
+	}
+
+	/* Copy toc from psp sos fw to rlc toc buffer */
+	memcpy(adev->gfx.rlc.rlc_toc_buf, adev->psp.toc_start_addr, adev->psp.toc_bin_size);
+
+	rlc_toc = (RLC_TABLE_OF_CONTENT *)adev->gfx.rlc.rlc_toc_buf;
+	while (rlc_toc && (rlc_toc->id > FIRMWARE_ID_INVALID) &&
+		(rlc_toc->id < FIRMWARE_ID_MAX)) {
+		if ((rlc_toc->id >= FIRMWARE_ID_CP_CE) &&
+		    (rlc_toc->id <= FIRMWARE_ID_CP_MES)) {
+			/* Offset needs 4KB alignment */
+			rlc_toc->offset = ALIGN(rlc_toc->offset * 4, PAGE_SIZE);
+		}
+
+		rlc_autoload_info[rlc_toc->id].id = rlc_toc->id;
+		rlc_autoload_info[rlc_toc->id].offset = rlc_toc->offset * 4;
+		rlc_autoload_info[rlc_toc->id].size = rlc_toc->size * 4;
+
+		rlc_toc++;
+	}
+
+	return 0;
+}
+
+static uint32_t gfx_v10_0_calc_toc_total_size(struct amdgpu_device *adev)
+{
+	uint32_t total_size = 0;
+	FIRMWARE_ID id;
+	int ret;
+
+	ret = gfx_v10_0_parse_rlc_toc(adev);
+	if (ret) {
+		dev_err(adev->dev, "failed to parse rlc toc\n");
+		return 0;
+	}
+
+	for (id = FIRMWARE_ID_RLC_G_UCODE; id < FIRMWARE_ID_MAX; id++)
+		total_size += rlc_autoload_info[id].size;
+
+	/* In case the offset in rlc toc ucode is aligned */
+	if (total_size < rlc_autoload_info[FIRMWARE_ID_MAX - 1].offset)
+		total_size = rlc_autoload_info[FIRMWARE_ID_MAX - 1].offset +
+				rlc_autoload_info[FIRMWARE_ID_MAX - 1].size;
+
+	return total_size;
+}
+
+static int gfx_v10_0_rlc_backdoor_autoload_buffer_init(struct amdgpu_device *adev)
+{
+	int r;
+	uint32_t total_size;
+
+	total_size = gfx_v10_0_calc_toc_total_size(adev);
+
+	r = amdgpu_bo_create_reserved(adev, total_size, PAGE_SIZE,
+				      AMDGPU_GEM_DOMAIN_GTT,
+				      &adev->gfx.rlc.rlc_autoload_bo,
+				      &adev->gfx.rlc.rlc_autoload_gpu_addr,
+				      (void **)&adev->gfx.rlc.rlc_autoload_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create fw autoload bo\n", r);
+		return r;
+	}
+
+	return 0;
+}
+
+static void gfx_v10_0_rlc_backdoor_autoload_buffer_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.rlc.rlc_toc_bo,
+			      &adev->gfx.rlc.rlc_toc_gpu_addr,
+			      (void **)&adev->gfx.rlc.rlc_toc_buf);
+	amdgpu_bo_free_kernel(&adev->gfx.rlc.rlc_autoload_bo,
+			      &adev->gfx.rlc.rlc_autoload_gpu_addr,
+			      (void **)&adev->gfx.rlc.rlc_autoload_ptr);
+}
+
+static void gfx_v10_0_rlc_backdoor_autoload_copy_ucode(struct amdgpu_device *adev,
+						       FIRMWARE_ID id,
+						       const void *fw_data,
+						       uint32_t fw_size)
+{
+	uint32_t toc_offset;
+	uint32_t toc_fw_size;
+	char *ptr = adev->gfx.rlc.rlc_autoload_ptr;
+
+	if (id <= FIRMWARE_ID_INVALID || id >= FIRMWARE_ID_MAX)
+		return;
+
+	toc_offset = rlc_autoload_info[id].offset;
+	toc_fw_size = rlc_autoload_info[id].size;
+
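+	/* clamp the copy to the slot the TOC reserved for this firmware and
+	 * zero-fill any remainder so no stale data is handed to the RLC */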
+	if (fw_size == 0)
+		fw_size = toc_fw_size;
+
+	if (fw_size > toc_fw_size)
+		fw_size = toc_fw_size;
+
+	memcpy(ptr + toc_offset, fw_data, fw_size);
+
+	if (fw_size < toc_fw_size)
+		memset(ptr + toc_offset + fw_size, 0, toc_fw_size - fw_size);
+}
+
+static void gfx_v10_0_rlc_backdoor_autoload_copy_toc_ucode(struct amdgpu_device *adev)
+{
+	void *data;
+	uint32_t size;
+
+	data = adev->gfx.rlc.rlc_toc_buf;
+	size = rlc_autoload_info[FIRMWARE_ID_RLC_TOC].size;
+
+	gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+						   FIRMWARE_ID_RLC_TOC,
+						   data, size);
+}
+
+static void gfx_v10_0_rlc_backdoor_autoload_copy_gfx_ucode(struct amdgpu_device *adev)
+{
+	const __le32 *fw_data;
+	uint32_t fw_size;
+	const struct gfx_firmware_header_v1_0 *cp_hdr;
+	const struct rlc_firmware_header_v2_0 *rlc_hdr;
+
+	/* pfp ucode */
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.pfp_fw->data;
+	fw_data = (const __le32 *)(adev->gfx.pfp_fw->data +
+		le32_to_cpu(cp_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(cp_hdr->header.ucode_size_bytes);
+	gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+						   FIRMWARE_ID_CP_PFP,
+						   fw_data, fw_size);
+
+	/* ce ucode */
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.ce_fw->data;
+	fw_data = (const __le32 *)(adev->gfx.ce_fw->data +
+		le32_to_cpu(cp_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(cp_hdr->header.ucode_size_bytes);
+	gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+						   FIRMWARE_ID_CP_CE,
+						   fw_data, fw_size);
+
+	/* me ucode */
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.me_fw->data;
+	fw_data = (const __le32 *)(adev->gfx.me_fw->data +
+		le32_to_cpu(cp_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(cp_hdr->header.ucode_size_bytes);
+	gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+						   FIRMWARE_ID_CP_ME,
+						   fw_data, fw_size);
+
+	/* rlc ucode */
+	rlc_hdr = (const struct rlc_firmware_header_v2_0 *)
+		adev->gfx.rlc_fw->data;
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+		le32_to_cpu(rlc_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(rlc_hdr->header.ucode_size_bytes);
+	gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+						   FIRMWARE_ID_RLC_G_UCODE,
+						   fw_data, fw_size);
+
+	/* mec1 ucode */
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.mec_fw->data;
+	fw_data = (const __le32 *) (adev->gfx.mec_fw->data +
+		le32_to_cpu(cp_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(cp_hdr->header.ucode_size_bytes) -
+		cp_hdr->jt_size * 4;
+	gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+						   FIRMWARE_ID_CP_MEC,
+						   fw_data, fw_size);
+	/* mec2 ucode is not necessary if mec2 ucode is same as mec1 */
+}
+
+/* Temporarily put sdma part here */
+static void gfx_v10_0_rlc_backdoor_autoload_copy_sdma_ucode(struct amdgpu_device *adev)
+{
+	const __le32 *fw_data;
+	uint32_t fw_size;
+	const struct sdma_firmware_header_v1_0 *sdma_hdr;
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		sdma_hdr = (const struct sdma_firmware_header_v1_0 *)
+			adev->sdma.instance[i].fw->data;
+		fw_data = (const __le32 *) (adev->sdma.instance[i].fw->data +
+			le32_to_cpu(sdma_hdr->header.ucode_array_offset_bytes));
+		fw_size = le32_to_cpu(sdma_hdr->header.ucode_size_bytes);
+
+		if (i == 0) {
+			gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+				FIRMWARE_ID_SDMA0_UCODE, fw_data, fw_size);
+			gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+				FIRMWARE_ID_SDMA0_JT,
+				(uint32_t *)fw_data +
+				sdma_hdr->jt_offset,
+				sdma_hdr->jt_size * 4);
+		} else if (i == 1) {
+			gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+				FIRMWARE_ID_SDMA1_UCODE, fw_data, fw_size);
+			gfx_v10_0_rlc_backdoor_autoload_copy_ucode(adev,
+				FIRMWARE_ID_SDMA1_JT,
+				(uint32_t *)fw_data +
+				sdma_hdr->jt_offset,
+				sdma_hdr->jt_size * 4);
+		}
+	}
+}
+
+static int gfx_v10_0_rlc_backdoor_autoload_enable(struct amdgpu_device *adev)
+{
+	uint32_t rlc_g_offset, rlc_g_size, tmp;
+	uint64_t gpu_addr;
+
+	gfx_v10_0_rlc_backdoor_autoload_copy_toc_ucode(adev);
+	gfx_v10_0_rlc_backdoor_autoload_copy_sdma_ucode(adev);
+	gfx_v10_0_rlc_backdoor_autoload_copy_gfx_ucode(adev);
+
+	rlc_g_offset = rlc_autoload_info[FIRMWARE_ID_RLC_G_UCODE].offset;
+	rlc_g_size = rlc_autoload_info[FIRMWARE_ID_RLC_G_UCODE].size;
+	gpu_addr = adev->gfx.rlc.rlc_autoload_gpu_addr + rlc_g_offset;
+
+	WREG32_SOC15(GC, 0, mmRLC_HYP_BOOTLOAD_ADDR_HI, upper_32_bits(gpu_addr));
+	WREG32_SOC15(GC, 0, mmRLC_HYP_BOOTLOAD_ADDR_LO, lower_32_bits(gpu_addr));
+	WREG32_SOC15(GC, 0, mmRLC_HYP_BOOTLOAD_SIZE, rlc_g_size);
+
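+	/* sanity check: the RLC boot ROM must report a pending cold-boot or
+	 * vddgfx exit and must have halted the RLC before we point it at the
+	 * autoload buffer */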
+	tmp = RREG32_SOC15(GC, 0, mmRLC_HYP_RESET_VECTOR);
+	if (!(tmp & (RLC_HYP_RESET_VECTOR__COLD_BOOT_EXIT_MASK |
+		   RLC_HYP_RESET_VECTOR__VDDGFX_EXIT_MASK))) {
+		DRM_ERROR("Neither COLD_BOOT_EXIT nor VDDGFX_EXIT is set\n");
+		return -EINVAL;
+	}
+
+	tmp = RREG32_SOC15(GC, 0, mmRLC_CNTL);
+	if (tmp & RLC_CNTL__RLC_ENABLE_F32_MASK) {
+		DRM_ERROR("RLC ROM should halt itself\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int gfx_v10_0_rlc_backdoor_autoload_config_me_cache(struct amdgpu_device *adev)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	int i;
+	uint64_t addr;
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_ME_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_ME_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_ME_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_ME_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Program me ucode address into instruction cache address register */
+	addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+		rlc_autoload_info[FIRMWARE_ID_CP_ME].offset;
+	WREG32_SOC15(GC, 0, mmCP_ME_IC_BASE_LO,
+			lower_32_bits(addr) & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_ME_IC_BASE_HI,
+			upper_32_bits(addr));
+
+	return 0;
+}
+
+static int gfx_v10_0_rlc_backdoor_autoload_config_ce_cache(struct amdgpu_device *adev)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	int i;
+	uint64_t addr;
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_CE_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CE_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_CE_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_CE_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_CE_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Program ce ucode address into instruction cache address register */
+	addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+		rlc_autoload_info[FIRMWARE_ID_CP_CE].offset;
+	WREG32_SOC15(GC, 0, mmCP_CE_IC_BASE_LO,
+			lower_32_bits(addr) & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_CE_IC_BASE_HI,
+			upper_32_bits(addr));
+
+	return 0;
+}
+
+static int gfx_v10_0_rlc_backdoor_autoload_config_pfp_cache(struct amdgpu_device *adev)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	int i;
+	uint64_t addr;
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_PFP_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_PFP_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_PFP_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_PFP_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Program pfp ucode address into instruction cache address register */
+	addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+		rlc_autoload_info[FIRMWARE_ID_CP_PFP].offset;
+	WREG32_SOC15(GC, 0, mmCP_PFP_IC_BASE_LO,
+			lower_32_bits(addr) & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_PFP_IC_BASE_HI,
+			upper_32_bits(addr));
+
+	return 0;
+}
+
+static int gfx_v10_0_rlc_backdoor_autoload_config_mec_cache(struct amdgpu_device *adev)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	int i;
+	uint64_t addr;
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_CPC_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_CPC_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_CPC_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_CPC_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Program mec1 ucode address into instruction cache address register */
+	addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+		rlc_autoload_info[FIRMWARE_ID_CP_MEC].offset;
+	WREG32_SOC15(GC, 0, mmCP_CPC_IC_BASE_LO,
+			lower_32_bits(addr) & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_CPC_IC_BASE_HI,
+			upper_32_bits(addr));
+
+	return 0;
+}
+
+static int gfx_v10_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev)
+{
+	uint32_t cp_status;
+	uint32_t bootload_status;
+	int i, r;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		cp_status = RREG32_SOC15(GC, 0, mmCP_STAT);
+		bootload_status = RREG32_SOC15(GC, 0, mmRLC_RLCS_BOOTLOAD_STATUS);
+		if ((cp_status == 0) &&
+		    (REG_GET_FIELD(bootload_status,
+			RLC_RLCS_BOOTLOAD_STATUS, BOOTLOAD_COMPLETE) == 1)) {
+			break;
+		}
+		udelay(1);
+	}
+
+	if (i >= adev->usec_timeout) {
+		dev_err(adev->dev, "rlc autoload: gc ucode autoload timeout\n");
+		return -ETIMEDOUT;
+	}
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) {
+		r = gfx_v10_0_rlc_backdoor_autoload_config_me_cache(adev);
+		if (r)
+			return r;
+
+		r = gfx_v10_0_rlc_backdoor_autoload_config_ce_cache(adev);
+		if (r)
+			return r;
+
+		r = gfx_v10_0_rlc_backdoor_autoload_config_pfp_cache(adev);
+		if (r)
+			return r;
+
+		r = gfx_v10_0_rlc_backdoor_autoload_config_mec_cache(adev);
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static void gfx_v10_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable)
+{
+	int i;
+	u32 tmp = RREG32_SOC15(GC, 0, mmCP_ME_CNTL);
+
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_HALT, enable ? 0 : 1);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, enable ? 0 : 1);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, enable ? 0 : 1);
+	if (!enable) {
+		for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+			adev->gfx.gfx_ring[i].sched.ready = false;
+	}
+	WREG32_SOC15(GC, 0, mmCP_ME_CNTL, tmp);
+	udelay(50);
+}
+
+static int gfx_v10_0_cp_gfx_load_pfp_microcode(struct amdgpu_device *adev)
+{
+	int r;
+	const struct gfx_firmware_header_v1_0 *pfp_hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+	uint32_t tmp;
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+
+	pfp_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.pfp_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&pfp_hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.pfp_fw->data +
+		le32_to_cpu(pfp_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(pfp_hdr->header.ucode_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, pfp_hdr->header.ucode_size_bytes,
+				      PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+				      &adev->gfx.pfp.pfp_fw_obj,
+				      &adev->gfx.pfp.pfp_fw_gpu_addr,
+				      (void **)&adev->gfx.pfp.pfp_fw_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create pfp fw bo\n", r);
+		gfx_v10_0_pfp_fini(adev);
+		return r;
+	}
+
+	memcpy(adev->gfx.pfp.pfp_fw_ptr, fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->gfx.pfp.pfp_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.pfp.pfp_fw_obj);
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_PFP_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_PFP_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_PFP_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_PFP_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	if (amdgpu_emu_mode == 1)
+		adev->nbio_funcs->hdp_flush(adev, NULL);
+
+	tmp = RREG32_SOC15(GC, 0, mmCP_PFP_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, ADDRESS_CLAMP, 1);
+	WREG32_SOC15(GC, 0, mmCP_PFP_IC_BASE_CNTL, tmp);
+	WREG32_SOC15(GC, 0, mmCP_PFP_IC_BASE_LO,
+		adev->gfx.pfp.pfp_fw_gpu_addr & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_PFP_IC_BASE_HI,
+		upper_32_bits(adev->gfx.pfp.pfp_fw_gpu_addr));
+
+	return 0;
+}
+
+static int gfx_v10_0_cp_gfx_load_ce_microcode(struct amdgpu_device *adev)
+{
+	int r;
+	const struct gfx_firmware_header_v1_0 *ce_hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+	uint32_t tmp;
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+
+	ce_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.ce_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&ce_hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.ce_fw->data +
+		le32_to_cpu(ce_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(ce_hdr->header.ucode_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, ce_hdr->header.ucode_size_bytes,
+				      PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+				      &adev->gfx.ce.ce_fw_obj,
+				      &adev->gfx.ce.ce_fw_gpu_addr,
+				      (void **)&adev->gfx.ce.ce_fw_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create ce fw bo\n", r);
+		gfx_v10_0_ce_fini(adev);
+		return r;
+	}
+
+	memcpy(adev->gfx.ce.ce_fw_ptr, fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->gfx.ce.ce_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.ce.ce_fw_obj);
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_CE_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CE_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_CE_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_CE_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_CE_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	if (amdgpu_emu_mode == 1)
+		adev->nbio_funcs->hdp_flush(adev, NULL);
+
+	tmp = RREG32_SOC15(GC, 0, mmCP_CE_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CE_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CE_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CE_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CE_IC_BASE_CNTL, ADDRESS_CLAMP, 1);
+	WREG32_SOC15(GC, 0, mmCP_CE_IC_BASE_CNTL, tmp);
+	WREG32_SOC15(GC, 0, mmCP_CE_IC_BASE_LO,
+		adev->gfx.ce.ce_fw_gpu_addr & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_CE_IC_BASE_HI,
+		upper_32_bits(adev->gfx.ce.ce_fw_gpu_addr));
+
+	return 0;
+}
+
+static int gfx_v10_0_cp_gfx_load_me_microcode(struct amdgpu_device *adev)
+{
+	int r;
+	const struct gfx_firmware_header_v1_0 *me_hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+	uint32_t tmp;
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+
+	me_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.me_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&me_hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.me_fw->data +
+		le32_to_cpu(me_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(me_hdr->header.ucode_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, me_hdr->header.ucode_size_bytes,
+				      PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+				      &adev->gfx.me.me_fw_obj,
+				      &adev->gfx.me.me_fw_gpu_addr,
+				      (void **)&adev->gfx.me.me_fw_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create me fw bo\n", r);
+		gfx_v10_0_me_fini(adev);
+		return r;
+	}
+
+	memcpy(adev->gfx.me.me_fw_ptr, fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->gfx.me.me_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.me.me_fw_obj);
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_ME_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_ME_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_ME_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_ME_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	if (amdgpu_emu_mode == 1)
+		adev->nbio_funcs->hdp_flush(adev, NULL);
+
+	tmp = RREG32_SOC15(GC, 0, mmCP_ME_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, ADDRESS_CLAMP, 1);
+	WREG32_SOC15(GC, 0, mmCP_ME_IC_BASE_CNTL, tmp);
+	WREG32_SOC15(GC, 0, mmCP_ME_IC_BASE_LO,
+		adev->gfx.me.me_fw_gpu_addr & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_ME_IC_BASE_HI,
+		upper_32_bits(adev->gfx.me.me_fw_gpu_addr));
+
+	return 0;
+}
+
+static int gfx_v10_0_cp_gfx_load_microcode(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (!adev->gfx.me_fw || !adev->gfx.pfp_fw || !adev->gfx.ce_fw)
+		return -EINVAL;
+
+	gfx_v10_0_cp_gfx_enable(adev, false);
+
+	r = gfx_v10_0_cp_gfx_load_pfp_microcode(adev);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to load pfp fw\n", r);
+		return r;
+	}
+
+	r = gfx_v10_0_cp_gfx_load_ce_microcode(adev);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to load ce fw\n", r);
+		return r;
+	}
+
+	r = gfx_v10_0_cp_gfx_load_me_microcode(adev);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to load me fw\n", r);
+		return r;
+	}
+
+	return 0;
+}
+
+static int gfx_v10_0_cp_gfx_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+	int r, i;
+	int ctx_reg_offset;
+
+	/* init the CP */
+	WREG32_SOC15(GC, 0, mmCP_MAX_CONTEXT,
+		     adev->gfx.config.max_hw_contexts - 1);
+	WREG32_SOC15(GC, 0, mmCP_DEVICE_ID, 1);
+
+	gfx_v10_0_cp_gfx_enable(adev, true);
+
+	ring = &adev->gfx.gfx_ring[0];
+	r = amdgpu_ring_alloc(ring, gfx_v10_0_get_csb_size(adev) + 4);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+		return r;
+	}
+
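+	/* emit the clear-state preamble followed by the golden context
+	 * register values from gfx10_cs_data */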
+	amdgpu_ring_write(ring, PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	amdgpu_ring_write(ring, PACKET3_PREAMBLE_BEGIN_CLEAR_STATE);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	amdgpu_ring_write(ring, 0x80000000);
+	amdgpu_ring_write(ring, 0x80000000);
+
+	for (sect = gfx10_cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT) {
+				amdgpu_ring_write(ring,
+						  PACKET3(PACKET3_SET_CONTEXT_REG,
+							  ext->reg_count));
+				amdgpu_ring_write(ring, ext->reg_index -
+						  PACKET3_SET_CONTEXT_REG_START);
+				for (i = 0; i < ext->reg_count; i++)
+					amdgpu_ring_write(ring, ext->extent[i]);
+			}
+		}
+	}
+
+	ctx_reg_offset =
+		SOC15_REG_OFFSET(GC, 0, mmPA_SC_TILE_STEERING_OVERRIDE) - PACKET3_SET_CONTEXT_REG_START;
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SET_CONTEXT_REG, 1));
+	amdgpu_ring_write(ring, ctx_reg_offset);
+	amdgpu_ring_write(ring, adev->gfx.config.pa_sc_tile_steering_override);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	amdgpu_ring_write(ring, PACKET3_PREAMBLE_END_CLEAR_STATE);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CLEAR_STATE, 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SET_BASE, 2));
+	amdgpu_ring_write(ring, PACKET3_BASE_INDEX(CE_PARTITION_BASE));
+	amdgpu_ring_write(ring, 0x8000);
+	amdgpu_ring_write(ring, 0x8000);
+
+	amdgpu_ring_commit(ring);
+
+	/* submit cs packet to copy state 0 to next available state */
+	ring = &adev->gfx.gfx_ring[1];
+	r = amdgpu_ring_alloc(ring, 2);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+		return r;
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CLEAR_STATE, 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_commit(ring);
+
+	return 0;
+}
+
+static void gfx_v10_0_cp_gfx_switch_pipe(struct amdgpu_device *adev,
+					 CP_PIPE_ID pipe)
+{
+	u32 tmp;
+
+	tmp = RREG32_SOC15(GC, 0, mmGRBM_GFX_CNTL);
+	tmp = REG_SET_FIELD(tmp, GRBM_GFX_CNTL, PIPEID, pipe);
+
+	WREG32_SOC15(GC, 0, mmGRBM_GFX_CNTL, tmp);
+}
+
+static void gfx_v10_0_cp_gfx_set_doorbell(struct amdgpu_device *adev,
+					  struct amdgpu_ring *ring)
+{
+	u32 tmp;
+
+	tmp = RREG32_SOC15(GC, 0, mmCP_RB_DOORBELL_CONTROL);
+	if (ring->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
+	}
+	WREG32_SOC15(GC, 0, mmCP_RB_DOORBELL_CONTROL, tmp);
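+	/* program the CP doorbell aperture: lower bound at this ring's
+	 * doorbell, upper bound left at the register maximum */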
+	tmp = REG_SET_FIELD(0, CP_RB_DOORBELL_RANGE_LOWER,
+			    DOORBELL_RANGE_LOWER, ring->doorbell_index);
+	WREG32_SOC15(GC, 0, mmCP_RB_DOORBELL_RANGE_LOWER, tmp);
+
+	WREG32_SOC15(GC, 0, mmCP_RB_DOORBELL_RANGE_UPPER,
+		     CP_RB_DOORBELL_RANGE_UPPER__DOORBELL_RANGE_UPPER_MASK);
+}
+
+static int gfx_v10_0_cp_gfx_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	u32 tmp;
+	u32 rb_bufsz;
+	u64 rb_addr, rptr_addr, wptr_gpu_addr;
+	u32 i;
+
+	/* Set the write pointer delay */
+	WREG32_SOC15(GC, 0, mmCP_RB_WPTR_DELAY, 0);
+
+	/* set the RB to use vmid 0 */
+	WREG32_SOC15(GC, 0, mmCP_RB_VMID, 0);
+
+	/* Init gfx ring 0 for pipe 0 */
+	mutex_lock(&adev->srbm_mutex);
+	gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID0);
+	mutex_unlock(&adev->srbm_mutex);
+	/* Set ring buffer size */
+	ring = &adev->gfx.gfx_ring[0];
+	rb_bufsz = order_base_2(ring->ring_size / 8);
+	tmp = REG_SET_FIELD(0, CP_RB0_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, CP_RB0_CNTL, RB_BLKSZ, rb_bufsz - 2);
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_RB0_CNTL, BUF_SWAP, 1);
+#endif
+	WREG32_SOC15(GC, 0, mmCP_RB0_CNTL, tmp);
+
+	/* Initialize the ring buffer's write pointers */
+	ring->wptr = 0;
+	WREG32_SOC15(GC, 0, mmCP_RB0_WPTR, lower_32_bits(ring->wptr));
+	WREG32_SOC15(GC, 0, mmCP_RB0_WPTR_HI, upper_32_bits(ring->wptr));
+
+	/* set the wb address whether it's enabled or not */
+	rptr_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
+	WREG32_SOC15(GC, 0, mmCP_RB0_RPTR_ADDR, lower_32_bits(rptr_addr));
+	WREG32_SOC15(GC, 0, mmCP_RB0_RPTR_ADDR_HI, upper_32_bits(rptr_addr) &
+		     CP_RB_RPTR_ADDR_HI__RB_RPTR_ADDR_HI_MASK);
+
+	wptr_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_ADDR_LO,
+		     lower_32_bits(wptr_gpu_addr));
+	WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_ADDR_HI,
+		     upper_32_bits(wptr_gpu_addr));
+
+	mdelay(1);
+	WREG32_SOC15(GC, 0, mmCP_RB0_CNTL, tmp);
+
+	rb_addr = ring->gpu_addr >> 8;
+	WREG32_SOC15(GC, 0, mmCP_RB0_BASE, rb_addr);
+	WREG32_SOC15(GC, 0, mmCP_RB0_BASE_HI, upper_32_bits(rb_addr));
+
+	WREG32_SOC15(GC, 0, mmCP_RB_ACTIVE, 1);
+
+	gfx_v10_0_cp_gfx_set_doorbell(adev, ring);
+
+	/* Init gfx ring 1 for pipe 1 */
+	mutex_lock(&adev->srbm_mutex);
+	gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID1);
+	mutex_unlock(&adev->srbm_mutex);
+	ring = &adev->gfx.gfx_ring[1];
+	rb_bufsz = order_base_2(ring->ring_size / 8);
+	tmp = REG_SET_FIELD(0, CP_RB1_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, CP_RB1_CNTL, RB_BLKSZ, rb_bufsz - 2);
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_RB1_CNTL, BUF_SWAP, 1);
+#endif
+	WREG32_SOC15(GC, 0, mmCP_RB1_CNTL, tmp);
+	/* Initialize the ring buffer's write pointers */
+	ring->wptr = 0;
+	WREG32_SOC15(GC, 0, mmCP_RB1_WPTR, lower_32_bits(ring->wptr));
+	WREG32_SOC15(GC, 0, mmCP_RB1_WPTR_HI, upper_32_bits(ring->wptr));
+	/* Set the wb address whether it's enabled or not */
+	rptr_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
+	WREG32_SOC15(GC, 0, mmCP_RB1_RPTR_ADDR, lower_32_bits(rptr_addr));
+	WREG32_SOC15(GC, 0, mmCP_RB1_RPTR_ADDR_HI, upper_32_bits(rptr_addr) &
+		CP_RB1_RPTR_ADDR_HI__RB_RPTR_ADDR_HI_MASK);
+	wptr_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_ADDR_LO,
+		lower_32_bits(wptr_gpu_addr));
+	WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_ADDR_HI,
+		upper_32_bits(wptr_gpu_addr));
+
+	mdelay(1);
+	WREG32_SOC15(GC, 0, mmCP_RB1_CNTL, tmp);
+
+	rb_addr = ring->gpu_addr >> 8;
+	WREG32_SOC15(GC, 0, mmCP_RB1_BASE, rb_addr);
+	WREG32_SOC15(GC, 0, mmCP_RB1_BASE_HI, upper_32_bits(rb_addr));
+	WREG32_SOC15(GC, 0, mmCP_RB1_ACTIVE, 1);
+
+	gfx_v10_0_cp_gfx_set_doorbell(adev, ring);
+
+	/* Switch to pipe 0 */
+	mutex_lock(&adev->srbm_mutex);
+	gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	/* start the ring */
+	gfx_v10_0_cp_gfx_start(adev);
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+		ring->sched.ready = true;
+	}
+
+	return 0;
+}
+
+static void gfx_v10_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)
+{
+	int i;
+
+	if (enable) {
+		WREG32_SOC15(GC, 0, mmCP_MEC_CNTL, 0);
+	} else {
+		WREG32_SOC15(GC, 0, mmCP_MEC_CNTL,
+			     (CP_MEC_CNTL__MEC_ME1_HALT_MASK |
+			      CP_MEC_CNTL__MEC_ME2_HALT_MASK));
+		for (i = 0; i < adev->gfx.num_compute_rings; i++)
+			adev->gfx.compute_ring[i].sched.ready = false;
+		adev->gfx.kiq.ring.sched.ready = false;
+	}
+	udelay(50);
+}
+
+static int gfx_v10_0_cp_compute_load_microcode(struct amdgpu_device *adev)
+{
+	const struct gfx_firmware_header_v1_0 *mec_hdr;
+	const __le32 *fw_data;
+	unsigned i;
+	u32 tmp;
+	u32 usec_timeout = 50000; /* Wait for 50 ms */
+
+	if (!adev->gfx.mec_fw)
+		return -EINVAL;
+
+	gfx_v10_0_cp_compute_enable(adev, false);
+
+	mec_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+	amdgpu_ucode_print_gfx_hdr(&mec_hdr->header);
+
+	fw_data = (const __le32 *)
+		(adev->gfx.mec_fw->data +
+		 le32_to_cpu(mec_hdr->header.ucode_array_offset_bytes));
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, mmCP_CPC_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, mmCP_CPC_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_CPC_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_CPC_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	if (amdgpu_emu_mode == 1)
+		adev->nbio_funcs->hdp_flush(adev, NULL);
+
+	tmp = RREG32_SOC15(GC, 0, mmCP_CPC_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, ADDRESS_CLAMP, 1);
+	WREG32_SOC15(GC, 0, mmCP_CPC_IC_BASE_CNTL, tmp);
+
+	WREG32_SOC15(GC, 0, mmCP_CPC_IC_BASE_LO, adev->gfx.mec.mec_fw_gpu_addr &
+		     0xFFFFF000);
+	WREG32_SOC15(GC, 0, mmCP_CPC_IC_BASE_HI,
+		     upper_32_bits(adev->gfx.mec.mec_fw_gpu_addr));
+
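+	/* only the jump table is written through the UCODE_DATA register
+	 * below; the ucode body itself is fetched through the instruction
+	 * cache window programmed above */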
+	/* MEC1 */
+	WREG32_SOC15(GC, 0, mmCP_MEC_ME1_UCODE_ADDR, 0);
+
+	for (i = 0; i < mec_hdr->jt_size; i++)
+		WREG32_SOC15(GC, 0, mmCP_MEC_ME1_UCODE_DATA,
+			     le32_to_cpup(fw_data + mec_hdr->jt_offset + i));
+
+	WREG32_SOC15(GC, 0, mmCP_MEC_ME1_UCODE_ADDR, adev->gfx.mec_fw_version);
+
+	/*
+	 * TODO: Loading MEC2 firmware is only necessary if MEC2 should run
+	 * different microcode than MEC1.
+	 */
+
+	return 0;
+}
+
+static void gfx_v10_0_kiq_setting(struct amdgpu_ring *ring)
+{
+	uint32_t tmp;
+	struct amdgpu_device *adev = ring->adev;
+
+	/* tell RLC which is KIQ queue */
+	tmp = RREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS);
+	tmp &= 0xffffff00;
+	tmp |= (ring->me << 5) | (ring->pipe << 3) | (ring->queue);
+	WREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS, tmp);
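+	/* the second write sets bit 7, presumably the scheduler enable bit */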
+	tmp |= 0x80;
+	WREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS, tmp);
+}
+
+static int gfx_v10_0_gfx_mqd_init(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_gfx_mqd *mqd = ring->mqd_ptr;
+	uint64_t hqd_gpu_addr, wb_gpu_addr;
+	uint32_t tmp;
+	uint32_t rb_bufsz;
+
+	/* set up gfx hqd wptr */
+	mqd->cp_gfx_hqd_wptr = 0;
+	mqd->cp_gfx_hqd_wptr_hi = 0;
+
+	/* set the pointer to the MQD */
+	mqd->cp_mqd_base_addr = ring->mqd_gpu_addr & 0xfffffffc;
+	mqd->cp_mqd_base_addr_hi = upper_32_bits(ring->mqd_gpu_addr);
+
+	/* set up mqd control */
+	tmp = RREG32_SOC15(GC, 0, mmCP_GFX_MQD_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, PRIV_STATE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, CACHE_POLICY, 0);
+	mqd->cp_gfx_mqd_control = tmp;
+
+	/* set up gfx_hqd_vmid with 0x0 to indicate the ring buffer's vmid */
+	tmp = RREG32_SOC15(GC, 0, mmCP_GFX_HQD_VMID);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_VMID, VMID, 0);
+	mqd->cp_gfx_hqd_vmid = 0;
+
+	/* set up default queue priority level
+	 * 0x0 = low priority, 0x1 = high priority */
+	tmp = RREG32_SOC15(GC, 0, mmCP_GFX_HQD_QUEUE_PRIORITY);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUEUE_PRIORITY, PRIORITY_LEVEL, 0);
+	mqd->cp_gfx_hqd_queue_priority = tmp;
+
+	/* set up time quantum */
+	tmp = RREG32_SOC15(GC, 0, mmCP_GFX_HQD_QUANTUM);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUANTUM, QUANTUM_EN, 1);
+	mqd->cp_gfx_hqd_quantum = tmp;
+
+	/* set up gfx hqd base. this is similar to CP_RB_BASE */
+	hqd_gpu_addr = ring->gpu_addr >> 8;
+	mqd->cp_gfx_hqd_base = hqd_gpu_addr;
+	mqd->cp_gfx_hqd_base_hi = upper_32_bits(hqd_gpu_addr);
+
+	/* set up hqd_rptr_addr/_hi, similar to CP_RB_RPTR */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
+	mqd->cp_gfx_hqd_rptr_addr = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_gfx_hqd_rptr_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* set up rb_wptr_poll addr */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	mqd->cp_rb_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_rb_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* set up the gfx_hqd_control, similar as CP_RB0_CNTL */
+	rb_bufsz = order_base_2(ring->ring_size / 4) - 1;
+	tmp = RREG32_SOC15(GC, 0, mmCP_GFX_HQD_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BLKSZ, rb_bufsz - 2);
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, BUF_SWAP, 1);
+#endif
+	mqd->cp_gfx_hqd_cntl = tmp;
+
+	/* set up cp_doorbell_control */
+	tmp = RREG32_SOC15(GC, 0, mmCP_RB_DOORBELL_CONTROL);
+	if (ring->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
+	}
+	mqd->cp_rb_doorbell_control = tmp;
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	ring->wptr = 0;
+	mqd->cp_gfx_hqd_rptr = RREG32_SOC15(GC, 0, mmCP_GFX_HQD_RPTR);
+
+	/* activate the queue */
+	mqd->cp_gfx_hqd_active = 1;
+
+	return 0;
+}
+
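+/*
+ * With BRING_UP_DEBUG the gfx queue registers are programmed directly via
+ * MMIO from the MQD instead of having the KIQ map the queue.
+ */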
+#ifdef BRING_UP_DEBUG
+static int gfx_v10_0_gfx_queue_init_register(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_gfx_mqd *mqd = ring->mqd_ptr;
+
+	/* set mmCP_GFX_HQD_WPTR/_HI to 0 */
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_WPTR, mqd->cp_gfx_hqd_wptr);
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_WPTR_HI, mqd->cp_gfx_hqd_wptr_hi);
+
+	/* set GFX_MQD_BASE */
+	WREG32_SOC15(GC, 0, mmCP_MQD_BASE_ADDR, mqd->cp_mqd_base_addr);
+	WREG32_SOC15(GC, 0, mmCP_MQD_BASE_ADDR_HI, mqd->cp_mqd_base_addr_hi);
+
+	/* set GFX_MQD_CONTROL */
+	WREG32_SOC15(GC, 0, mmCP_GFX_MQD_CONTROL, mqd->cp_gfx_mqd_control);
+
+	/* set GFX_HQD_VMID to 0 */
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_VMID, mqd->cp_gfx_hqd_vmid);
+
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_QUEUE_PRIORITY,
+			mqd->cp_gfx_hqd_queue_priority);
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_QUANTUM, mqd->cp_gfx_hqd_quantum);
+
+	/* set GFX_HQD_BASE, similar to CP_RB_BASE */
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_BASE, mqd->cp_gfx_hqd_base);
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_BASE_HI, mqd->cp_gfx_hqd_base_hi);
+
+	/* set GFX_HQD_RPTR_ADDR, similar to CP_RB_RPTR */
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_RPTR_ADDR, mqd->cp_gfx_hqd_rptr_addr);
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_RPTR_ADDR_HI, mqd->cp_gfx_hqd_rptr_addr_hi);
+
+	/* set GFX_HQD_CNTL, similar as CP_RB_CNTL */
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_CNTL, mqd->cp_gfx_hqd_cntl);
+
+	/* set RB_WPTR_POLL_ADDR */
+	WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_ADDR_LO, mqd->cp_rb_wptr_poll_addr_lo);
+	WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_ADDR_HI, mqd->cp_rb_wptr_poll_addr_hi);
+
+	/* set RB_DOORBELL_CONTROL */
+	WREG32_SOC15(GC, 0, mmCP_RB_DOORBELL_CONTROL, mqd->cp_rb_doorbell_control);
+
+	/* activate the queue */
+	WREG32_SOC15(GC, 0, mmCP_GFX_HQD_ACTIVE, mqd->cp_gfx_hqd_active);
+
+	return 0;
+}
+#endif
+
+static int gfx_v10_0_gfx_init_queue(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_gfx_mqd *mqd = ring->mqd_ptr;
+
+	if (adev->in_gpu_reset) {
+		/* reset mqd with the backup copy */
+		if (adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS])
+			memcpy(mqd, adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS], sizeof(*mqd));
+		/* reset the ring */
+		ring->wptr = 0;
+		amdgpu_ring_clear_ring(ring);
+#ifdef BRING_UP_DEBUG
+		mutex_lock(&adev->srbm_mutex);
+		nv_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v10_0_gfx_queue_init_register(ring);
+		nv_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+#endif
+	} else {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		nv_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v10_0_gfx_mqd_init(ring);
+#ifdef BRING_UP_DEBUG
+		gfx_v10_0_gfx_queue_init_register(ring);
+#endif
+		nv_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+		if (adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS])
+			memcpy(adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS], mqd, sizeof(*mqd));
+	}
+
+	return 0;
+}
+
+#ifndef BRING_UP_DEBUG
+static int gfx_v10_0_kiq_enable_kgq(struct amdgpu_device *adev)
+{
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	struct amdgpu_ring *kiq_ring = &adev->gfx.kiq.ring;
+	int r, i;
+
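+	/* ask the KIQ to map every kernel gfx queue by submitting
+	 * map_queues packets on its ring */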
+	if (!kiq->pmf || !kiq->pmf->kiq_map_queues)
+		return -EINVAL;
+
+	r = amdgpu_ring_alloc(kiq_ring, kiq->pmf->map_queues_size *
+					adev->gfx.num_gfx_rings);
+	if (r) {
+		DRM_ERROR("Failed to lock KIQ (%d).\n", r);
+		return r;
+	}
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		kiq->pmf->kiq_map_queues(kiq_ring, &adev->gfx.gfx_ring[i]);
+
+	r = amdgpu_ring_test_ring(kiq_ring);
+	if (r) {
+		DRM_ERROR("KGQ enable failed\n");
+		kiq_ring->sched.ready = false;
+	}
+	return r;
+}
+#endif
+
+static int gfx_v10_0_cp_async_gfx_ring_resume(struct amdgpu_device *adev)
+{
+	int r, i;
+	struct amdgpu_ring *ring;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+
+		r = amdgpu_bo_reserve(ring->mqd_obj, false);
+		if (unlikely(r != 0))
+			goto done;
+
+		r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+		if (!r) {
+			r = gfx_v10_0_gfx_init_queue(ring);
+			amdgpu_bo_kunmap(ring->mqd_obj);
+			ring->mqd_ptr = NULL;
+		}
+		amdgpu_bo_unreserve(ring->mqd_obj);
+		if (r)
+			goto done;
+	}
+#ifndef BRING_UP_DEBUG
+	r = gfx_v10_0_kiq_enable_kgq(adev);
+	if (r)
+		goto done;
+#endif
+	r = gfx_v10_0_cp_gfx_start(adev);
+	if (r)
+		goto done;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+		ring->sched.ready = true;
+	}
+done:
+	return r;
+}
+
+static int gfx_v10_0_compute_mqd_init(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_compute_mqd *mqd = ring->mqd_ptr;
+	uint64_t hqd_gpu_addr, wb_gpu_addr, eop_base_addr;
+	uint32_t tmp;
+
+	mqd->header = 0xC0310800;
+	mqd->compute_pipelinestat_enable = 0x00000001;
+	mqd->compute_static_thread_mgmt_se0 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se1 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se2 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se3 = 0xffffffff;
+	mqd->compute_misc_reserved = 0x00000003;
+
+	eop_base_addr = ring->eop_gpu_addr >> 8;
+	mqd->cp_hqd_eop_base_addr_lo = eop_base_addr;
+	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_EOP_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
+			(order_base_2(GFX10_MEC_HPD_SIZE / 4) - 1));
+
+	mqd->cp_hqd_eop_control = tmp;
+
+	/* enable doorbell? */
+	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL);
+
+	if (ring->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_HIT, 0);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
+	}
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* disable the queue if it's active */
+	ring->wptr = 0;
+	mqd->cp_hqd_dequeue_request = 0;
+	mqd->cp_hqd_pq_rptr = 0;
+	mqd->cp_hqd_pq_wptr_lo = 0;
+	mqd->cp_hqd_pq_wptr_hi = 0;
+
+	/* set the pointer to the MQD */
+	mqd->cp_mqd_base_addr_lo = ring->mqd_gpu_addr & 0xfffffffc;
+	mqd->cp_mqd_base_addr_hi = upper_32_bits(ring->mqd_gpu_addr);
+
+	/* set MQD vmid to 0 */
+	tmp = RREG32_SOC15(GC, 0, mmCP_MQD_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
+	mqd->cp_mqd_control = tmp;
+
+	/* set the pointer to the HQD, this is similar CP_RB0_BASE/_HI */
+	hqd_gpu_addr = ring->gpu_addr >> 8;
+	mqd->cp_hqd_pq_base_lo = hqd_gpu_addr;
+	mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
+			    (order_base_2(ring->ring_size / 4) - 1));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+			    ((order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1) << 8));
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
+#endif
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+	mqd->cp_hqd_pq_control = tmp;
+
+	/* set the wb address whether it's enabled or not */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
+	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_rptr_report_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	tmp = 0;
+	/* enable the doorbell if requested */
+	if (ring->use_doorbell) {
+		tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				DOORBELL_OFFSET, ring->doorbell_index);
+
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_HIT, 0);
+	}
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	ring->wptr = 0;
+	mqd->cp_hqd_pq_rptr = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_RPTR);
+
+	/* set the vmid for the queue */
+	mqd->cp_hqd_vmid = 0;
+
+	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PERSISTENT_STATE);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x53);
+	mqd->cp_hqd_persistent_state = tmp;
+
+	/* set MIN_IB_AVAIL_SIZE */
+	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_IB_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_IB_CONTROL, MIN_IB_AVAIL_SIZE, 3);
+	mqd->cp_hqd_ib_control = tmp;
+
+	/* activate the queue */
+	mqd->cp_hqd_active = 1;
+
+	return 0;
+}
+
+static int gfx_v10_0_kiq_init_register(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_compute_mqd *mqd = ring->mqd_ptr;
+	int j;
+
+	/* disable wptr polling */
+	WREG32_FIELD15(GC, 0, CP_PQ_WPTR_POLL_CNTL, EN, 0);
+
+	/* write the EOP addr */
+	WREG32_SOC15(GC, 0, mmCP_HQD_EOP_BASE_ADDR,
+	       mqd->cp_hqd_eop_base_addr_lo);
+	WREG32_SOC15(GC, 0, mmCP_HQD_EOP_BASE_ADDR_HI,
+	       mqd->cp_hqd_eop_base_addr_hi);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	WREG32_SOC15(GC, 0, mmCP_HQD_EOP_CONTROL,
+	       mqd->cp_hqd_eop_control);
+
+	/* enable doorbell? */
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL,
+	       mqd->cp_hqd_pq_doorbell_control);
+
+	/* disable the queue if it's active */
+	if (RREG32_SOC15(GC, 0, mmCP_HQD_ACTIVE) & 1) {
+		WREG32_SOC15(GC, 0, mmCP_HQD_DEQUEUE_REQUEST, 1);
+		for (j = 0; j < adev->usec_timeout; j++) {
+			if (!(RREG32_SOC15(GC, 0, mmCP_HQD_ACTIVE) & 1))
+				break;
+			udelay(1);
+		}
+		WREG32_SOC15(GC, 0, mmCP_HQD_DEQUEUE_REQUEST,
+		       mqd->cp_hqd_dequeue_request);
+		WREG32_SOC15(GC, 0, mmCP_HQD_PQ_RPTR,
+		       mqd->cp_hqd_pq_rptr);
+		WREG32_SOC15(GC, 0, mmCP_HQD_PQ_WPTR_LO,
+		       mqd->cp_hqd_pq_wptr_lo);
+		WREG32_SOC15(GC, 0, mmCP_HQD_PQ_WPTR_HI,
+		       mqd->cp_hqd_pq_wptr_hi);
+	}
+
+	/* set the pointer to the MQD */
+	WREG32_SOC15(GC, 0, mmCP_MQD_BASE_ADDR,
+	       mqd->cp_mqd_base_addr_lo);
+	WREG32_SOC15(GC, 0, mmCP_MQD_BASE_ADDR_HI,
+	       mqd->cp_mqd_base_addr_hi);
+
+	/* set MQD vmid to 0 */
+	WREG32_SOC15(GC, 0, mmCP_MQD_CONTROL,
+	       mqd->cp_mqd_control);
+
+	/* set the pointer to the HQD, this is similar CP_RB0_BASE/_HI */
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_BASE,
+	       mqd->cp_hqd_pq_base_lo);
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_BASE_HI,
+	       mqd->cp_hqd_pq_base_hi);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_CONTROL,
+	       mqd->cp_hqd_pq_control);
+
+	/* set the wb address whether it's enabled or not */
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_RPTR_REPORT_ADDR,
+		mqd->cp_hqd_pq_rptr_report_addr_lo);
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_RPTR_REPORT_ADDR_HI,
+		mqd->cp_hqd_pq_rptr_report_addr_hi);
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR,
+	       mqd->cp_hqd_pq_wptr_poll_addr_lo);
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR_HI,
+	       mqd->cp_hqd_pq_wptr_poll_addr_hi);
+
+	/* enable the doorbell if requested */
+	if (ring->use_doorbell) {
+		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
+			(adev->doorbell_index.kiq * 2) << 2);
+		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
+			(adev->doorbell_index.userqueue_end * 2) << 2);
+	}
+
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL,
+	       mqd->cp_hqd_pq_doorbell_control);
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_WPTR_LO,
+	       mqd->cp_hqd_pq_wptr_lo);
+	WREG32_SOC15(GC, 0, mmCP_HQD_PQ_WPTR_HI,
+	       mqd->cp_hqd_pq_wptr_hi);
+
+	/* set the vmid for the queue */
+	WREG32_SOC15(GC, 0, mmCP_HQD_VMID, mqd->cp_hqd_vmid);
+
+	WREG32_SOC15(GC, 0, mmCP_HQD_PERSISTENT_STATE,
+	       mqd->cp_hqd_persistent_state);
+
+	/* activate the queue */
+	WREG32_SOC15(GC, 0, mmCP_HQD_ACTIVE,
+	       mqd->cp_hqd_active);
+
+	if (ring->use_doorbell)
+		WREG32_FIELD15(GC, 0, CP_PQ_STATUS, DOORBELL_ENABLE, 1);
+
+	return 0;
+}
+
+static int gfx_v10_0_kiq_init_queue(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_compute_mqd *mqd = ring->mqd_ptr;
+	int mqd_idx = AMDGPU_MAX_COMPUTE_RINGS;
+
+	gfx_v10_0_kiq_setting(ring);
+
+	if (adev->in_gpu_reset) { /* for GPU_RESET case */
+		/* reset MQD to a clean status */
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(*mqd));
+
+		/* reset ring buffer */
+		ring->wptr = 0;
+		amdgpu_ring_clear_ring(ring);
+
+		mutex_lock(&adev->srbm_mutex);
+		nv_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v10_0_kiq_init_register(ring);
+		nv_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+	} else {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		nv_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v10_0_compute_mqd_init(ring);
+		gfx_v10_0_kiq_init_register(ring);
+		nv_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(adev->gfx.mec.mqd_backup[mqd_idx], mqd, sizeof(*mqd));
+	}
+
+	return 0;
+}
+
+static int gfx_v10_0_kcq_init_queue(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_compute_mqd *mqd = ring->mqd_ptr;
+	int mqd_idx = ring - &adev->gfx.compute_ring[0];
+
+	if (!adev->in_gpu_reset && !adev->in_suspend) {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		nv_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v10_0_compute_mqd_init(ring);
+		nv_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(adev->gfx.mec.mqd_backup[mqd_idx], mqd, sizeof(*mqd));
+	} else if (adev->in_gpu_reset) { /* for GPU_RESET case */
+		/* reset MQD to a clean status */
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(*mqd));
+
+		/* reset ring buffer */
+		ring->wptr = 0;
+		amdgpu_ring_clear_ring(ring);
+	} else {
+		amdgpu_ring_clear_ring(ring);
+	}
+
+	return 0;
+}
+
+static int gfx_v10_0_kiq_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	int r;
+
+	ring = &adev->gfx.kiq.ring;
+
+	r = amdgpu_bo_reserve(ring->mqd_obj, false);
+	if (unlikely(r != 0))
+		return r;
+
+	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+	if (unlikely(r != 0))
+		return r;
+
+	gfx_v10_0_kiq_init_queue(ring);
+	amdgpu_bo_kunmap(ring->mqd_obj);
+	ring->mqd_ptr = NULL;
+	amdgpu_bo_unreserve(ring->mqd_obj);
+	ring->sched.ready = true;
+	return 0;
+}
+
+static int gfx_v10_0_kcq_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = NULL;
+	int r = 0, i;
+
+	gfx_v10_0_cp_compute_enable(adev, true);
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+
+		r = amdgpu_bo_reserve(ring->mqd_obj, false);
+		if (unlikely(r != 0))
+			goto done;
+		r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+		if (!r) {
+			r = gfx_v10_0_kcq_init_queue(ring);
+			amdgpu_bo_kunmap(ring->mqd_obj);
+			ring->mqd_ptr = NULL;
+		}
+		amdgpu_bo_unreserve(ring->mqd_obj);
+		if (r)
+			goto done;
+	}
+
+	r = amdgpu_gfx_enable_kcq(adev);
+done:
+	return r;
+}
+
+static int gfx_v10_0_cp_resume(struct amdgpu_device *adev)
+{
+	int r, i;
+	struct amdgpu_ring *ring;
+
+	if (!(adev->flags & AMD_IS_APU))
+		gfx_v10_0_enable_gui_idle_interrupt(adev, false);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+		/* legacy firmware loading */
+		r = gfx_v10_0_cp_gfx_load_microcode(adev);
+		if (r)
+			return r;
+
+		r = gfx_v10_0_cp_compute_load_microcode(adev);
+		if (r)
+			return r;
+	}
+
+	r = gfx_v10_0_kiq_resume(adev);
+	if (r)
+		return r;
+
+	r = gfx_v10_0_kcq_resume(adev);
+	if (r)
+		return r;
+
+	if (!amdgpu_async_gfx_ring) {
+		r = gfx_v10_0_cp_gfx_resume(adev);
+		if (r)
+			return r;
+	} else {
+		r = gfx_v10_0_cp_async_gfx_ring_resume(adev);
+		if (r)
+			return r;
+	}
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+		DRM_INFO("gfx %d ring me %d pipe %d q %d\n",
+			 i, ring->me, ring->pipe, ring->queue);
+		r = amdgpu_ring_test_ring(ring);
+		if (r) {
+			ring->sched.ready = false;
+			return r;
+		}
+	}
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+		ring->sched.ready = true;
+		DRM_INFO("compute ring %d mec %d pipe %d q %d\n",
+			 i, ring->me, ring->pipe, ring->queue);
+		r = amdgpu_ring_test_ring(ring);
+		if (r)
+			ring->sched.ready = false;
+	}
+
+	return 0;
+}
+
+static void gfx_v10_0_cp_enable(struct amdgpu_device *adev, bool enable)
+{
+	gfx_v10_0_cp_gfx_enable(adev, enable);
+	gfx_v10_0_cp_compute_enable(adev, enable);
+}
+
+static bool gfx_v10_0_check_grbm_cam_remapping(struct amdgpu_device *adev)
+{
+	uint32_t data, pattern = 0xDEADBEEF;
+
+	/*
+	 * Check whether mmVGT_ESGS_RING_SIZE_UMD has been remapped to
+	 * mmVGT_ESGS_RING_SIZE by writing a pattern through the UMD alias
+	 * and reading it back through the privileged register.
+	 */
+	data = RREG32_SOC15(GC, 0, mmVGT_ESGS_RING_SIZE);
+
+	WREG32_SOC15(GC, 0, mmVGT_ESGS_RING_SIZE, 0);
+
+	WREG32_SOC15(GC, 0, mmVGT_ESGS_RING_SIZE_UMD, pattern);
+
+	if (RREG32_SOC15(GC, 0, mmVGT_ESGS_RING_SIZE) == pattern) {
+		WREG32_SOC15(GC, 0, mmVGT_ESGS_RING_SIZE_UMD, data);
+		return true;
+	} else {
+		WREG32_SOC15(GC, 0, mmVGT_ESGS_RING_SIZE, data);
+		return false;
+	}
+}
+
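+/*
+ * Program the GRBM CAM: each entry pairs a UMD register offset (CAM_ADDR)
+ * with the privileged register it is redirected to (CAM_REMAPADDR), so
+ * accesses to the *_UMD aliases land in the real registers.
+ */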
+static void gfx_v10_0_setup_grbm_cam_remapping(struct amdgpu_device *adev)
+{
+	uint32_t data;
+
+	/*
+	 * Initialize cam_index to 0; the index auto-increments after each
+	 * data write.
+	 */
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_INDEX, 0);
+
+	/* mmVGT_TF_RING_SIZE_UMD -> mmVGT_TF_RING_SIZE */
+	data = (SOC15_REG_OFFSET(GC, 0, mmVGT_TF_RING_SIZE_UMD) <<
+		GRBM_CAM_DATA__CAM_ADDR__SHIFT) |
+	       (SOC15_REG_OFFSET(GC, 0, mmVGT_TF_RING_SIZE) <<
+		GRBM_CAM_DATA__CAM_REMAPADDR__SHIFT);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA_UPPER, 0);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA, data);
+
+	/* mmVGT_TF_MEMORY_BASE_UMD -> mmVGT_TF_MEMORY_BASE */
+	data = (SOC15_REG_OFFSET(GC, 0, mmVGT_TF_MEMORY_BASE_UMD) <<
+		GRBM_CAM_DATA__CAM_ADDR__SHIFT) |
+	       (SOC15_REG_OFFSET(GC, 0, mmVGT_TF_MEMORY_BASE) <<
+		GRBM_CAM_DATA__CAM_REMAPADDR__SHIFT);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA_UPPER, 0);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA, data);
+
+	/* mmVGT_TF_MEMORY_BASE_HI_UMD -> mmVGT_TF_MEMORY_BASE_HI */
+	data = (SOC15_REG_OFFSET(GC, 0, mmVGT_TF_MEMORY_BASE_HI_UMD) <<
+		GRBM_CAM_DATA__CAM_ADDR__SHIFT) |
+	       (SOC15_REG_OFFSET(GC, 0, mmVGT_TF_MEMORY_BASE_HI) <<
+		GRBM_CAM_DATA__CAM_REMAPADDR__SHIFT);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA_UPPER, 0);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA, data);
+
+	/* mmVGT_HS_OFFCHIP_PARAM_UMD -> mmVGT_HS_OFFCHIP_PARAM */
+	data = (SOC15_REG_OFFSET(GC, 0, mmVGT_HS_OFFCHIP_PARAM_UMD) <<
+		GRBM_CAM_DATA__CAM_ADDR__SHIFT) |
+	       (SOC15_REG_OFFSET(GC, 0, mmVGT_HS_OFFCHIP_PARAM) <<
+		GRBM_CAM_DATA__CAM_REMAPADDR__SHIFT);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA_UPPER, 0);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA, data);
+
+	/* mmVGT_ESGS_RING_SIZE_UMD -> mmVGT_ESGS_RING_SIZE */
+	data = (SOC15_REG_OFFSET(GC, 0, mmVGT_ESGS_RING_SIZE_UMD) <<
+		GRBM_CAM_DATA__CAM_ADDR__SHIFT) |
+	       (SOC15_REG_OFFSET(GC, 0, mmVGT_ESGS_RING_SIZE) <<
+		GRBM_CAM_DATA__CAM_REMAPADDR__SHIFT);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA_UPPER, 0);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA, data);
+
+	/* mmVGT_GSVS_RING_SIZE_UMD -> mmVGT_GSVS_RING_SIZE */
+	data = (SOC15_REG_OFFSET(GC, 0, mmVGT_GSVS_RING_SIZE_UMD) <<
+		GRBM_CAM_DATA__CAM_ADDR__SHIFT) |
+	       (SOC15_REG_OFFSET(GC, 0, mmVGT_GSVS_RING_SIZE) <<
+		GRBM_CAM_DATA__CAM_REMAPADDR__SHIFT);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA_UPPER, 0);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA, data);
+
+	/* mmSPI_CONFIG_CNTL_REMAP -> mmSPI_CONFIG_CNTL */
+	data = (SOC15_REG_OFFSET(GC, 0, mmSPI_CONFIG_CNTL_REMAP) <<
+		GRBM_CAM_DATA__CAM_ADDR__SHIFT) |
+	       (SOC15_REG_OFFSET(GC, 0, mmSPI_CONFIG_CNTL) <<
+		GRBM_CAM_DATA__CAM_REMAPADDR__SHIFT);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA_UPPER, 0);
+	WREG32_SOC15(GC, 0, mmGRBM_CAM_DATA, data);
+}
+
+static int gfx_v10_0_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!amdgpu_emu_mode)
+		gfx_v10_0_init_golden_registers(adev);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+		/*
+		 * For gfx 10, RLC firmware loading relies on the SMU firmware
+		 * being loaded first, so in the direct loading case, the SMC
+		 * ucode must be loaded here before the RLC.
+		 */
+		r = smu_load_microcode(&adev->smu);
+		if (r)
+			return r;
+
+		r = smu_check_fw_status(&adev->smu);
+		if (r) {
+			pr_err("SMC firmware status is not correct\n");
+			return r;
+		}
+	}
+
+	/* if GRBM CAM not remapped, set up the remapping */
+	if (!gfx_v10_0_check_grbm_cam_remapping(adev))
+		gfx_v10_0_setup_grbm_cam_remapping(adev);
+
+	gfx_v10_0_constants_init(adev);
+
+	r = gfx_v10_0_rlc_resume(adev);
+	if (r)
+		return r;
+
+	/*
+	 * init golden registers and rlc resume may override some registers,
+	 * reconfig them here
+	 */
+	gfx_v10_0_tcp_harvest(adev);
+
+	r = gfx_v10_0_cp_resume(adev);
+	if (r)
+		return r;
+
+	return r;
+}
+
+#ifndef BRING_UP_DEBUG
+static int gfx_v10_0_disable_kgq(struct amdgpu_device *adev)
+{
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	struct amdgpu_ring *kiq_ring = &kiq->ring;
+	int i;
+
+	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+		return -EINVAL;
+
+	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+					adev->gfx.num_gfx_rings))
+		return -ENOMEM;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		kiq->pmf->kiq_unmap_queues(kiq_ring, &adev->gfx.gfx_ring[i],
+					   RESET_QUEUES, 0, 0);
+
+	return amdgpu_ring_test_ring(kiq_ring);
+}
+#endif
+
+static int gfx_v10_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
+	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
+#ifndef BRING_UP_DEBUG
+	if (gfx_v10_0_disable_kgq(adev))
+		DRM_ERROR("KGQ disable failed\n");
+#endif
+	if (amdgpu_gfx_disable_kcq(adev))
+		DRM_ERROR("KCQ disable failed\n");
+	if (amdgpu_sriov_vf(adev)) {
+		pr_debug("For SRIOV client, nothing more to do here.\n");
+		return 0;
+	}
+	gfx_v10_0_cp_enable(adev, false);
+	gfx_v10_0_rlc_stop(adev);
+
+	return 0;
+}
+
+static int gfx_v10_0_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->in_suspend = true;
+	return gfx_v10_0_hw_fini(adev);
+}
+
+static int gfx_v10_0_resume(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int r;
+
+	r = gfx_v10_0_hw_init(adev);
+	adev->in_suspend = false;
+	return r;
+}
+
+static bool gfx_v10_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (REG_GET_FIELD(RREG32_SOC15(GC, 0, mmGRBM_STATUS),
+				GRBM_STATUS, GUI_ACTIVE))
+		return false;
+	else
+		return true;
+}
+
+static int gfx_v10_0_wait_for_idle(void *handle)
+{
+	unsigned i;
+	u32 tmp;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		/* read GRBM_STATUS */
+		tmp = RREG32_SOC15(GC, 0, mmGRBM_STATUS) &
+			GRBM_STATUS__GUI_ACTIVE_MASK;
+
+		if (!REG_GET_FIELD(tmp, GRBM_STATUS, GUI_ACTIVE))
+			return 0;
+		udelay(1);
+	}
+	return -ETIMEDOUT;
+}
+
+static int gfx_v10_0_soft_reset(void *handle)
+{
+	u32 grbm_soft_reset = 0;
+	u32 tmp;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* GRBM_STATUS */
+	tmp = RREG32_SOC15(GC, 0, mmGRBM_STATUS);
+	if (tmp & (GRBM_STATUS__PA_BUSY_MASK | GRBM_STATUS__SC_BUSY_MASK |
+		   GRBM_STATUS__BCI_BUSY_MASK | GRBM_STATUS__SX_BUSY_MASK |
+		   GRBM_STATUS__TA_BUSY_MASK | GRBM_STATUS__DB_BUSY_MASK |
+		   GRBM_STATUS__CB_BUSY_MASK | GRBM_STATUS__GDS_BUSY_MASK |
+		   GRBM_STATUS__SPI_BUSY_MASK | GRBM_STATUS__GE_BUSY_NO_DMA_MASK
+		   | GRBM_STATUS__BCI_BUSY_MASK)) {
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_CP,
+						1);
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_GFX,
+						1);
+	}
+
+	if (tmp & (GRBM_STATUS__CP_BUSY_MASK | GRBM_STATUS__CP_COHERENCY_BUSY_MASK)) {
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_CP,
+						1);
+	}
+
+	/* GRBM_STATUS2 */
+	tmp = RREG32_SOC15(GC, 0, mmGRBM_STATUS2);
+	if (REG_GET_FIELD(tmp, GRBM_STATUS2, RLC_BUSY))
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_RLC,
+						1);
+
+	if (grbm_soft_reset) {
+		/* stop the rlc */
+		gfx_v10_0_rlc_stop(adev);
+
+		/* Disable GFX parsing/prefetching */
+		gfx_v10_0_cp_gfx_enable(adev, false);
+
+		/* Disable MEC parsing/prefetching */
+		gfx_v10_0_cp_compute_enable(adev, false);
+
+		tmp = RREG32_SOC15(GC, 0, mmGRBM_SOFT_RESET);
+		tmp |= grbm_soft_reset;
+		dev_info(adev->dev, "GRBM_SOFT_RESET=0x%08X\n", tmp);
+		WREG32_SOC15(GC, 0, mmGRBM_SOFT_RESET, tmp);
+		tmp = RREG32_SOC15(GC, 0, mmGRBM_SOFT_RESET);
+
+		udelay(50);
+
+		tmp &= ~grbm_soft_reset;
+		WREG32_SOC15(GC, 0, mmGRBM_SOFT_RESET, tmp);
+		tmp = RREG32_SOC15(GC, 0, mmGRBM_SOFT_RESET);
+
+		/* Wait a little for things to settle down */
+		udelay(50);
+	}
+	return 0;
+}
+
+static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
+{
+	uint64_t clock;
+
+	mutex_lock(&adev->gfx.gpu_clock_mutex);
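+	/* writing 1 latches the free-running GPU clock into the LSB/MSB regs */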
+	WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
+	clock = (uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB) |
+		((uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
+	mutex_unlock(&adev->gfx.gpu_clock_mutex);
+	return clock;
+}
+
+static void gfx_v10_0_ring_emit_gds_switch(struct amdgpu_ring *ring,
+					   uint32_t vmid,
+					   uint32_t gds_base, uint32_t gds_size,
+					   uint32_t gws_base, uint32_t gws_size,
+					   uint32_t oa_base, uint32_t oa_size)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	/* GDS Base */
+	gfx_v10_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, mmGDS_VMID0_BASE) + 2 * vmid,
+				    gds_base);
+
+	/* GDS Size */
+	gfx_v10_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, mmGDS_VMID0_SIZE) + 2 * vmid,
+				    gds_size);
+
+	/* GWS */
+	gfx_v10_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID0) + vmid,
+				    gws_size << GDS_GWS_VMID0__SIZE__SHIFT | gws_base);
+
+	/* OA */
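+	/* the expression below builds a mask of oa_size bits starting at oa_base */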
+	gfx_v10_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID0) + vmid,
+				    (1 << (oa_size + oa_base)) - (1 << oa_base));
+}
+
+static int gfx_v10_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->gfx.num_gfx_rings = GFX10_NUM_GFX_RINGS;
+	adev->gfx.num_compute_rings = AMDGPU_MAX_COMPUTE_RINGS;
+
+	gfx_v10_0_set_kiq_pm4_funcs(adev);
+	gfx_v10_0_set_ring_funcs(adev);
+	gfx_v10_0_set_irq_funcs(adev);
+	gfx_v10_0_set_gds_init(adev);
+	gfx_v10_0_set_rlc_funcs(adev);
+
+	return 0;
+}
+
+static int gfx_v10_0_late_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int r;
+
+	r = amdgpu_irq_get(adev, &adev->gfx.priv_reg_irq, 0);
+	if (r)
+		return r;
+
+	r = amdgpu_irq_get(adev, &adev->gfx.priv_inst_irq, 0);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static bool gfx_v10_0_is_rlc_enabled(struct amdgpu_device *adev)
+{
+	uint32_t rlc_cntl;
+
+	/* if RLC is not enabled, do nothing */
+	rlc_cntl = RREG32_SOC15(GC, 0, mmRLC_CNTL);
+	return (REG_GET_FIELD(rlc_cntl, RLC_CNTL, RLC_ENABLE_F32)) ? true : false;
+}
+
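+/*
+ * RLC safe-mode handshake: entry writes CMD with MESSAGE=1 and polls until
+ * the RLC clears the CMD bit as an ack; exit writes CMD with MESSAGE=0 and
+ * does not wait.
+ */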
+static void gfx_v10_0_set_safe_mode(struct amdgpu_device *adev)
+{
+	uint32_t data;
+	unsigned i;
+
+	data = RLC_SAFE_MODE__CMD_MASK;
+	data |= (1 << RLC_SAFE_MODE__MESSAGE__SHIFT);
+	WREG32_SOC15(GC, 0, mmRLC_SAFE_MODE, data);
+
+	/* wait for RLC_SAFE_MODE */
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (!REG_GET_FIELD(RREG32_SOC15(GC, 0, mmRLC_SAFE_MODE), RLC_SAFE_MODE, CMD))
+			break;
+		udelay(1);
+	}
+}
+
+static void gfx_v10_0_unset_safe_mode(struct amdgpu_device *adev)
+{
+	uint32_t data;
+
+	data = RLC_SAFE_MODE__CMD_MASK;
+	WREG32_SOC15(GC, 0, mmRLC_SAFE_MODE, data);
+}
+
+static void gfx_v10_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+						      bool enable)
+{
+	uint32_t data, def;
+
+	/* It is disabled by HW by default */
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGCG)) {
+		/* 1 - RLC_CGTT_MGCG_OVERRIDE */
+		def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+		data &= ~(RLC_CGTT_MGCG_OVERRIDE__GRBM_CGTT_SCLK_OVERRIDE_MASK |
+			  RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK |
+			  RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK);
+
+		/* only for Vega10 & Raven1 */
+		data |= RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK;
+
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
+
+		/* MGLS is a global flag to control all MGLS in GFX */
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGLS) {
+			/* 2 - RLC memory Light sleep */
+			if (adev->cg_flags & AMD_CG_SUPPORT_GFX_RLC_LS) {
+				def = data = RREG32_SOC15(GC, 0, mmRLC_MEM_SLP_CNTL);
+				data |= RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK;
+				if (def != data)
+					WREG32_SOC15(GC, 0, mmRLC_MEM_SLP_CNTL, data);
+			}
+			/* 3 - CP memory Light sleep */
+			if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CP_LS) {
+				def = data = RREG32_SOC15(GC, 0, mmCP_MEM_SLP_CNTL);
+				data |= CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK;
+				if (def != data)
+					WREG32_SOC15(GC, 0, mmCP_MEM_SLP_CNTL, data);
+			}
+		}
+	} else {
+		/* 1 - MGCG_OVERRIDE */
+		def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+		data |= (RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK |
+			 RLC_CGTT_MGCG_OVERRIDE__GRBM_CGTT_SCLK_OVERRIDE_MASK |
+			 RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK |
+			 RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK);
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
+
+		/* 2 - disable MGLS in RLC */
+		data = RREG32_SOC15(GC, 0, mmRLC_MEM_SLP_CNTL);
+		if (data & RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK) {
+			data &= ~RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK;
+			WREG32_SOC15(GC, 0, mmRLC_MEM_SLP_CNTL, data);
+		}
+
+		/* 3 - disable MGLS in CP */
+		data = RREG32_SOC15(GC, 0, mmCP_MEM_SLP_CNTL);
+		if (data & CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK) {
+			data &= ~CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK;
+			WREG32_SOC15(GC, 0, mmCP_MEM_SLP_CNTL, data);
+		}
+	}
+}
+
+static void gfx_v10_0_update_3d_clock_gating(struct amdgpu_device *adev,
+					   bool enable)
+{
+	uint32_t data, def;
+
+	/* Enable 3D CGCG/CGLS */
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)) {
+		/* write cmd to clear cgcg/cgls ov */
+		def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+		/* unset CGCG override */
+		data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_GFX3D_CG_OVERRIDE_MASK;
+		/* update CGCG and CGLS override bits */
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
+		/* enable 3Dcgcg FSM(0x0000363f) */
+		def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
+		data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+			RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
+			data |= (0x000F << RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+				RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D, data);
+
+		/* set IDLE_POLL_COUNT(0x00900100) */
+		def = RREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_CNTL);
+		data = (0x0100 << CP_RB_WPTR_POLL_CNTL__POLL_FREQUENCY__SHIFT) |
+			(0x0090 << CP_RB_WPTR_POLL_CNTL__IDLE_POLL_COUNT__SHIFT);
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_CNTL, data);
+	} else {
+		/* Disable CGCG/CGLS */
+		def = data = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
+		/* disable cgcg, cgls should be disabled */
+		data &= ~(RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK |
+			  RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK);
+		/* disable cgcg and cgls in FSM */
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D, data);
+	}
+}
+
+static void gfx_v10_0_update_coarse_grain_clock_gating(struct amdgpu_device *adev,
+						      bool enable)
+{
+	uint32_t def, data;
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)) {
+		def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+		/* unset CGCG override */
+		data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGCG_OVERRIDE_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
+			data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGLS_OVERRIDE_MASK;
+		else
+			data |= RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGLS_OVERRIDE_MASK;
+		/* update CGCG and CGLS override bits */
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE, data);
+
+		/* enable cgcg FSM(0x0000363F) */
+		def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL);
+		data = (0x36 << RLC_CGCG_CGLS_CTRL__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+			RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
+			data |= (0x000F << RLC_CGCG_CGLS_CTRL__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+				RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK;
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL, data);
+
+		/* set IDLE_POLL_COUNT(0x00900100) */
+		def = RREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_CNTL);
+		data = (0x0100 << CP_RB_WPTR_POLL_CNTL__POLL_FREQUENCY__SHIFT) |
+			(0x0090 << CP_RB_WPTR_POLL_CNTL__IDLE_POLL_COUNT__SHIFT);
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmCP_RB_WPTR_POLL_CNTL, data);
+	} else {
+		def = data = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL);
+		/* reset CGCG/CGLS bits */
+		data &= ~(RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK | RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK);
+		/* disable cgcg and cgls in FSM */
+		if (def != data)
+			WREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL, data);
+	}
+}
+
+static int gfx_v10_0_update_gfx_clock_gating(struct amdgpu_device *adev,
+					    bool enable)
+{
+	amdgpu_gfx_rlc_enter_safe_mode(adev);
+
+	if (enable) {
+		/* CGCG/CGLS should be enabled after MGCG/MGLS
+		 * ===  MGCG + MGLS ===
+		 */
+		gfx_v10_0_update_medium_grain_clock_gating(adev, enable);
+		/* ===  CGCG /CGLS for GFX 3D Only === */
+		gfx_v10_0_update_3d_clock_gating(adev, enable);
+		/* ===  CGCG + CGLS === */
+		gfx_v10_0_update_coarse_grain_clock_gating(adev, enable);
+	} else {
+		/* CGCG/CGLS should be disabled before MGCG/MGLS
+		 * ===  CGCG + CGLS ===
+		 */
+		gfx_v10_0_update_coarse_grain_clock_gating(adev, enable);
+		/* ===  CGCG /CGLS for GFX 3D Only === */
+		gfx_v10_0_update_3d_clock_gating(adev, enable);
+		/* ===  MGCG + MGLS === */
+		gfx_v10_0_update_medium_grain_clock_gating(adev, enable);
+	}
+
+	if (adev->cg_flags &
+	    (AMD_CG_SUPPORT_GFX_MGCG |
+	     AMD_CG_SUPPORT_GFX_CGCG |
+	     AMD_CG_SUPPORT_GFX_CGLS |
+	     AMD_CG_SUPPORT_GFX_3D_CGCG |
+	     AMD_CG_SUPPORT_GFX_3D_CGLS))
+		gfx_v10_0_enable_gui_idle_interrupt(adev, enable);
+
+	amdgpu_gfx_rlc_exit_safe_mode(adev);
+
+	return 0;
+}
+
+static const struct amdgpu_rlc_funcs gfx_v10_0_rlc_funcs = {
+	.is_rlc_enabled = gfx_v10_0_is_rlc_enabled,
+	.set_safe_mode = gfx_v10_0_set_safe_mode,
+	.unset_safe_mode = gfx_v10_0_unset_safe_mode,
+	.init = gfx_v10_0_rlc_init,
+	.get_csb_size = gfx_v10_0_get_csb_size,
+	.get_csb_buffer = gfx_v10_0_get_csb_buffer,
+	.resume = gfx_v10_0_rlc_resume,
+	.stop = gfx_v10_0_rlc_stop,
+	.reset = gfx_v10_0_rlc_reset,
+	.start = gfx_v10_0_rlc_start
+};
+
+static int gfx_v10_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	bool enable = (state == AMD_PG_STATE_GATE);
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		if (!enable) {
+			amdgpu_gfx_off_ctrl(adev, false);
+			cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
+		} else {
+			amdgpu_gfx_off_ctrl(adev, true);
+		}
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v10_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		gfx_v10_0_update_gfx_clock_gating(adev,
+						  state == AMD_CG_STATE_GATE);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static void gfx_v10_0_get_clockgating_state(void *handle, u32 *flags)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int data;
+
+	/* AMD_CG_SUPPORT_GFX_MGCG */
+	data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+	if (!(data & RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK))
+		*flags |= AMD_CG_SUPPORT_GFX_MGCG;
+
+	/* AMD_CG_SUPPORT_GFX_CGCG */
+	data = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL);
+	if (data & RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_CGCG;
+
+	/* AMD_CG_SUPPORT_GFX_CGLS */
+	if (data & RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_CGLS;
+
+	/* AMD_CG_SUPPORT_GFX_RLC_LS */
+	data = RREG32_SOC15(GC, 0, mmRLC_MEM_SLP_CNTL);
+	if (data & RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_RLC_LS | AMD_CG_SUPPORT_GFX_MGLS;
+
+	/* AMD_CG_SUPPORT_GFX_CP_LS */
+	data = RREG32_SOC15(GC, 0, mmCP_MEM_SLP_CNTL);
+	if (data & CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_CP_LS | AMD_CG_SUPPORT_GFX_MGLS;
+
+	/* AMD_CG_SUPPORT_GFX_3D_CGCG */
+	data = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
+	if (data & RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_3D_CGCG;
+
+	/* AMD_CG_SUPPORT_GFX_3D_CGLS */
+	if (data & RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_3D_CGLS;
+}
+
+static u64 gfx_v10_0_ring_get_rptr_gfx(struct amdgpu_ring *ring)
+{
+	return ring->adev->wb.wb[ring->rptr_offs]; /* gfx10 is 32bit rptr */
+}
+
+static u64 gfx_v10_0_ring_get_wptr_gfx(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u64 wptr;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell) {
+		wptr = atomic64_read((atomic64_t *)&adev->wb.wb[ring->wptr_offs]);
+	} else {
+		wptr = RREG32_SOC15(GC, 0, mmCP_RB0_WPTR);
+		wptr += (u64)RREG32_SOC15(GC, 0, mmCP_RB0_WPTR_HI) << 32;
+	}
+
+	return wptr;
+}
+
+static void gfx_v10_0_ring_set_wptr_gfx(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell) {
+		/* XXX check if swapping is necessary on BE */
+		atomic64_set((atomic64_t*)&adev->wb.wb[ring->wptr_offs], ring->wptr);
+		WDOORBELL64(ring->doorbell_index, ring->wptr);
+	} else {
+		WREG32_SOC15(GC, 0, mmCP_RB0_WPTR, lower_32_bits(ring->wptr));
+		WREG32_SOC15(GC, 0, mmCP_RB0_WPTR_HI, upper_32_bits(ring->wptr));
+	}
+}
+
+static u64 gfx_v10_0_ring_get_rptr_compute(struct amdgpu_ring *ring)
+{
+	return ring->adev->wb.wb[ring->rptr_offs]; /* gfx10 hardware is 32bit rptr */
+}
+
+static u64 gfx_v10_0_ring_get_wptr_compute(struct amdgpu_ring *ring)
+{
+	u64 wptr;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell)
+		wptr = atomic64_read((atomic64_t *)&ring->adev->wb.wb[ring->wptr_offs]);
+	else
+		BUG();
+	return wptr;
+}
+
+static void gfx_v10_0_ring_set_wptr_compute(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell) {
+		atomic64_set((atomic64_t*)&adev->wb.wb[ring->wptr_offs], ring->wptr);
+		WDOORBELL64(ring->doorbell_index, ring->wptr);
+	} else {
+		BUG(); /* only DOORBELL method supported on gfx10 now */
+	}
+}
+
+static void gfx_v10_0_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u32 ref_and_mask, reg_mem_engine;
+	const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio_funcs->hdp_flush_reg;
+
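+	/*
+	 * Pick the per-engine HDP flush done bit: CP0 for gfx, CP2/CP6
+	 * (shifted by pipe) for MEC1/MEC2.
+	 */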
+	if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
+		switch (ring->me) {
+		case 1:
+			ref_and_mask = nbio_hf_reg->ref_and_mask_cp2 << ring->pipe;
+			break;
+		case 2:
+			ref_and_mask = nbio_hf_reg->ref_and_mask_cp6 << ring->pipe;
+			break;
+		default:
+			return;
+		}
+		reg_mem_engine = 0;
+	} else {
+		ref_and_mask = nbio_hf_reg->ref_and_mask_cp0;
+		reg_mem_engine = 1; /* pfp */
+	}
+
+	gfx_v10_0_wait_reg_mem(ring, reg_mem_engine, 0, 1,
+			       adev->nbio_funcs->get_hdp_flush_req_offset(adev),
+			       adev->nbio_funcs->get_hdp_flush_done_offset(adev),
+			       ref_and_mask, ref_and_mask, 0x20);
+}
+
+static void gfx_v10_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
+				       struct amdgpu_job *job,
+				       struct amdgpu_ib *ib,
+				       uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+	u32 header, control = 0;
+
+	if (ib->flags & AMDGPU_IB_FLAG_CE)
+		header = PACKET3(PACKET3_INDIRECT_BUFFER_CNST, 2);
+	else
+		header = PACKET3(PACKET3_INDIRECT_BUFFER, 2);
+
+	control |= ib->length_dw | (vmid << 24);
+
+	if (amdgpu_mcbp && (ib->flags & AMDGPU_IB_FLAG_PREEMPT)) {
+		control |= INDIRECT_BUFFER_PRE_ENB(1);
+
+		if (flags & AMDGPU_IB_PREEMPTED)
+			control |= INDIRECT_BUFFER_PRE_RESUME(1);
+
+		if (!(ib->flags & AMDGPU_IB_FLAG_CE))
+			gfx_v10_0_ring_emit_de_meta(ring,
+				    flags & AMDGPU_IB_PREEMPTED ? true : false);
+	}
+
+	amdgpu_ring_write(ring, header);
+	BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
+	amdgpu_ring_write(ring,
+#ifdef __BIG_ENDIAN
+		(2 << 0) |
+#endif
+		lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, control);
+}
+
+static void gfx_v10_0_ring_emit_ib_compute(struct amdgpu_ring *ring,
+					   struct amdgpu_job *job,
+					   struct amdgpu_ib *ib,
+					   uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+	u32 control = INDIRECT_BUFFER_VALID | ib->length_dw | (vmid << 24);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
+	BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
+	amdgpu_ring_write(ring,
+#ifdef __BIG_ENDIAN
+				(2 << 0) |
+#endif
+				lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, control);
+}
+
+static void gfx_v10_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr,
+				     u64 seq, unsigned flags)
+{
+	struct amdgpu_device *adev = ring->adev;
+	bool write64bit = flags & AMDGPU_FENCE_FLAG_64BIT;
+	bool int_sel = flags & AMDGPU_FENCE_FLAG_INT;
+
+	/*
+	 * Interrupts don't work correctly on the GFX10.1 model yet; use the
+	 * fallback instead.
+	 */
+	if (adev->pdev->device == 0x50)
+		int_sel = false;
+
+	/* RELEASE_MEM - flush caches, send int */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_RELEASE_MEM, 6));
+	amdgpu_ring_write(ring, (PACKET3_RELEASE_MEM_GCR_SEQ |
+				 PACKET3_RELEASE_MEM_GCR_GL2_WB |
+				 PACKET3_RELEASE_MEM_GCR_GL2_INV |
+				 PACKET3_RELEASE_MEM_GCR_GL2_US |
+				 PACKET3_RELEASE_MEM_GCR_GL1_INV |
+				 PACKET3_RELEASE_MEM_GCR_GLV_INV |
+				 PACKET3_RELEASE_MEM_GCR_GLM_INV |
+				 PACKET3_RELEASE_MEM_GCR_GLM_WB |
+				 PACKET3_RELEASE_MEM_CACHE_POLICY(3) |
+				 PACKET3_RELEASE_MEM_EVENT_TYPE(CACHE_FLUSH_AND_INV_TS_EVENT) |
+				 PACKET3_RELEASE_MEM_EVENT_INDEX(5)));
+	amdgpu_ring_write(ring, (PACKET3_RELEASE_MEM_DATA_SEL(write64bit ? 2 : 1) |
+				 PACKET3_RELEASE_MEM_INT_SEL(int_sel ? 2 : 0)));
+
+	/*
+	 * The address must be Qword aligned for a 64bit write and Dword
+	 * aligned when only the low 32 bits are sent (data high discarded).
+	 */
+	if (write64bit)
+		BUG_ON(addr & 0x7);
+	else
+		BUG_ON(addr & 0x3);
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+	amdgpu_ring_write(ring, upper_32_bits(seq));
+	amdgpu_ring_write(ring, 0);
+}
+
+static void gfx_v10_0_ring_emit_pipeline_sync(struct amdgpu_ring *ring)
+{
+	int usepfp = (ring->funcs->type == AMDGPU_RING_TYPE_GFX);
+	uint32_t seq = ring->fence_drv.sync_seq;
+	uint64_t addr = ring->fence_drv.gpu_addr;
+
+	gfx_v10_0_wait_reg_mem(ring, usepfp, 1, 0, lower_32_bits(addr),
+			       upper_32_bits(addr), seq, 0xffffffff, 4);
+}
+
+static void gfx_v10_0_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					 unsigned vmid, uint64_t pd_addr)
+{
+	amdgpu_gmc_emit_flush_gpu_tlb(ring, vmid, pd_addr);
+
+	/* compute doesn't have PFP */
+	if (ring->funcs->type == AMDGPU_RING_TYPE_GFX) {
+		/* sync PFP to ME, otherwise we might get invalid PFP reads */
+		amdgpu_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+		amdgpu_ring_write(ring, 0x0);
+	}
+}
+
+static void gfx_v10_0_ring_emit_fence_kiq(struct amdgpu_ring *ring, u64 addr,
+					  u64 seq, unsigned int flags)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	/* we only allocate 32bit for each seq wb address */
+	BUG_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	/* write fence seq to the "addr" */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+				 WRITE_DATA_DST_SEL(5) | WR_CONFIRM));
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+
+	if (flags & AMDGPU_FENCE_FLAG_INT) {
+		/* set register to trigger INT */
+		amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+		amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+					 WRITE_DATA_DST_SEL(0) | WR_CONFIRM));
+		amdgpu_ring_write(ring, SOC15_REG_OFFSET(GC, 0, mmCPC_INT_STATUS));
+		amdgpu_ring_write(ring, 0);
+		amdgpu_ring_write(ring, 0x20000000); /* src_id is 178 */
+	}
+}
+
+static void gfx_v10_0_ring_emit_sb(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SWITCH_BUFFER, 0));
+	amdgpu_ring_write(ring, 0);
+}
+
+static void gfx_v10_0_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
+{
+	uint32_t dw2 = 0;
+
+	if (amdgpu_mcbp)
+		gfx_v10_0_ring_emit_ce_meta(ring,
+				    flags & AMDGPU_IB_PREEMPTED ? true : false);
+
+	gfx_v10_0_ring_emit_tmz(ring, true);
+
+	dw2 |= 0x80000000; /* set load_enable otherwise this packet is just NOPs */
+	if (flags & AMDGPU_HAVE_CTX_SWITCH) {
+		/* set load_global_config & load_global_uconfig */
+		dw2 |= 0x8001;
+		/* set load_cs_sh_regs */
+		dw2 |= 0x01000000;
+		/* set load_per_context_state & load_gfx_sh_regs for GFX */
+		dw2 |= 0x10002;
+
+		/* set load_ce_ram if preamble presented */
+		if (AMDGPU_PREAMBLE_IB_PRESENT & flags)
+			dw2 |= 0x10000000;
+	} else {
+		/*
+		 * Still set load_ce_ram if this is the first time a preamble
+		 * is presented, even though no context switch happens.
+		 */
+		if (AMDGPU_PREAMBLE_IB_PRESENT_FIRST & flags)
+			dw2 |= 0x10000000;
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	amdgpu_ring_write(ring, dw2);
+	amdgpu_ring_write(ring, 0);
+}
+
+static unsigned gfx_v10_0_ring_emit_init_cond_exec(struct amdgpu_ring *ring)
+{
+	unsigned ret;
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_COND_EXEC, 3));
+	amdgpu_ring_write(ring, lower_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, 0); /* discard following DWs if *cond_exe_gpu_addr == 0 */
+	ret = ring->wptr & ring->buf_mask;
+	amdgpu_ring_write(ring, 0x55aa55aa); /* patch dummy value later */
+
+	return ret;
+}
+
+static void gfx_v10_0_ring_emit_patch_cond_exec(struct amdgpu_ring *ring, unsigned offset)
+{
+	unsigned cur;
+	BUG_ON(offset > ring->buf_mask);
+	BUG_ON(ring->ring[offset] != 0x55aa55aa);
+
+	cur = (ring->wptr - 1) & ring->buf_mask;
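+	/* patch in the dword count to skip; cur <= offset means the ring wrapped */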
+	if (likely(cur > offset))
+		ring->ring[offset] = cur - offset;
+	else
+		ring->ring[offset] = (ring->buf_mask + 1) - offset + cur;
+}
+
+static int gfx_v10_0_ring_preempt_ib(struct amdgpu_ring *ring)
+{
+	int i, r = 0;
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	struct amdgpu_ring *kiq_ring = &kiq->ring;
+
+	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+		return -EINVAL;
+
+	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size))
+		return -ENOMEM;
+
+	/* assert preemption condition */
+	amdgpu_ring_set_preempt_cond_exec(ring, false);
+
+	/* assert IB preemption, emit the trailing fence */
+	kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP,
+				   ring->trail_fence_gpu_addr,
+				   ++ring->trail_seq);
+	amdgpu_ring_commit(kiq_ring);
+
+	/* poll the trailing fence */
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (ring->trail_seq ==
+		    le32_to_cpu(*(ring->trail_fence_cpu_addr)))
+			break;
+		DRM_UDELAY(1);
+	}
+
+	if (i >= adev->usec_timeout) {
+		r = -EINVAL;
+		DRM_ERROR("ring %d failed to preempt ib\n", ring->idx);
+	}
+
+	/* deassert preemption condition */
+	amdgpu_ring_set_preempt_cond_exec(ring, true);
+	return r;
+}
+
+static void gfx_v10_0_ring_emit_ce_meta(struct amdgpu_ring *ring, bool resume)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_ce_ib_state ce_payload = {0};
+	uint64_t csa_addr;
+	int cnt;
+
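+	/*
+	 * WRITE_DATA count field: 3 setup dwords plus the payload, minus one
+	 * per the PACKET3 count encoding (total packet dwords minus 2).
+	 */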
+	cnt = (sizeof(ce_payload) >> 2) + 4 - 2;
+	csa_addr = amdgpu_csa_vaddr(ring->adev);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(2) |
+				 WRITE_DATA_DST_SEL(8) |
+				 WR_CONFIRM) |
+				 WRITE_DATA_CACHE_POLICY(0));
+	amdgpu_ring_write(ring, lower_32_bits(csa_addr +
+			      offsetof(struct v10_gfx_meta_data, ce_payload)));
+	amdgpu_ring_write(ring, upper_32_bits(csa_addr +
+			      offsetof(struct v10_gfx_meta_data, ce_payload)));
+
+	if (resume)
+		amdgpu_ring_write_multiple(ring, adev->virt.csa_cpu_addr +
+					   offsetof(struct v10_gfx_meta_data,
+						    ce_payload),
+					   sizeof(ce_payload) >> 2);
+	else
+		amdgpu_ring_write_multiple(ring, (void *)&ce_payload,
+					   sizeof(ce_payload) >> 2);
+}
+
+static void gfx_v10_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_de_ib_state de_payload = {0};
+	uint64_t csa_addr, gds_addr;
+	int cnt;
+
+	csa_addr = amdgpu_csa_vaddr(ring->adev);
+	gds_addr = csa_addr + 4096;
+	de_payload.gds_backup_addrlo = lower_32_bits(gds_addr);
+	de_payload.gds_backup_addrhi = upper_32_bits(gds_addr);
+
+	cnt = (sizeof(de_payload) >> 2) + 4 - 2;
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
+				 WRITE_DATA_DST_SEL(8) |
+				 WR_CONFIRM) |
+				 WRITE_DATA_CACHE_POLICY(0));
+	amdgpu_ring_write(ring, lower_32_bits(csa_addr +
+			      offsetof(struct v10_gfx_meta_data, de_payload)));
+	amdgpu_ring_write(ring, upper_32_bits(csa_addr +
+			      offsetof(struct v10_gfx_meta_data, de_payload)));
+
+	if (resume)
+		amdgpu_ring_write_multiple(ring, adev->virt.csa_cpu_addr +
+					   offsetof(struct v10_gfx_meta_data,
+						    de_payload),
+					   sizeof(de_payload) >> 2);
+	else
+		amdgpu_ring_write_multiple(ring, (void *)&de_payload,
+					   sizeof(de_payload) >> 2);
+}
+
+static void gfx_v10_0_ring_emit_tmz(struct amdgpu_ring *ring, bool start)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_FRAME_CONTROL, 0));
+	amdgpu_ring_write(ring, FRAME_CMD(start ? 0 : 1)); /* frame_end */
+}
+
+static void gfx_v10_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_COPY_DATA, 4));
+	amdgpu_ring_write(ring, 0 |	/* src: register */
+				(5 << 8) |	/* dst: memory */
+				(1 << 20));	/* write confirm */
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, lower_32_bits(adev->wb.gpu_addr +
+				adev->virt.reg_val_offs * 4));
+	amdgpu_ring_write(ring, upper_32_bits(adev->wb.gpu_addr +
+				adev->virt.reg_val_offs * 4));
+}
+
+static void gfx_v10_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
+				   uint32_t val)
+{
+	uint32_t cmd = 0;
+
+	switch (ring->funcs->type) {
+	case AMDGPU_RING_TYPE_GFX:
+		cmd = WRITE_DATA_ENGINE_SEL(1) | WR_CONFIRM;
+		break;
+	case AMDGPU_RING_TYPE_KIQ:
+		cmd = (1 << 16); /* no inc addr */
+		break;
+	default:
+		cmd = WR_CONFIRM;
+		break;
+	}
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, cmd);
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, val);
+}
+
+static void gfx_v10_0_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
+					uint32_t val, uint32_t mask)
+{
+	gfx_v10_0_wait_reg_mem(ring, 0, 0, 0, reg, 0, val, mask, 0x20);
+}
+
+static void
+gfx_v10_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
+				      uint32_t me, uint32_t pipe,
+				      enum amdgpu_interrupt_state state)
+{
+	uint32_t cp_int_cntl, cp_int_cntl_reg;
+
+	if (!me) {
+		switch (pipe) {
+		case 0:
+			cp_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0);
+			break;
+		case 1:
+			cp_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING1);
+			break;
+		default:
+			DRM_DEBUG("invalid pipe %d\n", pipe);
+			return;
+		}
+	} else {
+		DRM_DEBUG("invalid me %d\n", me);
+		return;
+	}
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		cp_int_cntl = RREG32(cp_int_cntl_reg);
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    TIME_STAMP_INT_ENABLE, 0);
+		WREG32(cp_int_cntl_reg, cp_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		cp_int_cntl = RREG32(cp_int_cntl_reg);
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    TIME_STAMP_INT_ENABLE, 1);
+		WREG32(cp_int_cntl_reg, cp_int_cntl);
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v10_0_set_compute_eop_interrupt_state(struct amdgpu_device *adev,
+						     int me, int pipe,
+						     enum amdgpu_interrupt_state state)
+{
+	u32 mec_int_cntl, mec_int_cntl_reg;
+
+	/*
+	 * amdgpu controls only the first MEC. That's why this function only
+	 * handles the setting of interrupts for this specific MEC. All other
+	 * pipes' interrupts are set by amdkfd.
+	 */
+
+	if (me == 1) {
+		switch (pipe) {
+		case 0:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE0_INT_CNTL);
+			break;
+		case 1:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE1_INT_CNTL);
+			break;
+		case 2:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE2_INT_CNTL);
+			break;
+		case 3:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE3_INT_CNTL);
+			break;
+		default:
+			DRM_DEBUG("invalid pipe %d\n", pipe);
+			return;
+		}
+	} else {
+		DRM_DEBUG("invalid me %d\n", me);
+		return;
+	}
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		mec_int_cntl = RREG32(mec_int_cntl_reg);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     TIME_STAMP_INT_ENABLE, 0);
+		WREG32(mec_int_cntl_reg, mec_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		mec_int_cntl = RREG32(mec_int_cntl_reg);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     TIME_STAMP_INT_ENABLE, 1);
+		WREG32(mec_int_cntl_reg, mec_int_cntl);
+		break;
+	default:
+		break;
+	}
+}
+
+static int gfx_v10_0_set_eop_interrupt_state(struct amdgpu_device *adev,
+					    struct amdgpu_irq_src *src,
+					    unsigned type,
+					    enum amdgpu_interrupt_state state)
+{
+	switch (type) {
+	case AMDGPU_CP_IRQ_GFX_ME0_PIPE0_EOP:
+		gfx_v10_0_set_gfx_eop_interrupt_state(adev, 0, 0, state);
+		break;
+	case AMDGPU_CP_IRQ_GFX_ME0_PIPE1_EOP:
+		gfx_v10_0_set_gfx_eop_interrupt_state(adev, 0, 1, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 1, 0, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE1_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 1, 1, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE2_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 1, 2, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE3_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 1, 3, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE0_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 2, 0, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE1_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 2, 1, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE2_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 2, 2, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE3_EOP:
+		gfx_v10_0_set_compute_eop_interrupt_state(adev, 2, 3, state);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v10_0_eop_irq(struct amdgpu_device *adev,
+			     struct amdgpu_irq_src *source,
+			     struct amdgpu_iv_entry *entry)
+{
+	int i;
+	u8 me_id, pipe_id, queue_id;
+	struct amdgpu_ring *ring;
+
+	DRM_DEBUG("IH: CP EOP\n");
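+	/* ring_id encodes the queue in bits [6:4], me in [3:2] and pipe in [1:0] */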
+	me_id = (entry->ring_id & 0x0c) >> 2;
+	pipe_id = (entry->ring_id & 0x03) >> 0;
+	queue_id = (entry->ring_id & 0x70) >> 4;
+
+	switch (me_id) {
+	case 0:
+		if (pipe_id == 0)
+			amdgpu_fence_process(&adev->gfx.gfx_ring[0]);
+		else
+			amdgpu_fence_process(&adev->gfx.gfx_ring[1]);
+		break;
+	case 1:
+	case 2:
+		for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+			ring = &adev->gfx.compute_ring[i];
+			/*
+			 * Per-queue interrupt is supported for MEC starting
+			 * from VI; the interrupt can only be enabled/disabled
+			 * per pipe instead of per queue.
+			 */
+			if ((ring->me == me_id) && (ring->pipe == pipe_id) && (ring->queue == queue_id))
+				amdgpu_fence_process(ring);
+		}
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v10_0_set_priv_reg_fault_state(struct amdgpu_device *adev,
+					      struct amdgpu_irq_src *source,
+					      unsigned type,
+					      enum amdgpu_interrupt_state state)
+{
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+	case AMDGPU_IRQ_STATE_ENABLE:
+		WREG32_FIELD15(GC, 0, CP_INT_CNTL_RING0,
+			       PRIV_REG_INT_ENABLE,
+			       state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int gfx_v10_0_set_priv_inst_fault_state(struct amdgpu_device *adev,
+					       struct amdgpu_irq_src *source,
+					       unsigned type,
+					       enum amdgpu_interrupt_state state)
+{
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+	case AMDGPU_IRQ_STATE_ENABLE:
+		WREG32_FIELD15(GC, 0, CP_INT_CNTL_RING0,
+			       PRIV_INSTR_INT_ENABLE,
+			       state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static void gfx_v10_0_handle_priv_fault(struct amdgpu_device *adev,
+					struct amdgpu_iv_entry *entry)
+{
+	u8 me_id, pipe_id, queue_id;
+	struct amdgpu_ring *ring;
+	int i;
+
+	me_id = (entry->ring_id & 0x0c) >> 2;
+	pipe_id = (entry->ring_id & 0x03) >> 0;
+	queue_id = (entry->ring_id & 0x70) >> 4;
+
+	switch (me_id) {
+	case 0:
+		for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+			ring = &adev->gfx.gfx_ring[i];
+			/* we only enabled 1 gfx queue per pipe for now */
+			if (ring->me == me_id && ring->pipe == pipe_id)
+				drm_sched_fault(&ring->sched);
+		}
+		break;
+	case 1:
+	case 2:
+		for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+			ring = &adev->gfx.compute_ring[i];
+			if (ring->me == me_id && ring->pipe == pipe_id &&
+			    ring->queue == queue_id)
+				drm_sched_fault(&ring->sched);
+		}
+		break;
+	default:
+		BUG();
+	}
+}
+
+static int gfx_v10_0_priv_reg_irq(struct amdgpu_device *adev,
+				  struct amdgpu_irq_src *source,
+				  struct amdgpu_iv_entry *entry)
+{
+	DRM_ERROR("Illegal register access in command stream\n");
+	gfx_v10_0_handle_priv_fault(adev, entry);
+	return 0;
+}
+
+static int gfx_v10_0_priv_inst_irq(struct amdgpu_device *adev,
+				   struct amdgpu_irq_src *source,
+				   struct amdgpu_iv_entry *entry)
+{
+	DRM_ERROR("Illegal instruction in command stream\n");
+	gfx_v10_0_handle_priv_fault(adev, entry);
+	return 0;
+}
+
+static int gfx_v10_0_kiq_set_interrupt_state(struct amdgpu_device *adev,
+					     struct amdgpu_irq_src *src,
+					     unsigned int type,
+					     enum amdgpu_interrupt_state state)
+{
+	uint32_t tmp, target;
+	struct amdgpu_ring *ring = &(adev->gfx.kiq.ring);
+
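+	/* pipe interrupt controls are contiguous, index from the ME's PIPE0 reg */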
+	if (ring->me == 1)
+		target = SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE0_INT_CNTL);
+	else
+		target = SOC15_REG_OFFSET(GC, 0, mmCP_ME2_PIPE0_INT_CNTL);
+	target += ring->pipe;
+
+	switch (type) {
+	case AMDGPU_CP_KIQ_IRQ_DRIVER0:
+		if (state == AMDGPU_IRQ_STATE_DISABLE) {
+			tmp = RREG32_SOC15(GC, 0, mmCPC_INT_CNTL);
+			tmp = REG_SET_FIELD(tmp, CPC_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 0);
+			WREG32_SOC15(GC, 0, mmCPC_INT_CNTL, tmp);
+
+			tmp = RREG32(target);
+			tmp = REG_SET_FIELD(tmp, CP_ME2_PIPE0_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 0);
+			WREG32(target, tmp);
+		} else {
+			tmp = RREG32_SOC15(GC, 0, mmCPC_INT_CNTL);
+			tmp = REG_SET_FIELD(tmp, CPC_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 1);
+			WREG32_SOC15(GC, 0, mmCPC_INT_CNTL, tmp);
+
+			tmp = RREG32(target);
+			tmp = REG_SET_FIELD(tmp, CP_ME2_PIPE0_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 1);
+			WREG32(target, tmp);
+		}
+		break;
+	default:
+		BUG(); /* KIQ only supports GENERIC2_INT now */
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v10_0_kiq_irq(struct amdgpu_device *adev,
+			     struct amdgpu_irq_src *source,
+			     struct amdgpu_iv_entry *entry)
+{
+	u8 me_id, pipe_id, queue_id;
+	struct amdgpu_ring *ring = &(adev->gfx.kiq.ring);
+
+	me_id = (entry->ring_id & 0x0c) >> 2;
+	pipe_id = (entry->ring_id & 0x03) >> 0;
+	queue_id = (entry->ring_id & 0x70) >> 4;
+	DRM_DEBUG("IH: CPC GENERIC2_INT, me:%d, pipe:%d, queue:%d\n",
+		   me_id, pipe_id, queue_id);
+
+	amdgpu_fence_process(ring);
+	return 0;
+}
+
+static const struct amd_ip_funcs gfx_v10_0_ip_funcs = {
+	.name = "gfx_v10_0",
+	.early_init = gfx_v10_0_early_init,
+	.late_init = gfx_v10_0_late_init,
+	.sw_init = gfx_v10_0_sw_init,
+	.sw_fini = gfx_v10_0_sw_fini,
+	.hw_init = gfx_v10_0_hw_init,
+	.hw_fini = gfx_v10_0_hw_fini,
+	.suspend = gfx_v10_0_suspend,
+	.resume = gfx_v10_0_resume,
+	.is_idle = gfx_v10_0_is_idle,
+	.wait_for_idle = gfx_v10_0_wait_for_idle,
+	.soft_reset = gfx_v10_0_soft_reset,
+	.set_clockgating_state = gfx_v10_0_set_clockgating_state,
+	.set_powergating_state = gfx_v10_0_set_powergating_state,
+	.get_clockgating_state = gfx_v10_0_get_clockgating_state,
+};
+
+static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_gfx = {
+	.type = AMDGPU_RING_TYPE_GFX,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.vmhub = AMDGPU_GFXHUB,
+	.get_rptr = gfx_v10_0_ring_get_rptr_gfx,
+	.get_wptr = gfx_v10_0_ring_get_wptr_gfx,
+	.set_wptr = gfx_v10_0_ring_set_wptr_gfx,
+	.emit_frame_size = /* 242 dwords maximum in total if 16 IBs */
+		5 + /* COND_EXEC */
+		7 + /* PIPELINE_SYNC */
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
+		2 + /* VM_FLUSH */
+		8 + /* FENCE for VM_FLUSH */
+		20 + /* GDS switch */
+		4 + /* double SWITCH_BUFFER,
+		     * the first COND_EXEC jump to the place
+		     * just prior to this double SWITCH_BUFFER
+		     */
+		5 + /* COND_EXEC */
+		7 + /* HDP_flush */
+		4 + /* VGT_flush */
+		14 + /*	CE_META */
+		31 + /*	DE_META */
+		3 + /* CNTX_CTRL */
+		5 + /* HDP_INVL */
+		8 + 8 + /* FENCE x2 */
+		2, /* SWITCH_BUFFER */
+	.emit_ib_size =	4, /* gfx_v10_0_ring_emit_ib_gfx */
+	.emit_ib = gfx_v10_0_ring_emit_ib_gfx,
+	.emit_fence = gfx_v10_0_ring_emit_fence,
+	.emit_pipeline_sync = gfx_v10_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = gfx_v10_0_ring_emit_vm_flush,
+	.emit_gds_switch = gfx_v10_0_ring_emit_gds_switch,
+	.emit_hdp_flush = gfx_v10_0_ring_emit_hdp_flush,
+	.test_ring = gfx_v10_0_ring_test_ring,
+	.test_ib = gfx_v10_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_switch_buffer = gfx_v10_0_ring_emit_sb,
+	.emit_cntxcntl = gfx_v10_0_ring_emit_cntxcntl,
+	.init_cond_exec = gfx_v10_0_ring_emit_init_cond_exec,
+	.patch_cond_exec = gfx_v10_0_ring_emit_patch_cond_exec,
+	.preempt_ib = gfx_v10_0_ring_preempt_ib,
+	.emit_tmz = gfx_v10_0_ring_emit_tmz,
+	.emit_wreg = gfx_v10_0_ring_emit_wreg,
+	.emit_reg_wait = gfx_v10_0_ring_emit_reg_wait,
+};
+
+static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_compute = {
+	.type = AMDGPU_RING_TYPE_COMPUTE,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.vmhub = AMDGPU_GFXHUB,
+	.get_rptr = gfx_v10_0_ring_get_rptr_compute,
+	.get_wptr = gfx_v10_0_ring_get_wptr_compute,
+	.set_wptr = gfx_v10_0_ring_set_wptr_compute,
+	.emit_frame_size =
+		20 + /* gfx_v10_0_ring_emit_gds_switch */
+		7 + /* gfx_v10_0_ring_emit_hdp_flush */
+		5 + /* hdp invalidate */
+		7 + /* gfx_v10_0_ring_emit_pipeline_sync */
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
+		2 + /* gfx_v10_0_ring_emit_vm_flush */
+		8 + 8 + 8, /* gfx_v10_0_ring_emit_fence x3 for user fence, vm fence */
+	.emit_ib_size =	4, /* gfx_v10_0_ring_emit_ib_compute */
+	.emit_ib = gfx_v10_0_ring_emit_ib_compute,
+	.emit_fence = gfx_v10_0_ring_emit_fence,
+	.emit_pipeline_sync = gfx_v10_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = gfx_v10_0_ring_emit_vm_flush,
+	.emit_gds_switch = gfx_v10_0_ring_emit_gds_switch,
+	.emit_hdp_flush = gfx_v10_0_ring_emit_hdp_flush,
+	.test_ring = gfx_v10_0_ring_test_ring,
+	.test_ib = gfx_v10_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_wreg = gfx_v10_0_ring_emit_wreg,
+	.emit_reg_wait = gfx_v10_0_ring_emit_reg_wait,
+};
+
+static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_kiq = {
+	.type = AMDGPU_RING_TYPE_KIQ,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.vmhub = AMDGPU_GFXHUB,
+	.get_rptr = gfx_v10_0_ring_get_rptr_compute,
+	.get_wptr = gfx_v10_0_ring_get_wptr_compute,
+	.set_wptr = gfx_v10_0_ring_set_wptr_compute,
+	.emit_frame_size =
+		20 + /* gfx_v10_0_ring_emit_gds_switch */
+		7 + /* gfx_v10_0_ring_emit_hdp_flush */
+		5 + /* hdp invalidate */
+		7 + /* gfx_v10_0_ring_emit_pipeline_sync */
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
+		2 + /* gfx_v10_0_ring_emit_vm_flush */
+		8 + 8 + 8, /* gfx_v10_0_ring_emit_fence_kiq x3 for user fence, vm fence */
+	.emit_ib_size =	4, /* gfx_v10_0_ring_emit_ib_compute */
+	.emit_ib = gfx_v10_0_ring_emit_ib_compute,
+	.emit_fence = gfx_v10_0_ring_emit_fence_kiq,
+	.test_ring = gfx_v10_0_ring_test_ring,
+	.test_ib = gfx_v10_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_rreg = gfx_v10_0_ring_emit_rreg,
+	.emit_wreg = gfx_v10_0_ring_emit_wreg,
+	.emit_reg_wait = gfx_v10_0_ring_emit_reg_wait,
+};
+
+static void gfx_v10_0_set_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	adev->gfx.kiq.ring.funcs = &gfx_v10_0_ring_funcs_kiq;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		adev->gfx.gfx_ring[i].funcs = &gfx_v10_0_ring_funcs_gfx;
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++)
+		adev->gfx.compute_ring[i].funcs = &gfx_v10_0_ring_funcs_compute;
+}
+
+static const struct amdgpu_irq_src_funcs gfx_v10_0_eop_irq_funcs = {
+	.set = gfx_v10_0_set_eop_interrupt_state,
+	.process = gfx_v10_0_eop_irq,
+};
+
+static const struct amdgpu_irq_src_funcs gfx_v10_0_priv_reg_irq_funcs = {
+	.set = gfx_v10_0_set_priv_reg_fault_state,
+	.process = gfx_v10_0_priv_reg_irq,
+};
+
+static const struct amdgpu_irq_src_funcs gfx_v10_0_priv_inst_irq_funcs = {
+	.set = gfx_v10_0_set_priv_inst_fault_state,
+	.process = gfx_v10_0_priv_inst_irq,
+};
+
+static const struct amdgpu_irq_src_funcs gfx_v10_0_kiq_irq_funcs = {
+	.set = gfx_v10_0_kiq_set_interrupt_state,
+	.process = gfx_v10_0_kiq_irq,
+};
+
+static void gfx_v10_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->gfx.eop_irq.num_types = AMDGPU_CP_IRQ_LAST;
+	adev->gfx.eop_irq.funcs = &gfx_v10_0_eop_irq_funcs;
+
+	adev->gfx.kiq.irq.num_types = AMDGPU_CP_KIQ_IRQ_LAST;
+	adev->gfx.kiq.irq.funcs = &gfx_v10_0_kiq_irq_funcs;
+
+	adev->gfx.priv_reg_irq.num_types = 1;
+	adev->gfx.priv_reg_irq.funcs = &gfx_v10_0_priv_reg_irq_funcs;
+
+	adev->gfx.priv_inst_irq.num_types = 1;
+	adev->gfx.priv_inst_irq.funcs = &gfx_v10_0_priv_inst_irq_funcs;
+}
+
+static void gfx_v10_0_set_rlc_funcs(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		adev->gfx.rlc.funcs = &gfx_v10_0_rlc_funcs;
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v10_0_set_gds_init(struct amdgpu_device *adev)
+{
+	/* init asic gds info */
+	adev->gds.mem.total_size = RREG32_SOC15(GC, 0, mmGDS_VMID0_SIZE);
+	adev->gds.gws.total_size = 64;
+	adev->gds.oa.total_size = 16;
+
+	if (adev->gds.mem.total_size == 64 * 1024) {
+		adev->gds.mem.gfx_partition_size = 4096;
+		adev->gds.mem.cs_partition_size = 4096;
+
+		adev->gds.gws.gfx_partition_size = 4;
+		adev->gds.gws.cs_partition_size = 4;
+
+		adev->gds.oa.gfx_partition_size = 4;
+		adev->gds.oa.cs_partition_size = 1;
+	} else {
+		adev->gds.mem.gfx_partition_size = 1024;
+		adev->gds.mem.cs_partition_size = 1024;
+
+		adev->gds.gws.gfx_partition_size = 16;
+		adev->gds.gws.cs_partition_size = 16;
+
+		adev->gds.oa.gfx_partition_size = 4;
+		adev->gds.oa.cs_partition_size = 4;
+	}
+}
+
+static void gfx_v10_0_set_user_wgp_inactive_bitmap_per_sh(struct amdgpu_device *adev,
+							  u32 bitmap)
+{
+	u32 data;
+
+	if (!bitmap)
+		return;
+
+	data = bitmap << GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_WGPS__SHIFT;
+	data &= GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_WGPS_MASK;
+
+	WREG32_SOC15(GC, 0, mmGC_USER_SHADER_ARRAY_CONFIG, data);
+}
+
+static u32 gfx_v10_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev)
+{
+	u32 data, wgp_bitmask;
+	data = RREG32_SOC15(GC, 0, mmCC_GC_SHADER_ARRAY_CONFIG);
+	data |= RREG32_SOC15(GC, 0, mmGC_USER_SHADER_ARRAY_CONFIG);
+
+	data &= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS_MASK;
+	data >>= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS__SHIFT;
+
+	wgp_bitmask =
+		amdgpu_gfx_create_bitmask(adev->gfx.config.max_cu_per_sh >> 1);
+
+	return (~data) & wgp_bitmask;
+}
+
+static u32 gfx_v10_0_get_cu_active_bitmap_per_sh(struct amdgpu_device *adev)
+{
+	u32 wgp_idx, wgp_active_bitmap;
+	u32 cu_bitmap_per_wgp, cu_active_bitmap;
+
+	wgp_active_bitmap = gfx_v10_0_get_wgp_active_bitmap_per_sh(adev);
+	cu_active_bitmap = 0;
+
+	for (wgp_idx = 0; wgp_idx < 16; wgp_idx++) {
+		/* if there is one WGP enabled, it means 2 CUs will be enabled */
+		cu_bitmap_per_wgp = 3 << (2 * wgp_idx);
+		if (wgp_active_bitmap & (1 << wgp_idx))
+			cu_active_bitmap |= cu_bitmap_per_wgp;
+	}
+
+	return cu_active_bitmap;
+}
+
+static int gfx_v10_0_get_cu_info(struct amdgpu_device *adev,
+				 struct amdgpu_cu_info *cu_info)
+{
+	int i, j, k, counter, active_cu_number = 0;
+	u32 mask, bitmap, ao_bitmap, ao_cu_mask = 0;
+	unsigned disable_masks[4 * 2];
+
+	if (!adev || !cu_info)
+		return -EINVAL;
+
+	amdgpu_gfx_parse_disable_cu(disable_masks, 4, 2);
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+			mask = 1;
+			ao_bitmap = 0;
+			counter = 0;
+			gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff);
+			if (i < 4 && j < 2)
+				gfx_v10_0_set_user_wgp_inactive_bitmap_per_sh(
+					adev, disable_masks[i * 2 + j]);
+			bitmap = gfx_v10_0_get_cu_active_bitmap_per_sh(adev);
+			cu_info->bitmap[i][j] = bitmap;
+
+			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
+				if (bitmap & mask) {
+					if (counter < adev->gfx.config.max_cu_per_sh)
+						ao_bitmap |= mask;
+					counter++;
+				}
+				mask <<= 1;
+			}
+			active_cu_number += counter;
+			if (i < 2 && j < 2)
+				ao_cu_mask |= (ao_bitmap << (i * 16 + j * 8));
+			cu_info->ao_cu_bitmap[i][j] = ao_bitmap;
+		}
+	}
+	gfx_v10_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	cu_info->number = active_cu_number;
+	cu_info->ao_cu_mask = ao_cu_mask;
+	cu_info->simd_per_cu = NUM_SIMD_PER_CU;
+
+	return 0;
+}
+
+const struct amdgpu_ip_block_version gfx_v10_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_GFX,
+	.major = 10,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &gfx_v10_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.h b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.h
new file mode 100644
index 000000000000..b442e50324d0
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __GFX_V10_0_H__
+#define __GFX_V10_0_H__
+
+extern const struct amdgpu_ip_block_version gfx_v10_0_ip_block;
+
+#endif
-- 
2.20.1
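
A hedged aside on the CU-reporting logic above (illustration only, not
part of the patch): on gfx10, compute units are paired into work-group
processors, so gfx_v10_0_get_cu_active_bitmap_per_sh() expands each
active WGP bit into two adjacent CU bits. A standalone sketch of that
expansion:

/* Standalone sketch; u32 stands in for the kernel's u32 typedef. */
static u32 wgp_to_cu_bitmap(u32 wgp_active_bitmap)
{
	u32 cu_active_bitmap = 0;
	int wgp_idx;

	for (wgp_idx = 0; wgp_idx < 16; wgp_idx++) {
		/* one active WGP means two active CUs */
		if (wgp_active_bitmap & (1u << wgp_idx))
			cu_active_bitmap |= 3u << (2 * wgp_idx);
	}
	return cu_active_bitmap;
}

For example, a WGP bitmap of 0b0101 (WGPs 0 and 2 active) yields a CU
bitmap of 0b00110011 (CUs 0, 1, 4 and 5 active).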

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 124/459] drm/amdgpu: avoid using SOC15_REG_OFFSET in static arrays for navi10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (22 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 123/459] drm/amdgpu: add gfx v10 implementation (v8) Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 125/459] drm/amdgpu: add navi10 common ip block (v3) Alex Deucher
                     ` (68 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

Move struct soc15_allowed_register_entry from soc15.c into soc15.h so
that navi10 code can build its own allowed-register lists with
SOC15_REG_ENTRY() instead of resolving SOC15_REG_OFFSET() inside a
static array initializer.

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 9 ---------
 drivers/gpu/drm/amd/amdgpu/soc15.h | 8 ++++++++
 2 files changed, 8 insertions(+), 9 deletions(-)
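
For illustration only (not part of the patch): SOC15_REG_OFFSET() needs
a live adev to look up the IP block's register base, so it cannot appear
in a static initializer. SOC15_REG_ENTRY() instead records the lookup
coordinates, and SOC15_REG_ENTRY_OFFSET() resolves them at run time. A
minimal sketch of that resolution, written out as a function (the macro
form is in the soc15.h hunk below):

static uint32_t resolve_entry(struct amdgpu_device *adev,
			      const struct soc15_allowed_register_entry *e)
{
	/* same arithmetic as SOC15_REG_ENTRY_OFFSET() below */
	return adev->reg_offset[e->hwip][e->inst][e->seg] + e->reg_offset;
}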

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index d9fdd95fd6e6..3fbc3cd849ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -274,15 +274,6 @@ static bool soc15_read_bios_from_rom(struct amdgpu_device *adev,
 	return true;
 }
 
-struct soc15_allowed_register_entry {
-	uint32_t hwip;
-	uint32_t inst;
-	uint32_t seg;
-	uint32_t reg_offset;
-	bool grbm_indexed;
-};
-
-
 static struct soc15_allowed_register_entry soc15_allowed_read_registers[] = {
 	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS)},
 	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS2)},
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.h b/drivers/gpu/drm/amd/amdgpu/soc15.h
index 48e824d52ad9..7a6b2cc6d9f5 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.h
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.h
@@ -52,6 +52,14 @@ struct soc15_reg_entry {
 	uint32_t instance;
 };
 
+struct soc15_allowed_register_entry {
+	uint32_t hwip;
+	uint32_t inst;
+	uint32_t seg;
+	uint32_t reg_offset;
+	bool grbm_indexed;
+};
+
 #define SOC15_REG_ENTRY(ip, inst, reg)	ip##_HWIP, inst, reg##_BASE_IDX, reg
 
 #define SOC15_REG_ENTRY_OFFSET(entry)	(adev->reg_offset[entry.hwip][entry.inst][entry.seg] + entry.reg_offset)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 125/459] drm/amdgpu: add navi10 common ip block (v3)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (23 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 124/459] drm/amdgpu: avoid using SOC15_REG_OFFSET in static arrays for navi10 Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 126/459] drm/amdgpu: Add navi10 kfd support for amdgpu (v3) Alex Deucher
                     ` (67 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

This adds the core SOC code for navi asics.

v1: add placeholder and initial basic functions (Ray)
v2: add newly introduced functions to avoid referencing a
    NULL pointer (Hawking)
v3: squash in updates (Alex)

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile          |   2 +-
 drivers/gpu/drm/amd/amdgpu/navi10_reg_init.c |  66 ++
 drivers/gpu/drm/amd/amdgpu/nv.c              | 777 +++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/nv.h              |  33 +
 4 files changed, 877 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/navi10_reg_init.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/nv.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/nv.h
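
One pattern worth noting before the diff (a hedged sketch, not part of
the patch): nv_pcie_rreg()/nv_pcie_wreg() below use a classic index/data
indirect access. The target register number is written into an index
register and the value moves through a paired data register, all under a
spinlock so the two-step sequence stays atomic:

static u32 indirect_read(struct amdgpu_device *adev, u32 index, u32 data,
			 u32 reg)
{
	unsigned long flags;
	u32 v;

	spin_lock_irqsave(&adev->pcie_idx_lock, flags);
	WREG32(index, reg);	/* select the target register */
	(void)RREG32(index);	/* read back to post the write */
	v = RREG32(data);	/* fetch the selected register's value */
	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
	return v;
}

The read-back of the index register before touching the data register
posts the write before the data access; the write path in nv_pcie_wreg()
below does the same on both registers.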

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 49479c93fab0..205ea6a1d893 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -64,7 +64,7 @@ amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce
 
 amdgpu-y += \
 	vi.o mxgpu_vi.o nbio_v6_1.o soc15.o emu_soc.o mxgpu_ai.o nbio_v7_0.o vega10_reg_init.o \
-	vega20_reg_init.o nbio_v7_4.o nbio_v2_3.o
+	vega20_reg_init.o nbio_v7_4.o nbio_v2_3.o nv.o navi10_reg_init.o
 
 # add DF block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/navi10_reg_init.c b/drivers/gpu/drm/amd/amdgpu/navi10_reg_init.c
new file mode 100644
index 000000000000..8cd4568c07ee
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/navi10_reg_init.c
@@ -0,0 +1,66 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "amdgpu.h"
+#include "nv.h"
+
+#include "soc15_common.h"
+#include "soc15_hw_ip.h"
+#include "navi10_ip_offset.h"
+
+int navi10_reg_base_init(struct amdgpu_device *adev)
+{
+	int r, i;
+
+	if (amdgpu_discovery) {
+		r = amdgpu_discovery_reg_base_init(adev);
+		if (r) {
+			DRM_WARN("failed to init reg base from ip discovery table, "
+					"fallback to legacy init method\n");
+			goto legacy_init;
+		}
+
+		return 0;
+	}
+
+legacy_init:
+	for (i = 0 ; i < MAX_INSTANCE ; ++i) {
+		adev->reg_offset[GC_HWIP][i] = (uint32_t *)(&(GC_BASE.instance[i]));
+		adev->reg_offset[HDP_HWIP][i] = (uint32_t *)(&(HDP_BASE.instance[i]));
+		adev->reg_offset[MMHUB_HWIP][i] = (uint32_t *)(&(MMHUB_BASE.instance[i]));
+		adev->reg_offset[ATHUB_HWIP][i] = (uint32_t *)(&(ATHUB_BASE.instance[i]));
+		adev->reg_offset[NBIO_HWIP][i] = (uint32_t *)(&(NBIO_BASE.instance[i]));
+		adev->reg_offset[MP0_HWIP][i] = (uint32_t *)(&(MP0_BASE.instance[i]));
+		adev->reg_offset[MP1_HWIP][i] = (uint32_t *)(&(MP1_BASE.instance[i]));
+		adev->reg_offset[VCN_HWIP][i] = (uint32_t *)(&(VCN_BASE.instance[i]));
+		adev->reg_offset[DF_HWIP][i] = (uint32_t *)(&(DF_BASE.instance[i]));
+		adev->reg_offset[DCE_HWIP][i] = (uint32_t *)(&(DCN_BASE.instance[i]));
+		adev->reg_offset[OSSSYS_HWIP][i] = (uint32_t *)(&(OSSSYS_BASE.instance[i]));
+		adev->reg_offset[SDMA0_HWIP][i] = (uint32_t *)(&(GC_BASE.instance[i]));
+		adev->reg_offset[SDMA1_HWIP][i] = (uint32_t *)(&(GC_BASE.instance[i]));
+		adev->reg_offset[SMUIO_HWIP][i] = (uint32_t *)(&(SMUIO_BASE.instance[i]));
+	}
+
+	return 0;
+}
+
+
diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
new file mode 100644
index 000000000000..a0d19b9d329c
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/nv.c
@@ -0,0 +1,777 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/firmware.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_atombios.h"
+#include "amdgpu_ih.h"
+#include "amdgpu_uvd.h"
+#include "amdgpu_vce.h"
+#include "amdgpu_ucode.h"
+#include "amdgpu_psp.h"
+#include "atom.h"
+#include "amd_pcie.h"
+
+#include "gc/gc_10_1_0_offset.h"
+#include "gc/gc_10_1_0_sh_mask.h"
+#include "hdp/hdp_5_0_0_offset.h"
+#include "hdp/hdp_5_0_0_sh_mask.h"
+
+#include "soc15.h"
+#include "soc15_common.h"
+#include "gmc_v10_0.h"
+#include "gfxhub_v2_0.h"
+#include "mmhub_v2_0.h"
+#include "nv.h"
+#include "navi10_ih.h"
+#include "gfx_v10_0.h"
+#include "sdma_v5_0.h"
+#include "vcn_v2_0.h"
+#include "dce_virtual.h"
+#include "mes_v10_1.h"
+
+static const struct amd_ip_funcs nv_common_ip_funcs;
+
+/*
+ * Indirect register accessors
+ */
+static u32 nv_pcie_rreg(struct amdgpu_device *adev, u32 reg)
+{
+	unsigned long flags, address, data;
+	u32 r;
+	address = adev->nbio_funcs->get_pcie_index_offset(adev);
+	data = adev->nbio_funcs->get_pcie_data_offset(adev);
+
+	spin_lock_irqsave(&adev->pcie_idx_lock, flags);
+	WREG32(address, reg);
+	(void)RREG32(address);
+	r = RREG32(data);
+	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
+	return r;
+}
+
+static void nv_pcie_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
+{
+	unsigned long flags, address, data;
+
+	address = adev->nbio_funcs->get_pcie_index_offset(adev);
+	data = adev->nbio_funcs->get_pcie_data_offset(adev);
+
+	spin_lock_irqsave(&adev->pcie_idx_lock, flags);
+	WREG32(address, reg);
+	(void)RREG32(address);
+	WREG32(data, v);
+	(void)RREG32(data);
+	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
+}
+
+static u32 nv_didt_rreg(struct amdgpu_device *adev, u32 reg)
+{
+	unsigned long flags, address, data;
+	u32 r;
+
+	address = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_INDEX);
+	data = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_DATA);
+
+	spin_lock_irqsave(&adev->didt_idx_lock, flags);
+	WREG32(address, (reg));
+	r = RREG32(data);
+	spin_unlock_irqrestore(&adev->didt_idx_lock, flags);
+	return r;
+}
+
+static void nv_didt_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
+{
+	unsigned long flags, address, data;
+
+	address = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_INDEX);
+	data = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_DATA);
+
+	spin_lock_irqsave(&adev->didt_idx_lock, flags);
+	WREG32(address, (reg));
+	WREG32(data, (v));
+	spin_unlock_irqrestore(&adev->didt_idx_lock, flags);
+}
+
+static u32 nv_get_config_memsize(struct amdgpu_device *adev)
+{
+	return adev->nbio_funcs->get_memsize(adev);
+}
+
+static u32 nv_get_xclk(struct amdgpu_device *adev)
+{
+	return adev->clock.spll.reference_freq / 4;
+}
+
+
+void nv_grbm_select(struct amdgpu_device *adev,
+		     u32 me, u32 pipe, u32 queue, u32 vmid)
+{
+	u32 grbm_gfx_cntl = 0;
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, PIPEID, pipe);
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, MEID, me);
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, VMID, vmid);
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, QUEUEID, queue);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_GFX_CNTL), grbm_gfx_cntl);
+}
+
+static void nv_vga_set_state(struct amdgpu_device *adev, bool state)
+{
+	/* todo */
+}
+
+static bool nv_read_disabled_bios(struct amdgpu_device *adev)
+{
+	/* todo */
+	return false;
+}
+
+static bool nv_read_bios_from_rom(struct amdgpu_device *adev,
+				  u8 *bios, u32 length_bytes)
+{
+	/* TODO: will implement it when SMU header is available */
+	return false;
+}
+
+static struct soc15_allowed_register_entry nv_allowed_read_registers[] = {
+	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS)},
+	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS2)},
+	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS_SE0)},
+	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS_SE1)},
+	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS_SE2)},
+	{ SOC15_REG_ENTRY(GC, 0, mmGRBM_STATUS_SE3)},
+#if 0	/* TODO: will set it when SDMA header is available */
+	{ SOC15_REG_ENTRY(SDMA0, 0, mmSDMA0_STATUS_REG)},
+	{ SOC15_REG_ENTRY(SDMA1, 0, mmSDMA1_STATUS_REG)},
+#endif
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_STAT)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_STALLED_STAT1)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_STALLED_STAT2)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_STALLED_STAT3)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_CPF_BUSY_STAT)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_CPF_STALLED_STAT1)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_CPF_STATUS)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_CPC_STALLED_STAT1)},
+	{ SOC15_REG_ENTRY(GC, 0, mmCP_CPC_STATUS)},
+	{ SOC15_REG_ENTRY(GC, 0, mmGB_ADDR_CONFIG)},
+};
+
+static uint32_t nv_read_indexed_register(struct amdgpu_device *adev, u32 se_num,
+					 u32 sh_num, u32 reg_offset)
+{
+	uint32_t val;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	if (se_num != 0xffffffff || sh_num != 0xffffffff)
+		amdgpu_gfx_select_se_sh(adev, se_num, sh_num, 0xffffffff);
+
+	val = RREG32(reg_offset);
+
+	if (se_num != 0xffffffff || sh_num != 0xffffffff)
+		amdgpu_gfx_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+	return val;
+}
+
+static uint32_t nv_get_register_value(struct amdgpu_device *adev,
+				      bool indexed, u32 se_num,
+				      u32 sh_num, u32 reg_offset)
+{
+	if (indexed) {
+		return nv_read_indexed_register(adev, se_num, sh_num, reg_offset);
+	} else {
+		if (reg_offset == SOC15_REG_OFFSET(GC, 0, mmGB_ADDR_CONFIG))
+			return adev->gfx.config.gb_addr_config;
+		return RREG32(reg_offset);
+	}
+}
+
+static int nv_read_register(struct amdgpu_device *adev, u32 se_num,
+			    u32 sh_num, u32 reg_offset, u32 *value)
+{
+	uint32_t i;
+	struct soc15_allowed_register_entry *en;
+
+	*value = 0;
+	for (i = 0; i < ARRAY_SIZE(nv_allowed_read_registers); i++) {
+		en = &nv_allowed_read_registers[i];
+		if (reg_offset !=
+		    (adev->reg_offset[en->hwip][en->inst][en->seg] + en->reg_offset))
+			continue;
+
+		*value = nv_get_register_value(adev,
+					       nv_allowed_read_registers[i].grbm_indexed,
+					       se_num, sh_num, reg_offset);
+		return 0;
+	}
+	return -EINVAL;
+}
+
+#if 0
+static void nv_gpu_pci_config_reset(struct amdgpu_device *adev)
+{
+	u32 i;
+
+	dev_info(adev->dev, "GPU pci config reset\n");
+
+	/* disable BM */
+	pci_clear_master(adev->pdev);
+	/* reset */
+	amdgpu_pci_config_reset(adev);
+
+	udelay(100);
+
+	/* wait for asic to come out of reset */
+	for (i = 0; i < adev->usec_timeout; i++) {
+		u32 memsize = nbio_v2_3_get_memsize(adev);
+		if (memsize != 0xffffffff)
+			break;
+		udelay(1);
+	}
+
+}
+#endif
+
+static int nv_asic_reset(struct amdgpu_device *adev)
+{
+
+	/* FIXME: it doesn't work since vega10 */
+#if 0
+	amdgpu_atombios_scratch_regs_engine_hung(adev, true);
+
+	nv_gpu_pci_config_reset(adev);
+
+	amdgpu_atombios_scratch_regs_engine_hung(adev, false);
+#endif
+
+	return 0;
+}
+
+static int nv_set_uvd_clocks(struct amdgpu_device *adev, u32 vclk, u32 dclk)
+{
+	/* todo */
+	return 0;
+}
+
+static int nv_set_vce_clocks(struct amdgpu_device *adev, u32 evclk, u32 ecclk)
+{
+	/* todo */
+	return 0;
+}
+
+static void nv_pcie_gen3_enable(struct amdgpu_device *adev)
+{
+	if (pci_is_root_bus(adev->pdev->bus))
+		return;
+
+	if (amdgpu_pcie_gen2 == 0)
+		return;
+
+	if (!(adev->pm.pcie_gen_mask & (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 |
+					CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)))
+		return;
+
+	/* todo */
+}
+
+static void nv_program_aspm(struct amdgpu_device *adev)
+{
+
+	if (amdgpu_aspm == 0)
+		return;
+
+	/* todo */
+}
+
+static void nv_enable_doorbell_aperture(struct amdgpu_device *adev,
+					bool enable)
+{
+	adev->nbio_funcs->enable_doorbell_aperture(adev, enable);
+	adev->nbio_funcs->enable_doorbell_selfring_aperture(adev, enable);
+}
+
+static const struct amdgpu_ip_block_version nv_common_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_COMMON,
+	.major = 1,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &nv_common_ip_funcs,
+};
+
+int nv_set_ip_blocks(struct amdgpu_device *adev)
+{
+	/* Set IP register base before any HW register access */
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		navi10_reg_base_init(adev);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	adev->nbio_funcs = &nbio_v2_3_funcs;
+
+	adev->nbio_funcs->detect_hw_virt(adev);
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		amdgpu_device_ip_block_add(adev, &nv_common_ip_block);
+		amdgpu_device_ip_block_add(adev, &gmc_v10_0_ip_block);
+		amdgpu_device_ip_block_add(adev, &navi10_ih_ip_block);
+		amdgpu_device_ip_block_add(adev, &psp_v11_0_ip_block);
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP &&
+		    is_support_sw_smu(adev))
+			amdgpu_device_ip_block_add(adev, &smu_v11_0_ip_block);
+		if (adev->enable_virtual_display || amdgpu_sriov_vf(adev))
+			amdgpu_device_ip_block_add(adev, &dce_virtual_ip_block);
+		amdgpu_device_ip_block_add(adev, &gfx_v10_0_ip_block);
+		amdgpu_device_ip_block_add(adev, &sdma_v5_0_ip_block);
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT &&
+		    is_support_sw_smu(adev))
+			amdgpu_device_ip_block_add(adev, &smu_v11_0_ip_block);
+		amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block);
+		if (adev->enable_mes)
+			amdgpu_device_ip_block_add(adev, &mes_v10_1_ip_block);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static uint32_t nv_get_rev_id(struct amdgpu_device *adev)
+{
+	return adev->nbio_funcs->get_rev_id(adev);
+}
+
+static void nv_flush_hdp(struct amdgpu_device *adev, struct amdgpu_ring *ring)
+{
+	adev->nbio_funcs->hdp_flush(adev, ring);
+}
+
+static void nv_invalidate_hdp(struct amdgpu_device *adev,
+				struct amdgpu_ring *ring)
+{
+	if (!ring || !ring->funcs->emit_wreg) {
+		WREG32_SOC15_NO_KIQ(NBIO, 0, mmHDP_READ_CACHE_INVALIDATE, 1);
+	} else {
+		amdgpu_ring_emit_wreg(ring, SOC15_REG_OFFSET(
+					HDP, 0, mmHDP_READ_CACHE_INVALIDATE), 1);
+	}
+}
+
+static bool nv_need_full_reset(struct amdgpu_device *adev)
+{
+	return true;
+}
+
+static void nv_get_pcie_usage(struct amdgpu_device *adev,
+			      uint64_t *count0,
+			      uint64_t *count1)
+{
+	/*TODO*/
+}
+
+static bool nv_need_reset_on_init(struct amdgpu_device *adev)
+{
+#if 0
+	u32 sol_reg;
+
+	if (adev->flags & AMD_IS_APU)
+		return false;
+
+	/* Check sOS sign of life register to confirm sys driver and sOS
+	 * are already been loaded.
+	 */
+	sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
+	if (sol_reg)
+		return true;
+#endif
+	/* TODO: re-enable it when mode1 reset is functional */
+	return false;
+}
+
+static void nv_init_doorbell_index(struct amdgpu_device *adev)
+{
+	adev->doorbell_index.kiq = AMDGPU_NAVI10_DOORBELL_KIQ;
+	adev->doorbell_index.mec_ring0 = AMDGPU_NAVI10_DOORBELL_MEC_RING0;
+	adev->doorbell_index.mec_ring1 = AMDGPU_NAVI10_DOORBELL_MEC_RING1;
+	adev->doorbell_index.mec_ring2 = AMDGPU_NAVI10_DOORBELL_MEC_RING2;
+	adev->doorbell_index.mec_ring3 = AMDGPU_NAVI10_DOORBELL_MEC_RING3;
+	adev->doorbell_index.mec_ring4 = AMDGPU_NAVI10_DOORBELL_MEC_RING4;
+	adev->doorbell_index.mec_ring5 = AMDGPU_NAVI10_DOORBELL_MEC_RING5;
+	adev->doorbell_index.mec_ring6 = AMDGPU_NAVI10_DOORBELL_MEC_RING6;
+	adev->doorbell_index.mec_ring7 = AMDGPU_NAVI10_DOORBELL_MEC_RING7;
+	adev->doorbell_index.userqueue_start = AMDGPU_NAVI10_DOORBELL_USERQUEUE_START;
+	adev->doorbell_index.userqueue_end = AMDGPU_NAVI10_DOORBELL_USERQUEUE_END;
+	adev->doorbell_index.gfx_ring0 = AMDGPU_NAVI10_DOORBELL_GFX_RING0;
+	adev->doorbell_index.gfx_ring1 = AMDGPU_NAVI10_DOORBELL_GFX_RING1;
+	adev->doorbell_index.sdma_engine[0] = AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE0;
+	adev->doorbell_index.sdma_engine[1] = AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE1;
+	adev->doorbell_index.ih = AMDGPU_NAVI10_DOORBELL_IH;
+	adev->doorbell_index.vcn.vcn_ring0_1 = AMDGPU_NAVI10_DOORBELL64_VCN0_1;
+	adev->doorbell_index.vcn.vcn_ring2_3 = AMDGPU_NAVI10_DOORBELL64_VCN2_3;
+	adev->doorbell_index.vcn.vcn_ring4_5 = AMDGPU_NAVI10_DOORBELL64_VCN4_5;
+	adev->doorbell_index.vcn.vcn_ring6_7 = AMDGPU_NAVI10_DOORBELL64_VCN6_7;
+	adev->doorbell_index.first_non_cp = AMDGPU_NAVI10_DOORBELL64_FIRST_NON_CP;
+	adev->doorbell_index.last_non_cp = AMDGPU_NAVI10_DOORBELL64_LAST_NON_CP;
+
+	adev->doorbell_index.max_assignment = AMDGPU_NAVI10_DOORBELL_MAX_ASSIGNMENT << 1;
+	adev->doorbell_index.sdma_doorbell_range = 20;
+}
+
+static const struct amdgpu_asic_funcs nv_asic_funcs =
+{
+	.read_disabled_bios = &nv_read_disabled_bios,
+	.read_bios_from_rom = &nv_read_bios_from_rom,
+	.read_register = &nv_read_register,
+	.reset = &nv_asic_reset,
+	.set_vga_state = &nv_vga_set_state,
+	.get_xclk = &nv_get_xclk,
+	.set_uvd_clocks = &nv_set_uvd_clocks,
+	.set_vce_clocks = &nv_set_vce_clocks,
+	.get_config_memsize = &nv_get_config_memsize,
+	.flush_hdp = &nv_flush_hdp,
+	.invalidate_hdp = &nv_invalidate_hdp,
+	.init_doorbell_index = &nv_init_doorbell_index,
+	.need_full_reset = &nv_need_full_reset,
+	.get_pcie_usage = &nv_get_pcie_usage,
+	.need_reset_on_init = &nv_need_reset_on_init,
+};
+
+static int nv_common_early_init(void *handle)
+{
+	bool psp_enabled = false;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->smc_rreg = NULL;
+	adev->smc_wreg = NULL;
+	adev->pcie_rreg = &nv_pcie_rreg;
+	adev->pcie_wreg = &nv_pcie_wreg;
+
+	/* TODO: will add them during VCN v2 implementation */
+	adev->uvd_ctx_rreg = NULL;
+	adev->uvd_ctx_wreg = NULL;
+
+	adev->didt_rreg = &nv_didt_rreg;
+	adev->didt_wreg = &nv_didt_wreg;
+
+	adev->asic_funcs = &nv_asic_funcs;
+
+	if (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_PSP) &&
+	    (amdgpu_ip_block_mask & (1 << AMD_IP_BLOCK_TYPE_PSP)))
+		psp_enabled = true;
+
+	adev->rev_id = nv_get_rev_id(adev);
+	adev->external_rev_id = 0xff;
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
+			AMD_CG_SUPPORT_GFX_MGLS |
+			AMD_CG_SUPPORT_GFX_RLC_LS |
+			AMD_CG_SUPPORT_GFX_CP_LS |
+			AMD_CG_SUPPORT_GFX_CGLS |
+			AMD_CG_SUPPORT_GFX_CGCG |
+			AMD_CG_SUPPORT_IH_CG |
+			AMD_CG_SUPPORT_HDP_MGCG |
+			AMD_CG_SUPPORT_HDP_LS |
+			AMD_CG_SUPPORT_SDMA_MGCG |
+			AMD_CG_SUPPORT_SDMA_LS |
+			AMD_CG_SUPPORT_MC_MGCG |
+			AMD_CG_SUPPORT_MC_LS |
+			AMD_CG_SUPPORT_ATHUB_MGCG |
+			AMD_CG_SUPPORT_ATHUB_LS |
+			AMD_CG_SUPPORT_VCN_MGCG |
+			AMD_CG_SUPPORT_BIF_MGCG |
+			AMD_CG_SUPPORT_BIF_LS;
+		adev->pg_flags = 0;
+		adev->external_rev_id = adev->rev_id + 0x1;
+		break;
+	default:
+		/* FIXME: not supported yet */
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int nv_common_late_init(void *handle)
+{
+	return 0;
+}
+
+static int nv_common_sw_init(void *handle)
+{
+	return 0;
+}
+
+static int nv_common_sw_fini(void *handle)
+{
+	return 0;
+}
+
+static int nv_common_hw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* enable pcie gen2/3 link */
+	nv_pcie_gen3_enable(adev);
+	/* enable aspm */
+	nv_program_aspm(adev);
+	/* setup nbio registers */
+	adev->nbio_funcs->init_registers(adev);
+	/* enable the doorbell aperture */
+	nv_enable_doorbell_aperture(adev, true);
+
+	return 0;
+}
+
+static int nv_common_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* disable the doorbell aperture */
+	nv_enable_doorbell_aperture(adev, false);
+
+	return 0;
+}
+
+static int nv_common_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return nv_common_hw_fini(adev);
+}
+
+static int nv_common_resume(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return nv_common_hw_init(adev);
+}
+
+static bool nv_common_is_idle(void *handle)
+{
+	return true;
+}
+
+static int nv_common_wait_for_idle(void *handle)
+{
+	return 0;
+}
+
+static int nv_common_soft_reset(void *handle)
+{
+	return 0;
+}
+
+static void nv_update_hdp_mem_power_gating(struct amdgpu_device *adev,
+					   bool enable)
+{
+	uint32_t hdp_clk_cntl, hdp_clk_cntl1;
+	uint32_t hdp_mem_pwr_cntl;
+
+	if (!(adev->cg_flags & (AMD_CG_SUPPORT_HDP_LS |
+				AMD_CG_SUPPORT_HDP_DS |
+				AMD_CG_SUPPORT_HDP_SD)))
+		return;
+
+	hdp_clk_cntl = hdp_clk_cntl1 = RREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL);
+	hdp_mem_pwr_cntl = RREG32_SOC15(HDP, 0, mmHDP_MEM_POWER_CTRL);
+
+	/* Before doing a clock/power mode switch,
+	 * force on the IPH & RC clocks */
+	hdp_clk_cntl = REG_SET_FIELD(hdp_clk_cntl, HDP_CLK_CNTL,
+				     IPH_MEM_CLK_SOFT_OVERRIDE, 1);
+	hdp_clk_cntl = REG_SET_FIELD(hdp_clk_cntl, HDP_CLK_CNTL,
+				     RC_MEM_CLK_SOFT_OVERRIDE, 1);
+	WREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL, hdp_clk_cntl);
+
+	/* HDP 5.0 doesn't support a dynamic power mode switch;
+	 * disable clock and power gating before making any change */
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 IPH_MEM_POWER_CTRL_EN, 0);
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 IPH_MEM_POWER_LS_EN, 0);
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 IPH_MEM_POWER_DS_EN, 0);
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 IPH_MEM_POWER_SD_EN, 0);
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 RC_MEM_POWER_CTRL_EN, 0);
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 RC_MEM_POWER_LS_EN, 0);
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 RC_MEM_POWER_DS_EN, 0);
+	hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl, HDP_MEM_POWER_CTRL,
+					 RC_MEM_POWER_SD_EN, 0);
+	WREG32_SOC15(HDP, 0, mmHDP_MEM_POWER_CTRL, hdp_mem_pwr_cntl);
+
+	/* only one clock gating mode (LS/DS/SD) can be enabled */
+	if (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS) {
+		hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+						 HDP_MEM_POWER_CTRL,
+						 IPH_MEM_POWER_LS_EN, enable);
+		hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+						 HDP_MEM_POWER_CTRL,
+						 RC_MEM_POWER_LS_EN, enable);
+	} else if (adev->cg_flags & AMD_CG_SUPPORT_HDP_DS) {
+		hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+						 HDP_MEM_POWER_CTRL,
+						 IPH_MEM_POWER_DS_EN, enable);
+		hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+						 HDP_MEM_POWER_CTRL,
+						 RC_MEM_POWER_DS_EN, enable);
+	} else if (adev->cg_flags & AMD_CG_SUPPORT_HDP_SD) {
+		hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+						 HDP_MEM_POWER_CTRL,
+						 IPH_MEM_POWER_SD_EN, enable);
+		/* RC should not use shutdown mode, fall back to DS */
+		hdp_mem_pwr_cntl = REG_SET_FIELD(hdp_mem_pwr_cntl,
+						 HDP_MEM_POWER_CTRL,
+						 RC_MEM_POWER_DS_EN, enable);
+	}
+
+	WREG32_SOC15(HDP, 0, mmHDP_MEM_POWER_CTRL, hdp_mem_pwr_cntl);
+
+	/* restore IPH & RC clock override after clock/power mode changing */
+	WREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL, hdp_clk_cntl1);
+}
+
+static void nv_update_hdp_clock_gating(struct amdgpu_device *adev,
+				       bool enable)
+{
+	uint32_t hdp_clk_cntl;
+
+	if (!(adev->cg_flags & AMD_CG_SUPPORT_HDP_MGCG))
+		return;
+
+	hdp_clk_cntl = RREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL);
+
+	if (enable) {
+		hdp_clk_cntl &=
+			~(uint32_t)
+			  (HDP_CLK_CNTL__IPH_MEM_CLK_SOFT_OVERRIDE_MASK |
+			   HDP_CLK_CNTL__RC_MEM_CLK_SOFT_OVERRIDE_MASK |
+			   HDP_CLK_CNTL__DBUS_CLK_SOFT_OVERRIDE_MASK |
+			   HDP_CLK_CNTL__DYN_CLK_SOFT_OVERRIDE_MASK |
+			   HDP_CLK_CNTL__XDP_REG_CLK_SOFT_OVERRIDE_MASK |
+			   HDP_CLK_CNTL__HDP_REG_CLK_SOFT_OVERRIDE_MASK);
+	} else {
+		hdp_clk_cntl |= HDP_CLK_CNTL__IPH_MEM_CLK_SOFT_OVERRIDE_MASK |
+			HDP_CLK_CNTL__RC_MEM_CLK_SOFT_OVERRIDE_MASK |
+			HDP_CLK_CNTL__DBUS_CLK_SOFT_OVERRIDE_MASK |
+			HDP_CLK_CNTL__DYN_CLK_SOFT_OVERRIDE_MASK |
+			HDP_CLK_CNTL__XDP_REG_CLK_SOFT_OVERRIDE_MASK |
+			HDP_CLK_CNTL__HDP_REG_CLK_SOFT_OVERRIDE_MASK;
+	}
+
+	WREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL, hdp_clk_cntl);
+}
+
+static int nv_common_set_clockgating_state(void *handle,
+					   enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
+	switch (adev->asic_type) {
+	case CHIP_NAVI10:
+		adev->nbio_funcs->update_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		adev->nbio_funcs->update_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE);
+		nv_update_hdp_mem_power_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		nv_update_hdp_clock_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int nv_common_set_powergating_state(void *handle,
+					   enum amd_powergating_state state)
+{
+	/* TODO */
+	return 0;
+}
+
+static void nv_common_get_clockgating_state(void *handle, u32 *flags)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	uint32_t tmp;
+
+	if (amdgpu_sriov_vf(adev))
+		*flags = 0;
+
+	adev->nbio_funcs->get_clockgating_state(adev, flags);
+
+	/* AMD_CG_SUPPORT_HDP_MGCG */
+	tmp = RREG32_SOC15(HDP, 0, mmHDP_CLK_CNTL);
+	if (!(tmp & (HDP_CLK_CNTL__IPH_MEM_CLK_SOFT_OVERRIDE_MASK |
+		     HDP_CLK_CNTL__RC_MEM_CLK_SOFT_OVERRIDE_MASK |
+		     HDP_CLK_CNTL__DBUS_CLK_SOFT_OVERRIDE_MASK |
+		     HDP_CLK_CNTL__DYN_CLK_SOFT_OVERRIDE_MASK |
+		     HDP_CLK_CNTL__XDP_REG_CLK_SOFT_OVERRIDE_MASK |
+		     HDP_CLK_CNTL__HDP_REG_CLK_SOFT_OVERRIDE_MASK)))
+		*flags |= AMD_CG_SUPPORT_HDP_MGCG;
+
+	/* AMD_CG_SUPPORT_HDP_LS/DS/SD */
+	tmp = RREG32_SOC15(HDP, 0, mmHDP_MEM_POWER_CTRL);
+	if (tmp & HDP_MEM_POWER_CTRL__IPH_MEM_POWER_LS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_HDP_LS;
+	else if (tmp & HDP_MEM_POWER_CTRL__IPH_MEM_POWER_DS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_HDP_DS;
+	else if (tmp & HDP_MEM_POWER_CTRL__IPH_MEM_POWER_SD_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_HDP_SD;
+
+	return;
+}
+
+static const struct amd_ip_funcs nv_common_ip_funcs = {
+	.name = "nv_common",
+	.early_init = nv_common_early_init,
+	.late_init = nv_common_late_init,
+	.sw_init = nv_common_sw_init,
+	.sw_fini = nv_common_sw_fini,
+	.hw_init = nv_common_hw_init,
+	.hw_fini = nv_common_hw_fini,
+	.suspend = nv_common_suspend,
+	.resume = nv_common_resume,
+	.is_idle = nv_common_is_idle,
+	.wait_for_idle = nv_common_wait_for_idle,
+	.soft_reset = nv_common_soft_reset,
+	.set_clockgating_state = nv_common_set_clockgating_state,
+	.set_powergating_state = nv_common_set_powergating_state,
+	.get_clockgating_state = nv_common_get_clockgating_state,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/nv.h b/drivers/gpu/drm/amd/amdgpu/nv.h
new file mode 100644
index 000000000000..639c54933cc5
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/nv.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __NV_H__
+#define __NV_H__
+
+#include "nbio_v2_3.h"
+
+void nv_grbm_select(struct amdgpu_device *adev,
+		    u32 me, u32 pipe, u32 queue, u32 vmid);
+int nv_set_ip_blocks(struct amdgpu_device *adev);
+int navi10_reg_base_init(struct amdgpu_device *adev);
+#endif
-- 
2.20.1
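
nv.c above leans heavily on REG_SET_FIELD(); as a hedged model (the real
macro lives in the shared amdgpu/soc15 headers, so treat this as an
approximation), it clears the field's mask in the current register value
and ORs in the new field value shifted into place:

/* Approximate model of REG_SET_FIELD(orig, REG, FIELD, val). */
static inline u32 set_field(u32 orig, u32 mask, u32 shift, u32 val)
{
	return (orig & ~mask) | ((val << shift) & mask);
}

nv_grbm_select(), for instance, effectively does
set_field(0, GRBM_GFX_CNTL__PIPEID_MASK, GRBM_GFX_CNTL__PIPEID__SHIFT,
pipe), repeats that for MEID, VMID and QUEUEID, then writes the result
to mmGRBM_GFX_CNTL.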

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 126/459] drm/amdgpu: Add navi10 kfd support for amdgpu (v3)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (24 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 125/459] drm/amdgpu: add navi10 common ip block (v3) Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 127/459] drm/amdgpu: update golden setting programming logic Alex Deucher
                     ` (66 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Philip Cox, Oak Zeng, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

KFD (Kernel Fusion Driver) is the compute backend driver
for AMD GPUs.

v2: squash in updates (Alex)
v3: fix warnings (Alex)

Signed-off-by: Oak Zeng <Oak.Zeng@amd.com>
Signed-off-by: Philip Cox <Philip.Cox@amd.com>
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile           |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c    |  17 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h    |   1 +
 .../drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c    | 975 ++++++++++++++++++
 4 files changed, 992 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
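
For illustration only (not part of the patch): KFD addresses compute
queues by a flat pipe_id, and the helpers added below (acquire_queue(),
kgd_init_interrupts()) split that into a MEC number plus a pipe within
that MEC; the "+ 1" maps onto the compute micro engines, since ME 0 is
the gfx ME. A minimal sketch, with hypothetical helper name:

static void split_pipe_id(uint32_t pipe_id, uint32_t num_pipe_per_mec,
			  uint32_t *mec, uint32_t *pipe)
{
	*mec  = (pipe_id / num_pipe_per_mec) + 1;	/* MEC numbering starts at 1 */
	*pipe = pipe_id % num_pipe_per_mec;
}

With num_pipe_per_mec == 4, pipe_id 5 lands on mec 2, pipe 1.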

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 205ea6a1d893..1712937ae07a 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -159,7 +159,8 @@ amdgpu-y += \
 	 amdgpu_amdkfd_fence.o \
 	 amdgpu_amdkfd_gpuvm.o \
 	 amdgpu_amdkfd_gfx_v8.o \
-	 amdgpu_amdkfd_gfx_v9.o
+	 amdgpu_amdkfd_gfx_v9.o \
+	 amdgpu_amdkfd_gfx_v10.o
 
 ifneq ($(CONFIG_DRM_AMDGPU_CIK),)
 amdgpu-y += amdgpu_amdkfd_gfx_v7.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index 0578beb4297a..ab05325d6742 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -87,6 +87,9 @@ void amdgpu_amdkfd_device_probe(struct amdgpu_device *adev)
 	case CHIP_RAVEN:
 		kfd2kgd = amdgpu_amdkfd_gfx_9_0_get_functions();
 		break;
+	case CHIP_NAVI10:
+		kfd2kgd = amdgpu_amdkfd_gfx_10_0_get_functions();
+		break;
 	default:
 		dev_info(adev->dev, "kfd not supported on this ASIC\n");
 		return;
@@ -437,9 +440,12 @@ void amdgpu_amdkfd_get_local_mem_info(struct kgd_dev *kgd,
 
 	if (amdgpu_sriov_vf(adev))
 		mem_info->mem_clk_max = adev->clock.default_mclk / 100;
-	else if (adev->powerplay.pp_funcs)
-		mem_info->mem_clk_max = amdgpu_dpm_get_mclk(adev, false) / 100;
-	else
+	else if (adev->powerplay.pp_funcs) {
+		if (amdgpu_emu_mode == 1)
+			mem_info->mem_clk_max = 0;
+		else
+			mem_info->mem_clk_max = amdgpu_dpm_get_mclk(adev, false) / 100;
+	} else
 		mem_info->mem_clk_max = 100;
 }
 
@@ -702,6 +708,11 @@ struct kfd2kgd_calls *amdgpu_amdkfd_gfx_9_0_get_functions(void)
 	return NULL;
 }
 
+struct kfd2kgd_calls *amdgpu_amdkfd_gfx_10_0_get_functions(void)
+{
+	return NULL;
+}
+
 struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev,
 			      const struct kfd2kgd_calls *f2g)
 {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
index f968bf147c5e..93a25c799d75 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
@@ -139,6 +139,7 @@ void amdgpu_amdkfd_set_compute_idle(struct kgd_dev *kgd, bool idle);
 struct kfd2kgd_calls *amdgpu_amdkfd_gfx_7_get_functions(void);
 struct kfd2kgd_calls *amdgpu_amdkfd_gfx_8_0_get_functions(void);
 struct kfd2kgd_calls *amdgpu_amdkfd_gfx_9_0_get_functions(void);
+struct kfd2kgd_calls *amdgpu_amdkfd_gfx_10_0_get_functions(void);
 
 bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
new file mode 100644
index 000000000000..39ffb078beb4
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
@@ -0,0 +1,975 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#undef pr_fmt
+#define pr_fmt(fmt) "kfd2kgd: " fmt
+
+#include <linux/module.h>
+#include <linux/fdtable.h>
+#include <linux/uaccess.h>
+#include <linux/firmware.h>
+#include <linux/mmu_context.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_amdkfd.h"
+#include "amdgpu_ucode.h"
+#include "soc15_hw_ip.h"
+#include "gc/gc_10_1_0_offset.h"
+#include "gc/gc_10_1_0_sh_mask.h"
+#include "navi10_enum.h"
+#include "athub/athub_2_0_0_offset.h"
+#include "athub/athub_2_0_0_sh_mask.h"
+#include "oss/osssys_5_0_0_offset.h"
+#include "oss/osssys_5_0_0_sh_mask.h"
+#include "soc15_common.h"
+#include "v10_structs.h"
+#include "nv.h"
+#include "nvd.h"
+
+enum hqd_dequeue_request_type {
+	NO_ACTION = 0,
+	DRAIN_PIPE,
+	RESET_WAVES,
+	SAVE_WAVES
+};
+
+/*
+ * Register access functions
+ */
+
+static void kgd_program_sh_mem_settings(struct kgd_dev *kgd, uint32_t vmid,
+		uint32_t sh_mem_config,
+		uint32_t sh_mem_ape1_base, uint32_t sh_mem_ape1_limit,
+		uint32_t sh_mem_bases);
+static int kgd_set_pasid_vmid_mapping(struct kgd_dev *kgd, unsigned int pasid,
+		unsigned int vmid);
+static int kgd_init_interrupts(struct kgd_dev *kgd, uint32_t pipe_id);
+static int kgd_hqd_load(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
+			uint32_t queue_id, uint32_t __user *wptr,
+			uint32_t wptr_shift, uint32_t wptr_mask,
+			struct mm_struct *mm);
+static int kgd_hqd_dump(struct kgd_dev *kgd,
+			uint32_t pipe_id, uint32_t queue_id,
+			uint32_t (**dump)[2], uint32_t *n_regs);
+static int kgd_hqd_sdma_load(struct kgd_dev *kgd, void *mqd,
+			     uint32_t __user *wptr, struct mm_struct *mm);
+static int kgd_hqd_sdma_dump(struct kgd_dev *kgd,
+			     uint32_t engine_id, uint32_t queue_id,
+			     uint32_t (**dump)[2], uint32_t *n_regs);
+static bool kgd_hqd_is_occupied(struct kgd_dev *kgd, uint64_t queue_address,
+		uint32_t pipe_id, uint32_t queue_id);
+static bool kgd_hqd_sdma_is_occupied(struct kgd_dev *kgd, void *mqd);
+static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
+				enum kfd_preempt_type reset_type,
+				unsigned int utimeout, uint32_t pipe_id,
+				uint32_t queue_id);
+static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
+				unsigned int utimeout);
+#if 0
+static uint32_t get_watch_base_addr(struct amdgpu_device *adev);
+#endif
+static int kgd_address_watch_disable(struct kgd_dev *kgd);
+static int kgd_address_watch_execute(struct kgd_dev *kgd,
+					unsigned int watch_point_id,
+					uint32_t cntl_val,
+					uint32_t addr_hi,
+					uint32_t addr_lo);
+static int kgd_wave_control_execute(struct kgd_dev *kgd,
+					uint32_t gfx_index_val,
+					uint32_t sq_cmd);
+static uint32_t kgd_address_watch_get_offset(struct kgd_dev *kgd,
+					unsigned int watch_point_id,
+					unsigned int reg_offset);
+
+static bool get_atc_vmid_pasid_mapping_valid(struct kgd_dev *kgd,
+		uint8_t vmid);
+static uint16_t get_atc_vmid_pasid_mapping_pasid(struct kgd_dev *kgd,
+		uint8_t vmid);
+static void set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
+		uint64_t page_table_base);
+static int invalidate_tlbs(struct kgd_dev *kgd, uint16_t pasid);
+static int invalidate_tlbs_vmid(struct kgd_dev *kgd, uint16_t vmid);
+
+/* Because of REG_GET_FIELD() being used, we put this function in the
+ * asic specific file.
+ */
+static int amdgpu_amdkfd_get_tile_config(struct kgd_dev *kgd,
+		struct tile_config *config)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
+
+	config->gb_addr_config = adev->gfx.config.gb_addr_config;
+#if 0
+/* TODO - confirm REG_GET_FIELD x2, should be OK as is... but
+ * MC_ARB_RAMCFG register doesn't exist on Vega10 - initial amdgpu
+ * changes commented out related code, doing the same here for now but
+ * need to sync with Ken et al
+ */
+	config->num_banks = REG_GET_FIELD(adev->gfx.config.mc_arb_ramcfg,
+				MC_ARB_RAMCFG, NOOFBANK);
+	config->num_ranks = REG_GET_FIELD(adev->gfx.config.mc_arb_ramcfg,
+				MC_ARB_RAMCFG, NOOFRANKS);
+#endif
+
+	config->tile_config_ptr = adev->gfx.config.tile_mode_array;
+	config->num_tile_configs =
+			ARRAY_SIZE(adev->gfx.config.tile_mode_array);
+	config->macro_tile_config_ptr =
+			adev->gfx.config.macrotile_mode_array;
+	config->num_macro_tile_configs =
+			ARRAY_SIZE(adev->gfx.config.macrotile_mode_array);
+
+	return 0;
+}
+
+static const struct kfd2kgd_calls kfd2kgd = {
+	.program_sh_mem_settings = kgd_program_sh_mem_settings,
+	.set_pasid_vmid_mapping = kgd_set_pasid_vmid_mapping,
+	.init_interrupts = kgd_init_interrupts,
+	.hqd_load = kgd_hqd_load,
+	.hqd_sdma_load = kgd_hqd_sdma_load,
+	.hqd_dump = kgd_hqd_dump,
+	.hqd_sdma_dump = kgd_hqd_sdma_dump,
+	.hqd_is_occupied = kgd_hqd_is_occupied,
+	.hqd_sdma_is_occupied = kgd_hqd_sdma_is_occupied,
+	.hqd_destroy = kgd_hqd_destroy,
+	.hqd_sdma_destroy = kgd_hqd_sdma_destroy,
+	.address_watch_disable = kgd_address_watch_disable,
+	.address_watch_execute = kgd_address_watch_execute,
+	.wave_control_execute = kgd_wave_control_execute,
+	.address_watch_get_offset = kgd_address_watch_get_offset,
+	.get_atc_vmid_pasid_mapping_pasid =
+			get_atc_vmid_pasid_mapping_pasid,
+	.get_atc_vmid_pasid_mapping_valid =
+			get_atc_vmid_pasid_mapping_valid,
+	.invalidate_tlbs = invalidate_tlbs,
+	.invalidate_tlbs_vmid = invalidate_tlbs_vmid,
+	.set_vm_context_page_table_base = set_vm_context_page_table_base,
+	.get_tile_config = amdgpu_amdkfd_get_tile_config,
+};
+
+struct kfd2kgd_calls *amdgpu_amdkfd_gfx_10_0_get_functions(void)
+{
+	return (struct kfd2kgd_calls *)&kfd2kgd;
+}
+
+static inline struct amdgpu_device *get_amdgpu_device(struct kgd_dev *kgd)
+{
+	return (struct amdgpu_device *)kgd;
+}
+
+static void lock_srbm(struct kgd_dev *kgd, uint32_t mec, uint32_t pipe,
+			uint32_t queue, uint32_t vmid)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+	mutex_lock(&adev->srbm_mutex);
+	nv_grbm_select(adev, mec, pipe, queue, vmid);
+}
+
+static void unlock_srbm(struct kgd_dev *kgd)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+	nv_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+}
+
+static void acquire_queue(struct kgd_dev *kgd, uint32_t pipe_id,
+				uint32_t queue_id)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+	uint32_t mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
+	uint32_t pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
+
+	lock_srbm(kgd, mec, pipe, queue_id, 0);
+}
+
+static uint32_t get_queue_mask(struct amdgpu_device *adev,
+			       uint32_t pipe_id, uint32_t queue_id)
+{
+	unsigned int bit = (pipe_id * adev->gfx.mec.num_queue_per_pipe +
+			    queue_id) & 31;
+
+	return ((uint32_t)1) << bit;
+}
+
+static void release_queue(struct kgd_dev *kgd)
+{
+	unlock_srbm(kgd);
+}
+
+static void kgd_program_sh_mem_settings(struct kgd_dev *kgd, uint32_t vmid,
+					uint32_t sh_mem_config,
+					uint32_t sh_mem_ape1_base,
+					uint32_t sh_mem_ape1_limit,
+					uint32_t sh_mem_bases)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+	lock_srbm(kgd, 0, 0, 0, vmid);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_CONFIG), sh_mem_config);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_BASES), sh_mem_bases);
+	/* APE1 no longer exists on GFX9 */
+
+	unlock_srbm(kgd);
+}
+
+static int kgd_set_pasid_vmid_mapping(struct kgd_dev *kgd, unsigned int pasid,
+					unsigned int vmid)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+	/*
+	 * We have to assume that there is no outstanding mapping.
+	 * The ATC_VMID_PASID_MAPPING_UPDATE_STATUS bit could be 0 because
+	 * a mapping is in progress or because a mapping finished
+	 * and the SW cleared it.
+	 * So the protocol is to always wait & clear.
+	 */
+	uint32_t pasid_mapping = (pasid == 0) ? 0 : (uint32_t)pasid |
+			ATC_VMID0_PASID_MAPPING__VALID_MASK;
+
+	pr_debug("pasid 0x%x vmid %d, reg value %x\n", pasid, vmid, pasid_mapping);
+	/*
+	 * need to do this twice, once for gfx and once for mmhub
+	 * for ATC add 16 to VMID for mmhub, for IH different registers.
+	 * ATC_VMID0..15 registers are separate from ATC_VMID16..31.
+	 */
+
+	pr_debug("ATHUB, reg %x\n",SOC15_REG_OFFSET(ATHUB, 0, mmATC_VMID0_PASID_MAPPING) + vmid);
+	WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATC_VMID0_PASID_MAPPING) + vmid,
+	       pasid_mapping);
+
+#if 0
+	/* TODO: uncomment this code when the hardware support is ready. */
+	while (!(RREG32(SOC15_REG_OFFSET(
+				ATHUB, 0,
+				mmATC_VMID_PASID_MAPPING_UPDATE_STATUS)) &
+		 (1U << vmid)))
+		cpu_relax();
+
+	pr_debug("ATHUB mapping update finished\n");
+	WREG32(SOC15_REG_OFFSET(ATHUB, 0,
+				mmATC_VMID_PASID_MAPPING_UPDATE_STATUS),
+	       1U << vmid);
+#endif
+
+	/* Mapping vmid to pasid also for IH block */
+	pr_debug("update mapping for IH block and mmhub");
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_VMID_0_LUT) + vmid,
+	       pasid_mapping);
+
+	return 0;
+}
+
+/* TODO - RING0 form of field is obsolete, seems to date back to SI
+ * but still works
+ */
+
+static int kgd_init_interrupts(struct kgd_dev *kgd, uint32_t pipe_id)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	uint32_t mec;
+	uint32_t pipe;
+
+	mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
+	pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
+
+	lock_srbm(kgd, mec, pipe, 0, 0);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCPC_INT_CNTL),
+		CP_INT_CNTL_RING0__TIME_STAMP_INT_ENABLE_MASK |
+		CP_INT_CNTL_RING0__OPCODE_ERROR_INT_ENABLE_MASK);
+
+	unlock_srbm(kgd);
+
+	return 0;
+}
+
+static uint32_t get_sdma_base_addr(struct amdgpu_device *adev,
+				unsigned int engine_id,
+				unsigned int queue_id)
+{
+	uint32_t base[2] = {
+		SOC15_REG_OFFSET(SDMA0, 0,
+				 mmSDMA0_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL,
+		/* On gfx10, mmSDMA1_xxx registers are defined NOT based
+		 * on SDMA1 base address (dw 0x1860) but based on SDMA0
+		 * base address (dw 0x1260). Therefore use mmSDMA0_RLC0_RB_CNTL
+		 * instead of mmSDMA1_RLC0_RB_CNTL for the base address calc
+		 * below
+		 */
+		SOC15_REG_OFFSET(SDMA1, 0,
+				 mmSDMA1_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL
+	};
+	uint32_t retval;
+
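+	/* Queues within an engine are spaced one RLC register block
+	 * apart, so step by the RLC1 - RLC0 offset delta.
+	 */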
+	retval = base[engine_id] + queue_id * (mmSDMA0_RLC1_RB_CNTL -
+					       mmSDMA0_RLC0_RB_CNTL);
+
+	pr_debug("sdma base address: 0x%x\n", retval);
+
+	return retval;
+}
+
+#if 0
+static uint32_t get_watch_base_addr(struct amdgpu_device *adev)
+{
+	uint32_t retval = SOC15_REG_OFFSET(GC, 0, mmTCP_WATCH0_ADDR_H) -
+			mmTCP_WATCH0_ADDR_H;
+
+	pr_debug("kfd: reg watch base address: 0x%x\n", retval);
+
+	return retval;
+}
+#endif
+
+static inline struct v10_compute_mqd *get_mqd(void *mqd)
+{
+	return (struct v10_compute_mqd *)mqd;
+}
+
+static inline struct v10_sdma_mqd *get_sdma_mqd(void *mqd)
+{
+	return (struct v10_sdma_mqd *)mqd;
+}
+
+static int kgd_hqd_load(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
+			uint32_t queue_id, uint32_t __user *wptr,
+			uint32_t wptr_shift, uint32_t wptr_mask,
+			struct mm_struct *mm)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	struct v10_compute_mqd *m;
+	uint32_t *mqd_hqd;
+	uint32_t reg, hqd_base, data;
+
+	m = get_mqd(mqd);
+
+	pr_debug("Load hqd of pipe %d queue %d\n", pipe_id, queue_id);
+	acquire_queue(kgd, pipe_id, queue_id);
+
+	/* HIQ is set during driver init period with vmid set to 0 */
+	if (m->cp_hqd_vmid == 0) {
+		uint32_t value, mec, pipe;
+
+		mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
+		pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
+
+		pr_debug("kfd: set HIQ, mec:%d, pipe:%d, queue:%d.\n",
+			mec, pipe, queue_id);
+		value = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CP_SCHEDULERS));
+		value = REG_SET_FIELD(value, RLC_CP_SCHEDULERS, scheduler1,
+			((mec << 5) | (pipe << 3) | queue_id | 0x80));
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CP_SCHEDULERS), value);
+	}
+
+	/* HQD registers extend from CP_MQD_BASE_ADDR to CP_HQD_EOP_WPTR_MEM. */
+	mqd_hqd = &m->cp_mqd_base_addr_lo;
+	hqd_base = SOC15_REG_OFFSET(GC, 0, mmCP_MQD_BASE_ADDR);
+
+	for (reg = hqd_base;
+	     reg <= SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_HI); reg++)
+		WREG32(reg, mqd_hqd[reg - hqd_base]);
+
+	/* Activate doorbell logic before triggering WPTR poll. */
+	data = REG_SET_FIELD(m->cp_hqd_pq_doorbell_control,
+			     CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL), data);
+
+	if (wptr) {
+		/* Don't read wptr with get_user because the user
+		 * context may not be accessible (if this function
+		 * runs in a work queue). Instead trigger a one-shot
+		 * polling read from memory in the CP. This assumes
+		 * that wptr is GPU-accessible in the queue's VMID via
+		 * ATC or SVM. WPTR==RPTR before starting the poll so
+		 * the CP starts fetching new commands from the right
+		 * place.
+		 *
+		 * Guessing a 64-bit WPTR from a 32-bit RPTR is a bit
+		 * tricky. Assume that the queue didn't overflow. The
+		 * number of valid bits in the 32-bit RPTR depends on
+		 * the queue size. The remaining bits are taken from
+		 * the saved 64-bit WPTR. If the WPTR wrapped, add the
+		 * queue size.
+		 */
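+		/* CP_HQD_PQ_CONTROL.QUEUE_SIZE encodes log2(size in
+		 * dwords) - 1, so the ring holds (2 << field) dwords.
+		 */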
+		uint32_t queue_size =
+			2 << REG_GET_FIELD(m->cp_hqd_pq_control,
+					   CP_HQD_PQ_CONTROL, QUEUE_SIZE);
+		uint64_t guessed_wptr = m->cp_hqd_pq_rptr & (queue_size - 1);
+
+		if ((m->cp_hqd_pq_wptr_lo & (queue_size - 1)) < guessed_wptr)
+			guessed_wptr += queue_size;
+		guessed_wptr += m->cp_hqd_pq_wptr_lo & ~(queue_size - 1);
+		guessed_wptr += (uint64_t)m->cp_hqd_pq_wptr_hi << 32;
+
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_LO),
+		       lower_32_bits(guessed_wptr));
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_HI),
+		       upper_32_bits(guessed_wptr));
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR),
+		       lower_32_bits((uint64_t)wptr));
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR_HI),
+		       upper_32_bits((uint64_t)wptr));
+		pr_debug("%s setting CP_PQ_WPTR_POLL_CNTL1 to %x\n", __func__, get_queue_mask(adev, pipe_id, queue_id));
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_WPTR_POLL_CNTL1),
+		       get_queue_mask(adev, pipe_id, queue_id));
+	}
+
+	/* Start the EOP fetcher */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_RPTR),
+	       REG_SET_FIELD(m->cp_hqd_eop_rptr,
+			     CP_HQD_EOP_RPTR, INIT_FETCHER, 1));
+
+	data = REG_SET_FIELD(m->cp_hqd_active, CP_HQD_ACTIVE, ACTIVE, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE), data);
+
+	release_queue(kgd);
+
+	return 0;
+}
+
+static int kgd_hqd_dump(struct kgd_dev *kgd,
+			uint32_t pipe_id, uint32_t queue_id,
+			uint32_t (**dump)[2], uint32_t *n_regs)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	uint32_t i = 0, reg;
+#define HQD_N_REGS 56
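+/* Each dump entry is a (byte offset, value) pair; HQD register
+ * offsets are dword-indexed, hence the << 2 in DUMP_REG.
+ */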
+#define DUMP_REG(addr) do {				\
+		if (WARN_ON_ONCE(i >= HQD_N_REGS))	\
+			break;				\
+		(*dump)[i][0] = (addr) << 2;		\
+		(*dump)[i++][1] = RREG32(addr);		\
+	} while (0)
+
+	*dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL);
+	if (*dump == NULL)
+		return -ENOMEM;
+
+	acquire_queue(kgd, pipe_id, queue_id);
+
+	for (reg = SOC15_REG_OFFSET(GC, 0, mmCP_MQD_BASE_ADDR);
+	     reg <= SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_HI); reg++)
+		DUMP_REG(reg);
+
+	release_queue(kgd);
+
+	WARN_ON_ONCE(i != HQD_N_REGS);
+	*n_regs = i;
+
+	return 0;
+}
+
+static int kgd_hqd_sdma_load(struct kgd_dev *kgd, void *mqd,
+			     uint32_t __user *wptr, struct mm_struct *mm)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	struct v10_sdma_mqd *m;
+	uint32_t sdma_base_addr, sdmax_gfx_context_cntl;
+	unsigned long end_jiffies;
+	uint32_t data;
+	uint64_t data64;
+	uint64_t __user *wptr64 = (uint64_t __user *)wptr;
+
+	m = get_sdma_mqd(mqd);
+	sdma_base_addr = get_sdma_base_addr(adev, m->sdma_engine_id,
+					    m->sdma_queue_id);
+	pr_debug("sdma load base addr %x for engine %d, queue %d\n", sdma_base_addr, m->sdma_engine_id, m->sdma_queue_id);
+	sdmax_gfx_context_cntl = m->sdma_engine_id ?
+		SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_GFX_CONTEXT_CNTL) :
+		SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_GFX_CONTEXT_CNTL);
+
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_CNTL,
+		m->sdmax_rlcx_rb_cntl & (~SDMA0_RLC0_RB_CNTL__RB_ENABLE_MASK));
+
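+	/* Wait up to 2 seconds for the queue context to drain to idle
+	 * before reprogramming it.
+	 */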
+	end_jiffies = msecs_to_jiffies(2000) + jiffies;
+	while (true) {
+		data = RREG32(sdma_base_addr + mmSDMA0_RLC0_CONTEXT_STATUS);
+		if (data & SDMA0_RLC0_CONTEXT_STATUS__IDLE_MASK)
+			break;
+		if (time_after(jiffies, end_jiffies))
+			return -ETIME;
+		usleep_range(500, 1000);
+	}
+	data = RREG32(sdmax_gfx_context_cntl);
+	data = REG_SET_FIELD(data, SDMA0_GFX_CONTEXT_CNTL,
+			     RESUME_CTX, 0);
+	WREG32(sdmax_gfx_context_cntl, data);
+
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_DOORBELL_OFFSET,
+	       m->sdmax_rlcx_doorbell_offset);
+
+	data = REG_SET_FIELD(m->sdmax_rlcx_doorbell, SDMA0_RLC0_DOORBELL,
+			     ENABLE, 1);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_DOORBELL, data);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_RPTR, m->sdmax_rlcx_rb_rptr);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_RPTR_HI,
+				m->sdmax_rlcx_rb_rptr_hi);
+
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_MINOR_PTR_UPDATE, 1);
+	if (read_user_wptr(mm, wptr64, data64)) {
+		WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_WPTR,
+		       lower_32_bits(data64));
+		WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_WPTR_HI,
+		       upper_32_bits(data64));
+	} else {
+		WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_WPTR,
+		       m->sdmax_rlcx_rb_rptr);
+		WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_WPTR_HI,
+		       m->sdmax_rlcx_rb_rptr_hi);
+	}
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_MINOR_PTR_UPDATE, 0);
+
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_BASE, m->sdmax_rlcx_rb_base);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_BASE_HI,
+			m->sdmax_rlcx_rb_base_hi);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_RPTR_ADDR_LO,
+			m->sdmax_rlcx_rb_rptr_addr_lo);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_RPTR_ADDR_HI,
+			m->sdmax_rlcx_rb_rptr_addr_hi);
+
+	data = REG_SET_FIELD(m->sdmax_rlcx_rb_cntl, SDMA0_RLC0_RB_CNTL,
+			     RB_ENABLE, 1);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_CNTL, data);
+
+	return 0;
+}
+
+static int kgd_hqd_sdma_dump(struct kgd_dev *kgd,
+			     uint32_t engine_id, uint32_t queue_id,
+			     uint32_t (**dump)[2], uint32_t *n_regs)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	uint32_t sdma_base_addr = get_sdma_base_addr(adev, engine_id, queue_id);
+	uint32_t i = 0, reg;
+#undef HQD_N_REGS
+#define HQD_N_REGS (19+6+7+10)
+
+	pr_debug("sdma dump engine id %d queue_id %d\n", engine_id, queue_id);
+	pr_debug("sdma base addr %x\n", sdma_base_addr);
+
+	*dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL);
+	if (*dump == NULL)
+		return -ENOMEM;
+
+	for (reg = mmSDMA0_RLC0_RB_CNTL; reg <= mmSDMA0_RLC0_DOORBELL; reg++)
+		DUMP_REG(sdma_base_addr + reg);
+	for (reg = mmSDMA0_RLC0_STATUS; reg <= mmSDMA0_RLC0_CSA_ADDR_HI; reg++)
+		DUMP_REG(sdma_base_addr + reg);
+	for (reg = mmSDMA0_RLC0_IB_SUB_REMAIN;
+	     reg <= mmSDMA0_RLC0_MINOR_PTR_UPDATE; reg++)
+		DUMP_REG(sdma_base_addr + reg);
+	for (reg = mmSDMA0_RLC0_MIDCMD_DATA0;
+	     reg <= mmSDMA0_RLC0_MIDCMD_CNTL; reg++)
+		DUMP_REG(sdma_base_addr + reg);
+
+	WARN_ON_ONCE(i != HQD_N_REGS);
+	*n_regs = i;
+
+	return 0;
+}
+
+static bool kgd_hqd_is_occupied(struct kgd_dev *kgd, uint64_t queue_address,
+				uint32_t pipe_id, uint32_t queue_id)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	uint32_t act;
+	bool retval = false;
+	uint32_t low, high;
+
+	acquire_queue(kgd, pipe_id, queue_id);
+	act = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE));
+	if (act) {
+		low = lower_32_bits(queue_address >> 8);
+		high = upper_32_bits(queue_address >> 8);
+
+		if (low == RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_BASE)) &&
+		   high == RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_BASE_HI)))
+			retval = true;
+	}
+	release_queue(kgd);
+	return retval;
+}
+
+static bool kgd_hqd_sdma_is_occupied(struct kgd_dev *kgd, void *mqd)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	struct v10_sdma_mqd *m;
+	uint32_t sdma_base_addr;
+	uint32_t sdma_rlc_rb_cntl;
+
+	m = get_sdma_mqd(mqd);
+	sdma_base_addr = get_sdma_base_addr(adev, m->sdma_engine_id,
+					    m->sdma_queue_id);
+
+	sdma_rlc_rb_cntl = RREG32(sdma_base_addr + mmSDMA0_RLC0_RB_CNTL);
+
+	if (sdma_rlc_rb_cntl & SDMA0_RLC0_RB_CNTL__RB_ENABLE_MASK)
+		return true;
+
+	return false;
+}
+
+static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
+				enum kfd_preempt_type reset_type,
+				unsigned int utimeout, uint32_t pipe_id,
+				uint32_t queue_id)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	enum hqd_dequeue_request_type type;
+	unsigned long end_jiffies;
+	uint32_t temp;
+	struct v10_compute_mqd *m = get_mqd(mqd);
+
+#if 0
+	unsigned long flags;
+	int retry;
+#endif
+
+	acquire_queue(kgd, pipe_id, queue_id);
+
+	if (m->cp_hqd_vmid == 0)
+		WREG32_FIELD15(GC, 0, RLC_CP_SCHEDULERS, scheduler1, 0);
+
+	switch (reset_type) {
+	case KFD_PREEMPT_TYPE_WAVEFRONT_DRAIN:
+		type = DRAIN_PIPE;
+		break;
+	case KFD_PREEMPT_TYPE_WAVEFRONT_RESET:
+		type = RESET_WAVES;
+		break;
+	default:
+		type = DRAIN_PIPE;
+		break;
+	}
+
+#if 0 /* Is this still needed? */
+	/* Workaround: If IQ timer is active and the wait time is close to or
+	 * equal to 0, dequeueing is not safe. Wait until either the wait time
+	 * is larger or timer is cleared. Also, ensure that IQ_REQ_PEND is
+	 * cleared before continuing. Also, ensure wait times are set to at
+	 * least 0x3.
+	 */
+	local_irq_save(flags);
+	preempt_disable();
+	retry = 5000; /* wait for 500 usecs at maximum */
+	while (true) {
+		temp = RREG32(mmCP_HQD_IQ_TIMER);
+		if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, PROCESSING_IQ)) {
+			pr_debug("HW is processing IQ\n");
+			goto loop;
+		}
+		if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, ACTIVE)) {
+			if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, RETRY_TYPE)
+					== 3) /* SEM-rearm is safe */
+				break;
+			/* Wait time 3 is safe for CP, but our MMIO read/write
+			 * time is close to 1 microsecond, so check for 10 to
+			 * leave more buffer room
+			 */
+			if (REG_GET_FIELD(temp, CP_HQD_IQ_TIMER, WAIT_TIME)
+					>= 10)
+				break;
+			pr_debug("IQ timer is active\n");
+		} else
+			break;
+loop:
+		if (!retry) {
+			pr_err("CP HQD IQ timer status time out\n");
+			break;
+		}
+		ndelay(100);
+		--retry;
+	}
+	retry = 1000;
+	while (true) {
+		temp = RREG32(mmCP_HQD_DEQUEUE_REQUEST);
+		if (!(temp & CP_HQD_DEQUEUE_REQUEST__IQ_REQ_PEND_MASK))
+			break;
+		pr_debug("Dequeue request is pending\n");
+
+		if (!retry) {
+			pr_err("CP HQD dequeue request time out\n");
+			break;
+		}
+		ndelay(100);
+		--retry;
+	}
+	local_irq_restore(flags);
+	preempt_enable();
+#endif
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_DEQUEUE_REQUEST), type);
+
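+	/* utimeout is in milliseconds; convert to jiffies */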
+	end_jiffies = (utimeout * HZ / 1000) + jiffies;
+	while (true) {
+		temp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE));
+		if (!(temp & CP_HQD_ACTIVE__ACTIVE_MASK))
+			break;
+		if (time_after(jiffies, end_jiffies)) {
+			pr_err("cp queue preemption time out.\n");
+			release_queue(kgd);
+			return -ETIME;
+		}
+		usleep_range(500, 1000);
+	}
+
+	release_queue(kgd);
+	return 0;
+}
+
+static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
+				unsigned int utimeout)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	struct v10_sdma_mqd *m;
+	uint32_t sdma_base_addr;
+	uint32_t temp;
+	unsigned long end_jiffies = (utimeout * HZ / 1000) + jiffies;
+
+	m = get_sdma_mqd(mqd);
+	sdma_base_addr = get_sdma_base_addr(adev, m->sdma_engine_id,
+					    m->sdma_queue_id);
+
+	temp = RREG32(sdma_base_addr + mmSDMA0_RLC0_RB_CNTL);
+	temp = temp & ~SDMA0_RLC0_RB_CNTL__RB_ENABLE_MASK;
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_CNTL, temp);
+
+	while (true) {
+		temp = RREG32(sdma_base_addr + mmSDMA0_RLC0_CONTEXT_STATUS);
+		if (temp & SDMA0_RLC0_CONTEXT_STATUS__IDLE_MASK)
+			break;
+		if (time_after(jiffies, end_jiffies))
+			return -ETIME;
+		usleep_range(500, 1000);
+	}
+
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_DOORBELL, 0);
+	WREG32(sdma_base_addr + mmSDMA0_RLC0_RB_CNTL,
+		RREG32(sdma_base_addr + mmSDMA0_RLC0_RB_CNTL) |
+		SDMA0_RLC0_RB_CNTL__RB_ENABLE_MASK);
+
+	m->sdmax_rlcx_rb_rptr = RREG32(sdma_base_addr + mmSDMA0_RLC0_RB_RPTR);
+	m->sdmax_rlcx_rb_rptr_hi =
+		RREG32(sdma_base_addr + mmSDMA0_RLC0_RB_RPTR_HI);
+
+	return 0;
+}
+
+static bool get_atc_vmid_pasid_mapping_valid(struct kgd_dev *kgd,
+							uint8_t vmid)
+{
+	uint32_t reg;
+	struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
+
+	reg = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATC_VMID0_PASID_MAPPING)
+		     + vmid);
+	return reg & ATC_VMID0_PASID_MAPPING__VALID_MASK;
+}
+
+static uint16_t get_atc_vmid_pasid_mapping_pasid(struct kgd_dev *kgd,
+								uint8_t vmid)
+{
+	uint32_t reg;
+	struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
+
+	reg = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATC_VMID0_PASID_MAPPING)
+		     + vmid);
+	return reg & ATC_VMID0_PASID_MAPPING__PASID_MASK;
+}
+
+static void write_vmid_invalidate_request(struct kgd_dev *kgd, uint8_t vmid)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
+	uint32_t req = (1 << vmid) |
+		(0 << GCVM_INVALIDATE_ENG0_REQ__FLUSH_TYPE__SHIFT) |/* legacy */
+		GCVM_INVALIDATE_ENG0_REQ__INVALIDATE_L2_PTES_MASK |
+		GCVM_INVALIDATE_ENG0_REQ__INVALIDATE_L2_PDE0_MASK |
+		GCVM_INVALIDATE_ENG0_REQ__INVALIDATE_L2_PDE1_MASK |
+		GCVM_INVALIDATE_ENG0_REQ__INVALIDATE_L2_PDE2_MASK |
+		GCVM_INVALIDATE_ENG0_REQ__INVALIDATE_L1_PTES_MASK;
+
+	mutex_lock(&adev->srbm_mutex);
+
+	/* Use light weight invalidation.
+	 *
+	 * TODO 1: agree on the right set of invalidation registers for
+	 * KFD use. Use the last one for now. Invalidate only GCHUB as
+	 * SDMA is now moved to GCHUB
+	 *
+	 * TODO 2: support range-based invalidation, requires kfd2kgd
+	 * interface change
+	 */
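+	/* Cover the full address range until range-based invalidation
+	 * is supported.
+	 */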
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_INVALIDATE_ENG0_ADDR_RANGE_LO32),
+				0xffffffff);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_INVALIDATE_ENG0_ADDR_RANGE_HI32),
+				0x0000001f);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_INVALIDATE_ENG0_REQ), req);
+
+	while (!(RREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_INVALIDATE_ENG0_ACK)) &
+					(1 << vmid)))
+		cpu_relax();
+
+	mutex_unlock(&adev->srbm_mutex);
+}
+
+static int invalidate_tlbs_with_kiq(struct amdgpu_device *adev, uint16_t pasid)
+{
+	signed long r;
+	uint32_t seq;
+	struct amdgpu_ring *ring = &adev->gfx.kiq.ring;
+
+	spin_lock(&adev->gfx.kiq.ring_lock);
+	amdgpu_ring_alloc(ring, 12); /* fence + invalidate_tlbs packets */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_INVALIDATE_TLBS, 0));
+	amdgpu_ring_write(ring,
+			PACKET3_INVALIDATE_TLBS_DST_SEL(1) |
+			PACKET3_INVALIDATE_TLBS_PASID(pasid));
+	amdgpu_fence_emit_polling(ring, &seq);
+	amdgpu_ring_commit(ring);
+	spin_unlock(&adev->gfx.kiq.ring_lock);
+
+	r = amdgpu_fence_wait_polling(ring, seq, adev->usec_timeout);
+	if (r < 1) {
+		DRM_ERROR("wait for kiq fence error: %ld.\n", r);
+		return -ETIME;
+	}
+
+	return 0;
+}
+
+static int invalidate_tlbs(struct kgd_dev *kgd, uint16_t pasid)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
+	int vmid;
+	struct amdgpu_ring *ring = &adev->gfx.kiq.ring;
+
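+	/* Prefer the KIQ packet path on real hardware when the KIQ ring
+	 * is up; otherwise fall back to direct MMIO invalidation.
+	 */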
+	if (amdgpu_emu_mode == 0 && ring->sched.ready)
+		return invalidate_tlbs_with_kiq(adev, pasid);
+
+	for (vmid = 0; vmid < 16; vmid++) {
+		if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid))
+			continue;
+		if (get_atc_vmid_pasid_mapping_valid(kgd, vmid)) {
+			if (get_atc_vmid_pasid_mapping_pasid(kgd, vmid)
+				== pasid) {
+				write_vmid_invalidate_request(kgd, vmid);
+				break;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int invalidate_tlbs_vmid(struct kgd_dev *kgd, uint16_t vmid)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
+
+	if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid)) {
+		pr_err("non kfd vmid %d\n", vmid);
+		return 0;
+	}
+
+	write_vmid_invalidate_request(kgd, vmid);
+	return 0;
+}
+
+static int kgd_address_watch_disable(struct kgd_dev *kgd)
+{
+	return 0;
+}
+
+static int kgd_address_watch_execute(struct kgd_dev *kgd,
+					unsigned int watch_point_id,
+					uint32_t cntl_val,
+					uint32_t addr_hi,
+					uint32_t addr_lo)
+{
+	return 0;
+}
+
+static int kgd_wave_control_execute(struct kgd_dev *kgd,
+					uint32_t gfx_index_val,
+					uint32_t sq_cmd)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	uint32_t data = 0;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_GFX_INDEX), gfx_index_val);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_CMD), sq_cmd);
+
+	data = REG_SET_FIELD(data, GRBM_GFX_INDEX,
+		INSTANCE_BROADCAST_WRITES, 1);
+	data = REG_SET_FIELD(data, GRBM_GFX_INDEX,
+		SA_BROADCAST_WRITES, 1);
+	data = REG_SET_FIELD(data, GRBM_GFX_INDEX,
+		SE_BROADCAST_WRITES, 1);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_GFX_INDEX), data);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	return 0;
+}
+
+static uint32_t kgd_address_watch_get_offset(struct kgd_dev *kgd,
+					unsigned int watch_point_id,
+					unsigned int reg_offset)
+{
+	return 0;
+}
+
+static void set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
+		uint64_t page_table_base)
+{
+	struct amdgpu_device *adev = get_amdgpu_device(kgd);
+	uint64_t base = page_table_base | AMDGPU_PTE_VALID;
+
+	if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid)) {
+		pr_err("trying to set page table base for wrong VMID %u\n",
+		       vmid);
+		return;
+	}
+
+	/* TODO: take advantage of per-process address space size. For
+	 * now, all processes share the same address space size, like
+	 * on GFX8 and older.
+	 */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32) + (vmid*2), 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32) + (vmid*2), 0);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32) + (vmid*2),
+			lower_32_bits(adev->vm_manager.max_pfn - 1));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32) + (vmid*2),
+			upper_32_bits(adev->vm_manager.max_pfn - 1));
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32) + (vmid*2), lower_32_bits(base));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32) + (vmid*2), upper_32_bits(base));
+}
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 127/459] drm/amdgpu: update golden setting programming logic
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (25 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 126/459] drm/amdgpu: Add navi10 kfd support for amdgpu (v3) Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 128/459] drm/amdkfd: Add navi10 support to amdkfd. (v2) Alex Deucher
                     ` (65 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Le Ma, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

Starting with soc15, make sure that only the bits covered by the
and_mask are changed when the or_mask is applied.
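
As a sketch with made-up values: with and_mask = 0x0000ff00 and
or_mask = 0x00012a00, the update becomes

	tmp &= ~and_mask;		/* clear only the masked field */
	tmp |= (or_mask & and_mask);	/* writes 0x2a00, not 0x12a00 */

so the stray 0x10000 bit outside the masked field is no longer OR'd
into the register.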

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Le Ma <Le.Ma@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 ++++-
 drivers/gpu/drm/amd/amdgpu/soc15.c         | 2 +-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 182dc834f7b6..bf5650f7ac8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -508,7 +508,10 @@ void amdgpu_device_program_register_sequence(struct amdgpu_device *adev,
 		} else {
 			tmp = RREG32(reg);
 			tmp &= ~and_mask;
-			tmp |= or_mask;
+			if (adev->family >= AMDGPU_FAMILY_AI)
+				tmp |= (or_mask & and_mask);
+			else
+				tmp |= or_mask;
 		}
 		WREG32(reg, tmp);
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 3fbc3cd849ed..9dfbbc65ea67 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -378,7 +378,7 @@ void soc15_program_register_sequence(struct amdgpu_device *adev,
 		} else {
 			tmp = RREG32(reg);
 			tmp &= ~(entry->and_mask);
-			tmp |= entry->or_mask;
+			tmp |= (entry->or_mask & entry->and_mask);
 		}
 
 		if (reg == SOC15_REG_OFFSET(GC, 0, mmPA_SC_BINNER_EVENT_CNTL_3) ||
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 128/459] drm/amdkfd: Add navi10 support to amdkfd. (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (26 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 127/459] drm/amdgpu: update golden setting programming logic Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 129/459] drm/amdkfd: Added cwsr trap handler for gfx10 Alex Deucher
                     ` (64 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Philip Cox, Oak Zeng, Hawking Zhang

From: Philip Cox <Philip.Cox@amd.com>

KFD (Kernel Fusion Driver) is the kernel driver for the compute
backend of the usermode compute stack.

v2: squash in updates (Alex)

Signed-off-by: Oak Zeng <Oak.Zeng@amd.com>
Signed-off-by: Philip Cox <Philip.Cox@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/Makefile           |   3 +
 drivers/gpu/drm/amd/amdkfd/kfd_crat.c         |   6 +
 drivers/gpu/drm/amd/amdkfd/kfd_device.c       |  21 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.c |   4 +
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |   2 +
 .../amd/amdkfd/kfd_device_queue_manager_v10.c |  87 +++
 drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c  |   1 +
 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c |   3 +
 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.h |   1 +
 .../gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c | 348 ++++++++++++
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  | 519 ++++++++++++++++++
 .../gpu/drm/amd/amdkfd/kfd_packet_manager.c   |   3 +
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |  11 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c      |   1 +
 drivers/gpu/drm/amd/amdkfd/kfd_topology.c     |   1 +
 15 files changed, 1009 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c

diff --git a/drivers/gpu/drm/amd/amdkfd/Makefile b/drivers/gpu/drm/amd/amdkfd/Makefile
index 69ec96998bb9..48155060a57c 100644
--- a/drivers/gpu/drm/amd/amdkfd/Makefile
+++ b/drivers/gpu/drm/amd/amdkfd/Makefile
@@ -36,16 +36,19 @@ AMDKFD_FILES	:= $(AMDKFD_PATH)/kfd_module.o \
 		$(AMDKFD_PATH)/kfd_mqd_manager_cik.o \
 		$(AMDKFD_PATH)/kfd_mqd_manager_vi.o \
 		$(AMDKFD_PATH)/kfd_mqd_manager_v9.o \
+		$(AMDKFD_PATH)/kfd_mqd_manager_v10.o \
 		$(AMDKFD_PATH)/kfd_kernel_queue.o \
 		$(AMDKFD_PATH)/kfd_kernel_queue_cik.o \
 		$(AMDKFD_PATH)/kfd_kernel_queue_vi.o \
 		$(AMDKFD_PATH)/kfd_kernel_queue_v9.o \
+		$(AMDKFD_PATH)/kfd_kernel_queue_v10.o \
 		$(AMDKFD_PATH)/kfd_packet_manager.o \
 		$(AMDKFD_PATH)/kfd_process_queue_manager.o \
 		$(AMDKFD_PATH)/kfd_device_queue_manager.o \
 		$(AMDKFD_PATH)/kfd_device_queue_manager_cik.o \
 		$(AMDKFD_PATH)/kfd_device_queue_manager_vi.o \
 		$(AMDKFD_PATH)/kfd_device_queue_manager_v9.o \
+		$(AMDKFD_PATH)/kfd_device_queue_manager_v10.o \
 		$(AMDKFD_PATH)/kfd_interrupt.o \
 		$(AMDKFD_PATH)/kfd_events.o \
 		$(AMDKFD_PATH)/cik_event_interrupt.o \
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
index 59f8ca4297db..792371442195 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
@@ -138,6 +138,8 @@ static struct kfd_gpu_cache_info carrizo_cache_info[] = {
 /* TODO - check & update Vega10 cache details */
 #define vega10_cache_info carrizo_cache_info
 #define raven_cache_info carrizo_cache_info
+/* TODO - check & update Navi10 cache details */
+#define navi10_cache_info carrizo_cache_info
 
 static void kfd_populated_cu_info_cpu(struct kfd_topology_device *dev,
 		struct crat_subtype_computeunit *cu)
@@ -666,6 +668,10 @@ static int kfd_fill_gpu_cache_info(struct kfd_dev *kdev,
 	case CHIP_RAVEN:
 		pcache_info = raven_cache_info;
 		num_of_cache_types = ARRAY_SIZE(raven_cache_info);
+		break;
+	case CHIP_NAVI10:
+		pcache_info = navi10_cache_info;
+		num_of_cache_types = ARRAY_SIZE(navi10_cache_info);
 		break;
 	default:
 		return -EINVAL;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index ebac7d7f9956..955d72179da1 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -317,6 +317,22 @@ static const struct kfd_device_info vega20_device_info = {
 	.num_sdma_queues_per_engine = 8,
 };
 
+static const struct kfd_device_info navi10_device_info = {
+	.asic_family = CHIP_NAVI10,
+	.max_pasid_bits = 16,
+	.max_no_of_hqd  = 24,
+	.doorbell_size  = 8,
+	.ih_ring_entry_size = 8 * sizeof(uint32_t),
+	.event_interrupt_class = &event_interrupt_class_v9,
+	.num_of_watch_points = 4,
+	.mqd_size_aligned = MQD_SIZE_ALIGNED,
+	.needs_iommu_device = false,
+	.supports_cwsr = false,
+	.needs_pci_atomics = false,
+	.num_sdma_engines = 2,
+	.num_sdma_queues_per_engine = 8,
+};
+
 struct kfd_deviceid {
 	unsigned short did;
 	const struct kfd_device_info *device_info;
@@ -434,7 +450,9 @@ static const struct kfd_deviceid supported_devices[] = {
 	{ 0x66a3, &vega20_device_info },	/* Vega20 */
 	{ 0x66a4, &vega20_device_info },	/* Vega20 */
 	{ 0x66a7, &vega20_device_info },	/* Vega20 */
-	{ 0x66af, &vega20_device_info }		/* Vega20 */
+	{ 0x66af, &vega20_device_info },	/* Vega20 */
+	/* Navi10 */
+	{ 0x7310, &navi10_device_info },	/* Navi10 */
 };
 
 static int kfd_gtt_sa_init(struct kfd_dev *kfd, unsigned int buf_size,
@@ -517,6 +535,7 @@ static void kfd_cwsr_init(struct kfd_dev *kfd)
 			kfd->cwsr_isa = cwsr_trap_gfx8_hex;
 			kfd->cwsr_isa_size = sizeof(cwsr_trap_gfx8_hex);
 		} else {
+			/* TODO: Do we need another trap handler for navi10? */
 			BUILD_BUG_ON(sizeof(cwsr_trap_gfx9_hex) > PAGE_SIZE);
 			kfd->cwsr_isa = cwsr_trap_gfx9_hex;
 			kfd->cwsr_isa_size = sizeof(cwsr_trap_gfx9_hex);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 3528590ae90b..632e510b5396 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -1264,6 +1264,7 @@ static int map_queues_cpsch(struct device_queue_manager *dqm)
 		return 0;
 
 	retval = pm_send_runlist(&dqm->packets, &dqm->queues);
+	pr_debug("%s sent runlist\n", __func__);
 	if (retval) {
 		pr_err("failed to execute runlist\n");
 		return retval;
@@ -1785,6 +1786,9 @@ struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev)
 	case CHIP_RAVEN:
 		device_queue_manager_init_v9(&dqm->asic_ops);
 		break;
+	case CHIP_NAVI10:
+		device_queue_manager_init_v10_navi10(&dqm->asic_ops);
+		break;
 	default:
 		WARN(1, "Unexpected ASIC family %u",
 		     dev->device_info->asic_family);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
index 88b4c007696e..ff9cdc584120 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
@@ -212,6 +212,8 @@ void device_queue_manager_init_vi_tonga(
 		struct device_queue_manager_asic_ops *asic_ops);
 void device_queue_manager_init_v9(
 		struct device_queue_manager_asic_ops *asic_ops);
+void device_queue_manager_init_v10_navi10(
+		struct device_queue_manager_asic_ops *asic_ops);
 void program_sh_mem_settings(struct device_queue_manager *dqm,
 					struct qcm_process_device *qpd);
 unsigned int get_queues_num(struct device_queue_manager *dqm);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
new file mode 100644
index 000000000000..adb38850366c
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "kfd_device_queue_manager.h"
+#include "navi10_enum.h"
+#include "gc/gc_10_1_0_offset.h"
+#include "gc/gc_10_1_0_sh_mask.h"
+
+static int update_qpd_v10(struct device_queue_manager *dqm,
+			 struct qcm_process_device *qpd);
+static void init_sdma_vm_v10(struct device_queue_manager *dqm, struct queue *q,
+			    struct qcm_process_device *qpd);
+
+void device_queue_manager_init_v10_navi10(
+	struct device_queue_manager_asic_ops *asic_ops)
+{
+	asic_ops->update_qpd = update_qpd_v10;
+	asic_ops->init_sdma_vm = init_sdma_vm_v10;
+}
+
+static uint32_t compute_sh_mem_bases_64bit(struct kfd_process_device *pdd)
+{
+	uint32_t shared_base = pdd->lds_base >> 48;
+	uint32_t private_base = pdd->scratch_base >> 48;
+
+	return (shared_base << SH_MEM_BASES__SHARED_BASE__SHIFT) |
+		private_base;
+}
+
+static int update_qpd_v10(struct device_queue_manager *dqm,
+			 struct qcm_process_device *qpd)
+{
+	struct kfd_process_device *pdd;
+
+	pdd = qpd_to_pdd(qpd);
+
+	/* check if sh_mem_config register already configured */
+	if (qpd->sh_mem_config == 0) {
+		qpd->sh_mem_config =
+				SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+					SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT;
+#if 0
+		/* TODO:
+		 *    This shouldn't be an issue with Navi10.  Verify.
+		 */
+		if (vega10_noretry)
+			qpd->sh_mem_config |=
+				1 << SH_MEM_CONFIG__RETRY_DISABLE__SHIFT;
+#endif
+
+		qpd->sh_mem_ape1_limit = 0;
+		qpd->sh_mem_ape1_base = 0;
+	}
+
+	qpd->sh_mem_bases = compute_sh_mem_bases_64bit(pdd);
+
+	pr_debug("sh_mem_bases 0x%X\n", qpd->sh_mem_bases);
+
+	return 0;
+}
+
+static void init_sdma_vm_v10(struct device_queue_manager *dqm, struct queue *q,
+			    struct qcm_process_device *qpd)
+{
+	/* Not needed on SDMAv4 and newer */
+	q->properties.sdma_vm_addr = 0;
+}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c b/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
index 22a8e88b6a67..60521366dd31 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
@@ -405,6 +405,7 @@ int kfd_init_apertures(struct kfd_process *process)
 			case CHIP_VEGA12:
 			case CHIP_VEGA20:
 			case CHIP_RAVEN:
+			case CHIP_NAVI10:
 				kfd_init_apertures_v9(pdd, id);
 				break;
 			default:
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
index 229500c8c958..29c0bd2d7a5c 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
@@ -332,6 +332,9 @@ struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
 	case CHIP_RAVEN:
 		kernel_queue_init_v9(&kq->ops_asic_specific);
 		break;
+	case CHIP_NAVI10:
+		kernel_queue_init_v10(&kq->ops_asic_specific);
+		break;
 	default:
 		WARN(1, "Unexpected ASIC family %u",
 		     dev->device_info->asic_family);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.h b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.h
index a7116a939029..365fc674fea4 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.h
@@ -102,5 +102,6 @@ struct kernel_queue {
 void kernel_queue_init_cik(struct kernel_queue_ops *ops);
 void kernel_queue_init_vi(struct kernel_queue_ops *ops);
 void kernel_queue_init_v9(struct kernel_queue_ops *ops);
+void kernel_queue_init_v10(struct kernel_queue_ops *ops);
 
 #endif /* KFD_KERNEL_QUEUE_H_ */
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
new file mode 100644
index 000000000000..209ad518fba1
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
@@ -0,0 +1,348 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "kfd_kernel_queue.h"
+#include "kfd_device_queue_manager.h"
+#include "kfd_pm4_headers_ai.h"
+#include "kfd_pm4_opcodes.h"
+#include "gc/gc_10_1_0_sh_mask.h"
+
+static bool initialize_v10(struct kernel_queue *kq, struct kfd_dev *dev,
+			enum kfd_queue_type type, unsigned int queue_size);
+static void uninitialize_v10(struct kernel_queue *kq);
+static void submit_packet_v10(struct kernel_queue *kq);
+
+void kernel_queue_init_v10(struct kernel_queue_ops *ops)
+{
+	ops->initialize = initialize_v10;
+	ops->uninitialize = uninitialize_v10;
+	ops->submit_packet = submit_packet_v10;
+}
+
+static bool initialize_v10(struct kernel_queue *kq, struct kfd_dev *dev,
+			enum kfd_queue_type type, unsigned int queue_size)
+{
+	int retval;
+
+	retval = kfd_gtt_sa_allocate(dev, PAGE_SIZE, &kq->eop_mem);
+	if (retval != 0)
+		return false;
+
+	kq->eop_gpu_addr = kq->eop_mem->gpu_addr;
+	kq->eop_kernel_addr = kq->eop_mem->cpu_ptr;
+
+	memset(kq->eop_kernel_addr, 0, PAGE_SIZE);
+
+	return true;
+}
+
+static void uninitialize_v10(struct kernel_queue *kq)
+{
+	kfd_gtt_sa_free(kq->dev, kq->eop_mem);
+}
+
+static void submit_packet_v10(struct kernel_queue *kq)
+{
+	*kq->wptr64_kernel = kq->pending_wptr64;
+	write_kernel_doorbell64(kq->queue->properties.doorbell_ptr,
+				kq->pending_wptr64);
+}
+
+static int pm_map_process_v10(struct packet_manager *pm,
+		uint32_t *buffer, struct qcm_process_device *qpd)
+{
+	struct pm4_mes_map_process *packet;
+	uint64_t vm_page_table_base_addr = qpd->page_table_base;
+
+	packet = (struct pm4_mes_map_process *)buffer;
+	memset(buffer, 0, sizeof(struct pm4_mes_map_process));
+
+	packet->header.u32All = pm_build_pm4_header(IT_MAP_PROCESS,
+					sizeof(struct pm4_mes_map_process));
+	packet->bitfields2.diq_enable = (qpd->is_debug) ? 1 : 0;
+	packet->bitfields2.process_quantum = 1;
+	packet->bitfields2.pasid = qpd->pqm->process->pasid;
+	packet->bitfields14.gds_size = qpd->gds_size;
+	packet->bitfields14.num_gws = qpd->num_gws;
+	packet->bitfields14.num_oac = qpd->num_oac;
+	packet->bitfields14.sdma_enable = 1;
+
+	packet->bitfields14.num_queues = (qpd->is_debug) ? 0 : qpd->queue_count;
+
+	packet->sh_mem_config = qpd->sh_mem_config;
+	packet->sh_mem_bases = qpd->sh_mem_bases;
+	if (qpd->tba_addr) {
+		packet->sq_shader_tba_lo = lower_32_bits(qpd->tba_addr >> 8);
+		packet->sq_shader_tba_hi = (1 << SQ_SHADER_TBA_HI__TRAP_EN__SHIFT) |
+			upper_32_bits(qpd->tba_addr >> 8);
+		packet->sq_shader_tma_lo = lower_32_bits(qpd->tma_addr >> 8);
+		packet->sq_shader_tma_hi = upper_32_bits(qpd->tma_addr >> 8);
+	}
+
+	packet->gds_addr_lo = lower_32_bits(qpd->gds_context_area);
+	packet->gds_addr_hi = upper_32_bits(qpd->gds_context_area);
+
+	packet->vm_context_page_table_base_addr_lo32 =
+			lower_32_bits(vm_page_table_base_addr);
+	packet->vm_context_page_table_base_addr_hi32 =
+			upper_32_bits(vm_page_table_base_addr);
+
+	return 0;
+}
+
+static int pm_runlist_v10(struct packet_manager *pm, uint32_t *buffer,
+			uint64_t ib, size_t ib_size_in_dwords, bool chain)
+{
+	struct pm4_mes_runlist *packet;
+
+	int concurrent_proc_cnt = 0;
+	struct kfd_dev *kfd = pm->dqm->dev;
+
+	/* Determine the number of processes to map together to HW:
+	 * it can not exceed the number of VMIDs available to the
+	 * scheduler, and it is determined by the smaller of the number
+	 * of processes in the runlist and kfd module parameter
+	 * hws_max_conc_proc.
+	 * Note: the arbitration between the number of VMIDs and
+	 * hws_max_conc_proc has been done in
+	 * kgd2kfd_device_init().
+	 */
+	concurrent_proc_cnt = min(pm->dqm->processes_count,
+			kfd->max_proc_per_quantum);
+
+
+	packet = (struct pm4_mes_runlist *)buffer;
+
+	memset(buffer, 0, sizeof(struct pm4_mes_runlist));
+	packet->header.u32All = pm_build_pm4_header(IT_RUN_LIST,
+						sizeof(struct pm4_mes_runlist));
+
+	packet->bitfields4.ib_size = ib_size_in_dwords;
+	packet->bitfields4.chain = chain ? 1 : 0;
+	packet->bitfields4.offload_polling = 0;
+	packet->bitfields4.valid = 1;
+	packet->bitfields4.process_cnt = concurrent_proc_cnt;
+	packet->ordinal2 = lower_32_bits(ib);
+	packet->ib_base_hi = upper_32_bits(ib);
+
+	return 0;
+}
+
+static int pm_map_queues_v10(struct packet_manager *pm, uint32_t *buffer,
+		struct queue *q, bool is_static)
+{
+	struct pm4_mes_map_queues *packet;
+	bool use_static = is_static;
+
+	packet = (struct pm4_mes_map_queues *)buffer;
+	memset(buffer, 0, sizeof(struct pm4_mes_map_queues));
+
+	packet->header.u32All = pm_build_pm4_header(IT_MAP_QUEUES,
+					sizeof(struct pm4_mes_map_queues));
+	packet->bitfields2.alloc_format =
+		alloc_format__mes_map_queues__one_per_pipe_vi;
+	packet->bitfields2.num_queues = 1;
+	packet->bitfields2.queue_sel =
+		queue_sel__mes_map_queues__map_to_hws_determined_queue_slots_vi;
+
+	packet->bitfields2.engine_sel =
+		engine_sel__mes_map_queues__compute_vi;
+	packet->bitfields2.queue_type =
+		queue_type__mes_map_queues__normal_compute_vi;
+
+	switch (q->properties.type) {
+	case KFD_QUEUE_TYPE_COMPUTE:
+		if (use_static)
+			packet->bitfields2.queue_type =
+		queue_type__mes_map_queues__normal_latency_static_queue_vi;
+		break;
+	case KFD_QUEUE_TYPE_DIQ:
+		packet->bitfields2.queue_type =
+			queue_type__mes_map_queues__debug_interface_queue_vi;
+		break;
+	case KFD_QUEUE_TYPE_SDMA:
+		packet->bitfields2.engine_sel = q->properties.sdma_engine_id +
+				engine_sel__mes_map_queues__sdma0_vi;
+		use_static = false; /* no static queues under SDMA */
+		break;
+	default:
+		WARN(1, "queue type %d\n", q->properties.type);
+		return -EINVAL;
+	}
+	packet->bitfields3.doorbell_offset =
+			q->properties.doorbell_off;
+
+	packet->mqd_addr_lo =
+			lower_32_bits(q->gart_mqd_addr);
+
+	packet->mqd_addr_hi =
+			upper_32_bits(q->gart_mqd_addr);
+
+	packet->wptr_addr_lo =
+			lower_32_bits((uint64_t)q->properties.write_ptr);
+
+	packet->wptr_addr_hi =
+			upper_32_bits((uint64_t)q->properties.write_ptr);
+
+	return 0;
+}
+
+static int pm_unmap_queues_v10(struct packet_manager *pm, uint32_t *buffer,
+			enum kfd_queue_type type,
+			enum kfd_unmap_queues_filter filter,
+			uint32_t filter_param, bool reset,
+			unsigned int sdma_engine)
+{
+	struct pm4_mes_unmap_queues *packet;
+
+	packet = (struct pm4_mes_unmap_queues *)buffer;
+	memset(buffer, 0, sizeof(struct pm4_mes_unmap_queues));
+
+	packet->header.u32All = pm_build_pm4_header(IT_UNMAP_QUEUES,
+					sizeof(struct pm4_mes_unmap_queues));
+	switch (type) {
+	case KFD_QUEUE_TYPE_COMPUTE:
+	case KFD_QUEUE_TYPE_DIQ:
+		packet->bitfields2.engine_sel =
+			engine_sel__mes_unmap_queues__compute;
+		break;
+	case KFD_QUEUE_TYPE_SDMA:
+		packet->bitfields2.engine_sel =
+			engine_sel__mes_unmap_queues__sdma0 + sdma_engine;
+		break;
+	default:
+		WARN(1, "queue type %d\n", type);
+		break;
+	}
+
+	if (reset)
+		packet->bitfields2.action =
+			action__mes_unmap_queues__reset_queues;
+	else
+		packet->bitfields2.action =
+			action__mes_unmap_queues__preempt_queues;
+
+	switch (filter) {
+	case KFD_UNMAP_QUEUES_FILTER_SINGLE_QUEUE:
+		packet->bitfields2.queue_sel =
+			queue_sel__mes_unmap_queues__perform_request_on_specified_queues;
+		packet->bitfields2.num_queues = 1;
+		packet->bitfields3b.doorbell_offset0 = filter_param;
+		break;
+	case KFD_UNMAP_QUEUES_FILTER_BY_PASID:
+		packet->bitfields2.queue_sel =
+			queue_sel__mes_unmap_queues__perform_request_on_pasid_queues;
+		packet->bitfields3a.pasid = filter_param;
+		break;
+	case KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES:
+		packet->bitfields2.queue_sel =
+			queue_sel__mes_unmap_queues__unmap_all_queues;
+		break;
+	case KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES:
+		/* in this case, we do not preempt static queues */
+		packet->bitfields2.queue_sel =
+			queue_sel__mes_unmap_queues__unmap_all_non_static_queues;
+		break;
+	default:
+		WARN(1, "filter %d\n", filter);
+		break;
+	}
+
+	return 0;
+
+}
+
+static int pm_query_status_v10(struct packet_manager *pm, uint32_t *buffer,
+			uint64_t fence_address,	uint32_t fence_value)
+{
+	struct pm4_mes_query_status *packet;
+
+	packet = (struct pm4_mes_query_status *)buffer;
+	memset(buffer, 0, sizeof(struct pm4_mes_query_status));
+
+
+	packet->header.u32All = pm_build_pm4_header(IT_QUERY_STATUS,
+					sizeof(struct pm4_mes_query_status));
+
+	packet->bitfields2.context_id = 0;
+	packet->bitfields2.interrupt_sel =
+			interrupt_sel__mes_query_status__completion_status;
+	packet->bitfields2.command =
+			command__mes_query_status__fence_only_after_write_ack;
+
+	packet->addr_hi = upper_32_bits((uint64_t)fence_address);
+	packet->addr_lo = lower_32_bits((uint64_t)fence_address);
+	packet->data_hi = upper_32_bits((uint64_t)fence_value);
+	packet->data_lo = lower_32_bits((uint64_t)fence_value);
+
+	return 0;
+}
+
+
+static int pm_release_mem_v10(uint64_t gpu_addr, uint32_t *buffer)
+{
+	struct pm4_mec_release_mem *packet;
+
+	WARN_ON(!buffer);
+
+	packet = (struct pm4_mec_release_mem *)buffer;
+	memset(buffer, 0, sizeof(struct pm4_mec_release_mem));
+
+	packet->header.u32All = pm_build_pm4_header(IT_RELEASE_MEM,
+					sizeof(struct pm4_mec_release_mem));
+
+	packet->bitfields2.event_type = CACHE_FLUSH_AND_INV_TS_EVENT;
+	packet->bitfields2.event_index = event_index__mec_release_mem__end_of_pipe;
+	packet->bitfields2.tcl1_action_ena = 1;
+	packet->bitfields2.tc_action_ena = 1;
+	packet->bitfields2.cache_policy = cache_policy__mec_release_mem__lru;
+
+	packet->bitfields3.data_sel = data_sel__mec_release_mem__send_32_bit_low;
+	packet->bitfields3.int_sel =
+		int_sel__mec_release_mem__send_interrupt_after_write_confirm;
+
+	packet->bitfields4.address_lo_32b = (gpu_addr & 0xffffffff) >> 2;
+	packet->address_hi = upper_32_bits(gpu_addr);
+
+	packet->data_lo = 0;
+
+	return sizeof(struct pm4_mec_release_mem) / sizeof(unsigned int);
+}
+
+const struct packet_manager_funcs kfd_v10_pm_funcs = {
+	.map_process			= pm_map_process_v10,
+	.runlist			= pm_runlist_v10,
+	.set_resources			= pm_set_resources_vi,
+	.map_queues			= pm_map_queues_v10,
+	.unmap_queues			= pm_unmap_queues_v10,
+	.query_status			= pm_query_status_v10,
+	.release_mem			= pm_release_mem_v10,
+	.map_process_size		= sizeof(struct pm4_mes_map_process),
+	.runlist_size			= sizeof(struct pm4_mes_runlist),
+	.set_resources_size		= sizeof(struct pm4_mes_set_resources),
+	.map_queues_size		= sizeof(struct pm4_mes_map_queues),
+	.unmap_queues_size		= sizeof(struct pm4_mes_unmap_queues),
+	.query_status_size		= sizeof(struct pm4_mes_query_status),
+	.release_mem_size		= sizeof(struct pm4_mec_release_mem)
+};
+
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
new file mode 100644
index 000000000000..6663b72370f6
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
@@ -0,0 +1,519 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include "kfd_priv.h"
+#include "kfd_mqd_manager.h"
+#include "v10_structs.h"
+#include "gc/gc_10_1_0_offset.h"
+#include "gc/gc_10_1_0_sh_mask.h"
+#include "amdgpu_amdkfd.h"
+
+static inline struct v10_compute_mqd *get_mqd(void *mqd)
+{
+	return (struct v10_compute_mqd *)mqd;
+}
+
+static inline struct v10_sdma_mqd *get_sdma_mqd(void *mqd)
+{
+	return (struct v10_sdma_mqd *)mqd;
+}
+
+static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+			struct queue_properties *q)
+{
+	struct v10_compute_mqd *m;
+	uint32_t se_mask[4] = {0}; /* 4 is the max # of SEs */
+
+	if (q->cu_mask_count == 0)
+		return;
+
+	mqd_symmetrically_map_cu_mask(mm,
+		q->cu_mask, q->cu_mask_count, se_mask);
+
+	m = get_mqd(mqd);
+	m->compute_static_thread_mgmt_se0 = se_mask[0];
+	m->compute_static_thread_mgmt_se1 = se_mask[1];
+	m->compute_static_thread_mgmt_se2 = se_mask[2];
+	m->compute_static_thread_mgmt_se3 = se_mask[3];
+
+	pr_debug("update cu mask to %#x %#x %#x %#x\n",
+		m->compute_static_thread_mgmt_se0,
+		m->compute_static_thread_mgmt_se1,
+		m->compute_static_thread_mgmt_se2,
+		m->compute_static_thread_mgmt_se3);
+}
+
+static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
+		struct queue_properties *q)
+{
+	int retval;
+	struct kfd_mem_obj *mqd_mem_obj = NULL;
+
+	/* From V9,  for CWSR, the control stack is located on the next page
+	 * boundary after the mqd, we will use the gtt allocation function
+	 * instead of sub-allocation function.
+	 */
+	if (kfd->cwsr_enabled && (q->type == KFD_QUEUE_TYPE_COMPUTE)) {
+		mqd_mem_obj = kzalloc(sizeof(struct kfd_mem_obj), GFP_NOIO);
+		if (!mqd_mem_obj)
+			return NULL;
+		retval = amdgpu_amdkfd_alloc_gtt_mem(kfd->kgd,
+			ALIGN(q->ctl_stack_size, PAGE_SIZE) +
+				ALIGN(sizeof(struct v10_compute_mqd), PAGE_SIZE),
+			&(mqd_mem_obj->gtt_mem),
+			&(mqd_mem_obj->gpu_addr),
+			(void *)&(mqd_mem_obj->cpu_ptr), true);
+	} else {
+		retval = kfd_gtt_sa_allocate(kfd, sizeof(struct v10_compute_mqd),
+				&mqd_mem_obj);
+	}
+
+	if (retval) {
+		kfree(mqd_mem_obj);
+		return NULL;
+	}
+
+	return mqd_mem_obj;
+
+}
+
+static int init_mqd(struct mqd_manager *mm, void **mqd,
+			struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
+			struct queue_properties *q)
+{
+	int retval;
+	uint64_t addr;
+	struct v10_compute_mqd *m;
+	struct kfd_dev *kfd = mm->dev;
+
+	*mqd_mem_obj = allocate_mqd(kfd, q);
+	if (!*mqd_mem_obj)
+		return -ENOMEM;
+
+	m = (struct v10_compute_mqd *) (*mqd_mem_obj)->cpu_ptr;
+	addr = (*mqd_mem_obj)->gpu_addr;
+
+	memset(m, 0, sizeof(struct v10_compute_mqd));
+
+	m->header = 0xC0310800;
+	m->compute_pipelinestat_enable = 1;
+	m->compute_static_thread_mgmt_se0 = 0xFFFFFFFF;
+	m->compute_static_thread_mgmt_se1 = 0xFFFFFFFF;
+	m->compute_static_thread_mgmt_se2 = 0xFFFFFFFF;
+	m->compute_static_thread_mgmt_se3 = 0xFFFFFFFF;
+
+	m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+			0x53 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
+	m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+	m->cp_mqd_base_addr_lo        = lower_32_bits(addr);
+	m->cp_mqd_base_addr_hi        = upper_32_bits(addr);
+
+	m->cp_hqd_quantum = 1 << CP_HQD_QUANTUM__QUANTUM_EN__SHIFT |
+			1 << CP_HQD_QUANTUM__QUANTUM_SCALE__SHIFT |
+			10 << CP_HQD_QUANTUM__QUANTUM_DURATION__SHIFT;
+
+	m->cp_hqd_pipe_priority = 1;
+	m->cp_hqd_queue_priority = 15;
+
+	if (q->format == KFD_QUEUE_FORMAT_AQL) {
+		m->cp_hqd_aql_control =
+			1 << CP_HQD_AQL_CONTROL__CONTROL0__SHIFT;
+	}
+
+	if (mm->dev->cwsr_enabled) {
+		m->cp_hqd_persistent_state |=
+			(1 << CP_HQD_PERSISTENT_STATE__QSWITCH_MODE__SHIFT);
+		m->cp_hqd_ctx_save_base_addr_lo =
+			lower_32_bits(q->ctx_save_restore_area_address);
+		m->cp_hqd_ctx_save_base_addr_hi =
+			upper_32_bits(q->ctx_save_restore_area_address);
+		m->cp_hqd_ctx_save_size = q->ctx_save_restore_area_size;
+		m->cp_hqd_cntl_stack_size = q->ctl_stack_size;
+		m->cp_hqd_cntl_stack_offset = q->ctl_stack_size;
+		m->cp_hqd_wg_state_offset = q->ctl_stack_size;
+	}
+
+	*mqd = m;
+	if (gart_addr)
+		*gart_addr = addr;
+	retval = mm->update_mqd(mm, m, q);
+
+	return retval;
+}
+
+static int load_mqd(struct mqd_manager *mm, void *mqd,
+			uint32_t pipe_id, uint32_t queue_id,
+			struct queue_properties *p, struct mm_struct *mms)
+{
+	int r = 0;
+	/* AQL write pointer counts in 64B packets, PM4/CP counts in dwords. */
+	uint32_t wptr_shift = (p->format == KFD_QUEUE_FORMAT_AQL ? 4 : 0);
+
+	r = mm->dev->kfd2kgd->hqd_load(mm->dev->kgd, mqd, pipe_id, queue_id,
+					  (uint32_t __user *)p->write_ptr,
+					  wptr_shift, 0, mms);
+	return r;
+}
+
+static int update_mqd(struct mqd_manager *mm, void *mqd,
+		      struct queue_properties *q)
+{
+	struct v10_compute_mqd *m;
+
+	m = get_mqd(mqd);
+
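+	/* the low bits of PQ_CONTROL encode log2(queue size in dwords) - 1 */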
+	m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
+	m->cp_hqd_pq_control |=
+			ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+	pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+	m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+	m->cp_hqd_pq_base_hi = upper_32_bits((uint64_t)q->queue_address >> 8);
+
+	m->cp_hqd_pq_rptr_report_addr_lo = lower_32_bits((uint64_t)q->read_ptr);
+	m->cp_hqd_pq_rptr_report_addr_hi = upper_32_bits((uint64_t)q->read_ptr);
+	m->cp_hqd_pq_wptr_poll_addr_lo = lower_32_bits((uint64_t)q->write_ptr);
+	m->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits((uint64_t)q->write_ptr);
+
+	m->cp_hqd_pq_doorbell_control =
+		q->doorbell_off <<
+			CP_HQD_PQ_DOORBELL_CONTROL__DOORBELL_OFFSET__SHIFT;
+	pr_debug("cp_hqd_pq_doorbell_control 0x%x\n",
+			m->cp_hqd_pq_doorbell_control);
+
+	m->cp_hqd_ib_control = 3 << CP_HQD_IB_CONTROL__MIN_IB_AVAIL_SIZE__SHIFT;
+
+	/*
+	 * HW does not clamp this field correctly. Maximum EOP queue size
+	 * is constrained by per-SE EOP done signal count, which is 8-bit.
+	 * Limit is 0xFF EOP entries (= 0x7F8 dwords). CP will not submit
+	 * more than (EOP entry count - 1) so a queue size of 0x800 dwords
+	 * is safe, giving a maximum field value of 0xA.
+	 */
+	m->cp_hqd_eop_control = min(0xA,
+		ffs(q->eop_ring_buffer_size / sizeof(unsigned int)) - 1 - 1);
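+	/* e.g. an 8KB EOP buffer is 0x800 dwords: ffs(0x800) - 1 - 1 = 0xA */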
+	m->cp_hqd_eop_base_addr_lo =
+			lower_32_bits(q->eop_ring_buffer_address >> 8);
+	m->cp_hqd_eop_base_addr_hi =
+			upper_32_bits(q->eop_ring_buffer_address >> 8);
+
+	m->cp_hqd_iq_timer = 0;
+
+	m->cp_hqd_vmid = q->vmid;
+
+	if (q->format == KFD_QUEUE_FORMAT_AQL) {
+		/* GC 10 removed WPP_CLAMP from PQ Control */
+		m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__NO_UPDATE_RPTR_MASK |
+				2 << CP_HQD_PQ_CONTROL__SLOT_BASED_WPTR__SHIFT |
+				1 << CP_HQD_PQ_CONTROL__QUEUE_FULL_EN__SHIFT;
+		m->cp_hqd_pq_doorbell_control |=
+			1 << CP_HQD_PQ_DOORBELL_CONTROL__DOORBELL_BIF_DROP__SHIFT;
+	}
+	if (mm->dev->cwsr_enabled)
+		m->cp_hqd_ctx_save_control = 0;
+
+	update_cu_mask(mm, mqd, q);
+
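+	/* a queue is schedulable only if it is sized, mapped, given a CU share, and not evicted */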
+	q->is_active = (q->queue_size > 0 &&
+			q->queue_address != 0 &&
+			q->queue_percent > 0 &&
+			!q->is_evicted);
+
+	return 0;
+}
+
+static int destroy_mqd(struct mqd_manager *mm, void *mqd,
+			enum kfd_preempt_type type,
+			unsigned int timeout, uint32_t pipe_id,
+			uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_destroy
+		(mm->dev->kgd, mqd, type, timeout,
+		pipe_id, queue_id);
+}
+
+static void uninit_mqd(struct mqd_manager *mm, void *mqd,
+			struct kfd_mem_obj *mqd_mem_obj)
+{
+	struct kfd_dev *kfd = mm->dev;
+
+	if (mqd_mem_obj->gtt_mem) {
+		amdgpu_amdkfd_free_gtt_mem(kfd->kgd, mqd_mem_obj->gtt_mem);
+		kfree(mqd_mem_obj);
+	} else {
+		kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
+	}
+}
+
+static bool is_occupied(struct mqd_manager *mm, void *mqd,
+			uint64_t queue_address,	uint32_t pipe_id,
+			uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_is_occupied(
+		mm->dev->kgd, queue_address,
+		pipe_id, queue_id);
+}
+
+static int get_wave_state(struct mqd_manager *mm, void *mqd,
+			  void __user *ctl_stack,
+			  u32 *ctl_stack_used_size,
+			  u32 *save_area_used_size)
+{
+	struct v10_compute_mqd *m;
+
+	/* Control stack is located one page after MQD. */
+	void *mqd_ctl_stack = (void *)((uintptr_t)mqd + PAGE_SIZE);
+
+	m = get_mqd(mqd);
+
+	*ctl_stack_used_size = m->cp_hqd_cntl_stack_size -
+		m->cp_hqd_cntl_stack_offset;
+	*save_area_used_size = m->cp_hqd_wg_state_offset -
+		m->cp_hqd_cntl_stack_size;
+
+	if (copy_to_user(ctl_stack, mqd_ctl_stack, m->cp_hqd_cntl_stack_size))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int init_mqd_hiq(struct mqd_manager *mm, void **mqd,
+			struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
+			struct queue_properties *q)
+{
+	struct v10_compute_mqd *m;
+	int retval;
+
+	retval = init_mqd(mm, mqd, mqd_mem_obj, gart_addr, q);
+
+	if (retval != 0)
+		return retval;
+
+	m = get_mqd(*mqd);
+
+	m->cp_hqd_pq_control |= 1 << CP_HQD_PQ_CONTROL__PRIV_STATE__SHIFT |
+			1 << CP_HQD_PQ_CONTROL__KMD_QUEUE__SHIFT;
+
+	return retval;
+}
+
+static int update_mqd_hiq(struct mqd_manager *mm, void *mqd,
+			struct queue_properties *q)
+{
+	struct v10_compute_mqd *m;
+	int retval;
+
+	retval = update_mqd(mm, mqd, q);
+
+	if (retval != 0)
+		return retval;
+
+	/* TODO: update_mqd() already sets cp_hqd_vmid; this looks redundant. */
+	m = get_mqd(mqd);
+	m->cp_hqd_vmid = q->vmid;
+	return retval;
+}
+
+static int init_mqd_sdma(struct mqd_manager *mm, void **mqd,
+		struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
+		struct queue_properties *q)
+{
+	int retval;
+	struct v10_sdma_mqd *m;
+
+	retval = kfd_gtt_sa_allocate(mm->dev,
+			sizeof(struct v10_sdma_mqd),
+			mqd_mem_obj);
+
+	if (retval != 0)
+		return -ENOMEM;
+
+	m = (struct v10_sdma_mqd *) (*mqd_mem_obj)->cpu_ptr;
+
+	memset(m, 0, sizeof(struct v10_sdma_mqd));
+
+	*mqd = m;
+	if (gart_addr)
+		*gart_addr = (*mqd_mem_obj)->gpu_addr;
+
+	retval = mm->update_mqd(mm, m, q);
+
+	return retval;
+}
+
+static void uninit_mqd_sdma(struct mqd_manager *mm, void *mqd,
+		struct kfd_mem_obj *mqd_mem_obj)
+{
+	kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
+}
+
+static int load_mqd_sdma(struct mqd_manager *mm, void *mqd,
+		uint32_t pipe_id, uint32_t queue_id,
+		struct queue_properties *p, struct mm_struct *mms)
+{
+	return mm->dev->kfd2kgd->hqd_sdma_load(mm->dev->kgd, mqd,
+					       (uint32_t __user *)p->write_ptr,
+					       mms);
+}
+
+#define SDMA_RLC_DUMMY_DEFAULT 0xf
+
+static int update_mqd_sdma(struct mqd_manager *mm, void *mqd,
+		struct queue_properties *q)
+{
+	struct v10_sdma_mqd *m;
+
+	m = get_sdma_mqd(mqd);
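+	/* the RB_SIZE field below is log2(ring size in dwords) */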
+	m->sdmax_rlcx_rb_cntl = (ffs(q->queue_size / sizeof(unsigned int)) - 1)
+		<< SDMA0_RLC0_RB_CNTL__RB_SIZE__SHIFT |
+		q->vmid << SDMA0_RLC0_RB_CNTL__RB_VMID__SHIFT |
+		1 << SDMA0_RLC0_RB_CNTL__RPTR_WRITEBACK_ENABLE__SHIFT |
+		6 << SDMA0_RLC0_RB_CNTL__RPTR_WRITEBACK_TIMER__SHIFT;
+
+	m->sdmax_rlcx_rb_base = lower_32_bits(q->queue_address >> 8);
+	m->sdmax_rlcx_rb_base_hi = upper_32_bits(q->queue_address >> 8);
+	m->sdmax_rlcx_rb_rptr_addr_lo = lower_32_bits((uint64_t)q->read_ptr);
+	m->sdmax_rlcx_rb_rptr_addr_hi = upper_32_bits((uint64_t)q->read_ptr);
+	m->sdmax_rlcx_doorbell_offset =
+		q->doorbell_off << SDMA0_RLC0_DOORBELL_OFFSET__OFFSET__SHIFT;
+
+	m->sdma_engine_id = q->sdma_engine_id;
+	m->sdma_queue_id = q->sdma_queue_id;
+	m->sdmax_rlcx_dummy_reg = SDMA_RLC_DUMMY_DEFAULT;
+
+	q->is_active = (q->queue_size > 0 &&
+			q->queue_address != 0 &&
+			q->queue_percent > 0 &&
+			!q->is_evicted);
+	return 0;
+}
+
+/*
+ * preempt type here is ignored because there is only one way
+ * to preempt the sdma queue
+ */
+static int destroy_mqd_sdma(struct mqd_manager *mm, void *mqd,
+		enum kfd_preempt_type type,
+		unsigned int timeout, uint32_t pipe_id,
+		uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_sdma_destroy(mm->dev->kgd, mqd, timeout);
+}
+
+static bool is_occupied_sdma(struct mqd_manager *mm, void *mqd,
+		uint64_t queue_address, uint32_t pipe_id,
+		uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_sdma_is_occupied(mm->dev->kgd, mqd);
+}
+
+#if defined(CONFIG_DEBUG_FS)
+
+static int debugfs_show_mqd(struct seq_file *m, void *data)
+{
+	seq_hex_dump(m, "    ", DUMP_PREFIX_OFFSET, 32, 4,
+		     data, sizeof(struct v10_compute_mqd), false);
+	return 0;
+}
+
+static int debugfs_show_mqd_sdma(struct seq_file *m, void *data)
+{
+	seq_hex_dump(m, "    ", DUMP_PREFIX_OFFSET, 32, 4,
+		     data, sizeof(struct v10_sdma_mqd), false);
+	return 0;
+}
+
+#endif
+
+struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
+		struct kfd_dev *dev)
+{
+	struct mqd_manager *mqd;
+
+	if (WARN_ON(type >= KFD_MQD_TYPE_MAX))
+		return NULL;
+
+	mqd = kzalloc(sizeof(*mqd), GFP_NOIO);
+	if (!mqd)
+		return NULL;
+
+	mqd->dev = dev;
+
+	switch (type) {
+	case KFD_MQD_TYPE_CP:
+		pr_debug("%s@%i\n", __func__, __LINE__);
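+		/* fall through: CP and COMPUTE queues use the same MQD functions */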
+	case KFD_MQD_TYPE_COMPUTE:
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->init_mqd = init_mqd;
+		mqd->uninit_mqd = uninit_mqd;
+		mqd->load_mqd = load_mqd;
+		mqd->update_mqd = update_mqd;
+		mqd->destroy_mqd = destroy_mqd;
+		mqd->is_occupied = is_occupied;
+		mqd->get_wave_state = get_wave_state;
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd;
+#endif
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		break;
+	case KFD_MQD_TYPE_HIQ:
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->init_mqd = init_mqd_hiq;
+		mqd->uninit_mqd = uninit_mqd;
+		mqd->load_mqd = load_mqd;
+		mqd->update_mqd = update_mqd_hiq;
+		mqd->destroy_mqd = destroy_mqd;
+		mqd->is_occupied = is_occupied;
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd;
+#endif
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		break;
+	case KFD_MQD_TYPE_SDMA:
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->init_mqd = init_mqd_sdma;
+		mqd->uninit_mqd = uninit_mqd_sdma;
+		mqd->load_mqd = load_mqd_sdma;
+		mqd->update_mqd = update_mqd_sdma;
+		mqd->destroy_mqd = destroy_mqd_sdma;
+		mqd->is_occupied = is_occupied_sdma;
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd_sdma;
+#endif
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		break;
+	default:
+		kfree(mqd);
+		return NULL;
+	}
+
+	return mqd;
+}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
index 808194663a7d..c72c8f5fd54c 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
@@ -237,6 +237,9 @@ int pm_init(struct packet_manager *pm, struct device_queue_manager *dqm)
 	case CHIP_RAVEN:
 		pm->pmf = &kfd_v9_pm_funcs;
 		break;
+	case CHIP_NAVI10:
+		pm->pmf = &kfd_v10_pm_funcs;
+		break;
 	default:
 		WARN(1, "Unexpected ASIC family %u",
 		     dqm->dev->device_info->asic_family);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index da589ee1366c..40e40d1e4dd2 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -171,6 +171,10 @@ enum cache_policy {
 	cache_policy_noncoherent
 };
 
+#define KFD_IS_VI(chip) ((chip) >= CHIP_CARRIZO && (chip) <= CHIP_POLARIS11)
+#define KFD_IS_DGPU(chip) (((chip) >= CHIP_TONGA && \
+			   (chip) <= CHIP_NAVI10) || \
+			   (chip) == CHIP_HAWAII)
 #define KFD_IS_SOC15(chip) ((chip) >= CHIP_VEGA10)
 
 struct kfd_event_interrupt_class {
@@ -861,6 +865,8 @@ struct mqd_manager *mqd_manager_init_vi_tonga(enum KFD_MQD_TYPE type,
 		struct kfd_dev *dev);
 struct mqd_manager *mqd_manager_init_v9(enum KFD_MQD_TYPE type,
 		struct kfd_dev *dev);
+struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
+		struct kfd_dev *dev);
 struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev);
 void device_queue_manager_uninit(struct device_queue_manager *dqm);
 struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
@@ -950,6 +956,7 @@ struct packet_manager_funcs {
 
 extern const struct packet_manager_funcs kfd_vi_pm_funcs;
 extern const struct packet_manager_funcs kfd_v9_pm_funcs;
+extern const struct packet_manager_funcs kfd_v10_pm_funcs;
 
 int pm_init(struct packet_manager *pm, struct device_queue_manager *dqm);
 void pm_uninit(struct packet_manager *pm);
@@ -969,7 +976,8 @@ void pm_release_ib(struct packet_manager *pm);
 /* Following PM funcs can be shared among VI and AI */
 unsigned int pm_build_pm4_header(unsigned int opcode, size_t packet_size);
 int pm_set_resources_vi(struct packet_manager *pm, uint32_t *buffer,
 				struct scheduling_resources *res);
+void kfd_pm_func_init_v10(struct packet_manager *pm, uint16_t fw_ver);
 
 uint64_t kfd_get_number_elems(struct kfd_dev *kfd);
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
index 2c40ab4fe8de..c2e6e47abaf2 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
@@ -1321,6 +1321,7 @@ int kfd_topology_add_device(struct kfd_dev *gpu)
 	case CHIP_VEGA12:
 	case CHIP_VEGA20:
 	case CHIP_RAVEN:
+	case CHIP_NAVI10:
 		dev->node_props.capability |= ((HSA_CAP_DOORBELL_TYPE_2_0 <<
 			HSA_CAP_DOORBELL_TYPE_TOTALBITS_SHIFT) &
 			HSA_CAP_DOORBELL_TYPE_TOTALBITS_MASK);
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 129/459] drm/amdkfd: Added cwsr trap handler for gfx10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (27 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 128/459] drm/amdkfd: Add navi10 support to amdkfd. (v2) Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 130/459] drm/amdkfd: Moved gfx10 cwsr binary to cwsr_trap_handler.h Alex Deucher
                     ` (63 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Oak Zeng, Felix Kuehling

From: Oak Zeng <Oak.Zeng-5C7GfCeVMHo@public.gmane.org>

CWSR (compute wave save restore) is used for preempting
compute queues.
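
When a compute queue is preempted, the handler saves the wave context
(VGPRs, LDS for the first wave of the threadgroup, SGPRs, then a block
of HW registers) into a per-wave area of the context save buffer, and
the restore entry walks the same layout. As a rough, illustrative-only
C sketch of the per-wave footprint this implies (the names below are
hypothetical; the handler derives the real sizes from HW_REG_GPR_ALLOC
and HW_REG_LDS_ALLOC at save time):

static unsigned int cwsr_wave_save_bytes(unsigned int vgpr_size,
					 unsigned int lds_size,
					 bool wave64, bool first_wave)
{
	unsigned int lanes = wave64 ? 64 : 32;
	unsigned int bytes = 0;

	bytes += (vgpr_size + 1) * 4 * lanes * 4;  /* VGPRs, 4 bytes/lane */
	if (first_wave)
		bytes += lds_size * 64 * 4;        /* LDS, units of 64 dwords */
	bytes += 106 * 4;                          /* all 106 SGPRs on gfx10 */
	bytes += 9 * 4;                            /* M0, PC, EXEC, STATUS,
						      TRAPSTS, XNACK_MASK, MODE */

	return bytes;
}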

Signed-off-by: Oak Zeng <Oak.Zeng-5C7GfCeVMHo@public.gmane.org>
Reviewed-by: Felix Kuehling <Felix.Kuehling-5C7GfCeVMHo@public.gmane.org>
Signed-off-by: Alex Deucher <alexander.deucher-5C7GfCeVMHo@public.gmane.org>
---
 .../amd/amdkfd/cwsr_trap_handler_gfx10.asm    | 1424 +++++++++++++++++
 drivers/gpu/drm/amd/amdkfd/kfd_device.c       |   10 +-
 2 files changed, 1431 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm

diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
new file mode 100644
index 000000000000..e6d345f7998b
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
@@ -0,0 +1,1424 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+
+#if 0
+shader main
+
+asic(DEFAULT)
+
+type(CS)
+
+wave_size(32)
+/*************************************************************************/
+/*					control on how to run the shader					 */
+/*************************************************************************/
+//any hack that needs to be made to run this code in EMU (either because various EMU code is not ready or no compute save & restore in EMU run)
+var EMU_RUN_HACK					=	0
+var EMU_RUN_HACK_RESTORE_NORMAL		=	0
+var EMU_RUN_HACK_SAVE_NORMAL_EXIT	=	0
+var	EMU_RUN_HACK_SAVE_SINGLE_WAVE	=	0
+var EMU_RUN_HACK_SAVE_FIRST_TIME	= 	0					//for interrupted restore in which the first save is through EMU_RUN_HACK
+var SAVE_LDS						= 	0
+var WG_BASE_ADDR_LO					=   0x9000a000
+var WG_BASE_ADDR_HI					=	0x0
+var WAVE_SPACE						=	0x9000				//memory size each wave occupies in workgroup state mem; increased from 0x5000 to 0x9000 because more SGPRs need to be saved
+var CTX_SAVE_CONTROL				=	0x0
+var CTX_RESTORE_CONTROL				=	CTX_SAVE_CONTROL
+var SIM_RUN_HACK					=	0					//any hack that needs to be made to run this code in SIM (either because various RTL code is not ready or no compute save & restore in RTL run)
+var	SGPR_SAVE_USE_SQC				=	0					//use SQC D$ to do the write
+var USE_MTBUF_INSTEAD_OF_MUBUF		=	0					//need to change BUF_DATA_FORMAT in S_SAVE_BUF_RSRC_WORD3_MISC from 0 to BUF_DATA_FORMAT_32 if set to 1 (i.e. 0x00827FAC)
+var SWIZZLE_EN						=	0					//whether we use swizzled buffer addressing
+var SAVE_RESTORE_HWID_DDID          =   0
+var RESTORE_DDID_IN_SGPR18          =   0
+/**************************************************************************/
+/*                     	variables							              */
+/**************************************************************************/
+var SQ_WAVE_STATUS_INST_ATC_SHIFT  = 23
+var SQ_WAVE_STATUS_INST_ATC_MASK   = 0x00800000
+var SQ_WAVE_STATUS_SPI_PRIO_MASK   = 0x00000006
+
+var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT	= 12
+var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE		= 9
+var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT	= 8
+var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE	= 6
+var SQ_WAVE_GPR_ALLOC_SGPR_SIZE_SHIFT	= 24
+var SQ_WAVE_GPR_ALLOC_SGPR_SIZE_SIZE	= 4						//FIXME	 sq.blk still has 4 bits at this time while SQ programming guide has 3 bits
+var SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT    = 24
+var SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE     = 4
+var SQ_WAVE_IB_STS2_WAVE64_SHIFT        = 11
+var SQ_WAVE_IB_STS2_WAVE64_SIZE         = 1
+
+var	SQ_WAVE_TRAPSTS_SAVECTX_MASK	=	0x400
+var SQ_WAVE_TRAPSTS_EXCE_MASK       =   0x1FF          			// Exception mask
+var	SQ_WAVE_TRAPSTS_SAVECTX_SHIFT	=	10					
+var	SQ_WAVE_TRAPSTS_MEM_VIOL_MASK	=	0x100					
+var	SQ_WAVE_TRAPSTS_MEM_VIOL_SHIFT	=	8		
+var	SQ_WAVE_TRAPSTS_PRE_SAVECTX_MASK 	=	0x3FF
+var	SQ_WAVE_TRAPSTS_PRE_SAVECTX_SHIFT 	=	0x0
+var	SQ_WAVE_TRAPSTS_PRE_SAVECTX_SIZE 	=	10
+var	SQ_WAVE_TRAPSTS_POST_SAVECTX_MASK 	=	0xFFFFF800	
+var	SQ_WAVE_TRAPSTS_POST_SAVECTX_SHIFT 	=	11
+var	SQ_WAVE_TRAPSTS_POST_SAVECTX_SIZE 	=	21	
+
+var SQ_WAVE_IB_STS_RCNT_SHIFT			=	16					//FIXME
+var SQ_WAVE_IB_STS_FIRST_REPLAY_SHIFT	=	15					//FIXME
+var SQ_WAVE_IB_STS_FIRST_REPLAY_SIZE    =   1                   //FIXME
+var SQ_WAVE_IB_STS_RCNT_SIZE            =   6                   //FIXME
+var SQ_WAVE_IB_STS_RCNT_FIRST_REPLAY_MASK_NEG	= 0x00007FFF	//FIXME
+ 
+var	SQ_BUF_RSRC_WORD1_ATC_SHIFT		=	24
+var	SQ_BUF_RSRC_WORD3_MTYPE_SHIFT	=	27
+
+
+/*      Save        */
+var	S_SAVE_BUF_RSRC_WORD1_STRIDE		=	0x00040000  		//stride is 4 bytes 
+var	S_SAVE_BUF_RSRC_WORD3_MISC			= 	0x00807FAC			//SQ_SEL_X/Y/Z/W, BUF_NUM_FORMAT_FLOAT, (0 for MUBUF stride[17:14] when ADD_TID_ENABLE and BUF_DATA_FORMAT_32 for MTBUF), ADD_TID_ENABLE			
+
+var	S_SAVE_SPI_INIT_ATC_MASK			=	0x08000000			//bit[27]: ATC bit
+var	S_SAVE_SPI_INIT_ATC_SHIFT			=	27
+var	S_SAVE_SPI_INIT_MTYPE_MASK			=	0x70000000			//bit[30:28]: Mtype
+var	S_SAVE_SPI_INIT_MTYPE_SHIFT			=	28
+var	S_SAVE_SPI_INIT_FIRST_WAVE_MASK		=	0x04000000			//bit[26]: FirstWaveInTG
+var	S_SAVE_SPI_INIT_FIRST_WAVE_SHIFT	=	26
+
+var S_SAVE_PC_HI_RCNT_SHIFT				=	28					//FIXME	 check with Brian to ensure all fields other than PC[47:0] can be used
+var S_SAVE_PC_HI_RCNT_MASK				=   0xF0000000			//FIXME
+var S_SAVE_PC_HI_FIRST_REPLAY_SHIFT		=	27					//FIXME
+var S_SAVE_PC_HI_FIRST_REPLAY_MASK		=	0x08000000			//FIXME
+
+var	s_save_spi_init_lo				=	exec_lo
+var s_save_spi_init_hi				=	exec_hi
+
+var	s_save_pc_lo			=	ttmp0			//{TTMP1, TTMP0} = {3'h0, pc_rewind[3:0], HT[0], trapID[7:0], PC[47:0]}
+var	s_save_pc_hi			=	ttmp1			
+var s_save_exec_lo			=	ttmp2
+var s_save_exec_hi			= 	ttmp3			
+var	s_save_status			=	ttmp4			
+var	s_save_trapsts			=	ttmp5			//not really used until the end of the SAVE routine
+var s_wave_size         	=	ttmp6           //ttmp6 is free now that the xnack mask is only 32 bits; use it to hold wave32 vs wave64 in EMU_HACK
+var s_save_xnack_mask	    =	ttmp7
+var	s_save_buf_rsrc0		=	ttmp8
+var	s_save_buf_rsrc1		=	ttmp9
+var	s_save_buf_rsrc2		=	ttmp10
+var	s_save_buf_rsrc3		=	ttmp11
+
+var s_save_mem_offset		= 	ttmp14
+var s_sgpr_save_num         =   106                     //in gfx10, all sgpr must be saved
+var s_save_alloc_size		=	s_save_trapsts			//conflict
+var s_save_tmp              =   s_save_buf_rsrc2       	//shared with s_save_buf_rsrc2  (conflict: should not use mem access with s_save_tmp at the same time)
+var s_save_m0				=	ttmp15					
+
+/*      Restore     */
+var	S_RESTORE_BUF_RSRC_WORD1_STRIDE			=	S_SAVE_BUF_RSRC_WORD1_STRIDE 
+var	S_RESTORE_BUF_RSRC_WORD3_MISC			= 	S_SAVE_BUF_RSRC_WORD3_MISC		 
+
+var	S_RESTORE_SPI_INIT_ATC_MASK			    =	0x08000000			//bit[27]: ATC bit
+var	S_RESTORE_SPI_INIT_ATC_SHIFT			=	27
+var	S_RESTORE_SPI_INIT_MTYPE_MASK			=	0x70000000			//bit[30:28]: Mtype
+var	S_RESTORE_SPI_INIT_MTYPE_SHIFT			=	28
+var	S_RESTORE_SPI_INIT_FIRST_WAVE_MASK		=	0x04000000			//bit[26]: FirstWaveInTG
+var	S_RESTORE_SPI_INIT_FIRST_WAVE_SHIFT	    =	26
+
+var S_RESTORE_PC_HI_RCNT_SHIFT				=	S_SAVE_PC_HI_RCNT_SHIFT
+var S_RESTORE_PC_HI_RCNT_MASK				=   S_SAVE_PC_HI_RCNT_MASK
+var S_RESTORE_PC_HI_FIRST_REPLAY_SHIFT		=	S_SAVE_PC_HI_FIRST_REPLAY_SHIFT
+var S_RESTORE_PC_HI_FIRST_REPLAY_MASK		=	S_SAVE_PC_HI_FIRST_REPLAY_MASK
+
+var s_restore_spi_init_lo                   =   exec_lo
+var s_restore_spi_init_hi                   =   exec_hi
+
+var s_restore_mem_offset		= 	ttmp12
+var s_restore_alloc_size		=	ttmp3
+var s_restore_tmp           	=   ttmp6
+var s_restore_mem_offset_save	= 	s_restore_tmp 		//no conflict
+
+var s_restore_m0			=	s_restore_alloc_size	//no conflict			
+
+var s_restore_mode			=  	ttmp13
+var s_restore_hwid1         =  ttmp2
+var s_restore_ddid          =  s_restore_hwid1
+var	s_restore_pc_lo		    =	ttmp0			
+var	s_restore_pc_hi		    =	ttmp1
+var s_restore_exec_lo		=	ttmp14
+var s_restore_exec_hi		= 	ttmp15
+var	s_restore_status	    =	ttmp4			
+var	s_restore_trapsts	    =	ttmp5
+//var s_restore_xnack_mask_lo	=	xnack_mask_lo
+//var s_restore_xnack_mask_hi	=	xnack_mask_hi
+var s_restore_xnack_mask    =   ttmp7
+var	s_restore_buf_rsrc0		=	ttmp8
+var	s_restore_buf_rsrc1		=	ttmp9
+var	s_restore_buf_rsrc2		=	ttmp10
+var	s_restore_buf_rsrc3		=	ttmp11
+var s_restore_size         	=	ttmp13                  //ttmp13 has no conflict
+
+/**************************************************************************/
+/*                     	trap handler entry points			              */
+/**************************************************************************/
+    if ((EMU_RUN_HACK) && (!EMU_RUN_HACK_RESTORE_NORMAL)) 					//hack to use trap_id for determining save/restore
+		//FIXME VCCZ un-init assertion s_getreg_b32  	s_save_status, hwreg(HW_REG_STATUS)			//save STATUS since we will change SCC
+		s_and_b32 s_save_tmp, s_save_pc_hi, 0xffff0000 				//change SCC
+    	s_cmp_eq_u32 s_save_tmp, 0x007e0000  						//Save: trap_id = 0x7e. Restore: trap_id = 0x7f.  
+    	s_cbranch_scc0 L_JUMP_TO_RESTORE							//do not need to recover STATUS here  since we are going to RESTORE
+		//FIXME  s_setreg_b32 	hwreg(HW_REG_STATUS), 	s_save_status		//need to recover STATUS since we are going to SAVE	
+		s_branch L_SKIP_RESTORE 									//NOT restore, SAVE actually
+	else	
+		s_branch L_SKIP_RESTORE 									//NOT restore. might be a regular trap or save
+    end
+
+L_JUMP_TO_RESTORE:
+    s_branch L_RESTORE												//restore
+
+L_SKIP_RESTORE:
+	
+	s_getreg_b32  	s_save_status, hwreg(HW_REG_STATUS)								//save STATUS since we will change SCC
+    s_andn2_b32		s_save_status, s_save_status, SQ_WAVE_STATUS_SPI_PRIO_MASK      //clear the SPI priority bits in the saved STATUS
+	s_getreg_b32  	s_save_trapsts, hwreg(HW_REG_TRAPSTS)    		 				
+	s_and_b32		s_save_trapsts, s_save_trapsts, SQ_WAVE_TRAPSTS_SAVECTX_MASK	//check whether this is for save  
+	s_cbranch_scc1	L_SAVE															//this is the operation for save
+
+    // *********    Handle non-CWSR traps       *******************
+    if (!EMU_RUN_HACK)
+		s_getreg_b32     s_save_trapsts, hwreg(HW_REG_TRAPSTS)
+		s_and_b32        s_save_trapsts, s_save_trapsts, SQ_WAVE_TRAPSTS_EXCE_MASK // Check whether it is an exception
+		s_cbranch_scc1  L_EXCP_CASE   // Exception, jump back to the shader program directly.
+		s_add_u32    ttmp0, ttmp0, 4   // S_TRAP case, add 4 to ttmp0 
+		
+		L_EXCP_CASE:
+		s_and_b32    ttmp1, ttmp1, 0xFFFF
+		s_rfe_b64    [ttmp0, ttmp1]
+	end
+    // *********        End handling of non-CWSR traps   *******************
+
+/**************************************************************************/
+/*                     	save routine						              */
+/**************************************************************************/
+
+L_SAVE:	
+	
+	//check whether there is mem_viol
+	s_getreg_b32  	s_save_trapsts, hwreg(HW_REG_TRAPSTS)    		 				
+	s_and_b32		s_save_trapsts, s_save_trapsts, SQ_WAVE_TRAPSTS_MEM_VIOL_MASK			
+	s_cbranch_scc0	L_NO_PC_REWIND
+    
+	//if so, need rewind PC assuming GDS operation gets NACKed
+	s_mov_b32       s_save_tmp, 0															//clear mem_viol bit
+	s_setreg_b32	hwreg(HW_REG_TRAPSTS, SQ_WAVE_TRAPSTS_MEM_VIOL_SHIFT, 1), s_save_tmp	//clear mem_viol bit 
+	s_and_b32 		s_save_pc_hi, s_save_pc_hi, 0x0000ffff    //pc[47:32]
+	s_sub_u32 		s_save_pc_lo, s_save_pc_lo, 8             //pc[31:0]-8
+	s_subb_u32 		s_save_pc_hi, s_save_pc_hi, 0x0			  // -scc
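+	//the s_subb_u32 consumes the borrow (SCC) from the s_sub_u32 above, giving a 64-bit PC decrement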
+
+L_NO_PC_REWIND:
+    s_mov_b32       s_save_tmp, 0															//clear saveCtx bit
+	s_setreg_b32	hwreg(HW_REG_TRAPSTS, SQ_WAVE_TRAPSTS_SAVECTX_SHIFT, 1), s_save_tmp		//clear saveCtx bit   
+
+	//s_mov_b32		s_save_xnack_mask_lo,	xnack_mask_lo									//save XNACK_MASK  
+	//s_mov_b32		s_save_xnack_mask_hi,	xnack_mask_hi
+    s_getreg_b32	s_save_xnack_mask,  hwreg(HW_REG_SHADER_XNACK_MASK)  
+	s_getreg_b32	s_save_tmp, hwreg(HW_REG_IB_STS, SQ_WAVE_IB_STS_RCNT_SHIFT, SQ_WAVE_IB_STS_RCNT_SIZE)					//save RCNT
+	s_lshl_b32		s_save_tmp, s_save_tmp, S_SAVE_PC_HI_RCNT_SHIFT
+	s_or_b32		s_save_pc_hi, s_save_pc_hi, s_save_tmp
+	s_getreg_b32	s_save_tmp, hwreg(HW_REG_IB_STS, SQ_WAVE_IB_STS_FIRST_REPLAY_SHIFT, SQ_WAVE_IB_STS_FIRST_REPLAY_SIZE)	//save FIRST_REPLAY
+	s_lshl_b32		s_save_tmp, s_save_tmp, S_SAVE_PC_HI_FIRST_REPLAY_SHIFT
+	s_or_b32		s_save_pc_hi, s_save_pc_hi, s_save_tmp
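+	//RCNT and FIRST_REPLAY ride in the unused upper bits of the saved PC_HI (PC itself is only 48 bits)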
+	s_getreg_b32	s_save_tmp, hwreg(HW_REG_IB_STS)										//clear RCNT and FIRST_REPLAY in IB_STS
+	s_and_b32		s_save_tmp, s_save_tmp, SQ_WAVE_IB_STS_RCNT_FIRST_REPLAY_MASK_NEG
+
+	s_setreg_b32	hwreg(HW_REG_IB_STS), s_save_tmp
+    
+	/*		inform SPI the readiness and wait for SPI's go signal */
+	s_mov_b32		s_save_exec_lo,	exec_lo													//save EXEC and use EXEC for the go signal from SPI
+	s_mov_b32		s_save_exec_hi,	exec_hi
+	s_mov_b64		exec, 	0x0																//clear EXEC to get ready to receive
+	if (EMU_RUN_HACK)
+	
+	else
+		s_sendmsg	sendmsg(MSG_SAVEWAVE)													//send SPI a message and wait for SPI's write to EXEC  
+	end
+
+  L_SLEEP:		
+	s_sleep 0x2
+	
+	if (EMU_RUN_HACK)
+																							
+	else
+		s_cbranch_execz	L_SLEEP                                                         
+	end
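+	//SPI signals go by writing a non-zero value into EXEC; keep looping in L_SLEEP while EXEC is still zero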
+
+
+	/*      setup Resource Constants    */
+	if ((EMU_RUN_HACK) && (!EMU_RUN_HACK_SAVE_SINGLE_WAVE))	
+		//calculate wd_addr using absolute thread id 
+		v_readlane_b32 s_save_tmp, v9, 0
+        //determine it is wave32 or wave64
+        s_getreg_b32 	s_wave_size, hwreg(HW_REG_IB_STS2,SQ_WAVE_IB_STS2_WAVE64_SHIFT,SQ_WAVE_IB_STS2_WAVE64_SIZE)
+        s_cmp_eq_u32    s_wave_size, 0
+        s_cbranch_scc1  L_SAVE_WAVE32
+        s_lshr_b32 s_save_tmp, s_save_tmp, 6 //SAVE WAVE64
+        s_branch    L_SAVE_CON
+    L_SAVE_WAVE32:
+        s_lshr_b32 s_save_tmp, s_save_tmp, 5 //SAVE WAVE32
+    L_SAVE_CON:
+		s_mul_i32 s_save_tmp, s_save_tmp, WAVE_SPACE
+		s_add_i32 s_save_spi_init_lo, s_save_tmp, WG_BASE_ADDR_LO
+		s_mov_b32 s_save_spi_init_hi, WG_BASE_ADDR_HI
+		s_and_b32 s_save_spi_init_hi, s_save_spi_init_hi, CTX_SAVE_CONTROL		
+	else
+	end
+	if ((EMU_RUN_HACK) && (EMU_RUN_HACK_SAVE_SINGLE_WAVE))
+		s_add_i32 s_save_spi_init_lo, s_save_tmp, WG_BASE_ADDR_LO
+		s_mov_b32 s_save_spi_init_hi, WG_BASE_ADDR_HI
+		s_and_b32 s_save_spi_init_hi, s_save_spi_init_hi, CTX_SAVE_CONTROL		
+	else
+	end
+	
+	
+	s_mov_b32		s_save_buf_rsrc0, 	s_save_spi_init_lo														//base_addr_lo
+	s_and_b32		s_save_buf_rsrc1, 	s_save_spi_init_hi, 0x0000FFFF											//base_addr_hi
+	s_or_b32		s_save_buf_rsrc1, 	s_save_buf_rsrc1,  S_SAVE_BUF_RSRC_WORD1_STRIDE
+    s_mov_b32       s_save_buf_rsrc2,   0                                               						//NUM_RECORDS initial value = 0 (in bytes), although not necessarily initialized
+	s_mov_b32		s_save_buf_rsrc3, 	S_SAVE_BUF_RSRC_WORD3_MISC
+	s_and_b32		s_save_tmp,         s_save_spi_init_hi, S_SAVE_SPI_INIT_ATC_MASK		
+	s_lshr_b32		s_save_tmp,  		s_save_tmp, (S_SAVE_SPI_INIT_ATC_SHIFT-SQ_BUF_RSRC_WORD1_ATC_SHIFT)			//get ATC bit into position
+	s_or_b32		s_save_buf_rsrc3, 	s_save_buf_rsrc3,  s_save_tmp											//or ATC
+	s_and_b32		s_save_tmp,         s_save_spi_init_hi, S_SAVE_SPI_INIT_MTYPE_MASK		
+	s_lshr_b32		s_save_tmp,  		s_save_tmp, (S_SAVE_SPI_INIT_MTYPE_SHIFT-SQ_BUF_RSRC_WORD3_MTYPE_SHIFT)		//get MTYPE bits into position
+	s_or_b32		s_save_buf_rsrc3, 	s_save_buf_rsrc3,  s_save_tmp											//or MTYPE	
+	
+	s_mov_b32		s_save_m0,			m0																	//save M0
+	
+	/* 		global mem offset			*/
+	s_mov_b32		s_save_mem_offset, 	0x0																		//mem offset initial value = 0
+    s_getreg_b32 	s_wave_size, hwreg(HW_REG_IB_STS2,SQ_WAVE_IB_STS2_WAVE64_SHIFT,SQ_WAVE_IB_STS2_WAVE64_SIZE) //get wave_save_size
+    s_or_b32        s_wave_size, s_save_spi_init_hi,    s_wave_size                                             //share s_wave_size with exec_hi
+
+    /*      	save VGPRs	    */
+	//////////////////////////////
+  L_SAVE_VGPR:
+  
+ 	s_mov_b32		exec_lo, 0xFFFFFFFF 											//need every thread from now on
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1  
+    s_cbranch_scc1  L_ENABLE_SAVE_VGPR_EXEC_HI   
+    s_mov_b32		exec_hi, 0x00000000
+    s_branch        L_SAVE_VGPR_NORMAL
+  L_ENABLE_SAVE_VGPR_EXEC_HI:
+	s_mov_b32		exec_hi, 0xFFFFFFFF
+  L_SAVE_VGPR_NORMAL:	
+	s_getreg_b32 	s_save_alloc_size, hwreg(HW_REG_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE) 					//vgpr_size
+	//the VGPR count formula below is the same for wave32 and wave64
+    s_add_u32 		s_save_alloc_size, s_save_alloc_size, 1
+	s_lshl_b32 		s_save_alloc_size, s_save_alloc_size, 2 						//Number of VGPRs = (vgpr_size + 1) * 4    (non-zero value)   //FIXME for GFX, zero is possible
+    //determine it is wave32 or wave64
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_SAVE_VGPR_WAVE64
+
+    //wave32 VGPR save path (new for gfx10)
+	s_lshl_b32		s_save_buf_rsrc2,  s_save_alloc_size, 7							//NUM_RECORDS in bytes (32 threads*4)
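+	//one buffer_store_dword covers 32 lanes x 4 bytes = 128 bytes in wave32 mode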
+	if (SWIZZLE_EN)
+		s_add_u32		s_save_buf_rsrc2, s_save_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_save_buf_rsrc2,  0x1000000								//NUM_RECORDS in bytes
+	end
+	
+    s_mov_b32 		m0, 0x0 														//VGPR initial index value =0
+	//s_set_gpr_idx_on  m0, 0x1														//M0[7:0] = M0[7:0] and M0[15:12] = 0x1
+    //s_add_u32		s_save_alloc_size, s_save_alloc_size, 0x1000					//add 0x1000 since we compare m0 against it later, doesn't need this in gfx10
+
+  L_SAVE_VGPR_WAVE32_LOOP: 										
+	v_movrels_b32 		v0, v0															//v0 = v[0+m0]	
+	    
+    if(USE_MTBUF_INSTEAD_OF_MUBUF)       
+		tbuffer_store_format_x v0, v0, s_save_buf_rsrc0, s_save_mem_offset format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+    else
+		buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset slc:1 glc:1
+	end
+
+    s_add_u32		m0, m0, 1														//next vgpr index
+	s_add_u32		s_save_mem_offset, s_save_mem_offset, 128						//every buffer_store_dword does 128 bytes
+	s_cmp_lt_u32 	m0,	s_save_alloc_size 											//scc = (m0 < s_save_alloc_size) ? 1 : 0
+	s_cbranch_scc1 	L_SAVE_VGPR_WAVE32_LOOP												//VGPR save is complete?
+    s_branch    L_SAVE_LDS
+    //save vgpr for wave32 ends
+
+  L_SAVE_VGPR_WAVE64:
+	s_lshl_b32		s_save_buf_rsrc2,  s_save_alloc_size, 8							//NUM_RECORDS in bytes (64 threads*4)
+	if (SWIZZLE_EN)
+		s_add_u32		s_save_buf_rsrc2, s_save_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_save_buf_rsrc2,  0x1000000								//NUM_RECORDS in bytes
+	end
+	
+    s_mov_b32 		m0, 0x0 														//VGPR initial index value =0
+	//s_set_gpr_idx_on  m0, 0x1														//M0[7:0] = M0[7:0] and M0[15:12] = 0x1
+    //s_add_u32		s_save_alloc_size, s_save_alloc_size, 0x1000					//add 0x1000 since we compare m0 against it later, doesn't need this in gfx10
+
+  L_SAVE_VGPR_WAVE64_LOOP: 										
+	v_movrels_b32 		v0, v0															//v0 = v[0+m0]	
+	    
+    if(USE_MTBUF_INSTEAD_OF_MUBUF)       
+		tbuffer_store_format_x v0, v0, s_save_buf_rsrc0, s_save_mem_offset format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+    else
+		buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset slc:1 glc:1
+	end
+
+    s_add_u32		m0, m0, 1														//next vgpr index
+	s_add_u32		s_save_mem_offset, s_save_mem_offset, 256						//every buffer_store_dword does 256 bytes
+	s_cmp_lt_u32 	m0,	s_save_alloc_size 											//scc = (m0 < s_save_alloc_size) ? 1 : 0
+	s_cbranch_scc1 	L_SAVE_VGPR_WAVE64_LOOP												//VGPR save is complete?
+	//s_set_gpr_idx_off
+    //
+    //Below part will be the save shared vgpr part (new for gfx10)
+    s_getreg_b32 	s_save_alloc_size, hwreg(HW_REG_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE) 			//shared_vgpr_size
+    s_and_b32		s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF				//shared_vgpr_size is zero?
+    s_cbranch_scc0	L_SAVE_LDS													    //no shared_vgpr used? jump to L_SAVE_LDS
+    s_lshl_b32 		s_save_alloc_size, s_save_alloc_size, 3 						//Number of SHARED_VGPRs = shared_vgpr_size * 8    (non-zero value)
+    //m0 now has the value of normal vgpr count, just add the m0 with shared_vgpr count to get the total count.
+    //save shared_vgpr will start from the index of m0
+    s_add_u32       s_save_alloc_size, s_save_alloc_size, m0
+    s_mov_b32		exec_lo, 0xFFFFFFFF
+    s_mov_b32		exec_hi, 0x00000000
+    L_SAVE_SHARED_VGPR_WAVE64_LOOP: 										
+	v_movrels_b32 		v0, v0															//v0 = v[0+m0]	
+	buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset slc:1 glc:1
+    s_add_u32		m0, m0, 1														//next vgpr index
+	s_add_u32		s_save_mem_offset, s_save_mem_offset, 128						//every buffer_store_dword does 128 bytes here (shared VGPRs are saved with 32 lanes)
+	s_cmp_lt_u32 	m0,	s_save_alloc_size 											//scc = (m0 < s_save_alloc_size) ? 1 : 0
+	s_cbranch_scc1 	L_SAVE_SHARED_VGPR_WAVE64_LOOP									//SHARED_VGPR save is complete?
+    
+	/*      	save LDS	    */
+	//////////////////////////////
+  L_SAVE_LDS:
+
+    //Only the first wave in the threadgroup needs to save LDS
+	/*      the first wave in the threadgroup    */
+	s_barrier																		//FIXME  not performance-optimal "LDS is used? wait for other waves in the same TG" 
+	s_and_b32		s_save_tmp, s_wave_size, S_SAVE_SPI_INIT_FIRST_WAVE_MASK								//exec is still used here
+	s_cbranch_scc0	L_SAVE_SGPR
+	
+	s_mov_b32		exec_lo, 0xFFFFFFFF 											//need every thread from now on
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_ENABLE_SAVE_LDS_EXEC_HI   
+    s_mov_b32		exec_hi, 0x00000000
+    s_branch        L_SAVE_LDS_NORMAL
+  L_ENABLE_SAVE_LDS_EXEC_HI:
+	s_mov_b32		exec_hi, 0xFFFFFFFF
+  L_SAVE_LDS_NORMAL:	
+	s_getreg_b32 	s_save_alloc_size, hwreg(HW_REG_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE) 			//lds_size
+	s_and_b32		s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF				//lds_size is zero?
+	s_cbranch_scc0	L_SAVE_SGPR														//no lds used? jump to L_SAVE_SGPR
+	s_lshl_b32 		s_save_alloc_size, s_save_alloc_size, 6 						//LDS size in dwords = lds_size * 64dw
+	s_lshl_b32 		s_save_alloc_size, s_save_alloc_size, 2 						//LDS size in bytes
+	s_mov_b32		s_save_buf_rsrc2,  s_save_alloc_size  							//NUM_RECORDS in bytes
+	if (SWIZZLE_EN)
+		s_add_u32		s_save_buf_rsrc2, s_save_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_save_buf_rsrc2,  0x1000000								//NUM_RECORDS in bytes
+	end
+
+    //compute each lane's LDS byte offset (lane_id * 4) into v0
+    v_mbcnt_lo_u32_b32 v0, -1, 0
+    v_mbcnt_hi_u32_b32 v0, -1, v0
+    v_mul_u32_u24 v0, 4, v0
+
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_mov_b32 		m0, 0x0
+    s_cbranch_scc1  L_SAVE_LDS_LOOP_W64
+
+  L_SAVE_LDS_LOOP_W32:									
+	if (SAVE_LDS)
+    ds_read_b32 v1, v0
+    s_waitcnt 0														    //ensure data ready
+    buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset slc:1 glc:1
+	//buffer_store_lds_dword	s_save_buf_rsrc0, s_save_mem_offset lds:1               //save lds to memory doesn't exist in 10
+	end
+	s_add_u32		m0, m0, 128															//every buffer_store_dword does 128 bytes
+	s_add_u32		s_save_mem_offset, s_save_mem_offset, 128							//mem offset increased by 128 bytes
+    v_add_nc_u32    v0, v0, 128
+	s_cmp_lt_u32	m0, s_save_alloc_size												//scc=(m0 < s_save_alloc_size) ? 1 : 0
+	s_cbranch_scc1  L_SAVE_LDS_LOOP_W32													//LDS save is complete?
+    s_branch        L_SAVE_SGPR
+
+  L_SAVE_LDS_LOOP_W64:									
+	if (SAVE_LDS)
+    ds_read_b32 v1, v0
+    s_waitcnt 0														    //ensure data ready
+    buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset slc:1 glc:1
+	//buffer_store_lds_dword	s_save_buf_rsrc0, s_save_mem_offset lds:1               //save lds to memory doesn't exist in 10
+	end
+	s_add_u32		m0, m0, 256															//every buffer_store_dword does 256 bytes
+	s_add_u32		s_save_mem_offset, s_save_mem_offset, 256							//mem offset increased by 256 bytes
+    v_add_nc_u32    v0, v0, 256
+	s_cmp_lt_u32	m0, s_save_alloc_size												//scc=(m0 < s_save_alloc_size) ? 1 : 0
+	s_cbranch_scc1  L_SAVE_LDS_LOOP_W64													//LDS save is complete?
+   
+	
+	/*      	save SGPRs	    */
+	//////////////////////////////
+	//s_getreg_b32 	s_save_alloc_size, hwreg(HW_REG_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_SGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_SGPR_SIZE_SIZE) 				//sgpr_size
+	//s_add_u32 		s_save_alloc_size, s_save_alloc_size, 1
+	//s_lshl_b32 		s_save_alloc_size, s_save_alloc_size, 4 						//Number of SGPRs = (sgpr_size + 1) * 16   (non-zero value) 
+	//s_lshl_b32 		s_save_alloc_size, s_save_alloc_size, 3 						//In gfx10, Number of SGPRs = (sgpr_size + 1) * 8   (non-zero value) 
+  L_SAVE_SGPR:
+    //check whether it is wave32 or wave64
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_SAVE_SGPR_VMEM_WAVE64
+    if (SGPR_SAVE_USE_SQC)
+		s_lshl_b32		s_save_buf_rsrc2,	s_sgpr_save_num, 2					//NUM_RECORDS in bytes
+    else
+        s_lshl_b32		s_save_buf_rsrc2,	s_sgpr_save_num, 7					//NUM_RECORDS in bytes (32 threads)
+    end
+    s_branch    L_SAVE_SGPR_CONT    
+  L_SAVE_SGPR_VMEM_WAVE64:
+	if (SGPR_SAVE_USE_SQC)
+		s_lshl_b32		s_save_buf_rsrc2,	s_sgpr_save_num, 2					//NUM_RECORDS in bytes 
+	else
+		s_lshl_b32		s_save_buf_rsrc2,	s_sgpr_save_num, 8					//NUM_RECORDS in bytes (64 threads)
+	end
+  L_SAVE_SGPR_CONT:
+	if (SWIZZLE_EN)
+		s_add_u32		s_save_buf_rsrc2, s_save_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_save_buf_rsrc2,  0x1000000								//NUM_RECORDS in bytes
+	end
+	
+	//s_mov_b32 		m0, 0x0 														//SGPR initial index value =0		
+    //s_nop           0x0                                                             //Manually inserted wait states
+	
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    
+    s_mov_b32 		m0, 0x0 														//SGPR initial index value =0		
+    s_nop           0x0                                                             //Manually inserted wait states
+
+    s_cbranch_scc1  L_SAVE_SGPR_LOOP_WAVE64
+
+  L_SAVE_SGPR_LOOP_WAVE32: 										
+	s_movrels_b32 	s0, s0 															//s0 = s[0+m0]
+    //the sgpr save function takes one extra argument; this only affects the vmem path, the sqc path is unchanged
+	write_sgpr_to_mem_wave32(s0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)							//PV: the best performance should be using s_buffer_store_dwordx4
+	s_add_u32		m0, m0, 1														//next sgpr index
+	s_cmp_lt_u32 	m0, s_sgpr_save_num 											//scc = (m0 < s_sgpr_save_num) ? 1 : 0
+	s_cbranch_scc1 	L_SAVE_SGPR_LOOP_WAVE32												//SGPR save is complete?
+    s_branch    L_SAVE_HWREG
+
+  L_SAVE_SGPR_LOOP_WAVE64: 										
+	s_movrels_b32 	s0, s0 															//s0 = s[0+m0]
+    //the sgpr save function takes one extra argument; this only affects the vmem path, the sqc path is unchanged
+	write_sgpr_to_mem_wave64(s0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)							//PV: the best performance should be using s_buffer_store_dwordx4
+	s_add_u32		m0, m0, 1														//next sgpr index
+	s_cmp_lt_u32 	m0, s_sgpr_save_num 											//scc = (m0 < s_sgpr_save_num) ? 1 : 0
+	s_cbranch_scc1 	L_SAVE_SGPR_LOOP_WAVE64												//SGPR save is complete?
+
+	
+	/* 		save HW registers	*/
+	//////////////////////////////
+  L_SAVE_HWREG:
+    s_mov_b32		s_save_buf_rsrc2, 0x4								//NUM_RECORDS	in bytes
+	if (SWIZZLE_EN)
+		s_add_u32		s_save_buf_rsrc2, s_save_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_save_buf_rsrc2,  0x1000000								//NUM_RECORDS in bytes
+	end
+
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_SAVE_HWREG_WAVE64
+	
+	write_sgpr_to_mem_wave32(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)					//M0
+
+	if ((EMU_RUN_HACK) && (EMU_RUN_HACK_SAVE_FIRST_TIME))      
+		s_add_u32 s_save_pc_lo, s_save_pc_lo, 4             //pc[31:0]+4
+		s_addc_u32 s_save_pc_hi, s_save_pc_hi, 0x0			//carry bit over
+	end
+
+	write_sgpr_to_mem_wave32(s_save_pc_lo, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)					//PC
+	write_sgpr_to_mem_wave32(s_save_pc_hi, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+	write_sgpr_to_mem_wave32(s_save_exec_lo, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)				//EXEC
+	write_sgpr_to_mem_wave32(s_save_exec_hi, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+	write_sgpr_to_mem_wave32(s_save_status, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)				//STATUS 
+	
+	//s_save_trapsts conflicts with s_save_alloc_size
+	s_getreg_b32    s_save_trapsts, hwreg(HW_REG_TRAPSTS)
+	write_sgpr_to_mem_wave32(s_save_trapsts, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)				//TRAPSTS
+	
+	//write_sgpr_to_mem_wave32(s_save_xnack_mask_lo, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)			//XNACK_MASK_LO
+	write_sgpr_to_mem_wave32(s_save_xnack_mask, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)			//XNACK_MASK
+	
+	//using s_save_tmp here would conflict, since s_save_tmp shares a register with s_save_buf_rsrc2
+	s_getreg_b32 	s_save_m0, hwreg(HW_REG_MODE)																						//MODE
+	write_sgpr_to_mem_wave32(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+    if(SAVE_RESTORE_HWID_DDID)
+    s_getreg_b32 	s_save_m0, hwreg(HW_REG_HW_ID1)																						//HW_ID1, handler records the SE/SA/WGP/SIMD/wave of the original wave
+    write_sgpr_to_mem_wave32(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+    end
+    s_branch   L_S_PGM_END_SAVED
+
+  L_SAVE_HWREG_WAVE64:
+    write_sgpr_to_mem_wave64(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)					//M0
+
+	if ((EMU_RUN_HACK) && (EMU_RUN_HACK_SAVE_FIRST_TIME))      
+		s_add_u32 s_save_pc_lo, s_save_pc_lo, 4             //pc[31:0]+4
+		s_addc_u32 s_save_pc_hi, s_save_pc_hi, 0x0			//carry bit over
+	end
+
+	write_sgpr_to_mem_wave64(s_save_pc_lo, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)					//PC
+	write_sgpr_to_mem_wave64(s_save_pc_hi, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+	write_sgpr_to_mem_wave64(s_save_exec_lo, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)				//EXEC
+	write_sgpr_to_mem_wave64(s_save_exec_hi, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+	write_sgpr_to_mem_wave64(s_save_status, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)				//STATUS 
+	
+	//s_save_trapsts conflicts with s_save_alloc_size
+	s_getreg_b32    s_save_trapsts, hwreg(HW_REG_TRAPSTS)
+	write_sgpr_to_mem_wave64(s_save_trapsts, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)				//TRAPSTS
+	
+	//write_sgpr_to_mem_wave64(s_save_xnack_mask_lo, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)			//XNACK_MASK_LO
+	write_sgpr_to_mem_wave64(s_save_xnack_mask, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)			//XNACK_MASK
+	
+	//using s_save_tmp here would conflict, since s_save_tmp shares a register with s_save_buf_rsrc2
+	s_getreg_b32 	s_save_m0, hwreg(HW_REG_MODE)																						//MODE
+	write_sgpr_to_mem_wave64(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+
+
+    if(SAVE_RESTORE_HWID_DDID)
+    s_getreg_b32 	s_save_m0, hwreg(HW_REG_HW_ID1)																						//HW_ID1, handler records the SE/SA/WGP/SIMD/wave of the original wave
+    write_sgpr_to_mem_wave64(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF)
+
+	/* 		save DDID	*/
+	//////////////////////////////
+  L_SAVE_DDID:
+    //EXEC has been saved, no vector inst following
+    s_mov_b32	exec_lo, 0x80000000    //Set MSB to 1. Cleared when draw index is returned
+    s_sendmsg sendmsg(MSG_GET_DDID)
+
+  L_WAIT_DDID_LOOP:    
+    s_nop		7			// sleep a bit
+    s_bitcmp0_b32 exec_lo, 31	// test to see if MSB is cleared, meaning done
+    s_cbranch_scc0	L_WAIT_DDID_LOOP
+
+    s_mov_b32	s_save_m0, exec_lo
+
+
+    s_mov_b32		s_save_buf_rsrc2, 0x4								//NUM_RECORDS	in bytes
+	if (SWIZZLE_EN)
+		s_add_u32		s_save_buf_rsrc2, s_save_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_save_buf_rsrc2,  0x1000000								//NUM_RECORDS in bytes
+	end
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_SAVE_DDID_WAVE64
+
+    write_sgpr_to_mem_wave32(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF) 
+
+  L_SAVE_DDID_WAVE64:
+    write_sgpr_to_mem_wave64(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset, SGPR_SAVE_USE_SQC, USE_MTBUF_INSTEAD_OF_MUBUF) 
+
+    end
+   
+  L_S_PGM_END_SAVED:
+	/*     S_PGM_END_SAVED  */    							//FIXME  graphics ONLY
+	if ((EMU_RUN_HACK) && (!EMU_RUN_HACK_SAVE_NORMAL_EXIT))	
+		s_and_b32 s_save_pc_hi, s_save_pc_hi, 0x0000ffff    //pc[47:32]
+		s_add_u32 s_save_pc_lo, s_save_pc_lo, 4             //pc[31:0]+4
+		s_addc_u32 s_save_pc_hi, s_save_pc_hi, 0x0			//carry bit over
+		s_rfe_b64 s_save_pc_lo                              //Return to the main shader program
+	else
+	end
+
+	
+    s_branch	L_END_PGM
+	
+
+				
+/**************************************************************************/
+/*                     	restore routine						              */
+/**************************************************************************/
+
+L_RESTORE:
+    /*      Setup Resource Constants    */
+    if ((EMU_RUN_HACK) && (!EMU_RUN_HACK_RESTORE_NORMAL))
+		//calculate wd_addr using absolute thread id
+		v_readlane_b32 s_restore_tmp, v9, 0
+        //determine it is wave32 or wave64
+        s_getreg_b32 	s_restore_size, hwreg(HW_REG_IB_STS2,SQ_WAVE_IB_STS2_WAVE64_SHIFT,SQ_WAVE_IB_STS2_WAVE64_SIZE) //change to ttmp13
+        s_cmp_eq_u32    s_restore_size, 0
+        s_cbranch_scc1  L_RESTORE_WAVE32
+        s_lshr_b32 s_restore_tmp, s_restore_tmp, 6 //RESTORE WAVE64
+        s_branch    L_RESTORE_CON
+    L_RESTORE_WAVE32:
+        s_lshr_b32 s_restore_tmp, s_restore_tmp, 5 //RESTORE WAVE32
+    L_RESTORE_CON:
+		s_mul_i32 s_restore_tmp, s_restore_tmp, WAVE_SPACE
+		s_add_i32 s_restore_spi_init_lo, s_restore_tmp, WG_BASE_ADDR_LO
+		s_mov_b32 s_restore_spi_init_hi, WG_BASE_ADDR_HI
+		s_and_b32 s_restore_spi_init_hi, s_restore_spi_init_hi, CTX_RESTORE_CONTROL	
+	else
+	end
+	
+    s_mov_b32		s_restore_buf_rsrc0, 	s_restore_spi_init_lo															//base_addr_lo
+	s_and_b32		s_restore_buf_rsrc1, 	s_restore_spi_init_hi, 0x0000FFFF												//base_addr_hi
+	s_or_b32		s_restore_buf_rsrc1, 	s_restore_buf_rsrc1,  S_RESTORE_BUF_RSRC_WORD1_STRIDE
+    s_mov_b32       s_restore_buf_rsrc2,   	0                                               								//NUM_RECORDS initial value = 0 (in bytes)
+	s_mov_b32		s_restore_buf_rsrc3, 	S_RESTORE_BUF_RSRC_WORD3_MISC
+	s_and_b32		s_restore_tmp,         	s_restore_spi_init_hi, S_RESTORE_SPI_INIT_ATC_MASK		
+	s_lshr_b32		s_restore_tmp,  		s_restore_tmp, (S_RESTORE_SPI_INIT_ATC_SHIFT-SQ_BUF_RSRC_WORD1_ATC_SHIFT)		//get ATC bit into position
+	s_or_b32		s_restore_buf_rsrc3, 	s_restore_buf_rsrc3,  s_restore_tmp												//or ATC
+	s_and_b32		s_restore_tmp,         	s_restore_spi_init_hi, S_RESTORE_SPI_INIT_MTYPE_MASK		
+	s_lshr_b32		s_restore_tmp,  		s_restore_tmp, (S_RESTORE_SPI_INIT_MTYPE_SHIFT-SQ_BUF_RSRC_WORD3_MTYPE_SHIFT)	//get MTYPE bits into position
+	s_or_b32		s_restore_buf_rsrc3, 	s_restore_buf_rsrc3,  s_restore_tmp												//or MTYPE
+    //determine it is wave32 or wave64
+    s_getreg_b32 	s_restore_size, hwreg(HW_REG_IB_STS2,SQ_WAVE_IB_STS2_WAVE64_SHIFT,SQ_WAVE_IB_STS2_WAVE64_SIZE)
+    s_or_b32        s_restore_size, s_restore_spi_init_hi,    s_restore_size                                             //share s_restore_size with exec_hi
+	
+	/* 		global mem offset			*/
+	s_mov_b32		s_restore_mem_offset, 0x0								//mem offset initial value = 0
+
+        /*      	restore VGPRs	    */
+	//////////////////////////////
+  L_RESTORE_VGPR:
+  
+ 	s_mov_b32		exec_lo, 0xFFFFFFFF 													//need every thread from now on   //be consistent with SAVE although can be moved ahead
+    s_and_b32       m0, s_restore_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_ENABLE_RESTORE_VGPR_EXEC_HI   
+    s_mov_b32		exec_hi, 0x00000000
+    s_branch        L_RESTORE_VGPR_NORMAL
+  L_ENABLE_RESTORE_VGPR_EXEC_HI:
+	s_mov_b32		exec_hi, 0xFFFFFFFF
+  L_RESTORE_VGPR_NORMAL:	
+	s_getreg_b32 	s_restore_alloc_size, hwreg(HW_REG_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE) 	//vgpr_size
+	s_add_u32 		s_restore_alloc_size, s_restore_alloc_size, 1
+	s_lshl_b32 		s_restore_alloc_size, s_restore_alloc_size, 2 							//Number of VGPRs = (vgpr_size + 1) * 4    (non-zero value)
+    //determine it is wave32 or wave64
+    s_and_b32       m0, s_restore_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_RESTORE_VGPR_WAVE64
+
+    s_lshl_b32		s_restore_buf_rsrc2,  s_restore_alloc_size, 7						    //NUM_RECORDS in bytes (32 threads*4)
+	if (SWIZZLE_EN)
+		s_add_u32		s_restore_buf_rsrc2, s_restore_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_restore_buf_rsrc2,  0x1000000										//NUM_RECORDS in bytes
+	end	
+
+	s_mov_b32		s_restore_mem_offset_save, s_restore_mem_offset							// restore start with v1, v0 will be the last
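+	//v0 is restored last because the loop uses v0 as the staging register for v_movreld_b32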
+	s_add_u32		s_restore_mem_offset, s_restore_mem_offset, 128
+    s_mov_b32 		m0, 1 																	//VGPR initial index value = 1
+	//s_set_gpr_idx_on  m0, 0x8																//M0[7:0] = M0[7:0] and M0[15:12] = 0x8
+    //s_add_u32		s_restore_alloc_size, s_restore_alloc_size, 0x8000						//add 0x8000 since we compare m0 against it later, might not need this in gfx10	
+
+  L_RESTORE_VGPR_WAVE32_LOOP: 										
+    if(USE_MTBUF_INSTEAD_OF_MUBUF)       
+		tbuffer_load_format_x v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+    else
+		buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset	slc:1 glc:1	
+	end
+	s_waitcnt		vmcnt(0)																//ensure data ready
+	v_movreld_b32		v0, v0																	//v[0+m0] = v0
+    s_add_u32		m0, m0, 1																//next vgpr index
+	s_add_u32		s_restore_mem_offset, s_restore_mem_offset, 128							//every buffer_load_dword does 128 bytes
+	s_cmp_lt_u32 	m0,	s_restore_alloc_size 												//scc = (m0 < s_restore_alloc_size) ? 1 : 0
+	s_cbranch_scc1 	L_RESTORE_VGPR_WAVE32_LOOP														//VGPR restore (except v0) is complete?
+	//s_set_gpr_idx_off
+																							/* VGPR restore on v0 */
+    if(USE_MTBUF_INSTEAD_OF_MUBUF)       
+		tbuffer_load_format_x v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+    else
+		buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save	slc:1 glc:1	
+	end
+
+    s_branch    L_RESTORE_LDS
+
+  L_RESTORE_VGPR_WAVE64:
+    s_lshl_b32		s_restore_buf_rsrc2,  s_restore_alloc_size, 8						    //NUM_RECORDS in bytes (64 threads*4)
+	if (SWIZZLE_EN)
+		s_add_u32		s_restore_buf_rsrc2, s_restore_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_restore_buf_rsrc2,  0x1000000										//NUM_RECORDS in bytes
+	end	
+
+	s_mov_b32		s_restore_mem_offset_save, s_restore_mem_offset							// restore start with v1, v0 will be the last
+	s_add_u32		s_restore_mem_offset, s_restore_mem_offset, 256
+    s_mov_b32 		m0, 1 																	//VGPR initial index value = 1
+  L_RESTORE_VGPR_WAVE64_LOOP: 										
+    if(USE_MTBUF_INSTEAD_OF_MUBUF)       
+		tbuffer_load_format_x v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+    else
+		buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset	slc:1 glc:1	
+	end
+	s_waitcnt		vmcnt(0)																//ensure data ready
+	v_movreld_b32		v0, v0																	//v[0+m0] = v0
+    s_add_u32		m0, m0, 1																//next vgpr index
+	s_add_u32		s_restore_mem_offset, s_restore_mem_offset, 256							//every buffer_load_dword does 256 bytes
+	s_cmp_lt_u32 	m0,	s_restore_alloc_size 												//scc = (m0 < s_restore_alloc_size) ? 1 : 0
+	s_cbranch_scc1 	L_RESTORE_VGPR_WAVE64_LOOP														//VGPR restore (except v0) is complete?
+	//s_set_gpr_idx_off
+    //
+    //Below part will be the restore shared vgpr part (new for gfx10)
+    s_getreg_b32 	s_restore_alloc_size, hwreg(HW_REG_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE) 			//shared_vgpr_size
+    s_and_b32		s_restore_alloc_size, s_restore_alloc_size, 0xFFFFFFFF				//shared_vgpr_size is zero?
+    s_cbranch_scc0	L_RESTORE_V0													    //no shared_vgpr used? jump to L_RESTORE_V0
+    s_lshl_b32 		s_restore_alloc_size, s_restore_alloc_size, 3 						//Number of SHARED_VGPRs = shared_vgpr_size * 8    (non-zero value)
+    //m0 now has the value of normal vgpr count, just add the m0 with shared_vgpr count to get the total count.
+    //restore shared_vgpr will start from the index of m0
+    s_add_u32       s_restore_alloc_size, s_restore_alloc_size, m0
+    s_mov_b32		exec_lo, 0xFFFFFFFF
+    s_mov_b32		exec_hi, 0x00000000
+    L_RESTORE_SHARED_VGPR_WAVE64_LOOP: 
+    buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset	slc:1 glc:1
+    s_waitcnt		vmcnt(0)																//ensure data ready
+	v_movreld_b32		v0, v0																	//v[0+m0] = v0
+    s_add_u32		m0, m0, 1																//next vgpr index
+	s_add_u32		s_restore_mem_offset, s_restore_mem_offset, 128							//every buffer_load_dword does 128 bytes (only 32 lanes active for shared vgprs)
+	s_cmp_lt_u32 	m0,	s_restore_alloc_size 												//scc = (m0 < s_restore_alloc_size) ? 1 : 0
+	s_cbranch_scc1 	L_RESTORE_SHARED_VGPR_WAVE64_LOOP														//VGPR restore (except v0) is complete?
+
+    s_mov_b32 exec_hi, 0xFFFFFFFF                                                           //restore back exec_hi before restoring V0!!
+	
+    /* VGPR restore on v0 */
+  L_RESTORE_V0:
+    if(USE_MTBUF_INSTEAD_OF_MUBUF)       
+		tbuffer_load_format_x v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+    else
+		buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save	slc:1 glc:1	
+	end
+
+
+    /*      	restore LDS	    */
+	//////////////////////////////
+  L_RESTORE_LDS:
+
+	/*      only the first wave in the threadgroup needs to restore LDS     */
+	s_and_b32		s_restore_tmp, s_restore_size, S_RESTORE_SPI_INIT_FIRST_WAVE_MASK			
+	s_cbranch_scc0	L_RESTORE_SGPR
+	
+    s_mov_b32		exec_lo, 0xFFFFFFFF 													//need every thread from now on   //be consistent with SAVE although can be moved ahead
+    s_and_b32       m0, s_restore_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_ENABLE_RESTORE_LDS_EXEC_HI   
+    s_mov_b32		exec_hi, 0x00000000
+    s_branch        L_RESTORE_LDS_NORMAL
+  L_ENABLE_RESTORE_LDS_EXEC_HI:
+	s_mov_b32		exec_hi, 0xFFFFFFFF
+  L_RESTORE_LDS_NORMAL:	
+	s_getreg_b32 	s_restore_alloc_size, hwreg(HW_REG_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE) 				//lds_size
+	s_and_b32		s_restore_alloc_size, s_restore_alloc_size, 0xFFFFFFFF					//lds_size is zero?
+	s_cbranch_scc0	L_RESTORE_SGPR															//no lds used? jump to L_RESTORE_SGPR
+	s_lshl_b32 		s_restore_alloc_size, s_restore_alloc_size, 6 							//LDS size in dwords = lds_size * 64dw
+	s_lshl_b32 		s_restore_alloc_size, s_restore_alloc_size, 2 							//LDS size in bytes
+	s_mov_b32		s_restore_buf_rsrc2,	s_restore_alloc_size							//NUM_RECORDS in bytes
+	if (SWIZZLE_EN)
+		s_add_u32		s_restore_buf_rsrc2, s_restore_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_restore_buf_rsrc2,  0x1000000										//NUM_RECORDS in bytes
+	end
+
+    s_and_b32       m0, s_wave_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_mov_b32 		m0, 0x0
+    s_cbranch_scc1  L_RESTORE_LDS_LOOP_W64
+
+  L_RESTORE_LDS_LOOP_W32:									
+	if (SAVE_LDS)
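+	//note: lds:1 routes the returned dword straight into LDS (base address
+	//taken from m0) instead of a vgpr, so v0 only supplies the buffer offset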
+	buffer_load_dword	v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset lds:1
+    s_waitcnt 0
+	end
+    s_add_u32		m0, m0, 128																//every buffer_load_dword does 128 bytes
+	s_add_u32		s_restore_mem_offset, s_restore_mem_offset, 128						//mem offset increased by 128 bytes
+	s_cmp_lt_u32	m0, s_restore_alloc_size												//scc=(m0 < s_restore_alloc_size) ? 1 : 0
+	s_cbranch_scc1  L_RESTORE_LDS_LOOP_W32														//LDS restore is complete?
+    s_branch        L_RESTORE_SGPR
+
+  L_RESTORE_LDS_LOOP_W64:									
+	if (SAVE_LDS)
+	buffer_load_dword	v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset lds:1
+    s_waitcnt 0
+	end
+    s_add_u32		m0, m0, 256																//every buffer_load_dword does 256 bytes
+	s_add_u32		s_restore_mem_offset, s_restore_mem_offset, 256							//mem offset increased by 256 bytes
+	s_cmp_lt_u32	m0, s_restore_alloc_size												//scc=(m0 < s_restore_alloc_size) ? 1 : 0
+	s_cbranch_scc1  L_RESTORE_LDS_LOOP_W64														//LDS restore is complete?
+
+	
+    /*      	restore SGPRs	    */
+	//////////////////////////////
+	//s_getreg_b32 	s_restore_alloc_size, hwreg(HW_REG_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_SGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_SGPR_SIZE_SIZE) 				//spgr_size
+	//s_add_u32 		s_restore_alloc_size, s_restore_alloc_size, 1
+	//s_lshl_b32 		s_restore_alloc_size, s_restore_alloc_size, 4 							//Number of SGPRs = (sgpr_size + 1) * 16   (non-zero value)
+	//s_lshl_b32 		s_restore_alloc_size, s_restore_alloc_size, 3 							//Number of SGPRs = (sgpr_size + 1) * 8   (non-zero value)
+  L_RESTORE_SGPR:
+    //check whether the wave is wave32 or wave64
+    s_and_b32       m0, s_restore_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_RESTORE_SGPR_VMEM_WAVE64
+	if (SGPR_SAVE_USE_SQC)
+		s_lshl_b32		s_restore_buf_rsrc2,	s_sgpr_save_num, 2						//NUM_RECORDS in bytes 
+	else
+        s_lshl_b32		s_restore_buf_rsrc2,	s_sgpr_save_num, 7						//NUM_RECORDS in bytes (32 threads)
+    end
+    s_branch        L_RESTORE_SGPR_CONT
+  L_RESTORE_SGPR_VMEM_WAVE64:
+    if (SGPR_SAVE_USE_SQC)
+		s_lshl_b32		s_restore_buf_rsrc2,	s_sgpr_save_num, 2						//NUM_RECORDS in bytes 
+	else
+		s_lshl_b32		s_restore_buf_rsrc2,	s_sgpr_save_num, 8						//NUM_RECORDS in bytes (64 threads)
+	end
+
+  L_RESTORE_SGPR_CONT:
+	if (SWIZZLE_EN)
+		s_add_u32		s_restore_buf_rsrc2, s_restore_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_restore_buf_rsrc2,  0x1000000										//NUM_RECORDS in bytes
+	end
+
+    s_and_b32       m0, s_restore_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_RESTORE_SGPR_WAVE64
+
+    read_sgpr_from_mem_wave32(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)		//save s0 to s_restore_tmp
+	s_mov_b32 		m0, 0x1
+
+  L_RESTORE_SGPR_LOOP_WAVE32:
+    read_sgpr_from_mem_wave32(s0, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)															//PV: further performance improvement can be made
+	s_waitcnt		lgkmcnt(0)																//ensure data ready
+	s_movreld_b32 	s0, s0                                                                  //s[0+m0] = s0
+    s_nop 0                                                                                 // hazard SALU M0=> S_MOVREL
+	s_add_u32		m0, m0, 1																//next sgpr index
+	s_cmp_lt_u32 	m0, s_sgpr_save_num												//scc = (m0 < s_sgpr_save_num) ? 1 : 0
+	s_cbranch_scc1 	L_RESTORE_SGPR_LOOP_WAVE32														//SGPR restore (except s0) is complete?
+	s_mov_b32		s0, s_restore_tmp															/* SGPR restore on s0 */
+    s_branch        L_RESTORE_HWREG
+  
+  L_RESTORE_SGPR_WAVE64:
+	read_sgpr_from_mem_wave64(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)		//save s0 to s_restore_tmp
+	s_mov_b32 		m0, 0x1																				//SGPR initial index value = 1	//go on with s1
+	
+  L_RESTORE_SGPR_LOOP_WAVE64: 																					
+	read_sgpr_from_mem_wave64(s0, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)															//PV: further performance improvement can be made
+	s_waitcnt		lgkmcnt(0)																//ensure data ready
+	s_movreld_b32 	s0, s0                                                                  //s[0+m0] = s0
+    s_nop 0                                                                                 // hazard SALU M0=> S_MOVREL
+	s_add_u32		m0, m0, 1																//next sgpr index
+	s_cmp_lt_u32 	m0, s_sgpr_save_num												//scc = (m0 < s_sgpr_save_num) ? 1 : 0
+	s_cbranch_scc1 	L_RESTORE_SGPR_LOOP_WAVE64														//SGPR restore (except s0) is complete?
+	s_mov_b32		s0, s_restore_tmp															/* SGPR restore on s0 */
+
+	
+    /* 		restore HW registers	*/
+	//////////////////////////////
+  L_RESTORE_HWREG:
+    s_mov_b32		s_restore_buf_rsrc2, 0x4												//NUM_RECORDS	in bytes
+	if (SWIZZLE_EN)
+		s_add_u32		s_restore_buf_rsrc2, s_restore_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_restore_buf_rsrc2,  0x1000000										//NUM_RECORDS in bytes
+	end
+
+    s_and_b32       m0, s_restore_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_RESTORE_HWREG_WAVE64
+
+    read_sgpr_from_mem_wave32(s_restore_m0, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//M0
+	read_sgpr_from_mem_wave32(s_restore_pc_lo, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//PC
+	read_sgpr_from_mem_wave32(s_restore_pc_hi, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)
+	read_sgpr_from_mem_wave32(s_restore_exec_lo, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//EXEC
+	read_sgpr_from_mem_wave32(s_restore_exec_hi, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)
+	read_sgpr_from_mem_wave32(s_restore_status, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//STATUS
+	read_sgpr_from_mem_wave32(s_restore_trapsts, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//TRAPSTS
+    //read_sgpr_from_mem_wave32(xnack_mask_lo, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//XNACK_MASK_LO
+	//read_sgpr_from_mem_wave32(xnack_mask_hi, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//XNACK_MASK_HI
+    read_sgpr_from_mem_wave32(s_restore_xnack_mask, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//XNACK_MASK
+	read_sgpr_from_mem_wave32(s_restore_mode, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//MODE
+    if(SAVE_RESTORE_HWID_DDID)
+    read_sgpr_from_mem_wave32(s_restore_hwid1, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//HW_ID1
+    end
+    s_branch        L_RESTORE_HWREG_FINISH
+
+  L_RESTORE_HWREG_WAVE64:
+	read_sgpr_from_mem_wave64(s_restore_m0, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//M0
+	read_sgpr_from_mem_wave64(s_restore_pc_lo, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//PC
+	read_sgpr_from_mem_wave64(s_restore_pc_hi, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)
+	read_sgpr_from_mem_wave64(s_restore_exec_lo, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//EXEC
+	read_sgpr_from_mem_wave64(s_restore_exec_hi, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)
+	read_sgpr_from_mem_wave64(s_restore_status, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//STATUS
+	read_sgpr_from_mem_wave64(s_restore_trapsts, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//TRAPSTS
+    //read_sgpr_from_mem_wave64(xnack_mask_lo, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//XNACK_MASK_LO
+	//read_sgpr_from_mem_wave64(xnack_mask_hi, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//XNACK_MASK_HI
+    read_sgpr_from_mem_wave64(s_restore_xnack_mask, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)					//XNACK_MASK
+	read_sgpr_from_mem_wave64(s_restore_mode, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//MODE
+    if(SAVE_RESTORE_HWID_DDID)
+    read_sgpr_from_mem_wave64(s_restore_hwid1, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)				//HW_ID1
+    end
+  L_RESTORE_HWREG_FINISH:
+	s_waitcnt		lgkmcnt(0)																						//from now on, it is safe to restore STATUS and IB_STS
+  
+
+
+    if(SAVE_RESTORE_HWID_DDID)
+  L_RESTORE_DDID:
+    s_mov_b32      m0, s_restore_hwid1                                                      //virtual ttrace support: the save-context handler records the SE/SA/WGP/SIMD/wave of the original wave
+    s_ttracedata                                                                            //and then can output it as SHADER_DATA to ttrace on restore to provide a correlation across the save-restore
+                                    
+    s_mov_b32		s_restore_buf_rsrc2, 0x4												//NUM_RECORDS	in bytes
+	if (SWIZZLE_EN)
+		s_add_u32		s_restore_buf_rsrc2, s_restore_buf_rsrc2, 0x0						//FIXME need to use swizzle to enable bounds checking?
+	else
+		s_mov_b32		s_restore_buf_rsrc2,  0x1000000										//NUM_RECORDS in bytes
+	end
+
+    s_and_b32       m0, s_restore_size, 1
+    s_cmp_eq_u32    m0, 1
+    s_cbranch_scc1  L_RESTORE_DDID_WAVE64
+
+    read_sgpr_from_mem_wave32(s_restore_ddid, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)	
+    s_branch        L_RESTORE_DDID_FINISH
+  L_RESTORE_DDID_WAVE64:
+    read_sgpr_from_mem_wave64(s_restore_ddid, s_restore_buf_rsrc0, s_restore_mem_offset, SGPR_SAVE_USE_SQC)	
+
+  L_RESTORE_DDID_FINISH:
+    s_waitcnt		lgkmcnt(0)
+    //s_mov_b32      m0, s_restore_ddid
+    //s_ttracedata   
+    if (RESTORE_DDID_IN_SGPR18)
+        s_mov_b32   s18, s_restore_ddid
+	end	
+    
+    end   
+
+	s_and_b32 s_restore_pc_hi, s_restore_pc_hi, 0x0000ffff    	//pc[47:32]        //Do it here in order not to affect STATUS
+
+	//for normal save & restore, the saved PC points to the next inst to execute, no adjustment needs to be made, otherwise:
+	if ((EMU_RUN_HACK) && (!EMU_RUN_HACK_RESTORE_NORMAL))
+		s_add_u32 s_restore_pc_lo, s_restore_pc_lo, 8            //pc[31:0]+8	  //two back-to-back s_trap are used (first for save and second for restore)
+		s_addc_u32	s_restore_pc_hi, s_restore_pc_hi, 0x0		 //carry bit over
+	end	
+	if ((EMU_RUN_HACK) && (EMU_RUN_HACK_RESTORE_NORMAL))	      
+		s_add_u32 s_restore_pc_lo, s_restore_pc_lo, 4            //pc[31:0]+4     // save is hack through s_trap but restore is normal
+		s_addc_u32	s_restore_pc_hi, s_restore_pc_hi, 0x0		 //carry bit over
+	end
+	
+	s_mov_b32 		m0, 		s_restore_m0
+	s_mov_b32 		exec_lo, 	s_restore_exec_lo
+	s_mov_b32 		exec_hi, 	s_restore_exec_hi
+	
+	s_and_b32		s_restore_m0, SQ_WAVE_TRAPSTS_PRE_SAVECTX_MASK, s_restore_trapsts
+	s_setreg_b32	hwreg(HW_REG_TRAPSTS, SQ_WAVE_TRAPSTS_PRE_SAVECTX_SHIFT, SQ_WAVE_TRAPSTS_PRE_SAVECTX_SIZE), s_restore_m0
+    s_setreg_b32    hwreg(HW_REG_SHADER_XNACK_MASK), s_restore_xnack_mask         //restore xnack_mask
+	s_and_b32		s_restore_m0, SQ_WAVE_TRAPSTS_POST_SAVECTX_MASK, s_restore_trapsts
+	s_lshr_b32		s_restore_m0, s_restore_m0, SQ_WAVE_TRAPSTS_POST_SAVECTX_SHIFT
+	s_setreg_b32	hwreg(HW_REG_TRAPSTS, SQ_WAVE_TRAPSTS_POST_SAVECTX_SHIFT, SQ_WAVE_TRAPSTS_POST_SAVECTX_SIZE), s_restore_m0
+	//s_setreg_b32 	hwreg(HW_REG_TRAPSTS), 	s_restore_trapsts      //don't overwrite SAVECTX bit as it may be set through external SAVECTX during restore
+	s_setreg_b32 	hwreg(HW_REG_MODE), 	s_restore_mode
+	//reuse s_restore_m0 as a temp register
+	s_and_b32		s_restore_m0, s_restore_pc_hi, S_SAVE_PC_HI_RCNT_MASK
+	s_lshr_b32		s_restore_m0, s_restore_m0, S_SAVE_PC_HI_RCNT_SHIFT
+	s_lshl_b32		s_restore_m0, s_restore_m0, SQ_WAVE_IB_STS_RCNT_SHIFT
+	s_mov_b32		s_restore_tmp, 0x0																				//IB_STS is zero
+	s_or_b32		s_restore_tmp, s_restore_tmp, s_restore_m0
+	s_and_b32		s_restore_m0, s_restore_pc_hi, S_SAVE_PC_HI_FIRST_REPLAY_MASK
+	s_lshr_b32		s_restore_m0, s_restore_m0, S_SAVE_PC_HI_FIRST_REPLAY_SHIFT
+	s_lshl_b32		s_restore_m0, s_restore_m0, SQ_WAVE_IB_STS_FIRST_REPLAY_SHIFT
+	s_or_b32		s_restore_tmp, s_restore_tmp, s_restore_m0
+    s_and_b32       s_restore_m0, s_restore_status, SQ_WAVE_STATUS_INST_ATC_MASK 
+    s_lshr_b32		s_restore_m0, s_restore_m0, SQ_WAVE_STATUS_INST_ATC_SHIFT
+	s_setreg_b32 	hwreg(HW_REG_IB_STS), 	s_restore_tmp
+	s_setreg_b32 	hwreg(HW_REG_STATUS), 	s_restore_status
+
+	s_barrier													//barrier to ensure the readiness of LDS before access attempts from any other wave in the same TG //FIXME not performance-optimal at this time
+	
+	
+//	s_rfe_b64 s_restore_pc_lo                              		//Return to the main shader program and resume execution
+    s_rfe_b64  s_restore_pc_lo            // s_restore_m0[0] is used to set STATUS.inst_atc 
+
+
+/**************************************************************************/
+/*                     	the END								              */
+/**************************************************************************/	
+L_END_PGM:	
+	s_endpgm
+	
+end	
+
+
+/**************************************************************************/
+/*                     	the helper functions							  */
+/**************************************************************************/
+function write_sgpr_to_mem_wave32(s, s_rsrc, s_mem_offset, use_sqc, use_mtbuf)
+	if (use_sqc)
+		s_mov_b32 exec_lo, m0					//assuming exec_lo is not needed anymore from this point on
+		s_mov_b32 m0, s_mem_offset
+		s_buffer_store_dword s, s_rsrc, m0		glc:1	
+		s_add_u32		s_mem_offset, s_mem_offset, 4
+		s_mov_b32	m0, exec_lo
+    elsif (use_mtbuf)
+        v_mov_b32	v0,	s
+        tbuffer_store_format_x v0, v0, s_rsrc, s_mem_offset format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+		s_add_u32		s_mem_offset, s_mem_offset, 128
+    else 
+        v_mov_b32	v0,	s
+		buffer_store_dword	v0, v0, s_rsrc, s_mem_offset	slc:1 glc:1
+        s_add_u32		s_mem_offset, s_mem_offset, 128
+	end
+end
+
+function write_sgpr_to_mem_wave64(s, s_rsrc, s_mem_offset, use_sqc, use_mtbuf)
+	if (use_sqc)
+		s_mov_b32 exec_lo, m0					//assuming exec_lo is not needed anymore from this point on
+		s_mov_b32 m0, s_mem_offset
+		s_buffer_store_dword s, s_rsrc, m0		glc:1	
+		s_add_u32		s_mem_offset, s_mem_offset, 4
+		s_mov_b32	m0, exec_lo
+    elsif (use_mtbuf)
+        v_mov_b32	v0,	s
+        tbuffer_store_format_x v0, v0, s_rsrc, s_mem_offset format:BUF_NUM_FORMAT_FLOAT format: BUF_DATA_FORMAT_32 slc:1 glc:1
+		s_add_u32		s_mem_offset, s_mem_offset, 256
+    else 
+        v_mov_b32	v0,	s
+		buffer_store_dword	v0, v0, s_rsrc, s_mem_offset	slc:1 glc:1
+        s_add_u32		s_mem_offset, s_mem_offset, 256
+	end
+end
+
+function read_sgpr_from_mem_wave32(s, s_rsrc, s_mem_offset, use_sqc)
+	s_buffer_load_dword s, s_rsrc, s_mem_offset		glc:1
+	if (use_sqc)
+		s_add_u32		s_mem_offset, s_mem_offset, 4
+	else
+        s_add_u32		s_mem_offset, s_mem_offset, 128
+	end
+end
+
+function read_sgpr_from_mem_wave64(s, s_rsrc, s_mem_offset, use_sqc)
+	s_buffer_load_dword s, s_rsrc, s_mem_offset		glc:1
+	if (use_sqc)
+		s_add_u32		s_mem_offset, s_mem_offset, 4
+	else
+        s_add_u32		s_mem_offset, s_mem_offset, 256
+	end
+end
+#endif
+
+static const uint32_t cwsr_trap_gfx10_hex[] = {
+	0xbf820001, 0xbf82012e,
+	0xb0804004, 0xb970f802,
+	0x8a708670, 0xb971f803,
+	0x8771ff71, 0x00000400,
+	0xbf850008, 0xb971f803,
+	0x8771ff71, 0x000001ff,
+	0xbf850001, 0x806c846c,
+	0x876dff6d, 0x0000ffff,
+	0xbe80226c, 0xb971f803,
+	0x8771ff71, 0x00000100,
+	0xbf840006, 0xbef60380,
+	0xb9f60203, 0x876dff6d,
+	0x0000ffff, 0x80ec886c,
+	0x82ed806d, 0xbef60380,
+	0xb9f60283, 0xb973f816,
+	0xb9762c07, 0x8f769c76,
+	0x886d766d, 0xb97603c7,
+	0x8f769b76, 0x886d766d,
+	0xb976f807, 0x8776ff76,
+	0x00007fff, 0xb9f6f807,
+	0xbeee037e, 0xbeef037f,
+	0xbefe0480, 0xbf900004,
+	0xbf8e0002, 0xbf88fffe,
+	0xbef4037e, 0x8775ff7f,
+	0x0000ffff, 0x8875ff75,
+	0x00040000, 0xbef60380,
+	0xbef703ff, 0x00807fac,
+	0x8776ff7f, 0x08000000,
+	0x90768376, 0x88777677,
+	0x8776ff7f, 0x70000000,
+	0x90768176, 0x88777677,
+	0xbefb037c, 0xbefa0380,
+	0xb97202dc, 0x8872727f,
+	0xbefe03c1, 0x877c8172,
+	0xbf06817c, 0xbf850002,
+	0xbeff0380, 0xbf820001,
+	0xbeff03c1, 0xb9712a05,
+	0x80718171, 0x8f718271,
+	0x877c8172, 0xbf06817c,
+	0xbf85000d, 0x8f768771,
+	0xbef603ff, 0x01000000,
+	0xbefc0380, 0x7e008700,
+	0xe0704000, 0x7a5d0000,
+	0x807c817c, 0x807aff7a,
+	0x00000080, 0xbf0a717c,
+	0xbf85fff8, 0xbf82001b,
+	0x8f768871, 0xbef603ff,
+	0x01000000, 0xbefc0380,
+	0x7e008700, 0xe0704000,
+	0x7a5d0000, 0x807c817c,
+	0x807aff7a, 0x00000100,
+	0xbf0a717c, 0xbf85fff8,
+	0xb9711e06, 0x8771c171,
+	0xbf84000c, 0x8f718371,
+	0x80717c71, 0xbefe03c1,
+	0xbeff0380, 0x7e008700,
+	0xe0704000, 0x7a5d0000,
+	0x807c817c, 0x807aff7a,
+	0x00000080, 0xbf0a717c,
+	0xbf85fff8, 0xbf8a0000,
+	0x8776ff72, 0x04000000,
+	0xbf84002b, 0xbefe03c1,
+	0x877c8172, 0xbf06817c,
+	0xbf850002, 0xbeff0380,
+	0xbf820001, 0xbeff03c1,
+	0xb9714306, 0x8771c171,
+	0xbf840021, 0x8f718671,
+	0x8f718271, 0xbef60371,
+	0xbef603ff, 0x01000000,
+	0xd7650000, 0x000100c1,
+	0xd7660000, 0x000200c1,
+	0x16000084, 0x877c8172,
+	0xbf06817c, 0xbefc0380,
+	0xbf85000a, 0x807cff7c,
+	0x00000080, 0x807aff7a,
+	0x00000080, 0xd5250000,
+	0x0001ff00, 0x00000080,
+	0xbf0a717c, 0xbf85fff7,
+	0xbf820009, 0x807cff7c,
+	0x00000100, 0x807aff7a,
+	0x00000100, 0xd5250000,
+	0x0001ff00, 0x00000100,
+	0xbf0a717c, 0xbf85fff7,
+	0x877c8172, 0xbf06817c,
+	0xbf850003, 0x8f7687ff,
+	0x0000006a, 0xbf820002,
+	0x8f7688ff, 0x0000006a,
+	0xbef603ff, 0x01000000,
+	0x877c8172, 0xbf06817c,
+	0xbefc0380, 0xbf800000,
+	0xbf85000b, 0xbe802e00,
+	0x7e000200, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x807c817c,
+	0xbf0aff7c, 0x0000006a,
+	0xbf85fff6, 0xbf82000a,
+	0xbe802e00, 0x7e000200,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x807c817c, 0xbf0aff7c,
+	0x0000006a, 0xbf85fff6,
+	0xbef60384, 0xbef603ff,
+	0x01000000, 0x877c8172,
+	0xbf06817c, 0xbf850030,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x7e00026c,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0x7e00026d, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x7e00026e,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0x7e00026f, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x7e000270,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0xb971f803, 0x7e000271,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0x7e000273, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0xb97bf801,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0xbf82002f,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0x7e00026c,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x7e00026d, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0x7e00026e,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x7e00026f, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0x7e000270,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0xb971f803, 0x7e000271,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x7e000273, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0xb97bf801,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0xbf820119,
+	0xbef4037e, 0x8775ff7f,
+	0x0000ffff, 0x8875ff75,
+	0x00040000, 0xbef60380,
+	0xbef703ff, 0x00807fac,
+	0x8772ff7f, 0x08000000,
+	0x90728372, 0x88777277,
+	0x8772ff7f, 0x70000000,
+	0x90728172, 0x88777277,
+	0xb97902dc, 0x8879797f,
+	0xbef80380, 0xbefe03c1,
+	0x877c8179, 0xbf06817c,
+	0xbf850002, 0xbeff0380,
+	0xbf820001, 0xbeff03c1,
+	0xb96f2a05, 0x806f816f,
+	0x8f6f826f, 0x877c8179,
+	0xbf06817c, 0xbf850013,
+	0x8f76876f, 0xbef603ff,
+	0x01000000, 0xbef20378,
+	0x8078ff78, 0x00000080,
+	0xbefc0381, 0xe0304000,
+	0x785d0000, 0xbf8c3f70,
+	0x7e008500, 0x807c817c,
+	0x8078ff78, 0x00000080,
+	0xbf0a6f7c, 0xbf85fff7,
+	0xe0304000, 0x725d0000,
+	0xbf820023, 0x8f76886f,
+	0xbef603ff, 0x01000000,
+	0xbef20378, 0x8078ff78,
+	0x00000100, 0xbefc0381,
+	0xe0304000, 0x785d0000,
+	0xbf8c3f70, 0x7e008500,
+	0x807c817c, 0x8078ff78,
+	0x00000100, 0xbf0a6f7c,
+	0xbf85fff7, 0xb96f1e06,
+	0x876fc16f, 0xbf84000e,
+	0x8f6f836f, 0x806f7c6f,
+	0xbefe03c1, 0xbeff0380,
+	0xe0304000, 0x785d0000,
+	0xbf8c3f70, 0x7e008500,
+	0x807c817c, 0x8078ff78,
+	0x00000080, 0xbf0a6f7c,
+	0xbf85fff7, 0xbeff03c1,
+	0xe0304000, 0x725d0000,
+	0x8772ff79, 0x04000000,
+	0xbf840020, 0xbefe03c1,
+	0x877c8179, 0xbf06817c,
+	0xbf850002, 0xbeff0380,
+	0xbf820001, 0xbeff03c1,
+	0xb96f4306, 0x876fc16f,
+	0xbf840016, 0x8f6f866f,
+	0x8f6f826f, 0xbef6036f,
+	0xbef603ff, 0x01000000,
+	0x877c8172, 0xbf06817c,
+	0xbefc0380, 0xbf850007,
+	0x807cff7c, 0x00000080,
+	0x8078ff78, 0x00000080,
+	0xbf0a6f7c, 0xbf85fffa,
+	0xbf820006, 0x807cff7c,
+	0x00000100, 0x8078ff78,
+	0x00000100, 0xbf0a6f7c,
+	0xbf85fffa, 0x877c8179,
+	0xbf06817c, 0xbf850003,
+	0x8f7687ff, 0x0000006a,
+	0xbf820002, 0x8f7688ff,
+	0x0000006a, 0xbef603ff,
+	0x01000000, 0x877c8179,
+	0xbf06817c, 0xbf850012,
+	0xf4211cba, 0xf0000000,
+	0x8078ff78, 0x00000080,
+	0xbefc0381, 0xf421003a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xbf8cc07f,
+	0xbe803000, 0xbf800000,
+	0x807c817c, 0xbf0aff7c,
+	0x0000006a, 0xbf85fff5,
+	0xbe800372, 0xbf820011,
+	0xf4211cba, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xbefc0381, 0xf421003a,
+	0xf0000000, 0x8078ff78,
+	0x00000100, 0xbf8cc07f,
+	0xbe803000, 0xbf800000,
+	0x807c817c, 0xbf0aff7c,
+	0x0000006a, 0xbf85fff5,
+	0xbe800372, 0xbef60384,
+	0xbef603ff, 0x01000000,
+	0x877c8179, 0xbf06817c,
+	0xbf850025, 0xf4211bfa,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211b3a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211b7a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211eba,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211efa,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211c3a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211c7a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211cfa,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211e7a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xbf820024,
+	0xf4211bfa, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211b3a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211b7a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211eba, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211efa, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211c3a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211c7a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211cfa, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211e7a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xbf8cc07f, 0x876dff6d,
+	0x0000ffff, 0xbefc036f,
+	0xbefe037a, 0xbeff037b,
+	0x876f71ff, 0x000003ff,
+	0xb9ef4803, 0xb9f3f816,
+	0x876f71ff, 0xfffff800,
+	0x906f8b6f, 0xb9efa2c3,
+	0xb9f9f801, 0x876fff6d,
+	0xf0000000, 0x906f9c6f,
+	0x8f6f906f, 0xbef20380,
+	0x88726f72, 0x876fff6d,
+	0x08000000, 0x906f9b6f,
+	0x8f6f8f6f, 0x88726f72,
+	0x876fff70, 0x00800000,
+	0x906f976f, 0xb9f2f807,
+	0xb9f0f802, 0xbf8a0000,
+	0xbe80226c, 0xbf810000,
+	0xbf9f0000, 0xbf9f0000,
+	0xbf9f0000, 0xbf9f0000,
+	0xbf9f0000, 0x00000000,
+};
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index 955d72179da1..9015fac24414 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -29,6 +29,7 @@
 #include "cwsr_trap_handler.h"
 #include "kfd_iommu.h"
 #include "amdgpu_amdkfd.h"
+#include "cwsr_trap_handler_gfx10.asm"
 
 #define MQD_SIZE_ALIGNED 768
 
@@ -327,7 +328,7 @@ static const struct kfd_device_info navi10_device_info = {
 	.num_of_watch_points = 4,
 	.mqd_size_aligned = MQD_SIZE_ALIGNED,
 	.needs_iommu_device = false,
-	.supports_cwsr = false,
+	.supports_cwsr = true,
 	.needs_pci_atomics = false,
 	.num_sdma_engines = 2,
 	.num_sdma_queues_per_engine = 8,
@@ -534,11 +535,14 @@ static void kfd_cwsr_init(struct kfd_dev *kfd)
 			BUILD_BUG_ON(sizeof(cwsr_trap_gfx8_hex) > PAGE_SIZE);
 			kfd->cwsr_isa = cwsr_trap_gfx8_hex;
 			kfd->cwsr_isa_size = sizeof(cwsr_trap_gfx8_hex);
-		} else {
-			/* TODO: Do we need another trap handler for navi10? */
+		} else if (kfd->device_info->asic_family < CHIP_NAVI10) {
 			BUILD_BUG_ON(sizeof(cwsr_trap_gfx9_hex) > PAGE_SIZE);
 			kfd->cwsr_isa = cwsr_trap_gfx9_hex;
 			kfd->cwsr_isa_size = sizeof(cwsr_trap_gfx9_hex);
+		} else {
+			BUILD_BUG_ON(sizeof(cwsr_trap_gfx10_hex) > PAGE_SIZE);
+			kfd->cwsr_isa = cwsr_trap_gfx10_hex;
+			kfd->cwsr_isa_size = sizeof(cwsr_trap_gfx10_hex);
 		}
 
 		kfd->cwsr_enabled = true;
-- 
2.20.1


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 130/459] drm/amdkfd: Moved gfx10 cwsr binary to cwsr_trap_handler.h
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (28 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 129/459] drm/amdkfd: Added cwsr trap handler for gfx10 Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 131/459] drm/amdkfd: Parameterize queue_preemption_timeout_ms Alex Deucher
                     ` (62 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Yong Zhao, Oak Zeng

From: Oak Zeng <Oak.Zeng@amd.com>

The same thing was done for previous HW generations. Do the same for
gfx10 to keep the code consistent.

Signed-off-by: Oak Zeng <Oak.Zeng@amd.com>
Reviewed-by: Yong Zhao <Yong.Zhao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/amdkfd/cwsr_trap_handler.h    | 299 +++++++++++++++++
 .../amd/amdkfd/cwsr_trap_handler_gfx10.asm    | 302 +-----------------
 drivers/gpu/drm/amd/amdkfd/kfd_device.c       |   1 -
 3 files changed, 300 insertions(+), 302 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
index e413d4a71fa3..826913c70766 100644
--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
+++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
@@ -561,3 +561,302 @@ static const uint32_t cwsr_trap_gfx9_hex[] = {
 	0xbf8a0000, 0x95806f6c,
 	0xbf810000, 0x00000000,
 };
+
+static const uint32_t cwsr_trap_gfx10_hex[] = {
+	0xbf820001, 0xbf82012e,
+	0xb0804004, 0xb970f802,
+	0x8a708670, 0xb971f803,
+	0x8771ff71, 0x00000400,
+	0xbf850008, 0xb971f803,
+	0x8771ff71, 0x000001ff,
+	0xbf850001, 0x806c846c,
+	0x876dff6d, 0x0000ffff,
+	0xbe80226c, 0xb971f803,
+	0x8771ff71, 0x00000100,
+	0xbf840006, 0xbef60380,
+	0xb9f60203, 0x876dff6d,
+	0x0000ffff, 0x80ec886c,
+	0x82ed806d, 0xbef60380,
+	0xb9f60283, 0xb973f816,
+	0xb9762c07, 0x8f769c76,
+	0x886d766d, 0xb97603c7,
+	0x8f769b76, 0x886d766d,
+	0xb976f807, 0x8776ff76,
+	0x00007fff, 0xb9f6f807,
+	0xbeee037e, 0xbeef037f,
+	0xbefe0480, 0xbf900004,
+	0xbf8e0002, 0xbf88fffe,
+	0xbef4037e, 0x8775ff7f,
+	0x0000ffff, 0x8875ff75,
+	0x00040000, 0xbef60380,
+	0xbef703ff, 0x00807fac,
+	0x8776ff7f, 0x08000000,
+	0x90768376, 0x88777677,
+	0x8776ff7f, 0x70000000,
+	0x90768176, 0x88777677,
+	0xbefb037c, 0xbefa0380,
+	0xb97202dc, 0x8872727f,
+	0xbefe03c1, 0x877c8172,
+	0xbf06817c, 0xbf850002,
+	0xbeff0380, 0xbf820001,
+	0xbeff03c1, 0xb9712a05,
+	0x80718171, 0x8f718271,
+	0x877c8172, 0xbf06817c,
+	0xbf85000d, 0x8f768771,
+	0xbef603ff, 0x01000000,
+	0xbefc0380, 0x7e008700,
+	0xe0704000, 0x7a5d0000,
+	0x807c817c, 0x807aff7a,
+	0x00000080, 0xbf0a717c,
+	0xbf85fff8, 0xbf82001b,
+	0x8f768871, 0xbef603ff,
+	0x01000000, 0xbefc0380,
+	0x7e008700, 0xe0704000,
+	0x7a5d0000, 0x807c817c,
+	0x807aff7a, 0x00000100,
+	0xbf0a717c, 0xbf85fff8,
+	0xb9711e06, 0x8771c171,
+	0xbf84000c, 0x8f718371,
+	0x80717c71, 0xbefe03c1,
+	0xbeff0380, 0x7e008700,
+	0xe0704000, 0x7a5d0000,
+	0x807c817c, 0x807aff7a,
+	0x00000080, 0xbf0a717c,
+	0xbf85fff8, 0xbf8a0000,
+	0x8776ff72, 0x04000000,
+	0xbf84002b, 0xbefe03c1,
+	0x877c8172, 0xbf06817c,
+	0xbf850002, 0xbeff0380,
+	0xbf820001, 0xbeff03c1,
+	0xb9714306, 0x8771c171,
+	0xbf840021, 0x8f718671,
+	0x8f718271, 0xbef60371,
+	0xbef603ff, 0x01000000,
+	0xd7650000, 0x000100c1,
+	0xd7660000, 0x000200c1,
+	0x16000084, 0x877c8172,
+	0xbf06817c, 0xbefc0380,
+	0xbf85000a, 0x807cff7c,
+	0x00000080, 0x807aff7a,
+	0x00000080, 0xd5250000,
+	0x0001ff00, 0x00000080,
+	0xbf0a717c, 0xbf85fff7,
+	0xbf820009, 0x807cff7c,
+	0x00000100, 0x807aff7a,
+	0x00000100, 0xd5250000,
+	0x0001ff00, 0x00000100,
+	0xbf0a717c, 0xbf85fff7,
+	0x877c8172, 0xbf06817c,
+	0xbf850003, 0x8f7687ff,
+	0x0000006a, 0xbf820002,
+	0x8f7688ff, 0x0000006a,
+	0xbef603ff, 0x01000000,
+	0x877c8172, 0xbf06817c,
+	0xbefc0380, 0xbf800000,
+	0xbf85000b, 0xbe802e00,
+	0x7e000200, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x807c817c,
+	0xbf0aff7c, 0x0000006a,
+	0xbf85fff6, 0xbf82000a,
+	0xbe802e00, 0x7e000200,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x807c817c, 0xbf0aff7c,
+	0x0000006a, 0xbf85fff6,
+	0xbef60384, 0xbef603ff,
+	0x01000000, 0x877c8172,
+	0xbf06817c, 0xbf850030,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x7e00026c,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0x7e00026d, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x7e00026e,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0x7e00026f, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0x7e000270,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0xb971f803, 0x7e000271,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000080,
+	0x7e000273, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0xb97bf801,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000080, 0xbf82002f,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0x7e00026c,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x7e00026d, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0x7e00026e,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x7e00026f, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0x7e000270,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0xb971f803, 0x7e000271,
+	0xe0704000, 0x7a5d0000,
+	0x807aff7a, 0x00000100,
+	0x7e000273, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0xb97bf801,
+	0x7e00027b, 0xe0704000,
+	0x7a5d0000, 0x807aff7a,
+	0x00000100, 0xbf820119,
+	0xbef4037e, 0x8775ff7f,
+	0x0000ffff, 0x8875ff75,
+	0x00040000, 0xbef60380,
+	0xbef703ff, 0x00807fac,
+	0x8772ff7f, 0x08000000,
+	0x90728372, 0x88777277,
+	0x8772ff7f, 0x70000000,
+	0x90728172, 0x88777277,
+	0xb97902dc, 0x8879797f,
+	0xbef80380, 0xbefe03c1,
+	0x877c8179, 0xbf06817c,
+	0xbf850002, 0xbeff0380,
+	0xbf820001, 0xbeff03c1,
+	0xb96f2a05, 0x806f816f,
+	0x8f6f826f, 0x877c8179,
+	0xbf06817c, 0xbf850013,
+	0x8f76876f, 0xbef603ff,
+	0x01000000, 0xbef20378,
+	0x8078ff78, 0x00000080,
+	0xbefc0381, 0xe0304000,
+	0x785d0000, 0xbf8c3f70,
+	0x7e008500, 0x807c817c,
+	0x8078ff78, 0x00000080,
+	0xbf0a6f7c, 0xbf85fff7,
+	0xe0304000, 0x725d0000,
+	0xbf820023, 0x8f76886f,
+	0xbef603ff, 0x01000000,
+	0xbef20378, 0x8078ff78,
+	0x00000100, 0xbefc0381,
+	0xe0304000, 0x785d0000,
+	0xbf8c3f70, 0x7e008500,
+	0x807c817c, 0x8078ff78,
+	0x00000100, 0xbf0a6f7c,
+	0xbf85fff7, 0xb96f1e06,
+	0x876fc16f, 0xbf84000e,
+	0x8f6f836f, 0x806f7c6f,
+	0xbefe03c1, 0xbeff0380,
+	0xe0304000, 0x785d0000,
+	0xbf8c3f70, 0x7e008500,
+	0x807c817c, 0x8078ff78,
+	0x00000080, 0xbf0a6f7c,
+	0xbf85fff7, 0xbeff03c1,
+	0xe0304000, 0x725d0000,
+	0x8772ff79, 0x04000000,
+	0xbf840020, 0xbefe03c1,
+	0x877c8179, 0xbf06817c,
+	0xbf850002, 0xbeff0380,
+	0xbf820001, 0xbeff03c1,
+	0xb96f4306, 0x876fc16f,
+	0xbf840016, 0x8f6f866f,
+	0x8f6f826f, 0xbef6036f,
+	0xbef603ff, 0x01000000,
+	0x877c8172, 0xbf06817c,
+	0xbefc0380, 0xbf850007,
+	0x807cff7c, 0x00000080,
+	0x8078ff78, 0x00000080,
+	0xbf0a6f7c, 0xbf85fffa,
+	0xbf820006, 0x807cff7c,
+	0x00000100, 0x8078ff78,
+	0x00000100, 0xbf0a6f7c,
+	0xbf85fffa, 0x877c8179,
+	0xbf06817c, 0xbf850003,
+	0x8f7687ff, 0x0000006a,
+	0xbf820002, 0x8f7688ff,
+	0x0000006a, 0xbef603ff,
+	0x01000000, 0x877c8179,
+	0xbf06817c, 0xbf850012,
+	0xf4211cba, 0xf0000000,
+	0x8078ff78, 0x00000080,
+	0xbefc0381, 0xf421003a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xbf8cc07f,
+	0xbe803000, 0xbf800000,
+	0x807c817c, 0xbf0aff7c,
+	0x0000006a, 0xbf85fff5,
+	0xbe800372, 0xbf820011,
+	0xf4211cba, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xbefc0381, 0xf421003a,
+	0xf0000000, 0x8078ff78,
+	0x00000100, 0xbf8cc07f,
+	0xbe803000, 0xbf800000,
+	0x807c817c, 0xbf0aff7c,
+	0x0000006a, 0xbf85fff5,
+	0xbe800372, 0xbef60384,
+	0xbef603ff, 0x01000000,
+	0x877c8179, 0xbf06817c,
+	0xbf850025, 0xf4211bfa,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211b3a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211b7a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211eba,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211efa,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211c3a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211c7a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211cfa,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xf4211e7a,
+	0xf0000000, 0x8078ff78,
+	0x00000080, 0xbf820024,
+	0xf4211bfa, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211b3a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211b7a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211eba, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211efa, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211c3a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211c7a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211cfa, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xf4211e7a, 0xf0000000,
+	0x8078ff78, 0x00000100,
+	0xbf8cc07f, 0x876dff6d,
+	0x0000ffff, 0xbefc036f,
+	0xbefe037a, 0xbeff037b,
+	0x876f71ff, 0x000003ff,
+	0xb9ef4803, 0xb9f3f816,
+	0x876f71ff, 0xfffff800,
+	0x906f8b6f, 0xb9efa2c3,
+	0xb9f9f801, 0x876fff6d,
+	0xf0000000, 0x906f9c6f,
+	0x8f6f906f, 0xbef20380,
+	0x88726f72, 0x876fff6d,
+	0x08000000, 0x906f9b6f,
+	0x8f6f8f6f, 0x88726f72,
+	0x876fff70, 0x00800000,
+	0x906f976f, 0xb9f2f807,
+	0xb9f0f802, 0xbf8a0000,
+	0xbe80226c, 0xbf810000,
+	0xbf9f0000, 0xbf9f0000,
+	0xbf9f0000, 0xbf9f0000,
+	0xbf9f0000, 0x00000000,
+};
diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
index e6d345f7998b..f20e463e748b 100644
--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
+++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
@@ -21,7 +21,6 @@
  */
 
 
-#if 0
 shader main
 
 asic(DEFAULT)
@@ -1122,303 +1121,4 @@ function read_sgpr_from_mem_wave64(s, s_rsrc, s_mem_offset, use_sqc)
         s_add_u32		s_mem_offset, s_mem_offset, 256
 	end
 end
-#endif
-
-static const uint32_t cwsr_trap_gfx10_hex[] = {
-	0xbf820001, 0xbf82012e,
-	0xb0804004, 0xb970f802,
-	0x8a708670, 0xb971f803,
-	0x8771ff71, 0x00000400,
-	0xbf850008, 0xb971f803,
-	0x8771ff71, 0x000001ff,
-	0xbf850001, 0x806c846c,
-	0x876dff6d, 0x0000ffff,
-	0xbe80226c, 0xb971f803,
-	0x8771ff71, 0x00000100,
-	0xbf840006, 0xbef60380,
-	0xb9f60203, 0x876dff6d,
-	0x0000ffff, 0x80ec886c,
-	0x82ed806d, 0xbef60380,
-	0xb9f60283, 0xb973f816,
-	0xb9762c07, 0x8f769c76,
-	0x886d766d, 0xb97603c7,
-	0x8f769b76, 0x886d766d,
-	0xb976f807, 0x8776ff76,
-	0x00007fff, 0xb9f6f807,
-	0xbeee037e, 0xbeef037f,
-	0xbefe0480, 0xbf900004,
-	0xbf8e0002, 0xbf88fffe,
-	0xbef4037e, 0x8775ff7f,
-	0x0000ffff, 0x8875ff75,
-	0x00040000, 0xbef60380,
-	0xbef703ff, 0x00807fac,
-	0x8776ff7f, 0x08000000,
-	0x90768376, 0x88777677,
-	0x8776ff7f, 0x70000000,
-	0x90768176, 0x88777677,
-	0xbefb037c, 0xbefa0380,
-	0xb97202dc, 0x8872727f,
-	0xbefe03c1, 0x877c8172,
-	0xbf06817c, 0xbf850002,
-	0xbeff0380, 0xbf820001,
-	0xbeff03c1, 0xb9712a05,
-	0x80718171, 0x8f718271,
-	0x877c8172, 0xbf06817c,
-	0xbf85000d, 0x8f768771,
-	0xbef603ff, 0x01000000,
-	0xbefc0380, 0x7e008700,
-	0xe0704000, 0x7a5d0000,
-	0x807c817c, 0x807aff7a,
-	0x00000080, 0xbf0a717c,
-	0xbf85fff8, 0xbf82001b,
-	0x8f768871, 0xbef603ff,
-	0x01000000, 0xbefc0380,
-	0x7e008700, 0xe0704000,
-	0x7a5d0000, 0x807c817c,
-	0x807aff7a, 0x00000100,
-	0xbf0a717c, 0xbf85fff8,
-	0xb9711e06, 0x8771c171,
-	0xbf84000c, 0x8f718371,
-	0x80717c71, 0xbefe03c1,
-	0xbeff0380, 0x7e008700,
-	0xe0704000, 0x7a5d0000,
-	0x807c817c, 0x807aff7a,
-	0x00000080, 0xbf0a717c,
-	0xbf85fff8, 0xbf8a0000,
-	0x8776ff72, 0x04000000,
-	0xbf84002b, 0xbefe03c1,
-	0x877c8172, 0xbf06817c,
-	0xbf850002, 0xbeff0380,
-	0xbf820001, 0xbeff03c1,
-	0xb9714306, 0x8771c171,
-	0xbf840021, 0x8f718671,
-	0x8f718271, 0xbef60371,
-	0xbef603ff, 0x01000000,
-	0xd7650000, 0x000100c1,
-	0xd7660000, 0x000200c1,
-	0x16000084, 0x877c8172,
-	0xbf06817c, 0xbefc0380,
-	0xbf85000a, 0x807cff7c,
-	0x00000080, 0x807aff7a,
-	0x00000080, 0xd5250000,
-	0x0001ff00, 0x00000080,
-	0xbf0a717c, 0xbf85fff7,
-	0xbf820009, 0x807cff7c,
-	0x00000100, 0x807aff7a,
-	0x00000100, 0xd5250000,
-	0x0001ff00, 0x00000100,
-	0xbf0a717c, 0xbf85fff7,
-	0x877c8172, 0xbf06817c,
-	0xbf850003, 0x8f7687ff,
-	0x0000006a, 0xbf820002,
-	0x8f7688ff, 0x0000006a,
-	0xbef603ff, 0x01000000,
-	0x877c8172, 0xbf06817c,
-	0xbefc0380, 0xbf800000,
-	0xbf85000b, 0xbe802e00,
-	0x7e000200, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000080, 0x807c817c,
-	0xbf0aff7c, 0x0000006a,
-	0xbf85fff6, 0xbf82000a,
-	0xbe802e00, 0x7e000200,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000100,
-	0x807c817c, 0xbf0aff7c,
-	0x0000006a, 0xbf85fff6,
-	0xbef60384, 0xbef603ff,
-	0x01000000, 0x877c8172,
-	0xbf06817c, 0xbf850030,
-	0x7e00027b, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000080, 0x7e00026c,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000080,
-	0x7e00026d, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000080, 0x7e00026e,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000080,
-	0x7e00026f, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000080, 0x7e000270,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000080,
-	0xb971f803, 0x7e000271,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000080,
-	0x7e000273, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000080, 0xb97bf801,
-	0x7e00027b, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000080, 0xbf82002f,
-	0x7e00027b, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000100, 0x7e00026c,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000100,
-	0x7e00026d, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000100, 0x7e00026e,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000100,
-	0x7e00026f, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000100, 0x7e000270,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000100,
-	0xb971f803, 0x7e000271,
-	0xe0704000, 0x7a5d0000,
-	0x807aff7a, 0x00000100,
-	0x7e000273, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000100, 0xb97bf801,
-	0x7e00027b, 0xe0704000,
-	0x7a5d0000, 0x807aff7a,
-	0x00000100, 0xbf820119,
-	0xbef4037e, 0x8775ff7f,
-	0x0000ffff, 0x8875ff75,
-	0x00040000, 0xbef60380,
-	0xbef703ff, 0x00807fac,
-	0x8772ff7f, 0x08000000,
-	0x90728372, 0x88777277,
-	0x8772ff7f, 0x70000000,
-	0x90728172, 0x88777277,
-	0xb97902dc, 0x8879797f,
-	0xbef80380, 0xbefe03c1,
-	0x877c8179, 0xbf06817c,
-	0xbf850002, 0xbeff0380,
-	0xbf820001, 0xbeff03c1,
-	0xb96f2a05, 0x806f816f,
-	0x8f6f826f, 0x877c8179,
-	0xbf06817c, 0xbf850013,
-	0x8f76876f, 0xbef603ff,
-	0x01000000, 0xbef20378,
-	0x8078ff78, 0x00000080,
-	0xbefc0381, 0xe0304000,
-	0x785d0000, 0xbf8c3f70,
-	0x7e008500, 0x807c817c,
-	0x8078ff78, 0x00000080,
-	0xbf0a6f7c, 0xbf85fff7,
-	0xe0304000, 0x725d0000,
-	0xbf820023, 0x8f76886f,
-	0xbef603ff, 0x01000000,
-	0xbef20378, 0x8078ff78,
-	0x00000100, 0xbefc0381,
-	0xe0304000, 0x785d0000,
-	0xbf8c3f70, 0x7e008500,
-	0x807c817c, 0x8078ff78,
-	0x00000100, 0xbf0a6f7c,
-	0xbf85fff7, 0xb96f1e06,
-	0x876fc16f, 0xbf84000e,
-	0x8f6f836f, 0x806f7c6f,
-	0xbefe03c1, 0xbeff0380,
-	0xe0304000, 0x785d0000,
-	0xbf8c3f70, 0x7e008500,
-	0x807c817c, 0x8078ff78,
-	0x00000080, 0xbf0a6f7c,
-	0xbf85fff7, 0xbeff03c1,
-	0xe0304000, 0x725d0000,
-	0x8772ff79, 0x04000000,
-	0xbf840020, 0xbefe03c1,
-	0x877c8179, 0xbf06817c,
-	0xbf850002, 0xbeff0380,
-	0xbf820001, 0xbeff03c1,
-	0xb96f4306, 0x876fc16f,
-	0xbf840016, 0x8f6f866f,
-	0x8f6f826f, 0xbef6036f,
-	0xbef603ff, 0x01000000,
-	0x877c8172, 0xbf06817c,
-	0xbefc0380, 0xbf850007,
-	0x807cff7c, 0x00000080,
-	0x8078ff78, 0x00000080,
-	0xbf0a6f7c, 0xbf85fffa,
-	0xbf820006, 0x807cff7c,
-	0x00000100, 0x8078ff78,
-	0x00000100, 0xbf0a6f7c,
-	0xbf85fffa, 0x877c8179,
-	0xbf06817c, 0xbf850003,
-	0x8f7687ff, 0x0000006a,
-	0xbf820002, 0x8f7688ff,
-	0x0000006a, 0xbef603ff,
-	0x01000000, 0x877c8179,
-	0xbf06817c, 0xbf850012,
-	0xf4211cba, 0xf0000000,
-	0x8078ff78, 0x00000080,
-	0xbefc0381, 0xf421003a,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xbf8cc07f,
-	0xbe803000, 0xbf800000,
-	0x807c817c, 0xbf0aff7c,
-	0x0000006a, 0xbf85fff5,
-	0xbe800372, 0xbf820011,
-	0xf4211cba, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xbefc0381, 0xf421003a,
-	0xf0000000, 0x8078ff78,
-	0x00000100, 0xbf8cc07f,
-	0xbe803000, 0xbf800000,
-	0x807c817c, 0xbf0aff7c,
-	0x0000006a, 0xbf85fff5,
-	0xbe800372, 0xbef60384,
-	0xbef603ff, 0x01000000,
-	0x877c8179, 0xbf06817c,
-	0xbf850025, 0xf4211bfa,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211b3a,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211b7a,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211eba,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211efa,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211c3a,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211c7a,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211cfa,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xf4211e7a,
-	0xf0000000, 0x8078ff78,
-	0x00000080, 0xbf820024,
-	0xf4211bfa, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211b3a, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211b7a, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211eba, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211efa, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211c3a, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211c7a, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211cfa, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xf4211e7a, 0xf0000000,
-	0x8078ff78, 0x00000100,
-	0xbf8cc07f, 0x876dff6d,
-	0x0000ffff, 0xbefc036f,
-	0xbefe037a, 0xbeff037b,
-	0x876f71ff, 0x000003ff,
-	0xb9ef4803, 0xb9f3f816,
-	0x876f71ff, 0xfffff800,
-	0x906f8b6f, 0xb9efa2c3,
-	0xb9f9f801, 0x876fff6d,
-	0xf0000000, 0x906f9c6f,
-	0x8f6f906f, 0xbef20380,
-	0x88726f72, 0x876fff6d,
-	0x08000000, 0x906f9b6f,
-	0x8f6f8f6f, 0x88726f72,
-	0x876fff70, 0x00800000,
-	0x906f976f, 0xb9f2f807,
-	0xb9f0f802, 0xbf8a0000,
-	0xbe80226c, 0xbf810000,
-	0xbf9f0000, 0xbf9f0000,
-	0xbf9f0000, 0xbf9f0000,
-	0xbf9f0000, 0x00000000,
-};
+
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index 9015fac24414..75a95279c178 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -29,7 +29,6 @@
 #include "cwsr_trap_handler.h"
 #include "kfd_iommu.h"
 #include "amdgpu_amdkfd.h"
-#include "cwsr_trap_handler_gfx10.asm"
 
 #define MQD_SIZE_ALIGNED 768
 
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 131/459] drm/amdkfd: Parameterize queue_preemption_timeout_ms
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (29 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 130/459] drm/amdkfd: Moved gfx10 cwsr binary to cwsr_trap_handler.h Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 132/459] drm/amdkfd: Introduce DIQ type mqd manager for gfx10 Alex Deucher
                     ` (61 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Oak Zeng

From: Oak Zeng <Oak.Zeng@amd.com>

Added a module parameter queue_preemption_timeout_ms. This is helpful
for debugging KFD in an emulator environment, which is much slower than
a real chip, so the fence wait timeout value needs to be much larger.
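
For example (illustrative usage; the parameter is built into the amdgpu
module, and mode 0644 also makes it writable at runtime through sysfs):

  modprobe amdgpu queue_preemption_timeout_ms=60000
  echo 60000 > /sys/module/amdgpu/parameters/queue_preemption_timeout_ms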

Signed-off-by: Oak Zeng <Oak.Zeng@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c               | 8 ++++++++
 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c | 2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h | 2 --
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h                 | 7 +++++++
 4 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 9646de2daa02..4a0000aa2144 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -693,6 +693,14 @@ MODULE_PARM_DESC(halt_if_hws_hang, "Halt if HWS hang is detected (0 = off (defau
 bool hws_gws_support;
 module_param(hws_gws_support, bool, 0444);
 MODULE_PARM_DESC(hws_gws_support, "MEC FW support gws barriers (false = not supported (Default), true = supported)");
+
+/**
+  * DOC: queue_preemption_timeout_ms (int)
+  * queue preemption timeout in ms (1 = Minimum, 9000 = default)
+  */
+int queue_preemption_timeout_ms = 9000;
+module_param(queue_preemption_timeout_ms, int, 0644);
+MODULE_PARM_DESC(queue_preemption_timeout_ms, "queue preemption timeout in ms (1 = Minimum, 9000 = default)");
 #endif
 
 /**
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 632e510b5396..e71978b06fe7 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -1302,7 +1302,7 @@ static int unmap_queues_cpsch(struct device_queue_manager *dqm,
 				KFD_FENCE_COMPLETED);
 	/* should be timed out */
 	retval = amdkfd_fence_wait_timeout(dqm->fence_addr, KFD_FENCE_COMPLETED,
-				QUEUE_PREEMPT_DEFAULT_TIMEOUT_MS);
+				queue_preemption_timeout_ms);
 	if (retval)
 		return retval;
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
index ff9cdc584120..90db2c9275f6 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
@@ -31,8 +31,6 @@
 #include "kfd_priv.h"
 #include "kfd_mqd_manager.h"
 
-#define KFD_UNMAP_LATENCY_MS			(4000)
-#define QUEUE_PREEMPT_DEFAULT_TIMEOUT_MS (2 * KFD_UNMAP_LATENCY_MS + 1000)
 
 struct device_process_node {
 	struct qcm_process_device *qpd;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index 40e40d1e4dd2..a4b81db19082 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -104,6 +104,8 @@
 
 #define KFD_KERNEL_QUEUE_SIZE 2048
 
+#define KFD_UNMAP_LATENCY_MS	(4000)
+
 /*
  * 512 = 0x200
  * The doorbell index distance between SDMA RLC (2*i) and (2*i+1) in the
@@ -166,6 +168,11 @@ extern int halt_if_hws_hang;
  */
 extern bool hws_gws_support;
 
+/*
+ * Queue preemption timeout in ms
+ */
+extern int queue_preemption_timeout_ms;
+
 enum cache_policy {
 	cache_policy_coherent,
 	cache_policy_noncoherent
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 132/459] drm/amdkfd: Introduce DIQ type mqd manager for gfx10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (30 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 131/459] drm/amdkfd: Parameterize queue_preemption_timeout_ms Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 133/459] drm/amdkfd: Add mqd size in mqd manager struct " Alex Deucher
                     ` (60 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

With the introduction of the new mqd allocation scheme for the HIQ,
the DIQ and HIQ now use different mqd allocation schemes, so the DIQ
can no longer reuse the HIQ mqd manager.
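
The split shows up in the per-queue allocator; a sketch of how it lands
later in this series (see allocate_mqd() in kfd_mqd_manager_v10.c):

	if (q->type == KFD_QUEUE_TYPE_HIQ)
		return allocate_hiq_mqd(kfd);	/* HIQ: dedicated mqd trunk */
	/* DIQ and user queues keep using the regular allocation path */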

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
index 6663b72370f6..db3979520f54 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
@@ -497,6 +497,18 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 #endif
 		pr_debug("%s@%i\n", __func__, __LINE__);
 		break;
+	case KFD_MQD_TYPE_DIQ:
+		mqd->init_mqd = init_mqd_hiq;
+		mqd->uninit_mqd = uninit_mqd;
+		mqd->load_mqd = load_mqd;
+		mqd->update_mqd = update_mqd_hiq;
+		mqd->destroy_mqd = destroy_mqd;
+		mqd->is_occupied = is_occupied;
+		mqd->mqd_size = sizeof(struct v10_compute_mqd);
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd;
+#endif
+		break;
 	case KFD_MQD_TYPE_SDMA:
 		pr_debug("%s@%i\n", __func__, __LINE__);
 		mqd->init_mqd = init_mqd_sdma;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 133/459] drm/amdkfd: Add mqd size in mqd manager struct for gfx10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (31 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 132/459] drm/amdkfd: Introduce DIQ type mqd manager for gfx10 Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:25   ` [PATCH 134/459] drm/amdkfd: Allocate hiq and sdma mqd from mqd trunk " Alex Deucher
                     ` (59 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Also initialize the mqd size during mqd manager initialization.
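
With the size stored on the manager, callers can size allocations
generically instead of hardcoding a struct; a minimal sketch (illustrative
only, using the existing sub-allocator entry point):

	struct kfd_mem_obj *obj;
	int r = kfd_gtt_sa_allocate(mm->dev, mm->mqd_size, &obj);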

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
index db3979520f54..5ecc6d3a1b09 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
@@ -478,6 +478,7 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		mqd->update_mqd = update_mqd;
 		mqd->destroy_mqd = destroy_mqd;
 		mqd->is_occupied = is_occupied;
+		mqd->mqd_size = sizeof(struct v10_compute_mqd);
 		mqd->get_wave_state = get_wave_state;
 #if defined(CONFIG_DEBUG_FS)
 		mqd->debugfs_show_mqd = debugfs_show_mqd;
@@ -492,6 +493,7 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		mqd->update_mqd = update_mqd_hiq;
 		mqd->destroy_mqd = destroy_mqd;
 		mqd->is_occupied = is_occupied;
+		mqd->mqd_size = sizeof(struct v10_compute_mqd);
 #if defined(CONFIG_DEBUG_FS)
 		mqd->debugfs_show_mqd = debugfs_show_mqd;
 #endif
@@ -517,6 +519,7 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		mqd->update_mqd = update_mqd_sdma;
 		mqd->destroy_mqd = destroy_mqd_sdma;
 		mqd->is_occupied = is_occupied_sdma;
+		mqd->mqd_size = sizeof(struct v10_sdma_mqd);
 #if defined(CONFIG_DEBUG_FS)
 		mqd->debugfs_show_mqd = debugfs_show_mqd_sdma;
 #endif
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 134/459] drm/amdkfd: Allocate hiq and sdma mqd from mqd trunk for gfx10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (32 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 133/459] drm/amdkfd: Add mqd size in mqd manager struct " Alex Deucher
@ 2019-06-17 19:25   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 135/459] drm/amdkfd: Introduce XGMI SDMA queue type " Alex Deucher
                     ` (58 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:25 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Instead of allocating the hiq and sdma mqd from the sub-allocator,
allocate them from an mqd trunk pool. This is done for all asics.
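
A rough sketch of the trunk idea (field names assumed, not taken from
this patch): the trunk is one large GTT buffer and each queue's mqd is a
fixed-size slice of it, e.g. for sdma:

	/* hypothetical: offset of this queue's slice in the sdma mqd trunk */
	offset = (q->sdma_engine_id * num_sdma_queues_per_engine
		  + q->sdma_queue_id) * mqd_size;
	mqd_mem_obj->cpu_ptr = dev->sdma_mqd.cpu_ptr + offset;
	mqd_mem_obj->gpu_addr = dev->sdma_mqd.gpu_addr + offset;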

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  | 22 +++++++------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
index 5ecc6d3a1b09..0650999c15f4 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
@@ -72,6 +72,9 @@ static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
 	int retval;
 	struct kfd_mem_obj *mqd_mem_obj = NULL;
 
+	if (q->type == KFD_QUEUE_TYPE_HIQ)
+		return allocate_hiq_mqd(kfd);
+
 	/* From V9,  for CWSR, the control stack is located on the next page
 	 * boundary after the mqd, we will use the gtt allocation function
 	 * instead of sub-allocation function.
@@ -346,13 +349,10 @@ static int init_mqd_sdma(struct mqd_manager *mm, void **mqd,
 {
 	int retval;
 	struct v10_sdma_mqd *m;
+	struct kfd_dev *dev = mm->dev;
 
-
-	retval = kfd_gtt_sa_allocate(mm->dev,
-			sizeof(struct v10_sdma_mqd),
-			mqd_mem_obj);
-
-	if (retval != 0)
+	*mqd_mem_obj = allocate_sdma_mqd(dev, q);
+	if (!*mqd_mem_obj)
 		return -ENOMEM;
 
 	m = (struct v10_sdma_mqd *) (*mqd_mem_obj)->cpu_ptr;
@@ -368,12 +368,6 @@ static int init_mqd_sdma(struct mqd_manager *mm, void **mqd,
 	return retval;
 }
 
-static void uninit_mqd_sdma(struct mqd_manager *mm, void *mqd,
-		struct kfd_mem_obj *mqd_mem_obj)
-{
-	kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
-}
-
 static int load_mqd_sdma(struct mqd_manager *mm, void *mqd,
 		uint32_t pipe_id, uint32_t queue_id,
 		struct queue_properties *p, struct mm_struct *mms)
@@ -488,7 +482,7 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 	case KFD_MQD_TYPE_HIQ:
 		pr_debug("%s@%i\n", __func__, __LINE__);
 		mqd->init_mqd = init_mqd_hiq;
-		mqd->uninit_mqd = uninit_mqd;
+		mqd->uninit_mqd = uninit_mqd_hiq_sdma;
 		mqd->load_mqd = load_mqd;
 		mqd->update_mqd = update_mqd_hiq;
 		mqd->destroy_mqd = destroy_mqd;
@@ -514,7 +508,7 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 	case KFD_MQD_TYPE_SDMA:
 		pr_debug("%s@%i\n", __func__, __LINE__);
 		mqd->init_mqd = init_mqd_sdma;
-		mqd->uninit_mqd = uninit_mqd_sdma;
+		mqd->uninit_mqd = uninit_mqd_hiq_sdma;
 		mqd->load_mqd = load_mqd_sdma;
 		mqd->update_mqd = update_mqd_sdma;
 		mqd->destroy_mqd = destroy_mqd_sdma;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 135/459] drm/amdkfd: Introduce XGMI SDMA queue type for gfx10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (33 preceding siblings ...)
  2019-06-17 19:25   ` [PATCH 134/459] drm/amdkfd: Allocate hiq and sdma mqd from mqd trunk " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 136/459] drm/amdkfd: Delete alloc_format field from map_queue struct " Alex Deucher
                     ` (57 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

The existing QUEUE_TYPE_SDMA type means PCIe-optimized SDMA queues.
Introduce a new QUEUE_TYPE_SDMA_XGMI type, which is optimized for
non-PCIe transfers such as XGMI.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
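
For context, a hedged sketch of how the two SDMA flavors draw from
separate engine pools (num_sdma_engines and num_xgmi_sdma_engines are
the fields touched by this patch; the helper name is ours):

	/* PCIe-optimized and XGMI-optimized SDMA queues are counted
	 * against different engine pools.  navi10 sets
	 * num_xgmi_sdma_engines = 0, so XGMI queues simply cannot be
	 * created there. */
	static unsigned int sdma_engines_for_type(
			const struct kfd_device_info *info,
			enum kfd_queue_type type)
	{
		switch (type) {
		case KFD_QUEUE_TYPE_SDMA:
			return info->num_sdma_engines;
		case KFD_QUEUE_TYPE_SDMA_XGMI:
			return info->num_xgmi_sdma_engines;
		default:
			return 0;
		}
	}
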
 drivers/gpu/drm/amd/amdkfd/kfd_device.c           | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index 75a95279c178..c7fc011264f0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -330,6 +330,7 @@ static const struct kfd_device_info navi10_device_info = {
 	.supports_cwsr = true,
 	.needs_pci_atomics = false,
 	.num_sdma_engines = 2,
+	.num_xgmi_sdma_engines = 0,
 	.num_sdma_queues_per_engine = 8,
 };
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
index 209ad518fba1..26153c51493a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
@@ -181,6 +181,7 @@ static int pm_map_queues_v10(struct packet_manager *pm, uint32_t *buffer,
 			queue_type__mes_map_queues__debug_interface_queue_vi;
 		break;
 	case KFD_QUEUE_TYPE_SDMA:
+	case KFD_QUEUE_TYPE_SDMA_XGMI:
 		packet->bitfields2.engine_sel = q->properties.sdma_engine_id +
 				engine_sel__mes_map_queues__sdma0_vi;
 		use_static = false; /* no static queues under SDMA */
@@ -227,6 +228,7 @@ static int pm_unmap_queues_v10(struct packet_manager *pm, uint32_t *buffer,
 			engine_sel__mes_unmap_queues__compute;
 		break;
 	case KFD_QUEUE_TYPE_SDMA:
+	case KFD_QUEUE_TYPE_SDMA_XGMI:
 		packet->bitfields2.engine_sel =
 			engine_sel__mes_unmap_queues__sdma0 + sdma_engine;
 		break;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 136/459] drm/amdkfd: Delete alloc_format field from map_queue struct for gfx10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (34 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 135/459] drm/amdkfd: Introduce XGMI SDMA queue type " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 137/459] drm/amdkfd: update gfx10 support for latest kfd changes Alex Deucher
                     ` (56 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

The alloc format was never really supported by the MEC firmware; the
firmware always does one-per-pipe allocation.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
index 26153c51493a..aed32ab7102e 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue_v10.c
@@ -159,8 +159,6 @@ static int pm_map_queues_v10(struct packet_manager *pm, uint32_t *buffer,
 
 	packet->header.u32All = pm_build_pm4_header(IT_MAP_QUEUES,
 					sizeof(struct pm4_mes_map_queues));
-	packet->bitfields2.alloc_format =
-		alloc_format__mes_map_queues__one_per_pipe_vi;
 	packet->bitfields2.num_queues = 1;
 	packet->bitfields2.queue_sel =
 		queue_sel__mes_map_queues__map_to_hws_determined_queue_slots_vi;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 137/459] drm/amdkfd: update gfx10 support for latest kfd changes
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (35 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 136/459] drm/amdkfd: Delete alloc_format field from map_queue struct " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 138/459] drm/amdkfd: add more navi10 pci ids Alex Deucher
                     ` (55 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Separate mqd allocation and initialization: allocation becomes its own
mqd manager callback (allocate_mqd), and the init_mqd variants now only
initialize an already-allocated mqd, so they can return void.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
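
A hedged sketch of the caller flow this split enables (the function
below is illustrative, not part of the patch): allocation, which can
fail, happens first; the init_mqd variants then just fill in a buffer
that already exists, which is why they can return void.

	static int create_queue_mqd(struct mqd_manager *mqd_mgr,
				    struct queue *q)
	{
		/* Step 1: allocation, the only step that can fail. */
		q->mqd_mem_obj = mqd_mgr->allocate_mqd(mqd_mgr->dev,
						       &q->properties);
		if (!q->mqd_mem_obj)
			return -ENOMEM;

		/* Step 2: initialization of the pre-allocated mqd. */
		mqd_mgr->init_mqd(mqd_mgr, &q->mqd, q->mqd_mem_obj,
				  &q->gart_mqd_addr, &q->properties);
		return 0;
	}
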
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  | 82 ++++++-------------
 1 file changed, 26 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
index 0650999c15f4..0b68a17eb902 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
@@ -72,9 +72,6 @@ static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
 	int retval;
 	struct kfd_mem_obj *mqd_mem_obj = NULL;
 
-	if (q->type == KFD_QUEUE_TYPE_HIQ)
-		return allocate_hiq_mqd(kfd);
-
 	/* From V9,  for CWSR, the control stack is located on the next page
 	 * boundary after the mqd, we will use the gtt allocation function
 	 * instead of sub-allocation function.
@@ -103,21 +100,15 @@ static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
 
 }
 
-static int init_mqd(struct mqd_manager *mm, void **mqd,
-			struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
+static void init_mqd(struct mqd_manager *mm, void **mqd,
+			struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
 			struct queue_properties *q)
 {
-	int retval;
 	uint64_t addr;
 	struct v10_compute_mqd *m;
-	struct kfd_dev *kfd = mm->dev;
 
-	*mqd_mem_obj = allocate_mqd(kfd, q);
-	if (!*mqd_mem_obj)
-		return -ENOMEM;
-
-	m = (struct v10_compute_mqd *) (*mqd_mem_obj)->cpu_ptr;
-	addr = (*mqd_mem_obj)->gpu_addr;
+	m = (struct v10_compute_mqd *) mqd_mem_obj->cpu_ptr;
+	addr = mqd_mem_obj->gpu_addr;
 
 	memset(m, 0, sizeof(struct v10_compute_mqd));
 
@@ -164,9 +155,7 @@ static int init_mqd(struct mqd_manager *mm, void **mqd,
 	*mqd = m;
 	if (gart_addr)
 		*gart_addr = addr;
-	retval = mm->update_mqd(mm, m, q);
-
-	return retval;
+	mm->update_mqd(mm, m, q);
 }
 
 static int load_mqd(struct mqd_manager *mm, void *mqd,
@@ -183,7 +172,7 @@ static int load_mqd(struct mqd_manager *mm, void *mqd,
 	return r;
 }
 
-static int update_mqd(struct mqd_manager *mm, void *mqd,
+static void update_mqd(struct mqd_manager *mm, void *mqd,
 		      struct queue_properties *q)
 {
 	struct v10_compute_mqd *m;
@@ -246,8 +235,6 @@ static int update_mqd(struct mqd_manager *mm, void *mqd,
 			q->queue_address != 0 &&
 			q->queue_percent > 0 &&
 			!q->is_evicted);
-
-	return 0;
 }
 
 static int destroy_mqd(struct mqd_manager *mm, void *mqd,
@@ -260,7 +247,7 @@ static int destroy_mqd(struct mqd_manager *mm, void *mqd,
 		pipe_id, queue_id);
 }
 
-static void uninit_mqd(struct mqd_manager *mm, void *mqd,
+static void free_mqd(struct mqd_manager *mm, void *mqd,
 			struct kfd_mem_obj *mqd_mem_obj)
 {
 	struct kfd_dev *kfd = mm->dev;
@@ -305,67 +292,47 @@ static int get_wave_state(struct mqd_manager *mm, void *mqd,
 	return 0;
 }
 
-static int init_mqd_hiq(struct mqd_manager *mm, void **mqd,
-			struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
+static void init_mqd_hiq(struct mqd_manager *mm, void **mqd,
+			struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
 			struct queue_properties *q)
 {
 	struct v10_compute_mqd *m;
-	int retval;
-
 
-	retval = init_mqd(mm, mqd, mqd_mem_obj, gart_addr, q);
-
-	if (retval != 0)
-		return retval;
+	init_mqd(mm, mqd, mqd_mem_obj, gart_addr, q);
 
 	m = get_mqd(*mqd);
 
 	m->cp_hqd_pq_control |= 1 << CP_HQD_PQ_CONTROL__PRIV_STATE__SHIFT |
 			1 << CP_HQD_PQ_CONTROL__KMD_QUEUE__SHIFT;
-
-	return retval;
 }
 
-static int update_mqd_hiq(struct mqd_manager *mm, void *mqd,
+static void update_mqd_hiq(struct mqd_manager *mm, void *mqd,
 			struct queue_properties *q)
 {
 	struct v10_compute_mqd *m;
-	int retval;
-
-	retval = update_mqd(mm, mqd, q);
 
-	if (retval != 0)
-		return retval;
+	update_mqd(mm, mqd, q);
 
 	/* TODO: what's the point? update_mqd already does this. */
 	m = get_mqd(mqd);
 	m->cp_hqd_vmid = q->vmid;
-	return retval;
 }
 
-static int init_mqd_sdma(struct mqd_manager *mm, void **mqd,
-		struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
+static void init_mqd_sdma(struct mqd_manager *mm, void **mqd,
+		struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
 		struct queue_properties *q)
 {
-	int retval;
 	struct v10_sdma_mqd *m;
-	struct kfd_dev *dev = mm->dev;
-
-	*mqd_mem_obj = allocate_sdma_mqd(dev, q);
-	if (!*mqd_mem_obj)
-		return -ENOMEM;
 
-	m = (struct v10_sdma_mqd *) (*mqd_mem_obj)->cpu_ptr;
+	m = (struct v10_sdma_mqd *) mqd_mem_obj->cpu_ptr;
 
 	memset(m, 0, sizeof(struct v10_sdma_mqd));
 
 	*mqd = m;
 	if (gart_addr)
-		*gart_addr = (*mqd_mem_obj)->gpu_addr;
+		*gart_addr = mqd_mem_obj->gpu_addr;
 
-	retval = mm->update_mqd(mm, m, q);
-
-	return retval;
+	mm->update_mqd(mm, m, q);
 }
 
 static int load_mqd_sdma(struct mqd_manager *mm, void *mqd,
@@ -379,7 +346,7 @@ static int load_mqd_sdma(struct mqd_manager *mm, void *mqd,
 
 #define SDMA_RLC_DUMMY_DEFAULT 0xf
 
-static int update_mqd_sdma(struct mqd_manager *mm, void *mqd,
+static void update_mqd_sdma(struct mqd_manager *mm, void *mqd,
 		struct queue_properties *q)
 {
 	struct v10_sdma_mqd *m;
@@ -407,7 +374,6 @@ static int update_mqd_sdma(struct mqd_manager *mm, void *mqd,
 			q->queue_address != 0 &&
 			q->queue_percent > 0 &&
 			!q->is_evicted);
-	return 0;
 }
 
 /*
@@ -466,8 +432,9 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		pr_debug("%s@%i\n", __func__, __LINE__);
 	case KFD_MQD_TYPE_COMPUTE:
 		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->allocate_mqd = allocate_mqd;
 		mqd->init_mqd = init_mqd;
-		mqd->uninit_mqd = uninit_mqd;
+		mqd->free_mqd = free_mqd;
 		mqd->load_mqd = load_mqd;
 		mqd->update_mqd = update_mqd;
 		mqd->destroy_mqd = destroy_mqd;
@@ -481,8 +448,9 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		break;
 	case KFD_MQD_TYPE_HIQ:
 		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->allocate_mqd = allocate_hiq_mqd;
 		mqd->init_mqd = init_mqd_hiq;
-		mqd->uninit_mqd = uninit_mqd_hiq_sdma;
+		mqd->free_mqd = free_mqd_hiq_sdma;
 		mqd->load_mqd = load_mqd;
 		mqd->update_mqd = update_mqd_hiq;
 		mqd->destroy_mqd = destroy_mqd;
@@ -494,8 +462,9 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		pr_debug("%s@%i\n", __func__, __LINE__);
 		break;
 	case KFD_MQD_TYPE_DIQ:
+		mqd->allocate_mqd = allocate_hiq_mqd;
 		mqd->init_mqd = init_mqd_hiq;
-		mqd->uninit_mqd = uninit_mqd;
+		mqd->free_mqd = free_mqd;
 		mqd->load_mqd = load_mqd;
 		mqd->update_mqd = update_mqd_hiq;
 		mqd->destroy_mqd = destroy_mqd;
@@ -507,8 +476,9 @@ struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		break;
 	case KFD_MQD_TYPE_SDMA:
 		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->allocate_mqd = allocate_sdma_mqd;
 		mqd->init_mqd = init_mqd_sdma;
-		mqd->uninit_mqd = uninit_mqd_hiq_sdma;
+		mqd->free_mqd = free_mqd_hiq_sdma;
 		mqd->load_mqd = load_mqd_sdma;
 		mqd->update_mqd = update_mqd_sdma;
 		mqd->destroy_mqd = destroy_mqd_sdma;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 138/459] drm/amdkfd: add more navi10 pci ids
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (36 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 137/459] drm/amdkfd: update gfx10 support for latest kfd changes Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 139/459] drm/amdgpu: add Navi10 " Alex Deucher
                     ` (54 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_device.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index c7fc011264f0..26ea46de3722 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -454,6 +454,10 @@ static const struct kfd_deviceid supported_devices[] = {
 	{ 0x66af, &vega20_device_info },	/* Vega20 */
 	/* Navi10 */
 	{ 0x7310, &navi10_device_info },	/* Navi10 */
+	{ 0x7312, &navi10_device_info },	/* Navi10 */
+	{ 0x7318, &navi10_device_info },	/* Navi10 */
+	{ 0x731a, &navi10_device_info },	/* Navi10 */
+	{ 0x731f, &navi10_device_info },	/* Navi10 */
 };
 
 static int kfd_gtt_sa_init(struct kfd_dev *kfd, unsigned int buf_size,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 139/459] drm/amdgpu: add Navi10 pci ids
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (37 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 138/459] drm/amdkfd: add more navi10 pci ids Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 140/459] drm/amdgpu: add to set navi ip blocks Alex Deucher
                     ` (53 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 4a0000aa2144..a9d1ceb11e5d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -983,6 +983,12 @@ static const struct pci_device_id pciidlist[] = {
 	/* Raven */
 	{0x1002, 0x15dd, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RAVEN|AMD_IS_APU},
 	{0x1002, 0x15d8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RAVEN|AMD_IS_APU},
+	/* Navi10 */
+	{0x1002, 0x7310, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
+	{0x1002, 0x7312, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
+	{0x1002, 0x7318, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
+	{0x1002, 0x731A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
+	{0x1002, 0x731F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
 
 	{0, 0, 0}
 };
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 140/459] drm/amdgpu: add to set navi ip blocks
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (38 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 139/459] drm/amdgpu: add Navi10 " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 141/459] drm/amd/powerplay: update smu v11 ppsmc header Alex Deucher
                     ` (52 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index bf5650f7ac8b..cd29c5476b1c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -51,6 +51,7 @@
 #endif
 #include "vi.h"
 #include "soc15.h"
+#include "nv.h"
 #include "bif/bif_4_1_d.h"
 #include <linux/pci.h>
 #include <linux/firmware.h>
@@ -1527,6 +1528,13 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
 		if (r)
 			return r;
 		break;
+	case  CHIP_NAVI10:
+		adev->family = AMDGPU_FAMILY_NV;
+
+		r = nv_set_ip_blocks(adev);
+		if (r)
+			return r;
+		break;
 	default:
 		/* FIXME: not supported yet */
 		return -EINVAL;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 141/459] drm/amd/powerplay: update smu v11 ppsmc header
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (39 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 140/459] drm/amdgpu: add to set navi ip blocks Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 142/459] drm/amd/powerplay: update smu 11 driver if header for navi10 Alex Deucher
                     ` (51 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch updates the smu v11 ppsmc header for navi10.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
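
For context, a hedged sketch of driving the new
PPSMC_MSG_GetVoltageByDpmOverdrive message with its two argument
values; smu_send_smc_msg_with_param() is used here as a stand-in for
whatever messaging helper the driver wires up, so treat the call as
illustrative:

	/* Read both voltage curves: the fused AVFS curve and the
	 * overdrive curve, selected by the message argument. */
	static int smu_v11_read_voltage_curves(struct smu_context *smu)
	{
		int ret;

		ret = smu_send_smc_msg_with_param(smu,
				PPSMC_MSG_GetVoltageByDpmOverdrive,
				PPSMC_GET_AVFS_CURVE);
		if (ret)
			return ret;

		return smu_send_smc_msg_with_param(smu,
				PPSMC_MSG_GetVoltageByDpmOverdrive,
				PPSMC_GET_OVERDRIVE_CURVE);
	}
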
 .../drm/amd/powerplay/inc/smu_v11_0_ppsmc.h   | 38 ++++++++++---------
 1 file changed, 20 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_ppsmc.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_ppsmc.h
index f466f624ad32..2cb063664557 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_ppsmc.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_ppsmc.h
@@ -60,6 +60,7 @@
 //BACO/BAMACO/BOMACO
 #define PPSMC_MSG_EnterBaco                      0x18
 #define PPSMC_MSG_ExitBaco                       0x19
+#define PPSMC_MSG_ArmD3						            	 0x46
 
 //DPM
 #define PPSMC_MSG_SetSoftMinByFreq               0x1A
@@ -71,26 +72,23 @@
 #define PPSMC_MSG_GetDpmFreqByIndex              0x20
 #define PPSMC_MSG_OverridePcieParameters         0x21
 #define PPSMC_MSG_SetMinDeepSleepDcefclk         0x22
-#define PPSMC_MSG_SetWorkloadMask                0x23 
-#define PPSMC_MSG_SetUclkFastSwitch              0x24
-#define PPSMC_MSG_GetAvfsVoltageByDpm            0x25
-#define PPSMC_MSG_SetVideoFps                    0x26
-#define PPSMC_MSG_GetDcModeMaxDpmFreq            0x27
 
-//Power Gating
-#define PPSMC_MSG_AllowGfxOff                    0x28
-#define PPSMC_MSG_DisallowGfxOff                 0x29
-#define PPSMC_MSG_PowerUpVcn					 0x2A
-#define PPSMC_MSG_PowerDownVcn					 0x2B	
-#define PPSMC_MSG_PowerUpJpeg                    0x2C
-#define PPSMC_MSG_PowerDownJpeg					 0x2D
-//reserve 0x2A to 0x2F for PG harvesting TBD
+#define PPSMC_MSG_SetWorkloadMask                0x24 
+#define PPSMC_MSG_SetUclkFastSwitch              0x25
+#define PPSMC_MSG_GetVoltageByDpm                0x26
+#define PPSMC_MSG_SetVideoFps                    0x27
+#define PPSMC_MSG_GetDcModeMaxDpmFreq            0x28
 
-//I2C Interface
-#define PPSMC_RequestI2cTransaction              0x30
+//Power Gating
+#define PPSMC_MSG_AllowGfxOff                    0x29
+#define PPSMC_MSG_DisallowGfxOff                 0x2A
+#define PPSMC_MSG_PowerUpVcn					           0x2B
+#define PPSMC_MSG_PowerDownVcn					         0x2C	
+#define PPSMC_MSG_PowerUpJpeg                    0x2D
+#define PPSMC_MSG_PowerDownJpeg					         0x2E
+//reserve 0x29 to 0x30 for PG harvesting TBD
 
 //Resets
-#define PPSMC_MSG_SoftReset                      0x31  //FIXME Need confirmation from driver
 #define PPSMC_MSG_PrepareMp1ForUnload            0x32
 #define PPSMC_MSG_PrepareMp1ForReset             0x33
 #define PPSMC_MSG_PrepareMp1ForShutdown          0x34
@@ -100,7 +98,6 @@
 #define PPSMC_MSG_GetPptLimit                    0x36
 #define PPSMC_MSG_ReenableAcDcInterrupt          0x37
 #define PPSMC_MSG_NotifyPowerSource              0x38
-//#define PPSMC_MSG_GfxDeviceDriverReset           0x39 //FIXME mode1 and 2 resets will go directly go PSP
 
 //BTC
 #define PPSMC_MSG_RunBtc                         0x3A
@@ -120,9 +117,14 @@
 #define PPSMC_MSG_SetGeminiApertureHigh          0x43
 #define PPSMC_MSG_SetGeminiApertureLow           0x44
 
-#define PPSMC_Message_Count                      0x45
+#define PPSMC_MSG_GetVoltageByDpmOverdrive       0x45
+
+#define PPSMC_Message_Count                      0x47
 
 typedef uint32_t PPSMC_Result;
 typedef uint32_t PPSMC_Msg;
 
+//for use with PPSMC_MSG_GetVoltageByDpmOverdrive
+#define PPSMC_GET_AVFS_CURVE 0
+#define PPSMC_GET_OVERDRIVE_CURVE 1
 #endif
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 142/459] drm/amd/powerplay: update smu 11 driver if header for navi10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (40 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 141/459] drm/amd/powerplay: update smu v11 ppsmc header Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 143/459] drm/amd/powerplay: fix the mp/smuio " Alex Deucher
                     ` (50 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch updates the smu 11 driver if (interface) header for navi10.

UVD/VCE won't be used on navi10; here they are reserved for vega20.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
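
Much of this header encodes values in fixed point (mV(Q2), Q4.4,
Q16.16) or stores IEEE-754 floats in uint32_t fields. A hedged sketch
of the conversions those comments imply (helper names are ours, not
the driver's):

	#include <stdint.h>
	#include <string.h>

	/* mV(Q2): millivolts with 2 fractional bits. */
	static inline float q2_to_mv(uint16_t v)        { return v / 4.0f; }

	/* Q4.4: used by the spread-spectrum percent fields. */
	static inline float q4_4_to_pct(uint8_t v)      { return v / 16.0f; }

	/* Q16.16: used by MvddRatio. */
	static inline float q16_16_to_ratio(uint32_t v) { return v / 65536.0f; }

	/* QuadraticInt_t/DroopInt_t coefficients "store in IEEE float
	 * format"; reinterpret the bits rather than casting the value. */
	static inline float ieee_bits_to_float(uint32_t bits)
	{
		float f;

		memcpy(&f, &bits, sizeof(f));
		return f;
	}
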
 .../amd/powerplay/inc/smu_11_0_driver_if.h    | 1039 +++++++++++++++++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     |    2 +-
 2 files changed, 1040 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h b/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
new file mode 100644
index 000000000000..b98cb005a46c
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
@@ -0,0 +1,1039 @@
+#ifndef __SMU11_DRIVER_IF_H__
+#define __SMU11_DRIVER_IF_H__
+
+// *** IMPORTANT ***
+// SMU TEAM: Always increment the interface version if 
+// any structure is changed in this file
+#define SMU11_DRIVER_IF_VERSION 0x2D
+
+#define PPTABLE_NV10_SMU_VERSION 8
+
+#define NUM_GFXCLK_DPM_LEVELS  16
+#define NUM_SMNCLK_DPM_LEVELS  2
+#define NUM_SOCCLK_DPM_LEVELS  8
+#define NUM_MP0CLK_DPM_LEVELS  2
+#define NUM_DCLK_DPM_LEVELS    8
+#define NUM_VCLK_DPM_LEVELS    8
+#define NUM_DCEFCLK_DPM_LEVELS 8
+#define NUM_PHYCLK_DPM_LEVELS  8
+#define NUM_DISPCLK_DPM_LEVELS 8
+#define NUM_PIXCLK_DPM_LEVELS  8
+#define NUM_UCLK_DPM_LEVELS    4 
+#define NUM_MP1CLK_DPM_LEVELS  2
+#define NUM_LINK_LEVELS        2
+
+
+#define MAX_GFXCLK_DPM_LEVEL  (NUM_GFXCLK_DPM_LEVELS  - 1)
+#define MAX_SMNCLK_DPM_LEVEL  (NUM_SMNCLK_DPM_LEVELS  - 1)
+#define MAX_SOCCLK_DPM_LEVEL  (NUM_SOCCLK_DPM_LEVELS  - 1)
+#define MAX_MP0CLK_DPM_LEVEL  (NUM_MP0CLK_DPM_LEVELS  - 1)
+#define MAX_DCLK_DPM_LEVEL    (NUM_DCLK_DPM_LEVELS    - 1)
+#define MAX_VCLK_DPM_LEVEL    (NUM_VCLK_DPM_LEVELS    - 1)
+#define MAX_DCEFCLK_DPM_LEVEL (NUM_DCEFCLK_DPM_LEVELS - 1)
+#define MAX_DISPCLK_DPM_LEVEL (NUM_DISPCLK_DPM_LEVELS - 1)
+#define MAX_PIXCLK_DPM_LEVEL  (NUM_PIXCLK_DPM_LEVELS  - 1)
+#define MAX_PHYCLK_DPM_LEVEL  (NUM_PHYCLK_DPM_LEVELS  - 1)
+#define MAX_UCLK_DPM_LEVEL    (NUM_UCLK_DPM_LEVELS    - 1)
+#define MAX_MP1CLK_DPM_LEVEL  (NUM_MP1CLK_DPM_LEVELS  - 1)
+#define MAX_LINK_LEVEL        (NUM_LINK_LEVELS        - 1)
+
+//Gemini Modes
+#define PPSMC_GeminiModeNone   0  //Single GPU board
+#define PPSMC_GeminiModeMaster 1  //Master GPU on a Gemini board
+#define PPSMC_GeminiModeSlave  2  //Slave GPU on a Gemini board
+
+// Feature Control Defines
+// DPM
+#define FEATURE_DPM_PREFETCHER_BIT      0
+#define FEATURE_DPM_GFXCLK_BIT          1
+#define FEATURE_DPM_GFX_PACE_BIT        2
+#define FEATURE_DPM_UCLK_BIT            3
+#define FEATURE_DPM_SOCCLK_BIT          4
+#define FEATURE_DPM_MP0CLK_BIT          5
+#define FEATURE_DPM_LINK_BIT            6
+#define FEATURE_DPM_DCEFCLK_BIT         7
+#define FEATURE_MEM_VDDCI_SCALING_BIT   8 
+#define FEATURE_MEM_MVDD_SCALING_BIT    9
+                                        
+//Idle                                  
+#define FEATURE_DS_GFXCLK_BIT           10
+#define FEATURE_DS_SOCCLK_BIT           11
+#define FEATURE_DS_LCLK_BIT             12
+#define FEATURE_DS_DCEFCLK_BIT          13
+#define FEATURE_DS_UCLK_BIT             14
+#define FEATURE_GFX_ULV_BIT             15  
+#define FEATURE_FW_DSTATE_BIT           16 
+#define FEATURE_GFXOFF_BIT              17
+#define FEATURE_BACO_BIT                18
+#define FEATURE_VCN_PG_BIT              19  
+#define FEATURE_JPEG_PG_BIT             20
+#define FEATURE_USB_PG_BIT              21
+#define FEATURE_RSMU_SMN_CG_BIT         22
+//Throttler/Response                    
+#define FEATURE_PPT_BIT                 23
+#define FEATURE_TDC_BIT                 24
+#define FEATURE_GFX_EDC_BIT             25
+#define FEATURE_APCC_PLUS_BIT           26
+#define FEATURE_GTHR_BIT                27
+#define FEATURE_ACDC_BIT                28
+#define FEATURE_VR0HOT_BIT              29
+#define FEATURE_VR1HOT_BIT              30  
+#define FEATURE_FW_CTF_BIT              31
+#define FEATURE_FAN_CONTROL_BIT         32
+#define FEATURE_THERMAL_BIT             33
+#define FEATURE_GFX_DCS_BIT             34
+//VF                                    
+#define FEATURE_RM_BIT                  35
+#define FEATURE_LED_DISPLAY_BIT         36
+//Other                                 
+#define FEATURE_GFX_SS_BIT              37
+#define FEATURE_OUT_OF_BAND_MONITOR_BIT 38
+#define FEATURE_TEMP_DEPENDENT_VMIN_BIT 39
+
+#define FEATURE_MMHUB_PG                40 
+#define FEATURE_ATHUB_PG                41
+#define FEATURE_SPARE_42_BIT            42
+#define FEATURE_SPARE_43_BIT            43
+#define FEATURE_SPARE_44_BIT            44
+#define FEATURE_SPARE_45_BIT            45
+#define FEATURE_SPARE_46_BIT            46
+#define FEATURE_SPARE_47_BIT            47
+#define FEATURE_SPARE_48_BIT            48
+#define FEATURE_SPARE_49_BIT            49
+#define FEATURE_SPARE_50_BIT            50
+#define FEATURE_SPARE_51_BIT            51
+#define FEATURE_SPARE_52_BIT            52
+#define FEATURE_SPARE_53_BIT            53
+#define FEATURE_SPARE_54_BIT            54
+#define FEATURE_SPARE_55_BIT            55
+#define FEATURE_SPARE_56_BIT            56
+#define FEATURE_SPARE_57_BIT            57
+#define FEATURE_SPARE_58_BIT            58
+#define FEATURE_SPARE_59_BIT            59
+#define FEATURE_SPARE_60_BIT            60
+#define FEATURE_SPARE_61_BIT            61
+#define FEATURE_SPARE_62_BIT            62
+#define FEATURE_SPARE_63_BIT            63
+#define NUM_FEATURES                    64
+
+// Debug Overrides Bitmask
+#define DPM_OVERRIDE_DISABLE_SOCCLK_PID             0x00000001
+#define DPM_OVERRIDE_DISABLE_UCLK_PID               0x00000002
+#define DPM_OVERRIDE_DISABLE_VOLT_LINK_VCN_SOCCLK   0x00000004
+#define DPM_OVERRIDE_ENABLE_FREQ_LINK_VCLK_SOCCLK   0x00000008
+#define DPM_OVERRIDE_ENABLE_FREQ_LINK_DCLK_SOCCLK   0x00000010
+#define DPM_OVERRIDE_ENABLE_FREQ_LINK_GFXCLK_SOCCLK 0x00000020
+#define DPM_OVERRIDE_ENABLE_FREQ_LINK_GFXCLK_UCLK   0x00000040
+#define DPM_OVERRIDE_DISABLE_VOLT_LINK_DCE_SOCCLK   0x00000080
+#define DPM_OVERRIDE_DISABLE_VOLT_LINK_MP0_SOCCLK   0x00000100
+#define DPM_OVERRIDE_DISABLE_DFLL_PLL_SHUTDOWN      0x00000200
+#define DPM_OVERRIDE_DISABLE_MEMORY_TEMPERATURE_READ 0x00000400
+
+// VR Mapping Bit Defines
+#define VR_MAPPING_VR_SELECT_MASK  0x01
+#define VR_MAPPING_VR_SELECT_SHIFT 0x00
+
+#define VR_MAPPING_PLANE_SELECT_MASK  0x02
+#define VR_MAPPING_PLANE_SELECT_SHIFT 0x01
+
+// PSI Bit Defines
+#define PSI_SEL_VR0_PLANE0_PSI0  0x01
+#define PSI_SEL_VR0_PLANE0_PSI1  0x02
+#define PSI_SEL_VR0_PLANE1_PSI0  0x04
+#define PSI_SEL_VR0_PLANE1_PSI1  0x08
+#define PSI_SEL_VR1_PLANE0_PSI0  0x10
+#define PSI_SEL_VR1_PLANE0_PSI1  0x20
+#define PSI_SEL_VR1_PLANE1_PSI0  0x40
+#define PSI_SEL_VR1_PLANE1_PSI1  0x80
+
+// Throttler Control/Status Bits
+#define THROTTLER_PADDING_BIT      0
+#define THROTTLER_TEMP_EDGE_BIT    1
+#define THROTTLER_TEMP_HOTSPOT_BIT 2
+#define THROTTLER_TEMP_MEM_BIT     3
+#define THROTTLER_TEMP_VR_GFX_BIT  4
+#define THROTTLER_TEMP_VR_MEM0_BIT 5
+#define THROTTLER_TEMP_VR_MEM1_BIT 6
+#define THROTTLER_TEMP_VR_SOC_BIT  7
+#define THROTTLER_TEMP_LIQUID0_BIT 8
+#define THROTTLER_TEMP_LIQUID1_BIT 9
+#define THROTTLER_TEMP_PLX_BIT     10
+#define THROTTLER_TEMP_SKIN_BIT    11
+#define THROTTLER_TDC_GFX_BIT      12
+#define THROTTLER_TDC_SOC_BIT      13
+#define THROTTLER_PPT0_BIT         14
+#define THROTTLER_PPT1_BIT         15
+#define THROTTLER_PPT2_BIT         16
+#define THROTTLER_PPT3_BIT         17
+#define THROTTLER_FIT_BIT          18
+#define THROTTLER_PPM_BIT          19
+#define THROTTLER_APCC_BIT         20
+
+// FW DState Features Control Bits
+#define FW_DSTATE_SOC_ULV_BIT              0
+#define FW_DSTATE_G6_HSR_BIT               1
+#define FW_DSTATE_G6_PHY_VDDCI_OFF_BIT     2
+#define FW_DSTATE_MP0_DS_BIT               3
+#define FW_DSTATE_SMN_DS_BIT               4
+#define FW_DSTATE_MP1_DS_BIT               5
+#define FW_DSTATE_MP1_WHISPER_MODE_BIT     6
+#define FW_DSTATE_LIV_MIN_BIT              7
+#define FW_DSTATE_SOC_PLL_PWRDN_BIT        8   
+
+#define FW_DSTATE_SOC_ULV_MASK             (1 << FW_DSTATE_SOC_ULV_BIT          )
+#define FW_DSTATE_G6_HSR_MASK              (1 << FW_DSTATE_G6_HSR_BIT           )
+#define FW_DSTATE_G6_PHY_VDDCI_OFF_MASK    (1 << FW_DSTATE_G6_PHY_VDDCI_OFF_BIT )
+#define FW_DSTATE_MP1_DS_MASK              (1 << FW_DSTATE_MP1_DS_BIT           )  
+#define FW_DSTATE_MP0_DS_MASK              (1 << FW_DSTATE_MP0_DS_BIT           )   
+#define FW_DSTATE_SMN_DS_MASK              (1 << FW_DSTATE_SMN_DS_BIT           )
+#define FW_DSTATE_MP1_WHISPER_MODE_MASK    (1 << FW_DSTATE_MP1_WHISPER_MODE_BIT )
+#define FW_DSTATE_LIV_MIN_MASK             (1 << FW_DSTATE_LIV_MIN_BIT          )
+#define FW_DSTATE_SOC_PLL_PWRDN_MASK       (1 << FW_DSTATE_SOC_PLL_PWRDN_BIT    )
+
+//I2C Interface
+
+#define NUM_I2C_CONTROLLERS                8
+
+#define I2C_CONTROLLER_ENABLED             1
+#define I2C_CONTROLLER_DISABLED            0
+
+#define MAX_SW_I2C_COMMANDS                8
+
+typedef enum {
+  I2C_CONTROLLER_PORT_0 = 0,  //CKSVII2C0
+  I2C_CONTROLLER_PORT_1 = 1,  //CKSVII2C1
+  I2C_CONTROLLER_PORT_COUNT,
+} I2cControllerPort_e;
+
+typedef enum {
+  I2C_CONTROLLER_NAME_VR_GFX = 0,
+  I2C_CONTROLLER_NAME_VR_SOC,
+  I2C_CONTROLLER_NAME_VR_VDDCI,
+  I2C_CONTROLLER_NAME_VR_MVDD,
+  I2C_CONTROLLER_NAME_LIQUID0,
+  I2C_CONTROLLER_NAME_LIQUID1,  
+  I2C_CONTROLLER_NAME_PLX,
+  I2C_CONTROLLER_NAME_SPARE,
+  I2C_CONTROLLER_NAME_COUNT,  
+} I2cControllerName_e;
+
+typedef enum {
+  I2C_CONTROLLER_THROTTLER_TYPE_NONE = 0,
+  I2C_CONTROLLER_THROTTLER_VR_GFX,
+  I2C_CONTROLLER_THROTTLER_VR_SOC,
+  I2C_CONTROLLER_THROTTLER_VR_VDDCI,
+  I2C_CONTROLLER_THROTTLER_VR_MVDD,
+  I2C_CONTROLLER_THROTTLER_LIQUID0,
+  I2C_CONTROLLER_THROTTLER_LIQUID1,  
+  I2C_CONTROLLER_THROTTLER_PLX,
+  I2C_CONTROLLER_THROTTLER_COUNT,  
+} I2cControllerThrottler_e;
+
+typedef enum {
+  I2C_CONTROLLER_PROTOCOL_VR_0,
+  I2C_CONTROLLER_PROTOCOL_VR_1,
+  I2C_CONTROLLER_PROTOCOL_TMP_0,
+  I2C_CONTROLLER_PROTOCOL_TMP_1,
+  I2C_CONTROLLER_PROTOCOL_SPARE_0,
+  I2C_CONTROLLER_PROTOCOL_SPARE_1,
+  I2C_CONTROLLER_PROTOCOL_COUNT,  
+} I2cControllerProtocol_e;
+
+typedef struct {
+  uint8_t   Enabled;
+  uint8_t   Speed;
+  uint8_t   Padding[2];
+  uint32_t  SlaveAddress;
+  uint8_t   ControllerPort;
+  uint8_t   ControllerName;
+  uint8_t   ThermalThrotter;
+  uint8_t   I2cProtocol;
+} I2cControllerConfig_t;
+
+typedef enum {
+  I2C_PORT_SVD_SCL = 0,  
+  I2C_PORT_GPIO,      
+} I2cPort_e; 
+
+typedef enum {
+  I2C_SPEED_FAST_50K = 0,      //50  Kbits/s
+  I2C_SPEED_FAST_100K,         //100 Kbits/s
+  I2C_SPEED_FAST_400K,         //400 Kbits/s
+  I2C_SPEED_FAST_PLUS_1M,      //1   Mbits/s (in fast mode)
+  I2C_SPEED_HIGH_1M,           //1   Mbits/s (in high speed mode)
+  I2C_SPEED_HIGH_2M,           //2.3 Mbits/s  
+  I2C_SPEED_COUNT,  
+} I2cSpeed_e;
+
+typedef enum {
+  I2C_CMD_READ = 0,
+  I2C_CMD_WRITE,
+  I2C_CMD_COUNT,  
+} I2cCmdType_e;
+
+#define CMDCONFIG_STOP_BIT      0
+#define CMDCONFIG_RESTART_BIT   1
+
+#define CMDCONFIG_STOP_MASK     (1 << CMDCONFIG_STOP_BIT)
+#define CMDCONFIG_RESTART_MASK  (1 << CMDCONFIG_RESTART_BIT)
+
+typedef struct {
+  uint8_t RegisterAddr; ////only valid for write, ignored for read
+  uint8_t Cmd;  //Read(0) or Write(1) 
+  uint8_t Data;  //Return data for read. Data to send for write
+  uint8_t CmdConfig; //Includes whether associated command should have a stop or restart command
+} SwI2cCmd_t; //SW I2C Command Table
+
+typedef struct {
+  uint8_t     I2CcontrollerPort; //CKSVII2C0(0) or //CKSVII2C1(1)
+  uint8_t     I2CSpeed;          //Slow(0) or Fast(1)
+  uint16_t    SlaveAddress;
+  uint8_t     NumCmds;           //Number of commands
+  uint8_t     Padding[3];
+
+  SwI2cCmd_t  SwI2cCmds[MAX_SW_I2C_COMMANDS];
+
+  uint32_t     MmHubPadding[8]; // SMU internal use
+  
+} SwI2cRequest_t; // SW I2C Request Table
+
+//THis is aligned with RSMU PGFSM Register Mapping
+typedef enum {
+  PG_DYNAMIC_MODE = 0,
+  PG_STATIC_MODE,
+} PowerGatingMode_e;
+
+//This is aligned with RSMU PGFSM Register Mapping
+typedef enum {
+  PG_POWER_DOWN = 0,
+  PG_POWER_UP,
+} PowerGatingSettings_e;
+
+typedef struct {            
+  uint32_t a;  // store in IEEE float format in this variable
+  uint32_t b;  // store in IEEE float format in this variable
+  uint32_t c;  // store in IEEE float format in this variable
+} QuadraticInt_t;
+
+typedef struct {            
+  uint32_t m;  // store in IEEE float format in this variable
+  uint32_t b;  // store in IEEE float format in this variable
+} LinearInt_t;
+
+typedef struct {            
+  uint32_t a;  // store in IEEE float format in this variable
+  uint32_t b;  // store in IEEE float format in this variable
+  uint32_t c;  // store in IEEE float format in this variable
+} DroopInt_t;
+
+typedef enum {
+  GFXCLK_SOURCE_PLL = 0, 
+  GFXCLK_SOURCE_DFLL, 
+  GFXCLK_SOURCE_COUNT, 
+} GfxclkSrc_e; 
+
+//Only Clks that have DPM descriptors are listed here 
+typedef enum {
+  PPCLK_GFXCLK = 0,
+  PPCLK_SOCCLK,
+  PPCLK_UCLK,
+  PPCLK_DCLK,
+  PPCLK_VCLK,
+  PPCLK_DCEFCLK,
+  PPCLK_DISPCLK,
+  PPCLK_PIXCLK,
+  PPCLK_PHYCLK,
+  PPCLK_COUNT,
+} PPCLK_e;
+
+typedef enum {
+  POWER_SOURCE_AC,
+  POWER_SOURCE_DC,
+  POWER_SOURCE_COUNT,
+} POWER_SOURCE_e;
+
+typedef enum  {
+  PPT_THROTTLER_PPT0,
+  PPT_THROTTLER_PPT1,
+  PPT_THROTTLER_PPT2,
+  PPT_THROTTLER_PPT3,       
+  PPT_THROTTLER_COUNT
+} PPT_THROTTLER_e;
+
+typedef enum {
+  VOLTAGE_MODE_AVFS = 0,
+  VOLTAGE_MODE_AVFS_SS,
+  VOLTAGE_MODE_SS,
+  VOLTAGE_MODE_COUNT,
+} VOLTAGE_MODE_e;
+
+
+typedef enum {
+  AVFS_VOLTAGE_GFX = 0,
+  AVFS_VOLTAGE_SOC,
+  AVFS_VOLTAGE_COUNT,
+} AVFS_VOLTAGE_TYPE_e;
+
+typedef enum {
+  UCLK_DIV_BY_1 = 0,
+  UCLK_DIV_BY_2,
+  UCLK_DIV_BY_4,
+  UCLK_DIV_BY_8,
+} UCLK_DIV_e;
+
+typedef enum {
+  GPIO_INT_POLARITY_ACTIVE_LOW = 0,
+  GPIO_INT_POLARITY_ACTIVE_HIGH,
+} GpioIntPolarity_e;
+
+typedef enum {
+  MEMORY_TYPE_GDDR6 = 0,
+  MEMORY_TYPE_HBM,
+} MemoryType_e;
+
+typedef enum {
+  PWR_CONFIG_TDP = 0,
+  PWR_CONFIG_TGP,
+  PWR_CONFIG_TCP_ESTIMATED,
+  PWR_CONFIG_TCP_MEASURED,
+} PwrConfig_e;
+
+typedef struct {
+  uint8_t        VoltageMode;         // 0 - AVFS only, 1- min(AVFS,SS), 2-SS only
+  uint8_t        SnapToDiscrete;      // 0 - Fine grained DPM, 1 - Discrete DPM
+  uint8_t        NumDiscreteLevels;   // Set to 2 (Fmin, Fmax) when using fine grained DPM, otherwise set to # discrete levels used
+  uint8_t        Padding;         
+  LinearInt_t    ConversionToAvfsClk; // Transfer function to AVFS Clock (GHz->GHz)
+  QuadraticInt_t SsCurve;             // Slow-slow curve (GHz->V)
+} DpmDescriptor_t;
+
+typedef enum  {
+  TEMP_EDGE,
+  TEMP_HOTSPOT,
+  TEMP_MEM,
+  TEMP_VR_GFX,
+  TEMP_VR_MEM0,
+  TEMP_VR_MEM1,
+  TEMP_VR_SOC,  
+  TEMP_LIQUID0,
+  TEMP_LIQUID1,  
+  TEMP_PLX,
+  TEMP_COUNT
+} TEMP_e;
+
+//Out of band monitor status defines
+//see SPEC //gpu/doc/soc_arch/spec/feature/SMBUS/SMBUS.xlsx
+#define POWER_MANAGER_CONTROLLER_NOT_RUNNING 0
+#define POWER_MANAGER_CONTROLLER_RUNNING     1
+
+#define POWER_MANAGER_CONTROLLER_BIT                             0
+#define MAXIMUM_DPM_STATE_GFX_ENGINE_RESTRICTED_BIT              8
+#define GPU_DIE_TEMPERATURE_THROTTLING_BIT                       9
+#define HBM_DIE_TEMPERATURE_THROTTLING_BIT                       10
+#define TGP_THROTTLING_BIT                                       11
+#define PCC_THROTTLING_BIT                                       12
+#define HBM_TEMPERATURE_EXCEEDING_TEMPERATURE_LIMIT_BIT          13
+#define HBM_TEMPERATURE_EXCEEDING_MAX_MEMORY_TEMPERATURE_BIT     14
+
+#define POWER_MANAGER_CONTROLLER_MASK                            (1 << POWER_MANAGER_CONTROLLER_BIT                        ) 
+#define MAXIMUM_DPM_STATE_GFX_ENGINE_RESTRICTED_MASK             (1 << MAXIMUM_DPM_STATE_GFX_ENGINE_RESTRICTED_BIT         )
+#define GPU_DIE_TEMPERATURE_THROTTLING_MASK                      (1 << GPU_DIE_TEMPERATURE_THROTTLING_BIT                  ) 
+#define HBM_DIE_TEMPERATURE_THROTTLING_MASK                      (1 << HBM_DIE_TEMPERATURE_THROTTLING_BIT                  )
+#define TGP_THROTTLING_MASK                                      (1 << TGP_THROTTLING_BIT                                  )
+#define PCC_THROTTLING_MASK                                      (1 << PCC_THROTTLING_BIT                                  )
+#define HBM_TEMPERATURE_EXCEEDING_TEMPERATURE_LIMIT_MASK         (1 << HBM_TEMPERATURE_EXCEEDING_TEMPERATURE_LIMIT_BIT     )
+#define HBM_TEMPERATURE_EXCEEDING_MAX_MEMORY_TEMPERATURE_MASK    (1 << HBM_TEMPERATURE_EXCEEDING_MAX_MEMORY_TEMPERATURE_BIT) 
+
+//This structure to be DMA to SMBUS Config register space
+typedef struct {
+  uint8_t  MinorInfoVersion;
+  uint8_t  MajorInfoVersion;
+  uint8_t  TableSize;
+  uint8_t  Reserved;
+
+  uint8_t  Reserved1;
+  uint8_t  RevID;
+  uint16_t DeviceID;
+
+  uint16_t DieTemperatureLimit;
+  uint16_t FanTargetTemperature;
+
+  uint16_t MemoryTemperatureLimit;
+  uint16_t Reserved2;
+
+  uint16_t TGP;
+  uint16_t Reserved3;
+
+  uint32_t DieTemperatureRegisterOffset;
+
+  uint32_t Reserved4;
+  
+  uint32_t Reserved5;
+
+  uint32_t Status;
+
+  uint16_t DieTemperature;
+  uint16_t MemoryTemperature;
+
+  uint32_t     MmHubPadding[8]; // SMU internal use  
+} OutOfBandMonitor_t;
+
+typedef struct {
+  uint32_t Version;
+
+  // SECTION: Feature Enablement
+  uint32_t FeaturesToRun[2];
+
+  // SECTION: Infrastructure Limits
+  uint16_t SocketPowerLimitAc[PPT_THROTTLER_COUNT];
+  uint16_t SocketPowerLimitAcTau[PPT_THROTTLER_COUNT];
+  uint16_t SocketPowerLimitDc[PPT_THROTTLER_COUNT];
+  uint16_t SocketPowerLimitDcTau[PPT_THROTTLER_COUNT];  
+
+  uint16_t TdcLimitSoc;             // Amps
+  uint16_t TdcLimitSocTau;          // Time constant of LPF in ms
+  uint16_t TdcLimitGfx;             // Amps
+  uint16_t TdcLimitGfxTau;          // Time constant of LPF in ms
+  
+  uint16_t TedgeLimit;              // Celcius
+  uint16_t ThotspotLimit;           // Celcius
+  uint16_t TmemLimit;               // Celcius
+  uint16_t Tvr_gfxLimit;            // Celcius
+  uint16_t Tvr_mem0Limit;           // Celcius
+  uint16_t Tvr_mem1Limit;           // Celcius  
+  uint16_t Tvr_socLimit;            // Celcius
+  uint16_t Tliquid0Limit;           // Celcius
+  uint16_t Tliquid1Limit;           // Celcius
+  uint16_t TplxLimit;               // Celcius
+  uint32_t FitLimit;                // Failures in time (failures per million parts over the defined lifetime)
+
+  uint16_t PpmPowerLimit;           // Switch this this power limit when temperature is above PpmTempThreshold
+  uint16_t PpmTemperatureThreshold;
+  
+  // SECTION: Throttler settings
+  uint32_t ThrottlerControlMask;   // See Throtter masks defines
+
+  // SECTION: FW DSTATE Settings  
+  uint32_t FwDStateMask;           // See FW DState masks defines
+
+  // SECTION: ULV Settings
+  uint16_t  UlvVoltageOffsetSoc; // In mV(Q2)
+  uint16_t  UlvVoltageOffsetGfx; // In mV(Q2)
+
+  uint8_t   GceaLinkMgrIdleThreshold;        //Set by SMU FW during enablment of SOC_ULV. Controls delay for GFX SDP port disconnection during idle events
+  uint8_t   paddingRlcUlvParams[3];
+  
+  uint8_t  UlvSmnclkDid;     //DID for ULV mode. 0 means CLK will not be modified in ULV.
+  uint8_t  UlvMp1clkDid;     //DID for ULV mode. 0 means CLK will not be modified in ULV.
+  uint8_t  UlvGfxclkBypass;  // 1 to turn off/bypass Gfxclk during ULV, 0 to leave Gfxclk on during ULV
+  uint8_t  Padding234;
+
+  uint16_t     MinVoltageUlvGfx; // In mV(Q2)  Minimum Voltage ("Vmin") of VDD_GFX in ULV mode 
+  uint16_t     MinVoltageUlvSoc; // In mV(Q2)  Minimum Voltage ("Vmin") of VDD_SOC in ULV mode
+
+
+  // SECTION: Voltage Control Parameters
+  uint16_t     MinVoltageGfx;     // In mV(Q2) Minimum Voltage ("Vmin") of VDD_GFX
+  uint16_t     MinVoltageSoc;     // In mV(Q2) Minimum Voltage ("Vmin") of VDD_SOC
+  uint16_t     MaxVoltageGfx;     // In mV(Q2) Maximum Voltage allowable of VDD_GFX
+  uint16_t     MaxVoltageSoc;     // In mV(Q2) Maximum Voltage allowable of VDD_SOC
+
+  uint16_t     LoadLineResistanceGfx;   // In mOhms with 8 fractional bits
+  uint16_t     LoadLineResistanceSoc;   // In mOhms with 8 fractional bits
+
+  //SECTION: DPM Config 1
+  DpmDescriptor_t DpmDescriptor[PPCLK_COUNT];
+
+  uint16_t       FreqTableGfx      [NUM_GFXCLK_DPM_LEVELS  ];     // In MHz
+  uint16_t       FreqTableVclk     [NUM_VCLK_DPM_LEVELS    ];     // In MHz
+  uint16_t       FreqTableDclk     [NUM_DCLK_DPM_LEVELS    ];     // In MHz
+  uint16_t       FreqTableSocclk   [NUM_SOCCLK_DPM_LEVELS  ];     // In MHz
+  uint16_t       FreqTableUclk     [NUM_UCLK_DPM_LEVELS    ];     // In MHz
+  uint16_t       FreqTableDcefclk  [NUM_DCEFCLK_DPM_LEVELS ];     // In MHz
+  uint16_t       FreqTableDispclk  [NUM_DISPCLK_DPM_LEVELS ];     // In MHz
+  uint16_t       FreqTablePixclk   [NUM_PIXCLK_DPM_LEVELS  ];     // In MHz
+  uint16_t       FreqTablePhyclk   [NUM_PHYCLK_DPM_LEVELS  ];     // In MHz
+  uint32_t       Paddingclks[16];
+
+  uint16_t       DcModeMaxFreq     [PPCLK_COUNT            ];     // In MHz
+  uint16_t       Padding8_Clks;
+  
+  uint8_t        FreqTableUclkDiv  [NUM_UCLK_DPM_LEVELS    ];     // 0:Div-1, 1:Div-1/2, 2:Div-1/4, 3:Div-1/8
+
+  // SECTION: DPM Config 2
+  uint16_t       Mp0clkFreq        [NUM_MP0CLK_DPM_LEVELS];       // in MHz
+  uint16_t       Mp0DpmVoltage     [NUM_MP0CLK_DPM_LEVELS];       // mV(Q2)
+  uint16_t       MemVddciVoltage   [NUM_UCLK_DPM_LEVELS];         // mV(Q2)
+  uint16_t       MemMvddVoltage    [NUM_UCLK_DPM_LEVELS];         // mV(Q2)
+  // GFXCLK DPM
+  uint16_t        GfxclkFgfxoffEntry;   // in Mhz
+  uint16_t        GfxclkFinit;          // in Mhz 
+  uint16_t        GfxclkFidle;          // in MHz
+  uint16_t        GfxclkSlewRate;       // for PLL babystepping???
+  uint16_t        GfxclkFopt;           // in Mhz
+  uint8_t         Padding567[2]; 
+  uint16_t        GfxclkDsMaxFreq;      // in MHz
+  uint8_t         GfxclkSource;         // 0 = PLL, 1 = DFLL
+  uint8_t         Padding456;
+
+  // UCLK section
+  uint8_t      LowestUclkReservedForUlv; // Set this to 1 if UCLK DPM0 is reserved for ULV-mode only
+  uint8_t      paddingUclk[3];
+  
+  uint8_t      MemoryType;          // 0-GDDR6, 1-HBM
+  uint8_t      MemoryChannels;
+  uint8_t      PaddingMem[2];
+
+  // Link DPM Settings
+  uint8_t      PcieGenSpeed[NUM_LINK_LEVELS];           ///< 0:PciE-gen1 1:PciE-gen2 2:PciE-gen3 3:PciE-gen4
+  uint8_t      PcieLaneCount[NUM_LINK_LEVELS];          ///< 1=x1, 2=x2, 3=x4, 4=x8, 5=x12, 6=x16
+  uint16_t     LclkFreq[NUM_LINK_LEVELS];              
+
+  // GFXCLK Thermal DPM (formerly 'Boost' Settings)
+  uint16_t     EnableTdpm;      
+  uint16_t     TdpmHighHystTemperature;
+  uint16_t     TdpmLowHystTemperature;
+  uint16_t     GfxclkFreqHighTempLimit; // High limit on GFXCLK when temperature is high, for reliability.
+ 
+  // SECTION: Fan Control
+  uint16_t     FanStopTemp;          //Celcius
+  uint16_t     FanStartTemp;         //Celcius
+
+  uint16_t     FanGainEdge;
+  uint16_t     FanGainHotspot;
+  uint16_t     FanGainLiquid0;
+  uint16_t     FanGainLiquid1;  
+  uint16_t     FanGainVrGfx;
+  uint16_t     FanGainVrSoc;
+  uint16_t     FanGainVrMem0;
+  uint16_t     FanGainVrMem1;  
+  uint16_t     FanGainPlx;
+  uint16_t     FanGainMem;
+  uint16_t     FanPwmMin;
+  uint16_t     FanAcousticLimitRpm;
+  uint16_t     FanThrottlingRpm;
+  uint16_t     FanMaximumRpm;
+  uint16_t     FanTargetTemperature;
+  uint16_t     FanTargetGfxclk;
+  uint8_t      FanTempInputSelect;
+  uint8_t      FanPadding;
+  uint8_t      FanZeroRpmEnable; 
+  uint8_t      FanTachEdgePerRev;
+  //uint8_t      padding8_Fan[2];
+    
+  // The following are AFC override parameters. Leave at 0 to use FW defaults.
+  int16_t      FuzzyFan_ErrorSetDelta;
+  int16_t      FuzzyFan_ErrorRateSetDelta;
+  int16_t      FuzzyFan_PwmSetDelta;
+  uint16_t     FuzzyFan_Reserved;
+
+
+  // SECTION: AVFS 
+  // Overrides
+  uint8_t           OverrideAvfsGb[AVFS_VOLTAGE_COUNT];
+  uint8_t           Padding8_Avfs[2];
+
+  QuadraticInt_t    qAvfsGb[AVFS_VOLTAGE_COUNT];              // GHz->V Override of fused curve 
+  DroopInt_t        dBtcGbGfxPll;         // GHz->V BtcGb
+  DroopInt_t        dBtcGbGfxDfll;        // GHz->V BtcGb
+  DroopInt_t        dBtcGbSoc;            // GHz->V BtcGb
+  LinearInt_t       qAgingGb[AVFS_VOLTAGE_COUNT];          // GHz->V 
+
+  QuadraticInt_t    qStaticVoltageOffset[AVFS_VOLTAGE_COUNT]; // GHz->V 
+
+  uint16_t          DcTol[AVFS_VOLTAGE_COUNT];            // mV Q2
+
+  uint8_t           DcBtcEnabled[AVFS_VOLTAGE_COUNT];
+  uint8_t           Padding8_GfxBtc[2];
+
+  uint16_t          DcBtcMin[AVFS_VOLTAGE_COUNT];       // mV Q2
+  uint16_t          DcBtcMax[AVFS_VOLTAGE_COUNT];       // mV Q2
+
+  // SECTION: Advanced Options
+  uint32_t          DebugOverrides;
+  QuadraticInt_t    ReservedEquation0; 
+  QuadraticInt_t    ReservedEquation1; 
+  QuadraticInt_t    ReservedEquation2; 
+  QuadraticInt_t    ReservedEquation3; 
+  
+  // Total Power configuration, use defines from PwrConfig_e
+  uint8_t      TotalPowerConfig;    //0-TDP, 1-TGP, 2-TCP Estimated, 3-TCP Measured
+  uint8_t      TotalPowerSpare1;  
+  uint16_t     TotalPowerSpare2;
+
+  // APCC Settings
+  uint16_t     PccThresholdLow;
+  uint16_t     PccThresholdHigh;
+  uint32_t     PaddingAPCC[6];  //FIXME pending SPEC
+
+  // Temperature Dependent Vmin
+  uint16_t     VDDGFX_TVmin;       //Celcius
+  uint16_t     VDDSOC_TVmin;       //Celcius
+  uint16_t     VDDGFX_Vmin_HiTemp; // mV Q2
+  uint16_t     VDDGFX_Vmin_LoTemp; // mV Q2
+  uint16_t     VDDSOC_Vmin_HiTemp; // mV Q2
+  uint16_t     VDDSOC_Vmin_LoTemp; // mV Q2
+  
+  uint16_t     VDDGFX_TVminHystersis; // Celcius
+  uint16_t     VDDSOC_TVminHystersis; // Celcius
+
+  // BTC Setting
+  uint32_t     BtcConfig;
+  
+  // SECTION: Board Reserved
+  uint32_t     Reserved[14];
+
+  // SECTION: BOARD PARAMETERS
+  // I2C Control
+  I2cControllerConfig_t  I2cControllers[NUM_I2C_CONTROLLERS];     
+
+  // SVI2 Board Parameters
+  uint16_t     MaxVoltageStepGfx; // In mV(Q2) Max voltage step that SMU will request. Multiple steps are taken if voltage change exceeds this value.
+  uint16_t     MaxVoltageStepSoc; // In mV(Q2) Max voltage step that SMU will request. Multiple steps are taken if voltage change exceeds this value.
+  
+  uint8_t      VddGfxVrMapping;   // Use VR_MAPPING* bitfields
+  uint8_t      VddSocVrMapping;   // Use VR_MAPPING* bitfields
+  uint8_t      VddMem0VrMapping;  // Use VR_MAPPING* bitfields
+  uint8_t      VddMem1VrMapping;  // Use VR_MAPPING* bitfields
+
+  uint8_t      GfxUlvPhaseSheddingMask; // set this to 1 to set PSI0/1 to 1 in ULV mode
+  uint8_t      SocUlvPhaseSheddingMask; // set this to 1 to set PSI0/1 to 1 in ULV mode
+  uint8_t      ExternalSensorPresent; // External RDI connected to TMON (aka TEMP IN)
+  uint8_t      Padding8_V; 
+
+  // Telemetry Settings
+  uint16_t     GfxMaxCurrent;   // in Amps
+  int8_t       GfxOffset;       // in Amps
+  uint8_t      Padding_TelemetryGfx;
+
+  uint16_t     SocMaxCurrent;   // in Amps
+  int8_t       SocOffset;       // in Amps
+  uint8_t      Padding_TelemetrySoc;
+
+  uint16_t     Mem0MaxCurrent;   // in Amps
+  int8_t       Mem0Offset;       // in Amps
+  uint8_t      Padding_TelemetryMem0;
+  
+  uint16_t     Mem1MaxCurrent;   // in Amps
+  int8_t       Mem1Offset;       // in Amps
+  uint8_t      Padding_TelemetryMem1;
+  
+  // GPIO Settings
+  uint8_t      AcDcGpio;        // GPIO pin configured for AC/DC switching
+  uint8_t      AcDcPolarity;    // GPIO polarity for AC/DC switching
+  uint8_t      VR0HotGpio;      // GPIO pin configured for VR0 HOT event
+  uint8_t      VR0HotPolarity;  // GPIO polarity for VR0 HOT event
+
+  uint8_t      VR1HotGpio;      // GPIO pin configured for VR1 HOT event 
+  uint8_t      VR1HotPolarity;  // GPIO polarity for VR1 HOT event 
+  uint8_t      GthrGpio;        // GPIO pin configured for GTHR Event
+  uint8_t      GthrPolarity;    // replace GPIO polarity for GTHR
+
+  // LED Display Settings
+  uint8_t      LedPin0;         // GPIO number for LedPin[0]
+  uint8_t      LedPin1;         // GPIO number for LedPin[1]
+  uint8_t      LedPin2;         // GPIO number for LedPin[2]
+  uint8_t      padding8_4;
+ 
+  // GFXCLK PLL Spread Spectrum
+  uint8_t      PllGfxclkSpreadEnabled;   // on or off
+  uint8_t      PllGfxclkSpreadPercent;   // Q4.4
+  uint16_t     PllGfxclkSpreadFreq;      // kHz
+
+  // GFXCLK DFLL Spread Spectrum
+  uint8_t      DfllGfxclkSpreadEnabled;   // on or off
+  uint8_t      DfllGfxclkSpreadPercent;   // Q4.4
+  uint16_t     DfllGfxclkSpreadFreq;      // kHz
+  
+  // UCLK Spread Spectrum
+  uint8_t      UclkSpreadEnabled;   // on or off
+  uint8_t      UclkSpreadPercent;   // Q4.4
+  uint16_t     UclkSpreadFreq;      // kHz
+
+  // SOCCLK Spread Spectrum
+  uint8_t      SoclkSpreadEnabled;   // on or off
+  uint8_t      SocclkSpreadPercent;   // Q4.4
+  uint16_t     SocclkSpreadFreq;      // kHz
+
+  // Total board power
+  uint16_t     TotalBoardPower;     //Only needed for TCP Estimated case, where TCP = TGP+Total Board Power
+  uint16_t     BoardPadding; 
+
+  // Mvdd Svi2 Div Ratio Setting
+  uint32_t MvddRatio; // This is used for MVDD Vid workaround. It has 16 fractional bits (Q16.16)
+  
+  uint32_t     BoardReserved[9];
+
+  // Padding for MMHUB - do not modify this
+  uint32_t     MmHubPadding[8]; // SMU internal use
+
+} PPTable_t;
+
+typedef struct {
+  // Time constant parameters for clock averages in ms
+  uint16_t     GfxclkAverageLpfTau;
+  uint16_t     SocclkAverageLpfTau;
+  uint16_t     UclkAverageLpfTau;
+  uint16_t     GfxActivityLpfTau;
+  uint16_t     UclkActivityLpfTau;
+
+  uint16_t     Padding;  
+  // Padding - ignore
+  uint32_t     MmHubPadding[8]; // SMU internal use
+} DriverSmuConfig_t;
+
+typedef struct {
+  
+  uint16_t      GfxclkFmin;           // MHz
+  uint16_t      GfxclkFmax;           // MHz
+  uint16_t      GfxclkFreq1;          // MHz
+  uint16_t      GfxclkVolt1;          // mV (Q2)
+  uint16_t      GfxclkFreq2;          // MHz
+  uint16_t      GfxclkVolt2;          // mV (Q2)
+  uint16_t      GfxclkFreq3;          // MHz
+  uint16_t      GfxclkVolt3;          // mV (Q2)
+  uint16_t      UclkFmax;             // MHz
+  int16_t       OverDrivePct;         // %
+  uint16_t      FanMaximumRpm;
+  uint16_t      FanMinimumPwm;
+  uint16_t      FanTargetTemperature; // Degree Celcius 
+  uint16_t      MaxOpTemp;            // Degree Celcius
+  uint16_t      FanZeroRpmEnable;
+  uint16_t      Padding;
+
+  uint32_t     MmHubPadding[8]; // SMU internal use  
+
+} OverDriveTable_t; 
+
+typedef struct {
+  uint16_t CurrClock[PPCLK_COUNT];
+  uint16_t AverageGfxclkFrequency;
+  uint16_t AverageSocclkFrequency;
+  uint16_t AverageUclkFrequency  ;
+  uint16_t AverageGfxActivity    ;
+  uint16_t AverageUclkActivity   ;
+  uint8_t  CurrSocVoltageOffset  ;
+  uint8_t  CurrGfxVoltageOffset  ;
+  uint8_t  CurrMemVidOffset      ;
+  uint8_t  Padding8              ;
+  uint16_t CurrSocketPower       ;
+  uint16_t TemperatureEdge       ;
+  uint16_t TemperatureHotspot    ;
+  uint16_t TemperatureMem        ;
+  uint16_t TemperatureVrGfx      ;
+  uint16_t TemperatureVrMem0     ;
+  uint16_t TemperatureVrMem1     ;  
+  uint16_t TemperatureVrSoc      ;  
+  uint16_t TemperatureLiquid0    ;
+  uint16_t TemperatureLiquid1    ;  
+  uint16_t TemperaturePlx        ;
+  uint16_t Padding16             ;
+  uint32_t ThrottlerStatus       ; 
+ 
+  uint8_t  LinkDpmLevel;
+  uint8_t  Padding[3];
+
+  // Padding - ignore
+  uint32_t     MmHubPadding[8]; // SMU internal use
+} SmuMetrics_t;
+
+typedef struct {
+  uint16_t MinClock; // This is either DCEFCLK or SOCCLK (in MHz)
+  uint16_t MaxClock; // This is either DCEFCLK or SOCCLK (in MHz)
+  uint16_t MinUclk;
+  uint16_t MaxUclk;
+  
+  uint8_t  WmSetting;
+  uint8_t  Padding[3];
+
+  uint32_t     MmHubPadding[8]; // SMU internal use  
+} WatermarkRowGeneric_t;
+
+#define NUM_WM_RANGES 4
+
+typedef enum {
+  WM_SOCCLK = 0,
+  WM_DCEFCLK,
+  WM_COUNT,
+} WM_CLOCK_e;
+
+typedef struct {
+  // Watermarks
+  WatermarkRowGeneric_t WatermarkRow[WM_COUNT][NUM_WM_RANGES];
+
+  uint32_t     MmHubPadding[8]; // SMU internal use
+} Watermarks_t;
+
+typedef struct {
+  uint16_t avgPsmCount[36];
+  uint16_t minPsmCount[36];
+  float    avgPsmVoltage[36]; 
+  float    minPsmVoltage[36];
+
+  uint32_t     MmHubPadding[8]; // SMU internal use
+} AvfsDebugTable_t;
+
+typedef struct {
+  uint8_t  AvfsVersion;
+  uint8_t  Padding;
+
+  uint8_t  AvfsEn[AVFS_VOLTAGE_COUNT];
+  
+  uint8_t  OverrideVFT[AVFS_VOLTAGE_COUNT];
+  uint8_t  OverrideAvfsGb[AVFS_VOLTAGE_COUNT];
+
+  uint8_t  OverrideTemperatures[AVFS_VOLTAGE_COUNT];
+  uint8_t  OverrideVInversion[AVFS_VOLTAGE_COUNT];
+  uint8_t  OverrideP2V[AVFS_VOLTAGE_COUNT];
+  uint8_t  OverrideP2VCharzFreq[AVFS_VOLTAGE_COUNT];
+
+  int32_t VFT0_m1[AVFS_VOLTAGE_COUNT]; // Q8.24
+  int32_t VFT0_m2[AVFS_VOLTAGE_COUNT]; // Q12.12
+  int32_t VFT0_b[AVFS_VOLTAGE_COUNT];  // Q32
+
+  int32_t VFT1_m1[AVFS_VOLTAGE_COUNT]; // Q8.16
+  int32_t VFT1_m2[AVFS_VOLTAGE_COUNT]; // Q12.12
+  int32_t VFT1_b[AVFS_VOLTAGE_COUNT];  // Q32
+
+  int32_t VFT2_m1[AVFS_VOLTAGE_COUNT]; // Q8.16
+  int32_t VFT2_m2[AVFS_VOLTAGE_COUNT]; // Q12.12
+  int32_t VFT2_b[AVFS_VOLTAGE_COUNT];  // Q32
+
+  int32_t AvfsGb0_m1[AVFS_VOLTAGE_COUNT]; // Q8.24
+  int32_t AvfsGb0_m2[AVFS_VOLTAGE_COUNT]; // Q12.12
+  int32_t AvfsGb0_b[AVFS_VOLTAGE_COUNT];  // Q32
+
+  int32_t AcBtcGb_m1[AVFS_VOLTAGE_COUNT]; // Q8.24
+  int32_t AcBtcGb_m2[AVFS_VOLTAGE_COUNT]; // Q12.12
+  int32_t AcBtcGb_b[AVFS_VOLTAGE_COUNT];  // Q32
+
+  uint32_t AvfsTempCold[AVFS_VOLTAGE_COUNT];
+  uint32_t AvfsTempMid[AVFS_VOLTAGE_COUNT];
+  uint32_t AvfsTempHot[AVFS_VOLTAGE_COUNT];
+
+  uint32_t VInversion[AVFS_VOLTAGE_COUNT]; // in mV with 2 fractional bits
+
+
+  int32_t P2V_m1[AVFS_VOLTAGE_COUNT]; // Q8.24
+  int32_t P2V_m2[AVFS_VOLTAGE_COUNT]; // Q12.12
+  int32_t P2V_b[AVFS_VOLTAGE_COUNT];  // Q32
+
+  uint32_t P2VCharzFreq[AVFS_VOLTAGE_COUNT]; // in 10KHz units
+
+  uint32_t EnabledAvfsModules[2]; //NV10 - 36 AVFS modules
+
+  uint32_t     MmHubPadding[8]; // SMU internal use
+} AvfsFuseOverride_t;
+
+typedef struct {
+
+  uint8_t   Gfx_ActiveHystLimit;
+  uint8_t   Gfx_IdleHystLimit;
+  uint8_t   Gfx_FPS;
+  uint8_t   Gfx_MinActiveFreqType;
+  uint8_t   Gfx_BoosterFreqType; 
+  uint8_t   Gfx_MinFreqStep;                // Minimum delta between current and target frequency in order for FW to change clock.
+  uint16_t  Gfx_MinActiveFreq;              // MHz
+  uint16_t  Gfx_BoosterFreq;                // MHz
+  uint16_t  Gfx_PD_Data_time_constant;      // Time constant of PD controller in ms
+  uint32_t  Gfx_PD_Data_limit_a;            // Q16
+  uint32_t  Gfx_PD_Data_limit_b;            // Q16
+  uint32_t  Gfx_PD_Data_limit_c;            // Q16
+  uint32_t  Gfx_PD_Data_error_coeff;        // Q16
+  uint32_t  Gfx_PD_Data_error_rate_coeff;   // Q16
+  
+  uint8_t   Soc_ActiveHystLimit;
+  uint8_t   Soc_IdleHystLimit;
+  uint8_t   Soc_FPS;
+  uint8_t   Soc_MinActiveFreqType;
+  uint8_t   Soc_BoosterFreqType; 
+  uint8_t   Soc_MinFreqStep;                // Minimum delta between current and target frequency in order for FW to change clock.
+  uint16_t  Soc_MinActiveFreq;              // MHz
+  uint16_t  Soc_BoosterFreq;                // MHz
+  uint16_t  Soc_PD_Data_time_constant;      // Time constant of PD controller in ms
+  uint32_t  Soc_PD_Data_limit_a;            // Q16
+  uint32_t  Soc_PD_Data_limit_b;            // Q16
+  uint32_t  Soc_PD_Data_limit_c;            // Q16
+  uint32_t  Soc_PD_Data_error_coeff;        // Q16
+  uint32_t  Soc_PD_Data_error_rate_coeff;   // Q16
+  
+  uint8_t   Mem_ActiveHystLimit;
+  uint8_t   Mem_IdleHystLimit;
+  uint8_t   Mem_FPS;
+  uint8_t   Mem_MinActiveFreqType;
+  uint8_t   Mem_BoosterFreqType;
+  uint8_t   Mem_MinFreqStep;                // Minimum delta between current and target frequency in order for FW to change clock.
+  uint16_t  Mem_MinActiveFreq;              // MHz
+  uint16_t  Mem_BoosterFreq;                // MHz
+  uint16_t  Mem_PD_Data_time_constant;      // Time constant of PD controller in ms
+  uint32_t  Mem_PD_Data_limit_a;            // Q16
+  uint32_t  Mem_PD_Data_limit_b;            // Q16
+  uint32_t  Mem_PD_Data_limit_c;            // Q16
+  uint32_t  Mem_PD_Data_error_coeff;        // Q16
+  uint32_t  Mem_PD_Data_error_rate_coeff;   // Q16
+
+  uint32_t  Mem_UpThreshold_Limit;          // Q16
+  uint8_t   Mem_UpHystLimit;
+  uint8_t   Mem_DownHystLimit;
+  uint16_t  Mem_Fps;
+
+  uint32_t     MmHubPadding[8]; // SMU internal use  
+
+} DpmActivityMonitorCoeffInt_t;
+
+
+// Workload bits
+#define WORKLOAD_PPLIB_DEFAULT_BIT        0 
+#define WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT 1 
+#define WORKLOAD_PPLIB_POWER_SAVING_BIT   2 
+#define WORKLOAD_PPLIB_VIDEO_BIT          3 
+#define WORKLOAD_PPLIB_VR_BIT             4 
+#define WORKLOAD_PPLIB_COMPUTE_BIT        5 
+#define WORKLOAD_PPLIB_CUSTOM_BIT         6 
+#define WORKLOAD_PPLIB_COUNT              7 
+
+
+// These defines are used with the following messages:
+// SMC_MSG_TransferTableDram2Smu
+// SMC_MSG_TransferTableSmu2Dram
+
+// Table transfer status
+#define TABLE_TRANSFER_OK         0x0
+#define TABLE_TRANSFER_FAILED     0xFF
+
+// Table types
+#define TABLE_PPTABLE                 0
+#define TABLE_WATERMARKS              1
+#define TABLE_AVFS                    2
+#define TABLE_AVFS_PSM_DEBUG          3
+#define TABLE_AVFS_FUSE_OVERRIDE      4
+#define TABLE_PMSTATUSLOG             5
+#define TABLE_SMU_METRICS             6
+#define TABLE_DRIVER_SMU_CONFIG       7
+#define TABLE_ACTIVITY_MONITOR_COEFF  8
+#define TABLE_OVERDRIVE               9
+#define TABLE_I2C_COMMANDS           10
+#define TABLE_PACE                   11
+#define TABLE_COUNT                  12
+
+//RLC Pace Table total number of levels
+#define RLC_PACE_TABLE_NUM_LEVELS 16
+#define RLC_PACE_RATIO_NUM_LEVELS 8
+
+typedef struct {
+  uint8_t ByteRatioLow;
+  uint8_t FlopsRatioLow;
+  uint8_t ByteRatioHigh;
+  uint8_t FlopsRatioHigh;
+} RlcPaceFlopsPerByte_t;
+
+typedef struct {
+  RlcPaceFlopsPerByte_t FlopsPerByteTable[RLC_PACE_RATIO_NUM_LEVELS];
+  
+  uint32_t     MmHubPadding[8]; // SMU internal use  
+} RlcPaceFlopsPerByteOverride_t;
+
+// These defines are used with the SMC_MSG_SetUclkFastSwitch message.
+#define UCLK_SWITCH_SLOW 0
+#define UCLK_SWITCH_FAST 1
+#endif
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index d2eeb6240484..376b10f0b2d9 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -27,7 +27,7 @@
 #include "atomfirmware.h"
 #include "amdgpu_atomfirmware.h"
 #include "smu_v11_0.h"
-#include "smu11_driver_if.h"
+#include "smu_11_0_driver_if.h"
 #include "soc15_common.h"
 #include "atom.h"
 #include "vega20_ppt.h"
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 143/459] drm/amd/powerplay: fix the mp/smuio header for navi10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (41 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 142/459] drm/amd/powerplay: update smu 11 driver if header for navi10 Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 144/459] drm/amd/powerplay: introduce the navi10 pptable implementation Alex Deucher
                     ` (49 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

SMU11 should use mp11 and smuio11 headers.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 376b10f0b2d9..26644168d58c 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -35,11 +35,11 @@
 
 #include "asic_reg/thm/thm_11_0_2_offset.h"
 #include "asic_reg/thm/thm_11_0_2_sh_mask.h"
-#include "asic_reg/mp/mp_9_0_offset.h"
-#include "asic_reg/mp/mp_9_0_sh_mask.h"
+#include "asic_reg/mp/mp_11_0_offset.h"
+#include "asic_reg/mp/mp_11_0_sh_mask.h"
 #include "asic_reg/nbio/nbio_7_4_offset.h"
-#include "asic_reg/smuio/smuio_9_0_offset.h"
-#include "asic_reg/smuio/smuio_9_0_sh_mask.h"
+#include "asic_reg/smuio/smuio_11_0_0_offset.h"
+#include "asic_reg/smuio/smuio_11_0_0_sh_mask.h"
 
 MODULE_FIRMWARE("amdgpu/vega20_smc.bin");
 
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 144/459] drm/amd/powerplay: introduce the navi10 pptable implementation
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (42 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 143/459] drm/amd/powerplay: fix the mp/smuio " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 145/459] drm/amd/powerplay: set smu v11 funcs for navi10 Alex Deucher
                     ` (48 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch introduces the navi10 pptable implementation. So far it already
covers firmware loading, pptable side loading, writing back to the SMC, and
feature mask enabling.
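
For illustration, a minimal sketch of how the SMU core side is expected to
consult this per-ASIC table when sending a message; the caller below is
hypothetical, only navi10_get_smu_msg_index() and the get_smu_msg_index hook
come from this patch:

/* Hypothetical caller: translate a generic SMU_MSG_* index into the
 * navi10-specific PPSMC index before it is handed to the firmware. */
static int example_resolve_msg(struct smu_context *smu, uint32_t generic_msg)
{
	int index = smu->ppt_funcs->get_smu_msg_index(smu, generic_msg);

	if (index < 0)
		return index;	/* message not wired up on this ASIC */

	/* 'index' is now a PPSMC_MSG_* value the MP1 firmware understands */
	return index;
}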

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/Makefile     |   2 +-
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 303 +++++++++++++++++++++
 drivers/gpu/drm/amd/powerplay/navi10_ppt.h |  28 ++
 3 files changed, 332 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/powerplay/navi10_ppt.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/navi10_ppt.h

diff --git a/drivers/gpu/drm/amd/powerplay/Makefile b/drivers/gpu/drm/amd/powerplay/Makefile
index ec87b3430d12..727c5cff231c 100644
--- a/drivers/gpu/drm/amd/powerplay/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/Makefile
@@ -35,7 +35,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/powerplay/,$(
 
 include $(AMD_POWERPLAY)
 
-POWER_MGR = amd_powerplay.o amdgpu_smu.o smu_v11_0.o vega20_ppt.o
+POWER_MGR = amd_powerplay.o amdgpu_smu.o smu_v11_0.o vega20_ppt.o navi10_ppt.o
 
 AMD_PP_POWER = $(addprefix $(AMD_PP_PATH)/,$(POWER_MGR))
 
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
new file mode 100644
index 000000000000..283b655a17df
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -0,0 +1,303 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "pp_debug.h"
+#include <linux/firmware.h>
+#include "amdgpu.h"
+#include "amdgpu_smu.h"
+#include "atomfirmware.h"
+#include "amdgpu_atomfirmware.h"
+#include "smu_v11_0.h"
+#include "smu_11_0_driver_if.h"
+#include "soc15_common.h"
+#include "atom.h"
+#include "navi10_ppt.h"
+#include "smu_v11_0_pptable.h"
+#include "smu_v11_0_ppsmc.h"
+
+#define MSG_MAP(msg, index) \
+	[SMU_MSG_##msg] = index
+
+static int navi10_message_map[SMU_MSG_MAX_COUNT] = {
+	MSG_MAP(TestMessage,			PPSMC_MSG_TestMessage),
+	MSG_MAP(GetSmuVersion,			PPSMC_MSG_GetSmuVersion),
+	MSG_MAP(GetDriverIfVersion,		PPSMC_MSG_GetDriverIfVersion),
+	MSG_MAP(SetAllowedFeaturesMaskLow,	PPSMC_MSG_SetAllowedFeaturesMaskLow),
+	MSG_MAP(SetAllowedFeaturesMaskHigh,	PPSMC_MSG_SetAllowedFeaturesMaskHigh),
+	MSG_MAP(EnableAllSmuFeatures,		PPSMC_MSG_EnableAllSmuFeatures),
+	MSG_MAP(DisableAllSmuFeatures,		PPSMC_MSG_DisableAllSmuFeatures),
+	MSG_MAP(EnableSmuFeaturesLow,		PPSMC_MSG_EnableSmuFeaturesLow),
+	MSG_MAP(EnableSmuFeaturesHigh,		PPSMC_MSG_EnableSmuFeaturesHigh),
+	MSG_MAP(DisableSmuFeaturesLow,		PPSMC_MSG_DisableSmuFeaturesLow),
+	MSG_MAP(DisableSmuFeaturesHigh,		PPSMC_MSG_DisableSmuFeaturesHigh),
+	MSG_MAP(GetEnabledSmuFeaturesLow,	PPSMC_MSG_GetEnabledSmuFeaturesLow),
+	MSG_MAP(GetEnabledSmuFeaturesHigh,	PPSMC_MSG_GetEnabledSmuFeaturesHigh),
+	MSG_MAP(SetWorkloadMask,		PPSMC_MSG_SetWorkloadMask),
+	MSG_MAP(SetPptLimit,			PPSMC_MSG_SetPptLimit),
+	MSG_MAP(SetDriverDramAddrHigh,		PPSMC_MSG_SetDriverDramAddrHigh),
+	MSG_MAP(SetDriverDramAddrLow,		PPSMC_MSG_SetDriverDramAddrLow),
+	MSG_MAP(SetToolsDramAddrHigh,		PPSMC_MSG_SetToolsDramAddrHigh),
+	MSG_MAP(SetToolsDramAddrLow,		PPSMC_MSG_SetToolsDramAddrLow),
+	MSG_MAP(TransferTableSmu2Dram,		PPSMC_MSG_TransferTableSmu2Dram),
+	MSG_MAP(TransferTableDram2Smu,		PPSMC_MSG_TransferTableDram2Smu),
+	MSG_MAP(UseDefaultPPTable,		PPSMC_MSG_UseDefaultPPTable),
+	MSG_MAP(UseBackupPPTable,		PPSMC_MSG_UseBackupPPTable),
+	MSG_MAP(RunBtc,				PPSMC_MSG_RunBtc),
+	MSG_MAP(EnterBaco,			PPSMC_MSG_EnterBaco),
+	MSG_MAP(SetSoftMinByFreq,		PPSMC_MSG_SetSoftMinByFreq),
+	MSG_MAP(SetSoftMaxByFreq,		PPSMC_MSG_SetSoftMaxByFreq),
+	MSG_MAP(SetHardMinByFreq,		PPSMC_MSG_SetHardMinByFreq),
+	MSG_MAP(SetHardMaxByFreq,		PPSMC_MSG_SetHardMaxByFreq),
+	MSG_MAP(GetMinDpmFreq,			PPSMC_MSG_GetMinDpmFreq),
+	MSG_MAP(GetMaxDpmFreq,			PPSMC_MSG_GetMaxDpmFreq),
+	MSG_MAP(GetDpmFreqByIndex,		PPSMC_MSG_GetDpmFreqByIndex),
+	MSG_MAP(SetMemoryChannelConfig,		PPSMC_MSG_SetMemoryChannelConfig),
+	MSG_MAP(SetGeminiMode,			PPSMC_MSG_SetGeminiMode),
+	MSG_MAP(SetGeminiApertureHigh,		PPSMC_MSG_SetGeminiApertureHigh),
+	MSG_MAP(SetGeminiApertureLow,		PPSMC_MSG_SetGeminiApertureLow),
+	MSG_MAP(OverridePcieParameters,		PPSMC_MSG_OverridePcieParameters),
+	MSG_MAP(SetMinDeepSleepDcefclk,		PPSMC_MSG_SetMinDeepSleepDcefclk),
+	MSG_MAP(ReenableAcDcInterrupt,		PPSMC_MSG_ReenableAcDcInterrupt),
+	MSG_MAP(NotifyPowerSource,		PPSMC_MSG_NotifyPowerSource),
+	MSG_MAP(SetUclkFastSwitch,		PPSMC_MSG_SetUclkFastSwitch),
+	MSG_MAP(SetVideoFps,			PPSMC_MSG_SetVideoFps),
+	MSG_MAP(PrepareMp1ForUnload,		PPSMC_MSG_PrepareMp1ForUnload),
+	MSG_MAP(DramLogSetDramAddrHigh,		PPSMC_MSG_DramLogSetDramAddrHigh),
+	MSG_MAP(DramLogSetDramAddrLow,		PPSMC_MSG_DramLogSetDramAddrLow),
+	MSG_MAP(DramLogSetDramSize,		PPSMC_MSG_DramLogSetDramSize),
+	MSG_MAP(ConfigureGfxDidt,		PPSMC_MSG_ConfigureGfxDidt),
+	MSG_MAP(NumOfDisplays,			PPSMC_MSG_NumOfDisplays),
+	MSG_MAP(SetSystemVirtualDramAddrHigh,	PPSMC_MSG_SetSystemVirtualDramAddrHigh),
+	MSG_MAP(SetSystemVirtualDramAddrLow,	PPSMC_MSG_SetSystemVirtualDramAddrLow),
+	MSG_MAP(AllowGfxOff,			PPSMC_MSG_AllowGfxOff),
+	MSG_MAP(DisallowGfxOff,			PPSMC_MSG_DisallowGfxOff),
+	MSG_MAP(GetPptLimit,			PPSMC_MSG_GetPptLimit),
+	MSG_MAP(GetDcModeMaxDpmFreq,		PPSMC_MSG_GetDcModeMaxDpmFreq),
+	MSG_MAP(GetDebugData,			PPSMC_MSG_GetDebugData),
+	MSG_MAP(ExitBaco,			PPSMC_MSG_ExitBaco),
+	MSG_MAP(PrepareMp1ForReset,		PPSMC_MSG_PrepareMp1ForReset),
+	MSG_MAP(PrepareMp1ForShutdown,		PPSMC_MSG_PrepareMp1ForShutdown),
+};
+
+static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
+{
+	if (index > SMU_MSG_MAX_COUNT || index > PPSMC_Message_Count)
+		return -EINVAL;
+	return navi10_message_map[index];
+
+}
+
+static int
+navi10_get_unallowed_feature_mask(struct smu_context *smu,
+				  uint32_t *feature_mask, uint32_t num)
+{
+	if (num > 2)
+		return -EINVAL;
+
+	feature_mask[0] = 0x0C677844;
+	feature_mask[1] = 0xFFFFFF28; /* bit32~bit63 is Unsupported */
+
+	return 0;
+}
+
+static int navi10_check_powerplay_table(struct smu_context *smu)
+{
+	return 0;
+}
+
+static int navi10_append_powerplay_table(struct smu_context *smu)
+{
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *smc_pptable = table_context->driver_pptable;
+	struct atom_smc_dpm_info_v4_5 *smc_dpm_table;
+	int index, ret;
+
+	index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+					   smc_dpm_info);
+
+	ret = smu_get_atom_data_table(smu, index, NULL, NULL, NULL,
+				      (uint8_t **)&smc_dpm_table);
+	if (ret)
+		return ret;
+
+	memcpy(smc_pptable->I2cControllers, smc_dpm_table->I2cControllers,
+	       sizeof(I2cControllerConfig_t) * NUM_I2C_CONTROLLERS);
+
+	/* SVI2 Board Parameters */
+	smc_pptable->MaxVoltageStepGfx = smc_dpm_table->MaxVoltageStepGfx;
+	smc_pptable->MaxVoltageStepSoc = smc_dpm_table->MaxVoltageStepSoc;
+	smc_pptable->VddGfxVrMapping = smc_dpm_table->VddGfxVrMapping;
+	smc_pptable->VddSocVrMapping = smc_dpm_table->VddSocVrMapping;
+	smc_pptable->VddMem0VrMapping = smc_dpm_table->VddMem0VrMapping;
+	smc_pptable->VddMem1VrMapping = smc_dpm_table->VddMem1VrMapping;
+	smc_pptable->GfxUlvPhaseSheddingMask = smc_dpm_table->GfxUlvPhaseSheddingMask;
+	smc_pptable->SocUlvPhaseSheddingMask = smc_dpm_table->SocUlvPhaseSheddingMask;
+	smc_pptable->ExternalSensorPresent = smc_dpm_table->ExternalSensorPresent;
+	smc_pptable->Padding8_V = smc_dpm_table->Padding8_V;
+
+	/* Telemetry Settings */
+	smc_pptable->GfxMaxCurrent = smc_dpm_table->GfxMaxCurrent;
+	smc_pptable->GfxOffset = smc_dpm_table->GfxOffset;
+	smc_pptable->Padding_TelemetryGfx = smc_dpm_table->Padding_TelemetryGfx;
+	smc_pptable->SocMaxCurrent = smc_dpm_table->SocMaxCurrent;
+	smc_pptable->SocOffset = smc_dpm_table->SocOffset;
+	smc_pptable->Padding_TelemetrySoc = smc_dpm_table->Padding_TelemetrySoc;
+	smc_pptable->Mem0MaxCurrent = smc_dpm_table->Mem0MaxCurrent;
+	smc_pptable->Mem0Offset = smc_dpm_table->Mem0Offset;
+	smc_pptable->Padding_TelemetryMem0 = smc_dpm_table->Padding_TelemetryMem0;
+	smc_pptable->Mem1MaxCurrent = smc_dpm_table->Mem1MaxCurrent;
+	smc_pptable->Mem1Offset = smc_dpm_table->Mem1Offset;
+	smc_pptable->Padding_TelemetryMem1 = smc_dpm_table->Padding_TelemetryMem1;
+
+	/* GPIO Settings */
+	smc_pptable->AcDcGpio = smc_dpm_table->AcDcGpio;
+	smc_pptable->AcDcPolarity = smc_dpm_table->AcDcPolarity;
+	smc_pptable->VR0HotGpio = smc_dpm_table->VR0HotGpio;
+	smc_pptable->VR0HotPolarity = smc_dpm_table->VR0HotPolarity;
+	smc_pptable->VR1HotGpio = smc_dpm_table->VR1HotGpio;
+	smc_pptable->VR1HotPolarity = smc_dpm_table->VR1HotPolarity;
+	smc_pptable->GthrGpio = smc_dpm_table->GthrGpio;
+	smc_pptable->GthrPolarity = smc_dpm_table->GthrPolarity;
+
+	/* LED Display Settings */
+	smc_pptable->LedPin0 = smc_dpm_table->LedPin0;
+	smc_pptable->LedPin1 = smc_dpm_table->LedPin1;
+	smc_pptable->LedPin2 = smc_dpm_table->LedPin2;
+	smc_pptable->padding8_4 = smc_dpm_table->padding8_4;
+
+	/* GFXCLK PLL Spread Spectrum */
+	smc_pptable->PllGfxclkSpreadEnabled = smc_dpm_table->PllGfxclkSpreadEnabled;
+	smc_pptable->PllGfxclkSpreadPercent = smc_dpm_table->PllGfxclkSpreadPercent;
+	smc_pptable->PllGfxclkSpreadFreq = smc_dpm_table->PllGfxclkSpreadFreq;
+
+	/* GFXCLK DFLL Spread Spectrum */
+	smc_pptable->DfllGfxclkSpreadEnabled = smc_dpm_table->DfllGfxclkSpreadEnabled;
+	smc_pptable->DfllGfxclkSpreadPercent = smc_dpm_table->DfllGfxclkSpreadPercent;
+	smc_pptable->DfllGfxclkSpreadFreq = smc_dpm_table->DfllGfxclkSpreadFreq;
+
+	/* UCLK Spread Spectrum */
+	smc_pptable->UclkSpreadEnabled = smc_dpm_table->UclkSpreadEnabled;
+	smc_pptable->UclkSpreadPercent = smc_dpm_table->UclkSpreadPercent;
+	smc_pptable->UclkSpreadFreq = smc_dpm_table->UclkSpreadFreq;
+
+	/* SOCCLK Spread Spectrum */
+	smc_pptable->SoclkSpreadEnabled = smc_dpm_table->SoclkSpreadEnabled;
+	smc_pptable->SocclkSpreadPercent = smc_dpm_table->SocclkSpreadPercent;
+	smc_pptable->SocclkSpreadFreq = smc_dpm_table->SocclkSpreadFreq;
+
+	/* Total board power */
+	smc_pptable->TotalBoardPower = smc_dpm_table->TotalBoardPower;
+	smc_pptable->BoardPadding = smc_dpm_table->BoardPadding;
+
+	/* Mvdd Svi2 Div Ratio Setting */
+	smc_pptable->MvddRatio = smc_dpm_table->MvddRatio;
+
+	return 0;
+}
+
+static int navi10_store_powerplay_table(struct smu_context *smu)
+{
+	struct smu_11_0_powerplay_table *powerplay_table = NULL;
+	struct smu_table_context *table_context = &smu->smu_table;
+
+	if (!table_context->power_play_table)
+		return -EINVAL;
+
+	powerplay_table = table_context->power_play_table;
+
+	memcpy(table_context->driver_pptable, &powerplay_table->smc_pptable,
+	       sizeof(PPTable_t));
+
+	return 0;
+}
+
+static int navi10_allocate_dpm_context(struct smu_context *smu)
+{
+	struct smu_dpm_context *smu_dpm = &smu->smu_dpm;
+
+	if (smu_dpm->dpm_context)
+		return -EINVAL;
+
+	smu_dpm->dpm_context = kzalloc(sizeof(struct smu_11_0_dpm_context),
+				       GFP_KERNEL);
+	if (!smu_dpm->dpm_context)
+		return -ENOMEM;
+
+	smu_dpm->dpm_context_size = sizeof(struct smu_11_0_dpm_context);
+
+	return 0;
+}
+
+static int navi10_set_default_dpm_table(struct smu_context *smu)
+{
+	struct smu_dpm_context *smu_dpm = &smu->smu_dpm;
+	struct smu_table_context *table_context = &smu->smu_table;
+	struct smu_11_0_dpm_context *dpm_context = smu_dpm->dpm_context;
+	PPTable_t *driver_ppt = NULL;
+
+	driver_ppt = table_context->driver_pptable;
+
+	dpm_context->dpm_tables.soc_table.min = driver_ppt->FreqTableSocclk[0];
+	dpm_context->dpm_tables.soc_table.max = driver_ppt->FreqTableSocclk[NUM_SOCCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.gfx_table.min = driver_ppt->FreqTableGfx[0];
+	dpm_context->dpm_tables.gfx_table.max = driver_ppt->FreqTableGfx[NUM_GFXCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.uclk_table.min = driver_ppt->FreqTableUclk[0];
+	dpm_context->dpm_tables.uclk_table.max = driver_ppt->FreqTableUclk[NUM_UCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.vclk_table.min = driver_ppt->FreqTableVclk[0];
+	dpm_context->dpm_tables.vclk_table.max = driver_ppt->FreqTableVclk[NUM_VCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.dclk_table.min = driver_ppt->FreqTableDclk[0];
+	dpm_context->dpm_tables.dclk_table.max = driver_ppt->FreqTableDclk[NUM_DCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.dcef_table.min = driver_ppt->FreqTableDcefclk[0];
+	dpm_context->dpm_tables.dcef_table.max = driver_ppt->FreqTableDcefclk[NUM_DCEFCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.pixel_table.min = driver_ppt->FreqTablePixclk[0];
+	dpm_context->dpm_tables.pixel_table.max = driver_ppt->FreqTablePixclk[NUM_PIXCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.display_table.min = driver_ppt->FreqTableDispclk[0];
+	dpm_context->dpm_tables.display_table.max = driver_ppt->FreqTableDispclk[NUM_DISPCLK_DPM_LEVELS - 1];
+
+	dpm_context->dpm_tables.phy_table.min = driver_ppt->FreqTablePhyclk[0];
+	dpm_context->dpm_tables.phy_table.max = driver_ppt->FreqTablePhyclk[NUM_PHYCLK_DPM_LEVELS - 1];
+
+	return 0;
+}
+
+static const struct pptable_funcs navi10_ppt_funcs = {
+	.alloc_dpm_context = navi10_allocate_dpm_context,
+	.store_powerplay_table = navi10_store_powerplay_table,
+	.check_powerplay_table = navi10_check_powerplay_table,
+	.append_powerplay_table = navi10_append_powerplay_table,
+	.get_smu_msg_index = navi10_get_smu_msg_index,
+	.get_unallowed_feature_mask = navi10_get_unallowed_feature_mask,
+	.set_default_dpm_table = navi10_set_default_dpm_table,
+};
+
+void navi10_set_ppt_funcs(struct smu_context *smu)
+{
+	smu->ppt_funcs = &navi10_ppt_funcs;
+}
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.h b/drivers/gpu/drm/amd/powerplay/navi10_ppt.h
new file mode 100644
index 000000000000..957288e22f47
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.h
@@ -0,0 +1,28 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef __NAVI10_PPT_H__
+#define __NAVI10_PPT_H__
+
+extern void navi10_set_ppt_funcs(struct smu_context *smu);
+
+#endif
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 145/459] drm/amd/powerplay: set smu v11 funcs for navi10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (43 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 144/459] drm/amd/powerplay: introduce the navi10 pptable implementation Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 146/459] drm/amd/powerplay: add navi10 smc ucode init and navi10 ppt functions setting Alex Deucher
                     ` (47 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

Navi10 also uses the smu v11 functions.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 3026c7e2d3ea..a3a6099ab8cd 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -330,6 +330,7 @@ static int smu_set_funcs(struct amdgpu_device *adev)
 
 	switch (adev->asic_type) {
 	case CHIP_VEGA20:
+	case CHIP_NAVI10:
 		adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
 		if (adev->pm.pp_feature & PP_OVERDRIVE_MASK)
 			smu->od_enabled = true;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 146/459] drm/amd/powerplay: add navi10 smc ucode init and navi10 ppt functions setting
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (44 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 145/459] drm/amd/powerplay: set smu v11 funcs for navi10 Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 147/459] drm/amd/powerplay: move bootup value before read pptable from vbios Alex Deucher
                     ` (46 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch adds the navi10 smc ucode init and the navi10 ppt function setup.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 26644168d58c..4dcbf6ee7e8e 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -31,6 +31,7 @@
 #include "soc15_common.h"
 #include "atom.h"
 #include "vega20_ppt.h"
+#include "navi10_ppt.h"
 #include "pp_thermal.h"
 
 #include "asic_reg/thm/thm_11_0_2_offset.h"
@@ -165,6 +166,9 @@ static int smu_v11_0_init_microcode(struct smu_context *smu)
 	case CHIP_VEGA20:
 		chip_name = "vega20";
 		break;
+	case CHIP_NAVI10:
+		chip_name = "navi10";
+		break;
 	default:
 		BUG();
 	}
@@ -2096,6 +2100,9 @@ void smu_v11_0_set_smu_funcs(struct smu_context *smu)
 	case CHIP_VEGA20:
 		vega20_set_ppt_funcs(smu);
 		break;
+	case CHIP_NAVI10:
+		navi10_set_ppt_funcs(smu);
+		break;
 	default:
 		pr_warn("Unknown asic for smu11\n");
 	}
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 147/459] drm/amd/powerplay: move bootup value before read pptable from vbios
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (45 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 146/459] drm/amd/powerplay: add navi10 smc ucode init and navi10 ppt functions setting Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 148/459] drm/amd/powerplay: enable backdoor smu fw loading (v2) Alex Deucher
                     ` (45 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

On navi10, we need to read the pp_table_id from the bootup values first and
then decide whether to load the soft pptable.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index a3a6099ab8cd..88fd79d5aca6 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -628,12 +628,12 @@ static int smu_smc_table_hw_init(struct smu_context *smu,
 		return ret;
 
 	if (initialize) {
-		ret = smu_read_pptable_from_vbios(smu);
+		/* get boot_values from vbios to set revision, gfxclk, and etc. */
+		ret = smu_get_vbios_bootup_values(smu);
 		if (ret)
 			return ret;
 
-		/* get boot_values from vbios to set revision, gfxclk, and etc. */
-		ret = smu_get_vbios_bootup_values(smu);
+		ret = smu_read_pptable_from_vbios(smu);
 		if (ret)
 			return ret;
 
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 148/459] drm/amd/powerplay: enable backdoor smu fw loading (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (46 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 147/459] drm/amd/powerplay: move bootup value before read pptable from vbios Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 149/459] drm/amd/powerplay: update smu11 driver if header for navi10 (v2) Alex Deucher
                     ` (44 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kenneth Feng, Hawking Zhang

From: Kenneth Feng <kenneth.feng@amd.com>

enable backdoor smu fw loading on navi10

v2: squash in define fix (Alex)

Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |  1 +
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 33 +++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index 02c965d64256..cd5e66b82ce1 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -30,6 +30,7 @@
 #define MP0_SRAM			0x03900000
 #define MP1_Public			0x03b00000
 #define MP1_SRAM			0x03c00004
+#define MP1_SMC_SIZE		0x40000
 
 /* address block */
 #define smnMP1_FIRMWARE_FLAGS		0x3010024
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 4dcbf6ee7e8e..2d55b825497f 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -207,6 +207,39 @@ static int smu_v11_0_init_microcode(struct smu_context *smu)
 
 static int smu_v11_0_load_microcode(struct smu_context *smu)
 {
+	struct amdgpu_device *adev = smu->adev;
+	const uint32_t *src;
+	const struct smc_firmware_header_v1_0 *hdr;
+	uint32_t addr_start = MP1_SRAM;
+	uint32_t i;
+	uint32_t mp1_fw_flags;
+
+	hdr = (const struct smc_firmware_header_v1_0 *)adev->pm.fw->data;
+	src = (const uint32_t *)(adev->pm.fw->data +
+		le32_to_cpu(hdr->header.ucode_array_offset_bytes));
+
+	for (i = 1; i < MP1_SMC_SIZE/4 - 1; i++) {
+		WREG32_PCIE(addr_start, src[i]);
+		addr_start += 4;
+	}
+
+	WREG32_PCIE(MP1_Public | (smnMP1_PUB_CTRL & 0xffffffff),
+		1 & MP1_SMN_PUB_CTRL__RESET_MASK);
+	WREG32_PCIE(MP1_Public | (smnMP1_PUB_CTRL & 0xffffffff),
+		1 & ~MP1_SMN_PUB_CTRL__RESET_MASK);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		mp1_fw_flags = RREG32_PCIE(MP1_Public |
+			(smnMP1_FIRMWARE_FLAGS & 0xffffffff));
+		if ((mp1_fw_flags & MP1_FIRMWARE_FLAGS__INTERRUPTS_ENABLED_MASK) >>
+			MP1_FIRMWARE_FLAGS__INTERRUPTS_ENABLED__SHIFT)
+			break;
+		udelay(1);
+	}
+
+	if (i == adev->usec_timeout)
+		return -ETIME;
+
 	return 0;
 }
 
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 149/459] drm/amd/powerplay: update smu11 driver if header for navi10 (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (47 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 148/459] drm/amd/powerplay: enable backdoor smu fw loading (v2) Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 150/459] drm/amdgpu: bump smc firmware header version to v2 (v2) Alex Deucher
                     ` (43 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch updates the smu11 driver interface (if) header for navi10 to match
the 42.09.00 smu firmware.

v2: clean up comments

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h    | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h b/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
index b98cb005a46c..a53547fa8980 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
@@ -4,7 +4,7 @@
 // *** IMPORTANT ***
 // SMU TEAM: Always increment the interface version if 
 // any structure is changed in this file
-#define SMU11_DRIVER_IF_VERSION 0x2D
+#define SMU11_DRIVER_IF_VERSION 0x2E
 
 #define PPTABLE_NV10_SMU_VERSION 8
 
@@ -297,6 +297,15 @@ typedef struct {
   
 } SwI2cRequest_t; // SW I2C Request Table
 
+//D3HOT sequences
+typedef enum {
+  BACO_SEQUENCE,
+  MSR_SEQUENCE,
+  BAMACO_SEQUENCE,
+  ULPS_SEQUENCE,
+  D3HOT_SEQUENCE_COUNT,
+} D3HOTSequence_e;
+
 //This is aligned with RSMU PGFSM Register Mapping
 typedef enum {
   PG_DYNAMIC_MODE = 0,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 150/459] drm/amdgpu: bump smc firmware header version to v2 (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (48 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 149/459] drm/amd/powerplay: update smu11 driver if header for navi10 (v2) Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 151/459] drm/amdgpu: fix the issue of checking on message mapping Alex Deucher
                     ` (42 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch bumps the smc firmware header version to v2 for storing the soft pptable.

v2: fix the typo, and add prints for v2 header
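
For reference, a minimal sketch of how a consumer locates the appended soft
pptable through the new v2 header; illustrative only, assuming fw_data points
at the start of the firmware image (e.g. adev->pm.fw->data):

const uint8_t *fw_data = adev->pm.fw->data;
const struct smc_firmware_header_v2_0 *v2 =
	(const struct smc_firmware_header_v2_0 *)fw_data;
/* the soft pptable sits ppt_offset_bytes into the same blob */
const uint8_t *ppt = fw_data + le32_to_cpu(v2->ppt_offset_bytes);
uint32_t ppt_size = le32_to_cpu(v2->ppt_size_bytes);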

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 8 ++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 8 ++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 09f384ce8cd7..7081ad9f93e3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -77,6 +77,14 @@ void amdgpu_ucode_print_smc_hdr(const struct common_firmware_header *hdr)
 			container_of(hdr, struct smc_firmware_header_v1_0, header);
 
 		DRM_DEBUG("ucode_start_addr: %u\n", le32_to_cpu(smc_hdr->ucode_start_addr));
+	} else if (version_major == 2) {
+		const struct smc_firmware_header_v1_0 *v1_hdr =
+			container_of(hdr, struct smc_firmware_header_v1_0, header);
+		const struct smc_firmware_header_v2_0 *v2_hdr =
+			container_of(v1_hdr, struct smc_firmware_header_v2_0, v1_0);
+
+		DRM_INFO("ppt_offset_bytes: %u\n", le32_to_cpu(v2_hdr->ppt_offset_bytes));
+		DRM_INFO("ppt_size_bytes: %u\n", le32_to_cpu(v2_hdr->ppt_size_bytes));
 	} else {
 		DRM_ERROR("Unknown SMC ucode version: %u.%u\n", version_major, version_minor);
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 3806a7957c6f..9b096228a02f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -49,6 +49,13 @@ struct smc_firmware_header_v1_0 {
 	uint32_t ucode_start_addr;
 };
 
+/* version_major=2, version_minor=0 */
+struct smc_firmware_header_v2_0 {
+	struct smc_firmware_header_v1_0 v1_0;
+	uint32_t ppt_offset_bytes; /* soft pptable offset */
+	uint32_t ppt_size_bytes; /* soft pptable size */
+};
+
 /* version_major=1, version_minor=0 */
 struct psp_firmware_header_v1_0 {
 	struct common_firmware_header header;
@@ -194,6 +201,7 @@ union amdgpu_firmware_header {
 	struct common_firmware_header common;
 	struct mc_firmware_header_v1_0 mc;
 	struct smc_firmware_header_v1_0 smc;
+	struct smc_firmware_header_v2_0 smc_v2_0;
 	struct psp_firmware_header_v1_0 psp;
 	struct psp_firmware_header_v1_1 psp_v1_1;
 	struct ta_firmware_header_v1_0 ta;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 151/459] drm/amdgpu: fix the issue of checking on message mapping
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (49 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 150/459] drm/amdgpu: bump smc firmware header version to v2 (v2) Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 152/459] drm/amd/powerplay: smu needs to be initialized after rlc in direct mode Alex Deucher
                     ` (41 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

The range check against PPSMC_Message_Count applies to the value read from
navi10_message_map[index], not to the incoming index; the index itself is
bounded by SMU_MSG_MAX_COUNT, since the two are distinct enum spaces.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 283b655a17df..a97072cc0396 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -101,10 +101,15 @@ static int navi10_message_map[SMU_MSG_MAX_COUNT] = {
 
 static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
-	if (index > SMU_MSG_MAX_COUNT || index > PPSMC_Message_Count)
+	int val;
+	if (index > SMU_MSG_MAX_COUNT)
 		return -EINVAL;
-	return navi10_message_map[index];
 
+	val = navi10_message_map[index];
+	if (val > PPSMC_Message_Count)
+		return -EINVAL;
+
+	return val;
 }
 
 static int
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 152/459] drm/amd/powerplay: smu needs to be initialized after rlc in direct mode
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (50 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 151/459] drm/amdgpu: fix the issue of checking on message mapping Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 153/459] drm/amd/powerplay: introduce the function to load the soft pptable for navi10 (v2) Alex Deucher
                     ` (40 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

For gfx 10, RLC firmware loading relies on the SMU firmware being loaded
first, so in direct mode the SMC ucode has to be loaded here before the RLC.
Meanwhile, the SMU initialization has to move after the RLC; otherwise, SMU
messages will fail during the handshake between the RLC and the SMU.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 88fd79d5aca6..b9b56ec1aacf 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -820,16 +820,12 @@ static int smu_hw_init(void *handle)
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 	struct smu_context *smu = &adev->smu;
 
-	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
-		ret = smu_load_microcode(smu);
-		if (ret)
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		ret = smu_check_fw_status(smu);
+		if (ret) {
+			pr_err("SMC firmware status is not correct\n");
 			return ret;
-	}
-
-	ret = smu_check_fw_status(smu);
-	if (ret) {
-		pr_err("SMC firmware status is not correct\n");
-		return ret;
+		}
 	}
 
 	mutex_lock(&smu->mutex);
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 153/459] drm/amd/powerplay: introduce the function to load the soft pptable for navi10 (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (51 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 152/459] drm/amd/powerplay: smu needs to be initialized after rlc in direct mode Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 154/459] drm/amd/powerplay: modify the feature mask to enable gfx/soc dpm Alex Deucher
                     ` (39 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

With this function the driver is able to load the soft pptable from the SMC
bin file. The soft pptable is stored at the bottom of smc.bin when the header
version is v2.

v2: remove is_fw_v2_0 flag.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  5 +++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 37 ++++++++++++++++---
 2 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 3eb1de9ecf73..2f8fe2a3d694 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -389,6 +389,11 @@ struct smu_context
 	uint32_t power_limit;
 	uint32_t default_power_limit;
 
+	/* soft pptable */
+	uint32_t ppt_offset_bytes;
+	uint32_t ppt_size_bytes;
+	uint8_t  *ppt_start_addr;
+
 	bool support_power_containment;
 	bool disable_watermark;
 
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 2d55b825497f..4311ee38a774 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -43,6 +43,7 @@
 #include "asic_reg/smuio/smuio_11_0_0_sh_mask.h"
 
 MODULE_FIRMWARE("amdgpu/vega20_smc.bin");
+MODULE_FIRMWARE("amdgpu/navi10_smc.bin");
 
 #define SMU11_TOOL_SIZE		0x19000
 #define SMU11_THERMAL_MINIMUM_ALERT_TEMP      0
@@ -152,6 +153,18 @@ smu_v11_0_send_msg_with_param(struct smu_context *smu, uint16_t msg,
 	return ret;
 }
 
+static void smu_v11_0_init_smu_ext_microcode(struct smu_context *smu)
+{
+	struct amdgpu_device *adev = smu->adev;
+	const struct smc_firmware_header_v2_0 *v2;
+
+	v2 = (const struct smc_firmware_header_v2_0 *) adev->pm.fw->data;
+
+	smu->ppt_offset_bytes = le32_to_cpu(v2->ppt_offset_bytes);
+	smu->ppt_size_bytes = le32_to_cpu(v2->ppt_size_bytes);
+	smu->ppt_start_addr = (uint8_t *)v2 + smu->ppt_offset_bytes;
+}
+
 static int smu_v11_0_init_microcode(struct smu_context *smu)
 {
 	struct amdgpu_device *adev = smu->adev;
@@ -161,6 +174,7 @@ static int smu_v11_0_init_microcode(struct smu_context *smu)
 	const struct smc_firmware_header_v1_0 *hdr;
 	const struct common_firmware_header *header;
 	struct amdgpu_firmware_info *ucode = NULL;
+	uint32_t version_major, version_minor;
 
 	switch (adev->asic_type) {
 	case CHIP_VEGA20:
@@ -186,6 +200,11 @@ static int smu_v11_0_init_microcode(struct smu_context *smu)
 	amdgpu_ucode_print_smc_hdr(&hdr->header);
 	adev->pm.fw_version = le32_to_cpu(hdr->header.ucode_version);
 
+	version_major = le16_to_cpu(hdr->header.header_version_major);
+	version_minor = le16_to_cpu(hdr->header.header_version_minor);
+	if (version_major == 2 && version_minor == 0)
+		smu_v11_0_init_smu_ext_microcode(smu); /* with soft pptable */
+
 	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
 		ucode = &adev->firmware.ucode[AMDGPU_UCODE_ID_SMC];
 		ucode->ucode_id = AMDGPU_UCODE_ID_SMC;
@@ -291,13 +310,19 @@ static int smu_v11_0_read_pptable_from_vbios(struct smu_context *smu)
 	uint8_t frev, crev;
 	void *table;
 
-	index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
-					    powerplayinfo);
+	if (smu->smu_table.boot_values.pp_table_id > 0 && smu->ppt_start_addr) {
+		/* load soft pptable */
+		table = (void *)smu->ppt_start_addr;
+		size = smu->ppt_size_bytes;
+	} else {
+		index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+						    powerplayinfo);
 
-	ret = smu_get_atom_data_table(smu, index, &size, &frev, &crev,
-				      (uint8_t **)&table);
-	if (ret)
-		return ret;
+		ret = smu_get_atom_data_table(smu, index, &size, &frev, &crev,
+					      (uint8_t **)&table);
+		if (ret)
+			return ret;
+	}
 
 	if (!smu->smu_table.power_play_table)
 		smu->smu_table.power_play_table = table;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 154/459] drm/amd/powerplay: modify the feature mask to enable gfx/soc dpm
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (52 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 153/459] drm/amd/powerplay: introduce the function to load the soft pptable for navi10 (v2) Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 155/459] drm/amd/powerplay: skip od feature on navi10 for the moment Alex Deucher
                     ` (38 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

So far, gfx/soc dpm is enabled via this feature mask setting.
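
As a rough illustration of how these words are consumed (an assumption based
on the hook's name: the core is expected to mask the unallowed bits out of
the 64-bit allowed-feature words; the snippet below is hypothetical, not code
from this patch):

uint32_t unallowed[2], allowed[2] = { 0xffffffff, 0xffffffff };

navi10_get_unallowed_feature_mask(smu, unallowed, 2);
/* every bit set in 'unallowed' is cleared; with 0xffffffe4 only the
 * features in bits 0, 1, 3 and 4 of the low word remain allowed */
allowed[0] &= ~unallowed[0];
allowed[1] &= ~unallowed[1];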

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index a97072cc0396..3327af2376d7 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -119,8 +119,8 @@ navi10_get_unallowed_feature_mask(struct smu_context *smu,
 	if (num > 2)
 		return -EINVAL;
 
-	feature_mask[0] = 0x0C677844;
-	feature_mask[1] = 0xFFFFFF28; /* bit32~bit63 is Unsupported */
+	feature_mask[0] = 0xffffffe4;
+	feature_mask[1] = 0xffffffff;	/* bit32~bit63 is Unsupported */
 
 	return 0;
 }
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 155/459] drm/amd/powerplay: skip od feature on navi10 for the moment
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (53 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 154/459] drm/amd/powerplay: modify the feature mask to enable gfx/soc dpm Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 156/459] drm/amd/powerplay: enable power features Alex Deucher
                     ` (37 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

The OD feature isn't enabled on navi10 yet, so skip it for the moment.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 4311ee38a774..7aa6cf3c0dac 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1618,6 +1618,13 @@ static int smu_v11_0_set_od8_default_settings(struct smu_context *smu,
 	struct smu_table_context *table_context = &smu->smu_table;
 	int ret;
 
+	/**
+	 * TODO: Enable overdrive for navi10, that replies on smc/pptable
+	 * support.
+	 */
+	if (smu->adev->asic_type == CHIP_NAVI10)
+		return 0;
+
 	if (initialize) {
 		if (table_context->overdrive_table)
 			return -EINVAL;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 156/459] drm/amd/powerplay: enable power features
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (54 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 155/459] drm/amd/powerplay: skip od feature on navi10 for the moment Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 157/459] drm/amd/powerplay: move the function of conv_profile_to_workload to asic file Alex Deucher
                     ` (36 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kenneth Feng, Hawking Zhang

From: Kenneth Feng <kenneth.feng@amd.com>

The SMU-related power features below can be enabled now:
1.Prefetcher
2.GFX DPM
3.SOCCLK DPM
4.MP0CLK DPM
5.LCLK DPM
6.GFX ULV
7.CG
8.PPT
9.TDC
10.GFX EDC
11.VR0HOT
12.Fan Control
13.Thermal Control
14.LED Display
15.MMHub PG
16.ATHub PG

Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 3327af2376d7..64fecbb08995 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -119,8 +119,8 @@ navi10_get_unallowed_feature_mask(struct smu_context *smu,
 	if (num > 2)
 		return -EINVAL;
 
-	feature_mask[0] = 0xffffffe4;
-	feature_mask[1] = 0xffffffff;	/* bit32~bit63 is Unsupported */
+	feature_mask[0] = 0xdc3f7f8c;
+	feature_mask[1] = 0xfffffcec;	/* bit32~bit63 is Unsupported */
 
 	return 0;
 }
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 157/459] drm/amd/powerplay: move the function of conv_profile_to_workload to asic file
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (55 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 156/459] drm/amd/powerplay: enable power features Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 158/459] drm/amd/powerplay: move the function of get[set]_power_profile " Alex Deucher
                     ` (35 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

The conv_profile_to_workload function is ASIC-specific,
so move it into the vega20_ppt file.
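
As a hedged illustration of the pattern (the myasic_* names below are hypothetical), an ASIC backend exports its converter through pptable_funcs, and common code reaches it via the smu_conv_profile_to_workload() macro added by this patch, which evaluates to 0 when the hook is left NULL:

	/* Hypothetical backend wiring, for illustration only; a real
	 * implementation maps every PP_SMC_POWER_PROFILE_* value. */
	static int myasic_conv_profile_to_workload(struct smu_context *smu,
						   int power_profile)
	{
		return power_profile == PP_SMC_POWER_PROFILE_COMPUTE ?
			WORKLOAD_PPLIB_COMPUTE_BIT : WORKLOAD_DEFAULT_BIT;
	}

	static const struct pptable_funcs myasic_ppt_funcs = {
		.conv_profile_to_workload = myasic_conv_profile_to_workload,
	};

	/* Common code then stays ASIC-agnostic:
	 *   workload_type = smu_conv_profile_to_workload(smu, i);
	 */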

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  3 ++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 37 +------------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 32 ++++++++++++++++
 3 files changed, 37 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 2f8fe2a3d694..f5305deacaab 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -451,6 +451,7 @@ struct pptable_funcs {
 					      *clocks);
 	int (*get_power_profile_mode)(struct smu_context *smu, char *buf);
 	int (*set_power_profile_mode)(struct smu_context *smu, long *input, uint32_t size);
+	int (*conv_profile_to_workload)(struct smu_context *smu, int power_profile);
 	enum amd_dpm_forced_level (*get_performance_level)(struct smu_context *smu);
 	int (*force_performance_level)(struct smu_context *smu, enum amd_dpm_forced_level level);
 	int (*pre_display_config_changed)(struct smu_context *smu);
@@ -728,6 +729,8 @@ struct smu_funcs
 	((smu)->funcs->notify_smu_enable_pwe ? (smu)->funcs->notify_smu_enable_pwe((smu)) : 0)
 #define smu_set_watermarks_for_clock_ranges(smu, clock_ranges) \
 	((smu)->funcs->set_watermarks_for_clock_ranges ? (smu)->funcs->set_watermarks_for_clock_ranges((smu), (clock_ranges)) : 0)
+#define smu_conv_profile_to_workload(smu, type) \
+	((smu)->ppt_funcs->conv_profile_to_workload ? (smu)->ppt_funcs->conv_profile_to_workload((smu), (type)) : 0)
 #define smu_dpm_set_uvd_enable(smu, enable) \
 	((smu)->funcs->dpm_set_uvd_enable ? (smu)->funcs->dpm_set_uvd_enable((smu), (enable)) : 0)
 #define smu_dpm_set_vce_enable(smu, enable) \
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 7aa6cf3c0dac..a5239504244e 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1652,37 +1652,6 @@ static int smu_v11_0_set_od8_default_settings(struct smu_context *smu,
 	return 0;
 }
 
-static int smu_v11_0_conv_power_profile_to_pplib_workload(int power_profile)
-{
-	int pplib_workload = 0;
-
-	switch (power_profile) {
-	case PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT:
-	     pplib_workload = WORKLOAD_DEFAULT_BIT;
-	     break;
-	case PP_SMC_POWER_PROFILE_FULLSCREEN3D:
-	     pplib_workload = WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT;
-	     break;
-	case PP_SMC_POWER_PROFILE_POWERSAVING:
-	     pplib_workload = WORKLOAD_PPLIB_POWER_SAVING_BIT;
-	     break;
-	case PP_SMC_POWER_PROFILE_VIDEO:
-	     pplib_workload = WORKLOAD_PPLIB_VIDEO_BIT;
-	     break;
-	case PP_SMC_POWER_PROFILE_VR:
-	     pplib_workload = WORKLOAD_PPLIB_VR_BIT;
-	     break;
-	case PP_SMC_POWER_PROFILE_COMPUTE:
-	     pplib_workload = WORKLOAD_PPLIB_COMPUTE_BIT;
-	     break;
-	case PP_SMC_POWER_PROFILE_CUSTOM:
-		pplib_workload = WORKLOAD_PPLIB_CUSTOM_BIT;
-		break;
-	}
-
-	return pplib_workload;
-}
-
 static int smu_v11_0_get_power_profile_mode(struct smu_context *smu, char *buf)
 {
 	DpmActivityMonitorCoeffInt_t activity_monitor;
@@ -1719,7 +1688,7 @@ static int smu_v11_0_get_power_profile_mode(struct smu_context *smu, char *buf)
 
 	for (i = 0; i <= PP_SMC_POWER_PROFILE_CUSTOM; i++) {
 		/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
-		workload_type = smu_v11_0_conv_power_profile_to_pplib_workload(i);
+		workload_type = smu_conv_profile_to_workload(smu, i);
 		result = smu_update_table_with_arg(smu, TABLE_ACTIVITY_MONITOR_COEFF,
 						   workload_type, &activity_monitor, false);
 		if (result) {
@@ -1868,8 +1837,7 @@ static int smu_v11_0_set_power_profile_mode(struct smu_context *smu, long *input
 	}
 
 	/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
-	workload_type =
-		smu_v11_0_conv_power_profile_to_pplib_workload(smu->power_profile_mode);
+	workload_type = smu_conv_profile_to_workload(smu, smu->power_profile_mode);
 	smu_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
 				    1 << workload_type);
 
@@ -2141,7 +2109,6 @@ static const struct smu_funcs smu_v11_0_funcs = {
 	.get_sclk = smu_v11_0_dpm_get_sclk,
 	.get_mclk = smu_v11_0_dpm_get_mclk,
 	.set_od8_default_settings = smu_v11_0_set_od8_default_settings,
-	.conv_power_profile_to_pplib_workload = smu_v11_0_conv_power_profile_to_pplib_workload,
 	.get_power_profile_mode = smu_v11_0_get_power_profile_mode,
 	.set_power_profile_mode = smu_v11_0_set_power_profile_mode,
 	.update_od8_settings = smu_v11_0_update_od8_settings,
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 62497ad66a39..3243928b6ee2 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -1496,6 +1496,37 @@ static int vega20_get_od_percentage(struct smu_context *smu,
 	return value;
 }
 
+static int vega20_conv_profile_to_workload(struct smu_context *smu, int power_profile)
+{
+	int pplib_workload = 0;
+
+	switch (power_profile) {
+	case PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT:
+		pplib_workload = WORKLOAD_DEFAULT_BIT;
+		break;
+	case PP_SMC_POWER_PROFILE_FULLSCREEN3D:
+		pplib_workload = WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT;
+		break;
+	case PP_SMC_POWER_PROFILE_POWERSAVING:
+		pplib_workload = WORKLOAD_PPLIB_POWER_SAVING_BIT;
+		break;
+	case PP_SMC_POWER_PROFILE_VIDEO:
+		pplib_workload = WORKLOAD_PPLIB_VIDEO_BIT;
+		break;
+	case PP_SMC_POWER_PROFILE_VR:
+		pplib_workload = WORKLOAD_PPLIB_VR_BIT;
+		break;
+	case PP_SMC_POWER_PROFILE_COMPUTE:
+		pplib_workload = WORKLOAD_PPLIB_COMPUTE_BIT;
+		break;
+	case PP_SMC_POWER_PROFILE_CUSTOM:
+		pplib_workload = WORKLOAD_PPLIB_CUSTOM_BIT;
+		break;
+	}
+
+	return pplib_workload;
+}
+
 static int
 vega20_get_profiling_clk_mask(struct smu_context *smu,
 			      enum amd_dpm_forced_level level,
@@ -2541,6 +2572,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.get_clock_by_type_with_latency = vega20_get_clock_by_type_with_latency,
 	.set_default_od8_settings = vega20_set_default_od8_setttings,
 	.get_od_percentage = vega20_get_od_percentage,
+	.conv_profile_to_workload = vega20_conv_profile_to_workload,
 	.get_performance_level = vega20_get_performance_level,
 	.force_performance_level = vega20_force_performance_level,
 	.update_specified_od8_value = vega20_update_specified_od8_value,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 158/459] drm/amd/powerplay: move the function of get[set]_power_profile to asic file
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (56 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 157/459] drm/amd/powerplay: move the function of conv_profile_to_workload to asic file Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 159/459] drm/amd/powerplay: move the function of uvd&vce dpm " Alex Deucher
                     ` (34 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

The get[set]_power_profile callbacks are ASIC-specific,
so move them into the vega20_ppt file.

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |   6 +-
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 194 -----------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 197 ++++++++++++++++++
 3 files changed, 199 insertions(+), 198 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index f5305deacaab..ca1c375bb101 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -537,8 +537,6 @@ struct smu_funcs
 	int (*set_od8_default_settings)(struct smu_context *smu,
 					bool initialize);
 	int (*conv_power_profile_to_pplib_workload)(int power_profile);
-	int (*get_power_profile_mode)(struct smu_context *smu, char *buf);
-	int (*set_power_profile_mode)(struct smu_context *smu, long *input, uint32_t size);
 	int (*update_od8_settings)(struct smu_context *smu,
 				   uint32_t index,
 				   uint32_t value);
@@ -663,9 +661,9 @@ struct smu_funcs
 #define smu_read_sensor(smu, sensor, data, size) \
 	((smu)->funcs->read_sensor? (smu)->funcs->read_sensor((smu), (sensor), (data), (size)) : 0)
 #define smu_get_power_profile_mode(smu, buf) \
-	((smu)->funcs->get_power_profile_mode ? (smu)->funcs->get_power_profile_mode((smu), buf) : 0)
+	((smu)->ppt_funcs->get_power_profile_mode ? (smu)->ppt_funcs->get_power_profile_mode((smu), buf) : 0)
 #define smu_set_power_profile_mode(smu, param, param_size) \
-	((smu)->funcs->set_power_profile_mode ? (smu)->funcs->set_power_profile_mode((smu), (param), (param_size)) : 0)
+	((smu)->ppt_funcs->set_power_profile_mode ? (smu)->ppt_funcs->set_power_profile_mode((smu), (param), (param_size)) : 0)
 #define smu_get_performance_level(smu) \
 	((smu)->ppt_funcs->get_performance_level ? (smu)->ppt_funcs->get_performance_level((smu)) : 0)
 #define smu_force_performance_level(smu, level) \
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index a5239504244e..f7dba32576ca 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1652,198 +1652,6 @@ static int smu_v11_0_set_od8_default_settings(struct smu_context *smu,
 	return 0;
 }
 
-static int smu_v11_0_get_power_profile_mode(struct smu_context *smu, char *buf)
-{
-	DpmActivityMonitorCoeffInt_t activity_monitor;
-	uint32_t i, size = 0;
-	uint16_t workload_type = 0;
-	static const char *profile_name[] = {
-					"BOOTUP_DEFAULT",
-					"3D_FULL_SCREEN",
-					"POWER_SAVING",
-					"VIDEO",
-					"VR",
-					"COMPUTE",
-					"CUSTOM"};
-	static const char *title[] = {
-			"PROFILE_INDEX(NAME)",
-			"CLOCK_TYPE(NAME)",
-			"FPS",
-			"UseRlcBusy",
-			"MinActiveFreqType",
-			"MinActiveFreq",
-			"BoosterFreqType",
-			"BoosterFreq",
-			"PD_Data_limit_c",
-			"PD_Data_error_coeff",
-			"PD_Data_error_rate_coeff"};
-	int result = 0;
-
-	if (!smu->pm_enabled || !buf)
-		return -EINVAL;
-
-	size += sprintf(buf + size, "%16s %s %s %s %s %s %s %s %s %s %s\n",
-			title[0], title[1], title[2], title[3], title[4], title[5],
-			title[6], title[7], title[8], title[9], title[10]);
-
-	for (i = 0; i <= PP_SMC_POWER_PROFILE_CUSTOM; i++) {
-		/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
-		workload_type = smu_conv_profile_to_workload(smu, i);
-		result = smu_update_table_with_arg(smu, TABLE_ACTIVITY_MONITOR_COEFF,
-						   workload_type, &activity_monitor, false);
-		if (result) {
-			pr_err("[%s] Failed to get activity monitor!", __func__);
-			return result;
-		}
-
-		size += sprintf(buf + size, "%2d %14s%s:\n",
-			i, profile_name[i], (i == smu->power_profile_mode) ? "*" : " ");
-
-		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
-			" ",
-			0,
-			"GFXCLK",
-			activity_monitor.Gfx_FPS,
-			activity_monitor.Gfx_UseRlcBusy,
-			activity_monitor.Gfx_MinActiveFreqType,
-			activity_monitor.Gfx_MinActiveFreq,
-			activity_monitor.Gfx_BoosterFreqType,
-			activity_monitor.Gfx_BoosterFreq,
-			activity_monitor.Gfx_PD_Data_limit_c,
-			activity_monitor.Gfx_PD_Data_error_coeff,
-			activity_monitor.Gfx_PD_Data_error_rate_coeff);
-
-		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
-			" ",
-			1,
-			"SOCCLK",
-			activity_monitor.Soc_FPS,
-			activity_monitor.Soc_UseRlcBusy,
-			activity_monitor.Soc_MinActiveFreqType,
-			activity_monitor.Soc_MinActiveFreq,
-			activity_monitor.Soc_BoosterFreqType,
-			activity_monitor.Soc_BoosterFreq,
-			activity_monitor.Soc_PD_Data_limit_c,
-			activity_monitor.Soc_PD_Data_error_coeff,
-			activity_monitor.Soc_PD_Data_error_rate_coeff);
-
-		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
-			" ",
-			2,
-			"UCLK",
-			activity_monitor.Mem_FPS,
-			activity_monitor.Mem_UseRlcBusy,
-			activity_monitor.Mem_MinActiveFreqType,
-			activity_monitor.Mem_MinActiveFreq,
-			activity_monitor.Mem_BoosterFreqType,
-			activity_monitor.Mem_BoosterFreq,
-			activity_monitor.Mem_PD_Data_limit_c,
-			activity_monitor.Mem_PD_Data_error_coeff,
-			activity_monitor.Mem_PD_Data_error_rate_coeff);
-
-		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
-			" ",
-			3,
-			"FCLK",
-			activity_monitor.Fclk_FPS,
-			activity_monitor.Fclk_UseRlcBusy,
-			activity_monitor.Fclk_MinActiveFreqType,
-			activity_monitor.Fclk_MinActiveFreq,
-			activity_monitor.Fclk_BoosterFreqType,
-			activity_monitor.Fclk_BoosterFreq,
-			activity_monitor.Fclk_PD_Data_limit_c,
-			activity_monitor.Fclk_PD_Data_error_coeff,
-			activity_monitor.Fclk_PD_Data_error_rate_coeff);
-	}
-
-	return size;
-}
-
-static int smu_v11_0_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
-{
-	DpmActivityMonitorCoeffInt_t activity_monitor;
-	int workload_type = 0, ret = 0;
-
-	smu->power_profile_mode = input[size];
-
-	if (!smu->pm_enabled)
-		return ret;
-	if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
-		pr_err("Invalid power profile mode %d\n", smu->power_profile_mode);
-		return -EINVAL;
-	}
-
-	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
-		ret = smu_update_table_with_arg(smu, TABLE_ACTIVITY_MONITOR_COEFF,
-						WORKLOAD_PPLIB_CUSTOM_BIT, &activity_monitor, false);
-		if (ret) {
-			pr_err("[%s] Failed to get activity monitor!", __func__);
-			return ret;
-		}
-
-		switch (input[0]) {
-		case 0: /* Gfxclk */
-			activity_monitor.Gfx_FPS = input[1];
-			activity_monitor.Gfx_UseRlcBusy = input[2];
-			activity_monitor.Gfx_MinActiveFreqType = input[3];
-			activity_monitor.Gfx_MinActiveFreq = input[4];
-			activity_monitor.Gfx_BoosterFreqType = input[5];
-			activity_monitor.Gfx_BoosterFreq = input[6];
-			activity_monitor.Gfx_PD_Data_limit_c = input[7];
-			activity_monitor.Gfx_PD_Data_error_coeff = input[8];
-			activity_monitor.Gfx_PD_Data_error_rate_coeff = input[9];
-			break;
-		case 1: /* Socclk */
-			activity_monitor.Soc_FPS = input[1];
-			activity_monitor.Soc_UseRlcBusy = input[2];
-			activity_monitor.Soc_MinActiveFreqType = input[3];
-			activity_monitor.Soc_MinActiveFreq = input[4];
-			activity_monitor.Soc_BoosterFreqType = input[5];
-			activity_monitor.Soc_BoosterFreq = input[6];
-			activity_monitor.Soc_PD_Data_limit_c = input[7];
-			activity_monitor.Soc_PD_Data_error_coeff = input[8];
-			activity_monitor.Soc_PD_Data_error_rate_coeff = input[9];
-			break;
-		case 2: /* Uclk */
-			activity_monitor.Mem_FPS = input[1];
-			activity_monitor.Mem_UseRlcBusy = input[2];
-			activity_monitor.Mem_MinActiveFreqType = input[3];
-			activity_monitor.Mem_MinActiveFreq = input[4];
-			activity_monitor.Mem_BoosterFreqType = input[5];
-			activity_monitor.Mem_BoosterFreq = input[6];
-			activity_monitor.Mem_PD_Data_limit_c = input[7];
-			activity_monitor.Mem_PD_Data_error_coeff = input[8];
-			activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
-			break;
-		case 3: /* Fclk */
-			activity_monitor.Fclk_FPS = input[1];
-			activity_monitor.Fclk_UseRlcBusy = input[2];
-			activity_monitor.Fclk_MinActiveFreqType = input[3];
-			activity_monitor.Fclk_MinActiveFreq = input[4];
-			activity_monitor.Fclk_BoosterFreqType = input[5];
-			activity_monitor.Fclk_BoosterFreq = input[6];
-			activity_monitor.Fclk_PD_Data_limit_c = input[7];
-			activity_monitor.Fclk_PD_Data_error_coeff = input[8];
-			activity_monitor.Fclk_PD_Data_error_rate_coeff = input[9];
-			break;
-		}
-
-		ret = smu_update_table_with_arg(smu, TABLE_ACTIVITY_MONITOR_COEFF,
-						WORKLOAD_PPLIB_COMPUTE_BIT, &activity_monitor, true);
-		if (ret) {
-			pr_err("[%s] Failed to set activity monitor!", __func__);
-			return ret;
-		}
-	}
-
-	/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
-	workload_type = smu_conv_profile_to_workload(smu, smu->power_profile_mode);
-	smu_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
-				    1 << workload_type);
-
-	return ret;
-}
-
 static int smu_v11_0_update_od8_settings(struct smu_context *smu,
 					uint32_t index,
 					uint32_t value)
@@ -2109,8 +1917,6 @@ static const struct smu_funcs smu_v11_0_funcs = {
 	.get_sclk = smu_v11_0_dpm_get_sclk,
 	.get_mclk = smu_v11_0_dpm_get_mclk,
 	.set_od8_default_settings = smu_v11_0_set_od8_default_settings,
-	.get_power_profile_mode = smu_v11_0_get_power_profile_mode,
-	.set_power_profile_mode = smu_v11_0_set_power_profile_mode,
 	.update_od8_settings = smu_v11_0_update_od8_settings,
 	.dpm_set_uvd_enable = smu_v11_0_dpm_set_uvd_enable,
 	.dpm_set_vce_enable = smu_v11_0_dpm_set_vce_enable,
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 3243928b6ee2..10a70f8c7e9b 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -1527,6 +1527,201 @@ static int vega20_conv_profile_to_workload(struct smu_context *smu, int power_pr
 	return pplib_workload;
 }
 
+static int vega20_get_power_profile_mode(struct smu_context *smu, char *buf)
+{
+	DpmActivityMonitorCoeffInt_t activity_monitor;
+	uint32_t i, size = 0;
+	uint16_t workload_type = 0;
+	static const char *profile_name[] = {
+					"BOOTUP_DEFAULT",
+					"3D_FULL_SCREEN",
+					"POWER_SAVING",
+					"VIDEO",
+					"VR",
+					"COMPUTE",
+					"CUSTOM"};
+	static const char *title[] = {
+			"PROFILE_INDEX(NAME)",
+			"CLOCK_TYPE(NAME)",
+			"FPS",
+			"UseRlcBusy",
+			"MinActiveFreqType",
+			"MinActiveFreq",
+			"BoosterFreqType",
+			"BoosterFreq",
+			"PD_Data_limit_c",
+			"PD_Data_error_coeff",
+			"PD_Data_error_rate_coeff"};
+	int result = 0;
+
+	if (!smu->pm_enabled || !buf)
+		return -EINVAL;
+
+	size += sprintf(buf + size, "%16s %s %s %s %s %s %s %s %s %s %s\n",
+			title[0], title[1], title[2], title[3], title[4], title[5],
+			title[6], title[7], title[8], title[9], title[10]);
+
+	for (i = 0; i <= PP_SMC_POWER_PROFILE_CUSTOM; i++) {
+		/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+		workload_type = smu_conv_profile_to_workload(smu, i);
+		result = smu_update_table(smu,
+					  TABLE_ACTIVITY_MONITOR_COEFF | workload_type << 16,
+					  (void *)(&activity_monitor), false);
+		if (result) {
+			pr_err("[%s] Failed to get activity monitor!", __func__);
+			return result;
+		}
+
+		size += sprintf(buf + size, "%2d %14s%s:\n",
+			i, profile_name[i], (i == smu->power_profile_mode) ? "*" : " ");
+
+		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
+			" ",
+			0,
+			"GFXCLK",
+			activity_monitor.Gfx_FPS,
+			activity_monitor.Gfx_UseRlcBusy,
+			activity_monitor.Gfx_MinActiveFreqType,
+			activity_monitor.Gfx_MinActiveFreq,
+			activity_monitor.Gfx_BoosterFreqType,
+			activity_monitor.Gfx_BoosterFreq,
+			activity_monitor.Gfx_PD_Data_limit_c,
+			activity_monitor.Gfx_PD_Data_error_coeff,
+			activity_monitor.Gfx_PD_Data_error_rate_coeff);
+
+		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
+			" ",
+			1,
+			"SOCCLK",
+			activity_monitor.Soc_FPS,
+			activity_monitor.Soc_UseRlcBusy,
+			activity_monitor.Soc_MinActiveFreqType,
+			activity_monitor.Soc_MinActiveFreq,
+			activity_monitor.Soc_BoosterFreqType,
+			activity_monitor.Soc_BoosterFreq,
+			activity_monitor.Soc_PD_Data_limit_c,
+			activity_monitor.Soc_PD_Data_error_coeff,
+			activity_monitor.Soc_PD_Data_error_rate_coeff);
+
+		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
+			" ",
+			2,
+			"UCLK",
+			activity_monitor.Mem_FPS,
+			activity_monitor.Mem_UseRlcBusy,
+			activity_monitor.Mem_MinActiveFreqType,
+			activity_monitor.Mem_MinActiveFreq,
+			activity_monitor.Mem_BoosterFreqType,
+			activity_monitor.Mem_BoosterFreq,
+			activity_monitor.Mem_PD_Data_limit_c,
+			activity_monitor.Mem_PD_Data_error_coeff,
+			activity_monitor.Mem_PD_Data_error_rate_coeff);
+
+		size += sprintf(buf + size, "%19s %d(%13s) %7d %7d %7d %7d %7d %7d %7d %7d %7d\n",
+			" ",
+			3,
+			"FCLK",
+			activity_monitor.Fclk_FPS,
+			activity_monitor.Fclk_UseRlcBusy,
+			activity_monitor.Fclk_MinActiveFreqType,
+			activity_monitor.Fclk_MinActiveFreq,
+			activity_monitor.Fclk_BoosterFreqType,
+			activity_monitor.Fclk_BoosterFreq,
+			activity_monitor.Fclk_PD_Data_limit_c,
+			activity_monitor.Fclk_PD_Data_error_coeff,
+			activity_monitor.Fclk_PD_Data_error_rate_coeff);
+	}
+
+	return size;
+}
+
+static int vega20_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
+{
+	DpmActivityMonitorCoeffInt_t activity_monitor;
+	int workload_type = 0, ret = 0;
+
+	smu->power_profile_mode = input[size];
+
+	if (!smu->pm_enabled)
+		return ret;
+	if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+		pr_err("Invalid power profile mode %d\n", smu->power_profile_mode);
+		return -EINVAL;
+	}
+
+	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+		ret = smu_update_table(smu,
+				       TABLE_ACTIVITY_MONITOR_COEFF | WORKLOAD_PPLIB_CUSTOM_BIT << 16,
+				       (void *)(&activity_monitor), false);
+		if (ret) {
+			pr_err("[%s] Failed to get activity monitor!", __func__);
+			return ret;
+		}
+
+		switch (input[0]) {
+		case 0: /* Gfxclk */
+			activity_monitor.Gfx_FPS = input[1];
+			activity_monitor.Gfx_UseRlcBusy = input[2];
+			activity_monitor.Gfx_MinActiveFreqType = input[3];
+			activity_monitor.Gfx_MinActiveFreq = input[4];
+			activity_monitor.Gfx_BoosterFreqType = input[5];
+			activity_monitor.Gfx_BoosterFreq = input[6];
+			activity_monitor.Gfx_PD_Data_limit_c = input[7];
+			activity_monitor.Gfx_PD_Data_error_coeff = input[8];
+			activity_monitor.Gfx_PD_Data_error_rate_coeff = input[9];
+			break;
+		case 1: /* Socclk */
+			activity_monitor.Soc_FPS = input[1];
+			activity_monitor.Soc_UseRlcBusy = input[2];
+			activity_monitor.Soc_MinActiveFreqType = input[3];
+			activity_monitor.Soc_MinActiveFreq = input[4];
+			activity_monitor.Soc_BoosterFreqType = input[5];
+			activity_monitor.Soc_BoosterFreq = input[6];
+			activity_monitor.Soc_PD_Data_limit_c = input[7];
+			activity_monitor.Soc_PD_Data_error_coeff = input[8];
+			activity_monitor.Soc_PD_Data_error_rate_coeff = input[9];
+			break;
+		case 2: /* Uclk */
+			activity_monitor.Mem_FPS = input[1];
+			activity_monitor.Mem_UseRlcBusy = input[2];
+			activity_monitor.Mem_MinActiveFreqType = input[3];
+			activity_monitor.Mem_MinActiveFreq = input[4];
+			activity_monitor.Mem_BoosterFreqType = input[5];
+			activity_monitor.Mem_BoosterFreq = input[6];
+			activity_monitor.Mem_PD_Data_limit_c = input[7];
+			activity_monitor.Mem_PD_Data_error_coeff = input[8];
+			activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+			break;
+		case 3: /* Fclk */
+			activity_monitor.Fclk_FPS = input[1];
+			activity_monitor.Fclk_UseRlcBusy = input[2];
+			activity_monitor.Fclk_MinActiveFreqType = input[3];
+			activity_monitor.Fclk_MinActiveFreq = input[4];
+			activity_monitor.Fclk_BoosterFreqType = input[5];
+			activity_monitor.Fclk_BoosterFreq = input[6];
+			activity_monitor.Fclk_PD_Data_limit_c = input[7];
+			activity_monitor.Fclk_PD_Data_error_coeff = input[8];
+			activity_monitor.Fclk_PD_Data_error_rate_coeff = input[9];
+			break;
+		}
+
+		ret = smu_update_table(smu,
+				       TABLE_ACTIVITY_MONITOR_COEFF | WORKLOAD_PPLIB_CUSTOM_BIT << 16,
+				       (void *)(&activity_monitor), true);
+		if (ret) {
+			pr_err("[%s] Failed to set activity monitor!", __func__);
+			return ret;
+		}
+	}
+
+	/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+	workload_type = smu_conv_profile_to_workload(smu, smu->power_profile_mode);
+	smu_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+				    1 << workload_type);
+
+	return ret;
+}
+
 static int
 vega20_get_profiling_clk_mask(struct smu_context *smu,
 			      enum amd_dpm_forced_level level,
@@ -2573,6 +2768,8 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.set_default_od8_settings = vega20_set_default_od8_setttings,
 	.get_od_percentage = vega20_get_od_percentage,
 	.conv_profile_to_workload = vega20_conv_profile_to_workload,
+	.get_power_profile_mode = vega20_get_power_profile_mode,
+	.set_power_profile_mode = vega20_set_power_profile_mode,
 	.get_performance_level = vega20_get_performance_level,
 	.force_performance_level = vega20_force_performance_level,
 	.update_specified_od8_value = vega20_update_specified_od8_value,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 159/459] drm/amd/powerplay: move the function of uvd&vce dpm to asic file
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (57 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 158/459] drm/amd/powerplay: move the function of get[set]_power_profile " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 160/459] drm/amd/powerplay: move the function of read_sensor " Alex Deucher
                     ` (33 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  8 +++----
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 24 -------------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 24 +++++++++++++++++++
 3 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index ca1c375bb101..57049af9a5a2 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -454,6 +454,8 @@ struct pptable_funcs {
 	int (*conv_profile_to_workload)(struct smu_context *smu, int power_profile);
 	enum amd_dpm_forced_level (*get_performance_level)(struct smu_context *smu);
 	int (*force_performance_level)(struct smu_context *smu, enum amd_dpm_forced_level level);
+	int (*dpm_set_uvd_enable)(struct smu_context *smu, bool enable);
+	int (*dpm_set_vce_enable)(struct smu_context *smu, bool enable);
 	int (*pre_display_config_changed)(struct smu_context *smu);
 	int (*display_config_changed)(struct smu_context *smu);
 	int (*apply_clocks_adjust_rules)(struct smu_context *smu);
@@ -540,8 +542,6 @@ struct smu_funcs
 	int (*update_od8_settings)(struct smu_context *smu,
 				   uint32_t index,
 				   uint32_t value);
-	int (*dpm_set_uvd_enable)(struct smu_context *smu, bool enable);
-	int (*dpm_set_vce_enable)(struct smu_context *smu, bool enable);
 	uint32_t (*get_sclk)(struct smu_context *smu, bool low);
 	uint32_t (*get_mclk)(struct smu_context *smu, bool low);
 	int (*get_current_rpm)(struct smu_context *smu, uint32_t *speed);
@@ -730,9 +730,9 @@ struct smu_funcs
 #define smu_conv_profile_to_workload(smu, type) \
 	((smu)->ppt_funcs->conv_profile_to_workload ? (smu)->ppt_funcs->conv_profile_to_workload((smu), (type)) : 0)
 #define smu_dpm_set_uvd_enable(smu, enable) \
-	((smu)->funcs->dpm_set_uvd_enable ? (smu)->funcs->dpm_set_uvd_enable((smu), (enable)) : 0)
+	((smu)->ppt_funcs->dpm_set_uvd_enable ? (smu)->ppt_funcs->dpm_set_uvd_enable((smu), (enable)) : 0)
 #define smu_dpm_set_vce_enable(smu, enable) \
-	((smu)->funcs->dpm_set_vce_enable ? (smu)->funcs->dpm_set_vce_enable((smu), (enable)) : 0)
+	((smu)->ppt_funcs->dpm_set_vce_enable ? (smu)->ppt_funcs->dpm_set_vce_enable((smu), (enable)) : 0)
 #define smu_get_sclk(smu, low) \
 	((smu)->funcs->get_sclk ? (smu)->funcs->get_sclk((smu), (low)) : 0)
 #define smu_get_mclk(smu, low) \
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index f7dba32576ca..2cbf270bc1e1 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1678,28 +1678,6 @@ static int smu_v11_0_update_od8_settings(struct smu_context *smu,
 	return 0;
 }
 
-static int smu_v11_0_dpm_set_uvd_enable(struct smu_context *smu, bool enable)
-{
-	if (!smu_feature_is_supported(smu, FEATURE_DPM_UVD_BIT))
-		return 0;
-
-	if (enable == smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT))
-		return 0;
-
-	return smu_feature_set_enabled(smu, FEATURE_DPM_UVD_BIT, enable);
-}
-
-static int smu_v11_0_dpm_set_vce_enable(struct smu_context *smu, bool enable)
-{
-	if (!smu_feature_is_supported(smu, FEATURE_DPM_VCE_BIT))
-		return 0;
-
-	if (enable == smu_feature_is_enabled(smu, FEATURE_DPM_VCE_BIT))
-		return 0;
-
-	return smu_feature_set_enabled(smu, FEATURE_DPM_VCE_BIT, enable);
-}
-
 static int smu_v11_0_get_current_rpm(struct smu_context *smu,
 				     uint32_t *current_rpm)
 {
@@ -1918,8 +1896,6 @@ static const struct smu_funcs smu_v11_0_funcs = {
 	.get_mclk = smu_v11_0_dpm_get_mclk,
 	.set_od8_default_settings = smu_v11_0_set_od8_default_settings,
 	.update_od8_settings = smu_v11_0_update_od8_settings,
-	.dpm_set_uvd_enable = smu_v11_0_dpm_set_uvd_enable,
-	.dpm_set_vce_enable = smu_v11_0_dpm_set_vce_enable,
 	.get_current_rpm = smu_v11_0_get_current_rpm,
 	.get_fan_control_mode = smu_v11_0_get_fan_control_mode,
 	.set_fan_control_mode = smu_v11_0_set_fan_control_mode,
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 10a70f8c7e9b..31c104233323 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -2599,6 +2599,28 @@ static int vega20_odn_edit_dpm_table(struct smu_context *smu,
 	return ret;
 }
 
+static int vega20_dpm_set_uvd_enable(struct smu_context *smu, bool enable)
+{
+	if (!smu_feature_is_supported(smu, FEATURE_DPM_UVD_BIT))
+		return 0;
+
+	if (enable == smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT))
+		return 0;
+
+	return smu_feature_set_enabled(smu, FEATURE_DPM_UVD_BIT, enable);
+}
+
+static int vega20_dpm_set_vce_enable(struct smu_context *smu, bool enable)
+{
+	if (!smu_feature_is_supported(smu, FEATURE_DPM_VCE_BIT))
+		return 0;
+
+	if (enable == smu_feature_is_enabled(smu, FEATURE_DPM_VCE_BIT))
+		return 0;
+
+	return smu_feature_set_enabled(smu, FEATURE_DPM_VCE_BIT, enable);
+}
+
 static int vega20_get_enabled_smc_features(struct smu_context *smu,
 		uint64_t *features_enabled)
 {
@@ -2775,6 +2797,8 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.update_specified_od8_value = vega20_update_specified_od8_value,
 	.set_od_percentage = vega20_set_od_percentage,
 	.od_edit_dpm_table = vega20_odn_edit_dpm_table,
+	.dpm_set_uvd_enable = vega20_dpm_set_uvd_enable,
+	.dpm_set_vce_enable = vega20_dpm_set_vce_enable,
 	.pre_display_config_changed = vega20_pre_display_config_changed,
 	.display_config_changed = vega20_display_config_changed,
 	.apply_clocks_adjust_rules = vega20_apply_clocks_adjust_rules,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 160/459] drm/amd/powerplay: move the function of read_sensor to asic file
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (58 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 159/459] drm/amd/powerplay: move the function of uvd&vce dpm " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 161/459] drm/amd/powerplay: move the function of is_dpm_running " Alex Deucher
                     ` (32 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

The read_sensor function contains ASIC-specific code,
so move that part into the ASIC file.
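
The resulting flow, sketched here with a hypothetical handle_common_sensor() helper standing in for the generic switch statement, is: smu_v11_0_read_sensor() serves the common sensors first, and only on failure does the request fall through to the new ASIC hook:

	/* Sketch of the generic-then-ASIC fallback added by this patch. */
	ret = handle_common_sensor(smu, sensor, data, size); /* hypothetical */
	if (ret) /* not a common sensor: let the ASIC backend try */
		ret = smu_asic_read_sensor(smu, sensor, data, size);
	if (ret) /* nobody could service the request */
		*size = 0;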

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  4 ++++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 12 ++++------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 23 +++++++++++++++++++
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 57049af9a5a2..fbbaf8e2a6a2 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -456,6 +456,8 @@ struct pptable_funcs {
 	int (*force_performance_level)(struct smu_context *smu, enum amd_dpm_forced_level level);
 	int (*dpm_set_uvd_enable)(struct smu_context *smu, bool enable);
 	int (*dpm_set_vce_enable)(struct smu_context *smu, bool enable);
+	int (*read_sensor)(struct smu_context *smu, enum amd_pp_sensors sensor,
+			   void *data, uint32_t *size);
 	int (*pre_display_config_changed)(struct smu_context *smu);
 	int (*display_config_changed)(struct smu_context *smu);
 	int (*apply_clocks_adjust_rules)(struct smu_context *smu);
@@ -660,6 +662,8 @@ struct smu_funcs
 	((smu)->funcs->start_thermal_control? (smu)->funcs->start_thermal_control((smu)) : 0)
 #define smu_read_sensor(smu, sensor, data, size) \
 	((smu)->funcs->read_sensor? (smu)->funcs->read_sensor((smu), (sensor), (data), (size)) : 0)
+#define smu_asic_read_sensor(smu, sensor, data, size) \
+	((smu)->ppt_funcs->read_sensor? (smu)->ppt_funcs->read_sensor((smu), (sensor), (data), (size)) : 0)
 #define smu_get_power_profile_mode(smu, buf) \
 	((smu)->ppt_funcs->get_power_profile_mode ? (smu)->ppt_funcs->get_power_profile_mode((smu), buf) : 0)
 #define smu_set_power_profile_mode(smu, param, param_size) \
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 2cbf270bc1e1..004c84223e40 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1381,14 +1381,6 @@ static int smu_v11_0_read_sensor(struct smu_context *smu,
 		ret = smu_v11_0_get_gfx_vdd(smu, (uint32_t *)data);
 		*size = 4;
 		break;
-	case AMDGPU_PP_SENSOR_UVD_POWER:
-		*(uint32_t *)data = smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT) ? 1 : 0;
-		*size = 4;
-		break;
-	case AMDGPU_PP_SENSOR_VCE_POWER:
-		*(uint32_t *)data = smu_feature_is_enabled(smu, FEATURE_DPM_VCE_BIT) ? 1 : 0;
-		*size = 4;
-		break;
 	case AMDGPU_PP_SENSOR_MIN_FAN_RPM:
 		*(uint32_t *)data = 0;
 		*size = 4;
@@ -1402,6 +1394,10 @@ static int smu_v11_0_read_sensor(struct smu_context *smu,
 		break;
 	}
 
+	/* try get sensor data by asic */
+	if (ret)
+		ret = smu_asic_read_sensor(smu, sensor, data, size);
+
 	if (ret)
 		*size = 0;
 
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 31c104233323..06f91969cf76 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -2772,6 +2772,28 @@ static int vega20_set_ppfeature_status(struct smu_context *smu, uint64_t new_ppf
 	return 0;
 }
 
+static int vega20_read_sensor(struct smu_context *smu,
+			      enum amd_pp_sensors sensor,
+			      void *data, uint32_t *size)
+{
+	int ret = 0;
+
+	switch (sensor) {
+	case AMDGPU_PP_SENSOR_UVD_POWER:
+		*(uint32_t *)data = smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT) ? 1 : 0;
+		*size = 4;
+		break;
+	case AMDGPU_PP_SENSOR_VCE_POWER:
+		*(uint32_t *)data = smu_feature_is_enabled(smu, FEATURE_DPM_VCE_BIT) ? 1 : 0;
+		*size = 4;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
 static const struct pptable_funcs vega20_ppt_funcs = {
 	.alloc_dpm_context = vega20_allocate_dpm_context,
 	.store_powerplay_table = vega20_store_powerplay_table,
@@ -2799,6 +2821,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.od_edit_dpm_table = vega20_odn_edit_dpm_table,
 	.dpm_set_uvd_enable = vega20_dpm_set_uvd_enable,
 	.dpm_set_vce_enable = vega20_dpm_set_vce_enable,
+	.read_sensor = vega20_read_sensor,
 	.pre_display_config_changed = vega20_pre_display_config_changed,
 	.display_config_changed = vega20_display_config_changed,
 	.apply_clocks_adjust_rules = vega20_apply_clocks_adjust_rules,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 161/459] drm/amd/powerplay: move the function of is_dpm_running to asic file
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (59 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 160/459] drm/amd/powerplay: move the function of read_sensor " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 162/459] drm/amd/powerplay: add smu11 smu_if_version check for navi10 Alex Deucher
                     ` (31 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

The is_dpm_running function is ASIC-specific,
so move it to the ASIC file.
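
For the check itself, here is a standalone sketch of how the two enabled-feature words are merged into one 64-bit value and tested (the mask constant is illustrative, not the real SMC_DPM_FEATURE):

	#include <stdint.h>
	#include <stdio.h>

	/* Standalone sketch of the test in vega20_is_dpm_running(). */
	int main(void)
	{
		uint32_t feature_mask[2] = { 0x00000042, 0x00000001 };
		uint64_t feature_enabled = (uint64_t)feature_mask[0] |
					   ((uint64_t)feature_mask[1] << 32);
		uint64_t dpm_mask = 0x40ULL; /* hypothetical DPM feature bits */

		printf("dpm running: %d\n", !!(feature_enabled & dpm_mask));
		return 0;
	}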

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  4 ++--
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 22 -------------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 22 +++++++++++++++++++
 3 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index fbbaf8e2a6a2..7b31054b3e5e 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -474,6 +474,7 @@ struct pptable_funcs {
 	int (*set_cpu_power_state)(struct smu_context *smu);
 	int (*set_ppfeature_status)(struct smu_context *smu, uint64_t ppfeatures);
 	int (*get_ppfeature_status)(struct smu_context *smu, char *buf);
+	bool (*is_dpm_running)(struct smu_context *smu);
 };
 
 struct smu_funcs
@@ -505,7 +506,6 @@ struct smu_funcs
 	int (*init_display)(struct smu_context *smu);
 	int (*set_allowed_mask)(struct smu_context *smu);
 	int (*get_enabled_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
-	bool (*is_dpm_running)(struct smu_context *smu);
 	int (*update_feature_enable_state)(struct smu_context *smu, uint32_t feature_id, bool enabled);
 	int (*notify_display_change)(struct smu_context *smu);
 	int (*get_power_limit)(struct smu_context *smu, uint32_t *limit, bool def);
@@ -623,7 +623,7 @@ struct smu_funcs
 #define smu_feature_get_enabled_mask(smu, mask, num) \
 	((smu)->funcs->get_enabled_mask? (smu)->funcs->get_enabled_mask((smu), (mask), (num)) : 0)
 #define smu_is_dpm_running(smu) \
-	((smu)->funcs->is_dpm_running ? (smu)->funcs->is_dpm_running((smu)) : 0)
+	((smu)->ppt_funcs->is_dpm_running ? (smu)->ppt_funcs->is_dpm_running((smu)) : 0)
 #define smu_feature_update_enable_state(smu, feature_id, enabled) \
 	((smu)->funcs->update_feature_enable_state? (smu)->funcs->update_feature_enable_state((smu), (feature_id), (enabled)) : 0)
 #define smu_notify_display_change(smu) \
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 004c84223e40..8f2272a420b6 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -52,16 +52,6 @@ MODULE_FIRMWARE("amdgpu/navi10_smc.bin");
 #define SMU11_TEMPERATURE_UNITS_PER_CENTIGRADES 1000
 #define SMU11_VOLTAGE_SCALE 4
 
-#define SMC_DPM_FEATURE (FEATURE_DPM_PREFETCHER_MASK | \
-			 FEATURE_DPM_GFXCLK_MASK | \
-			 FEATURE_DPM_UCLK_MASK | \
-			 FEATURE_DPM_SOCCLK_MASK | \
-			 FEATURE_DPM_UVD_MASK | \
-			 FEATURE_DPM_VCE_MASK | \
-			 FEATURE_DPM_MP0CLK_MASK | \
-			 FEATURE_DPM_LINK_MASK | \
-			 FEATURE_DPM_DCEFCLK_MASK)
-
 static int smu_v11_0_send_msg_without_waiting(struct smu_context *smu,
 					      uint16_t msg)
 {
@@ -848,17 +838,6 @@ static int smu_v11_0_get_enabled_mask(struct smu_context *smu,
 	return ret;
 }
 
-static bool smu_v11_0_is_dpm_running(struct smu_context *smu)
-{
-	int ret = 0;
-	uint32_t feature_mask[2];
-	unsigned long feature_enabled;
-	ret = smu_v11_0_get_enabled_mask(smu, feature_mask, 2);
-	feature_enabled = (unsigned long)((uint64_t)feature_mask[0] |
-			   ((uint64_t)feature_mask[1] << 32));
-	return !!(feature_enabled & SMC_DPM_FEATURE);
-}
-
 static int smu_v11_0_system_features_control(struct smu_context *smu,
 					     bool en)
 {
@@ -1875,7 +1854,6 @@ static const struct smu_funcs smu_v11_0_funcs = {
 	.init_display = smu_v11_0_init_display,
 	.set_allowed_mask = smu_v11_0_set_allowed_mask,
 	.get_enabled_mask = smu_v11_0_get_enabled_mask,
-	.is_dpm_running = smu_v11_0_is_dpm_running,
 	.system_features_control = smu_v11_0_system_features_control,
 	.update_feature_enable_state = smu_v11_0_update_feature_enable_state,
 	.notify_display_change = smu_v11_0_notify_display_change,
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 06f91969cf76..e070c7e7cdb7 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -43,6 +43,16 @@
 #define MSG_MAP(msg) \
 	[SMU_MSG_##msg] = PPSMC_MSG_##msg
 
+#define SMC_DPM_FEATURE (FEATURE_DPM_PREFETCHER_MASK | \
+			 FEATURE_DPM_GFXCLK_MASK | \
+			 FEATURE_DPM_UCLK_MASK | \
+			 FEATURE_DPM_SOCCLK_MASK | \
+			 FEATURE_DPM_UVD_MASK | \
+			 FEATURE_DPM_VCE_MASK | \
+			 FEATURE_DPM_MP0CLK_MASK | \
+			 FEATURE_DPM_LINK_MASK | \
+			 FEATURE_DPM_DCEFCLK_MASK)
+
 static int vega20_message_map[SMU_MSG_MAX_COUNT] = {
 	MSG_MAP(TestMessage),
 	MSG_MAP(GetSmuVersion),
@@ -2794,6 +2804,17 @@ static int vega20_read_sensor(struct smu_context *smu,
 	return ret;
 }
 
+static bool vega20_is_dpm_running(struct smu_context *smu)
+{
+	int ret = 0;
+	uint32_t feature_mask[2];
+	unsigned long feature_enabled;
+	ret = smu_feature_get_enabled_mask(smu, feature_mask, 2);
+	feature_enabled = (unsigned long)((uint64_t)feature_mask[0] |
+			   ((uint64_t)feature_mask[1] << 32));
+	return !!(feature_enabled & SMC_DPM_FEATURE);
+}
+
 static const struct pptable_funcs vega20_ppt_funcs = {
 	.alloc_dpm_context = vega20_allocate_dpm_context,
 	.store_powerplay_table = vega20_store_powerplay_table,
@@ -2832,6 +2853,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.get_profiling_clk_mask = vega20_get_profiling_clk_mask,
 	.set_ppfeature_status = vega20_set_ppfeature_status,
 	.get_ppfeature_status = vega20_get_ppfeature_status,
+	.is_dpm_running = vega20_is_dpm_running,
 };
 
 void vega20_set_ppt_funcs(struct smu_context *smu)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 162/459] drm/amd/powerplay: add smu11 smu_if_version check for navi10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (60 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 161/459] drm/amd/powerplay: move the function of is_dpm_running " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 163/459] drm/amd/powerplay: implement smc firmware v2.1 for smu11 Alex Deucher
                     ` (30 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

Add the smu11 firmware interface version check for navi10 by storing the driver-side interface version on the smu context (a hedged sketch of the comparison follows).
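
A sketch of how such a comparison typically looks (if_version here is assumed to be queried from the firmware; the real policy in smu_v11_0_check_fw_version() may differ, e.g. warn instead of fail):

	/* Hedged sketch, not a quote of the actual check. */
	if (if_version != smu->smc_if_version) {
		pr_err("smu if version mismatch: driver 0x%08x, fw 0x%08x\n",
		       smu->smc_if_version, if_version);
		return -EINVAL;
	}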

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 64fecbb08995..424b138eba2f 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -305,4 +305,5 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 void navi10_set_ppt_funcs(struct smu_context *smu)
 {
 	smu->ppt_funcs = &navi10_ppt_funcs;
+	smu->smc_if_version = SMU11_DRIVER_IF_VERSION;
 }
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 163/459] drm/amd/powerplay: implement smc firmware v2.1 for smu11
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (61 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 162/459] drm/amd/powerplay: add smu11 smu_if_version check for navi10 Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 164/459] drm/amd/powerplay: remove duplicate code from smu hw init Alex Deucher
                     ` (29 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

1. Add smc_firmware_header_v2_1 firmware header support, allowing multiple pptables to be carried in the SMC firmware (see the parsing sketch after this list).
2. Optimize the current pptable load framework.
3. Rename read_pptable_from_vbios to setup_pptable.
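
A hedged sketch of the v2.1 entry-table walk, mirroring the logic of smu_v11_0_set_pptable_v2_1() below with error handling trimmed (fw_data and pptable_id are assumed inputs):

	const struct smc_firmware_header_v2_1 *v2_1 = fw_data;
	const struct smc_soft_pptable_entry *entries = (const void *)
		((const uint8_t *)v2_1 + le32_to_cpu(v2_1->pptable_entry_offset));
	uint32_t i, count = le32_to_cpu(v2_1->pptable_count);

	for (i = 0; i < count; i++) {
		if (le32_to_cpu(entries[i].id) != pptable_id)
			continue;
		/* found: point at this embedded soft pptable */
		*table = (uint8_t *)v2_1 + le32_to_cpu(entries[i].ppt_offset_bytes);
		*size = le32_to_cpu(entries[i].ppt_size_bytes);
		break;
	}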

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h     | 13 +++
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c    |  2 +-
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  6 +-
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 94 ++++++++++++++-----
 4 files changed, 85 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 9b096228a02f..eaafea87aa3a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -56,6 +56,19 @@ struct smc_firmware_header_v2_0 {
 	uint32_t ppt_size_bytes; /* soft pptable size */
 };
 
+struct smc_soft_pptable_entry {
+	uint32_t id;
+	uint32_t ppt_offset_bytes;
+	uint32_t ppt_size_bytes;
+};
+
+/* version_major=2, version_minor=1 */
+struct smc_firmware_header_v2_1 {
+	struct smc_firmware_header_v1_0 v1_0;
+	uint32_t pptable_count;
+	uint32_t pptable_entry_offset;
+};
+
 /* version_major=1, version_minor=0 */
 struct psp_firmware_header_v1_0 {
 	struct common_firmware_header header;
diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index b9b56ec1aacf..2de67e16e5e3 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -633,7 +633,7 @@ static int smu_smc_table_hw_init(struct smu_context *smu,
 		if (ret)
 			return ret;
 
-		ret = smu_read_pptable_from_vbios(smu);
+		ret = smu_setup_pptable(smu);
 		if (ret)
 			return ret;
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 7b31054b3e5e..4ec643417b68 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -486,7 +486,7 @@ struct smu_funcs
 	int (*fini_power)(struct smu_context *smu);
 	int (*load_microcode)(struct smu_context *smu);
 	int (*check_fw_status)(struct smu_context *smu);
-	int (*read_pptable_from_vbios)(struct smu_context *smu);
+	int (*setup_pptable)(struct smu_context *smu);
 	int (*get_vbios_bootup_values)(struct smu_context *smu);
 	int (*get_clk_info_from_vbios)(struct smu_context *smu);
 	int (*check_pptable)(struct smu_context *smu);
@@ -570,8 +570,8 @@ struct smu_funcs
 	((smu)->funcs->load_microcode ? (smu)->funcs->load_microcode((smu)) : 0)
 #define smu_check_fw_status(smu) \
 	((smu)->funcs->check_fw_status ? (smu)->funcs->check_fw_status((smu)) : 0)
-#define smu_read_pptable_from_vbios(smu) \
-	((smu)->funcs->read_pptable_from_vbios ? (smu)->funcs->read_pptable_from_vbios((smu)) : 0)
+#define smu_setup_pptable(smu) \
+	((smu)->funcs->setup_pptable ? (smu)->funcs->setup_pptable((smu)) : 0)
 #define smu_get_vbios_bootup_values(smu) \
 	((smu)->funcs->get_vbios_bootup_values ? (smu)->funcs->get_vbios_bootup_values((smu)) : 0)
 #define smu_get_clk_info_from_vbios(smu) \
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 8f2272a420b6..a952d2a297f7 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -143,18 +143,6 @@ smu_v11_0_send_msg_with_param(struct smu_context *smu, uint16_t msg,
 	return ret;
 }
 
-static void smu_v11_0_init_smu_ext_microcode(struct smu_context *smu)
-{
-	struct amdgpu_device *adev = smu->adev;
-	const struct smc_firmware_header_v2_0 *v2;
-
-	v2 = (const struct smc_firmware_header_v2_0 *) adev->pm.fw->data;
-
-	smu->ppt_offset_bytes = le32_to_cpu(v2->ppt_offset_bytes);
-	smu->ppt_size_bytes = le32_to_cpu(v2->ppt_size_bytes);
-	smu->ppt_start_addr = (uint8_t *)v2 + smu->ppt_offset_bytes;
-}
-
 static int smu_v11_0_init_microcode(struct smu_context *smu)
 {
 	struct amdgpu_device *adev = smu->adev;
@@ -164,7 +152,6 @@ static int smu_v11_0_init_microcode(struct smu_context *smu)
 	const struct smc_firmware_header_v1_0 *hdr;
 	const struct common_firmware_header *header;
 	struct amdgpu_firmware_info *ucode = NULL;
-	uint32_t version_major, version_minor;
 
 	switch (adev->asic_type) {
 	case CHIP_VEGA20:
@@ -190,11 +177,6 @@ static int smu_v11_0_init_microcode(struct smu_context *smu)
 	amdgpu_ucode_print_smc_hdr(&hdr->header);
 	adev->pm.fw_version = le32_to_cpu(hdr->header.ucode_version);
 
-	version_major = le16_to_cpu(hdr->header.header_version_major);
-	version_minor = le16_to_cpu(hdr->header.header_version_minor);
-	if (version_major == 2 && version_minor == 0)
-		smu_v11_0_init_smu_ext_microcode(smu); /* with soft pptable */
-
 	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
 		ucode = &adev->firmware.ucode[AMDGPU_UCODE_ID_SMC];
 		ucode->ucode_id = AMDGPU_UCODE_ID_SMC;
@@ -293,22 +275,82 @@ static int smu_v11_0_check_fw_version(struct smu_context *smu)
 	return ret;
 }
 
-static int smu_v11_0_read_pptable_from_vbios(struct smu_context *smu)
+static int smu_v11_0_set_pptable_v2_0(struct smu_context *smu, void **table, uint32_t *size)
+{
+	struct amdgpu_device *adev = smu->adev;
+	uint32_t ppt_offset_bytes;
+	const struct smc_firmware_header_v2_0 *v2;
+
+	v2 = (const struct smc_firmware_header_v2_0 *) adev->pm.fw->data;
+
+	ppt_offset_bytes = le32_to_cpu(v2->ppt_offset_bytes);
+	*size = le32_to_cpu(v2->ppt_size_bytes);
+	*table = (uint8_t *)v2 + ppt_offset_bytes;
+
+	return 0;
+}
+
+static int smu_v11_0_set_pptable_v2_1(struct smu_context *smu, void **table, uint32_t *size, uint32_t pptable_id)
+{
+	struct amdgpu_device *adev = smu->adev;
+	const struct smc_firmware_header_v2_1 *v2_1;
+	struct smc_soft_pptable_entry *entries;
+	uint32_t pptable_count = 0;
+	int i = 0;
+
+	v2_1 = (const struct smc_firmware_header_v2_1 *) adev->pm.fw->data;
+	entries = (struct smc_soft_pptable_entry *)
+		((uint8_t *)v2_1 + le32_to_cpu(v2_1->pptable_entry_offset));
+	pptable_count = le32_to_cpu(v2_1->pptable_count);
+	for (i = 0; i < pptable_count; i++) {
+		if (le32_to_cpu(entries[i].id) == pptable_id) {
+			*table = ((uint8_t *)v2_1 + le32_to_cpu(entries[i].ppt_offset_bytes));
+			*size = le32_to_cpu(entries[i].ppt_size_bytes);
+			break;
+		}
+	}
+
+	if (i == pptable_count)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int smu_v11_0_setup_pptable(struct smu_context *smu)
 {
+	struct amdgpu_device *adev = smu->adev;
+	const struct smc_firmware_header_v1_0 *hdr;
 	int ret, index;
-	uint16_t size;
+	uint32_t size;
 	uint8_t frev, crev;
 	void *table;
+	uint16_t version_major, version_minor;
+
+	hdr = (const struct smc_firmware_header_v1_0 *) adev->pm.fw->data;
+	version_major = le16_to_cpu(hdr->header.header_version_major);
+	version_minor = le16_to_cpu(hdr->header.header_version_minor);
+
+	if (version_major == 2 && smu->smu_table.boot_values.pp_table_id > 0) {
+		switch (version_minor) {
+		case 0:
+			ret = smu_v11_0_set_pptable_v2_0(smu, &table, &size);
+			break;
+		case 1:
+			ret = smu_v11_0_set_pptable_v2_1(smu, &table, &size,
+							 smu->smu_table.boot_values.pp_table_id);
+			break;
+		default:
+			ret = -EINVAL;
+			break;
+		}
+		if (ret)
+			return ret;
 
-	if (smu->smu_table.boot_values.pp_table_id > 0 && smu->ppt_start_addr) {
-		/* load soft pptable */
-		table = (void *)smu->ppt_start_addr;
-		size= smu->ppt_size_bytes;
 	} else {
 		index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
 						    powerplayinfo);
 
-		ret = smu_get_atom_data_table(smu, index, &size, &frev, &crev,
+		ret = smu_get_atom_data_table(smu, index, (uint16_t *)&size, &frev, &crev,
 					      (uint8_t **)&table);
 		if (ret)
 			return ret;
@@ -1836,7 +1878,7 @@ static const struct smu_funcs smu_v11_0_funcs = {
 	.send_smc_msg = smu_v11_0_send_msg,
 	.send_smc_msg_with_param = smu_v11_0_send_msg_with_param,
 	.read_smc_arg = smu_v11_0_read_arg,
-	.read_pptable_from_vbios = smu_v11_0_read_pptable_from_vbios,
+	.setup_pptable = smu_v11_0_setup_pptable,
 	.init_smc_tables = smu_v11_0_init_smc_tables,
 	.fini_smc_tables = smu_v11_0_fini_smc_tables,
 	.init_power = smu_v11_0_init_power,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 164/459] drm/amd/powerplay: remove duplicate code from smu hw init
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (62 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 163/459] drm/amd/powerplay: implement smc firmware v2.1 for smu11 Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 165/459] drm/amd/powerplay: optimize feature mask function for asic Alex Deucher
                     ` (28 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Xiaojie Yuan

From: Kevin Wang <kevin1.wang@amd.com>

Remove duplicate (unused) code from the smu hw init path.

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 2de67e16e5e3..a48ca6a4353c 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -637,10 +637,6 @@ static int smu_smc_table_hw_init(struct smu_context *smu,
 		if (ret)
 			return ret;
 
-		ret = smu_get_clk_info_from_vbios(smu);
-		if (ret)
-			return ret;
-
 		/*
 		 * check if the format_revision in vbios is up to pptable header
 		 * version, and the structure size is not 0.
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 165/459] drm/amd/powerplay: optimization feature mask function for asic
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (63 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 164/459] drm/amd/powerplay: remove duplicate code from smu hw init Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 166/459] drm/amd/powerplay: add allowed feature mask for navi10 Alex Deucher
                     ` (27 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kenneth Feng, Kevin Wang, Huang Rui

From: Kevin Wang <kevin1.wang@amd.com>

1. change the function's output from an "unallowed" feature mask to an
   "allowed" feature mask
2. replace raw feature mask numbers with feature macros, which makes the
   code clearer
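
A compilable sketch of the new mask composition (bit positions invented
for illustration; the driver takes them from the smu11 driver-if header):
the two 32-bit words are written through a 64-bit view, which relies on
the little-endian layout these parts run with (and, in the kernel, on
-fno-strict-aliasing), and the result is OR'ed into the driver's
allowed-feature bitmap instead of AND-NOT'ing an unallowed mask out of a
fully-set one.

#include <stdint.h>
#include <stdio.h>

#define FEATURE_MASK(feature) (1ULL << (feature))	/* driver uses 1UL */

enum {	/* illustrative bit positions only */
	FEATURE_PPT_BIT		= 3,
	FEATURE_TDC_BIT		= 4,
	FEATURE_THERMAL_BIT	= 10,
	FEATURE_XGMI_BIT	= 37,	/* lands in the high word */
};

int main(void)
{
	uint32_t feature_mask[2] = { 0, 0 };

	*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_PPT_BIT)
				| FEATURE_MASK(FEATURE_TDC_BIT)
				| FEATURE_MASK(FEATURE_THERMAL_BIT)
				| FEATURE_MASK(FEATURE_XGMI_BIT);

	printf("low 0x%08x high 0x%08x\n", feature_mask[0], feature_mask[1]);
	return 0;
}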

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kenneth Feng <kenneth.feng@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c    | 10 +++---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  6 ++--
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c    | 22 +++++++++---
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 36 ++++++++++++++++---
 4 files changed, 57 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index a48ca6a4353c..cc245f4c61ab 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -233,22 +233,22 @@ int smu_feature_init_dpm(struct smu_context *smu)
 {
 	struct smu_feature *feature = &smu->smu_feature;
 	int ret = 0;
-	uint32_t unallowed_feature_mask[SMU_FEATURE_MAX/32];
+	uint32_t allowed_feature_mask[SMU_FEATURE_MAX/32];
 
 	if (!smu->pm_enabled)
 		return ret;
 	mutex_lock(&feature->mutex);
-	bitmap_fill(feature->allowed, SMU_FEATURE_MAX);
+	bitmap_zero(feature->allowed, SMU_FEATURE_MAX);
 	mutex_unlock(&feature->mutex);
 
-	ret = smu_get_unallowed_feature_mask(smu, unallowed_feature_mask,
+	ret = smu_get_allowed_feature_mask(smu, allowed_feature_mask,
 					     SMU_FEATURE_MAX/32);
 	if (ret)
 		return ret;
 
 	mutex_lock(&feature->mutex);
-	bitmap_andnot(feature->allowed, feature->allowed,
-		      (unsigned long *)unallowed_feature_mask,
+	bitmap_or(feature->allowed, feature->allowed,
+		      (unsigned long *)allowed_feature_mask,
 		      feature->feature_num);
 	mutex_unlock(&feature->mutex);
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 4ec643417b68..23324b9fb31b 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -421,7 +421,7 @@ struct pptable_funcs {
 	int (*append_powerplay_table)(struct smu_context *smu);
 	int (*get_smu_msg_index)(struct smu_context *smu, uint32_t index);
 	int (*run_afll_btc)(struct smu_context *smu);
-	int (*get_unallowed_feature_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
+	int (*get_allowed_feature_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
 	enum amd_pm_state_type (*get_current_power_state)(struct smu_context *smu);
 	int (*set_default_dpm_table)(struct smu_context *smu);
 	int (*set_power_state)(struct smu_context *smu);
@@ -703,8 +703,8 @@ struct smu_funcs
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_msg_index? (smu)->ppt_funcs->get_smu_msg_index((smu), (msg)) : -EINVAL) : -EINVAL)
 #define smu_run_afll_btc(smu) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->run_afll_btc? (smu)->ppt_funcs->run_afll_btc((smu)) : 0) : 0)
-#define smu_get_unallowed_feature_mask(smu, feature_mask, num) \
-	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_unallowed_feature_mask? (smu)->ppt_funcs->get_unallowed_feature_mask((smu), (feature_mask), (num)) : 0) : 0)
+#define smu_get_allowed_feature_mask(smu, feature_mask, num) \
+	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_allowed_feature_mask? (smu)->ppt_funcs->get_allowed_feature_mask((smu), (feature_mask), (num)) : 0) : 0)
 #define smu_set_deep_sleep_dcefclk(smu, clk) \
 	((smu)->funcs->set_deep_sleep_dcefclk ? (smu)->funcs->set_deep_sleep_dcefclk((smu), (clk)) : 0)
 #define smu_set_active_display_count(smu, count) \
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 424b138eba2f..6c2000734f1f 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -112,15 +112,29 @@ static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 	return val;
 }
 
+#define FEATURE_MASK(feature) (1UL << feature)
 static int
-navi10_get_unallowed_feature_mask(struct smu_context *smu,
+navi10_get_allowed_feature_mask(struct smu_context *smu,
 				  uint32_t *feature_mask, uint32_t num)
 {
 	if (num > 2)
 		return -EINVAL;
 
-	feature_mask[0] = 0xdc3f7f8c;
-	feature_mask[1] = 0xfffffcec;	/* bit32~bit63 is Unsupported */
+	memset(feature_mask, 0, sizeof(uint32_t) * num);
+
+	*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_DPM_SOCCLK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_MP0CLK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_LINK_BIT)
+				| FEATURE_MASK(FEATURE_GFX_ULV_BIT)
+				| FEATURE_MASK(FEATURE_RSMU_SMN_CG_BIT)
+				| FEATURE_MASK(FEATURE_PPT_BIT)
+				| FEATURE_MASK(FEATURE_TDC_BIT)
+				| FEATURE_MASK(FEATURE_GFX_EDC_BIT)
+				| FEATURE_MASK(FEATURE_VR0HOT_BIT)
+				| FEATURE_MASK(FEATURE_FAN_CONTROL_BIT)
+				| FEATURE_MASK(FEATURE_THERMAL_BIT)
+				| FEATURE_MASK(FEATURE_LED_DISPLAY_BIT)
+				| FEATURE_MASK(FEATURE_MMHUB_PG);
 
 	return 0;
 }
@@ -298,7 +312,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 	.check_powerplay_table = navi10_check_powerplay_table,
 	.append_powerplay_table = navi10_append_powerplay_table,
 	.get_smu_msg_index = navi10_get_smu_msg_index,
-	.get_unallowed_feature_mask = navi10_get_unallowed_feature_mask,
+	.get_allowed_feature_mask = navi10_get_allowed_feature_mask,
 	.set_default_dpm_table = navi10_set_default_dpm_table,
 };
 
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index e070c7e7cdb7..7e6148ab134b 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -402,16 +402,42 @@ static int vega20_run_btc_afll(struct smu_context *smu)
 	return smu_send_smc_msg(smu, SMU_MSG_RunAfllBtc);
 }
 
+#define FEATURE_MASK(feature) (1UL << feature)
 static int
-vega20_get_unallowed_feature_mask(struct smu_context *smu,
+vega20_get_allowed_feature_mask(struct smu_context *smu,
 				  uint32_t *feature_mask, uint32_t num)
 {
 	if (num > 2)
 		return -EINVAL;
 
-	feature_mask[0] = 0xE0041C00;
-	feature_mask[1] = 0xFFFFFFFE; /* bit32~bit63 is Unsupported */
-
+	memset(feature_mask, 0, sizeof(uint32_t) * num);
+
+	*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_DPM_PREFETCHER_BIT)
+				| FEATURE_MASK(FEATURE_DPM_GFXCLK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_UCLK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_SOCCLK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_UVD_BIT)
+				| FEATURE_MASK(FEATURE_DPM_VCE_BIT)
+				| FEATURE_MASK(FEATURE_ULV_BIT)
+				| FEATURE_MASK(FEATURE_DPM_MP0CLK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_LINK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_DCEFCLK_BIT)
+				| FEATURE_MASK(FEATURE_PPT_BIT)
+				| FEATURE_MASK(FEATURE_TDC_BIT)
+				| FEATURE_MASK(FEATURE_THERMAL_BIT)
+				| FEATURE_MASK(FEATURE_GFX_PER_CU_CG_BIT)
+				| FEATURE_MASK(FEATURE_RM_BIT)
+				| FEATURE_MASK(FEATURE_ACDC_BIT)
+				| FEATURE_MASK(FEATURE_VR0HOT_BIT)
+				| FEATURE_MASK(FEATURE_VR1HOT_BIT)
+				| FEATURE_MASK(FEATURE_FW_CTF_BIT)
+				| FEATURE_MASK(FEATURE_LED_DISPLAY_BIT)
+				| FEATURE_MASK(FEATURE_FAN_CONTROL_BIT)
+				| FEATURE_MASK(FEATURE_GFX_EDC_BIT)
+				| FEATURE_MASK(FEATURE_GFXOFF_BIT)
+				| FEATURE_MASK(FEATURE_CG_BIT)
+				| FEATURE_MASK(FEATURE_DPM_FCLK_BIT)
+				| FEATURE_MASK(FEATURE_XGMI_BIT);
 	return 0;
 }
 
@@ -2822,7 +2848,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.append_powerplay_table = vega20_append_powerplay_table,
 	.get_smu_msg_index = vega20_get_smu_msg_index,
 	.run_afll_btc = vega20_run_btc_afll,
-	.get_unallowed_feature_mask = vega20_get_unallowed_feature_mask,
+	.get_allowed_feature_mask = vega20_get_allowed_feature_mask,
 	.get_current_power_state = vega20_get_current_power_state,
 	.set_default_dpm_table = vega20_set_default_dpm_table,
 	.set_power_state = NULL,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 166/459] drm/amd/powerplay: add allowed feature mask for navi10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (64 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 165/459] drm/amd/powerplay: optimization feature mask function for asic Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 167/459] drm/amd: add gfxoff support on navi10 Alex Deucher
                     ` (26 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Kenneth Feng

From: Kevin Wang <kevin1.wang@amd.com>

add smu feature mask:
1. FEATURE_DPM_PREFETCHER_BIT
2. FEATURE_DPM_GFXCLK_BIT
3. FEATURE_ATHUB_PG

Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Kenneth Feng <kenneth.feng@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 6c2000734f1f..b8d9d1a73b16 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -122,7 +122,9 @@ navi10_get_allowed_feature_mask(struct smu_context *smu,
 
 	memset(feature_mask, 0, sizeof(uint32_t) * num);
 
-	*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_DPM_SOCCLK_BIT)
+	*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_DPM_PREFETCHER_BIT)
+				| FEATURE_MASK(FEATURE_DPM_GFXCLK_BIT)
+				| FEATURE_MASK(FEATURE_DPM_SOCCLK_BIT)
 				| FEATURE_MASK(FEATURE_DPM_MP0CLK_BIT)
 				| FEATURE_MASK(FEATURE_DPM_LINK_BIT)
 				| FEATURE_MASK(FEATURE_GFX_ULV_BIT)
@@ -134,7 +136,8 @@ navi10_get_allowed_feature_mask(struct smu_context *smu,
 				| FEATURE_MASK(FEATURE_FAN_CONTROL_BIT)
 				| FEATURE_MASK(FEATURE_THERMAL_BIT)
 				| FEATURE_MASK(FEATURE_LED_DISPLAY_BIT)
-				| FEATURE_MASK(FEATURE_MMHUB_PG);
+				| FEATURE_MASK(FEATURE_MMHUB_PG)
+				| FEATURE_MASK(FEATURE_ATHUB_PG);
 
 	return 0;
 }
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 167/459] drm/amd: add gfxoff support on navi10
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (65 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 166/459] drm/amd/powerplay: add allowed feature mask for navi10 Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 168/459] drm/amd/amdgpu: fw version check with gfxoff Alex Deucher
                     ` (25 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kenneth Feng, Hawking Zhang

From: Kenneth Feng <kenneth.feng@amd.com>

add the gfxoff interface to navi10; it's disabled by default.
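
The hook is wired through the smu wrappers' optional-callback convention:
each smu_*() macro invokes the backend function only when it is populated
and otherwise reports success, so asics that never fill in
gfx_off_control need no stub. A compilable sketch of that pattern (all
names invented for illustration):

#include <stdio.h>

struct funcs {
	int (*gfx_off_control)(void *ctx, int enable);
};

struct ctx {
	const struct funcs *funcs;
};

/* call the hook if present, else succeed -- mirrors smu_gfx_off_control() */
#define ctx_gfx_off_control(c, enable) \
	((c)->funcs->gfx_off_control ? \
	 (c)->funcs->gfx_off_control((c), (enable)) : 0)

static int hook(void *ctx, int enable)
{
	(void)ctx;
	printf("gfxoff %s\n", enable ? "allowed" : "disallowed");
	return 0;
}

int main(void)
{
	static const struct funcs with = { .gfx_off_control = hook };
	static const struct funcs without = { 0 };
	struct ctx a = { &with }, b = { &without };

	ctx_gfx_off_control(&a, 1);		/* dispatches to the hook */
	return ctx_gfx_off_control(&b, 1);	/* no hook: returns 0 */
}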

Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c       | 20 +++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h       |  7 +++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       |  4 +++-
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  5 ++++-
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 16 +++++++++++++++
 5 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
index 523b8ab6b04e..b5397135c417 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
@@ -920,3 +920,23 @@ int amdgpu_dpm_get_mclk(struct amdgpu_device *adev, bool low)
 	else
 		return (adev)->powerplay.pp_funcs->get_mclk((adev)->powerplay.pp_handle, (low));
 }
+
+int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev, uint32_t block_type, bool gate)
+{
+	int ret = 0;
+	bool swsmu = is_support_sw_smu(adev);
+
+	switch (block_type) {
+	case AMD_IP_BLOCK_TYPE_GFX:
+		if (swsmu)
+			ret = smu_gfx_off_control(&adev->smu, gate);
+		else
+			ret = ((adev)->powerplay.pp_funcs->set_powergating_by_smu(
+				(adev)->powerplay.pp_handle, block_type, gate));
+		break;
+	default:
+		break;
+	}
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
index 521dbd0d9af8..1c5c0fd76dbf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
@@ -355,10 +355,6 @@ enum amdgpu_pcie_gen {
 		((adev)->powerplay.pp_funcs->set_clockgating_by_smu(\
 			(adev)->powerplay.pp_handle, msg_id))
 
-#define amdgpu_dpm_set_powergating_by_smu(adev, block_type, gate) \
-		((adev)->powerplay.pp_funcs->set_powergating_by_smu(\
-			(adev)->powerplay.pp_handle, block_type, gate))
-
 #define amdgpu_dpm_get_power_profile_mode(adev, buf) \
 		((adev)->powerplay.pp_funcs->get_power_profile_mode(\
 			(adev)->powerplay.pp_handle, buf))
@@ -520,6 +516,9 @@ enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
 struct amd_vce_state*
 amdgpu_get_vce_clock_state(void *handle, u32 idx);
 
+int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
+				      uint32_t block_type, bool gate);
+
 extern int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low);
 
 extern int amdgpu_dpm_get_mclk(struct amdgpu_device *adev, bool low);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 8c2b8543d7bd..633f6876b20d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -547,7 +547,9 @@ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
 	if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))
 		return;
 
-	if (!adev->powerplay.pp_funcs || !adev->powerplay.pp_funcs->set_powergating_by_smu)
+	if (!is_support_sw_smu(adev) &&
+	    (!adev->powerplay.pp_funcs ||
+	     !adev->powerplay.pp_funcs->set_powergating_by_smu))
 		return;
 
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 23324b9fb31b..2336597c09e0 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -553,7 +553,7 @@ struct smu_funcs
 	int (*set_fan_speed_percent)(struct smu_context *smu, uint32_t speed);
 	int (*set_fan_speed_rpm)(struct smu_context *smu, uint32_t speed);
 	int (*set_xgmi_pstate)(struct smu_context *smu, uint32_t pstate);
-
+	int (*gfx_off_control)(struct smu_context *smu, bool enable);
 };
 
 #define smu_init_microcode(smu) \
@@ -592,6 +592,9 @@ struct smu_funcs
 	((smu)->funcs->set_tool_table_location ? (smu)->funcs->set_tool_table_location((smu)) : 0)
 #define smu_notify_memory_pool_location(smu) \
 	((smu)->funcs->notify_memory_pool_location ? (smu)->funcs->notify_memory_pool_location((smu)) : 0)
+#define smu_gfx_off_control(smu, enable) \
+	((smu)->funcs->gfx_off_control ? (smu)->funcs->gfx_off_control((smu), (enable)) : 0)
+
 #define smu_write_watermarks_table(smu) \
 	((smu)->funcs->write_watermarks_table ? (smu)->funcs->write_watermarks_table((smu)) : 0)
 #define smu_set_last_dcef_min_deep_sleep_clk(smu) \
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index a952d2a297f7..e1841651693a 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1547,6 +1547,21 @@ smu_v11_0_set_watermarks_for_clock_ranges(struct smu_context *smu, struct
 	return ret;
 }
 
+static int smu_v11_0_gfx_off_control(struct smu_context *smu, bool enable)
+{
+	int ret = 0;
+
+	mutex_lock(&smu->mutex);
+	if (enable)
+		ret = smu_send_smc_msg(smu, SMU_MSG_AllowGfxOff);
+	else
+		ret = smu_send_smc_msg(smu, SMU_MSG_DisallowGfxOff);
+	mutex_unlock(&smu->mutex);
+
+	return ret;
+}
+
+
 static int smu_v11_0_get_clock_ranges(struct smu_context *smu,
 				      uint32_t *clock,
 				      PPCLK_e clock_select,
@@ -1919,6 +1934,7 @@ static const struct smu_funcs smu_v11_0_funcs = {
 	.set_fan_speed_percent = smu_v11_0_set_fan_speed_percent,
 	.set_fan_speed_rpm = smu_v11_0_set_fan_speed_rpm,
 	.set_xgmi_pstate = smu_v11_0_set_xgmi_pstate,
+	.gfx_off_control = smu_v11_0_gfx_off_control,
 };
 
 void smu_v11_0_set_smu_funcs(struct smu_context *smu)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 168/459] drm/amd/amdgpu: fw version check with gfxoff
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (66 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 167/459] drm/amd: add gfxoff support on navi10 Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 169/459] drm/amd/powerplay: gfxoff: separate the Vega20 case Alex Deucher
                     ` (24 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kenneth Feng, Hawking Zhang

From: Kenneth Feng <kenneth.feng@amd.com>

1. check the firmware version when enabling gfxoff
2. overwrite the pptable to make sure gfxoff is really
enabled on navi10

Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c |  1 -
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 11 +++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index cc245f4c61ab..63df59e7335d 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -331,7 +331,6 @@ static int smu_set_funcs(struct amdgpu_device *adev)
 	switch (adev->asic_type) {
 	case CHIP_VEGA20:
 	case CHIP_NAVI10:
-		adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
 		if (adev->pm.pp_feature & PP_OVERDRIVE_MASK)
 			smu->od_enabled = true;
 		smu_v11_0_set_smu_funcs(smu);
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index b8d9d1a73b16..b78fa7fc0623 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -117,6 +117,8 @@ static int
 navi10_get_allowed_feature_mask(struct smu_context *smu,
 				  uint32_t *feature_mask, uint32_t num)
 {
+	struct amdgpu_device *adev = smu->adev;
+
 	if (num > 2)
 		return -EINVAL;
 
@@ -139,6 +141,10 @@ navi10_get_allowed_feature_mask(struct smu_context *smu,
 				| FEATURE_MASK(FEATURE_MMHUB_PG)
 				| FEATURE_MASK(FEATURE_ATHUB_PG);
 
+	if (adev->pm.pp_feature & PP_GFXOFF_MASK)
+		*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_GFX_SS_BIT)
+				| FEATURE_MASK(FEATURE_GFXOFF_BIT);
+
 	return 0;
 }
 
@@ -149,6 +155,7 @@ static int navi10_check_powerplay_table(struct smu_context *smu)
 
 static int navi10_append_powerplay_table(struct smu_context *smu)
 {
+	struct amdgpu_device *adev = smu->adev;
 	struct smu_table_context *table_context = &smu->smu_table;
 	PPTable_t *smc_pptable = table_context->driver_pptable;
 	struct atom_smc_dpm_info_v4_5 *smc_dpm_table;
@@ -234,6 +241,10 @@ static int navi10_append_powerplay_table(struct smu_context *smu)
 	/* Mvdd Svi2 Div Ratio Setting */
 	smc_pptable->MvddRatio = smc_dpm_table->MvddRatio;
 
+	if (adev->pm.pp_feature & PP_GFXOFF_MASK)
+		*(uint64_t *)smc_pptable->FeaturesToRun |= FEATURE_MASK(FEATURE_GFX_SS_BIT)
+					| FEATURE_MASK(FEATURE_GFXOFF_BIT);
+
 	return 0;
 }
 
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 169/459] drm/amd/powerplay: gfxoff: separate the Vega20 case
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (67 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 168/459] drm/amd/amdgpu: fw version check with gfxoff Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 170/459] drm/amd/powerplay: enable DCEFCLK dpm support Alex Deucher
                     ` (23 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kenneth Feng

From: Kenneth Feng <kenneth.feng@amd.com>

separate the Vega20 case from navi10 for gfxoff so that gfxoff
won't be allowed on Vega20.
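
The resulting shape is a per-asic switch around the SMC message, with the
smu mutex serializing the mailbox traffic. A userspace sketch of that
structure (pthread stand-ins for the kernel mutex, no real mailbox):

#include <pthread.h>

enum { MSG_ALLOW_GFXOFF, MSG_DISALLOW_GFXOFF };
enum { CHIP_VEGA20, CHIP_NAVI10 };

struct smu {
	pthread_mutex_t mutex;
	int asic_type;
};

static int send_smc_msg(struct smu *smu, int msg)
{
	(void)smu; (void)msg;	/* stand-in for the real mailbox write */
	return 0;
}

static int gfx_off_control(struct smu *smu, int enable)
{
	int ret = 0;

	switch (smu->asic_type) {
	case CHIP_NAVI10:
		pthread_mutex_lock(&smu->mutex);
		ret = send_smc_msg(smu, enable ? MSG_ALLOW_GFXOFF
					       : MSG_DISALLOW_GFXOFF);
		pthread_mutex_unlock(&smu->mutex);
		break;
	default:	/* CHIP_VEGA20 and anything else: do nothing */
		break;
	}
	return ret;
}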

Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index e1841651693a..d0019b8a68f3 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1550,13 +1550,24 @@ smu_v11_0_set_watermarks_for_clock_ranges(struct smu_context *smu, struct
 static int smu_v11_0_gfx_off_control(struct smu_context *smu, bool enable)
 {
 	int ret = 0;
+	struct amdgpu_device *adev = smu->adev;
 
-	mutex_lock(&smu->mutex);
-	if (enable)
-		ret = smu_send_smc_msg(smu, SMU_MSG_AllowGfxOff);
-	else
-		ret = smu_send_smc_msg(smu, SMU_MSG_DisallowGfxOff);
-	mutex_unlock(&smu->mutex);
+	switch (adev->asic_type) {
+	case CHIP_VEGA20:
+		break;
+	case CHIP_NAVI10:
+		if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))
+			return 0;
+		mutex_lock(&smu->mutex);
+		if (enable)
+			ret = smu_send_smc_msg(smu, SMU_MSG_AllowGfxOff);
+		else
+			ret = smu_send_smc_msg(smu, SMU_MSG_DisallowGfxOff);
+		mutex_unlock(&smu->mutex);
+		break;
+	default:
+		break;
+	}
 
 	return ret;
 }
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 170/459] drm/amd/powerplay: enable DCEFCLK dpm support
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (68 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 169/459] drm/amd/powerplay: gfxoff: separate the Vega20 case Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 171/459] drm/amdgpu: enable sw smu driver for navi10 by default Alex Deucher
                     ` (22 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kevin Wang, Kenneth Feng

From: Kenneth Feng <kenneth.feng@amd.com>

Enable DCEFCLK dpm on navi10.

Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index b78fa7fc0623..961f44e55f35 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -139,7 +139,8 @@ navi10_get_allowed_feature_mask(struct smu_context *smu,
 				| FEATURE_MASK(FEATURE_THERMAL_BIT)
 				| FEATURE_MASK(FEATURE_LED_DISPLAY_BIT)
 				| FEATURE_MASK(FEATURE_MMHUB_PG)
-				| FEATURE_MASK(FEATURE_ATHUB_PG);
+				| FEATURE_MASK(FEATURE_ATHUB_PG)
+				| FEATURE_MASK(FEATURE_DPM_DCEFCLK_BIT);
 
 	if (adev->pm.pp_feature & PP_GFXOFF_MASK)
 		*(uint64_t *)feature_mask |= FEATURE_MASK(FEATURE_GFX_SS_BIT)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 171/459] drm/amdgpu: enable sw smu driver for navi10 by default
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (69 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 170/459] drm/amd/powerplay: enable DCEFCLK dpm support Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 172/459] drm/amd/powerplay: introduce smu clk type to handle ppclk for each asic Alex Deucher
                     ` (21 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

Navi10 will use the sw smu driver for dynamic power management,
while vega20 can also use the sw smu driver when amdgpu_dpm is
set to 2.
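
The selection this patch lands reduces to a small predicate, sketched
below with plain ints (the real code tests adev->asic_type); on vega20
the sw smu path is opted into by booting with amdgpu.dpm=2, while navi10
and newer take it by default:

#include <stdbool.h>

enum { CHIP_RAVEN, CHIP_VEGA20, CHIP_NAVI10 };

static bool is_support_sw_smu(int asic_type, int amdgpu_dpm)
{
	if (asic_type == CHIP_VEGA20)
		return amdgpu_dpm == 2;	/* opt-in via module parameter */
	return asic_type >= CHIP_NAVI10;
}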

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |  4 +++-
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 11 +++++------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index a9d1ceb11e5d..d0168e03d85e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -251,7 +251,9 @@ module_param_string(lockup_timeout, amdgpu_lockup_timeout, sizeof(amdgpu_lockup_
 
 /**
  * DOC: dpm (int)
- * Override for dynamic power management setting (1 = enable, 0 = disable). The default is -1 (auto).
+ * Override for dynamic power management setting
+ * (0 = disable, 1 = enable, 2 = enable sw smu driver for vega20)
+ * The default is -1 (auto).
  */
 MODULE_PARM_DESC(dpm, "DPM support (1 = enable, 0 = disable, -1 = auto)");
 module_param_named(dpm, amdgpu_dpm, int, 0444);
diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 63df59e7335d..87822f434350 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -168,13 +168,12 @@ int smu_update_table_with_arg(struct smu_context *smu, uint16_t table_id, uint16
 
 bool is_support_sw_smu(struct amdgpu_device *adev)
 {
-	if (amdgpu_dpm != 1)
-		return false;
-
-	if (adev->asic_type >= CHIP_VEGA20 && adev->asic_type != CHIP_RAVEN)
+	if (adev->asic_type == CHIP_VEGA20)
+		return (amdgpu_dpm == 2) ? true: false;
+	else if (adev->asic_type >= CHIP_NAVI10)
 		return true;
-
-	return false;
+	else
+		return false;
 }
 
 int smu_sys_get_pp_table(struct smu_context *smu, void **table)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 172/459] drm/amd/powerplay: introduce smu clk type to handle ppclk for each asic
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (70 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 171/459] drm/amdgpu: enable sw smu driver for navi10 by default Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 173/459] drm/amd/powerplay: introduce smu feature type to handle feature mask " Alex Deucher
                     ` (20 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch introduces a new smu clk type to handle the different ppclk
defines for each asic with the same smu ip.
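
The indirection is a designated-initializer lookup table: CLK_MAP()
places the asic's PPCLK_* value at the generic SMU_* index, and
get_smu_clk_index() validates and translates before the id is shifted
into an SMC message. A compilable sketch with invented clock ids:

#include <stdio.h>

enum smu_clk_type { SMU_GFXCLK, SMU_SOCCLK, SMU_UCLK, SMU_CLK_COUNT };
enum { PPCLK_GFXCLK, PPCLK_SOCCLK, PPCLK_UCLK, PPCLK_COUNT };

#define CLK_MAP(clk, index) [SMU_##clk] = (index)

static const int clk_map[SMU_CLK_COUNT] = {
	CLK_MAP(GFXCLK, PPCLK_GFXCLK),
	CLK_MAP(SOCCLK, PPCLK_SOCCLK),
	CLK_MAP(UCLK,   PPCLK_UCLK),
};

static int get_smu_clk_index(enum smu_clk_type clk)
{
	if (clk >= SMU_CLK_COUNT)
		return -1;	/* stand-in for -EINVAL */
	if (clk_map[clk] >= PPCLK_COUNT)
		return -1;	/* clock not wired up on this asic */
	return clk_map[clk];
}

int main(void)
{
	/* common code asks for SMU_UCLK; the asic map answers PPCLK_UCLK */
	printf("SMU_UCLK -> PPCLK %d\n", get_smu_clk_index(SMU_UCLK));
	return 0;
}

With this table, common smu_v11_0 code can pass SMU_UCLK or SMU_GFXCLK
and stay identical across vega20 (which additionally wires up ECLK and
FCLK) and navi10.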

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    | 21 +++++-
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |  3 +
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c    | 26 ++++++++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 66 ++++++++++---------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 32 ++++++++-
 5 files changed, 113 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 2336597c09e0..65990dd700f0 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -227,6 +227,22 @@ enum smu_message_type
 	SMU_MSG_MAX_COUNT,
 };
 
+enum smu_clk_type
+{
+	SMU_GFXCLK,
+	SMU_VCLK,
+	SMU_DCLK,
+	SMU_ECLK,
+	SMU_SOCCLK,
+	SMU_UCLK,
+	SMU_DCEFCLK,
+	SMU_DISPCLK,
+	SMU_PIXCLK,
+	SMU_PHYCLK,
+	SMU_FCLK,
+	SMU_CLK_COUNT,
+};
+
 enum smu_memory_pool_size
 {
     SMU_MEMORY_POOL_SIZE_ZERO   = 0,
@@ -420,6 +436,7 @@ struct pptable_funcs {
 	int (*check_powerplay_table)(struct smu_context *smu);
 	int (*append_powerplay_table)(struct smu_context *smu);
 	int (*get_smu_msg_index)(struct smu_context *smu, uint32_t index);
+	int (*get_smu_clk_index)(struct smu_context *smu, uint32_t index);
 	int (*run_afll_btc)(struct smu_context *smu);
 	int (*get_allowed_feature_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
 	enum amd_pm_state_type (*get_current_power_state)(struct smu_context *smu);
@@ -510,7 +527,7 @@ struct smu_funcs
 	int (*notify_display_change)(struct smu_context *smu);
 	int (*get_power_limit)(struct smu_context *smu, uint32_t *limit, bool def);
 	int (*set_power_limit)(struct smu_context *smu, uint32_t n);
-	int (*get_current_clk_freq)(struct smu_context *smu, uint32_t clk_id, uint32_t *value);
+	int (*get_current_clk_freq)(struct smu_context *smu, enum smu_clk_type clk_id, uint32_t *value);
 	int (*init_max_sustainable_clocks)(struct smu_context *smu);
 	int (*start_thermal_control)(struct smu_context *smu);
 	int (*read_sensor)(struct smu_context *smu, enum amd_pp_sensors sensor,
@@ -704,6 +721,8 @@ struct smu_funcs
 
 #define smu_msg_get_index(smu, msg) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_msg_index? (smu)->ppt_funcs->get_smu_msg_index((smu), (msg)) : -EINVAL) : -EINVAL)
+#define smu_clk_get_index(smu, msg) \
+	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_clk_index? (smu)->ppt_funcs->get_smu_clk_index((smu), (msg)) : -EINVAL) : -EINVAL)
 #define smu_run_afll_btc(smu) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->run_afll_btc? (smu)->ppt_funcs->run_afll_btc((smu)) : 0) : 0)
 #define smu_get_allowed_feature_mask(smu, feature_mask, num) \
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index cd5e66b82ce1..cae6619d2df5 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -40,6 +40,9 @@
 #define TEMP_RANGE_MIN			(0)
 #define TEMP_RANGE_MAX			(80 * 1000)
 
+#define CLK_MAP(clk, index) \
+	[SMU_##clk] = index
+
 struct smu_11_0_max_sustainable_clocks {
 	uint32_t display_clock;
 	uint32_t phy_clock;
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 961f44e55f35..f45df244013f 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -99,6 +99,18 @@ static int navi10_message_map[SMU_MSG_MAX_COUNT] = {
 	MSG_MAP(PrepareMp1ForShutdown,		PPSMC_MSG_PrepareMp1ForShutdown),
 };
 
+static int navi10_clk_map[SMU_CLK_COUNT] = {
+	CLK_MAP(GFXCLK, PPCLK_GFXCLK),
+	CLK_MAP(SOCCLK, PPCLK_SOCCLK),
+	CLK_MAP(UCLK, PPCLK_UCLK),
+	CLK_MAP(DCLK, PPCLK_DCLK),
+	CLK_MAP(VCLK, PPCLK_VCLK),
+	CLK_MAP(DCEFCLK, PPCLK_DCEFCLK),
+	CLK_MAP(DISPCLK, PPCLK_DISPCLK),
+	CLK_MAP(PIXCLK, PPCLK_PIXCLK),
+	CLK_MAP(PHYCLK, PPCLK_PHYCLK),
+};
+
 static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -112,6 +124,19 @@ static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 	return val;
 }
 
+static int navi10_get_smu_clk_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_CLK_COUNT)
+		return -EINVAL;
+
+	val = navi10_clk_map[index];
+	if (val >= PPCLK_COUNT)
+		return -EINVAL;
+
+	return val;
+}
+
 #define FEATURE_MASK(feature) (1UL << feature)
 static int
 navi10_get_allowed_feature_mask(struct smu_context *smu,
@@ -327,6 +352,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 	.check_powerplay_table = navi10_check_powerplay_table,
 	.append_powerplay_table = navi10_append_powerplay_table,
 	.get_smu_msg_index = navi10_get_smu_msg_index,
+	.get_smu_clk_index = navi10_get_smu_clk_index,
 	.get_allowed_feature_mask = navi10_get_allowed_feature_mask,
 	.set_default_dpm_table = navi10_set_default_dpm_table,
 };
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index d0019b8a68f3..367c9795985c 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -920,14 +920,14 @@ static int smu_v11_0_notify_display_change(struct smu_context *smu)
 
 static int
 smu_v11_0_get_max_sustainable_clock(struct smu_context *smu, uint32_t *clock,
-				    PPCLK_e clock_select)
+				    enum smu_clk_type clock_select)
 {
 	int ret = 0;
 
 	if (!smu->pm_enabled)
 		return ret;
 	ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetDcModeMaxDpmFreq,
-					  clock_select << 16);
+					  smu_clk_get_index(smu, clock_select) << 16);
 	if (ret) {
 		pr_err("[GetMaxSustainableClock] Failed to get max DC clock from SMC!");
 		return ret;
@@ -942,7 +942,7 @@ smu_v11_0_get_max_sustainable_clock(struct smu_context *smu, uint32_t *clock,
 
 	/* if DC limit is zero, return AC limit */
 	ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetMaxDpmFreq,
-					  clock_select << 16);
+					  smu_clk_get_index(smu, clock_select) << 16);
 	if (ret) {
 		pr_err("[GetMaxSustainableClock] failed to get max AC clock from SMC!");
 		return ret;
@@ -972,7 +972,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT)) {
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->uclock),
-							  PPCLK_UCLK);
+							  SMU_UCLK);
 		if (ret) {
 			pr_err("[%s] failed to get max UCLK from SMC!",
 			       __func__);
@@ -983,7 +983,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 	if (smu_feature_is_enabled(smu, FEATURE_DPM_SOCCLK_BIT)) {
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->soc_clock),
-							  PPCLK_SOCCLK);
+							  SMU_SOCCLK);
 		if (ret) {
 			pr_err("[%s] failed to get max SOCCLK from SMC!",
 			       __func__);
@@ -994,7 +994,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->dcef_clock),
-							  PPCLK_DCEFCLK);
+							  SMU_DCEFCLK);
 		if (ret) {
 			pr_err("[%s] failed to get max DCEFCLK from SMC!",
 			       __func__);
@@ -1003,7 +1003,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->display_clock),
-							  PPCLK_DISPCLK);
+							  SMU_DISPCLK);
 		if (ret) {
 			pr_err("[%s] failed to get max DISPCLK from SMC!",
 			       __func__);
@@ -1011,7 +1011,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 		}
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->phy_clock),
-							  PPCLK_PHYCLK);
+							  SMU_PHYCLK);
 		if (ret) {
 			pr_err("[%s] failed to get max PHYCLK from SMC!",
 			       __func__);
@@ -1019,7 +1019,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 		}
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->pixel_clock),
-							  PPCLK_PIXCLK);
+							  SMU_PIXCLK);
 		if (ret) {
 			pr_err("[%s] failed to get max PIXCLK from SMC!",
 			       __func__);
@@ -1086,16 +1086,18 @@ static int smu_v11_0_set_power_limit(struct smu_context *smu, uint32_t n)
 	return ret;
 }
 
-static int smu_v11_0_get_current_clk_freq(struct smu_context *smu, uint32_t clk_id, uint32_t *value)
+static int smu_v11_0_get_current_clk_freq(struct smu_context *smu,
+					  enum smu_clk_type clk_id,
+					  uint32_t *value)
 {
 	int ret = 0;
 	uint32_t freq;
 
-	if (clk_id >= PPCLK_COUNT || !value)
+	if (clk_id >= SMU_CLK_COUNT || !value)
 		return -EINVAL;
 
-	ret = smu_send_smc_msg_with_param(smu,
-			SMU_MSG_GetDpmClockFreq, (clk_id << 16));
+	ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetDpmClockFreq,
+					  (smu_clk_get_index(smu, clk_id) << 16));
 	if (ret)
 		return ret;
 
@@ -1381,11 +1383,11 @@ static int smu_v11_0_read_sensor(struct smu_context *smu,
 		*size = 4;
 		break;
 	case AMDGPU_PP_SENSOR_GFX_MCLK:
-		ret = smu_get_current_clk_freq(smu, PPCLK_UCLK, (uint32_t *)data);
+		ret = smu_get_current_clk_freq(smu, SMU_UCLK, (uint32_t *)data);
 		*size = 4;
 		break;
 	case AMDGPU_PP_SENSOR_GFX_SCLK:
-		ret = smu_get_current_clk_freq(smu, PPCLK_GFXCLK, (uint32_t *)data);
+		ret = smu_get_current_clk_freq(smu, SMU_GFXCLK, (uint32_t *)data);
 		*size = 4;
 		break;
 	case AMDGPU_PP_SENSOR_HOTSPOT_TEMP:
@@ -1432,7 +1434,7 @@ smu_v11_0_display_clock_voltage_request(struct smu_context *smu,
 {
 	enum amd_pp_clock_type clk_type = clock_req->clock_type;
 	int ret = 0;
-	PPCLK_e clk_select = 0;
+	enum smu_clk_type clk_select = 0;
 	uint32_t clk_freq = clock_req->clock_freq_in_khz / 1000;
 
 	if (!smu->pm_enabled)
@@ -1440,16 +1442,16 @@ smu_v11_0_display_clock_voltage_request(struct smu_context *smu,
 	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
 		switch (clk_type) {
 		case amd_pp_dcef_clock:
-			clk_select = PPCLK_DCEFCLK;
+			clk_select = SMU_DCEFCLK;
 			break;
 		case amd_pp_disp_clock:
-			clk_select = PPCLK_DISPCLK;
+			clk_select = SMU_DISPCLK;
 			break;
 		case amd_pp_pixel_clock:
-			clk_select = PPCLK_PIXCLK;
+			clk_select = SMU_PIXCLK;
 			break;
 		case amd_pp_phy_clock:
-			clk_select = PPCLK_PHYCLK;
+			clk_select = SMU_PHYCLK;
 			break;
 		default:
 			pr_info("[%s] Invalid Clock Type!", __func__);
@@ -1461,7 +1463,7 @@ smu_v11_0_display_clock_voltage_request(struct smu_context *smu,
 			goto failed;
 
 		ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetHardMinByFreq,
-						  (clk_select << 16) | clk_freq);
+			(smu_clk_get_index(smu, clk_select) << 16) | clk_freq);
 	}
 
 failed:
@@ -1575,14 +1577,14 @@ static int smu_v11_0_gfx_off_control(struct smu_context *smu, bool enable)
 
 static int smu_v11_0_get_clock_ranges(struct smu_context *smu,
 				      uint32_t *clock,
-				      PPCLK_e clock_select,
+				      enum smu_clk_type clock_select,
 				      bool max)
 {
 	int ret;
 	*clock = 0;
 	if (max) {
 		ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetMaxDpmFreq,
-					    (clock_select << 16));
+				smu_clk_get_index(smu, clock_select) << 16);
 		if (ret) {
 			pr_err("[GetClockRanges] Failed to get max clock from SMC!\n");
 			return ret;
@@ -1590,7 +1592,7 @@ static int smu_v11_0_get_clock_ranges(struct smu_context *smu,
 		smu_read_smc_arg(smu, clock);
 	} else {
 		ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetMinDpmFreq,
-					    (clock_select << 16));
+				smu_clk_get_index(smu, clock_select) << 16);
 		if (ret) {
 			pr_err("[GetClockRanges] Failed to get min clock from SMC!\n");
 			return ret;
@@ -1612,15 +1614,15 @@ static uint32_t smu_v11_0_dpm_get_sclk(struct smu_context *smu, bool low)
 	}
 
 	if (low) {
-		ret = smu_v11_0_get_clock_ranges(smu, &gfx_clk, PPCLK_GFXCLK, false);
+		ret = smu_v11_0_get_clock_ranges(smu, &gfx_clk, SMU_GFXCLK, false);
 		if (ret) {
-			pr_err("[GetSclks]: fail to get min PPCLK_GFXCLK\n");
+			pr_err("[GetSclks]: fail to get min SMU_GFXCLK\n");
 			return ret;
 		}
 	} else {
-		ret = smu_v11_0_get_clock_ranges(smu, &gfx_clk, PPCLK_GFXCLK, true);
+		ret = smu_v11_0_get_clock_ranges(smu, &gfx_clk, SMU_GFXCLK, true);
 		if (ret) {
-			pr_err("[GetSclks]: fail to get max PPCLK_GFXCLK\n");
+			pr_err("[GetSclks]: fail to get max SMU_GFXCLK\n");
 			return ret;
 		}
 	}
@@ -1639,15 +1641,15 @@ static uint32_t smu_v11_0_dpm_get_mclk(struct smu_context *smu, bool low)
 	}
 
 	if (low) {
-		ret = smu_v11_0_get_clock_ranges(smu, &mem_clk, PPCLK_UCLK, false);
+		ret = smu_v11_0_get_clock_ranges(smu, &mem_clk, SMU_UCLK, false);
 		if (ret) {
-			pr_err("[GetMclks]: fail to get min PPCLK_UCLK\n");
+			pr_err("[GetMclks]: fail to get min SMU_UCLK\n");
 			return ret;
 		}
 	} else {
-		ret = smu_v11_0_get_clock_ranges(smu, &mem_clk, PPCLK_GFXCLK, true);
+		ret = smu_v11_0_get_clock_ranges(smu, &mem_clk, SMU_GFXCLK, true);
 		if (ret) {
-			pr_err("[GetMclks]: fail to get max PPCLK_UCLK\n");
+			pr_err("[GetMclks]: fail to get max SMU_UCLK\n");
 			return ret;
 		}
 	}
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 7e6148ab134b..b3fc9c034aa0 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -139,6 +139,33 @@ static int vega20_message_map[SMU_MSG_MAX_COUNT] = {
 	MSG_MAP(GetAVFSVoltageByDpm),
 };
 
+static int vega20_clk_map[SMU_CLK_COUNT] = {
+	CLK_MAP(GFXCLK, PPCLK_GFXCLK),
+	CLK_MAP(VCLK, PPCLK_VCLK),
+	CLK_MAP(DCLK, PPCLK_DCLK),
+	CLK_MAP(ECLK, PPCLK_ECLK),
+	CLK_MAP(SOCCLK, PPCLK_SOCCLK),
+	CLK_MAP(UCLK, PPCLK_UCLK),
+	CLK_MAP(DCEFCLK, PPCLK_DCEFCLK),
+	CLK_MAP(DISPCLK, PPCLK_DISPCLK),
+	CLK_MAP(PIXCLK, PPCLK_PIXCLK),
+	CLK_MAP(PHYCLK, PPCLK_PHYCLK),
+	CLK_MAP(FCLK, PPCLK_FCLK),
+};
+
+static int vega20_get_smu_clk_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_CLK_COUNT)
+		return -EINVAL;
+
+	val = vega20_clk_map[index];
+	if (val >= PPCLK_COUNT)
+		return -EINVAL;
+
+	return val;
+}
+
 static int vega20_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -776,7 +803,7 @@ static int vega20_print_clk_levels(struct smu_context *smu,
 
 	switch (type) {
 	case PP_SCLK:
-		ret = smu_get_current_clk_freq(smu, PPCLK_GFXCLK, &now);
+		ret = smu_get_current_clk_freq(smu, SMU_GFXCLK, &now);
 		if (ret) {
 			pr_err("Attempt to get current gfx clk Failed!");
 			return ret;
@@ -797,7 +824,7 @@ static int vega20_print_clk_levels(struct smu_context *smu,
 		break;
 
 	case PP_MCLK:
-		ret = smu_get_current_clk_freq(smu, PPCLK_UCLK, &now);
+		ret = smu_get_current_clk_freq(smu, SMU_UCLK, &now);
 		if (ret) {
 			pr_err("Attempt to get current mclk Failed!");
 			return ret;
@@ -2847,6 +2874,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.check_powerplay_table = vega20_check_powerplay_table,
 	.append_powerplay_table = vega20_append_powerplay_table,
 	.get_smu_msg_index = vega20_get_smu_msg_index,
+	.get_smu_clk_index = vega20_get_smu_clk_index,
 	.run_afll_btc = vega20_run_btc_afll,
 	.get_allowed_feature_mask = vega20_get_allowed_feature_mask,
 	.get_current_power_state = vega20_get_current_power_state,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 173/459] drm/amd/powerplay: introduce smu feature type to handle feature mask for each asic
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (71 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 172/459] drm/amd/powerplay: introduce smu clk type to handle ppclk for each asic Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 174/459] drm/amd/powerplay: introduce smu table id type to handle the smu table " Alex Deucher
                     ` (19 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch introduces a new smu feature type to handle the different
feature mask defines for each asic with the same smu ip.
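
FEA_MAP() pastes the feature name into both sides of the mapping: the
generic SMU_FEATURE_<fea>_BIT index and the asic firmware's
FEATURE_<fea>_BIT value. smu_feature_is_enabled() and friends then take
a generic mask id and run it through get_smu_feature_index() before
touching the bitmap, so common code stays identical even where the
firmware bit numbering differs. A short sketch with invented bit numbers:

enum { SMU_FEATURE_PPT_BIT, SMU_FEATURE_TDC_BIT, SMU_FEATURE_COUNT };
enum { FEATURE_PPT_BIT = 3, FEATURE_TDC_BIT = 4 };	/* per-asic values */

#define FEA_MAP(fea) [SMU_FEATURE_##fea##_BIT] = FEATURE_##fea##_BIT

static const int feature_mask_map[SMU_FEATURE_COUNT] = {
	FEA_MAP(PPT),
	FEA_MAP(TDC),
};

static int get_smu_feature_index(int mask)
{
	if (mask >= SMU_FEATURE_COUNT)
		return -1;	/* stand-in for -EINVAL */
	return feature_mask_map[mask];
}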

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c    |  22 +++-
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  72 ++++++++++-
 .../amd/powerplay/inc/smu_11_0_driver_if.h    |   4 +-
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |   3 +
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c    |  63 ++++++++-
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     |  26 ++--
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 122 ++++++++++++------
 7 files changed, 251 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 87822f434350..99e10313afa2 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -254,11 +254,14 @@ int smu_feature_init_dpm(struct smu_context *smu)
 	return ret;
 }
 
-int smu_feature_is_enabled(struct smu_context *smu, int feature_id)
+int smu_feature_is_enabled(struct smu_context *smu, enum smu_feature_mask mask)
 {
 	struct smu_feature *feature = &smu->smu_feature;
+	uint32_t feature_id;
 	int ret = 0;
 
+	feature_id = smu_feature_get_index(smu, mask);
+
 	WARN_ON(feature_id > feature->feature_num);
 
 	mutex_lock(&feature->mutex);
@@ -268,11 +271,15 @@ int smu_feature_is_enabled(struct smu_context *smu, int feature_id)
 	return ret;
 }
 
-int smu_feature_set_enabled(struct smu_context *smu, int feature_id, bool enable)
+int smu_feature_set_enabled(struct smu_context *smu, enum smu_feature_mask mask,
+			    bool enable)
 {
 	struct smu_feature *feature = &smu->smu_feature;
+	uint32_t feature_id;
 	int ret = 0;
 
+	feature_id = smu_feature_get_index(smu, mask);
+
 	WARN_ON(feature_id > feature->feature_num);
 
 	mutex_lock(&feature->mutex);
@@ -291,11 +298,14 @@ int smu_feature_set_enabled(struct smu_context *smu, int feature_id, bool enable
 	return ret;
 }
 
-int smu_feature_is_supported(struct smu_context *smu, int feature_id)
+int smu_feature_is_supported(struct smu_context *smu, enum smu_feature_mask mask)
 {
 	struct smu_feature *feature = &smu->smu_feature;
+	uint32_t feature_id;
 	int ret = 0;
 
+	feature_id = smu_feature_get_index(smu, mask);
+
 	WARN_ON(feature_id > feature->feature_num);
 
 	mutex_lock(&feature->mutex);
@@ -305,12 +315,16 @@ int smu_feature_is_supported(struct smu_context *smu, int feature_id)
 	return ret;
 }
 
-int smu_feature_set_supported(struct smu_context *smu, int feature_id,
+int smu_feature_set_supported(struct smu_context *smu,
+			      enum smu_feature_mask mask,
 			      bool enable)
 {
 	struct smu_feature *feature = &smu->smu_feature;
+	uint32_t feature_id;
 	int ret = 0;
 
+	feature_id = smu_feature_get_index(smu, mask);
+
 	WARN_ON(feature_id > feature->feature_num);
 
 	mutex_lock(&feature->mutex);
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 65990dd700f0..f0b313baf04d 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -243,6 +243,63 @@ enum smu_clk_type
 	SMU_CLK_COUNT,
 };
 
+enum smu_feature_mask
+{
+	SMU_FEATURE_DPM_PREFETCHER_BIT,
+	SMU_FEATURE_DPM_GFXCLK_BIT,
+	SMU_FEATURE_DPM_UCLK_BIT,
+	SMU_FEATURE_DPM_SOCCLK_BIT,
+	SMU_FEATURE_DPM_UVD_BIT,
+	SMU_FEATURE_DPM_VCE_BIT,
+	SMU_FEATURE_ULV_BIT,
+	SMU_FEATURE_DPM_MP0CLK_BIT,
+	SMU_FEATURE_DPM_LINK_BIT,
+	SMU_FEATURE_DPM_DCEFCLK_BIT,
+	SMU_FEATURE_DS_GFXCLK_BIT,
+	SMU_FEATURE_DS_SOCCLK_BIT,
+	SMU_FEATURE_DS_LCLK_BIT,
+	SMU_FEATURE_PPT_BIT,
+	SMU_FEATURE_TDC_BIT,
+	SMU_FEATURE_THERMAL_BIT,
+	SMU_FEATURE_GFX_PER_CU_CG_BIT,
+	SMU_FEATURE_RM_BIT,
+	SMU_FEATURE_DS_DCEFCLK_BIT,
+	SMU_FEATURE_ACDC_BIT,
+	SMU_FEATURE_VR0HOT_BIT,
+	SMU_FEATURE_VR1HOT_BIT,
+	SMU_FEATURE_FW_CTF_BIT,
+	SMU_FEATURE_LED_DISPLAY_BIT,
+	SMU_FEATURE_FAN_CONTROL_BIT,
+	SMU_FEATURE_GFX_EDC_BIT,
+	SMU_FEATURE_GFXOFF_BIT,
+	SMU_FEATURE_CG_BIT,
+	SMU_FEATURE_DPM_FCLK_BIT,
+	SMU_FEATURE_DS_FCLK_BIT,
+	SMU_FEATURE_DS_MP1CLK_BIT,
+	SMU_FEATURE_DS_MP0CLK_BIT,
+	SMU_FEATURE_XGMI_BIT,
+	SMU_FEATURE_DPM_GFX_PACE_BIT,
+	SMU_FEATURE_MEM_VDDCI_SCALING_BIT,
+	SMU_FEATURE_MEM_MVDD_SCALING_BIT,
+	SMU_FEATURE_DS_UCLK_BIT,
+	SMU_FEATURE_GFX_ULV_BIT,
+	SMU_FEATURE_FW_DSTATE_BIT,
+	SMU_FEATURE_BACO_BIT,
+	SMU_FEATURE_VCN_PG_BIT,
+	SMU_FEATURE_JPEG_PG_BIT,
+	SMU_FEATURE_USB_PG_BIT,
+	SMU_FEATURE_RSMU_SMN_CG_BIT,
+	SMU_FEATURE_APCC_PLUS_BIT,
+	SMU_FEATURE_GTHR_BIT,
+	SMU_FEATURE_GFX_DCS_BIT,
+	SMU_FEATURE_GFX_SS_BIT,
+	SMU_FEATURE_OUT_OF_BAND_MONITOR_BIT,
+	SMU_FEATURE_TEMP_DEPENDENT_VMIN_BIT,
+	SMU_FEATURE_MMHUB_PG_BIT,
+	SMU_FEATURE_ATHUB_PG_BIT,
+	SMU_FEATURE_COUNT,
+};
+
 enum smu_memory_pool_size
 {
     SMU_MEMORY_POOL_SIZE_ZERO   = 0,
@@ -437,6 +494,7 @@ struct pptable_funcs {
 	int (*append_powerplay_table)(struct smu_context *smu);
 	int (*get_smu_msg_index)(struct smu_context *smu, uint32_t index);
 	int (*get_smu_clk_index)(struct smu_context *smu, uint32_t index);
+	int (*get_smu_feature_index)(struct smu_context *smu, uint32_t index);
 	int (*run_afll_btc)(struct smu_context *smu);
 	int (*get_allowed_feature_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
 	enum amd_pm_state_type (*get_current_power_state)(struct smu_context *smu);
@@ -723,6 +781,8 @@ struct smu_funcs
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_msg_index? (smu)->ppt_funcs->get_smu_msg_index((smu), (msg)) : -EINVAL) : -EINVAL)
 #define smu_clk_get_index(smu, msg) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_clk_index? (smu)->ppt_funcs->get_smu_clk_index((smu), (msg)) : -EINVAL) : -EINVAL)
+#define smu_feature_get_index(smu, msg) \
+	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_feature_index? (smu)->ppt_funcs->get_smu_feature_index((smu), (msg)) : -EINVAL) : -EINVAL)
 #define smu_run_afll_btc(smu) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->run_afll_btc? (smu)->ppt_funcs->run_afll_btc((smu)) : 0) : 0)
 #define smu_get_allowed_feature_mask(smu, feature_mask, num) \
@@ -779,10 +839,14 @@ extern const struct amd_ip_funcs smu_ip_funcs;
 extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
 extern int smu_feature_init_dpm(struct smu_context *smu);
 
-extern int smu_feature_is_enabled(struct smu_context *smu, int feature_id);
-extern int smu_feature_set_enabled(struct smu_context *smu, int feature_id, bool enable);
-extern int smu_feature_is_supported(struct smu_context *smu, int feature_id);
-extern int smu_feature_set_supported(struct smu_context *smu, int feature_id, bool enable);
+extern int smu_feature_is_enabled(struct smu_context *smu,
+				  enum smu_feature_mask mask);
+extern int smu_feature_set_enabled(struct smu_context *smu,
+				   enum smu_feature_mask mask, bool enable);
+extern int smu_feature_is_supported(struct smu_context *smu,
+				    enum smu_feature_mask mask);
+extern int smu_feature_set_supported(struct smu_context *smu,
+				     enum smu_feature_mask mask, bool enable);
 
 int smu_update_table_with_arg(struct smu_context *smu, uint16_t table_id, uint16_t exarg,
 		     void *table_data, bool drv2smu);
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h b/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
index a53547fa8980..1ab6e4eca09f 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
@@ -90,8 +90,8 @@
 #define FEATURE_OUT_OF_BAND_MONITOR_BIT 38
 #define FEATURE_TEMP_DEPENDENT_VMIN_BIT 39
 
-#define FEATURE_MMHUB_PG                40 
-#define FEATURE_ATHUB_PG                41
+#define FEATURE_MMHUB_PG_BIT            40
+#define FEATURE_ATHUB_PG_BIT            41
 #define FEATURE_SPARE_42_BIT            42
 #define FEATURE_SPARE_43_BIT            43
 #define FEATURE_SPARE_44_BIT            44
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index cae6619d2df5..9284c1edfe42 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -43,6 +43,9 @@
 #define CLK_MAP(clk, index) \
 	[SMU_##clk] = index
 
+#define FEA_MAP(fea) \
+	[SMU_FEATURE_##fea##_BIT] = FEATURE_##fea##_BIT
+
 struct smu_11_0_max_sustainable_clocks {
 	uint32_t display_clock;
 	uint32_t phy_clock;
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index f45df244013f..d1c2d4e67879 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -111,6 +111,51 @@ static int navi10_clk_map[SMU_CLK_COUNT] = {
 	CLK_MAP(PHYCLK, PPCLK_PHYCLK),
 };
 
+static int navi10_feature_mask_map[SMU_FEATURE_COUNT] = {
+	FEA_MAP(DPM_PREFETCHER),
+	FEA_MAP(DPM_GFXCLK),
+	FEA_MAP(DPM_GFX_PACE),
+	FEA_MAP(DPM_UCLK),
+	FEA_MAP(DPM_SOCCLK),
+	FEA_MAP(DPM_MP0CLK),
+	FEA_MAP(DPM_LINK),
+	FEA_MAP(DPM_DCEFCLK),
+	FEA_MAP(MEM_VDDCI_SCALING),
+	FEA_MAP(MEM_MVDD_SCALING),
+	FEA_MAP(DS_GFXCLK),
+	FEA_MAP(DS_SOCCLK),
+	FEA_MAP(DS_LCLK),
+	FEA_MAP(DS_DCEFCLK),
+	FEA_MAP(DS_UCLK),
+	FEA_MAP(GFX_ULV),
+	FEA_MAP(FW_DSTATE),
+	FEA_MAP(GFXOFF),
+	FEA_MAP(BACO),
+	FEA_MAP(VCN_PG),
+	FEA_MAP(JPEG_PG),
+	FEA_MAP(USB_PG),
+	FEA_MAP(RSMU_SMN_CG),
+	FEA_MAP(PPT),
+	FEA_MAP(TDC),
+	FEA_MAP(GFX_EDC),
+	FEA_MAP(APCC_PLUS),
+	FEA_MAP(GTHR),
+	FEA_MAP(ACDC),
+	FEA_MAP(VR0HOT),
+	FEA_MAP(VR1HOT),
+	FEA_MAP(FW_CTF),
+	FEA_MAP(FAN_CONTROL),
+	FEA_MAP(THERMAL),
+	FEA_MAP(GFX_DCS),
+	FEA_MAP(RM),
+	FEA_MAP(LED_DISPLAY),
+	FEA_MAP(GFX_SS),
+	FEA_MAP(OUT_OF_BAND_MONITOR),
+	FEA_MAP(TEMP_DEPENDENT_VMIN),
+	FEA_MAP(MMHUB_PG),
+	FEA_MAP(ATHUB_PG),
+};
+
 static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -137,6 +182,19 @@ static int navi10_get_smu_clk_index(struct smu_context *smc, uint32_t index)
 	return val;
 }
 
+static int navi10_get_smu_feature_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_FEATURE_COUNT)
+		return -EINVAL;
+
+	val = navi10_feature_mask_map[index];
+	if (val > 64)
+		return -EINVAL;
+
+	return val;
+}
+
 #define FEATURE_MASK(feature) (1UL << feature)
 static int
 navi10_get_allowed_feature_mask(struct smu_context *smu,
@@ -163,8 +221,8 @@ navi10_get_allowed_feature_mask(struct smu_context *smu,
 				| FEATURE_MASK(FEATURE_FAN_CONTROL_BIT)
 				| FEATURE_MASK(FEATURE_THERMAL_BIT)
 				| FEATURE_MASK(FEATURE_LED_DISPLAY_BIT)
-				| FEATURE_MASK(FEATURE_MMHUB_PG)
-				| FEATURE_MASK(FEATURE_ATHUB_PG)
+				| FEATURE_MASK(FEATURE_MMHUB_PG_BIT)
+				| FEATURE_MASK(FEATURE_ATHUB_PG_BIT)
 				| FEATURE_MASK(FEATURE_DPM_DCEFCLK_BIT);
 
 	if (adev->pm.pp_feature & PP_GFXOFF_MASK)
@@ -353,6 +411,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 	.append_powerplay_table = navi10_append_powerplay_table,
 	.get_smu_msg_index = navi10_get_smu_msg_index,
 	.get_smu_clk_index = navi10_get_smu_clk_index,
+	.get_smu_feature_index = navi10_get_smu_feature_index,
 	.get_allowed_feature_mask = navi10_get_allowed_feature_mask,
 	.set_default_dpm_table = navi10_set_default_dpm_table,
 };
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 367c9795985c..8c60cdcba4ef 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -912,7 +912,7 @@ static int smu_v11_0_notify_display_change(struct smu_context *smu)
 
 	if (!smu->pm_enabled)
 		return ret;
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT))
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT))
 	    ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetUclkFastSwitch, 1);
 
 	return ret;
@@ -969,7 +969,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 	max_sustainable_clocks->phy_clock = 0xFFFFFFFF;
 	max_sustainable_clocks->pixel_clock = 0xFFFFFFFF;
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT)) {
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->uclock),
 							  SMU_UCLK);
@@ -980,7 +980,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_SOCCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) {
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->soc_clock),
 							  SMU_SOCCLK);
@@ -991,7 +991,7 @@ static int smu_v11_0_init_max_sustainable_clocks(struct smu_context *smu)
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT)) {
 		ret = smu_v11_0_get_max_sustainable_clock(smu,
 							  &(max_sustainable_clocks->dcef_clock),
 							  SMU_DCEFCLK);
@@ -1076,7 +1076,7 @@ static int smu_v11_0_set_power_limit(struct smu_context *smu, uint32_t n)
 		max_power_limit /= 100;
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_PPT_BIT))
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_PPT_BIT))
 		ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetPptLimit, n);
 	if (ret) {
 		pr_err("[%s] Set power limit Failed!", __func__);
@@ -1439,7 +1439,7 @@ smu_v11_0_display_clock_voltage_request(struct smu_context *smu,
 
 	if (!smu->pm_enabled)
 		return -EINVAL;
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT)) {
 		switch (clk_type) {
 		case amd_pp_dcef_clock:
 			clk_select = SMU_DCEFCLK;
@@ -1539,8 +1539,8 @@ smu_v11_0_set_watermarks_for_clock_ranges(struct smu_context *smu, struct
 	Watermarks_t *table = watermarks->cpu_addr;
 
 	if (!smu->disable_watermark &&
-	    smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT) &&
-	    smu_feature_is_enabled(smu, FEATURE_DPM_SOCCLK_BIT)) {
+	    smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT) &&
+	    smu_feature_is_enabled(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) {
 		smu_v11_0_set_watermarks_table(smu, table, clock_ranges);
 		smu->watermarks_bitmap |= WATERMARKS_EXIST;
 		smu->watermarks_bitmap &= ~WATERMARKS_LOADED;
@@ -1608,7 +1608,7 @@ static uint32_t smu_v11_0_dpm_get_sclk(struct smu_context *smu, bool low)
 	uint32_t gfx_clk;
 	int ret;
 
-	if (!smu_feature_is_enabled(smu, FEATURE_DPM_GFXCLK_BIT)) {
+	if (!smu_feature_is_enabled(smu, SMU_FEATURE_DPM_GFXCLK_BIT)) {
 		pr_err("[GetSclks]: gfxclk dpm not enabled!\n");
 		return -EPERM;
 	}
@@ -1635,7 +1635,7 @@ static uint32_t smu_v11_0_dpm_get_mclk(struct smu_context *smu, bool low)
 	uint32_t mem_clk;
 	int ret;
 
-	if (!smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT)) {
+	if (!smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT)) {
 		pr_err("[GetMclks]: memclk dpm not enabled!\n");
 		return -EPERM;
 	}
@@ -1743,7 +1743,7 @@ static int smu_v11_0_get_current_rpm(struct smu_context *smu,
 static uint32_t
 smu_v11_0_get_fan_control_mode(struct smu_context *smu)
 {
-	if (!smu_feature_is_enabled(smu, FEATURE_FAN_CONTROL_BIT))
+	if (!smu_feature_is_enabled(smu, SMU_FEATURE_FAN_CONTROL_BIT))
 		return AMD_FAN_CTRL_MANUAL;
 	else
 		return AMD_FAN_CTRL_AUTO;
@@ -1770,10 +1770,10 @@ smu_v11_0_smc_fan_control(struct smu_context *smu, bool start)
 {
 	int ret = 0;
 
-	if (smu_feature_is_supported(smu, FEATURE_FAN_CONTROL_BIT))
+	if (smu_feature_is_supported(smu, SMU_FEATURE_FAN_CONTROL_BIT))
 		return 0;
 
-	ret = smu_feature_set_enabled(smu, FEATURE_FAN_CONTROL_BIT, start);
+	ret = smu_feature_set_enabled(smu, SMU_FEATURE_FAN_CONTROL_BIT, start);
 	if (ret)
 		pr_err("[%s]%s smc FAN CONTROL feature failed!",
 		       __func__, (start ? "Start" : "Stop"));
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index b3fc9c034aa0..718fd4dec531 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -153,6 +153,55 @@ static int vega20_clk_map[SMU_CLK_COUNT] = {
 	CLK_MAP(FCLK, PPCLK_FCLK),
 };
 
+static int vega20_feature_mask_map[SMU_FEATURE_COUNT] = {
+	FEA_MAP(DPM_PREFETCHER),
+	FEA_MAP(DPM_GFXCLK),
+	FEA_MAP(DPM_UCLK),
+	FEA_MAP(DPM_SOCCLK),
+	FEA_MAP(DPM_UVD),
+	FEA_MAP(DPM_VCE),
+	FEA_MAP(ULV),
+	FEA_MAP(DPM_MP0CLK),
+	FEA_MAP(DPM_LINK),
+	FEA_MAP(DPM_DCEFCLK),
+	FEA_MAP(DS_GFXCLK),
+	FEA_MAP(DS_SOCCLK),
+	FEA_MAP(DS_LCLK),
+	FEA_MAP(PPT),
+	FEA_MAP(TDC),
+	FEA_MAP(THERMAL),
+	FEA_MAP(GFX_PER_CU_CG),
+	FEA_MAP(RM),
+	FEA_MAP(DS_DCEFCLK),
+	FEA_MAP(ACDC),
+	FEA_MAP(VR0HOT),
+	FEA_MAP(VR1HOT),
+	FEA_MAP(FW_CTF),
+	FEA_MAP(LED_DISPLAY),
+	FEA_MAP(FAN_CONTROL),
+	FEA_MAP(GFX_EDC),
+	FEA_MAP(GFXOFF),
+	FEA_MAP(CG),
+	FEA_MAP(DPM_FCLK),
+	FEA_MAP(DS_FCLK),
+	FEA_MAP(DS_MP1CLK),
+	FEA_MAP(DS_MP0CLK),
+	FEA_MAP(XGMI),
+};
+
+static int vega20_get_smu_feature_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_FEATURE_COUNT)
+		return -EINVAL;
+
+	val = vega20_feature_mask_map[index];
+	if (val > 64)
+		return -EINVAL;
+
+	return val;
+}
+
 static int vega20_get_smu_clk_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -565,7 +614,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* socclk */
 	single_dpm_table = &(dpm_table->soc_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_SOCCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 						  PPCLK_SOCCLK);
 		if (ret) {
@@ -581,7 +630,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* gfxclk */
 	single_dpm_table = &(dpm_table->gfx_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_GFXCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_GFXCLK_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 						  PPCLK_GFXCLK);
 		if (ret) {
@@ -597,7 +646,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* memclk */
 	single_dpm_table = &(dpm_table->mem_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 						  PPCLK_UCLK);
 		if (ret) {
@@ -613,7 +662,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* eclk */
 	single_dpm_table = &(dpm_table->eclk_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_VCE_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_VCE_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table, PPCLK_ECLK);
 		if (ret) {
 			pr_err("[SetupDefaultDpmTable] failed to get eclk dpm levels!");
@@ -628,7 +677,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* vclk */
 	single_dpm_table = &(dpm_table->vclk_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UVD_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table, PPCLK_VCLK);
 		if (ret) {
 			pr_err("[SetupDefaultDpmTable] failed to get vclk dpm levels!");
@@ -643,7 +692,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* dclk */
 	single_dpm_table = &(dpm_table->dclk_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UVD_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table, PPCLK_DCLK);
 		if (ret) {
 			pr_err("[SetupDefaultDpmTable] failed to get dclk dpm levels!");
@@ -658,7 +707,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* dcefclk */
 	single_dpm_table = &(dpm_table->dcef_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 						  PPCLK_DCEFCLK);
 		if (ret) {
@@ -674,7 +723,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* pixclk */
 	single_dpm_table = &(dpm_table->pixel_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 						  PPCLK_PIXCLK);
 		if (ret) {
@@ -689,7 +738,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* dispclk */
 	single_dpm_table = &(dpm_table->display_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 						  PPCLK_DISPCLK);
 		if (ret) {
@@ -704,7 +753,7 @@ static int vega20_set_default_dpm_table(struct smu_context *smu)
 	/* phyclk */
 	single_dpm_table = &(dpm_table->phy_table);
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT)) {
 		ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 						  PPCLK_PHYCLK);
 		if (ret) {
@@ -1034,7 +1083,7 @@ static int vega20_upload_dpm_level(struct smu_context *smu, bool max,
 
 	dpm_table = smu->smu_dpm.dpm_context;
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_GFXCLK_BIT) &&
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_GFXCLK_BIT) &&
 	    (feature_mask & FEATURE_DPM_GFXCLK_MASK)) {
 		single_dpm_table = &(dpm_table->gfx_table);
 		freq = max ? single_dpm_table->dpm_state.soft_max_level :
@@ -1049,7 +1098,7 @@ static int vega20_upload_dpm_level(struct smu_context *smu, bool max,
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT) &&
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT) &&
 	    (feature_mask & FEATURE_DPM_UCLK_MASK)) {
 		single_dpm_table = &(dpm_table->mem_table);
 		freq = max ? single_dpm_table->dpm_state.soft_max_level :
@@ -1064,7 +1113,7 @@ static int vega20_upload_dpm_level(struct smu_context *smu, bool max,
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_SOCCLK_BIT) &&
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_SOCCLK_BIT) &&
 	    (feature_mask & FEATURE_DPM_SOCCLK_MASK)) {
 		single_dpm_table = &(dpm_table->soc_table);
 		freq = max ? single_dpm_table->dpm_state.soft_max_level :
@@ -1079,7 +1128,7 @@ static int vega20_upload_dpm_level(struct smu_context *smu, bool max,
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_FCLK_BIT) &&
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_FCLK_BIT) &&
 	    (feature_mask & FEATURE_DPM_FCLK_MASK)) {
 		single_dpm_table = &(dpm_table->fclk_table);
 		freq = max ? single_dpm_table->dpm_state.soft_max_level :
@@ -1094,7 +1143,7 @@ static int vega20_upload_dpm_level(struct smu_context *smu, bool max,
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_DCEFCLK_BIT) &&
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT) &&
 	    (feature_mask & FEATURE_DPM_DCEFCLK_MASK)) {
 		single_dpm_table = &(dpm_table->dcef_table);
 		freq = single_dpm_table->dpm_state.hard_min_level;
@@ -1360,7 +1409,7 @@ static int vega20_set_default_od8_setttings(struct smu_context *smu)
 
 	od8_settings = (struct vega20_od8_settings *)table_context->od8_settings;
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_SOCCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) {
 		if (table_context->od_feature_capabilities[ATOM_VEGA20_ODFEATURE_GFXCLK_LIMITS] &&
 		    table_context->od_settings_max[OD8_SETTING_GFXCLK_FMAX] > 0 &&
 		    table_context->od_settings_min[OD8_SETTING_GFXCLK_FMIN] > 0 &&
@@ -1433,7 +1482,7 @@ static int vega20_set_default_od8_setttings(struct smu_context *smu)
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT)) {
 		if (table_context->od_feature_capabilities[ATOM_VEGA20_ODFEATURE_UCLK_MAX] &&
 		    table_context->od_settings_min[OD8_SETTING_UCLK_FMAX] > 0 &&
 		    table_context->od_settings_max[OD8_SETTING_UCLK_FMAX] > 0 &&
@@ -1457,7 +1506,7 @@ static int vega20_set_default_od8_setttings(struct smu_context *smu)
 			od_table->OverDrivePct;
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_FAN_CONTROL_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_FAN_CONTROL_BIT)) {
 		if (table_context->od_feature_capabilities[ATOM_VEGA20_ODFEATURE_FAN_ACOUSTIC_LIMIT] &&
 		    table_context->od_settings_min[OD8_SETTING_FAN_ACOUSTIC_LIMIT] > 0 &&
 		    table_context->od_settings_max[OD8_SETTING_FAN_ACOUSTIC_LIMIT] > 0 &&
@@ -1481,7 +1530,7 @@ static int vega20_set_default_od8_setttings(struct smu_context *smu)
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_THERMAL_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_THERMAL_BIT)) {
 		if (table_context->od_feature_capabilities[ATOM_VEGA20_ODFEATURE_TEMPERATURE_FAN] &&
 		    table_context->od_settings_min[OD8_SETTING_FAN_TARGET_TEMP] > 0 &&
 		    table_context->od_settings_max[OD8_SETTING_FAN_TARGET_TEMP] > 0 &&
@@ -1838,7 +1887,7 @@ vega20_set_uclk_to_highest_dpm_level(struct smu_context *smu,
 	if (!smu_dpm_ctx->dpm_context)
 		return -EINVAL;
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT)) {
 		if (dpm_table->count <= 0) {
 			pr_err("[%s] Dpm table has no entry!", __func__);
 				return -EINVAL;
@@ -1901,8 +1950,8 @@ static int vega20_display_config_changed(struct smu_context *smu)
 	}
 
 	if ((smu->watermarks_bitmap & WATERMARKS_EXIST) &&
-	    smu_feature_is_supported(smu, FEATURE_DPM_DCEFCLK_BIT) &&
-	    smu_feature_is_supported(smu, FEATURE_DPM_SOCCLK_BIT)) {
+	    smu_feature_is_supported(smu, SMU_FEATURE_DPM_DCEFCLK_BIT) &&
+	    smu_feature_is_supported(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) {
 		smu_send_smc_msg_with_param(smu,
 					    SMU_MSG_NumOfDisplays,
 					    smu->display_config->num_display);
@@ -2071,11 +2120,11 @@ vega20_notify_smc_dispaly_config(struct smu_context *smu)
 	min_clocks.dcef_clock_in_sr = smu->display_config->min_dcef_deep_sleep_set_clk;
 	min_clocks.memory_clock = smu->display_config->min_mem_set_clock;
 
-	if (smu_feature_is_supported(smu, FEATURE_DPM_DCEFCLK_BIT)) {
+	if (smu_feature_is_supported(smu, SMU_FEATURE_DPM_DCEFCLK_BIT)) {
 		clock_req.clock_type = amd_pp_dcef_clock;
 		clock_req.clock_freq_in_khz = min_clocks.dcef_clock * 10;
 		if (!smu->funcs->display_clock_voltage_request(smu, &clock_req)) {
-			if (smu_feature_is_supported(smu, FEATURE_DS_DCEFCLK_BIT)) {
+			if (smu_feature_is_supported(smu, SMU_FEATURE_DS_DCEFCLK_BIT)) {
 				ret = smu_send_smc_msg_with_param(smu,
 								  SMU_MSG_SetMinDeepSleepDcefclk,
 								  min_clocks.dcef_clock_in_sr/100);
@@ -2089,7 +2138,7 @@ vega20_notify_smc_dispaly_config(struct smu_context *smu)
 		}
 	}
 
-	if (smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT)) {
+	if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT)) {
 		memtable->dpm_state.hard_min_level = min_clocks.memory_clock/100;
 		ret = smu_send_smc_msg_with_param(smu,
 						  SMU_MSG_SetHardMinByFreq,
@@ -2382,14 +2431,14 @@ static int vega20_set_od_percentage(struct smu_context *smu,
 	case OD_SCLK:
 		single_dpm_table = &(dpm_table->gfx_table);
 		golden_dpm_table = &(golden_table->gfx_table);
-		feature_enabled = smu_feature_is_enabled(smu, FEATURE_DPM_GFXCLK_BIT);
+		feature_enabled = smu_feature_is_enabled(smu, SMU_FEATURE_DPM_GFXCLK_BIT);
 		clk_id = PPCLK_GFXCLK;
 		index = OD8_SETTING_GFXCLK_FMAX;
 		break;
 	case OD_MCLK:
 		single_dpm_table = &(dpm_table->mem_table);
 		golden_dpm_table = &(golden_table->mem_table);
-		feature_enabled = smu_feature_is_enabled(smu, FEATURE_DPM_UCLK_BIT);
+		feature_enabled = smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UCLK_BIT);
 		clk_id = PPCLK_UCLK;
 		index = OD8_SETTING_UCLK_FMAX;
 		break;
@@ -2633,7 +2682,7 @@ static int vega20_odn_edit_dpm_table(struct smu_context *smu,
 			table_context->od_gfxclk_update = false;
 			single_dpm_table = &(dpm_table->gfx_table);
 
-			if (smu_feature_is_enabled(smu, FEATURE_DPM_GFXCLK_BIT)) {
+			if (smu_feature_is_enabled(smu, SMU_FEATURE_DPM_GFXCLK_BIT)) {
 				ret = vega20_set_single_dpm_table(smu, single_dpm_table,
 								  PPCLK_GFXCLK);
 				if (ret) {
@@ -2664,24 +2713,24 @@ static int vega20_odn_edit_dpm_table(struct smu_context *smu,
 
 static int vega20_dpm_set_uvd_enable(struct smu_context *smu, bool enable)
 {
-	if (!smu_feature_is_supported(smu, FEATURE_DPM_UVD_BIT))
+	if (!smu_feature_is_supported(smu, SMU_FEATURE_DPM_UVD_BIT))
 		return 0;
 
-	if (enable == smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT))
+	if (enable == smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UVD_BIT))
 		return 0;
 
-	return smu_feature_set_enabled(smu, FEATURE_DPM_UVD_BIT, enable);
+	return smu_feature_set_enabled(smu, SMU_FEATURE_DPM_UVD_BIT, enable);
 }
 
 static int vega20_dpm_set_vce_enable(struct smu_context *smu, bool enable)
 {
-	if (!smu_feature_is_supported(smu, FEATURE_DPM_VCE_BIT))
+	if (!smu_feature_is_supported(smu, SMU_FEATURE_DPM_VCE_BIT))
 		return 0;
 
-	if (enable == smu_feature_is_enabled(smu, FEATURE_DPM_VCE_BIT))
+	if (enable == smu_feature_is_enabled(smu, SMU_FEATURE_DPM_VCE_BIT))
 		return 0;
 
-	return smu_feature_set_enabled(smu, FEATURE_DPM_VCE_BIT, enable);
+	return smu_feature_set_enabled(smu, SMU_FEATURE_DPM_VCE_BIT, enable);
 }
 
 static int vega20_get_enabled_smc_features(struct smu_context *smu,
@@ -2843,11 +2892,11 @@ static int vega20_read_sensor(struct smu_context *smu,
 
 	switch (sensor) {
 	case AMDGPU_PP_SENSOR_UVD_POWER:
-		*(uint32_t *)data = smu_feature_is_enabled(smu, FEATURE_DPM_UVD_BIT) ? 1 : 0;
+		*(uint32_t *)data = smu_feature_is_enabled(smu, SMU_FEATURE_DPM_UVD_BIT) ? 1 : 0;
 		*size = 4;
 		break;
 	case AMDGPU_PP_SENSOR_VCE_POWER:
-		*(uint32_t *)data = smu_feature_is_enabled(smu, FEATURE_DPM_VCE_BIT) ? 1 : 0;
+		*(uint32_t *)data = smu_feature_is_enabled(smu, SMU_FEATURE_DPM_VCE_BIT) ? 1 : 0;
 		*size = 4;
 		break;
 	default:
@@ -2875,6 +2924,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.append_powerplay_table = vega20_append_powerplay_table,
 	.get_smu_msg_index = vega20_get_smu_msg_index,
 	.get_smu_clk_index = vega20_get_smu_clk_index,
+	.get_smu_feature_index = vega20_get_smu_feature_index,
 	.run_afll_btc = vega20_run_btc_afll,
 	.get_allowed_feature_mask = vega20_get_allowed_feature_mask,
 	.get_current_power_state = vega20_get_current_power_state,
-- 
2.20.1

* [PATCH 174/459] drm/amd/powerplay: introduce smu table id type to handle the smu table for each asic
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch introduces a new smu table id type to handle the different smu
table definitions of each asic that shares the same smu ip.
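
For illustration only (not part of the patch): a caller translates the
generic id into this asic's TABLE_* id through the new per-asic map,
roughly like the sketch below (names taken from the diff that follows):

	int table_id = smu_table_get_index(smu, SMU_TABLE_WATERMARKS);

	if (table_id < 0)
		return table_id; /* -EINVAL: table not mapped on this asic */
	/* table_id now holds this asic's TABLE_WATERMARKS value */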

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    | 20 +++++++++++++
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |  3 ++
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c    | 29 +++++++++++++++++++
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 27 +++++++++++++++++
 4 files changed, 79 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index f0b313baf04d..631e2fc1e055 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -366,6 +366,23 @@ struct smu_bios_boot_up_values
 	uint32_t			pp_table_id;
 };
 
+enum smu_table_id
+{
+	SMU_TABLE_PPTABLE = 0,
+	SMU_TABLE_WATERMARKS,
+	SMU_TABLE_AVFS,
+	SMU_TABLE_AVFS_PSM_DEBUG,
+	SMU_TABLE_AVFS_FUSE_OVERRIDE,
+	SMU_TABLE_PMSTATUSLOG,
+	SMU_TABLE_SMU_METRICS,
+	SMU_TABLE_DRIVER_SMU_CONFIG,
+	SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+	SMU_TABLE_OVERDRIVE,
+	SMU_TABLE_I2C_COMMANDS,
+	SMU_TABLE_PACE,
+	SMU_TABLE_COUNT,
+};
+
 struct smu_table_context
 {
 	void				*power_play_table;
@@ -495,6 +512,7 @@ struct pptable_funcs {
 	int (*get_smu_msg_index)(struct smu_context *smu, uint32_t index);
 	int (*get_smu_clk_index)(struct smu_context *smu, uint32_t index);
 	int (*get_smu_feature_index)(struct smu_context *smu, uint32_t index);
+	int (*get_smu_table_index)(struct smu_context *smu, uint32_t index);
 	int (*run_afll_btc)(struct smu_context *smu);
 	int (*get_allowed_feature_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
 	enum amd_pm_state_type (*get_current_power_state)(struct smu_context *smu);
@@ -783,6 +801,8 @@ struct smu_funcs
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_clk_index? (smu)->ppt_funcs->get_smu_clk_index((smu), (msg)) : -EINVAL) : -EINVAL)
 #define smu_feature_get_index(smu, msg) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_feature_index? (smu)->ppt_funcs->get_smu_feature_index((smu), (msg)) : -EINVAL) : -EINVAL)
+#define smu_table_get_index(smu, tab) \
+	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_table_index? (smu)->ppt_funcs->get_smu_table_index((smu), (tab)) : -EINVAL) : -EINVAL)
 #define smu_run_afll_btc(smu) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->run_afll_btc? (smu)->ppt_funcs->run_afll_btc((smu)) : 0) : 0)
 #define smu_get_allowed_feature_mask(smu, feature_mask, num) \
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index 9284c1edfe42..dcc1ede97c04 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -46,6 +46,9 @@
 #define FEA_MAP(fea) \
 	[SMU_FEATURE_##fea##_BIT] = FEATURE_##fea##_BIT
 
+#define TAB_MAP(tab) \
+	[SMU_TABLE_##tab] = TABLE_##tab
+
 struct smu_11_0_max_sustainable_clocks {
 	uint32_t display_clock;
 	uint32_t phy_clock;
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index d1c2d4e67879..7c78251ed944 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -156,6 +156,21 @@ static int navi10_feature_mask_map[SMU_FEATURE_COUNT] = {
 	FEA_MAP(ATHUB_PG),
 };
 
+static int navi10_table_map[SMU_TABLE_COUNT] = {
+	TAB_MAP(PPTABLE),
+	TAB_MAP(WATERMARKS),
+	TAB_MAP(AVFS),
+	TAB_MAP(AVFS_PSM_DEBUG),
+	TAB_MAP(AVFS_FUSE_OVERRIDE),
+	TAB_MAP(PMSTATUSLOG),
+	TAB_MAP(SMU_METRICS),
+	TAB_MAP(DRIVER_SMU_CONFIG),
+	TAB_MAP(ACTIVITY_MONITOR_COEFF),
+	TAB_MAP(OVERDRIVE),
+	TAB_MAP(I2C_COMMANDS),
+	TAB_MAP(PACE),
+};
+
 static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -195,6 +210,19 @@ static int navi10_get_smu_feature_index(struct smu_context *smc, uint32_t index)
 	return val;
 }
 
+static int navi10_get_smu_table_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_TABLE_COUNT)
+		return -EINVAL;
+
+	val = navi10_table_map[index];
+	if (val >= TABLE_COUNT)
+		return -EINVAL;
+
+	return val;
+}
+
 #define FEATURE_MASK(feature) (1UL << feature)
 static int
 navi10_get_allowed_feature_mask(struct smu_context *smu,
@@ -412,6 +440,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 	.get_smu_msg_index = navi10_get_smu_msg_index,
 	.get_smu_clk_index = navi10_get_smu_clk_index,
 	.get_smu_feature_index = navi10_get_smu_feature_index,
+	.get_smu_table_index = navi10_get_smu_table_index,
 	.get_allowed_feature_mask = navi10_get_allowed_feature_mask,
 	.set_default_dpm_table = navi10_set_default_dpm_table,
 };
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 718fd4dec531..7cafbc942b2a 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -189,6 +189,32 @@ static int vega20_feature_mask_map[SMU_FEATURE_COUNT] = {
 	FEA_MAP(XGMI),
 };
 
+static int vega20_table_map[SMU_TABLE_COUNT] = {
+	TAB_MAP(PPTABLE),
+	TAB_MAP(WATERMARKS),
+	TAB_MAP(AVFS),
+	TAB_MAP(AVFS_PSM_DEBUG),
+	TAB_MAP(AVFS_FUSE_OVERRIDE),
+	TAB_MAP(PMSTATUSLOG),
+	TAB_MAP(SMU_METRICS),
+	TAB_MAP(DRIVER_SMU_CONFIG),
+	TAB_MAP(ACTIVITY_MONITOR_COEFF),
+	TAB_MAP(OVERDRIVE),
+};
+
+static int vega20_get_smu_table_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_TABLE_COUNT)
+		return -EINVAL;
+
+	val = vega20_table_map[index];
+	if (val >= TABLE_COUNT)
+		return -EINVAL;
+
+	return val;
+}
+
 static int vega20_get_smu_feature_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -2925,6 +2951,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.get_smu_msg_index = vega20_get_smu_msg_index,
 	.get_smu_clk_index = vega20_get_smu_clk_index,
 	.get_smu_feature_index = vega20_get_smu_feature_index,
+	.get_smu_table_index = vega20_get_smu_table_index,
 	.run_afll_btc = vega20_run_btc_afll,
 	.get_allowed_feature_mask = vega20_get_allowed_feature_mask,
 	.get_current_power_state = vega20_get_current_power_state,
-- 
2.20.1

* [PATCH 175/459] drm/amd/powerplay: init table_count for smu tables on asic level
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

TABLE_COUNT should be initialized at the asic level, because the value may
differ between asics even with the same ip.
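
As a condensed sketch of the resulting ordering (both halves appear in
the diff below): the asic hook publishes the count, and the common init
path only validates it:

	/* asic level, e.g. navi10_set_ppt_funcs() */
	smu_table->table_count = TABLE_COUNT;

	/* common level, smu_v11_0_init_smc_tables() */
	if (smu_table->tables || smu_table->table_count == 0)
		return -EINVAL; /* the asic must have set the count first */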

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 3 +++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c  | 6 +++---
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c | 3 +++
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 7c78251ed944..5ab35fff88ba 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -447,6 +447,9 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 
 void navi10_set_ppt_funcs(struct smu_context *smu)
 {
+	struct smu_table_context *smu_table = &smu->smu_table;
+
 	smu->ppt_funcs = &navi10_ppt_funcs;
 	smu->smc_if_version = SMU11_DRIVER_IF_VERSION;
+	smu_table->table_count = TABLE_COUNT;
 }
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 8c60cdcba4ef..d05e263859a0 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -400,15 +400,15 @@ static int smu_v11_0_init_smc_tables(struct smu_context *smu)
 	struct smu_table *tables = NULL;
 	int ret = 0;
 
-	if (smu_table->tables || smu_table->table_count != 0)
+	if (smu_table->tables || smu_table->table_count == 0)
 		return -EINVAL;
 
-	tables = kcalloc(TABLE_COUNT, sizeof(struct smu_table), GFP_KERNEL);
+	tables = kcalloc(SMU_TABLE_COUNT, sizeof(struct smu_table),
+			 GFP_KERNEL);
 	if (!tables)
 		return -ENOMEM;
 
 	smu_table->tables = tables;
-	smu_table->table_count = TABLE_COUNT;
 
 	SMU_TABLE_INIT(tables, TABLE_PPTABLE, sizeof(PPTable_t),
 		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 7cafbc942b2a..17a954bd5aa4 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -2989,6 +2989,9 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 
 void vega20_set_ppt_funcs(struct smu_context *smu)
 {
+	struct smu_table_context *smu_table = &smu->smu_table;
+
 	smu->ppt_funcs = &vega20_ppt_funcs;
 	smu->smc_if_version = SMU11_DRIVER_IF_VERSION;
+	smu_table->table_count = TABLE_COUNT;
 }
-- 
2.20.1

* [PATCH 176/459] drm/amd/powerplay: add tables_init interface for each asic
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

The smc table definitions should live at the asic level.
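
Condensed sketch of the new split (both pieces are in the diff below):
the common code still allocates the table array, while the asic hook
fills in that asic's own layout:

	tables = kcalloc(SMU_TABLE_COUNT, sizeof(struct smu_table),
			 GFP_KERNEL);
	if (!tables)
		return -ENOMEM;
	smu_table->tables = tables;
	smu_tables_init(smu, tables); /* -> navi10/vega20_tables_init() */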

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h |  3 +++
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h  |  2 ++
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c     | 18 ++++++++++++++++++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c      | 16 +---------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c     | 18 ++++++++++++++++++
 5 files changed, 42 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 631e2fc1e055..57ab23d9ddfd 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -568,6 +568,7 @@ struct pptable_funcs {
 	int (*set_ppfeature_status)(struct smu_context *smu, uint64_t ppfeatures);
 	int (*get_ppfeature_status)(struct smu_context *smu, char *buf);
 	bool (*is_dpm_running)(struct smu_context *smu);
+	void (*tables_init)(struct smu_context *smu, struct smu_table *tables);
 };
 
 struct smu_funcs
@@ -754,6 +755,8 @@ struct smu_funcs
 	((smu)->ppt_funcs->set_od_percentage ? (smu)->ppt_funcs->set_od_percentage((smu), (type), (value)) : 0)
 #define smu_od_edit_dpm_table(smu, type, input, size) \
 	((smu)->ppt_funcs->od_edit_dpm_table ? (smu)->ppt_funcs->od_edit_dpm_table((smu), (type), (input), (size)) : 0)
+#define smu_tables_init(smu, tab) \
+	((smu)->ppt_funcs->tables_init ? (smu)->ppt_funcs->tables_init((smu), (tab)) : 0)
 #define smu_start_thermal_control(smu) \
 	((smu)->funcs->start_thermal_control? (smu)->funcs->start_thermal_control((smu)) : 0)
 #define smu_read_sensor(smu, sensor, data, size) \
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index dcc1ede97c04..a708c5d5b82e 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -40,6 +40,8 @@
 #define TEMP_RANGE_MIN			(0)
 #define TEMP_RANGE_MAX			(80 * 1000)
 
+#define SMU11_TOOL_SIZE			0x19000
+
 #define CLK_MAP(clk, index) \
 	[SMU_##clk] = index
 
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 5ab35fff88ba..2d0f764d4f19 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -376,6 +376,23 @@ static int navi10_store_powerplay_table(struct smu_context *smu)
 	return 0;
 }
 
+static void navi10_tables_init(struct smu_context *smu, struct smu_table *tables)
+{
+	SMU_TABLE_INIT(tables, SMU_TABLE_PPTABLE, sizeof(PPTable_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_WATERMARKS, sizeof(Watermarks_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_SMU_METRICS, sizeof(SmuMetrics_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_OVERDRIVE, sizeof(OverDriveTable_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_PMSTATUSLOG, SMU11_TOOL_SIZE,
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+		       sizeof(DpmActivityMonitorCoeffInt_t), PAGE_SIZE,
+	               AMDGPU_GEM_DOMAIN_VRAM);
+}
+
 static int navi10_allocate_dpm_context(struct smu_context *smu)
 {
 	struct smu_dpm_context *smu_dpm = &smu->smu_dpm;
@@ -433,6 +450,7 @@ static int navi10_set_default_dpm_table(struct smu_context *smu)
 }
 
 static const struct pptable_funcs navi10_ppt_funcs = {
+	.tables_init = navi10_tables_init,
 	.alloc_dpm_context = navi10_allocate_dpm_context,
 	.store_powerplay_table = navi10_store_powerplay_table,
 	.check_powerplay_table = navi10_check_powerplay_table,
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index d05e263859a0..bfee0b413ca1 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -45,7 +45,6 @@
 MODULE_FIRMWARE("amdgpu/vega20_smc.bin");
 MODULE_FIRMWARE("amdgpu/navi10_smc.bin");
 
-#define SMU11_TOOL_SIZE		0x19000
 #define SMU11_THERMAL_MINIMUM_ALERT_TEMP      0
 #define SMU11_THERMAL_MAXIMUM_ALERT_TEMP      255
 
@@ -410,20 +409,7 @@ static int smu_v11_0_init_smc_tables(struct smu_context *smu)
 
 	smu_table->tables = tables;
 
-	SMU_TABLE_INIT(tables, TABLE_PPTABLE, sizeof(PPTable_t),
-		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
-	SMU_TABLE_INIT(tables, TABLE_WATERMARKS, sizeof(Watermarks_t),
-		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
-	SMU_TABLE_INIT(tables, TABLE_SMU_METRICS, sizeof(SmuMetrics_t),
-		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
-	SMU_TABLE_INIT(tables, TABLE_OVERDRIVE, sizeof(OverDriveTable_t),
-		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
-	SMU_TABLE_INIT(tables, TABLE_PMSTATUSLOG, SMU11_TOOL_SIZE, PAGE_SIZE,
-		       AMDGPU_GEM_DOMAIN_VRAM);
-	SMU_TABLE_INIT(tables, TABLE_ACTIVITY_MONITOR_COEFF,
-		       sizeof(DpmActivityMonitorCoeffInt_t),
-		       PAGE_SIZE,
-		       AMDGPU_GEM_DOMAIN_VRAM);
+	smu_tables_init(smu, tables);
 
 	ret = smu_v11_0_init_dpm_context(smu);
 	if (ret)
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 17a954bd5aa4..d71b682002bd 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -255,6 +255,23 @@ static int vega20_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 	return val;
 }
 
+static void vega20_tables_init(struct smu_context *smu, struct smu_table *tables)
+{
+	SMU_TABLE_INIT(tables, SMU_TABLE_PPTABLE, sizeof(PPTable_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_WATERMARKS, sizeof(Watermarks_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_SMU_METRICS, sizeof(SmuMetrics_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_OVERDRIVE, sizeof(OverDriveTable_t),
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_PMSTATUSLOG, SMU11_TOOL_SIZE,
+		       PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
+	SMU_TABLE_INIT(tables, SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+		       sizeof(DpmActivityMonitorCoeffInt_t), PAGE_SIZE,
+	               AMDGPU_GEM_DOMAIN_VRAM);
+}
+
 static int vega20_allocate_dpm_context(struct smu_context *smu)
 {
 	struct smu_dpm_context *smu_dpm = &smu->smu_dpm;
@@ -2944,6 +2961,7 @@ static bool vega20_is_dpm_running(struct smu_context *smu)
 }
 
 static const struct pptable_funcs vega20_ppt_funcs = {
+	.tables_init = vega20_tables_init,
 	.alloc_dpm_context = vega20_allocate_dpm_context,
 	.store_powerplay_table = vega20_store_powerplay_table,
 	.check_powerplay_table = vega20_check_powerplay_table,
-- 
2.20.1

* [PATCH 177/459] drm/amd/powerplay/smu11: remove smu_update_table_with_arg
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Nothing was calling it directly.  Just replace it with smu_update_table,
which is what everything was using via a wrapper anyway.
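
For reference, the wrapper being folded away (quoted from the removed
lines below) simply passed a zero extra argument, so no call site needs
to change:

	#define smu_update_table(smu, table_id, table_data, drv2smu) \
		smu_update_table_with_arg((smu), (table_id), 0, \
					  (table_data), (drv2smu))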

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c     | 6 ++----
 drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h | 4 +---
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 99e10313afa2..8bb78fdc782a 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -127,19 +127,17 @@ int smu_common_read_sensor(struct smu_context *smu, enum amd_pp_sensors sensor,
 	return ret;
 }
 
-int smu_update_table_with_arg(struct smu_context *smu, uint16_t table_id, uint16_t exarg,
+int smu_update_table(struct smu_context *smu, uint32_t table_index,
 		     void *table_data, bool drv2smu)
 {
 	struct smu_table_context *smu_table = &smu->smu_table;
 	struct smu_table *table = NULL;
 	int ret = 0;
-	uint32_t table_index;
+	int table_id = table_index & 0xffff;
 
 	if (!table_data || table_id >= smu_table->table_count)
 		return -EINVAL;
 
-	table_index = (exarg << 16) | table_id;
-
 	table = &smu_table->tables[table_id];
 
 	if (drv2smu)
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 57ab23d9ddfd..9be3e759e332 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -871,10 +871,8 @@ extern int smu_feature_is_supported(struct smu_context *smu,
 extern int smu_feature_set_supported(struct smu_context *smu,
 				     enum smu_feature_mask mask, bool enable);
 
-int smu_update_table_with_arg(struct smu_context *smu, uint16_t table_id, uint16_t exarg,
+int smu_update_table(struct smu_context *smu, uint32_t table_index,
 		     void *table_data, bool drv2smu);
-#define smu_update_table(smu, table_id, table_data, drv2smu) \
-	smu_update_table_with_arg((smu), (table_id), 0, (table_data), (drv2smu))
 
 bool is_support_sw_smu(struct amdgpu_device *adev);
 int smu_reset(struct smu_context *smu);
-- 
2.20.1

* [PATCH 178/459] drm/amd/powerplay: modify smu_update_table to use SMU_TABLE_xxx as the input
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

The table id may differ for each asic, so it is better to use the generic
SMU_TABLE_xxx id as the input to the common interface.
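
Illustrative call sketch (names from the diffs in this series): callers
pass the generic id, and the translation now happens inside
smu_update_table:

	ret = smu_update_table(smu, SMU_TABLE_OVERDRIVE,
			       table_context->overdrive_table, false);
	/* internally: table_id = smu_table_get_index(smu, table_index),
	 * and the TransferTable message carries the asic-specific id
	 */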

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c |  8 ++++----
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c  | 24 +++++++++++++---------
 2 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 8bb78fdc782a..858ce5db687f 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -127,18 +127,18 @@ int smu_common_read_sensor(struct smu_context *smu, enum amd_pp_sensors sensor,
 	return ret;
 }
 
-int smu_update_table(struct smu_context *smu, uint32_t table_index,
+int smu_update_table(struct smu_context *smu, enum smu_table_id table_index,
 		     void *table_data, bool drv2smu)
 {
 	struct smu_table_context *smu_table = &smu->smu_table;
 	struct smu_table *table = NULL;
 	int ret = 0;
-	int table_id = table_index & 0xffff;
+	int table_id = smu_table_get_index(smu, table_index);
 
 	if (!table_data || table_id >= smu_table->table_count)
 		return -EINVAL;
 
-	table = &smu_table->tables[table_id];
+	table = &smu_table->tables[table_index];
 
 	if (drv2smu)
 		memcpy(table->cpu_addr, table_data, table->size);
@@ -154,7 +154,7 @@ int smu_update_table(struct smu_context *smu, uint32_t table_index,
 	ret = smu_send_smc_msg_with_param(smu, drv2smu ?
 					  SMU_MSG_TransferTableDram2Smu :
 					  SMU_MSG_TransferTableSmu2Dram,
-					  table_index);
+					  table_id);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index bfee0b413ca1..3e114316f385 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -707,15 +707,17 @@ static int smu_v11_0_write_pptable(struct smu_context *smu)
 	struct smu_table_context *table_context = &smu->smu_table;
 	int ret = 0;
 
-	ret = smu_update_table(smu, TABLE_PPTABLE, table_context->driver_pptable, true);
+	ret = smu_update_table(smu, SMU_TABLE_PPTABLE,
+			       table_context->driver_pptable, true);
 
 	return ret;
 }
 
 static int smu_v11_0_write_watermarks_table(struct smu_context *smu)
 {
-	return smu_update_table(smu, TABLE_WATERMARKS,
-				smu->smu_table.tables[TABLE_WATERMARKS].cpu_addr, true);
+	return smu_update_table(smu, SMU_TABLE_WATERMARKS,
+				smu->smu_table.tables[SMU_TABLE_WATERMARKS].cpu_addr,
+				true);
 }
 
 static int smu_v11_0_set_deep_sleep_dcefclk(struct smu_context *smu, uint32_t clk)
@@ -746,7 +748,7 @@ static int smu_v11_0_set_min_dcef_deep_sleep(struct smu_context *smu)
 static int smu_v11_0_set_tool_table_location(struct smu_context *smu)
 {
 	int ret = 0;
-	struct smu_table *tool_table = &smu->smu_table.tables[TABLE_PMSTATUSLOG];
+	struct smu_table *tool_table = &smu->smu_table.tables[SMU_TABLE_PMSTATUSLOG];
 
 	if (tool_table->mc_address) {
 		ret = smu_send_smc_msg_with_param(smu,
@@ -1226,7 +1228,7 @@ static int smu_v11_0_get_metrics_table(struct smu_context *smu,
 	int ret = 0;
 
 	if (!smu->metrics_time || time_after(jiffies, smu->metrics_time + HZ / 1000)) {
-		ret = smu_update_table(smu, TABLE_SMU_METRICS,
+		ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS,
 				(void *)metrics_table, false);
 		if (ret) {
 			pr_info("Failed to export SMU metrics table!\n");
@@ -1521,7 +1523,7 @@ smu_v11_0_set_watermarks_for_clock_ranges(struct smu_context *smu, struct
 					  *clock_ranges)
 {
 	int ret = 0;
-	struct smu_table *watermarks = &smu->smu_table.tables[TABLE_WATERMARKS];
+	struct smu_table *watermarks = &smu->smu_table.tables[SMU_TABLE_WATERMARKS];
 	Watermarks_t *table = watermarks->cpu_addr;
 
 	if (!smu->disable_watermark &&
@@ -1665,7 +1667,8 @@ static int smu_v11_0_set_od8_default_settings(struct smu_context *smu,
 		if (!table_context->overdrive_table)
 			return -ENOMEM;
 
-		ret = smu_update_table(smu, TABLE_OVERDRIVE, table_context->overdrive_table, false);
+		ret = smu_update_table(smu, SMU_TABLE_OVERDRIVE,
+				       table_context->overdrive_table, false);
 		if (ret) {
 			pr_err("Failed to export over drive table!\n");
 			return ret;
@@ -1674,7 +1677,8 @@ static int smu_v11_0_set_od8_default_settings(struct smu_context *smu,
 		smu_set_default_od8_settings(smu);
 	}
 
-	ret = smu_update_table(smu, TABLE_OVERDRIVE, table_context->overdrive_table, true);
+	ret = smu_update_table(smu, SMU_TABLE_OVERDRIVE,
+			       table_context->overdrive_table, true);
 	if (ret) {
 		pr_err("Failed to import over drive table!\n");
 		return ret;
@@ -1690,7 +1694,7 @@ static int smu_v11_0_update_od8_settings(struct smu_context *smu,
 	struct smu_table_context *table_context = &smu->smu_table;
 	int ret;
 
-	ret = smu_update_table(smu, TABLE_OVERDRIVE,
+	ret = smu_update_table(smu, SMU_TABLE_OVERDRIVE,
 			       table_context->overdrive_table, false);
 	if (ret) {
 		pr_err("Failed to export over drive table!\n");
@@ -1699,7 +1703,7 @@ static int smu_v11_0_update_od8_settings(struct smu_context *smu,
 
 	smu_update_specified_od8_value(smu, index, value);
 
-	ret = smu_update_table(smu, TABLE_OVERDRIVE,
+	ret = smu_update_table(smu, SMU_TABLE_OVERDRIVE,
 			       table_context->overdrive_table, true);
 	if (ret) {
 		pr_err("Failed to import over drive table!\n");
-- 
2.20.1

* [PATCH 179/459] drm/amd/powerplay: use the table size member in the structure instead of getting directly
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch uses the table size member in the structure instead of taking
the size of the type directly, because the table differs on each asic.
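
Before/after sketch (condensed from the diff below); the allocation size
now comes from the per-asic table description rather than a fixed type:

	/* before: hardwired to one asic's layout */
	table_context->driver_pptable = kzalloc(sizeof(PPTable_t), GFP_KERNEL);

	/* after: sized by whatever the asic registered in tables_init */
	struct smu_table *table = &table_context->tables[SMU_TABLE_PPTABLE];
	table_context->driver_pptable = kzalloc(table->size, GFP_KERNEL);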

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 3e114316f385..e193f63879ac 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -675,11 +675,12 @@ static int smu_v11_0_parse_pptable(struct smu_context *smu)
 	int ret;
 
 	struct smu_table_context *table_context = &smu->smu_table;
+	struct smu_table *table = &table_context->tables[SMU_TABLE_PPTABLE];
 
 	if (table_context->driver_pptable)
 		return -EINVAL;
 
-	table_context->driver_pptable = kzalloc(sizeof(PPTable_t), GFP_KERNEL);
+	table_context->driver_pptable = kzalloc(table->size, GFP_KERNEL);
 
 	if (!table_context->driver_pptable)
 		return -ENOMEM;
@@ -1649,6 +1650,7 @@ static int smu_v11_0_set_od8_default_settings(struct smu_context *smu,
 					      bool initialize)
 {
 	struct smu_table_context *table_context = &smu->smu_table;
+	struct smu_table *table = &table_context->tables[SMU_TABLE_OVERDRIVE];
 	int ret;
 
 	/**
@@ -1662,7 +1664,7 @@ static int smu_v11_0_set_od8_default_settings(struct smu_context *smu,
 		if (table_context->overdrive_table)
 			return -EINVAL;
 
-		table_context->overdrive_table = kzalloc(sizeof(OverDriveTable_t), GFP_KERNEL);
+		table_context->overdrive_table = kzalloc(table->size, GFP_KERNEL);
 
 		if (!table_context->overdrive_table)
 			return -ENOMEM;
-- 
2.20.1

* [PATCH 180/459] drm/amd/powerplay: move PPTable_t uses into asic level
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch moves the remaining PPTable_t uses to the asic level, to avoid
conflicts between the definitions of different asics.
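
As a worked example of the fan-speed math being moved (the rpm values
here are made up for illustration): with pptable->FanMaximumRpm = 3300
and a current reading of 1650 rpm,

	percent = current_rpm * 100 / pptable->FanMaximumRpm;
	/* 1650 * 100 / 3300 = 50 */
	*speed = percent > 100 ? 100 : percent; /* clamp readings above max */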

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  7 +++--
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 31 +------------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 29 +++++++++++++++++
 3 files changed, 35 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 9be3e759e332..856846b6fd27 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -569,6 +569,8 @@ struct pptable_funcs {
 	int (*get_ppfeature_status)(struct smu_context *smu, char *buf);
 	bool (*is_dpm_running)(struct smu_context *smu);
 	void (*tables_init)(struct smu_context *smu, struct smu_table *tables);
+	int (*set_thermal_fan_table)(struct smu_context *smu);
+	int (*get_fan_speed_percent)(struct smu_context *smu, uint32_t *speed);
 };
 
 struct smu_funcs
@@ -643,7 +645,6 @@ struct smu_funcs
 	int (*get_current_rpm)(struct smu_context *smu, uint32_t *speed);
 	uint32_t (*get_fan_control_mode)(struct smu_context *smu);
 	int (*set_fan_control_mode)(struct smu_context *smu, uint32_t mode);
-	int (*get_fan_speed_percent)(struct smu_context *smu, uint32_t *speed);
 	int (*set_fan_speed_percent)(struct smu_context *smu, uint32_t speed);
 	int (*set_fan_speed_rpm)(struct smu_context *smu, uint32_t speed);
 	int (*set_xgmi_pstate)(struct smu_context *smu, uint32_t pstate);
@@ -757,6 +758,8 @@ struct smu_funcs
 	((smu)->ppt_funcs->od_edit_dpm_table ? (smu)->ppt_funcs->od_edit_dpm_table((smu), (type), (input), (size)) : 0)
 #define smu_tables_init(smu, tab) \
 	((smu)->ppt_funcs->tables_init ? (smu)->ppt_funcs->tables_init((smu), (tab)) : 0)
+#define smu_set_thermal_fan_table(smu) \
+	((smu)->ppt_funcs->set_thermal_fan_table ? (smu)->ppt_funcs->set_thermal_fan_table((smu)) : 0)
 #define smu_start_thermal_control(smu) \
 	((smu)->funcs->start_thermal_control? (smu)->funcs->start_thermal_control((smu)) : 0)
 #define smu_read_sensor(smu, sensor, data, size) \
@@ -794,7 +797,7 @@ struct smu_funcs
 #define smu_set_fan_control_mode(smu, value) \
 	((smu)->funcs->set_fan_control_mode ? (smu)->funcs->set_fan_control_mode((smu), (value)) : 0)
 #define smu_get_fan_speed_percent(smu, speed) \
-	((smu)->funcs->get_fan_speed_percent ? (smu)->funcs->get_fan_speed_percent((smu), (speed)) : 0)
+	((smu)->ppt_funcs->get_fan_speed_percent ? (smu)->ppt_funcs->get_fan_speed_percent((smu), (speed)) : 0)
 #define smu_set_fan_speed_percent(smu, speed) \
 	((smu)->funcs->set_fan_speed_percent ? (smu)->funcs->set_fan_speed_percent((smu), (speed)) : 0)
 
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index e193f63879ac..1d5fdf9e4a86 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1166,18 +1166,6 @@ static int smu_v11_0_enable_thermal_alert(struct smu_context *smu)
 	return 0;
 }
 
-static int smu_v11_0_set_thermal_fan_table(struct smu_context *smu)
-{
-	int ret;
-	struct smu_table_context *table_context = &smu->smu_table;
-	PPTable_t *pptable = table_context->driver_pptable;
-
-	ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetFanTemperatureTarget,
-			(uint32_t)pptable->FanTargetTemperature);
-
-	return ret;
-}
-
 static int smu_v11_0_start_thermal_control(struct smu_context *smu)
 {
 	int ret = 0;
@@ -1205,7 +1193,7 @@ static int smu_v11_0_start_thermal_control(struct smu_context *smu)
 		ret = smu_v11_0_enable_thermal_alert(smu);
 		if (ret)
 			return ret;
-		ret = smu_v11_0_set_thermal_fan_table(smu);
+		ret = smu_set_thermal_fan_table(smu);
 		if (ret)
 			return ret;
 	}
@@ -1741,22 +1729,6 @@ smu_v11_0_get_fan_control_mode(struct smu_context *smu)
 		return AMD_FAN_CTRL_AUTO;
 }
 
-static int
-smu_v11_0_get_fan_speed_percent(struct smu_context *smu,
-					   uint32_t *speed)
-{
-	int ret = 0;
-	uint32_t percent = 0;
-	uint32_t current_rpm;
-	PPTable_t *pptable = smu->smu_table.driver_pptable;
-
-	ret = smu_v11_0_get_current_rpm(smu, &current_rpm);
-	percent = current_rpm * 100 / pptable->FanMaximumRpm;
-	*speed = percent > 100 ? 100 : percent;
-
-	return ret;
-}
-
 static int
 smu_v11_0_smc_fan_control(struct smu_context *smu, bool start)
 {
@@ -1935,7 +1907,6 @@ static const struct smu_funcs smu_v11_0_funcs = {
 	.get_current_rpm = smu_v11_0_get_current_rpm,
 	.get_fan_control_mode = smu_v11_0_get_fan_control_mode,
 	.set_fan_control_mode = smu_v11_0_set_fan_control_mode,
-	.get_fan_speed_percent = smu_v11_0_get_fan_speed_percent,
 	.set_fan_speed_percent = smu_v11_0_set_fan_speed_percent,
 	.set_fan_speed_rpm = smu_v11_0_set_fan_speed_rpm,
 	.set_xgmi_pstate = smu_v11_0_set_xgmi_pstate,
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index d71b682002bd..2367bcc45468 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -2960,6 +2960,33 @@ static bool vega20_is_dpm_running(struct smu_context *smu)
 	return !!(feature_enabled & SMC_DPM_FEATURE);
 }
 
+static int vega20_set_thermal_fan_table(struct smu_context *smu)
+{
+	int ret;
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *pptable = table_context->driver_pptable;
+
+	ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetFanTemperatureTarget,
+			(uint32_t)pptable->FanTargetTemperature);
+
+	return ret;
+}
+
+static int vega20_get_fan_speed_percent(struct smu_context *smu,
+					uint32_t *speed)
+{
+	int ret = 0;
+	uint32_t percent = 0;
+	uint32_t current_rpm;
+	PPTable_t *pptable = smu->smu_table.driver_pptable;
+
+	ret = smu_get_current_rpm(smu, &current_rpm);
+	percent = current_rpm * 100 / pptable->FanMaximumRpm;
+	*speed = percent > 100 ? 100 : percent;
+
+	return ret;
+}
+
 static const struct pptable_funcs vega20_ppt_funcs = {
 	.tables_init = vega20_tables_init,
 	.alloc_dpm_context = vega20_allocate_dpm_context,
@@ -3003,6 +3030,8 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.set_ppfeature_status = vega20_set_ppfeature_status,
 	.get_ppfeature_status = vega20_get_ppfeature_status,
 	.is_dpm_running = vega20_is_dpm_running,
+	.set_thermal_fan_table = vega20_set_thermal_fan_table,
+	.get_fan_speed_percent = vega20_get_fan_speed_percent,
 };
 
 void vega20_set_ppt_funcs(struct smu_context *smu)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 181/459] drm/amd/powerplay: move SmuMetrics_t uses into asic level
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (79 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 180/459] drm/amd/powerplay: move PPTable_t uses into asic level Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 182/459] drm/amd/powerplay: move Watermarks_t " Alex Deucher
                     ` (11 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch moves the remaining SmuMetrics_t uses into the asic level. This avoids
conflicts between the different asics.
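
For reference, a minimal standalone sketch of the dispatch pattern these
wrapper macros rely on (names simplified and hypothetical; the real driver
uses struct smu_context and struct pptable_funcs from amdgpu_smu.h):

/*
 * Illustrative only: the function-table dispatch behind macros like
 * smu_get_gpu_power(). Hooks may be NULL when an asic does not
 * implement them, in which case the macro falls back to 0.
 */
#include <stdio.h>
#include <stdint.h>

struct ctx;

struct asic_funcs {
	int (*get_gpu_power)(struct ctx *c, uint32_t *value);
};

struct ctx {
	const struct asic_funcs *ppt_funcs;
};

/* Common layer: call through the table, tolerating a NULL hook. */
#define ctx_get_gpu_power(c, val) \
	((c)->ppt_funcs->get_gpu_power ? (c)->ppt_funcs->get_gpu_power((c), (val)) : 0)

/* Per-asic implementation, registered in a const function table. */
static int my_asic_get_gpu_power(struct ctx *c, uint32_t *value)
{
	(void)c;
	*value = 42 << 8;	/* stand-in for a metrics-table read */
	return 0;
}

static const struct asic_funcs my_asic_funcs = {
	.get_gpu_power = my_asic_get_gpu_power,
};

int main(void)
{
	struct ctx c = { .ppt_funcs = &my_asic_funcs };
	uint32_t power = 0;

	if (!ctx_get_gpu_power(&c, &power))
		printf("raw power: %u\n", power);
	return 0;
}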

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  8 +++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 54 ++-----------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 50 +++++++++++++++++
 3 files changed, 62 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 856846b6fd27..7fa03eca4a08 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -571,6 +571,10 @@ struct pptable_funcs {
 	void (*tables_init)(struct smu_context *smu, struct smu_table *tables);
 	int (*set_thermal_fan_table)(struct smu_context *smu);
 	int (*get_fan_speed_percent)(struct smu_context *smu, uint32_t *speed);
+	int (*get_gpu_power)(struct smu_context *smu, uint32_t *value);
+	int (*get_current_activity_percent)(struct smu_context *smu,
+					    enum amd_pp_sensors sensor,
+					    uint32_t *value);
 };
 
 struct smu_funcs
@@ -798,6 +802,10 @@ struct smu_funcs
 	((smu)->funcs->set_fan_control_mode ? (smu)->funcs->set_fan_control_mode((smu), (value)) : 0)
 #define smu_get_fan_speed_percent(smu, speed) \
 	((smu)->ppt_funcs->get_fan_speed_percent ? (smu)->ppt_funcs->get_fan_speed_percent((smu), (speed)) : 0)
+#define smu_get_gpu_power(smu, val) \
+	((smu)->ppt_funcs->get_gpu_power ? (smu)->ppt_funcs->get_gpu_power((smu), (val)) : 0)
+#define smu_get_current_activity_percent(smu, sensor, val) \
+	((smu)->ppt_funcs->get_current_activity_percent ? (smu)->ppt_funcs->get_current_activity_percent((smu), (sensor), (val)) : 0)
 #define smu_set_fan_speed_percent(smu, speed) \
 	((smu)->funcs->set_fan_speed_percent ? (smu)->funcs->set_fan_speed_percent((smu), (speed)) : 0)
 
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 1d5fdf9e4a86..103e8bc3a7b9 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1231,35 +1231,6 @@ static int smu_v11_0_get_metrics_table(struct smu_context *smu,
 	return ret;
 }
 
-static int smu_v11_0_get_current_activity_percent(struct smu_context *smu,
-						  enum amd_pp_sensors sensor,
-						  uint32_t *value)
-{
-	int ret = 0;
-	SmuMetrics_t metrics;
-
-	if (!value)
-		return -EINVAL;
-
-	ret = smu_v11_0_get_metrics_table(smu, &metrics);
-	if (ret)
-		return ret;
-
-	switch (sensor) {
-	case AMDGPU_PP_SENSOR_GPU_LOAD:
-		*value = metrics.AverageGfxActivity;
-		break;
-	case AMDGPU_PP_SENSOR_MEM_LOAD:
-		*value = metrics.AverageUclkActivity;
-		break;
-	default:
-		pr_err("Invalid sensor for retrieving clock activity\n");
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
 static int smu_v11_0_thermal_get_temperature(struct smu_context *smu,
 					     enum amd_pp_sensors sensor,
 					     uint32_t *value)
@@ -1303,23 +1274,6 @@ static int smu_v11_0_thermal_get_temperature(struct smu_context *smu,
 	return 0;
 }
 
-static int smu_v11_0_get_gpu_power(struct smu_context *smu, uint32_t *value)
-{
-	int ret = 0;
-	SmuMetrics_t metrics;
-
-	if (!value)
-		return -EINVAL;
-
-	ret = smu_v11_0_get_metrics_table(smu, &metrics);
-	if (ret)
-		return ret;
-
-	*value = metrics.CurrSocketPower << 8;
-
-	return 0;
-}
-
 static uint16_t convert_to_vddc(uint8_t vid)
 {
 	return (uint16_t) ((6200 - (vid * 25)) / SMU11_VOLTAGE_SCALE);
@@ -1354,9 +1308,9 @@ static int smu_v11_0_read_sensor(struct smu_context *smu,
 	switch (sensor) {
 	case AMDGPU_PP_SENSOR_GPU_LOAD:
 	case AMDGPU_PP_SENSOR_MEM_LOAD:
-		ret = smu_v11_0_get_current_activity_percent(smu,
-							     sensor,
-							     (uint32_t *)data);
+		ret = smu_get_current_activity_percent(smu,
+						       sensor,
+						       (uint32_t *)data);
 		*size = 4;
 		break;
 	case AMDGPU_PP_SENSOR_GFX_MCLK:
@@ -1374,7 +1328,7 @@ static int smu_v11_0_read_sensor(struct smu_context *smu,
 		*size = 4;
 		break;
 	case AMDGPU_PP_SENSOR_GPU_POWER:
-		ret = smu_v11_0_get_gpu_power(smu, (uint32_t *)data);
+		ret = smu_get_gpu_power(smu, (uint32_t *)data);
 		*size = 4;
 		break;
 	case AMDGPU_PP_SENSOR_VDDGFX:
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 2367bcc45468..75c86c4b2ece 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -2987,6 +2987,54 @@ static int vega20_get_fan_speed_percent(struct smu_context *smu,
 	return ret;
 }
 
+static int vega20_get_gpu_power(struct smu_context *smu, uint32_t *value)
+{
+	int ret = 0;
+	SmuMetrics_t metrics;
+
+	if (!value)
+		return -EINVAL;
+
+	ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS, (void *)&metrics,
+			       false);
+	if (ret)
+		return ret;
+
+	*value = metrics.CurrSocketPower << 8;
+
+	return 0;
+}
+
+static int vega20_get_current_activity_percent(struct smu_context *smu,
+					       enum amd_pp_sensors sensor,
+					       uint32_t *value)
+{
+	int ret = 0;
+	SmuMetrics_t metrics;
+
+	if (!value)
+		return -EINVAL;
+
+	ret = smu_update_table(smu, SMU_TABLE_SMU_METRICS,
+			       (void *)&metrics, false);
+	if (ret)
+		return ret;
+
+	switch (sensor) {
+	case AMDGPU_PP_SENSOR_GPU_LOAD:
+		*value = metrics.AverageGfxActivity;
+		break;
+	case AMDGPU_PP_SENSOR_MEM_LOAD:
+		*value = metrics.AverageUclkActivity;
+		break;
+	default:
+		pr_err("Invalid sensor for retrieving clock activity\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static const struct pptable_funcs vega20_ppt_funcs = {
 	.tables_init = vega20_tables_init,
 	.alloc_dpm_context = vega20_allocate_dpm_context,
@@ -3032,6 +3080,8 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.is_dpm_running = vega20_is_dpm_running,
 	.set_thermal_fan_table = vega20_set_thermal_fan_table,
 	.get_fan_speed_percent = vega20_get_fan_speed_percent,
+	.get_gpu_power = vega20_get_gpu_power,
+	.get_current_activity_percent = vega20_get_current_activity_percent,
 };
 
 void vega20_set_ppt_funcs(struct smu_context *smu)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 182/459] drm/amd/powerplay: move Watermarks_t uses into asic level
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (80 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 181/459] drm/amd/powerplay: move SmuMetrics_t " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 183/459] drm/amd/powerplay: introduce smu power source type to handle AC/DC source for each asic Alex Deucher
                     ` (10 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch moves the remaining Watermarks_t uses into the asic level. This avoids
conflicts between the different asics.
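
For reference, a minimal sketch of the void * handoff this enables: the
common layer passes an opaque table buffer and only the per-asic callback
casts it to its own Watermarks_t layout (types simplified here; not the
real driver structs):

/*
 * Illustrative only: the common layer never needs the concrete
 * watermarks layout; the asic callback recovers it from void *.
 */
#include <stdio.h>
#include <stdint.h>

typedef struct {
	uint16_t MinClock;
	uint16_t MaxClock;
} WatermarkRow_t;

typedef struct {
	WatermarkRow_t rows[4];
} Watermarks_t;

/* Per-asic implementation: cast the opaque buffer back. */
static int asic_set_watermarks_table(void *watermarks)
{
	Watermarks_t *table = watermarks;

	if (!table)
		return -1;
	table->rows[0].MinClock = 300;
	table->rows[0].MaxClock = 1200;
	return 0;
}

int main(void)
{
	Watermarks_t wm = { 0 };

	/* Common layer passes the buffer without knowing its layout. */
	asic_set_watermarks_table(&wm);
	printf("row0: %u..%u MHz\n", wm.rows[0].MinClock, wm.rows[0].MaxClock);
	return 0;
}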

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    |  4 ++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 63 +------------------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 61 ++++++++++++++++++
 3 files changed, 67 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 7fa03eca4a08..469b2c9e6805 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -575,6 +575,8 @@ struct pptable_funcs {
 	int (*get_current_activity_percent)(struct smu_context *smu,
 					    enum amd_pp_sensors sensor,
 					    uint32_t *value);
+	int (*set_watermarks_table)(struct smu_context *smu, void *watermarks,
+				    struct dm_pp_wm_sets_with_clock_ranges_soc15 *clock_ranges);
 };
 
 struct smu_funcs
@@ -863,6 +865,8 @@ struct smu_funcs
 	((smu)->ppt_funcs->set_ppfeature_status ? (smu)->ppt_funcs->set_ppfeature_status((smu), (ppfeatures)) : -EINVAL)
 #define smu_get_ppfeature_status(smu, buf) \
 	((smu)->ppt_funcs->get_ppfeature_status ? (smu)->ppt_funcs->get_ppfeature_status((smu), (buf)) : -EINVAL)
+#define smu_set_watermarks_table(smu, tab, clock_ranges) \
+	((smu)->ppt_funcs->set_watermarks_table ? (smu)->ppt_funcs->set_watermarks_table((smu), (tab), (clock_ranges)) : 0)
 
 extern int smu_get_atom_data_table(struct smu_context *smu, uint32_t table,
 				   uint16_t *size, uint8_t *frev, uint8_t *crev,
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 103e8bc3a7b9..4620bd578bcd 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1401,65 +1401,6 @@ smu_v11_0_display_clock_voltage_request(struct smu_context *smu,
 	return ret;
 }
 
-static int smu_v11_0_set_watermarks_table(struct smu_context *smu,
-					  Watermarks_t *table, struct
-					  dm_pp_wm_sets_with_clock_ranges_soc15
-					  *clock_ranges)
-{
-	int i;
-
-	if (!table || !clock_ranges)
-		return -EINVAL;
-
-	if (clock_ranges->num_wm_dmif_sets > 4 ||
-	    clock_ranges->num_wm_mcif_sets > 4)
-                return -EINVAL;
-
-        for (i = 0; i < clock_ranges->num_wm_dmif_sets; i++) {
-		table->WatermarkRow[1][i].MinClock =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz /
-			1000));
-		table->WatermarkRow[1][i].MaxClock =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz /
-			1000));
-		table->WatermarkRow[1][i].MinUclk =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz /
-			1000));
-		table->WatermarkRow[1][i].MaxUclk =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz /
-			1000));
-		table->WatermarkRow[1][i].WmSetting = (uint8_t)
-				clock_ranges->wm_dmif_clocks_ranges[i].wm_set_id;
-        }
-
-	for (i = 0; i < clock_ranges->num_wm_mcif_sets; i++) {
-		table->WatermarkRow[0][i].MinClock =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz /
-			1000));
-		table->WatermarkRow[0][i].MaxClock =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz /
-			1000));
-		table->WatermarkRow[0][i].MinUclk =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz /
-			1000));
-		table->WatermarkRow[0][i].MaxUclk =
-			cpu_to_le16((uint16_t)
-			(clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz /
-			1000));
-		table->WatermarkRow[0][i].WmSetting = (uint8_t)
-				clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id;
-        }
-
-	return 0;
-}
-
 static int
 smu_v11_0_set_watermarks_for_clock_ranges(struct smu_context *smu, struct
 					  dm_pp_wm_sets_with_clock_ranges_soc15
@@ -1467,12 +1408,12 @@ smu_v11_0_set_watermarks_for_clock_ranges(struct smu_context *smu, struct
 {
 	int ret = 0;
 	struct smu_table *watermarks = &smu->smu_table.tables[SMU_TABLE_WATERMARKS];
-	Watermarks_t *table = watermarks->cpu_addr;
+	void *table = watermarks->cpu_addr;
 
 	if (!smu->disable_watermark &&
 	    smu_feature_is_enabled(smu, SMU_FEATURE_DPM_DCEFCLK_BIT) &&
 	    smu_feature_is_enabled(smu, SMU_FEATURE_DPM_SOCCLK_BIT)) {
-		smu_v11_0_set_watermarks_table(smu, table, clock_ranges);
+		smu_set_watermarks_table(smu, table, clock_ranges);
 		smu->watermarks_bitmap |= WATERMARKS_EXIST;
 		smu->watermarks_bitmap &= ~WATERMARKS_LOADED;
 	}
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 75c86c4b2ece..ba0175ae247a 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -3035,6 +3035,66 @@ static int vega20_get_current_activity_percent(struct smu_context *smu,
 	return 0;
 }
 
+static int vega20_set_watermarks_table(struct smu_context *smu,
+				       void *watermarks, struct
+				       dm_pp_wm_sets_with_clock_ranges_soc15
+				       *clock_ranges)
+{
+	int i;
+	Watermarks_t *table = watermarks;
+
+	if (!table || !clock_ranges)
+		return -EINVAL;
+
+	if (clock_ranges->num_wm_dmif_sets > 4 ||
+	    clock_ranges->num_wm_mcif_sets > 4)
+                return -EINVAL;
+
+        for (i = 0; i < clock_ranges->num_wm_dmif_sets; i++) {
+		table->WatermarkRow[1][i].MinClock =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_dmif_clocks_ranges[i].wm_min_dcfclk_clk_in_khz /
+			1000));
+		table->WatermarkRow[1][i].MaxClock =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_dmif_clocks_ranges[i].wm_max_dcfclk_clk_in_khz /
+			1000));
+		table->WatermarkRow[1][i].MinUclk =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_dmif_clocks_ranges[i].wm_min_mem_clk_in_khz /
+			1000));
+		table->WatermarkRow[1][i].MaxUclk =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_dmif_clocks_ranges[i].wm_max_mem_clk_in_khz /
+			1000));
+		table->WatermarkRow[1][i].WmSetting = (uint8_t)
+				clock_ranges->wm_dmif_clocks_ranges[i].wm_set_id;
+        }
+
+	for (i = 0; i < clock_ranges->num_wm_mcif_sets; i++) {
+		table->WatermarkRow[0][i].MinClock =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_mcif_clocks_ranges[i].wm_min_socclk_clk_in_khz /
+			1000));
+		table->WatermarkRow[0][i].MaxClock =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_mcif_clocks_ranges[i].wm_max_socclk_clk_in_khz /
+			1000));
+		table->WatermarkRow[0][i].MinUclk =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_mcif_clocks_ranges[i].wm_min_mem_clk_in_khz /
+			1000));
+		table->WatermarkRow[0][i].MaxUclk =
+			cpu_to_le16((uint16_t)
+			(clock_ranges->wm_mcif_clocks_ranges[i].wm_max_mem_clk_in_khz /
+			1000));
+		table->WatermarkRow[0][i].WmSetting = (uint8_t)
+				clock_ranges->wm_mcif_clocks_ranges[i].wm_set_id;
+        }
+
+	return 0;
+}
+
 static const struct pptable_funcs vega20_ppt_funcs = {
 	.tables_init = vega20_tables_init,
 	.alloc_dpm_context = vega20_allocate_dpm_context,
@@ -3082,6 +3142,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.get_fan_speed_percent = vega20_get_fan_speed_percent,
 	.get_gpu_power = vega20_get_gpu_power,
 	.get_current_activity_percent = vega20_get_current_activity_percent,
+	.set_watermarks_table = vega20_set_watermarks_table,
 };
 
 void vega20_set_ppt_funcs(struct smu_context *smu)
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 183/459] drm/amd/powerplay: introduce smu power source type to handle AC/DC source for each asic
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (81 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 182/459] drm/amd/powerplay: move Watermarks_t " Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 184/459] drm/amd/powerplay: move getting MAX_FAN_RPM value to asic level Alex Deucher
                     ` (9 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch introduces a new smu power source type to handle the different
AC/DC source defines for each asic that shares the same smu ip.
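
For reference, a minimal sketch of the mapping pattern: the driver-level
enum is translated to a per-asic firmware index through a small lookup
table with range checks at both ends (the firmware-side values here are
hypothetical):

/*
 * Illustrative only: mirrors the PWR_MAP() tables and the
 * get_pwr_src_index() callbacks added by this patch.
 */
#include <stdio.h>
#include <stdint.h>

enum smu_power_src_type {
	SMU_POWER_SOURCE_AC,
	SMU_POWER_SOURCE_DC,
	SMU_POWER_SOURCE_COUNT,
};

/* Hypothetical firmware-side values for one asic. */
enum { POWER_SOURCE_AC = 0, POWER_SOURCE_DC = 1, POWER_SOURCE_COUNT = 2 };

static const int pwr_src_map[SMU_POWER_SOURCE_COUNT] = {
	[SMU_POWER_SOURCE_AC] = POWER_SOURCE_AC,
	[SMU_POWER_SOURCE_DC] = POWER_SOURCE_DC,
};

static int get_pwr_src_index(uint32_t index)
{
	int val;

	if (index >= SMU_POWER_SOURCE_COUNT)
		return -1;
	val = pwr_src_map[index];
	if (val >= POWER_SOURCE_COUNT)
		return -1;
	return val;
}

int main(void)
{
	/* Mirrors smu_power_get_index(smu, SMU_POWER_SOURCE_AC) << 16. */
	int idx = get_pwr_src_index(SMU_POWER_SOURCE_AC);

	if (idx >= 0)
		printf("message argument: 0x%x\n", (unsigned int)idx << 16);
	return 0;
}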

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    | 10 ++++++++++
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |  3 +++
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c    | 19 +++++++++++++++++++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     |  2 +-
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    | 19 +++++++++++++++++++
 5 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 469b2c9e6805..5fdf983d6dc6 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -243,6 +243,13 @@ enum smu_clk_type
 	SMU_CLK_COUNT,
 };
 
+enum smu_power_src_type
+{
+	SMU_POWER_SOURCE_AC,
+	SMU_POWER_SOURCE_DC,
+	SMU_POWER_SOURCE_COUNT,
+};
+
 enum smu_feature_mask
 {
 	SMU_FEATURE_DPM_PREFETCHER_BIT,
@@ -513,6 +520,7 @@ struct pptable_funcs {
 	int (*get_smu_clk_index)(struct smu_context *smu, uint32_t index);
 	int (*get_smu_feature_index)(struct smu_context *smu, uint32_t index);
 	int (*get_smu_table_index)(struct smu_context *smu, uint32_t index);
+	int (*get_smu_power_index)(struct smu_context *smu, uint32_t index);
 	int (*run_afll_btc)(struct smu_context *smu);
 	int (*get_allowed_feature_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
 	enum amd_pm_state_type (*get_current_power_state)(struct smu_context *smu);
@@ -819,6 +827,8 @@ struct smu_funcs
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_feature_index? (smu)->ppt_funcs->get_smu_feature_index((smu), (msg)) : -EINVAL) : -EINVAL)
 #define smu_table_get_index(smu, tab) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_table_index? (smu)->ppt_funcs->get_smu_table_index((smu), (tab)) : -EINVAL) : -EINVAL)
+#define smu_power_get_index(smu, src) \
+	((smu)->ppt_funcs? ((smu)->ppt_funcs->get_smu_power_index? (smu)->ppt_funcs->get_smu_power_index((smu), (src)) : -EINVAL) : -EINVAL)
 #define smu_run_afll_btc(smu) \
 	((smu)->ppt_funcs? ((smu)->ppt_funcs->run_afll_btc? (smu)->ppt_funcs->run_afll_btc((smu)) : 0) : 0)
 #define smu_get_allowed_feature_mask(smu, feature_mask, num) \
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index a708c5d5b82e..3a1f6f790795 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -51,6 +51,9 @@
 #define TAB_MAP(tab) \
 	[SMU_TABLE_##tab] = TABLE_##tab
 
+#define PWR_MAP(tab) \
+	[SMU_POWER_SOURCE_##tab] = POWER_SOURCE_##tab
+
 struct smu_11_0_max_sustainable_clocks {
 	uint32_t display_clock;
 	uint32_t phy_clock;
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 2d0f764d4f19..98c1798e59d1 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -171,6 +171,11 @@ static int navi10_table_map[SMU_TABLE_COUNT] = {
 	TAB_MAP(PACE),
 };
 
+static int navi10_pwr_src_map[SMU_POWER_SOURCE_COUNT] = {
+	PWR_MAP(AC),
+	PWR_MAP(DC),
+};
+
 static int navi10_get_smu_msg_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -223,6 +228,19 @@ static int navi10_get_smu_table_index(struct smu_context *smc, uint32_t index)
 	return val;
 }
 
+static int navi10_get_pwr_src_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_POWER_SOURCE_COUNT)
+		return -EINVAL;
+
+	val = navi10_pwr_src_map[index];
+	if (val >= POWER_SOURCE_COUNT)
+		return -EINVAL;
+
+	return val;
+}
+
 #define FEATURE_MASK(feature) (1UL << feature)
 static int
 navi10_get_allowed_feature_mask(struct smu_context *smu,
@@ -459,6 +477,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 	.get_smu_clk_index = navi10_get_smu_clk_index,
 	.get_smu_feature_index = navi10_get_smu_feature_index,
 	.get_smu_table_index = navi10_get_smu_table_index,
+	.get_smu_power_index = navi10_get_pwr_src_index,
 	.get_allowed_feature_mask = navi10_get_allowed_feature_mask,
 	.set_default_dpm_table = navi10_set_default_dpm_table,
 };
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 4620bd578bcd..243f0ea9259f 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1038,7 +1038,7 @@ static int smu_v11_0_get_power_limit(struct smu_context *smu,
 		mutex_unlock(&smu->mutex);
 	} else {
 		ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetPptLimit,
-						  POWER_SOURCE_AC << 16);
+			smu_power_get_index(smu, SMU_POWER_SOURCE_AC) << 16);
 		if (ret) {
 			pr_err("[%s] get PPT limit failed!", __func__);
 			return ret;
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index ba0175ae247a..57efe145e6fe 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -202,6 +202,11 @@ static int vega20_table_map[SMU_TABLE_COUNT] = {
 	TAB_MAP(OVERDRIVE),
 };
 
+static int vega20_pwr_src_map[SMU_POWER_SOURCE_COUNT] = {
+	PWR_MAP(AC),
+	PWR_MAP(DC),
+};
+
 static int vega20_get_smu_table_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -215,6 +220,19 @@ static int vega20_get_smu_table_index(struct smu_context *smc, uint32_t index)
 	return val;
 }
 
+static int vega20_get_pwr_src_index(struct smu_context *smc, uint32_t index)
+{
+	int val;
+	if (index >= SMU_POWER_SOURCE_COUNT)
+		return -EINVAL;
+
+	val = vega20_pwr_src_map[index];
+	if (val >= POWER_SOURCE_COUNT)
+		return -EINVAL;
+
+	return val;
+}
+
 static int vega20_get_smu_feature_index(struct smu_context *smc, uint32_t index)
 {
 	int val;
@@ -3105,6 +3123,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {
 	.get_smu_clk_index = vega20_get_smu_clk_index,
 	.get_smu_feature_index = vega20_get_smu_feature_index,
 	.get_smu_table_index = vega20_get_smu_table_index,
+	.get_smu_power_index = vega20_get_pwr_src_index,
 	.run_afll_btc = vega20_run_btc_afll,
 	.get_allowed_feature_mask = vega20_get_allowed_feature_mask,
 	.get_current_power_state = vega20_get_current_power_state,
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 184/459] drm/amd/powerplay: move getting MAX_FAN_RPM value to asic level
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (82 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 183/459] drm/amd/powerplay: introduce smu power source type to handle AC/DC source for each asic Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 185/459] drm/amd/powerplay: don't include the smu11 driver if header in smu v11 (v2) Alex Deucher
                     ` (8 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

The MAX_FAN_RPM value needs to be read from the pptable, so retrieving it
should be moved to the asic level.
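
For reference, a minimal sketch of the resulting layering: the asic level
serves sensors that depend on its own pptable layout and defers everything
else to a common handler (types and values simplified, not the driver's):

/*
 * Illustrative only: a two-level sensor read where only the asic
 * layer knows the pptable field backing MAX_FAN_RPM.
 */
#include <stdio.h>
#include <stdint.h>

enum sensor { SENSOR_MAX_FAN_RPM, SENSOR_GPU_TEMP };

struct pptable { uint16_t FanMaximumRpm; };

static int common_read_sensor(enum sensor s, uint32_t *data)
{
	if (s == SENSOR_GPU_TEMP) {
		*data = 65000;	/* stand-in value, millidegrees */
		return 0;
	}
	return -1;
}

static int asic_read_sensor(const struct pptable *pptable,
			    enum sensor s, uint32_t *data)
{
	switch (s) {
	case SENSOR_MAX_FAN_RPM:
		/* Only the asic layer knows this pptable layout. */
		*data = pptable->FanMaximumRpm;
		return 0;
	default:
		return common_read_sensor(s, data);
	}
}

int main(void)
{
	struct pptable pt = { .FanMaximumRpm = 3850 };
	uint32_t v = 0;

	asic_read_sensor(&pt, SENSOR_MAX_FAN_RPM, &v);
	printf("max fan: %u rpm\n", v);
	return 0;
}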

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c  | 6 ------
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index 243f0ea9259f..e4fbf8dd57b2 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1302,8 +1302,6 @@ static int smu_v11_0_read_sensor(struct smu_context *smu,
 				 enum amd_pp_sensors sensor,
 				 void *data, uint32_t *size)
 {
-	struct smu_table_context *table_context = &smu->smu_table;
-	PPTable_t *pptable = table_context->driver_pptable;
 	int ret = 0;
 	switch (sensor) {
 	case AMDGPU_PP_SENSOR_GPU_LOAD:
@@ -1339,10 +1337,6 @@ static int smu_v11_0_read_sensor(struct smu_context *smu,
 		*(uint32_t *)data = 0;
 		*size = 4;
 		break;
-	case AMDGPU_PP_SENSOR_MAX_FAN_RPM:
-		*(uint32_t *)data = pptable->FanMaximumRpm;
-		*size = 4;
-		break;
 	default:
 		ret = smu_common_read_sensor(smu, sensor, data, size);
 		break;
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
index 57efe145e6fe..e9f0230fc274 100644
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c
@@ -2950,6 +2950,8 @@ static int vega20_read_sensor(struct smu_context *smu,
 			      void *data, uint32_t *size)
 {
 	int ret = 0;
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *pptable = table_context->driver_pptable;
 
 	switch (sensor) {
 	case AMDGPU_PP_SENSOR_UVD_POWER:
@@ -2960,6 +2962,10 @@ static int vega20_read_sensor(struct smu_context *smu,
 		*(uint32_t *)data = smu_feature_is_enabled(smu, SMU_FEATURE_DPM_VCE_BIT) ? 1 : 0;
 		*size = 4;
 		break;
+	case AMDGPU_PP_SENSOR_MAX_FAN_RPM:
+		*(uint32_t *)data = pptable->FanMaximumRpm;
+		*size = 4;
+		break;
 	default:
 		return -EINVAL;
 	}
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 185/459] drm/amd/powerplay: don't include the smu11 driver if header in smu v11 (v2)
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (83 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 184/459] drm/amd/powerplay: move getting MAX_FAN_RPM value to asic level Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 186/459] drm/amd/powerplay: fix the incorrect type of pptable Alex Deucher
                     ` (7 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kevin Wang, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This header is actually per asic, so we should not include it in smu_v11_0.c.
Also rename the navi10 variant accordingly.

v2: add hack for XGMI (Alex)

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../inc/{smu_11_0_driver_if.h => smu11_driver_if_navi10.h}    | 4 ++--
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c                    | 2 +-
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c                     | 4 +++-
 3 files changed, 6 insertions(+), 4 deletions(-)
 rename drivers/gpu/drm/amd/powerplay/inc/{smu_11_0_driver_if.h => smu11_driver_if_navi10.h} (99%)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
similarity index 99%
rename from drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
rename to drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
index 1ab6e4eca09f..25b7c8c496f7 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_11_0_driver_if.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
@@ -1,5 +1,5 @@
-#ifndef __SMU11_DRIVER_IF_H__
-#define __SMU11_DRIVER_IF_H__
+#ifndef __SMU11_DRIVER_IF_NAVI10_H__
+#define __SMU11_DRIVER_IF_NAVI10_H__
 
 // *** IMPORTANT ***
 // SMU TEAM: Always increment the interface version if 
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
index 98c1798e59d1..6d1b01a5228a 100644
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
@@ -28,7 +28,7 @@
 #include "atomfirmware.h"
 #include "amdgpu_atomfirmware.h"
 #include "smu_v11_0.h"
-#include "smu_11_0_driver_if.h"
+#include "smu11_driver_if_navi10.h"
 #include "soc15_common.h"
 #include "atom.h"
 #include "navi10_ppt.h"
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index e4fbf8dd57b2..564b61af6c30 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -27,7 +27,6 @@
 #include "atomfirmware.h"
 #include "amdgpu_atomfirmware.h"
 #include "smu_v11_0.h"
-#include "smu_11_0_driver_if.h"
 #include "soc15_common.h"
 #include "atom.h"
 #include "vega20_ppt.h"
@@ -1739,6 +1738,9 @@ static int smu_v11_0_set_fan_speed_rpm(struct smu_context *smu,
 	return ret;
 }
 
+#define XGMI_STATE_D0 1
+#define XGMI_STATE_D3 0
+
 static int smu_v11_0_set_xgmi_pstate(struct smu_context *smu,
 				     uint32_t pstate)
 {
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 186/459] drm/amd/powerplay: fix the incorrect type of pptable
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (84 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 185/459] drm/amd/powerplay: don't include the smu11 driver if header in smu v11 (v2) Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 187/459] drm/amd/powerplay: do not set dpm_enabled flag before VCN/DCN DPM is workable Alex Deucher
                     ` (6 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Kenneth Feng, Hawking Zhang

From: Kenneth Feng <Kenneth.Feng@amd.com>

This patch fixes an incorrect field type in the pptable; otherwise, the data
following it is parsed at the wrong offsets and comes out totally wrong.
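
For reference, a sketch of why the field width matters: every member after
table_size lands at a shifted offset, so the parser reads garbage from that
point on (simplified header, packed layout assumed for illustration):

/*
 * Illustrative only: compare member offsets with the wrong and the
 * correct width for table_size.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

#pragma pack(push, 1)
struct pptable_wrong {
	uint8_t  table_revision;
	uint32_t table_size;	/* wrong width */
	uint32_t golden_pp_id;
};

struct pptable_right {
	uint8_t  table_revision;
	uint16_t table_size;	/* matches what the table actually stores */
	uint32_t golden_pp_id;
};
#pragma pack(pop)

int main(void)
{
	printf("golden_pp_id offset: wrong=%zu right=%zu\n",
	       offsetof(struct pptable_wrong, golden_pp_id),
	       offsetof(struct pptable_right, golden_pp_id));
	return 0;
}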

Signed-off-by: Kenneth Feng <Kenneth.Feng@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h
index 92c65b80bde2..86cdc3393eac 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h
@@ -121,7 +121,7 @@ struct smu_11_0_powerplay_table
 {
       struct atom_common_table_header header;
       uint8_t  table_revision;
-      uint32_t table_size;                          //Driver portion table size. The offset to smc_pptable including header size
+      uint16_t table_size;                          //Driver portion table size. The offset to smc_pptable including header size
       uint32_t golden_pp_id;
       uint32_t golden_revision;
       uint16_t format_id;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 187/459] drm/amd/powerplay: do not set dpm_enabled flag before VCN/DCN DPM is workable
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (85 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 186/459] drm/amd/powerplay: fix the incorrect type of pptable Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 188/459] drm/amdgpu/gfx10: update gfx golden settings Alex Deucher
                     ` (5 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

The dpm_enabled flag is also taken to mean that VCN DPM is enabled. In fact,
VCN/DCN DPM on Navi10 is not stable so far, so we cannot enable it for now.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 858ce5db687f..06f5e5ce9db1 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -866,6 +866,9 @@ static int smu_hw_init(void *handle)
 		adev->pm.dpm_enabled = false;
 	else
 		adev->pm.dpm_enabled = true;
+	/* TODO: set the dpm_enabled flag once VCN and DAL DPM are workable */
+	if (adev->asic_type != CHIP_NAVI10)
+		adev->pm.dpm_enabled = true;
 
 	pr_info("SMU is initialized successfully!\n");
 
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 188/459] drm/amdgpu/gfx10: update gfx golden settings
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (86 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 187/459] drm/amd/powerplay: do not set dpm_enabled flag before VCN/DCN DPM is workable Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 189/459] drm/amdgpu: disable some gfx light sleep Alex Deucher
                     ` (4 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Jack Xiao, tiancyin

From: tiancyin <tianci.yin@amd.com>

add new registers: mmCGTT_SPI_CLK_CTRL, mmDB_DEBUG3 and
mmGL2C_CGTT_SCLK_CTRL.

Reviewed-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Tianci Yin <tianci.yin@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 0d5d86a5d62f..4e7f64d91d12 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -67,6 +67,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_1[] =
 {
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_4, 0xffffffff, 0x00400014),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_CPF_CLK_CTRL, 0xfcff8fff, 0xf8000100),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xc0000000, 0xc0000100),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQ_CLK_CTRL, 0x60000ff0, 0x60000100),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQG_CLK_CTRL, 0x40000000, 0x40000100),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_VGT_CLK_CTRL, 0xffff8fff, 0xffff8100),
@@ -76,6 +77,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_1[] =
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_SD_CNTL, 0x000007ff, 0x000005ff),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG, 0x20000000, 0x20000000),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xffffffff, 0x00000420),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0x00000200, 0x00000200),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0x07800000, 0x04800000),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DFSM_TILES_IN_FLIGHT, 0x0000ffff, 0x0000003f),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_LAST_OF_BURST_CONFIG, 0xffffffff, 0x03860204),
@@ -86,6 +88,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_1[] =
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x02310231),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2A_ADDR_MATCH_MASK, 0xffffffff, 0xffffffcf),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_ADDR_MATCH_MASK, 0xffffffff, 0xffffffcf),
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CGTT_SCLK_CTRL, 0x10000000, 0x10000100),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CTRL2, 0xffffffff, 0x1402002f),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CTRL3, 0xffff9fff, 0x00001188),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE, 0x3fffffff, 0x08000009),
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 189/459] drm/amdgpu: disable some gfx light sleep
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (87 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 188/459] drm/amdgpu/gfx10: update gfx golden settings Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 190/459] drm/amdgpu/gfx10: fix resume failure when enabling async gfx ring Alex Deucher
                     ` (3 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Jack Xiao, tiancyin

From: tiancyin <tianci.yin@amd.com>

Temporarily disable some gfx light sleep features to avoid an S3 test failure.

s3 test failure log:
"[drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring sdma0 timeout,
signaled seq=8278, emitted seq=8281"

Reviewed-by: Jack Xiao <Jack.Xiao@amd.com>
Signed-off-by: Tianci Yin <tianci.yin@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/nv.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
index a0d19b9d329c..5f00eef1be2a 100644
--- a/drivers/gpu/drm/amd/amdgpu/nv.c
+++ b/drivers/gpu/drm/amd/amdgpu/nv.c
@@ -492,10 +492,6 @@ static int nv_common_early_init(void *handle)
 	switch (adev->asic_type) {
 	case CHIP_NAVI10:
 		adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
-			AMD_CG_SUPPORT_GFX_MGLS |
-			AMD_CG_SUPPORT_GFX_RLC_LS |
-			AMD_CG_SUPPORT_GFX_CP_LS |
-			AMD_CG_SUPPORT_GFX_CGLS |
 			AMD_CG_SUPPORT_GFX_CGCG |
 			AMD_CG_SUPPORT_IH_CG |
 			AMD_CG_SUPPORT_HDP_MGCG |
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 190/459] drm/amdgpu/gfx10: fix resume failure when enabling async gfx ring
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (88 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 189/459] drm/amdgpu: disable some gfx light sleep Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 191/459] drm/amd/powerplay: update smu11_driver_if_navi10.h Alex Deucher
                     ` (2 subsequent siblings)
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Hawking Zhang, Xiaojie Yuan

From: Xiaojie Yuan <xiaojie.yuan@amd.com>

The 'adev->in_suspend' code path is missing in gfx_v10_0_gfx_init_queue().

Signed-off-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 4e7f64d91d12..9d162d269aca 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -2903,7 +2903,19 @@ static int gfx_v10_0_gfx_init_queue(struct amdgpu_ring *ring)
 	struct amdgpu_device *adev = ring->adev;
 	struct v10_gfx_mqd *mqd = ring->mqd_ptr;
 
-	if (adev->in_gpu_reset) {
+	if (!adev->in_gpu_reset && !adev->in_suspend) {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		nv_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v10_0_gfx_mqd_init(ring);
+#ifdef BRING_UP_DEBUG
+		gfx_v10_0_gfx_queue_init_register(ring);
+#endif
+		nv_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+		if (adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS])
+			memcpy(adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS], mqd, sizeof(*mqd));
+	} else if (adev->in_gpu_reset) {
 		/* reset mqd with the backup copy */
 		if (adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS])
 			memcpy(mqd, adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS], sizeof(*mqd));
@@ -2918,17 +2930,7 @@ static int gfx_v10_0_gfx_init_queue(struct amdgpu_ring *ring)
 		mutex_unlock(&adev->srbm_mutex);
 #endif
 	} else {
-		memset((void *)mqd, 0, sizeof(*mqd));
-		mutex_lock(&adev->srbm_mutex);
-		nv_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
-		gfx_v10_0_gfx_mqd_init(ring);
-#ifdef BRING_UP_DEBUG
-		gfx_v10_0_gfx_queue_init_register(ring);
-#endif
-		nv_grbm_select(adev, 0, 0, 0, 0);
-		mutex_unlock(&adev->srbm_mutex);
-		if (adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS])
-			memcpy(adev->gfx.me.mqd_backup[AMDGPU_MAX_GFX_RINGS], mqd, sizeof(*mqd));
+		amdgpu_ring_clear_ring(ring);
 	}
 
 	return 0;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 191/459] drm/amd/powerplay: update smu11_driver_if_navi10.h
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (89 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 190/459] drm/amdgpu/gfx10: fix resume failure when enabling async gfx ring Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 192/459] drm/amdgpu: mask some pm interfaces for navi10 because they are changed or not workable so far Alex Deucher
  2019-06-17 19:26   ` [PATCH 193/459] drm/amd/powerplay: set dpm_enabled flag but don't enable vcn dpm Alex Deucher
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Kenneth Feng, Xiaojie Yuan

From: Kenneth Feng <kenneth.feng@amd.com>

Update smu11_driver_if_navi10.h since the navi10 smu firmware
was updated to 42.15.0.
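
For reference, a minimal sketch of the version handshake such a bump relates
to: the driver's compiled-in interface version is compared against what the
firmware reports, since a mismatch means the shared structs may disagree
(illustrative only; the real check lives in the SMU init path):

/*
 * Illustrative only: reject a firmware whose driver-interface
 * version does not match the headers this driver was built with.
 */
#include <stdio.h>
#include <stdint.h>

#define SMU11_DRIVER_IF_VERSION 0x2F

static int check_fw_if_version(uint32_t fw_if_version)
{
	if (fw_if_version != SMU11_DRIVER_IF_VERSION) {
		fprintf(stderr,
			"smu driver if version mismatch: driver 0x%x, fw 0x%x\n",
			SMU11_DRIVER_IF_VERSION, fw_if_version);
		return -1;
	}
	return 0;
}

int main(void)
{
	return check_fw_if_version(0x2F) ? 1 : 0;
}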

Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Xiaojie Yuan <xiaojie.yuan@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../amd/powerplay/inc/smu11_driver_if_navi10.h  | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
index 25b7c8c496f7..83ef0e26c051 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_navi10.h
@@ -4,7 +4,7 @@
 // *** IMPORTANT ***
 // SMU TEAM: Always increment the interface version if 
 // any structure is changed in this file
-#define SMU11_DRIVER_IF_VERSION 0x2E
+#define SMU11_DRIVER_IF_VERSION 0x2F
 
 #define PPTABLE_NV10_SMU_VERSION 8
 
@@ -689,8 +689,11 @@ typedef struct {
   // BTC Setting
   uint32_t     BtcConfig;
   
+  uint16_t     SsFmin[10]; // PPtable value to function similar to VFTFmin for SS Curve; Size is PPCLK_COUNT rounded to nearest multiple of 2
+  uint16_t     DcBtcGb[AVFS_VOLTAGE_COUNT];
+
   // SECTION: Board Reserved
-  uint32_t     Reserved[14];
+  uint32_t     Reserved[8];
 
   // SECTION: BOARD PARAMETERS
   // I2C Control
@@ -1027,17 +1030,9 @@ typedef struct {
 
 //RLC Pace Table total number of levels
 #define RLC_PACE_TABLE_NUM_LEVELS 16
-#define RLC_PACE_RATIO_NUM_LEVELS 8
-
-typedef struct {
-  uint8_t ByteRatioLow;
-  uint8_t FlopsRatioLow;
-  uint8_t ByteRatioHigh;
-  uint8_t FlopsRatioHigh;
-} RlcPaceFlopsPerByte_t;
 
 typedef struct {
-  RlcPaceFlopsPerByte_t FlopsPerByteTable[RLC_PACE_RATIO_NUM_LEVELS];
+  float FlopsPerByteTable[RLC_PACE_TABLE_NUM_LEVELS];
   
   uint32_t     MmHubPadding[8]; // SMU internal use  
 } RlcPaceFlopsPerByteOverride_t;
-- 
2.20.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* [PATCH 192/459] drm/amdgpu: mask some pm interfaces for navi10 because they are changed or not workable so far
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (90 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 191/459] drm/amd/powerplay: update smu11_driver_if_navi10.h Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  2019-06-17 19:26   ` [PATCH 193/459] drm/amd/powerplay: set dpm_enabled flag but don't enable vcn dpm Alex Deucher
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

Some interfaces changed in the navi series; e.g., PPSMC_MSG_GetDpmClockFreq
is not implemented by the SMC ucode so far, so current clock values cannot
be queried through the sysfs interface. We have to mask them for the moment.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c  | 156 +++++++++++++-----------
 2 files changed, 85 insertions(+), 73 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index ed051fdb509f..af86d9f47785 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -690,7 +690,7 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
 		dev_info.num_shader_arrays_per_engine = adev->gfx.config.max_sh_per_se;
 		/* return all clocks in KHz */
 		dev_info.gpu_counter_freq = amdgpu_asic_get_xclk(adev) * 10;
-		if (adev->pm.dpm_enabled) {
+		if (adev->pm.dpm_enabled && adev->asic_type != CHIP_NAVI10) {
 			dev_info.max_engine_clock = amdgpu_dpm_get_sclk(adev, false) * 10;
 			dev_info.max_memory_clock = amdgpu_dpm_get_mclk(adev, false) * 10;
 		} else if (amdgpu_sriov_vf(adev) && amdgim_is_hwperf(adev) &&
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
index 55b4e6c21f19..009c8ca49211 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
@@ -2793,32 +2793,33 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
 		return ret;
 	}
 
-	ret = device_create_file(adev->dev, &dev_attr_power_dpm_state);
-	if (ret) {
-		DRM_ERROR("failed to create device file for dpm state\n");
-		return ret;
-	}
-	ret = device_create_file(adev->dev, &dev_attr_power_dpm_force_performance_level);
-	if (ret) {
-		DRM_ERROR("failed to create device file for dpm state\n");
-		return ret;
-	}
-
+	if (adev->asic_type != CHIP_NAVI10) {
+		ret = device_create_file(adev->dev, &dev_attr_power_dpm_state);
+		if (ret) {
+			DRM_ERROR("failed to create device file for dpm state\n");
+			return ret;
+		}
+		ret = device_create_file(adev->dev, &dev_attr_power_dpm_force_performance_level);
+		if (ret) {
+			DRM_ERROR("failed to create device file for dpm state\n");
+			return ret;
+		}
 
-	ret = device_create_file(adev->dev, &dev_attr_pp_num_states);
-	if (ret) {
-		DRM_ERROR("failed to create device file pp_num_states\n");
-		return ret;
-	}
-	ret = device_create_file(adev->dev, &dev_attr_pp_cur_state);
-	if (ret) {
-		DRM_ERROR("failed to create device file pp_cur_state\n");
-		return ret;
-	}
-	ret = device_create_file(adev->dev, &dev_attr_pp_force_state);
-	if (ret) {
-		DRM_ERROR("failed to create device file pp_force_state\n");
-		return ret;
+		ret = device_create_file(adev->dev, &dev_attr_pp_num_states);
+		if (ret) {
+			DRM_ERROR("failed to create device file pp_num_states\n");
+			return ret;
+		}
+		ret = device_create_file(adev->dev, &dev_attr_pp_cur_state);
+		if (ret) {
+			DRM_ERROR("failed to create device file pp_cur_state\n");
+			return ret;
+		}
+		ret = device_create_file(adev->dev, &dev_attr_pp_force_state);
+		if (ret) {
+			DRM_ERROR("failed to create device file pp_force_state\n");
+			return ret;
+		}
 	}
 	ret = device_create_file(adev->dev, &dev_attr_pp_table);
 	if (ret) {
@@ -2831,51 +2832,54 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
 		DRM_ERROR("failed to create device file pp_dpm_sclk\n");
 		return ret;
 	}
-	ret = device_create_file(adev->dev, &dev_attr_pp_dpm_mclk);
-	if (ret) {
-		DRM_ERROR("failed to create device file pp_dpm_mclk\n");
-		return ret;
-	}
-	if (adev->asic_type >= CHIP_VEGA10) {
-		ret = device_create_file(adev->dev, &dev_attr_pp_dpm_socclk);
+
+	if (adev->asic_type != CHIP_NAVI10) {
+		ret = device_create_file(adev->dev, &dev_attr_pp_dpm_mclk);
 		if (ret) {
-			DRM_ERROR("failed to create device file pp_dpm_socclk\n");
+			DRM_ERROR("failed to create device file pp_dpm_mclk\n");
 			return ret;
 		}
-		ret = device_create_file(adev->dev, &dev_attr_pp_dpm_dcefclk);
+		if (adev->asic_type >= CHIP_VEGA10) {
+			ret = device_create_file(adev->dev, &dev_attr_pp_dpm_socclk);
+			if (ret) {
+				DRM_ERROR("failed to create device file pp_dpm_socclk\n");
+				return ret;
+			}
+			ret = device_create_file(adev->dev, &dev_attr_pp_dpm_dcefclk);
+			if (ret) {
+				DRM_ERROR("failed to create device file pp_dpm_dcefclk\n");
+				return ret;
+			}
+		}
+		if (adev->asic_type >= CHIP_VEGA20) {
+			ret = device_create_file(adev->dev, &dev_attr_pp_dpm_fclk);
+			if (ret) {
+				DRM_ERROR("failed to create device file pp_dpm_fclk\n");
+				return ret;
+			}
+		}
+		ret = device_create_file(adev->dev, &dev_attr_pp_dpm_pcie);
 		if (ret) {
-			DRM_ERROR("failed to create device file pp_dpm_dcefclk\n");
+			DRM_ERROR("failed to create device file pp_dpm_pcie\n");
 			return ret;
 		}
-	}
-	if (adev->asic_type >= CHIP_VEGA20) {
-		ret = device_create_file(adev->dev, &dev_attr_pp_dpm_fclk);
+		ret = device_create_file(adev->dev, &dev_attr_pp_sclk_od);
 		if (ret) {
-			DRM_ERROR("failed to create device file pp_dpm_fclk\n");
+			DRM_ERROR("failed to create device file pp_sclk_od\n");
+			return ret;
+		}
+		ret = device_create_file(adev->dev, &dev_attr_pp_mclk_od);
+		if (ret) {
+			DRM_ERROR("failed to create device file pp_mclk_od\n");
+			return ret;
+		}
+		ret = device_create_file(adev->dev,
+				&dev_attr_pp_power_profile_mode);
+		if (ret) {
+			DRM_ERROR("failed to create device file	"
+					"pp_power_profile_mode\n");
 			return ret;
 		}
-	}
-	ret = device_create_file(adev->dev, &dev_attr_pp_dpm_pcie);
-	if (ret) {
-		DRM_ERROR("failed to create device file pp_dpm_pcie\n");
-		return ret;
-	}
-	ret = device_create_file(adev->dev, &dev_attr_pp_sclk_od);
-	if (ret) {
-		DRM_ERROR("failed to create device file pp_sclk_od\n");
-		return ret;
-	}
-	ret = device_create_file(adev->dev, &dev_attr_pp_mclk_od);
-	if (ret) {
-		DRM_ERROR("failed to create device file pp_mclk_od\n");
-		return ret;
-	}
-	ret = device_create_file(adev->dev,
-			&dev_attr_pp_power_profile_mode);
-	if (ret) {
-		DRM_ERROR("failed to create device file	"
-				"pp_power_profile_mode\n");
-		return ret;
 	}
 	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
 	    (!is_support_sw_smu(adev) && hwmgr->od_enabled)) {
@@ -3052,21 +3056,25 @@ static int amdgpu_debugfs_pm_info_pp(struct seq_file *m, struct amdgpu_device *a
 	/* GPU Clocks */
 	size = sizeof(value);
 	seq_printf(m, "GFX Clocks and Power:\n");
-	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GFX_MCLK, (void *)&value, &size))
-		seq_printf(m, "\t%u MHz (MCLK)\n", value/100);
-	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GFX_SCLK, (void *)&value, &size))
-		seq_printf(m, "\t%u MHz (SCLK)\n", value/100);
-	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_STABLE_PSTATE_SCLK, (void *)&value, &size))
-		seq_printf(m, "\t%u MHz (PSTATE_SCLK)\n", value/100);
-	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_STABLE_PSTATE_MCLK, (void *)&value, &size))
-		seq_printf(m, "\t%u MHz (PSTATE_MCLK)\n", value/100);
+	if (adev->asic_type != CHIP_NAVI10) {
+		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GFX_MCLK, (void *)&value, &size))
+			seq_printf(m, "\t%u MHz (MCLK)\n", value/100);
+		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GFX_SCLK, (void *)&value, &size))
+			seq_printf(m, "\t%u MHz (SCLK)\n", value/100);
+		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_STABLE_PSTATE_SCLK, (void *)&value, &size))
+			seq_printf(m, "\t%u MHz (PSTATE_SCLK)\n", value/100);
+		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_STABLE_PSTATE_MCLK, (void *)&value, &size))
+			seq_printf(m, "\t%u MHz (PSTATE_MCLK)\n", value/100);
+	}
 	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_VDDGFX, (void *)&value, &size))
 		seq_printf(m, "\t%u mV (VDDGFX)\n", value);
 	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_VDDNB, (void *)&value, &size))
 		seq_printf(m, "\t%u mV (VDDNB)\n", value);
-	size = sizeof(uint32_t);
-	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_POWER, (void *)&query, &size))
-		seq_printf(m, "\t%u.%u W (average GPU)\n", query >> 8, query & 0xff);
+	if (adev->asic_type != CHIP_NAVI10) {
+		size = sizeof(uint32_t);
+		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_POWER, (void *)&query, &size))
+			seq_printf(m, "\t%u.%u W (average GPU)\n", query >> 8, query & 0xff);
+	}
 	size = sizeof(value);
 	seq_printf(m, "\n");
 
@@ -3074,6 +3082,10 @@ static int amdgpu_debugfs_pm_info_pp(struct seq_file *m, struct amdgpu_device *a
 	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_TEMP, (void *)&value, &size))
 		seq_printf(m, "GPU Temperature: %u C\n", value/1000);
 
+	/* TODO: will be removed after gpu load, feature mask, uvd/vce clocks enabled on navi10 */
+	if (adev->asic_type == CHIP_NAVI10)
+		return 0;
+
 	/* GPU Load */
 	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_LOAD, (void *)&value, &size))
 		seq_printf(m, "GPU Load: %u %%\n", value);
-- 
2.20.1

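A note on the hunk above: each sysfs file gets its own device_create_file() call plus a hand-rolled error block, gated by ad-hoc asic_type checks. The same shape can be collapsed into a table-driven loop. Below is a minimal standalone C sketch of that idea; every name in it (fake_device_create_file(), pm_attrs, create_pm_files(), the model structs and CHIP_* values) is invented for illustration and is not amdgpu code.

#include <stdio.h>

/* Stand-ins for the kernel types and calls; purely illustrative. */
struct device { const char *name; };
struct dev_attr { const char *file; int min_asic; };

enum { CHIP_OLD = 0, CHIP_VEGA10 = 10, CHIP_VEGA20 = 20 };

/* Always succeeds in this model; the real device_create_file()
 * returns a negative errno on failure. */
static int fake_device_create_file(struct device *dev,
				   const struct dev_attr *attr)
{
	printf("created %s on %s\n", attr->file, dev->name);
	return 0;
}

/* One entry per sysfs file, tagged with the oldest ASIC that has it. */
static const struct dev_attr pm_attrs[] = {
	{ "pp_dpm_socclk",		CHIP_VEGA10 },
	{ "pp_dpm_dcefclk",		CHIP_VEGA10 },
	{ "pp_dpm_fclk",		CHIP_VEGA20 },
	{ "pp_dpm_pcie",		CHIP_OLD },
	{ "pp_sclk_od",			CHIP_OLD },
	{ "pp_mclk_od",			CHIP_OLD },
	{ "pp_power_profile_mode",	CHIP_OLD },
};

static int create_pm_files(struct device *dev, int asic_type)
{
	for (size_t i = 0; i < sizeof(pm_attrs) / sizeof(pm_attrs[0]); i++) {
		if (asic_type < pm_attrs[i].min_asic)
			continue; /* ASIC too old for this file, skip it */
		int ret = fake_device_create_file(dev, &pm_attrs[i]);
		if (ret) {
			fprintf(stderr, "failed to create %s\n",
				pm_attrs[i].file);
			return ret;
		}
	}
	return 0;
}

int main(void)
{
	struct device dev = { "amdgpu0" };
	return create_pm_files(&dev, CHIP_VEGA20);
}

The loop reproduces the same >= CHIP_VEGA10 / >= CHIP_VEGA20 gating as the hand-written blocks while keeping the error handling in one place; the driver's pm sysfs code eventually moved to a similar attribute-table scheme.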

* [PATCH 193/459] drm/amd/powerplay: set dpm_enabled flag but don't enable vcn dpm
       [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (91 preceding siblings ...)
  2019-06-17 19:26   ` [PATCH 192/459] drm/amdgpu: mask some pm interfaces for navi10 because they are changed or not workable so far Alex Deucher
@ 2019-06-17 19:26   ` Alex Deucher
  92 siblings, 0 replies; 94+ messages in thread
From: Alex Deucher @ 2019-06-17 19:26 UTC (permalink / raw
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Huang Rui, Hawking Zhang

From: Huang Rui <ray.huang@amd.com>

This patch sets the dpm_enabled flag but does not enable VCN DPM, because
VCN DPM does not work yet and the sysfs interfaces still need to be enabled.
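
The gating added by the two amdgpu_vcn.c hunks below is easy to read in
isolation: even with dpm_enabled set, Navi10 now skips the DPM helper and
falls through to the powergating call. Here is a minimal standalone C model
of that branch; model_adev and vcn_power_path() are invented names for
illustration, not driver symbols.

#include <stdbool.h>
#include <stdio.h>

enum chip { CHIP_VEGA20, CHIP_NAVI10 };

struct model_adev {		/* stand-in for struct amdgpu_device */
	enum chip asic_type;
	bool dpm_enabled;
};

/* Mirrors the condition this patch adds: Navi10 bypasses
 * amdgpu_dpm_enable_uvd() even when DPM is flagged as enabled. */
static const char *vcn_power_path(const struct model_adev *adev)
{
	if (adev->asic_type != CHIP_NAVI10 && adev->dpm_enabled)
		return "amdgpu_dpm_enable_uvd()";
	return "amdgpu_device_ip_set_powergating_state()";
}

int main(void)
{
	struct model_adev navi = { CHIP_NAVI10, true };
	struct model_adev vega = { CHIP_VEGA20, true };

	printf("navi10: %s\n", vcn_power_path(&navi)); /* powergating path */
	printf("vega20: %s\n", vcn_power_path(&vega)); /* dpm path */
	return 0;
}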

Signed-off-by: Huang Rui <ray.huang@amd.com>
Acked-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c    | 4 ++--
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 5 +----
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 6a74f5499ef7..765018322abd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -249,7 +249,7 @@ static void amdgpu_vcn_idle_work_handler(struct work_struct *work)
 
 	if (fences == 0) {
 		amdgpu_gfx_off_ctrl(adev, true);
-		if (adev->pm.dpm_enabled)
+		if (adev->asic_type != CHIP_NAVI10 && adev->pm.dpm_enabled)
 			amdgpu_dpm_enable_uvd(adev, false);
 		else
 			amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
@@ -266,7 +266,7 @@ void amdgpu_vcn_ring_begin_use(struct amdgpu_ring *ring)
 
 	if (set_clocks) {
 		amdgpu_gfx_off_ctrl(adev, false);
-		if (adev->pm.dpm_enabled)
+		if (adev->asic_type != CHIP_NAVI10 && adev->pm.dpm_enabled)
 			amdgpu_dpm_enable_uvd(adev, true);
 		else
 			amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 06f5e5ce9db1..652963e52a5a 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -865,10 +865,7 @@ static int smu_hw_init(void *handle)
 	if (!smu->pm_enabled)
 		adev->pm.dpm_enabled = false;
 	else
-		adev->pm.dpm_enabled = true;
-	/* TODO: will set dpm_enabled flag while VCN and DAL DPM is workable */
-	if (adev->asic_type != CHIP_NAVI10)
-		adev->pm.dpm_enabled = true;
+		adev->pm.dpm_enabled = true;	/* TODO: will set dpm_enabled flag while VCN and DAL DPM is workable */
 
 	pr_info("SMU is initialized successfully!\n");
 
-- 
2.20.1


Thread overview: 94+ messages
2019-06-17 19:25 [PATCH 100/459] drm/amdgpu/discovery: add harvest info data table Alex Deucher
     [not found] ` <20190617192704.18038-1-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
2019-06-17 19:25   ` [PATCH 101/459] drm/amdgpu/discovery: use hardcoded mmRCC_CONFIG_MEMSIZE Alex Deucher
2019-06-17 19:25   ` [PATCH 102/459] drm/amdgpu/discovery: fix hwid for nbio Alex Deucher
2019-06-17 19:25   ` [PATCH 103/459] drm/amdgpu/discovery: stop taking psp header into account Alex Deucher
2019-06-17 19:25   ` [PATCH 104/459] drm/amdgpu/discovery: update definition for struct die_header Alex Deucher
2019-06-17 19:25   ` [PATCH 105/459] drm/amdgpu/discovery: stop converting the units of base addresses Alex Deucher
2019-06-17 19:25   ` [PATCH 106/459] drm/amdgpu/discovery: add module param for ip discovery enablement Alex Deucher
2019-06-17 19:25   ` [PATCH 107/459] drm/amdgpu/discovery: refactor ip list traversal Alex Deucher
2019-06-17 19:25   ` [PATCH 108/459] drm/amdgpu: disable concurrent flushes for Navi10 v2 Alex Deucher
2019-06-17 19:25   ` [PATCH 109/459] drm/amdgpu: add pa_sc_tile_steering_override to drm_amdgpu_info_device Alex Deucher
2019-06-17 19:25   ` [PATCH 110/459] drm/amdgpu: set the default value of pa_sc_tile_steering_override Alex Deucher
2019-06-17 19:25   ` [PATCH 111/459] drm/amdgpu: add initial support for sdma v5.0 (v6) Alex Deucher
2019-06-17 19:25   ` [PATCH 112/459] drm/amdgpu: add Navi10 VCN firmware support Alex Deucher
2019-06-17 19:25   ` [PATCH 113/459] drm/amdgpu: add VCN2.0 decode ring test Alex Deucher
2019-06-17 19:25   ` [PATCH 114/459] drm/amdgpu: add VCN2.0 decode ib test Alex Deucher
2019-06-17 19:25   ` [PATCH 115/459] drm/amdgpu: add JPEG2.0 decode ring test Alex Deucher
2019-06-17 19:25   ` [PATCH 116/459] drm/amdgpu: add JPEG2.0 decode ring ib test Alex Deucher
2019-06-17 19:25   ` [PATCH 117/459] drm/amdgpu: add initial VCN2.0 support (v2) Alex Deucher
2019-06-17 19:25   ` [PATCH 118/459] drm/amdgpu/mes: add amdgpu_mes driver parameter Alex Deucher
2019-06-17 19:25   ` [PATCH 119/459] drm/amdgpu/mes: add mes header file and definition Alex Deucher
2019-06-17 19:25   ` [PATCH 120/459] drm/amdgpu/mes: add definitions of ip callback function Alex Deucher
2019-06-17 19:25   ` [PATCH 121/459] drm/amdgpu/mes: enable mes on navi10 and later asic Alex Deucher
2019-06-17 19:25   ` [PATCH 122/459] drm/amdgpu/mes10.1: add ip block mes10.1 (v2) Alex Deucher
2019-06-17 19:25   ` [PATCH 123/459] drm/amdgpu: add gfx v10 implementation (v8) Alex Deucher
2019-06-17 19:25   ` [PATCH 124/459] drm/amdgpu: avoid to use SOC15_REG_OFFSET in static array for navi10 Alex Deucher
2019-06-17 19:25   ` [PATCH 125/459] drm/amdgpu: add navi10 common ip block (v3) Alex Deucher
2019-06-17 19:25   ` [PATCH 126/459] drm/amdgpu: Add navi10 kfd support for amdgpu (v3) Alex Deucher
2019-06-17 19:25   ` [PATCH 127/459] drm/amdgpu: update golden setting programming logic Alex Deucher
2019-06-17 19:25   ` [PATCH 128/459] drm/amdkfd: Add navi10 support to amdkfd. (v2) Alex Deucher
2019-06-17 19:25   ` [PATCH 129/459] drm/amdkfd: Added cwsr trap handler for gfx10 Alex Deucher
2019-06-17 19:25   ` [PATCH 130/459] drm/amdkfd: Moved gfx10 cwsr binary to cwsr_trap_handler.h Alex Deucher
2019-06-17 19:25   ` [PATCH 131/459] drm/amdkfd: Parameterize queue_preemption_timeout_ms Alex Deucher
2019-06-17 19:25   ` [PATCH 132/459] drm/amdkfd: Introduce DIQ type mqd manager for gfx10 Alex Deucher
2019-06-17 19:25   ` [PATCH 133/459] drm/amdkfd: Add mqd size in mqd manager struct " Alex Deucher
2019-06-17 19:25   ` [PATCH 134/459] drm/amdkfd: Allocate hiq and sdma mqd from mqd trunk " Alex Deucher
2019-06-17 19:26   ` [PATCH 135/459] drm/amdkfd: Introduce XGMI SDMA queue type " Alex Deucher
2019-06-17 19:26   ` [PATCH 136/459] drm/amdkfd: Delete alloc_format field from map_queue struct " Alex Deucher
2019-06-17 19:26   ` [PATCH 137/459] drm/amdkfd: update gfx10 support for latest kfd changes Alex Deucher
2019-06-17 19:26   ` [PATCH 138/459] drm/amdkfd: add more navi10 pci ids Alex Deucher
2019-06-17 19:26   ` [PATCH 139/459] drm/amdgpu: add Navi10 " Alex Deucher
2019-06-17 19:26   ` [PATCH 140/459] drm/amdgpu: add to set navi ip blocks Alex Deucher
2019-06-17 19:26   ` [PATCH 141/459] drm/amd/powerplay: update smu v11 ppsmc header Alex Deucher
2019-06-17 19:26   ` [PATCH 142/459] drm/amd/powerplay: update smu 11 driver if header for navi10 Alex Deucher
2019-06-17 19:26   ` [PATCH 143/459] drm/amd/powerplay: fix the mp/smuio " Alex Deucher
2019-06-17 19:26   ` [PATCH 144/459] drm/amd/powerplay: introduce the navi10 pptable implementation Alex Deucher
2019-06-17 19:26   ` [PATCH 145/459] drm/amd/powerplay: set smu v11 funcs for navi10 Alex Deucher
2019-06-17 19:26   ` [PATCH 146/459] drm/amd/powerplay: add navi10 smc ucode init and navi10 ppt functions setting Alex Deucher
2019-06-17 19:26   ` [PATCH 147/459] drm/amd/powerplay: move bootup value before read pptable from vbios Alex Deucher
2019-06-17 19:26   ` [PATCH 148/459] drm/amd/powerplay: enable backdoor smu fw loading (v2) Alex Deucher
2019-06-17 19:26   ` [PATCH 149/459] drm/amd/powerplay: update smu11 driver if header for navi10 (v2) Alex Deucher
2019-06-17 19:26   ` [PATCH 150/459] drm/amdgpu: bump smc firmware header version to v2 (v2) Alex Deucher
2019-06-17 19:26   ` [PATCH 151/459] drm/amdgpu: fix the issue of checking on message mapping Alex Deucher
2019-06-17 19:26   ` [PATCH 152/459] drm/amd/powerplay: smu needs to be initialized after rlc in direct mode Alex Deucher
2019-06-17 19:26   ` [PATCH 153/459] drm/amd/powerplay: introduce the function to load the soft pptable for navi10 (v2) Alex Deucher
2019-06-17 19:26   ` [PATCH 154/459] drm/amd/powerplay: modify the feature mask to enable gfx/soc dpm Alex Deucher
2019-06-17 19:26   ` [PATCH 155/459] drm/amd/powerplay: skip od feature on navi10 for the moment Alex Deucher
2019-06-17 19:26   ` [PATCH 156/459] drm/amd/powerplay: enable power features Alex Deucher
2019-06-17 19:26   ` [PATCH 157/459] drm/amd/powerplay: move the funciton of conv_profile_to_workload to asic file Alex Deucher
2019-06-17 19:26   ` [PATCH 158/459] drm/amd/powerplay: move the function of get[set]_power_profile " Alex Deucher
2019-06-17 19:26   ` [PATCH 159/459] drm/amd/powerplay: move the function of uvd&vce dpm " Alex Deucher
2019-06-17 19:26   ` [PATCH 160/459] drm/amd/powerplay: move the function of read_sensor " Alex Deucher
2019-06-17 19:26   ` [PATCH 161/459] drm/amd/powerplay: move the function of is_dpm_running " Alex Deucher
2019-06-17 19:26   ` [PATCH 162/459] drm/amd/powerplay: add smu11 smu_if_version check for navi10 Alex Deucher
2019-06-17 19:26   ` [PATCH 163/459] drm/amd/powerplay: implement smc firmware v2.1 for smu11 Alex Deucher
2019-06-17 19:26   ` [PATCH 164/459] drm/amd/powerplay: remove duplicate code from smu hw init Alex Deucher
2019-06-17 19:26   ` [PATCH 165/459] drm/amd/powerplay: optimization feature mask function for asic Alex Deucher
2019-06-17 19:26   ` [PATCH 166/459] drm/amd/powerplay: add allowed feature mask for navi10 Alex Deucher
2019-06-17 19:26   ` [PATCH 167/459] drm/amd: add gfxoff support on navi10 Alex Deucher
2019-06-17 19:26   ` [PATCH 168/459] drm/amd/amdgpu: fw version check with gfxoff Alex Deucher
2019-06-17 19:26   ` [PATCH 169/459] drm/amd/powerplay: gfxoff-seperate the Vega20 case Alex Deucher
2019-06-17 19:26   ` [PATCH 170/459] drm/amd/powerplay: enable DCEFCLK dpm support Alex Deucher
2019-06-17 19:26   ` [PATCH 171/459] drm/amdgpu: enable sw smu driver for navi10 by default Alex Deucher
2019-06-17 19:26   ` [PATCH 172/459] drm/amd/powerplay: introduce smu clk type to handle ppclk for each asic Alex Deucher
2019-06-17 19:26   ` [PATCH 173/459] drm/amd/powerplay: introduce smu feature type to handle feature mask " Alex Deucher
2019-06-17 19:26   ` [PATCH 174/459] drm/amd/powerplay: introduce smu table id type to handle the smu table " Alex Deucher
2019-06-17 19:26   ` [PATCH 175/459] drm/amd/powerplay: init table_count for smu tables on asic level Alex Deucher
2019-06-17 19:26   ` [PATCH 176/459] drm/amd/powerplay: add tables_init interface for each asic Alex Deucher
2019-06-17 19:26   ` [PATCH 177/459] drm/amd/powerplay/smu11: remove smu_update_table_with_arg Alex Deucher
2019-06-17 19:26   ` [PATCH 178/459] drm/amd/powerplay: modify smu_update_table to use SMU_TABLE_xxx as the input Alex Deucher
2019-06-17 19:26   ` [PATCH 179/459] drm/amd/powerplay: use the table size member in the structure instead of getting directly Alex Deucher
2019-06-17 19:26   ` [PATCH 180/459] drm/amd/powerplay: move PPTable_t uses into asic level Alex Deucher
2019-06-17 19:26   ` [PATCH 181/459] drm/amd/powerplay: move SmuMetrics_t " Alex Deucher
2019-06-17 19:26   ` [PATCH 182/459] drm/amd/powerplay: move Watermarks_t " Alex Deucher
2019-06-17 19:26   ` [PATCH 183/459] drm/amd/powerplay: introduce smu power source type to handle AC/DC source for each asic Alex Deucher
2019-06-17 19:26   ` [PATCH 184/459] drm/amd/powerplay: move getting MAX_FAN_RPM value to asic level Alex Deucher
2019-06-17 19:26   ` [PATCH 185/459] drm/amd/powerplay: don't include the smu11 driver if header in smu v11 (v2) Alex Deucher
2019-06-17 19:26   ` [PATCH 186/459] drm/amd/powerplay: fix the incorrect type of pptable Alex Deucher
2019-06-17 19:26   ` [PATCH 187/459] drm/amd/powerplay: do not set dpm_enabled flag before VCN/DCN DPM is workable Alex Deucher
2019-06-17 19:26   ` [PATCH 188/459] drm/amdgpu/gfx10: update gfx golden settings Alex Deucher
2019-06-17 19:26   ` [PATCH 189/459] drm/amdgpu: disable some gfx light sleep Alex Deucher
2019-06-17 19:26   ` [PATCH 190/459] drm/amdgpu/gfx10: fix resume failure when enabling async gfx ring Alex Deucher
2019-06-17 19:26   ` [PATCH 191/459] drm/amd/powerplay: update smu11_driver_if_navi10.h Alex Deucher
2019-06-17 19:26   ` [PATCH 192/459] drm/amdgpu: mask some pm interfaces for navi10 because they are changed or not workable so far Alex Deucher
2019-06-17 19:26   ` [PATCH 193/459] drm/amd/powerplay: set dpm_enabled flag but don't enable vcn dpm Alex Deucher
