From: Dave Jiang <dave.jiang@intel.com>
To: Dan Williams <dan.j.williams@intel.com>,
	<linux-cxl@vger.kernel.org>, <nvdimm@lists.linux.dev>
Cc: <ira.weiny@intel.com>, <vishal.l.verma@intel.com>,
	<alison.schofield@intel.com>, <Jonathan.Cameron@huawei.com>,
	<dave@stgolabs.net>
Subject: Re: [PATCH v2 1/3] cxl: Change 'struct cxl_memdev_state' *_perf_list to single 'struct cxl_dpa_perf'
Date: Wed, 31 Jan 2024 16:35:41 -0700	[thread overview]
Message-ID: <01cb8ae6-f436-422a-9fd3-92eda9dce3b7@intel.com> (raw)
In-Reply-To: <65bacb4c3a677_37ad294ab@dwillia2-xfh.jf.intel.com.notmuch>



On 1/31/24 15:35, Dan Williams wrote:
> Dave Jiang wrote:
>> In order to expose the qos_class sysfs attributes under the 'ram' and
>> 'pmem' sub-directories, the attributes must be defined as static
>> attributes rather than registered via driver->dev_groups. To avoid
>> implementing locking for access to the 'struct cxl_dpa_perf' lists,
>> convert each list to a single 'struct cxl_dpa_perf' entry in
>> preparation for moving the attributes to static definitions.
>>
>> Link: https://lore.kernel.org/linux-cxl/65b200ba228f_2d43c29468@dwillia2-mobl3.amr.corp.intel.com.notmuch/
>> Suggested-by: Dan Williams <dan.j.williams@intel.com>
>> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
>> ---
>>  drivers/cxl/core/cdat.c | 90 +++++++++++++----------------------------
>>  drivers/cxl/core/mbox.c |  4 +-
>>  drivers/cxl/cxlmem.h    | 10 ++---
>>  drivers/cxl/mem.c       | 25 ++++--------
>>  4 files changed, 42 insertions(+), 87 deletions(-)
> 
> Oh, wow, this looks wonderful!
> 
> I was expecting the lists to still be there, just moved out of 'struct
> cxl_dev_state'. Am I reading this right that the work to select and
> validate the "best" performance per partition can be done without any
> list walking?
> If so, great!

I've not encountered more than one DSMAS per partition in the CDAT on hardware so far. I don't see why we can't stick with the simple case until we need something more complex.
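
Roughly, the shape after this patch is one embedded entry per partition instead of a list. A minimal sketch; only ram_perf/pmem_perf, qos_class, and CXL_QOS_CLASS_INVALID come from the patch, the other field names here are illustrative:

#include <linux/range.h>

/*
 * One DSMAS-derived performance entry per partition. qos_class starts
 * out as CXL_QOS_CLASS_INVALID and is only filled in when CDAT parsing
 * finds a DSMAS covering that partition's DPA range.
 */
struct cxl_dpa_perf {
	struct range dpa_range;		/* illustrative */
	int qos_class;
};

struct cxl_memdev_state {
	/* ... other members elided ... */
	struct cxl_dpa_perf ram_perf;	/* volatile partition */
	struct cxl_dpa_perf pmem_perf;	/* persistent partition */
};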

DJ

> 
> [..]
>> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
>> index c5c9d8e0d88d..a62099e47d71 100644
>> --- a/drivers/cxl/mem.c
>> +++ b/drivers/cxl/mem.c
>> @@ -221,16 +221,13 @@ static ssize_t ram_qos_class_show(struct device *dev,
>>  	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
>>  	struct cxl_dev_state *cxlds = cxlmd->cxlds;
>>  	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>> -	struct cxl_dpa_perf *dpa_perf;
>> +	struct cxl_dpa_perf *dpa_perf = &mds->ram_perf;
>>  
>>  	if (!dev->driver)
>>  		return -ENOENT;
> 
> This check can be deleted: dev->driver is racy to reference without
> holding the device_lock(), and nothing in this routine requires the
> device to be locked anyway.
> 
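
For reference, the only non-racy form of that guard would be something along these lines, taken under the device lock, which is exactly the locking this routine otherwise has no reason to take:

	device_lock(dev);
	if (!dev->driver) {
		device_unlock(dev);
		return -ENOENT;
	}
	/* ... read driver-bound state while the lock pins dev->driver ... */
	device_unlock(dev);
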
>>  
>> -	if (list_empty(&mds->ram_perf_list))
>> -		return -ENOENT;
>> -
>> -	dpa_perf = list_first_entry(&mds->ram_perf_list, struct cxl_dpa_perf,
>> -				    list);
>> +	if (dpa_perf->qos_class == CXL_QOS_CLASS_INVALID)
>> +		return -ENODATA;
> 
> As long as is_visible() checks for CXL_QOS_CLASS_INVALID, there is no
> need for error handling in this _show() routine.
> 
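
For reference, that gating would look roughly like the sketch below (the callback, group, and attribute array names here are illustrative, not taken from the series):

static DEVICE_ATTR_RO(ram_qos_class);	/* wraps ram_qos_class_show() above */

static struct attribute *cxl_ram_attributes[] = {
	&dev_attr_ram_qos_class.attr,
	NULL,
};

static umode_t cxl_ram_visible(struct kobject *kobj, struct attribute *a,
			       int n)
{
	struct device *dev = kobj_to_dev(kobj);
	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);

	/* hide ram/qos_class entirely when no DSMAS matched the partition */
	if (mds->ram_perf.qos_class == CXL_QOS_CLASS_INVALID)
		return 0;

	return a->mode;
}

static const struct attribute_group cxl_memdev_ram_attribute_group = {
	.name = "ram",
	.attrs = cxl_ram_attributes,
	.is_visible = cxl_ram_visible,
};
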
>>  
>>  	return sysfs_emit(buf, "%d\n", dpa_perf->qos_class);
>>  }
>> @@ -244,16 +241,10 @@ static ssize_t pmem_qos_class_show(struct device *dev,
>>  	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
>>  	struct cxl_dev_state *cxlds = cxlmd->cxlds;
>>  	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>> -	struct cxl_dpa_perf *dpa_perf;
>> +	struct cxl_dpa_perf *dpa_perf = &mds->pmem_perf;
>>  
>> -	if (!dev->driver)
>> -		return -ENOENT;
> 
> Ah, good, you deleted it this time.
> 
>> -
>> -	if (list_empty(&mds->pmem_perf_list))
>> -		return -ENOENT;
>> -
>> -	dpa_perf = list_first_entry(&mds->pmem_perf_list, struct cxl_dpa_perf,
>> -				    list);
>> +	if (dpa_perf->qos_class == CXL_QOS_CLASS_INVALID)
>> +		return -ENODATA;
> 
> This can go.
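
With both of those checks gone, each _show() routine reduces to little more than the sysfs_emit(). Roughly, as the net effect of the comments above (not the actual respin):

static ssize_t pmem_qos_class_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);

	return sysfs_emit(buf, "%d\n", mds->pmem_perf.qos_class);
}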

Thread overview: 5+ messages
2024-01-30 22:29 [PATCH v2 1/3] cxl: Change 'struct cxl_memdev_state' *_perf_list to single 'struct cxl_dpa_perf' Dave Jiang
2024-01-30 22:29 ` [PATCH v2 2/3] cxl: Fix sysfs export of qos_class for memdev Dave Jiang
2024-01-30 22:29 ` [PATCH v2 3/3] cxl/test: Add support for qos_class checking Dave Jiang
2024-01-31 22:35 ` [PATCH v2 1/3] cxl: Change 'struct cxl_memdev_state' *_perf_list to single 'struct cxl_dpa_perf' Dan Williams
2024-01-31 23:35   ` Dave Jiang [this message]
