* [PATCH v3 0/6] CXL core reorganization
From: Dan Williams @ 2021-07-30 20:06 UTC
  To: linux-cxl
  Cc: Ben Widawsky, Jonathan.Cameron, vishal.l.verma, alison.schofield

Changes since v2 [1]:
- Rebase on top of the Makefile changes
- Split register and pmem moving into 2 independent patches
- Drop inclusion of mem.h and cxl.h from core.h. I.e. require all
  compilation units to directly include only the headers they need.
- Squash / rewrite "cxl/mem: Move character device region creation" to
  move the char-dev infrastructure to drivers/cxl/core/memdev.c
- Rewrite the justification in some of the changelogs
- Rewrite "cxl: Pass fops and shutdown to memdev creation" to introduce
  cdevm_file_operations.

[1]: https://lore.kernel.org/linux-cxl/20210720180742.89992-1-ben.widawsky@intel.com/

---

Given Ben is out for a bit, I have folded my review comments into the set
directly.

Original Cover from Ben:

The main motivation of the patch series is to establish the cxl_core driver in
its own directory and modularize it. Specifically, the patch series aims to
achieve three things:
1. Move existing core functionality to a new directory.
2. Split existing core functionality into multiple files.
3. Migrate memdev functionality into core.

#1 is trivially accomplished with git mv. The file itself is renamed back to
bus.c since the goal is to break up core functionality into multiple files, and
so the name core.c doesn't make sense in that context.

#2 is also trivially accomplished via cut/paste.

#3 is slightly invasive in that it makes some functional changes to improve the
existing interfaces and make them more generic. The rest of the change is
cut/paste. This is also the only part of the series with a runtime functional
change: some interfaces are removed from cxl_pci, moved into cxl_core, and
exported for other drivers to use.
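
Note that "exported for other drivers to use" also has a symbol-namespace
implication: the new core/Makefile builds cxl_core with
-DDEFAULT_SYMBOL_NAMESPACE=CXL, so the exports land in the CXL namespace and
any consumer outside cxl_core must import it. A minimal, hypothetical
consumer sketch (names are illustrative, not part of this series):

  #include <linux/module.h>
  #include <linux/device.h>
  #include "cxl.h"

  /* count cxl_nvdimm devices via symbols exported by cxl_core */
  static int example_count(struct device *dev, void *data)
  {
          int *count = data;

          if (is_cxl_nvdimm(dev))
                  (*count)++;
          return 0;
  }

  static int __init cxl_example_init(void)
  {
          int count = 0;

          bus_for_each_dev(&cxl_bus_type, NULL, &count, example_count);
          pr_info("found %d cxl_nvdimm device(s)\n", count);
          return 0;
  }
  module_init(cxl_example_init);

  MODULE_LICENSE("GPL v2");
  MODULE_IMPORT_NS(CXL);  /* resolves cxl_core's namespaced exports */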

---

Ben Widawsky (3):
      cxl: Move cxl_core to new directory
      cxl/core: Improve CXL core kernel docs
      cxl/core: Move memdev management to core

Dan Williams (3):
      cxl/core: Move pmem functionality
      cxl/core: Move register mapping infrastructure
      cxl/pci: Introduce cdevm_file_operations


 Documentation/driver-api/cxl/memory-devices.rst |    8 
 drivers/cxl/Makefile                            |    4 
 drivers/cxl/core/Makefile                       |    8 
 drivers/cxl/core/bus.c                          |  463 +----------------------
 drivers/cxl/core/core.h                         |   20 +
 drivers/cxl/core/memdev.c                       |  245 ++++++++++++
 drivers/cxl/core/pmem.c                         |  204 ++++++++++
 drivers/cxl/core/regs.c                         |  235 ++++++++++++
 drivers/cxl/mem.h                               |   26 +
 drivers/cxl/pci.c                               |  257 +------------
 10 files changed, 791 insertions(+), 679 deletions(-)
 create mode 100644 drivers/cxl/core/Makefile
 rename drivers/cxl/{core.c => core/bus.c} (58%)
 create mode 100644 drivers/cxl/core/core.h
 create mode 100644 drivers/cxl/core/memdev.c
 create mode 100644 drivers/cxl/core/pmem.c
 create mode 100644 drivers/cxl/core/regs.c

base-commit: ff1176468d368232b684f75e82563369208bc371


* [PATCH v3 1/6] cxl: Move cxl_core to new directory
From: Dan Williams @ 2021-07-30 20:07 UTC
  To: linux-cxl
  Cc: Ben Widawsky, Jonathan.Cameron, vishal.l.verma, alison.schofield

From: Ben Widawsky <ben.widawsky@intel.com>

CXL core is growing, and it's already arguably unmanageable. To support
future growth, move core functionality to a new directory and rename the
file to represent just bus support. Future work will remove non-bus
functionality.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 Documentation/driver-api/cxl/memory-devices.rst |    2 +-
 drivers/cxl/Makefile                            |    4 +---
 drivers/cxl/core/Makefile                       |    5 +++++
 drivers/cxl/core/bus.c                          |    4 ++--
 4 files changed, 9 insertions(+), 6 deletions(-)
 create mode 100644 drivers/cxl/core/Makefile
 rename drivers/cxl/{core.c => core/bus.c} (99%)

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index 487ce4f41d77..a86e2c7c551a 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -36,7 +36,7 @@ CXL Core
 .. kernel-doc:: drivers/cxl/cxl.h
    :internal:
 
-.. kernel-doc:: drivers/cxl/core.c
+.. kernel-doc:: drivers/cxl/core/bus.c
    :doc: cxl core
 
 External Interfaces
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
index 32954059b37b..d1aaabc940f3 100644
--- a/drivers/cxl/Makefile
+++ b/drivers/cxl/Makefile
@@ -1,11 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_CXL_BUS) += cxl_core.o
+obj-$(CONFIG_CXL_BUS) += core/
 obj-$(CONFIG_CXL_MEM) += cxl_pci.o
 obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
 obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o
 
-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL
-cxl_core-y := core.o
 cxl_pci-y := pci.o
 cxl_acpi-y := acpi.o
 cxl_pmem-y := pmem.o
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
new file mode 100644
index 000000000000..c65e9f61abe9
--- /dev/null
+++ b/drivers/cxl/core/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_CXL_BUS) += cxl_core.o
+
+ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL -I./drivers/cxl
+cxl_core-y := bus.o
diff --git a/drivers/cxl/core.c b/drivers/cxl/core/bus.c
similarity index 99%
rename from drivers/cxl/core.c
rename to drivers/cxl/core/bus.c
index a2e4d54fc7bc..6ce04e3976d2 100644
--- a/drivers/cxl/core.c
+++ b/drivers/cxl/core/bus.c
@@ -6,8 +6,8 @@
 #include <linux/pci.h>
 #include <linux/slab.h>
 #include <linux/idr.h>
-#include "cxl.h"
-#include "mem.h"
+#include <cxl.h>
+#include <mem.h>
 
 /**
  * DOC: cxl core



* [PATCH v3 2/6] cxl/core: Improve CXL core kernel docs
From: Dan Williams @ 2021-07-30 20:07 UTC
  To: linux-cxl
  Cc: Ben Widawsky, Jonathan.Cameron, vishal.l.verma, alison.schofield

From: Ben Widawsky <ben.widawsky@intel.com>

Now that CXL core's role is well understood, the documentation should
reflect that information.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/bus.c |   11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 6ce04e3976d2..647b8a00ab36 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -12,8 +12,15 @@
 /**
  * DOC: cxl core
  *
- * The CXL core provides a sysfs hierarchy for control devices and a rendezvous
- * point for cross-device interleave coordination through cxl ports.
+ * The CXL core provides a set of interfaces that can be consumed by CXL aware
+ * drivers. The interfaces allow for creation, modification, and destruction of
+ * regions, memory devices, ports, and decoders. CXL aware drivers must register
+ * with the CXL core via these interfaces in order to be able to participate in
+ * cross-device interleave coordination. The CXL core also establishes and
+ * maintains the bridge to the nvdimm subsystem.
+ *
+ * CXL core introduces sysfs hierarchy to control the devices that are
+ * instantiated by the core.
  */
 
 static DEFINE_IDA(cxl_port_ida);



* [PATCH v3 3/6] cxl/core: Move pmem functionality
From: Dan Williams @ 2021-07-30 20:07 UTC
  To: linux-cxl
  Cc: Ben Widawsky, Jonathan.Cameron, vishal.l.verma, alison.schofield

Refactor the pmem / nvdimm-bridge functionality from core/bus.c into
core/pmem.c. Introduce drivers/cxl/core/core.h to share data structures and
helpers between the core bus and the other functionality that registers
devices on the bus.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 Documentation/driver-api/cxl/memory-devices.rst |    3 
 drivers/cxl/core/Makefile                       |    1 
 drivers/cxl/core/bus.c                          |  205 -----------------------
 drivers/cxl/core/core.h                         |   17 ++
 drivers/cxl/core/pmem.c                         |  204 +++++++++++++++++++++++
 5 files changed, 228 insertions(+), 202 deletions(-)
 create mode 100644 drivers/cxl/core/core.h
 create mode 100644 drivers/cxl/core/pmem.c

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index a86e2c7c551a..e65c0ba82229 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -39,6 +39,9 @@ CXL Core
 .. kernel-doc:: drivers/cxl/core/bus.c
    :doc: cxl core
 
+.. kernel-doc:: drivers/cxl/core/pmem.c
+   :internal:
+
 External Interfaces
 ===================
 
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index c65e9f61abe9..3be8ef61fda3 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -3,3 +3,4 @@ obj-$(CONFIG_CXL_BUS) += cxl_core.o
 
 ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL -I./drivers/cxl
 cxl_core-y := bus.o
+cxl_core-y += pmem.o
diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 647b8a00ab36..d25fe6c235f5 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -8,6 +8,7 @@
 #include <linux/idr.h>
 #include <cxl.h>
 #include <mem.h>
+#include "core.h"
 
 /**
  * DOC: cxl core
@@ -37,7 +38,7 @@ static struct attribute *cxl_base_attributes[] = {
 	NULL,
 };
 
-static struct attribute_group cxl_base_attribute_group = {
+struct attribute_group cxl_base_attribute_group = {
 	.attrs = cxl_base_attributes,
 };
 
@@ -514,11 +515,6 @@ cxl_decoder_alloc(struct cxl_port *port, int nr_targets, resource_size_t base,
 	return ERR_PTR(rc);
 }
 
-static void unregister_dev(void *dev)
-{
-	device_unregister(dev);
-}
-
 struct cxl_decoder *
 devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
 		     resource_size_t base, resource_size_t len,
@@ -543,7 +539,7 @@ devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
 	if (rc)
 		goto err;
 
-	rc = devm_add_action_or_reset(host, unregister_dev, dev);
+	rc = devm_add_action_or_reset(host, unregister_cxl_dev, dev);
 	if (rc)
 		return ERR_PTR(rc);
 	return cxld;
@@ -626,201 +622,6 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
 }
 EXPORT_SYMBOL_GPL(cxl_probe_component_regs);
 
-static void cxl_nvdimm_bridge_release(struct device *dev)
-{
-	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);
-
-	kfree(cxl_nvb);
-}
-
-static const struct attribute_group *cxl_nvdimm_bridge_attribute_groups[] = {
-	&cxl_base_attribute_group,
-	NULL,
-};
-
-static const struct device_type cxl_nvdimm_bridge_type = {
-	.name = "cxl_nvdimm_bridge",
-	.release = cxl_nvdimm_bridge_release,
-	.groups = cxl_nvdimm_bridge_attribute_groups,
-};
-
-struct cxl_nvdimm_bridge *to_cxl_nvdimm_bridge(struct device *dev)
-{
-	if (dev_WARN_ONCE(dev, dev->type != &cxl_nvdimm_bridge_type,
-			  "not a cxl_nvdimm_bridge device\n"))
-		return NULL;
-	return container_of(dev, struct cxl_nvdimm_bridge, dev);
-}
-EXPORT_SYMBOL_GPL(to_cxl_nvdimm_bridge);
-
-static struct cxl_nvdimm_bridge *
-cxl_nvdimm_bridge_alloc(struct cxl_port *port)
-{
-	struct cxl_nvdimm_bridge *cxl_nvb;
-	struct device *dev;
-
-	cxl_nvb = kzalloc(sizeof(*cxl_nvb), GFP_KERNEL);
-	if (!cxl_nvb)
-		return ERR_PTR(-ENOMEM);
-
-	dev = &cxl_nvb->dev;
-	cxl_nvb->port = port;
-	cxl_nvb->state = CXL_NVB_NEW;
-	device_initialize(dev);
-	device_set_pm_not_required(dev);
-	dev->parent = &port->dev;
-	dev->bus = &cxl_bus_type;
-	dev->type = &cxl_nvdimm_bridge_type;
-
-	return cxl_nvb;
-}
-
-static void unregister_nvb(void *_cxl_nvb)
-{
-	struct cxl_nvdimm_bridge *cxl_nvb = _cxl_nvb;
-	bool flush;
-
-	/*
-	 * If the bridge was ever activated then there might be in-flight state
-	 * work to flush. Once the state has been changed to 'dead' then no new
-	 * work can be queued by user-triggered bind.
-	 */
-	device_lock(&cxl_nvb->dev);
-	flush = cxl_nvb->state != CXL_NVB_NEW;
-	cxl_nvb->state = CXL_NVB_DEAD;
-	device_unlock(&cxl_nvb->dev);
-
-	/*
-	 * Even though the device core will trigger device_release_driver()
-	 * before the unregister, it does not know about the fact that
-	 * cxl_nvdimm_bridge_driver defers ->remove() work. So, do the driver
-	 * release not and flush it before tearing down the nvdimm device
-	 * hierarchy.
-	 */
-	device_release_driver(&cxl_nvb->dev);
-	if (flush)
-		flush_work(&cxl_nvb->state_work);
-	device_unregister(&cxl_nvb->dev);
-}
-
-struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
-						     struct cxl_port *port)
-{
-	struct cxl_nvdimm_bridge *cxl_nvb;
-	struct device *dev;
-	int rc;
-
-	if (!IS_ENABLED(CONFIG_CXL_PMEM))
-		return ERR_PTR(-ENXIO);
-
-	cxl_nvb = cxl_nvdimm_bridge_alloc(port);
-	if (IS_ERR(cxl_nvb))
-		return cxl_nvb;
-
-	dev = &cxl_nvb->dev;
-	rc = dev_set_name(dev, "nvdimm-bridge");
-	if (rc)
-		goto err;
-
-	rc = device_add(dev);
-	if (rc)
-		goto err;
-
-	rc = devm_add_action_or_reset(host, unregister_nvb, cxl_nvb);
-	if (rc)
-		return ERR_PTR(rc);
-
-	return cxl_nvb;
-
-err:
-	put_device(dev);
-	return ERR_PTR(rc);
-}
-EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm_bridge);
-
-static void cxl_nvdimm_release(struct device *dev)
-{
-	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
-
-	kfree(cxl_nvd);
-}
-
-static const struct attribute_group *cxl_nvdimm_attribute_groups[] = {
-	&cxl_base_attribute_group,
-	NULL,
-};
-
-static const struct device_type cxl_nvdimm_type = {
-	.name = "cxl_nvdimm",
-	.release = cxl_nvdimm_release,
-	.groups = cxl_nvdimm_attribute_groups,
-};
-
-bool is_cxl_nvdimm(struct device *dev)
-{
-	return dev->type == &cxl_nvdimm_type;
-}
-EXPORT_SYMBOL_GPL(is_cxl_nvdimm);
-
-struct cxl_nvdimm *to_cxl_nvdimm(struct device *dev)
-{
-	if (dev_WARN_ONCE(dev, !is_cxl_nvdimm(dev),
-			  "not a cxl_nvdimm device\n"))
-		return NULL;
-	return container_of(dev, struct cxl_nvdimm, dev);
-}
-EXPORT_SYMBOL_GPL(to_cxl_nvdimm);
-
-static struct cxl_nvdimm *cxl_nvdimm_alloc(struct cxl_memdev *cxlmd)
-{
-	struct cxl_nvdimm *cxl_nvd;
-	struct device *dev;
-
-	cxl_nvd = kzalloc(sizeof(*cxl_nvd), GFP_KERNEL);
-	if (!cxl_nvd)
-		return ERR_PTR(-ENOMEM);
-
-	dev = &cxl_nvd->dev;
-	cxl_nvd->cxlmd = cxlmd;
-	device_initialize(dev);
-	device_set_pm_not_required(dev);
-	dev->parent = &cxlmd->dev;
-	dev->bus = &cxl_bus_type;
-	dev->type = &cxl_nvdimm_type;
-
-	return cxl_nvd;
-}
-
-int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd)
-{
-	struct cxl_nvdimm *cxl_nvd;
-	struct device *dev;
-	int rc;
-
-	cxl_nvd = cxl_nvdimm_alloc(cxlmd);
-	if (IS_ERR(cxl_nvd))
-		return PTR_ERR(cxl_nvd);
-
-	dev = &cxl_nvd->dev;
-	rc = dev_set_name(dev, "pmem%d", cxlmd->id);
-	if (rc)
-		goto err;
-
-	rc = device_add(dev);
-	if (rc)
-		goto err;
-
-	dev_dbg(host, "%s: register %s\n", dev_name(dev->parent),
-		dev_name(dev));
-
-	return devm_add_action_or_reset(host, unregister_dev, dev);
-
-err:
-	put_device(dev);
-	return rc;
-}
-EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm);
-
 /**
  * cxl_probe_device_regs() - Detect CXL Device register blocks
  * @dev: Host device of the @base mapping
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
new file mode 100644
index 000000000000..49045daf8bd7
--- /dev/null
+++ b/drivers/cxl/core/core.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2020 Intel Corporation. */
+
+#ifndef __CXL_CORE_H__
+#define __CXL_CORE_H__
+
+extern const struct device_type cxl_nvdimm_bridge_type;
+extern const struct device_type cxl_nvdimm_type;
+
+extern struct attribute_group cxl_base_attribute_group;
+
+static inline void unregister_cxl_dev(void *dev)
+{
+	device_unregister(dev);
+}
+
+#endif /* __CXL_CORE_H__ */
diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c
new file mode 100644
index 000000000000..bb3ef0cdd74d
--- /dev/null
+++ b/drivers/cxl/core/pmem.c
@@ -0,0 +1,204 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. */
+
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <cxl.h>
+#include <mem.h>
+
+#include "core.h"
+
+static void cxl_nvdimm_bridge_release(struct device *dev)
+{
+	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);
+
+	kfree(cxl_nvb);
+}
+
+static const struct attribute_group *cxl_nvdimm_bridge_attribute_groups[] = {
+	&cxl_base_attribute_group,
+	NULL,
+};
+
+const struct device_type cxl_nvdimm_bridge_type = {
+	.name = "cxl_nvdimm_bridge",
+	.release = cxl_nvdimm_bridge_release,
+	.groups = cxl_nvdimm_bridge_attribute_groups,
+};
+
+struct cxl_nvdimm_bridge *to_cxl_nvdimm_bridge(struct device *dev)
+{
+	if (dev_WARN_ONCE(dev, dev->type != &cxl_nvdimm_bridge_type,
+			  "not a cxl_nvdimm_bridge device\n"))
+		return NULL;
+	return container_of(dev, struct cxl_nvdimm_bridge, dev);
+}
+EXPORT_SYMBOL_GPL(to_cxl_nvdimm_bridge);
+
+static struct cxl_nvdimm_bridge *
+cxl_nvdimm_bridge_alloc(struct cxl_port *port)
+{
+	struct cxl_nvdimm_bridge *cxl_nvb;
+	struct device *dev;
+
+	cxl_nvb = kzalloc(sizeof(*cxl_nvb), GFP_KERNEL);
+	if (!cxl_nvb)
+		return ERR_PTR(-ENOMEM);
+
+	dev = &cxl_nvb->dev;
+	cxl_nvb->port = port;
+	cxl_nvb->state = CXL_NVB_NEW;
+	device_initialize(dev);
+	device_set_pm_not_required(dev);
+	dev->parent = &port->dev;
+	dev->bus = &cxl_bus_type;
+	dev->type = &cxl_nvdimm_bridge_type;
+
+	return cxl_nvb;
+}
+
+static void unregister_nvb(void *_cxl_nvb)
+{
+	struct cxl_nvdimm_bridge *cxl_nvb = _cxl_nvb;
+	bool flush;
+
+	/*
+	 * If the bridge was ever activated then there might be in-flight state
+	 * work to flush. Once the state has been changed to 'dead' then no new
+	 * work can be queued by user-triggered bind.
+	 */
+	device_lock(&cxl_nvb->dev);
+	flush = cxl_nvb->state != CXL_NVB_NEW;
+	cxl_nvb->state = CXL_NVB_DEAD;
+	device_unlock(&cxl_nvb->dev);
+
+	/*
+	 * Even though the device core will trigger device_release_driver()
+	 * before the unregister, it does not know about the fact that
+	 * cxl_nvdimm_bridge_driver defers ->remove() work. So, do the driver
+	 * release not and flush it before tearing down the nvdimm device
+	 * hierarchy.
+	 */
+	device_release_driver(&cxl_nvb->dev);
+	if (flush)
+		flush_work(&cxl_nvb->state_work);
+	device_unregister(&cxl_nvb->dev);
+}
+
+struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
+						     struct cxl_port *port)
+{
+	struct cxl_nvdimm_bridge *cxl_nvb;
+	struct device *dev;
+	int rc;
+
+	if (!IS_ENABLED(CONFIG_CXL_PMEM))
+		return ERR_PTR(-ENXIO);
+
+	cxl_nvb = cxl_nvdimm_bridge_alloc(port);
+	if (IS_ERR(cxl_nvb))
+		return cxl_nvb;
+
+	dev = &cxl_nvb->dev;
+	rc = dev_set_name(dev, "nvdimm-bridge");
+	if (rc)
+		goto err;
+
+	rc = device_add(dev);
+	if (rc)
+		goto err;
+
+	rc = devm_add_action_or_reset(host, unregister_nvb, cxl_nvb);
+	if (rc)
+		return ERR_PTR(rc);
+
+	return cxl_nvb;
+
+err:
+	put_device(dev);
+	return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm_bridge);
+
+static void cxl_nvdimm_release(struct device *dev)
+{
+	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
+
+	kfree(cxl_nvd);
+}
+
+static const struct attribute_group *cxl_nvdimm_attribute_groups[] = {
+	&cxl_base_attribute_group,
+	NULL,
+};
+
+const struct device_type cxl_nvdimm_type = {
+	.name = "cxl_nvdimm",
+	.release = cxl_nvdimm_release,
+	.groups = cxl_nvdimm_attribute_groups,
+};
+
+bool is_cxl_nvdimm(struct device *dev)
+{
+	return dev->type == &cxl_nvdimm_type;
+}
+EXPORT_SYMBOL_GPL(is_cxl_nvdimm);
+
+struct cxl_nvdimm *to_cxl_nvdimm(struct device *dev)
+{
+	if (dev_WARN_ONCE(dev, !is_cxl_nvdimm(dev),
+			  "not a cxl_nvdimm device\n"))
+		return NULL;
+	return container_of(dev, struct cxl_nvdimm, dev);
+}
+EXPORT_SYMBOL_GPL(to_cxl_nvdimm);
+
+static struct cxl_nvdimm *cxl_nvdimm_alloc(struct cxl_memdev *cxlmd)
+{
+	struct cxl_nvdimm *cxl_nvd;
+	struct device *dev;
+
+	cxl_nvd = kzalloc(sizeof(*cxl_nvd), GFP_KERNEL);
+	if (!cxl_nvd)
+		return ERR_PTR(-ENOMEM);
+
+	dev = &cxl_nvd->dev;
+	cxl_nvd->cxlmd = cxlmd;
+	device_initialize(dev);
+	device_set_pm_not_required(dev);
+	dev->parent = &cxlmd->dev;
+	dev->bus = &cxl_bus_type;
+	dev->type = &cxl_nvdimm_type;
+
+	return cxl_nvd;
+}
+
+int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd)
+{
+	struct cxl_nvdimm *cxl_nvd;
+	struct device *dev;
+	int rc;
+
+	cxl_nvd = cxl_nvdimm_alloc(cxlmd);
+	if (IS_ERR(cxl_nvd))
+		return PTR_ERR(cxl_nvd);
+
+	dev = &cxl_nvd->dev;
+	rc = dev_set_name(dev, "pmem%d", cxlmd->id);
+	if (rc)
+		goto err;
+
+	rc = device_add(dev);
+	if (rc)
+		goto err;
+
+	dev_dbg(host, "%s: register %s\n", dev_name(dev->parent),
+		dev_name(dev));
+
+	return devm_add_action_or_reset(host, unregister_cxl_dev, dev);
+
+err:
+	put_device(dev);
+	return rc;
+}
+EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm);



* [PATCH v3 4/6] cxl/core: Move register mapping infrastructure
From: Dan Williams @ 2021-07-30 20:07 UTC
  To: linux-cxl
  Cc: Ben Widawsky, Jonathan.Cameron, vishal.l.verma, alison.schofield

The register mapping infrastructure is large enough to warrant its own
compilation unit. Moving it also cleans up an unnecessary include of <mem.h>
in core/bus.c.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 Documentation/driver-api/cxl/memory-devices.rst |    3 
 drivers/cxl/core/Makefile                       |    1 
 drivers/cxl/core/bus.c                          |  229 ----------------------
 drivers/cxl/core/regs.c                         |  235 +++++++++++++++++++++++
 4 files changed, 239 insertions(+), 229 deletions(-)
 create mode 100644 drivers/cxl/core/regs.c

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index e65c0ba82229..46847d8c70a0 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -42,6 +42,9 @@ CXL Core
 .. kernel-doc:: drivers/cxl/core/pmem.c
    :internal:
 
+.. kernel-doc:: drivers/cxl/core/regs.c
+   :internal:
+
 External Interfaces
 ===================
 
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 3be8ef61fda3..a49bd72dfed3 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_CXL_BUS) += cxl_core.o
 ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL -I./drivers/cxl
 cxl_core-y := bus.o
 cxl_core-y += pmem.o
+cxl_core-y += regs.o
diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index d25fe6c235f5..176db3058575 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -7,7 +7,6 @@
 #include <linux/slab.h>
 #include <linux/idr.h>
 #include <cxl.h>
-#include <mem.h>
 #include "core.h"
 
 /**
@@ -550,234 +549,6 @@ devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
 }
 EXPORT_SYMBOL_GPL(devm_cxl_add_decoder);
 
-/**
- * cxl_probe_component_regs() - Detect CXL Component register blocks
- * @dev: Host device of the @base mapping
- * @base: Mapping containing the HDM Decoder Capability Header
- * @map: Map object describing the register block information found
- *
- * See CXL 2.0 8.2.4 Component Register Layout and Definition
- * See CXL 2.0 8.2.5.5 CXL Device Register Interface
- *
- * Probe for component register information and return it in map object.
- */
-void cxl_probe_component_regs(struct device *dev, void __iomem *base,
-			      struct cxl_component_reg_map *map)
-{
-	int cap, cap_count;
-	u64 cap_array;
-
-	*map = (struct cxl_component_reg_map) { 0 };
-
-	/*
-	 * CXL.cache and CXL.mem registers are at offset 0x1000 as defined in
-	 * CXL 2.0 8.2.4 Table 141.
-	 */
-	base += CXL_CM_OFFSET;
-
-	cap_array = readq(base + CXL_CM_CAP_HDR_OFFSET);
-
-	if (FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, cap_array) != CM_CAP_HDR_CAP_ID) {
-		dev_err(dev,
-			"Couldn't locate the CXL.cache and CXL.mem capability array header./n");
-		return;
-	}
-
-	/* It's assumed that future versions will be backward compatible */
-	cap_count = FIELD_GET(CXL_CM_CAP_HDR_ARRAY_SIZE_MASK, cap_array);
-
-	for (cap = 1; cap <= cap_count; cap++) {
-		void __iomem *register_block;
-		u32 hdr;
-		int decoder_cnt;
-		u16 cap_id, offset;
-		u32 length;
-
-		hdr = readl(base + cap * 0x4);
-
-		cap_id = FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, hdr);
-		offset = FIELD_GET(CXL_CM_CAP_PTR_MASK, hdr);
-		register_block = base + offset;
-
-		switch (cap_id) {
-		case CXL_CM_CAP_CAP_ID_HDM:
-			dev_dbg(dev, "found HDM decoder capability (0x%x)\n",
-				offset);
-
-			hdr = readl(register_block);
-
-			decoder_cnt = cxl_hdm_decoder_count(hdr);
-			length = 0x20 * decoder_cnt + 0x10;
-
-			map->hdm_decoder.valid = true;
-			map->hdm_decoder.offset = CXL_CM_OFFSET + offset;
-			map->hdm_decoder.size = length;
-			break;
-		default:
-			dev_dbg(dev, "Unknown CM cap ID: %d (0x%x)\n", cap_id,
-				offset);
-			break;
-		}
-	}
-}
-EXPORT_SYMBOL_GPL(cxl_probe_component_regs);
-
-/**
- * cxl_probe_device_regs() - Detect CXL Device register blocks
- * @dev: Host device of the @base mapping
- * @base: Mapping of CXL 2.0 8.2.8 CXL Device Register Interface
- * @map: Map object describing the register block information found
- *
- * Probe for device register information and return it in map object.
- */
-void cxl_probe_device_regs(struct device *dev, void __iomem *base,
-			   struct cxl_device_reg_map *map)
-{
-	int cap, cap_count;
-	u64 cap_array;
-
-	*map = (struct cxl_device_reg_map){ 0 };
-
-	cap_array = readq(base + CXLDEV_CAP_ARRAY_OFFSET);
-	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
-	    CXLDEV_CAP_ARRAY_CAP_ID)
-		return;
-
-	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
-
-	for (cap = 1; cap <= cap_count; cap++) {
-		u32 offset, length;
-		u16 cap_id;
-
-		cap_id = FIELD_GET(CXLDEV_CAP_HDR_CAP_ID_MASK,
-				   readl(base + cap * 0x10));
-		offset = readl(base + cap * 0x10 + 0x4);
-		length = readl(base + cap * 0x10 + 0x8);
-
-		switch (cap_id) {
-		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
-			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
-
-			map->status.valid = true;
-			map->status.offset = offset;
-			map->status.size = length;
-			break;
-		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
-			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
-			map->mbox.valid = true;
-			map->mbox.offset = offset;
-			map->mbox.size = length;
-			break;
-		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
-			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
-			break;
-		case CXLDEV_CAP_CAP_ID_MEMDEV:
-			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
-			map->memdev.valid = true;
-			map->memdev.offset = offset;
-			map->memdev.size = length;
-			break;
-		default:
-			if (cap_id >= 0x8000)
-				dev_dbg(dev, "Vendor cap ID: %#x offset: %#x\n", cap_id, offset);
-			else
-				dev_dbg(dev, "Unknown cap ID: %#x offset: %#x\n", cap_id, offset);
-			break;
-		}
-	}
-}
-EXPORT_SYMBOL_GPL(cxl_probe_device_regs);
-
-static void __iomem *devm_cxl_iomap_block(struct device *dev,
-					  resource_size_t addr,
-					  resource_size_t length)
-{
-	void __iomem *ret_val;
-	struct resource *res;
-
-	res = devm_request_mem_region(dev, addr, length, dev_name(dev));
-	if (!res) {
-		resource_size_t end = addr + length - 1;
-
-		dev_err(dev, "Failed to request region %pa-%pa\n", &addr, &end);
-		return NULL;
-	}
-
-	ret_val = devm_ioremap(dev, addr, length);
-	if (!ret_val)
-		dev_err(dev, "Failed to map region %pr\n", res);
-
-	return ret_val;
-}
-
-int cxl_map_component_regs(struct pci_dev *pdev,
-			   struct cxl_component_regs *regs,
-			   struct cxl_register_map *map)
-{
-	struct device *dev = &pdev->dev;
-	resource_size_t phys_addr;
-	resource_size_t length;
-
-	phys_addr = pci_resource_start(pdev, map->barno);
-	phys_addr += map->block_offset;
-
-	phys_addr += map->component_map.hdm_decoder.offset;
-	length = map->component_map.hdm_decoder.size;
-	regs->hdm_decoder = devm_cxl_iomap_block(dev, phys_addr, length);
-	if (!regs->hdm_decoder)
-		return -ENOMEM;
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(cxl_map_component_regs);
-
-int cxl_map_device_regs(struct pci_dev *pdev,
-			struct cxl_device_regs *regs,
-			struct cxl_register_map *map)
-{
-	struct device *dev = &pdev->dev;
-	resource_size_t phys_addr;
-
-	phys_addr = pci_resource_start(pdev, map->barno);
-	phys_addr += map->block_offset;
-
-	if (map->device_map.status.valid) {
-		resource_size_t addr;
-		resource_size_t length;
-
-		addr = phys_addr + map->device_map.status.offset;
-		length = map->device_map.status.size;
-		regs->status = devm_cxl_iomap_block(dev, addr, length);
-		if (!regs->status)
-			return -ENOMEM;
-	}
-
-	if (map->device_map.mbox.valid) {
-		resource_size_t addr;
-		resource_size_t length;
-
-		addr = phys_addr + map->device_map.mbox.offset;
-		length = map->device_map.mbox.size;
-		regs->mbox = devm_cxl_iomap_block(dev, addr, length);
-		if (!regs->mbox)
-			return -ENOMEM;
-	}
-
-	if (map->device_map.memdev.valid) {
-		resource_size_t addr;
-		resource_size_t length;
-
-		addr = phys_addr + map->device_map.memdev.offset;
-		length = map->device_map.memdev.size;
-		regs->memdev = devm_cxl_iomap_block(dev, addr, length);
-		if (!regs->memdev)
-			return -ENOMEM;
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(cxl_map_device_regs);
-
 /**
  * __cxl_driver_register - register a driver for the cxl bus
  * @cxl_drv: cxl driver structure to attach
diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
new file mode 100644
index 000000000000..edf0f4d387ae
--- /dev/null
+++ b/drivers/cxl/core/regs.c
@@ -0,0 +1,235 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. */
+
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/pci.h>
+#include <mem.h>
+
+/**
+ * cxl_probe_component_regs() - Detect CXL Component register blocks
+ * @dev: Host device of the @base mapping
+ * @base: Mapping containing the HDM Decoder Capability Header
+ * @map: Map object describing the register block information found
+ *
+ * See CXL 2.0 8.2.4 Component Register Layout and Definition
+ * See CXL 2.0 8.2.5.5 CXL Device Register Interface
+ *
+ * Probe for component register information and return it in map object.
+ */
+void cxl_probe_component_regs(struct device *dev, void __iomem *base,
+			      struct cxl_component_reg_map *map)
+{
+	int cap, cap_count;
+	u64 cap_array;
+
+	*map = (struct cxl_component_reg_map) { 0 };
+
+	/*
+	 * CXL.cache and CXL.mem registers are at offset 0x1000 as defined in
+	 * CXL 2.0 8.2.4 Table 141.
+	 */
+	base += CXL_CM_OFFSET;
+
+	cap_array = readq(base + CXL_CM_CAP_HDR_OFFSET);
+
+	if (FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, cap_array) != CM_CAP_HDR_CAP_ID) {
+		dev_err(dev,
+			"Couldn't locate the CXL.cache and CXL.mem capability array header./n");
+		return;
+	}
+
+	/* It's assumed that future versions will be backward compatible */
+	cap_count = FIELD_GET(CXL_CM_CAP_HDR_ARRAY_SIZE_MASK, cap_array);
+
+	for (cap = 1; cap <= cap_count; cap++) {
+		void __iomem *register_block;
+		u32 hdr;
+		int decoder_cnt;
+		u16 cap_id, offset;
+		u32 length;
+
+		hdr = readl(base + cap * 0x4);
+
+		cap_id = FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, hdr);
+		offset = FIELD_GET(CXL_CM_CAP_PTR_MASK, hdr);
+		register_block = base + offset;
+
+		switch (cap_id) {
+		case CXL_CM_CAP_CAP_ID_HDM:
+			dev_dbg(dev, "found HDM decoder capability (0x%x)\n",
+				offset);
+
+			hdr = readl(register_block);
+
+			decoder_cnt = cxl_hdm_decoder_count(hdr);
+			length = 0x20 * decoder_cnt + 0x10;
+
+			map->hdm_decoder.valid = true;
+			map->hdm_decoder.offset = CXL_CM_OFFSET + offset;
+			map->hdm_decoder.size = length;
+			break;
+		default:
+			dev_dbg(dev, "Unknown CM cap ID: %d (0x%x)\n", cap_id,
+				offset);
+			break;
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(cxl_probe_component_regs);
+
+/**
+ * cxl_probe_device_regs() - Detect CXL Device register blocks
+ * @dev: Host device of the @base mapping
+ * @base: Mapping of CXL 2.0 8.2.8 CXL Device Register Interface
+ * @map: Map object describing the register block information found
+ *
+ * Probe for device register information and return it in map object.
+ */
+void cxl_probe_device_regs(struct device *dev, void __iomem *base,
+			   struct cxl_device_reg_map *map)
+{
+	int cap, cap_count;
+	u64 cap_array;
+
+	*map = (struct cxl_device_reg_map){ 0 };
+
+	cap_array = readq(base + CXLDEV_CAP_ARRAY_OFFSET);
+	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
+	    CXLDEV_CAP_ARRAY_CAP_ID)
+		return;
+
+	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
+
+	for (cap = 1; cap <= cap_count; cap++) {
+		u32 offset, length;
+		u16 cap_id;
+
+		cap_id = FIELD_GET(CXLDEV_CAP_HDR_CAP_ID_MASK,
+				   readl(base + cap * 0x10));
+		offset = readl(base + cap * 0x10 + 0x4);
+		length = readl(base + cap * 0x10 + 0x8);
+
+		switch (cap_id) {
+		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
+			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
+
+			map->status.valid = true;
+			map->status.offset = offset;
+			map->status.size = length;
+			break;
+		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
+			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
+			map->mbox.valid = true;
+			map->mbox.offset = offset;
+			map->mbox.size = length;
+			break;
+		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
+			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
+			break;
+		case CXLDEV_CAP_CAP_ID_MEMDEV:
+			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
+			map->memdev.valid = true;
+			map->memdev.offset = offset;
+			map->memdev.size = length;
+			break;
+		default:
+			if (cap_id >= 0x8000)
+				dev_dbg(dev, "Vendor cap ID: %#x offset: %#x\n", cap_id, offset);
+			else
+				dev_dbg(dev, "Unknown cap ID: %#x offset: %#x\n", cap_id, offset);
+			break;
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(cxl_probe_device_regs);
+
+static void __iomem *devm_cxl_iomap_block(struct device *dev,
+					  resource_size_t addr,
+					  resource_size_t length)
+{
+	void __iomem *ret_val;
+	struct resource *res;
+
+	res = devm_request_mem_region(dev, addr, length, dev_name(dev));
+	if (!res) {
+		resource_size_t end = addr + length - 1;
+
+		dev_err(dev, "Failed to request region %pa-%pa\n", &addr, &end);
+		return NULL;
+	}
+
+	ret_val = devm_ioremap(dev, addr, length);
+	if (!ret_val)
+		dev_err(dev, "Failed to map region %pr\n", res);
+
+	return ret_val;
+}
+
+int cxl_map_component_regs(struct pci_dev *pdev,
+			   struct cxl_component_regs *regs,
+			   struct cxl_register_map *map)
+{
+	struct device *dev = &pdev->dev;
+	resource_size_t phys_addr;
+	resource_size_t length;
+
+	phys_addr = pci_resource_start(pdev, map->barno);
+	phys_addr += map->block_offset;
+
+	phys_addr += map->component_map.hdm_decoder.offset;
+	length = map->component_map.hdm_decoder.size;
+	regs->hdm_decoder = devm_cxl_iomap_block(dev, phys_addr, length);
+	if (!regs->hdm_decoder)
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_map_component_regs);
+
+int cxl_map_device_regs(struct pci_dev *pdev,
+			struct cxl_device_regs *regs,
+			struct cxl_register_map *map)
+{
+	struct device *dev = &pdev->dev;
+	resource_size_t phys_addr;
+
+	phys_addr = pci_resource_start(pdev, map->barno);
+	phys_addr += map->block_offset;
+
+	if (map->device_map.status.valid) {
+		resource_size_t addr;
+		resource_size_t length;
+
+		addr = phys_addr + map->device_map.status.offset;
+		length = map->device_map.status.size;
+		regs->status = devm_cxl_iomap_block(dev, addr, length);
+		if (!regs->status)
+			return -ENOMEM;
+	}
+
+	if (map->device_map.mbox.valid) {
+		resource_size_t addr;
+		resource_size_t length;
+
+		addr = phys_addr + map->device_map.mbox.offset;
+		length = map->device_map.mbox.size;
+		regs->mbox = devm_cxl_iomap_block(dev, addr, length);
+		if (!regs->mbox)
+			return -ENOMEM;
+	}
+
+	if (map->device_map.memdev.valid) {
+		resource_size_t addr;
+		resource_size_t length;
+
+		addr = phys_addr + map->device_map.memdev.offset;
+		length = map->device_map.memdev.size;
+		regs->memdev = devm_cxl_iomap_block(dev, addr, length);
+		if (!regs->memdev)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_map_device_regs);



* [PATCH v3 5/6] cxl/pci: Introduce cdevm_file_operations
From: Dan Williams @ 2021-07-30 20:07 UTC
  To: linux-cxl
  Cc: Ben Widawsky, Jonathan.Cameron, vishal.l.verma, alison.schofield

In preparation for moving cxl_memdev allocation to the core, introduce
cdevm_file_operations to coordinate file operations shutdown relative to
driver data release.

The motivation for moving cxl_memdev allocation to the core (beyond
better file organization of sysfs attributes in core/ and drivers in
cxl/) is that device lifetime is longer than module lifetime. The cxl_pci
module should be free to come and go without needing to coordinate with
devices that need the text associated with cxl_memdev_release() to stay
resident. The move will fix a use after free bug when looping driver
load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y.

Another motivation for passing in file_operations to the core cxl_memdev
creation flow is to allow for alternate drivers, like unit test code, to
define their own ioctl backends.
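
To make the intended coordination concrete, a condensed sketch: the fops side
checks the driver-data pointer under cxl_memdev_rwsem (abbreviated here from
the existing ioctl path in pci.c), while ->shutdown() severs that pointer
under the write lock so later operations only reference device data:

  /* read side: abbreviated from the existing cxl_memdev_ioctl() in pci.c */
  static long cxl_memdev_ioctl(struct file *file, unsigned int cmd,
                               unsigned long arg)
  {
          struct cxl_memdev *cxlmd = file->private_data;
          int rc = -ENXIO;

          down_read(&cxl_memdev_rwsem);
          if (cxlmd->cxlm)        /* driver data still attached? */
                  rc = __cxl_memdev_ioctl(cxlmd, cmd, arg);
          up_read(&cxl_memdev_rwsem);

          return rc;
  }

  /* write side: the new ->shutdown() callback (see the hunk below) */
  static void cxl_memdev_shutdown(struct device *dev)
  {
          struct cxl_memdev *cxlmd = to_cxl_memdev(dev);

          down_write(&cxl_memdev_rwsem);
          cxlmd->cxlm = NULL;     /* subsequent fops return -ENXIO */
          up_write(&cxl_memdev_rwsem);
  }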

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/mem.h |   15 ++++++++++++
 drivers/cxl/pci.c |   65 +++++++++++++++++++++++++++++++----------------------
 2 files changed, 53 insertions(+), 27 deletions(-)

diff --git a/drivers/cxl/mem.h b/drivers/cxl/mem.h
index 8f02d02b26b4..77f4411c7c6a 100644
--- a/drivers/cxl/mem.h
+++ b/drivers/cxl/mem.h
@@ -34,6 +34,21 @@
  */
 #define CXL_MEM_MAX_DEVS 65536
 
+/**
+ * struct cdevm_file_operations - devm coordinated cdev file operations
+ * @fops: typical file operations
+ * @shutdown: disconnect driver data
+ *
+ * @shutdown is invoked in the devres release path to disconnect any
+ * driver instance data from @dev. It assumes synchronization with any
+ * fops operation that requires driver data. After @shutdown an
+ * operation may only reference @device data.
+ */
+struct cdevm_file_operations {
+	struct file_operations fops;
+	void (*shutdown)(struct device *dev);
+};
+
 /**
  * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
  * @dev: driver core device object
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 4cf351a3cf99..e20a643ee0c0 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -806,13 +806,30 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file)
 	return 0;
 }
 
-static const struct file_operations cxl_memdev_fops = {
-	.owner = THIS_MODULE,
-	.unlocked_ioctl = cxl_memdev_ioctl,
-	.open = cxl_memdev_open,
-	.release = cxl_memdev_release_file,
-	.compat_ioctl = compat_ptr_ioctl,
-	.llseek = noop_llseek,
+static struct cxl_memdev *to_cxl_memdev(struct device *dev)
+{
+	return container_of(dev, struct cxl_memdev, dev);
+}
+
+static void cxl_memdev_shutdown(struct device *dev)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+
+	down_write(&cxl_memdev_rwsem);
+	cxlmd->cxlm = NULL;
+	up_write(&cxl_memdev_rwsem);
+}
+
+static const struct cdevm_file_operations cxl_memdev_fops = {
+	.fops = {
+		.owner = THIS_MODULE,
+		.unlocked_ioctl = cxl_memdev_ioctl,
+		.open = cxl_memdev_open,
+		.release = cxl_memdev_release_file,
+		.compat_ioctl = compat_ptr_ioctl,
+		.llseek = noop_llseek,
+	},
+	.shutdown = cxl_memdev_shutdown,
 };
 
 static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
@@ -1161,11 +1178,6 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 	return ret;
 }
 
-static struct cxl_memdev *to_cxl_memdev(struct device *dev)
-{
-	return container_of(dev, struct cxl_memdev, dev);
-}
-
 static void cxl_memdev_release(struct device *dev)
 {
 	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
@@ -1281,24 +1293,22 @@ static const struct device_type cxl_memdev_type = {
 	.groups = cxl_memdev_attribute_groups,
 };
 
-static void cxl_memdev_shutdown(struct cxl_memdev *cxlmd)
-{
-	down_write(&cxl_memdev_rwsem);
-	cxlmd->cxlm = NULL;
-	up_write(&cxl_memdev_rwsem);
-}
-
 static void cxl_memdev_unregister(void *_cxlmd)
 {
 	struct cxl_memdev *cxlmd = _cxlmd;
 	struct device *dev = &cxlmd->dev;
+	struct cdev *cdev = &cxlmd->cdev;
+	const struct cdevm_file_operations *cdevm_fops;
+
+	cdevm_fops = container_of(cdev->ops, typeof(*cdevm_fops), fops);
+	cdevm_fops->shutdown(dev);
 
 	cdev_device_del(&cxlmd->cdev, dev);
-	cxl_memdev_shutdown(cxlmd);
 	put_device(dev);
 }
 
-static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
+static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
+					   const struct file_operations *fops)
 {
 	struct pci_dev *pdev = cxlm->pdev;
 	struct cxl_memdev *cxlmd;
@@ -1324,7 +1334,7 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
 	device_set_pm_not_required(dev);
 
 	cdev = &cxlmd->cdev;
-	cdev_init(cdev, &cxl_memdev_fops);
+	cdev_init(cdev, fops);
 	return cxlmd;
 
 err:
@@ -1332,15 +1342,16 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
 	return ERR_PTR(rc);
 }
 
-static struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
-					      struct cxl_mem *cxlm)
+static struct cxl_memdev *
+devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
+		    const struct cdevm_file_operations *cdevm_fops)
 {
 	struct cxl_memdev *cxlmd;
 	struct device *dev;
 	struct cdev *cdev;
 	int rc;
 
-	cxlmd = cxl_memdev_alloc(cxlm);
+	cxlmd = cxl_memdev_alloc(cxlm, &cdevm_fops->fops);
 	if (IS_ERR(cxlmd))
 		return cxlmd;
 
@@ -1370,7 +1381,7 @@ static struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
 	 * The cdev was briefly live, shutdown any ioctl operations that
 	 * saw that state.
 	 */
-	cxl_memdev_shutdown(cxlmd);
+	cdevm_fops->shutdown(dev);
 	put_device(dev);
 	return ERR_PTR(rc);
 }
@@ -1611,7 +1622,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
-	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm);
+	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm, &cxl_memdev_fops);
 	if (IS_ERR(cxlmd))
 		return PTR_ERR(cxlmd);
 



* [PATCH v3 6/6] cxl/core: Move memdev management to core
From: Dan Williams @ 2021-07-30 20:07 UTC
  To: linux-cxl
  Cc: Ben Widawsky, Jonathan.Cameron, vishal.l.verma, alison.schofield

From: Ben Widawsky <ben.widawsky@intel.com>

The motivation for moving cxl_memdev allocation to the core (beyond
better file organization of sysfs attributes in core/ and drivers in
cxl/) is that device lifetime is longer than module lifetime. The cxl_pci
module should be free to come and go without needing to coordinate with
devices that need the text associated with cxl_memdev_release() to stay
resident. The move fixes a use after free bug when looping driver
load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y.

Another motivation for disconnecting cxl_memdev creation from cxl_pci is
to enable other drivers, like a unit test driver, to register memdevs.
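
For illustration only, a sketch of how such an alternate driver might use the
now-exported devm_cxl_add_memdev() with its own cdevm_file_operations (the
mock_* names are hypothetical and not part of this series):

  #include <linux/module.h>
  #include <linux/device.h>
  #include <linux/fs.h>
  #include <linux/err.h>
  #include "mem.h"

  static void mock_shutdown(struct device *dev)
  {
          struct cxl_memdev *cxlmd = to_cxl_memdev(dev);

          /*
           * Per the cdevm_file_operations kernel-doc, synchronize with any
           * in-flight fops before severing the driver data they reference.
           */
          cxlmd->cxlm = NULL;
  }

  static const struct cdevm_file_operations mock_memdev_fops = {
          .fops = {
                  .owner = THIS_MODULE,
                  .unlocked_ioctl = mock_ioctl,   /* test-defined backend */
                  .llseek = noop_llseek,
          },
          .shutdown = mock_shutdown,
  };

  static int mock_add_memdev(struct device *host, struct cxl_mem *cxlm)
  {
          struct cxl_memdev *cxlmd;

          cxlmd = devm_cxl_add_memdev(host, cxlm, &mock_memdev_fops);
          return PTR_ERR_OR_ZERO(cxlmd);
  }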

Fixes: b39cb1052a5c ("cxl/mem: Register CXL memX devices")
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/Makefile |    1 
 drivers/cxl/core/bus.c    |   16 +++
 drivers/cxl/core/core.h   |    3 +
 drivers/cxl/core/memdev.c |  245 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/cxl/mem.h         |   15 ++-
 drivers/cxl/pci.c         |  228 ------------------------------------------
 6 files changed, 274 insertions(+), 234 deletions(-)
 create mode 100644 drivers/cxl/core/memdev.c

diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index a49bd72dfed3..1463d44f6c3b 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -5,3 +5,4 @@ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL -I./drivers/cxl
 cxl_core-y := bus.o
 cxl_core-y += pmem.o
 cxl_core-y += regs.o
+cxl_core-y += memdev.o
diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 176db3058575..c0bc27caed4d 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -633,12 +633,26 @@ EXPORT_SYMBOL_GPL(cxl_bus_type);
 
 static __init int cxl_core_init(void)
 {
-	return bus_register(&cxl_bus_type);
+	int rc;
+
+	rc = cxl_memdev_init();
+	if (rc)
+		return rc;
+
+	rc = bus_register(&cxl_bus_type);
+	if (rc)
+		goto err;
+	return 0;
+
+err:
+	cxl_memdev_exit();
+	return rc;
 }
 
 static void cxl_core_exit(void)
 {
 	bus_unregister(&cxl_bus_type);
+	cxl_memdev_exit();
 }
 
 module_init(cxl_core_init);
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 49045daf8bd7..036a3c8106b4 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -14,4 +14,7 @@ static inline void unregister_cxl_dev(void *dev)
 	device_unregister(dev);
 }
 
+int cxl_memdev_init(void);
+void cxl_memdev_exit(void);
+
 #endif /* __CXL_CORE_H__ */
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
new file mode 100644
index 000000000000..e005d5558d4e
--- /dev/null
+++ b/drivers/cxl/core/memdev.c
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. */
+
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/idr.h>
+#include <linux/pci.h>
+#include <mem.h>
+
+/*
+ * An entire PCI topology full of devices should be enough for any
+ * config
+ */
+#define CXL_MEM_MAX_DEVS 65536
+
+static int cxl_mem_major;
+static DEFINE_IDA(cxl_memdev_ida);
+
+static void cxl_memdev_release(struct device *dev)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+
+	ida_free(&cxl_memdev_ida, cxlmd->id);
+	kfree(cxlmd);
+}
+
+static char *cxl_memdev_devnode(struct device *dev, umode_t *mode, kuid_t *uid,
+				kgid_t *gid)
+{
+	return kasprintf(GFP_KERNEL, "cxl/%s", dev_name(dev));
+}
+
+static ssize_t firmware_version_show(struct device *dev,
+				     struct device_attribute *attr, char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+
+	return sysfs_emit(buf, "%.16s\n", cxlm->firmware_version);
+}
+static DEVICE_ATTR_RO(firmware_version);
+
+static ssize_t payload_max_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+
+	return sysfs_emit(buf, "%zu\n", cxlm->payload_size);
+}
+static DEVICE_ATTR_RO(payload_max);
+
+static ssize_t label_storage_size_show(struct device *dev,
+				       struct device_attribute *attr, char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+
+	return sysfs_emit(buf, "%zu\n", cxlm->lsa_size);
+}
+static DEVICE_ATTR_RO(label_storage_size);
+
+static ssize_t ram_size_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	unsigned long long len = range_len(&cxlm->ram_range);
+
+	return sysfs_emit(buf, "%#llx\n", len);
+}
+
+static struct device_attribute dev_attr_ram_size =
+	__ATTR(size, 0444, ram_size_show, NULL);
+
+static ssize_t pmem_size_show(struct device *dev, struct device_attribute *attr,
+			      char *buf)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_mem *cxlm = cxlmd->cxlm;
+	unsigned long long len = range_len(&cxlm->pmem_range);
+
+	return sysfs_emit(buf, "%#llx\n", len);
+}
+
+static struct device_attribute dev_attr_pmem_size =
+	__ATTR(size, 0444, pmem_size_show, NULL);
+
+static struct attribute *cxl_memdev_attributes[] = {
+	&dev_attr_firmware_version.attr,
+	&dev_attr_payload_max.attr,
+	&dev_attr_label_storage_size.attr,
+	NULL,
+};
+
+static struct attribute *cxl_memdev_pmem_attributes[] = {
+	&dev_attr_pmem_size.attr,
+	NULL,
+};
+
+static struct attribute *cxl_memdev_ram_attributes[] = {
+	&dev_attr_ram_size.attr,
+	NULL,
+};
+
+static struct attribute_group cxl_memdev_attribute_group = {
+	.attrs = cxl_memdev_attributes,
+};
+
+static struct attribute_group cxl_memdev_ram_attribute_group = {
+	.name = "ram",
+	.attrs = cxl_memdev_ram_attributes,
+};
+
+static struct attribute_group cxl_memdev_pmem_attribute_group = {
+	.name = "pmem",
+	.attrs = cxl_memdev_pmem_attributes,
+};
+
+static const struct attribute_group *cxl_memdev_attribute_groups[] = {
+	&cxl_memdev_attribute_group,
+	&cxl_memdev_ram_attribute_group,
+	&cxl_memdev_pmem_attribute_group,
+	NULL,
+};
+
+static const struct device_type cxl_memdev_type = {
+	.name = "cxl_memdev",
+	.release = cxl_memdev_release,
+	.devnode = cxl_memdev_devnode,
+	.groups = cxl_memdev_attribute_groups,
+};
+
+static void cxl_memdev_unregister(void *_cxlmd)
+{
+	struct cxl_memdev *cxlmd = _cxlmd;
+	struct device *dev = &cxlmd->dev;
+	struct cdev *cdev = &cxlmd->cdev;
+	const struct cdevm_file_operations *cdevm_fops;
+
+	cdevm_fops = container_of(cdev->ops, typeof(*cdevm_fops), fops);
+	cdevm_fops->shutdown(dev);
+
+	cdev_device_del(&cxlmd->cdev, dev);
+	put_device(dev);
+}
+
+static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
+					   const struct file_operations *fops)
+{
+	struct pci_dev *pdev = cxlm->pdev;
+	struct cxl_memdev *cxlmd;
+	struct device *dev;
+	struct cdev *cdev;
+	int rc;
+
+	cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
+	if (!cxlmd)
+		return ERR_PTR(-ENOMEM);
+
+	rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL);
+	if (rc < 0)
+		goto err;
+	cxlmd->id = rc;
+
+	dev = &cxlmd->dev;
+	device_initialize(dev);
+	dev->parent = &pdev->dev;
+	dev->bus = &cxl_bus_type;
+	dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
+	dev->type = &cxl_memdev_type;
+	device_set_pm_not_required(dev);
+
+	cdev = &cxlmd->cdev;
+	cdev_init(cdev, fops);
+	return cxlmd;
+
+err:
+	kfree(cxlmd);
+	return ERR_PTR(rc);
+}
+
+struct cxl_memdev *
+devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
+		    const struct cdevm_file_operations *cdevm_fops)
+{
+	struct cxl_memdev *cxlmd;
+	struct device *dev;
+	struct cdev *cdev;
+	int rc;
+
+	cxlmd = cxl_memdev_alloc(cxlm, &cdevm_fops->fops);
+	if (IS_ERR(cxlmd))
+		return cxlmd;
+
+	dev = &cxlmd->dev;
+	rc = dev_set_name(dev, "mem%d", cxlmd->id);
+	if (rc)
+		goto err;
+
+	/*
+	 * Activate ioctl operations, no cxl_memdev_rwsem manipulation
+	 * needed as this is ordered with cdev_add() publishing the device.
+	 */
+	cxlmd->cxlm = cxlm;
+
+	cdev = &cxlmd->cdev;
+	rc = cdev_device_add(cdev, dev);
+	if (rc)
+		goto err;
+
+	rc = devm_add_action_or_reset(host, cxl_memdev_unregister, cxlmd);
+	if (rc)
+		return ERR_PTR(rc);
+	return cxlmd;
+
+err:
+	/*
+	 * The cdev was briefly live, shutdown any ioctl operations that
+	 * saw that state.
+	 */
+	cdevm_fops->shutdown(dev);
+	put_device(dev);
+	return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(devm_cxl_add_memdev);
+
+__init int cxl_memdev_init(void)
+{
+	dev_t devt;
+	int rc;
+
+	rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl");
+	if (rc)
+		return rc;
+
+	cxl_mem_major = MAJOR(devt);
+
+	return 0;
+}
+
+void cxl_memdev_exit(void)
+{
+	unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
+}
diff --git a/drivers/cxl/mem.h b/drivers/cxl/mem.h
index 77f4411c7c6a..cd957a661ead 100644
--- a/drivers/cxl/mem.h
+++ b/drivers/cxl/mem.h
@@ -28,12 +28,6 @@
 	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
 	 CXLMDEV_RESET_NEEDED_NOT)
 
-/*
- * An entire PCI topology full of devices should be enough for any
- * config
- */
-#define CXL_MEM_MAX_DEVS 65536
-
 /**
  * struct cdevm_file_operations - devm coordinated cdev file operations
  * @fops: typical file operations
@@ -63,6 +57,15 @@ struct cxl_memdev {
 	int id;
 };
 
+static inline struct cxl_memdev *to_cxl_memdev(struct device *dev)
+{
+	return container_of(dev, struct cxl_memdev, dev);
+}
+
+struct cxl_memdev *
+devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
+		    const struct cdevm_file_operations *cdevm_fops);
+
 /**
  * struct cxl_mem - A CXL memory device
  * @pdev: The PCI device associated with this CXL device.
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index e20a643ee0c0..b54bcfab57c9 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -94,8 +94,6 @@ struct mbox_cmd {
 #define CXL_MBOX_SUCCESS 0
 };
 
-static int cxl_mem_major;
-static DEFINE_IDA(cxl_memdev_ida);
 static DECLARE_RWSEM(cxl_memdev_rwsem);
 static struct dentry *cxl_debugfs;
 static bool cxl_raw_allow_all;
@@ -806,11 +804,6 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file)
 	return 0;
 }
 
-static struct cxl_memdev *to_cxl_memdev(struct device *dev)
-{
-	return container_of(dev, struct cxl_memdev, dev);
-}
-
 static void cxl_memdev_shutdown(struct device *dev)
 {
 	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
@@ -1178,214 +1171,6 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 	return ret;
 }
 
-static void cxl_memdev_release(struct device *dev)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-
-	ida_free(&cxl_memdev_ida, cxlmd->id);
-	kfree(cxlmd);
-}
-
-static char *cxl_memdev_devnode(struct device *dev, umode_t *mode, kuid_t *uid,
-				kgid_t *gid)
-{
-	return kasprintf(GFP_KERNEL, "cxl/%s", dev_name(dev));
-}
-
-static ssize_t firmware_version_show(struct device *dev,
-				     struct device_attribute *attr, char *buf)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-	struct cxl_mem *cxlm = cxlmd->cxlm;
-
-	return sysfs_emit(buf, "%.16s\n", cxlm->firmware_version);
-}
-static DEVICE_ATTR_RO(firmware_version);
-
-static ssize_t payload_max_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-	struct cxl_mem *cxlm = cxlmd->cxlm;
-
-	return sysfs_emit(buf, "%zu\n", cxlm->payload_size);
-}
-static DEVICE_ATTR_RO(payload_max);
-
-static ssize_t label_storage_size_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-	struct cxl_mem *cxlm = cxlmd->cxlm;
-
-	return sysfs_emit(buf, "%zu\n", cxlm->lsa_size);
-}
-static DEVICE_ATTR_RO(label_storage_size);
-
-static ssize_t ram_size_show(struct device *dev, struct device_attribute *attr,
-			     char *buf)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-	struct cxl_mem *cxlm = cxlmd->cxlm;
-	unsigned long long len = range_len(&cxlm->ram_range);
-
-	return sysfs_emit(buf, "%#llx\n", len);
-}
-
-static struct device_attribute dev_attr_ram_size =
-	__ATTR(size, 0444, ram_size_show, NULL);
-
-static ssize_t pmem_size_show(struct device *dev, struct device_attribute *attr,
-			      char *buf)
-{
-	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
-	struct cxl_mem *cxlm = cxlmd->cxlm;
-	unsigned long long len = range_len(&cxlm->pmem_range);
-
-	return sysfs_emit(buf, "%#llx\n", len);
-}
-
-static struct device_attribute dev_attr_pmem_size =
-	__ATTR(size, 0444, pmem_size_show, NULL);
-
-static struct attribute *cxl_memdev_attributes[] = {
-	&dev_attr_firmware_version.attr,
-	&dev_attr_payload_max.attr,
-	&dev_attr_label_storage_size.attr,
-	NULL,
-};
-
-static struct attribute *cxl_memdev_pmem_attributes[] = {
-	&dev_attr_pmem_size.attr,
-	NULL,
-};
-
-static struct attribute *cxl_memdev_ram_attributes[] = {
-	&dev_attr_ram_size.attr,
-	NULL,
-};
-
-static struct attribute_group cxl_memdev_attribute_group = {
-	.attrs = cxl_memdev_attributes,
-};
-
-static struct attribute_group cxl_memdev_ram_attribute_group = {
-	.name = "ram",
-	.attrs = cxl_memdev_ram_attributes,
-};
-
-static struct attribute_group cxl_memdev_pmem_attribute_group = {
-	.name = "pmem",
-	.attrs = cxl_memdev_pmem_attributes,
-};
-
-static const struct attribute_group *cxl_memdev_attribute_groups[] = {
-	&cxl_memdev_attribute_group,
-	&cxl_memdev_ram_attribute_group,
-	&cxl_memdev_pmem_attribute_group,
-	NULL,
-};
-
-static const struct device_type cxl_memdev_type = {
-	.name = "cxl_memdev",
-	.release = cxl_memdev_release,
-	.devnode = cxl_memdev_devnode,
-	.groups = cxl_memdev_attribute_groups,
-};
-
-static void cxl_memdev_unregister(void *_cxlmd)
-{
-	struct cxl_memdev *cxlmd = _cxlmd;
-	struct device *dev = &cxlmd->dev;
-	struct cdev *cdev = &cxlmd->cdev;
-	const struct cdevm_file_operations *cdevm_fops;
-
-	cdevm_fops = container_of(cdev->ops, typeof(*cdevm_fops), fops);
-	cdevm_fops->shutdown(dev);
-
-	cdev_device_del(&cxlmd->cdev, dev);
-	put_device(dev);
-}
-
-static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
-					   const struct file_operations *fops)
-{
-	struct pci_dev *pdev = cxlm->pdev;
-	struct cxl_memdev *cxlmd;
-	struct device *dev;
-	struct cdev *cdev;
-	int rc;
-
-	cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
-	if (!cxlmd)
-		return ERR_PTR(-ENOMEM);
-
-	rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL);
-	if (rc < 0)
-		goto err;
-	cxlmd->id = rc;
-
-	dev = &cxlmd->dev;
-	device_initialize(dev);
-	dev->parent = &pdev->dev;
-	dev->bus = &cxl_bus_type;
-	dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
-	dev->type = &cxl_memdev_type;
-	device_set_pm_not_required(dev);
-
-	cdev = &cxlmd->cdev;
-	cdev_init(cdev, fops);
-	return cxlmd;
-
-err:
-	kfree(cxlmd);
-	return ERR_PTR(rc);
-}
-
-static struct cxl_memdev *
-devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
-		    const struct cdevm_file_operations *cdevm_fops)
-{
-	struct cxl_memdev *cxlmd;
-	struct device *dev;
-	struct cdev *cdev;
-	int rc;
-
-	cxlmd = cxl_memdev_alloc(cxlm, &cdevm_fops->fops);
-	if (IS_ERR(cxlmd))
-		return cxlmd;
-
-	dev = &cxlmd->dev;
-	rc = dev_set_name(dev, "mem%d", cxlmd->id);
-	if (rc)
-		goto err;
-
-	/*
-	 * Activate ioctl operations, no cxl_memdev_rwsem manipulation
-	 * needed as this is ordered with cdev_add() publishing the device.
-	 */
-	cxlmd->cxlm = cxlm;
-
-	cdev = &cxlmd->cdev;
-	rc = cdev_device_add(cdev, dev);
-	if (rc)
-		goto err;
-
-	rc = devm_add_action_or_reset(host, cxl_memdev_unregister, cxlmd);
-	if (rc)
-		return ERR_PTR(rc);
-	return cxlmd;
-
-err:
-	/*
-	 * The cdev was briefly live, shutdown any ioctl operations that
-	 * saw that state.
-	 */
-	cdevm_fops->shutdown(dev);
-	put_device(dev);
-	return ERR_PTR(rc);
-}
-
 static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out)
 {
 	u32 remaining = size;
@@ -1651,25 +1436,15 @@ static struct pci_driver cxl_mem_driver = {
 static __init int cxl_mem_init(void)
 {
 	struct dentry *mbox_debugfs;
-	dev_t devt;
 	int rc;
 
 	/* Double check the anonymous union trickery in struct cxl_regs */
 	BUILD_BUG_ON(offsetof(struct cxl_regs, memdev) !=
 		     offsetof(struct cxl_regs, device_regs.memdev));
 
-	rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl");
-	if (rc)
-		return rc;
-
-	cxl_mem_major = MAJOR(devt);
-
 	rc = pci_register_driver(&cxl_mem_driver);
-	if (rc) {
-		unregister_chrdev_region(MKDEV(cxl_mem_major, 0),
-					 CXL_MEM_MAX_DEVS);
+	if (rc)
 		return rc;
-	}
 
 	cxl_debugfs = debugfs_create_dir("cxl", NULL);
 	mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
@@ -1683,7 +1458,6 @@ static __exit void cxl_mem_exit(void)
 {
 	debugfs_remove_recursive(cxl_debugfs);
 	pci_unregister_driver(&cxl_mem_driver);
-	unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
 }
 
 MODULE_LICENSE("GPL v2");


^ permalink raw reply related	[flat|nested] 12+ messages in thread
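
The chrdev region setup above now lives in drivers/cxl/core/memdev.c, so the
core module has to call cxl_memdev_init()/cxl_memdev_exit() itself; the hunk
that wires this up is not part of the excerpt above. A minimal sketch of how
drivers/cxl/core/bus.c might do it, assuming cxl_core_init()/cxl_core_exit()
as the module entry points and that core/core.h carries the declarations:

static __init int cxl_core_init(void)
{
	int rc;

	/* reserve the "cxl" chrdev region moved out of cxl_pci */
	rc = cxl_memdev_init();
	if (rc)
		return rc;

	rc = bus_register(&cxl_bus_type);
	if (rc)
		goto err;

	return 0;

err:
	cxl_memdev_exit();
	return rc;
}

static __exit void cxl_core_exit(void)
{
	bus_unregister(&cxl_bus_type);
	cxl_memdev_exit();
}

module_init(cxl_core_init);
module_exit(cxl_core_exit);

The ordering mirrors the error handling removed from cxl_mem_init(): the
chrdev region is released again if the follow-on registration fails.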

* [PATCH v4 1/6] cxl: Move cxl_core to new directory
  2021-07-30 20:07 ` [PATCH v3 1/6] cxl: Move cxl_core to new directory Dan Williams
@ 2021-07-31 16:35   ` Dan Williams
  0 siblings, 0 replies; 12+ messages in thread
From: Dan Williams @ 2021-07-31 16:35 UTC (permalink / raw)
  To: linux-cxl; +Cc: Ben Widawsky, Jonathan.Cameron

From: Ben Widawsky <ben.widawsky@intel.com>

CXL core is growing, and it's already arguably unmanageable. To support
future growth, move core functionality to a new directory and rename the
file to represent just bus support. Future work will remove non-bus
functionality.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
Changes since v3:

Fix support for building in a separate build directory (0day)

 Documentation/driver-api/cxl/memory-devices.rst |    2 +-
 drivers/cxl/Makefile                            |    4 +---
 drivers/cxl/core/Makefile                       |    5 +++++
 drivers/cxl/core/bus.c                          |    4 ++--
 4 files changed, 9 insertions(+), 6 deletions(-)
 create mode 100644 drivers/cxl/core/Makefile
 rename drivers/cxl/{core.c => core/bus.c} (99%)

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index 487ce4f41d77..a86e2c7c551a 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -36,7 +36,7 @@ CXL Core
 .. kernel-doc:: drivers/cxl/cxl.h
    :internal:
 
-.. kernel-doc:: drivers/cxl/core.c
+.. kernel-doc:: drivers/cxl/core/bus.c
    :doc: cxl core
 
 External Interfaces
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
index 32954059b37b..d1aaabc940f3 100644
--- a/drivers/cxl/Makefile
+++ b/drivers/cxl/Makefile
@@ -1,11 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_CXL_BUS) += cxl_core.o
+obj-$(CONFIG_CXL_BUS) += core/
 obj-$(CONFIG_CXL_MEM) += cxl_pci.o
 obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
 obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o
 
-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL
-cxl_core-y := core.o
 cxl_pci-y := pci.o
 cxl_acpi-y := acpi.o
 cxl_pmem-y := pmem.o
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
new file mode 100644
index 000000000000..ad137f96e5c8
--- /dev/null
+++ b/drivers/cxl/core/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_CXL_BUS) += cxl_core.o
+
+ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL -I$(srctree)/drivers/cxl
+cxl_core-y := bus.o
diff --git a/drivers/cxl/core.c b/drivers/cxl/core/bus.c
similarity index 99%
rename from drivers/cxl/core.c
rename to drivers/cxl/core/bus.c
index a2e4d54fc7bc..6ce04e3976d2 100644
--- a/drivers/cxl/core.c
+++ b/drivers/cxl/core/bus.c
@@ -6,8 +6,8 @@
 #include <linux/pci.h>
 #include <linux/slab.h>
 #include <linux/idr.h>
-#include "cxl.h"
-#include "mem.h"
+#include <cxl.h>
+#include <mem.h>
 
 /**
  * DOC: cxl core


^ permalink raw reply related	[flat|nested] 12+ messages in thread
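
Patch 1/6 only carries bus.o; the later patches in the series each add one
object for the file they split out. Going by the series diffstat quoted in
full further down the thread (core.h, memdev.c, pmem.c and regs.c, with an
8-line core/Makefile), the Makefile above presumably ends up looking roughly
like this once the whole set is applied:

# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_CXL_BUS) += cxl_core.o

ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL -I$(srctree)/drivers/cxl
cxl_core-y := bus.o
cxl_core-y += pmem.o
cxl_core-y += regs.o
cxl_core-y += memdev.o

The $(srctree) prefix in the -I flag is presumably the separate-build-directory
fix called out in the v4 changelog: it lets the relocated sources keep using
<cxl.h> and <mem.h> regardless of where the object directory lives.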

* Re: [PATCH v3 5/6] cxl/pci: Introduce cdevm_file_operations
  2021-07-30 20:07 ` [PATCH v3 5/6] cxl/pci: Introduce cdevm_file_operations Dan Williams
@ 2021-08-02 15:04   ` Jonathan Cameron
  2021-08-02 16:15     ` Dan Williams
  0 siblings, 1 reply; 12+ messages in thread
From: Jonathan Cameron @ 2021-08-02 15:04 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl, Ben Widawsky, vishal.l.verma, alison.schofield

On Fri, 30 Jul 2021 13:07:22 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> In preparation for moving cxl_memdev allocation to the core, introduce
> cdevm_file_operations to coordinate file operations shutdown relative to
> driver data release.
> 
> The motivation for moving cxl_memdev allocation to the core (beyond
> better file organization of sysfs attributes in core/ and drivers in
> cxl/), is that device lifetime is longer than module lifetime. The cxl_pci
> module should be free to come and go without needing to coordinate with
> devices that need the text associated with cxl_memdev_release() to stay
> resident. The move will fix a use after free bug when looping driver
> load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y.
> 
> Another motivation for passing in file_operations to the core cxl_memdev
> creation flow is to allow for alternate drivers, like unit test code, to
> define their own ioctl backends.
> 
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Hi Dan / Ben,

One utterly trivial question inline, otherwise looks good.

> ---
>  drivers/cxl/mem.h |   15 ++++++++++++
>  drivers/cxl/pci.c |   65 +++++++++++++++++++++++++++++++----------------------
>  2 files changed, 53 insertions(+), 27 deletions(-)
> 
> diff --git a/drivers/cxl/mem.h b/drivers/cxl/mem.h
> index 8f02d02b26b4..77f4411c7c6a 100644
> --- a/drivers/cxl/mem.h
> +++ b/drivers/cxl/mem.h
> @@ -34,6 +34,21 @@
>   */
>  #define CXL_MEM_MAX_DEVS 65536
>  
> +/**
> + * struct cdevm_file_operations - devm coordinated cdev file operations
> + * @fops: typical file operations

Why typical? Seems that simply saying "file operations" conveys the same
information.  What is atypical here?

> + * @shutdown: disconnect driver data
> + *
> + * @shutdown is invoked in the devres release path to disconnect any
> + * driver instance data from @dev. It assumes synchronization with any
> + * fops operation that requires driver data. After @shutdown an
> + * operation may only reference @device data.
> + */
> +struct cdevm_file_operations {
> +	struct file_operations fops;
> +	void (*shutdown)(struct device *dev);
> +};
> +
>  /**
>   * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
>   * @dev: driver core device object
> diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
> index 4cf351a3cf99..e20a643ee0c0 100644
> --- a/drivers/cxl/pci.c
> +++ b/drivers/cxl/pci.c
> @@ -806,13 +806,30 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file)
>  	return 0;
>  }
>  
> -static const struct file_operations cxl_memdev_fops = {
> -	.owner = THIS_MODULE,
> -	.unlocked_ioctl = cxl_memdev_ioctl,
> -	.open = cxl_memdev_open,
> -	.release = cxl_memdev_release_file,
> -	.compat_ioctl = compat_ptr_ioctl,
> -	.llseek = noop_llseek,
> +static struct cxl_memdev *to_cxl_memdev(struct device *dev)
> +{
> +	return container_of(dev, struct cxl_memdev, dev);
> +}
> +
> +static void cxl_memdev_shutdown(struct device *dev)
> +{
> +	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> +
> +	down_write(&cxl_memdev_rwsem);
> +	cxlmd->cxlm = NULL;
> +	up_write(&cxl_memdev_rwsem);
> +}
> +
> +static const struct cdevm_file_operations cxl_memdev_fops = {
> +	.fops = {
> +		.owner = THIS_MODULE,
> +		.unlocked_ioctl = cxl_memdev_ioctl,
> +		.open = cxl_memdev_open,
> +		.release = cxl_memdev_release_file,
> +		.compat_ioctl = compat_ptr_ioctl,
> +		.llseek = noop_llseek,
> +	},
> +	.shutdown = cxl_memdev_shutdown,
>  };
>  
>  static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
> @@ -1161,11 +1178,6 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
>  	return ret;
>  }
>  
> -static struct cxl_memdev *to_cxl_memdev(struct device *dev)
> -{
> -	return container_of(dev, struct cxl_memdev, dev);
> -}
> -
>  static void cxl_memdev_release(struct device *dev)
>  {
>  	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> @@ -1281,24 +1293,22 @@ static const struct device_type cxl_memdev_type = {
>  	.groups = cxl_memdev_attribute_groups,
>  };
>  
> -static void cxl_memdev_shutdown(struct cxl_memdev *cxlmd)
> -{
> -	down_write(&cxl_memdev_rwsem);
> -	cxlmd->cxlm = NULL;
> -	up_write(&cxl_memdev_rwsem);
> -}
> -
>  static void cxl_memdev_unregister(void *_cxlmd)
>  {
>  	struct cxl_memdev *cxlmd = _cxlmd;
>  	struct device *dev = &cxlmd->dev;
> +	struct cdev *cdev = &cxlmd->cdev;
> +	const struct cdevm_file_operations *cdevm_fops;
> +
> +	cdevm_fops = container_of(cdev->ops, typeof(*cdevm_fops), fops);
> +	cdevm_fops->shutdown(dev);
>  
>  	cdev_device_del(&cxlmd->cdev, dev);
> -	cxl_memdev_shutdown(cxlmd);
>  	put_device(dev);
>  }
>  
> -static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
> +static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
> +					   const struct file_operations *fops)
>  {
>  	struct pci_dev *pdev = cxlm->pdev;
>  	struct cxl_memdev *cxlmd;
> @@ -1324,7 +1334,7 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
>  	device_set_pm_not_required(dev);
>  
>  	cdev = &cxlmd->cdev;
> -	cdev_init(cdev, &cxl_memdev_fops);
> +	cdev_init(cdev, fops);
>  	return cxlmd;
>  
>  err:
> @@ -1332,15 +1342,16 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
>  	return ERR_PTR(rc);
>  }
>  
> -static struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
> -					      struct cxl_mem *cxlm)
> +static struct cxl_memdev *
> +devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
> +		    const struct cdevm_file_operations *cdevm_fops)
>  {
>  	struct cxl_memdev *cxlmd;
>  	struct device *dev;
>  	struct cdev *cdev;
>  	int rc;
>  
> -	cxlmd = cxl_memdev_alloc(cxlm);
> +	cxlmd = cxl_memdev_alloc(cxlm, &cdevm_fops->fops);
>  	if (IS_ERR(cxlmd))
>  		return cxlmd;
>  
> @@ -1370,7 +1381,7 @@ static struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
>  	 * The cdev was briefly live, shutdown any ioctl operations that
>  	 * saw that state.
>  	 */
> -	cxl_memdev_shutdown(cxlmd);
> +	cdevm_fops->shutdown(dev);
>  	put_device(dev);
>  	return ERR_PTR(rc);
>  }
> @@ -1611,7 +1622,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  	if (rc)
>  		return rc;
>  
> -	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm);
> +	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm, &cxl_memdev_fops);
>  	if (IS_ERR(cxlmd))
>  		return PTR_ERR(cxlmd);
>  
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread
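
One of the motivations given above is letting alternate drivers, such as unit
test code, supply their own ioctl backend once memdev creation moves into the
core in patch 6/6. A sketch of what such a backend could look like; apart
from struct cdevm_file_operations and devm_cxl_add_memdev(), every name below
is an illustrative assumption:

static long mock_memdev_ioctl(struct file *file, unsigned int cmd,
			      unsigned long arg)
{
	/* test backend: no commands wired up in this sketch */
	return -ENOTTY;
}

static void mock_memdev_shutdown(struct device *dev)
{
	/* disconnect any test-private driver data before release */
}

static const struct cdevm_file_operations mock_memdev_fops = {
	.fops = {
		.owner = THIS_MODULE,
		.unlocked_ioctl = mock_memdev_ioctl,
		.compat_ioctl = compat_ptr_ioctl,
		.llseek = noop_llseek,
	},
	.shutdown = mock_memdev_shutdown,
};

static int mock_probe(struct device *host, struct cxl_mem *cxlm)
{
	struct cxl_memdev *cxlmd;

	cxlmd = devm_cxl_add_memdev(host, cxlm, &mock_memdev_fops);
	return PTR_ERR_OR_ZERO(cxlmd);
}

The shutdown callback is the piece that keeps this safe: the core invokes it
from the devres release path, so any ioctl arriving after unbind finds the
driver data already disconnected rather than a stale pointer.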

* Re: [PATCH v3 0/6] CXL core reorganization
  2021-07-30 20:06 [PATCH v3 0/6] CXL core reorganization Dan Williams
                   ` (5 preceding siblings ...)
  2021-07-30 20:07 ` [PATCH v3 6/6] cxl/core: Move memdev management to core Dan Williams
@ 2021-08-02 15:13 ` Jonathan Cameron
  6 siblings, 0 replies; 12+ messages in thread
From: Jonathan Cameron @ 2021-08-02 15:13 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl, Ben Widawsky, vishal.l.verma, alison.schofield

On Fri, 30 Jul 2021 13:06:55 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> Changes since v2 [1]:
> - Rebase on top of the Makefile changes
> - Split register and pmem moving into 2 independent patches
> - Drop inclusion of mem.h and cxl.h from core.h. I.e. require all
>   compilation units to directly include only the headers they need.
> - Squash / rewrite "cxl/mem: Move character device region creation" to
>   move the char-dev infrastructure to drivers/cxl/core/memdev.c
> - Rewrite the justification in some of the changelogs
> - Rewrite "cxl: Pass fops and shutdown to memdev creation" to introduce
>   cdevm_file_operations.
> 
> [1]: https://lore.kernel.org/linux-cxl/20210720180742.89992-1-ben.widawsky@intel.com/
> 
> ---
> 
> Given Ben is out for a bit I have folded my review comments into the set
> directly.
> 
> Original Cover from Ben:
> 
> The main motivation of the patch series is to establish the cxl_core driver in
> its own directory and modularize it. Specifically, the patch series aims to
> achieve three things:
> 1. Move existing core functionality to a new directory.
> 2. Split existing core functionality into multiple files.
> 3. Migrate memdev functionality into core.
> 
> #1 is trivially accomplished with git mv. The file itself is renamed back to
> bus.c since the goal is to break up core functionality into multiple files, and
> so the name core.c doesn't make sense in that context.
> 
> #2 is also trivially accomplished via cut/paste.
> 
> #3 is slightly invasive in that it has certain functional changes to improve the
> existing interfaces and make them more generic. The rest of the change is
> cut/paste. This is also the only part of the series which has runtime functional
> change in that some interfaces are removed from cxl_pci, moved into cxl_core,
> and exported for other drivers to use.

FWIW all looks good to me and the new break-up of files seems sensible.
A Reviewed-by seems a bit excessive for this set, but we don't have a sanity-checked-by tag.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> 
> ---
> 
> Ben Widawsky (3):
>       cxl: Move cxl_core to new directory
>       cxl/core: Improve CXL core kernel docs
>       cxl/core: Move memdev management to core
> 
> Dan Williams (3):
>       cxl/core: Move pmem functionality
>       cxl/core: Move register mapping infrastructure
>       cxl/pci: Introduce cdevm_file_operations
> 
> 
>  Documentation/driver-api/cxl/memory-devices.rst |    8 
>  drivers/cxl/Makefile                            |    4 
>  drivers/cxl/core/Makefile                       |    8 
>  drivers/cxl/core/bus.c                          |  463 +----------------------
>  drivers/cxl/core/core.h                         |   20 +
>  drivers/cxl/core/memdev.c                       |  245 ++++++++++++
>  drivers/cxl/core/pmem.c                         |  204 ++++++++++
>  drivers/cxl/core/regs.c                         |  235 ++++++++++++
>  drivers/cxl/mem.h                               |   26 +
>  drivers/cxl/pci.c                               |  257 +------------
>  10 files changed, 791 insertions(+), 679 deletions(-)
>  create mode 100644 drivers/cxl/core/Makefile
>  rename drivers/cxl/{core.c => core/bus.c} (58%)
>  create mode 100644 drivers/cxl/core/core.h
>  create mode 100644 drivers/cxl/core/memdev.c
>  create mode 100644 drivers/cxl/core/pmem.c
>  create mode 100644 drivers/cxl/core/regs.c
> 
> base-commit: ff1176468d368232b684f75e82563369208bc371


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 5/6] cxl/pci: Introduce cdevm_file_operations
  2021-08-02 15:04   ` Jonathan Cameron
@ 2021-08-02 16:15     ` Dan Williams
  2021-08-02 16:30       ` Jonathan Cameron
  0 siblings, 1 reply; 12+ messages in thread
From: Dan Williams @ 2021-08-02 16:15 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-cxl, Ben Widawsky, Vishal L Verma, Schofield, Alison

On Mon, Aug 2, 2021 at 8:05 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Fri, 30 Jul 2021 13:07:22 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
> > In preparation for moving cxl_memdev allocation to the core, introduce
> > cdevm_file_operations to coordinate file operations shutdown relative to
> > driver data release.
> >
> > The motivation for moving cxl_memdev allocation to the core (beyond
> > better file organization of sysfs attributes in core/ and drivers in
> > cxl/), is that device lifetime is longer than module lifetime. The cxl_pci
> > module should be free to come and go without needing to coordinate with
> > devices that need the text associated with cxl_memdev_release() to stay
> > resident. The move will fix a use after free bug when looping driver
> > load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y.
> >
> > Another motivation for passing in file_operations to the core cxl_memdev
> > creation flow is to allow for alternate drivers, like unit test code, to
> > define their own ioctl backends.
> >
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>
> Hi Dan / Ben,
>
> One utterly trivial question inline, otherwise looks good.
>
> > ---
> >  drivers/cxl/mem.h |   15 ++++++++++++
> >  drivers/cxl/pci.c |   65 +++++++++++++++++++++++++++++++----------------------
> >  2 files changed, 53 insertions(+), 27 deletions(-)
> >
> > diff --git a/drivers/cxl/mem.h b/drivers/cxl/mem.h
> > index 8f02d02b26b4..77f4411c7c6a 100644
> > --- a/drivers/cxl/mem.h
> > +++ b/drivers/cxl/mem.h
> > @@ -34,6 +34,21 @@
> >   */
> >  #define CXL_MEM_MAX_DEVS 65536
> >
> > +/**
> > + * struct cdevm_file_operations - devm coordinated cdev file operations
> > + * @fops: typical file operations
>
> Why typical? Seems that simply saying "file operations" conveys the same
> information.  What is atypical here?

Hmm, I was thinking the need for an extra shutdown operation is
"atypical", but you're right this description does not add anything.
I'll change it too:

@fops: file operations that are synchronized against @shutdown

^ permalink raw reply	[flat|nested] 12+ messages in thread
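
With that one-line change folded in, the kernel-doc would presumably read as
follows (a sketch combining the hunk quoted earlier with the wording proposed
here; the respun patch itself is not part of this thread excerpt):

/**
 * struct cdevm_file_operations - devm coordinated cdev file operations
 * @fops: file operations that are synchronized against @shutdown
 * @shutdown: disconnect driver data
 *
 * @shutdown is invoked in the devres release path to disconnect any
 * driver instance data from @dev. It assumes synchronization with any
 * fops operation that requires driver data. After @shutdown an
 * operation may only reference @device data.
 */
struct cdevm_file_operations {
	struct file_operations fops;
	void (*shutdown)(struct device *dev);
};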

* Re: [PATCH v3 5/6] cxl/pci: Introduce cdevm_file_operations
  2021-08-02 16:15     ` Dan Williams
@ 2021-08-02 16:30       ` Jonathan Cameron
  0 siblings, 0 replies; 12+ messages in thread
From: Jonathan Cameron @ 2021-08-02 16:30 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl, Ben Widawsky, Vishal L Verma, Schofield, Alison

On Mon, 2 Aug 2021 09:15:10 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> On Mon, Aug 2, 2021 at 8:05 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Fri, 30 Jul 2021 13:07:22 -0700
> > Dan Williams <dan.j.williams@intel.com> wrote:
> >  
> > > In preparation for moving cxl_memdev allocation to the core, introduce
> > > cdevm_file_operations to coordinate file operations shutdown relative to
> > > driver data release.
> > >
> > > The motivation for moving cxl_memdev allocation to the core (beyond
> > > better file organization of sysfs attributes in core/ and drivers in
> > > cxl/), is that device lifetime is longer than module lifetime. The cxl_pci
> > > module should be free to come and go without needing to coordinate with
> > > devices that need the text associated with cxl_memdev_release() to stay
> > > resident. The move will fix a use after free bug when looping driver
> > > load / unload with CONFIG_DEBUG_KOBJECT_RELEASE=y.
> > >
> > > Another motivation for passing in file_operations to the core cxl_memdev
> > > creation flow is to allow for alternate drivers, like unit test code, to
> > > define their own ioctl backends.
> > >
> > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>  
> >
> > Hi Dan / Ben,
> >
> > One utterly trivial question inline, otherwise looks good.
> >  
> > > ---
> > >  drivers/cxl/mem.h |   15 ++++++++++++
> > >  drivers/cxl/pci.c |   65 +++++++++++++++++++++++++++++++----------------------
> > >  2 files changed, 53 insertions(+), 27 deletions(-)
> > >
> > > diff --git a/drivers/cxl/mem.h b/drivers/cxl/mem.h
> > > index 8f02d02b26b4..77f4411c7c6a 100644
> > > --- a/drivers/cxl/mem.h
> > > +++ b/drivers/cxl/mem.h
> > > @@ -34,6 +34,21 @@
> > >   */
> > >  #define CXL_MEM_MAX_DEVS 65536
> > >
> > > +/**
> > > + * struct cdevm_file_operations - devm coordinated cdev file operations
> > > + * @fops: typical file operations  
> >
> > Why typical? Seems that simply saying "file operations" conveys the same
> > information.  What is atypical here?  
> 
> Hmm, I was thinking the need for an extra shutdown operation is
> "atypical", but you're right this description does not add anything.
> I'll change it too:
> 
> @fops: file operations that are synchronized against @shutdown

Indeed, much better.

Jonathan

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2021-08-02 16:30 UTC | newest]

Thread overview: 12+ messages
2021-07-30 20:06 [PATCH v3 0/6] CXL core reorganization Dan Williams
2021-07-30 20:07 ` [PATCH v3 1/6] cxl: Move cxl_core to new directory Dan Williams
2021-07-31 16:35   ` [PATCH v4 " Dan Williams
2021-07-30 20:07 ` [PATCH v3 2/6] cxl/core: Improve CXL core kernel docs Dan Williams
2021-07-30 20:07 ` [PATCH v3 3/6] cxl/core: Move pmem functionality Dan Williams
2021-07-30 20:07 ` [PATCH v3 4/6] cxl/core: Move register mapping infrastructure Dan Williams
2021-07-30 20:07 ` [PATCH v3 5/6] cxl/pci: Introduce cdevm_file_operations Dan Williams
2021-08-02 15:04   ` Jonathan Cameron
2021-08-02 16:15     ` Dan Williams
2021-08-02 16:30       ` Jonathan Cameron
2021-07-30 20:07 ` [PATCH v3 6/6] cxl/core: Move memdev management to core Dan Williams
2021-08-02 15:13 ` [PATCH v3 0/6] CXL core reorganization Jonathan Cameron
