All the mail mirrored from lore.kernel.org
* Platform Inventory for redfish
@ 2019-06-14  3:48 Neeraj Ladkani
  2019-06-14 18:16 ` Ed Tanous
  0 siblings, 1 reply; 4+ messages in thread
From: Neeraj Ladkani @ 2019-06-14  3:48 UTC
  To: OpenBMC Maillist

Hi All, 

How do we manage platform inventory like CPU, memory, and PCIe devices, since the BMC may not always have presence pins for all components? For IPMI, we have SDRs that can be programmed with the correct SKU configurations. I am wondering what the solution is for Redfish (other than the BIOS sending inventory over USB Ethernet using Redfish).

Platform inventory includes:

1. Number and type of host CPUs
2. Number and type of memory modules
3. IO expander cards
4. SMBus devices on PCI cards

Thanks
Neeraj


* Re: Platform Inventory for redfish
  2019-06-14  3:48 Platform Inventory for redfish Neeraj Ladkani
@ 2019-06-14 18:16 ` Ed Tanous
  2019-06-14 18:59   ` Neeraj Ladkani
  0 siblings, 1 reply; 4+ messages in thread
From: Ed Tanous @ 2019-06-14 18:16 UTC
  To: Neeraj Ladkani, OpenBMC Maillist

On 6/13/19 8:48 PM, Neeraj Ladkani wrote:
> Hi All, 
> 
> How do we manage platform inventory like CPU, memory, and PCIe devices, since the BMC may not always have presence pins for all components? For IPMI, we have SDRs that can be programmed with the correct SKU configurations. I am wondering what the solution is for Redfish (other than the BIOS sending inventory over USB Ethernet using Redfish).
This varies widely depending on the architecture.  I can answer the
specifics for x86 servers and systems using entity-manager, but in
short, whatever exists on D-Bus with the correct interfaces is populated
in Redfish.

> 
> Platform inventory includes 
> 
> 1. Number and type of host CPUs

This is managed over a combination of PECI and SMBIOS/MDR tables.  PECI
can interrogate the CPU directly.  SMBIOS has more detailed information.
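
To make the SMBIOS side concrete, here is a hedged sketch (not the actual OpenBMC code) of how a consumer of the raw structure table could count Type 4 (Processor Information) entries; the table layout follows the DMTF SMBIOS spec, but the function name and usage are illustrative only:

```python
def count_processors(table: bytes) -> int:
    """Count SMBIOS Type 4 (Processor Information) structures in a raw
    structure table.  Each structure begins with a 4-byte header
    (type, length, handle); 'length' covers the whole formatted area
    including the header, and a string set terminated by a double NUL
    follows it."""
    count = 0
    i = 0
    while i + 4 <= len(table):
        stype, length = table[i], table[i + 1]
        if stype == 127:  # Type 127 marks end-of-table
            break
        if stype == 4:
            count += 1
        i += length  # skip the formatted area (includes the header)
        # Skip the string set: it ends with two consecutive NUL bytes
        while i + 1 < len(table) and table[i:i + 2] != b"\x00\x00":
            i += 1
        i += 2
    return count
```

The same walk, filtering on Type 17 instead, would enumerate memory devices.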

> 2. Number and type of memory modules 
Same answer as CPU.  PECI allows us to get inventory counts, presence,
and temperatures.  SMBIOS allows us to get more detailed information on
types, timings, and inventory information.

> 3. IO expander cards 
This is done over SMBus.  On Wolf Pass, we use the FruDevice application
here, which scans all buses for valid FRUs:
https://github.com/openbmc/entity-manager/blob/master/src/FruDevice.cpp
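
As a rough model of what "valid FRU" means in that scan, the IPMI FRU common header is 8 bytes whose sum must be zero modulo 256. A sketch of that check follows; the function name is made up, and FruDevice's real validation is more involved:

```python
def is_valid_fru_header(header: bytes) -> bool:
    """Check an 8-byte IPMI FRU common header: byte 0 carries the format
    version (low nibble must be 0x1), and the eight bytes must sum to
    zero modulo 256 (the last byte is a zero checksum)."""
    if len(header) != 8:
        return False
    if header[0] & 0x0F != 0x01:
        return False
    return sum(header) % 256 == 0
```

A bus scanner would read 8 bytes from each candidate EEPROM address and keep only the devices for which this check passes.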

> 4. SMBUS devices on PCI cards 

Same answer as IO expanders.  We check for a valid FRU; once we find it,
we check whether it's a card whose topology we understand, by
instantiating an instance of a config file like this:
https://github.com/openbmc/entity-manager/blob/master/configurations/AXX1P100HSSI_AIC.json

Most cards key off the product name field in the board section, but
there are ways to key off of other fields as well.

At the end of the day the "probe" statement in entity-manager is just a
D-Bus match, so if your platform has a different way of identifying that
a "card is present", just make that data available on D-Bus, and add an
appropriate match to entity-manager.
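
For illustration, a minimal entity-manager configuration keyed off a FRU product name could look like the following; the board name, sensor, and product-name string are hypothetical, but the "Probe" syntax mirrors the existing configs:

```json
{
    "Exposes": [
        {
            "Address": "0x4a",
            "Bus": "$bus",
            "Name": "Example AIC Temp",
            "Type": "TMP75"
        }
    ],
    "Name": "Example AIC",
    "Probe": "xyz.openbmc_project.FruDevice({'PRODUCT_PRODUCT_NAME': 'EXAMPLE AIC'})",
    "Type": "Board"
}
```

When the probe matches a FRU on D-Bus, the "Exposes" records are instantiated against the bus the FRU was found on.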

> 
> Thanks
> Neeraj
> 


* RE: Platform Inventory for redfish
  2019-06-14 18:16 ` Ed Tanous
@ 2019-06-14 18:59   ` Neeraj Ladkani
  2019-06-14 19:55     ` Ed Tanous
  0 siblings, 1 reply; 4+ messages in thread
From: Neeraj Ladkani @ 2019-06-14 18:59 UTC
  To: Ed Tanous, OpenBMC Maillist

Thanks Ed,

1. How does the BMC read SMBIOS tables, since they are managed by the host? 
2. PCIe devices are usually not powered on standby rails, so if the BMC needs to enable/disable sensors based on a certain PCIe card, it would need to wait until the platform is powered. This creates a lot of problems, especially if we are building platform SKUs from a common building block.
3. I think we need a feature where we can specify platform inventory in a JSON file that gets picked up by "probe". For example, to detect an M.2 drive, we could use something like this:

{
            "BehindSwitch": false,
            "DeviceClass": "MassStorageController",
            "DeviceName": "PM983",
            "Id": 6,
            "PhysicalLocation": {
                "LocationOrdinalValue": 5,
                "LocationType": "Slot"
            },
            "SMBusCount": 1,
            "SMBusInfo": [
                {
                    "BusNumber": 6,
                    "DeviceType": "NVME",
                    "Id": 1,
                    "MultiMaster": false,
                    "MuxCount": 2,
                    "MuxInfo": [
                        {
                            "Channel": 1,
                            "Id": 1,
                            "SlaveAddr": "0xE2"
                        },
                        {
                            "Channel": 0,
                            "Id": 2,
                            "SlaveAddr": "0xE6"
                        }
                    ],
                    "Protocol": "CSI",
                    "SlaveAddr": "0xD4"
                }
            ],
            "SlotType": "FullLength"
        },

Thanks
Neeraj

-----Original Message-----
From: Ed Tanous <ed.tanous@intel.com> 
Sent: Friday, June 14, 2019 11:17 AM
To: Neeraj Ladkani <neladk@microsoft.com>; OpenBMC Maillist <openbmc@lists.ozlabs.org>
Subject: Re: Platform Inventory for redfish

On 6/13/19 8:48 PM, Neeraj Ladkani wrote:
> Hi All,
> 
> How do we manage platform inventory like CPU, memory, and PCIe devices, since the BMC may not always have presence pins for all components? For IPMI, we have SDRs that can be programmed with the correct SKU configurations. I am wondering what the solution is for Redfish (other than the BIOS sending inventory over USB Ethernet using Redfish).
This varies widely depending on the architecture.  I can answer the specifics for x86 servers and systems using entity-manager, but in short, whatever exists on D-Bus with the correct interfaces is populated in Redfish.

> 
> Platform inventory includes
> 
> 1. Number and type of host CPUs

This is managed over a combination of PECI and SMBIOS/MDR tables.  PECI can interrogate the CPU directly.  SMBIOS has more detailed information.

> 2. Number and type of memory modules
Same answer as CPU.  PECI allows us to get inventory counts, presence, and temperatures.  SMBIOS allows us to get more detailed information on types, timings, and inventory information.

> 3. IO expander cards
This is done over SMBus.  On Wolf Pass, we use the FruDevice application here, which scans all buses for valid FRUs:
https://github.com/openbmc/entity-manager/blob/master/src/FruDevice.cpp

> 4. SMBUS devices on PCI cards

Same answer as IO expanders.  We check for a valid FRU; once we find it, we check whether it's a card whose topology we understand, by instantiating an instance of a config file like this:
https://github.com/openbmc/entity-manager/blob/master/configurations/AXX1P100HSSI_AIC.json

Most cards key off the product name field in the board section, but there are ways to key off of other fields as well.

At the end of the day the "probe" statement in entity-manager is just a D-Bus match, so if your platform has a different way of identifying that a "card is present", just make that data available on D-Bus, and add an appropriate match to entity-manager.

> 
> Thanks
> Neeraj
> 


* Re: Platform Inventory for redfish
  2019-06-14 18:59   ` Neeraj Ladkani
@ 2019-06-14 19:55     ` Ed Tanous
  0 siblings, 0 replies; 4+ messages in thread
From: Ed Tanous @ 2019-06-14 19:55 UTC
  To: Neeraj Ladkani, OpenBMC Maillist

On 6/14/19 11:59 AM, Neeraj Ladkani wrote:
> Thanks Ed,
> 
> 1. How does the BMC read SMBIOS tables, since they are managed by the host? 

The BIOS generally writes them on first boot, or on change, using IPMI.

> 2. PCIe devices are usually not powered on standby rails, so if the BMC needs to enable/disable sensors based on a certain PCIe card, it would need to wait until the platform is powered. This creates a lot of problems, especially if we are building platform SKUs from a common building block.
In general, the FRU devices used to identify a PCIe card tend to be
available on the 3.3V aux rail, which is up before the platform is
powered.  With that said, devices that don't exhibit this behavior are
covered by the implementation, and a DC rail state change will trigger a
rescan.

> 3. I think we need a feature where we can specify platform inventory in a JSON file that gets picked up by "probe". For example, to detect an M.2 drive, we could use something like this:
The current implementation can detect M.2 drives, although we hit some
trouble with the MCTP implementation, so there are no configs as of yet.

> 
> {
>             "BehindSwitch": false,
I'm assuming this refers to "behind a mux".  In general this is already
handled: the scanning will instantiate the correct mux devices in the
kernel, at which point a user would just need to point at the correct
bus, which is available via a symlink structure that leads to the
/dev/i2c-X device on the system.

>             "DeviceClass": "MassStorageController",
In entity-manager this field is called "Type".

>             "DeviceName": "PM983",
In entity-manager this field is called "Name".

>             "Id": 6,
Not really clear what this is doing.

>             "PhysicalLocation": {
>                 "LocationOrdinalValue": 5,
>                 "LocationType": "Slot"
>             },
This is currently managed by the slot naming convention in the
PCA95XXMux device type.  Naming a leg as "M2_Slot5" would give you the
behavior that you're trying to emulate.
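
A hedged sketch of that convention, borrowing the mux schema used by existing entity-manager configs (the address, bus number, and names here are invented):

```json
{
    "Address": "0x70",
    "Bus": 5,
    "ChannelNames": [
        "M2_Slot5",
        "M2_Slot6",
        "",
        ""
    ],
    "Name": "M.2 Mux",
    "Type": "PCA9545Mux"
}
```

Each non-empty entry in "ChannelNames" becomes a named symlink to the corresponding downstream i2c bus once the kernel mux device is instantiated.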


>             "SMBusCount": 1,
Not really clear what the "count" would allow you to do.
>             "SMBusInfo": [
The problem with what you've declared here is that it assumes that the
complete SMBus topology can be known for a given system.  This
assumption falls down when a PCIe add-in card can implement a mux, or an
optional midplane board implements a mux.

An example of one such card is here:
https://github.com/openbmc/entity-manager/blob/15c49902cf030a91a5b4bd325d185ee74b760359/configurations/AXX2PRTHDHD.json#L13

This card gives access to the MiniSAS HD ports through its own mux.
Given that it's a standard add-in card that could be installed in any
system, every system that's currently supported would need to add its
SMBus topology to the tree.

>                 {
>                     "BusNumber": 6,
>                     "DeviceType": "NVME",
>                     "Id": 1,
>                     "MultiMaster": false,
>                     "MuxCount": 2,
>                     "MuxInfo": [
>                         {
>                             "Channel": 1,
>                             "Id": 1,
>                             "SlaveAddr": "0xE2"
>                         },
>                         {
>                             "Channel": 0,
>                             "Id": 2,
>                             "SlaveAddr": "0xE6"
>                         }
>                     ],
>                     "Protocol": "CSI",
I've never heard of a BMC using the MIPI CSI protocol to get to
anything, but it could certainly be added.
It's also not really clear why the SMBus muxes would be underneath the
mass storage controller.  It seems like they would be separate.
>                     "SlaveAddr": "0xD4"
>                 }
>             ],
>             "SlotType": "FullLength"
>         },
> 

I would recommend taking a look at the implementation that's there
today.  You seem to have covered a number of features that already
exist (like mux declaration and topology management) but changed them
into your own schema.  That's fine if there are changes to be made to
make things work.  Currently the implementation does the following steps:
1. FruDevice scans for valid FRU devices
2. Baseboard entity gets instantiated via the FRU that's found.
2a. This may or may not instantiate MUX devices (if the baseboard
supports them) or simply label some of the lanes as "slot" lanes.  After
this step is complete, there will be named symlinks if the board
possesses M.2 Slots.
3. FruDevice rescans behind the mux, being careful not to duplicate
devices that exist ahead of the mux. One of the legs it sees is the M.2
leg, which contains a drive with an NVMe-MI compliant FRU.
4. EntityManager instantiates a copy of the "drive" which may be a
specific drive model, or a general purpose "NVMEDevice" entity.
5. Other system services can trigger the appropriate sensor scanning,
log monitoring, and other services as needed.
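
The rescan in step 3 can be modeled roughly as follows; this is a simplified sketch (the real FruDevice compares FRU contents rather than just addresses, and the bus/address values are invented):

```python
def rescan_behind_mux(root_devices, mux_scan):
    """Model of step 3: after a mux is instantiated, every device that
    was already visible on the parent (root) bus shows up again on each
    mux leg, so those addresses are skipped; only genuinely new devices
    behind the mux are reported as (leg_name, address) pairs."""
    seen = set(root_devices)
    found = []
    for leg, addresses in mux_scan.items():
        for addr in addresses:
            if addr in seen:
                continue  # device sits ahead of the mux; already scanned
            seen.add(addr)
            found.append((leg, addr))
    return found
```

Calling this with a root-bus FRU at 0x50 and a leg scan of {"M2_Slot5": [0x50, 0x6a]} would report only the new device at 0x6a on the M2_Slot5 leg.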

