From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754006AbbIKTnu (ORCPT );
	Fri, 11 Sep 2015 15:43:50 -0400
Received: from quartz.orcorp.ca ([184.70.90.242]:41345 "EHLO quartz.orcorp.ca"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751540AbbIKTnq (ORCPT );
	Fri, 11 Sep 2015 15:43:46 -0400
Date: Fri, 11 Sep 2015 13:43:11 -0600
From: Jason Gunthorpe
To: "Hefty, Sean"
Cc: Tejun Heo , Doug Ledford , Parav Pandit ,
	"cgroups@vger.kernel.org" , "linux-doc@vger.kernel.org" ,
	"linux-kernel@vger.kernel.org" , "linux-rdma@vger.kernel.org" ,
	"lizefan@huawei.com" , Johannes Weiner , Jonathan Corbet ,
	"james.l.morris@oracle.com" , "serge@hallyn.com" , Haggai Eran ,
	Or Gerlitz , Matan Barak , "raindel@mellanox.com" ,
	"akpm@linux-foundation.org" , "linux-security-module@vger.kernel.org"
Subject: Re: [PATCH 0/7] devcg: device cgroup extension for rdma resource
Message-ID: <20150911194311.GA18755@obsidianresearch.com>
References: <20150908152340.GA13749@mtj.duckdns.org>
	<20150910164946.GH8114@mtj.duckdns.org>
	<20150910202210.GL8114@mtj.duckdns.org>
	<20150911040413.GA18850@htj.duckdns.org>
	<55F25781.20308@redhat.com>
	<20150911145213.GQ8114@mtj.duckdns.org>
	<1828884A29C6694DAF28B7E6B8A82373A903A586@ORSMSX109.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1828884A29C6694DAF28B7E6B8A82373A903A586@ORSMSX109.amr.corp.intel.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Broken-Reverse-DNS: no host name found for IP address 10.0.0.160
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 11, 2015 at 07:22:56PM +0000, Hefty, Sean wrote:
> Trying to limit the number of QPs that an app can allocate,
> therefore, just limits how much of the address space an app can use.
> There's no clear link between QP limits and HW resource limits,
> unless you assume a very specific underlying implementation.

Isn't that the point though? We have several vendors with hardware
that does impose hard limits on specific resources. There is no way to
avoid that, and ultimately, those exact HW resources need to be
limited.

If we want to talk about abstraction, then I'd suggest something very
general and simple - two limits:

 '% of the RDMA hardware resource pool' (per device or per ep?)
 'bytes of kernel memory for RDMA structures' (all devices)

That comfortably covers all the various kinds of hardware we support
in a reasonable fashion, unless there really is a reason why we need
to constrain exactly and precisely PD/QP/MR/AH (I can't think of one
off hand).

The 'RDMA hardware resource pool' is a vendor-driver-device specific
thing, with no generic definition beyond something that doesn't fit in
the other limit.

Jason