From: "Hefty, Sean"
To: Jason Gunthorpe
Cc: Tejun Heo, Doug Ledford, Parav Pandit, cgroups@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-rdma@vger.kernel.org, lizefan@huawei.com, Johannes Weiner,
    Jonathan Corbet, james.l.morris@oracle.com, serge@hallyn.com,
    Haggai Eran, Or Gerlitz, Matan Barak, raindel@mellanox.com,
    akpm@linux-foundation.org, linux-security-module@vger.kernel.org
Subject: RE: [PATCH 0/7] devcg: device cgroup extension for rdma resource
Date: Fri, 11 Sep 2015 20:06:31 +0000

> > Trying to limit the number of QPs that an app can allocate,
> > therefore, just limits how much of the address space an app can use.
> > There's no clear link between QP limits and HW resource limits,
> > unless you assume a very specific underlying implementation.
>
> Isn't that the point though? We have several vendors with hardware
> that does impose hard limits on specific resources. There is no way to
> avoid that, and ultimately, those exact HW resources need to be
> limited.

My point is that limiting the number of QPs that an app can allocate
doesn't necessarily say anything about the hardware resources the app
actually consumes. Is allocating 1000 QPs with 1 entry each better or
worse than 1 QP with 10,000 entries? Who knows?

> If we want to talk about abstraction, then I'd suggest something very
> general and simple - two limits:
>  '% of the RDMA hardware resource pool' (per device or per ep?)
>  'bytes of kernel memory for RDMA structures' (all devices)

Yes - this makes more sense to me.
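
To make the 1000-QPs-vs-one-deep-QP comparison above concrete, here is a
minimal userspace sketch against the standard libibverbs API. It is
illustrative only: error handling is trimmed, the queue depths are
arbitrary, and whether either pattern succeeds depends on the device's own
reported limits. The point is that a per-cgroup cap on QP count would
constrain the second pattern while leaving the first untouched, even though
the first may pin far more device and kernel memory.

/* Illustrative sketch: two allocation patterns with very different HW
 * footprints despite their QP counts.  Build with: gcc qp_count.c -libverbs
 * Queue depths are arbitrary and may exceed what a given HCA supports. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **devs = ibv_get_device_list(NULL);
	if (!devs || !devs[0]) {
		fprintf(stderr, "no RDMA device found\n");
		return 1;
	}

	struct ibv_context *ctx = ibv_open_device(devs[0]);
	if (!ctx) {
		fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devs[0]));
		return 1;
	}
	struct ibv_pd *pd = ibv_alloc_pd(ctx);
	struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
	if (!pd || !cq) {
		fprintf(stderr, "PD/CQ setup failed\n");
		return 1;
	}

	/* Pattern A: a single QP with deep queues (10,000 work requests). */
	struct ibv_qp_init_attr deep = {
		.send_cq = cq, .recv_cq = cq,
		.cap = { .max_send_wr = 10000, .max_recv_wr = 10000,
			 .max_send_sge = 1, .max_recv_sge = 1 },
		.qp_type = IBV_QPT_RC,
	};
	struct ibv_qp *qp_deep = ibv_create_qp(pd, &deep);

	/* Pattern B: many QPs, each with single-entry queues. */
	struct ibv_qp_init_attr shallow = {
		.send_cq = cq, .recv_cq = cq,
		.cap = { .max_send_wr = 1, .max_recv_wr = 1,
			 .max_send_sge = 1, .max_recv_sge = 1 },
		.qp_type = IBV_QPT_RC,
	};
	struct ibv_qp *qps[1000];
	int created = 0;
	while (created < 1000 && (qps[created] = ibv_create_qp(pd, &shallow)))
		created++;

	printf("deep QP: %s, shallow QPs created: %d\n",
	       qp_deep ? "ok" : "failed", created);

	/* A QP-count limit of, say, 10 rejects pattern B but not pattern A,
	 * even though pattern A may consume far more device resources. */
	while (created)
		ibv_destroy_qp(qps[--created]);
	if (qp_deep)
		ibv_destroy_qp(qp_deep);
	ibv_destroy_cq(cq);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	return 0;
}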
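
And for the two knobs suggested above, a rough sketch of how the accounting
could compose. All names here (rdma_cg_limits, rdma_cg_try_charge, etc.)
are invented for illustration and are not part of the devcg patches under
discussion; it only shows a percentage-of-pool charge per device combined
with a global byte cap on kernel memory.

/* Hypothetical sketch of the two proposed limits: a percentage of each
 * device's hardware resource pool plus a global byte cap on kernel memory
 * used for RDMA objects.  Names are invented for illustration only. */
#include <stdbool.h>
#include <stdio.h>

struct rdma_cg_limits {
	unsigned int hw_pool_pct;       /* % of the device's HW pool (per device) */
	unsigned long long kmem_limit;  /* bytes of kernel memory (all devices) */
};

struct rdma_cg_usage {
	unsigned int hw_pool_used;      /* HW objects charged on this device */
	unsigned int hw_pool_size;      /* device's total pool, from the driver */
	unsigned long long kmem_used;   /* kernel memory charged so far */
};

/* Charge one HW object plus its backing kernel memory; fail if either
 * limit would be exceeded. */
static bool rdma_cg_try_charge(const struct rdma_cg_limits *lim,
			       struct rdma_cg_usage *use,
			       unsigned long long kmem_bytes)
{
	unsigned long long pool_allowed =
		(unsigned long long)use->hw_pool_size * lim->hw_pool_pct / 100;

	if (use->hw_pool_used + 1 > pool_allowed)
		return false;
	if (use->kmem_used + kmem_bytes > lim->kmem_limit)
		return false;

	use->hw_pool_used += 1;
	use->kmem_used += kmem_bytes;
	return true;
}

int main(void)
{
	struct rdma_cg_limits lim = { .hw_pool_pct = 10, .kmem_limit = 1 << 20 };
	struct rdma_cg_usage use = { .hw_pool_size = 64 * 1024 };

	/* e.g. charging a QP whose queues consume ~64 KB of kernel memory */
	printf("charge %s\n",
	       rdma_cg_try_charge(&lim, &use, 64 * 1024) ? "ok" : "rejected");
	return 0;
}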