From: "Hefty, Sean"
To: Tejun Heo, Doug Ledford
Cc: Parav Pandit, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, lizefan@huawei.com, Johannes Weiner, Jonathan Corbet, james.l.morris@oracle.com, serge@hallyn.com, Haggai Eran, Or Gerlitz, Matan Barak, raindel@mellanox.com, akpm@linux-foundation.org, linux-security-module@vger.kernel.org
Subject: RE: [PATCH 0/7] devcg: device cgroup extension for rdma resource
Date: Fri, 11 Sep 2015 19:22:56 +0000
In-Reply-To: <20150911145213.GQ8114@mtj.duckdns.org>

> So, the existence of resource limitations is fine. That's what we
> deal with all the time. The usual problem with this sort of
> interface, which exposes implementation details directly to users,
> is that it severely limits engineering maneuvering space. You
> usually want your users to express their intentions, and a mechanism
> to arbitrate resources to satisfy those intentions (in a way more
> graceful than "we can't, maybe try later?"); otherwise, implementing
> any sort of high-level resource distribution scheme becomes painful,
> and usually the only thing possible is preventing runaway disasters.
> You don't want to pin unused resources permanently if there actually
> is contention around them, so usually all you can do with hard
> limits is overcommit them so that they at least prevent disasters.

I agree with Tejun that this proposal sits at the wrong level of abstraction. Consider the attempt to limit QPs (queue pairs): it's not clear what that actually accomplishes. Conceptually, a QP is little more than an addressable endpoint. It may or may not map to HW resources (on Intel NICs it does not). Even when HW resources do back the QP, the device is limited in how many QPs can realistically be active at any one time by how much caching is available on the NIC.

Limiting the number of QPs an app can allocate therefore just limits how much of that address space the app can use. There is no clear link between a QP limit and a HW resource limit, unless you assume a very specific underlying implementation.
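To make that concrete, here's a rough user-space sketch against the standard libibverbs API (illustration only, not code from this patch series; the missing cleanup and the deliberate QP leak are simplifications). All it does is create minimal RC QPs until ibv_create_qp() fails. The count it prints is exactly the quantity the proposed cgroup knob would cap, yet it says nothing about how many of those QPs the NIC could actually keep cached and active at once:

/* Build: gcc qp_count.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **devs;
	struct ibv_context *ctx;
	struct ibv_pd *pd;
	struct ibv_cq *cq;
	struct ibv_qp_init_attr attr = { 0 };
	int n = 0;

	devs = ibv_get_device_list(NULL);
	if (!devs || !devs[0])
		return 1;
	ctx = ibv_open_device(devs[0]);
	if (!ctx)
		return 1;
	pd = ibv_alloc_pd(ctx);
	cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
	if (!pd || !cq)
		return 1;

	/* Smallest possible reliable-connected QP. */
	attr.send_cq = cq;
	attr.recv_cq = cq;
	attr.qp_type = IBV_QPT_RC;
	attr.cap.max_send_wr = 1;
	attr.cap.max_recv_wr = 1;
	attr.cap.max_send_sge = 1;
	attr.cap.max_recv_sge = 1;

	/* Allocate endpoints until some limit (device cap, or a
	 * cgroup-imposed one) kicks in; QPs are leaked for brevity. */
	while (ibv_create_qp(pd, &attr))
		n++;

	printf("created %d QPs before hitting a limit\n", n);
	return 0;
}

Whatever number that loop reaches measures endpoint address space, not NIC cache pressure or any other contended HW resource.

- Sean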