From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753717AbbIKQe5 (ORCPT ); Fri, 11 Sep 2015 12:34:57 -0400
Received: from mail-yk0-f179.google.com ([209.85.160.179]:36194 "EHLO
	mail-yk0-f179.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753117AbbIKQey (ORCPT ); Fri, 11 Sep 2015 12:34:54 -0400
Date: Fri, 11 Sep 2015 12:34:49 -0400
From: Tejun Heo
To: Parav Pandit
Cc: Doug Ledford, cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	lizefan@huawei.com, Johannes Weiner, Jonathan Corbet,
	james.l.morris@oracle.com, serge@hallyn.com, Haggai Eran,
	Or Gerlitz, Matan Barak, raindel@mellanox.com,
	akpm@linux-foundation.org, linux-security-module@vger.kernel.org
Subject: Re: [PATCH 0/7] devcg: device cgroup extension for rdma resource
Message-ID: <20150911163449.GS8114@mtj.duckdns.org>
References: <20150908152340.GA13749@mtj.duckdns.org>
	<20150910164946.GH8114@mtj.duckdns.org>
	<20150910202210.GL8114@mtj.duckdns.org>
	<20150911040413.GA18850@htj.duckdns.org>
	<55F25781.20308@redhat.com>
	<20150911145213.GQ8114@mtj.duckdns.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Hello, Parav.

On Fri, Sep 11, 2015 at 09:56:31PM +0530, Parav Pandit wrote:
> Resource run away by application can lead to (a) kernel and (b) other
> applications left out with no resources situation.

Yeap, that's something this controller would be able to prevent to a
reasonable extent.

> Both the problems are the target of this patch set by accounting via cgroup.
>
> Performance contention can be resolved with higher level user space,
> which will tune it.

If individual applications are gonna be allowed to do that, what's to
prevent them from jacking up their limits?  So, I assume you're
thinking of a central authority overseeing distribution and enforcing
the policy through cgroups?

> Threshold and fail counters are on the way in follow on patch.

If you're planning on following what the existing memcg did in this
area, it's unlikely to go well.  Would you mind sharing what you have
in mind in the long term?  Where do you see this going?

Thanks.
--
tejun
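
[Archive note: a minimal sketch of the "central authority" model raised
above, in which a single privileged policy agent owns the cgroup
hierarchy and writes per-cgroup RDMA limits, so individual applications
cannot raise their own limits. The cgroup mount point, the "rdma.max"
file name, and the key=value limit syntax are assumptions for
illustration only; they are not the interface proposed in this patch
set.]

#!/usr/bin/env python3
# Sketch of a central policy agent enforcing per-cgroup RDMA limits.
# The cgroup root, the "rdma.max" file name and the key=value syntax
# are assumptions for illustration; the real interface is whatever the
# merged controller defines.
import os

CGROUP_ROOT = "/sys/fs/cgroup/rdma"          # assumed mount point

# Limits decided centrally, outside the applications themselves.
POLICY = {
    "tenant-a": {"mlx4_0": "hca_handle=64 hca_object=1000"},
    "tenant-b": {"mlx4_0": "hca_handle=16 hca_object=200"},
}

def apply_policy():
    for cgrp, devices in POLICY.items():
        path = os.path.join(CGROUP_ROOT, cgrp)
        os.makedirs(path, exist_ok=True)      # create the child cgroup
        with open(os.path.join(path, "rdma.max"), "w") as f:
            for dev, limits in devices.items():
                # one "<device> <limits>" line per RDMA device
                f.write(f"{dev} {limits}\n")

if __name__ == "__main__":
    apply_policy()      # run by the privileged agent, not by tenants

[In this model only the agent (root) has write access to the limit
files; applications placed in tenant-a or tenant-b can at most read
them, which is what keeps them from jacking up their own limits.]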