From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1754693AbbINKSz (ORCPT ); Mon, 14 Sep 2015 06:18:55 -0400
Received: from mail-wi0-f182.google.com ([209.85.212.182]:36364 "EHLO mail-wi0-f182.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751938AbbINKSx (ORCPT ); Mon, 14 Sep 2015 06:18:53 -0400
MIME-Version: 1.0
In-Reply-To: <20150911192517.GU8114@mtj.duckdns.org>
References: <20150910164946.GH8114@mtj.duckdns.org> <20150910202210.GL8114@mtj.duckdns.org> <20150911040413.GA18850@htj.duckdns.org> <55F25781.20308@redhat.com> <20150911145213.GQ8114@mtj.duckdns.org> <20150911163449.GS8114@mtj.duckdns.org> <20150911192517.GU8114@mtj.duckdns.org>
Date: Mon, 14 Sep 2015 15:48:51 +0530
Message-ID: 
Subject: Re: [PATCH 0/7] devcg: device cgroup extension for rdma resource
From: Parav Pandit 
To: Tejun Heo 
Cc: Doug Ledford , cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, lizefan@huawei.com, Johannes Weiner , Jonathan Corbet , james.l.morris@oracle.com, serge@hallyn.com, Haggai Eran , Or Gerlitz , Matan Barak , raindel@mellanox.com, akpm@linux-foundation.org, linux-security-module@vger.kernel.org
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Sep 12, 2015 at 12:55 AM, Tejun Heo wrote:
> Hello, Parav.
>
> On Fri, Sep 11, 2015 at 10:09:48PM +0530, Parav Pandit wrote:
>> > If you're planning on following what the existing memcg did in this
>> > area, it's unlikely to go well. Would you mind sharing what you have
>> > on mind in the long term? Where do you see this going?
>>
>> At least the current thoughts are: a central entity/authority monitors
>> the fail count and the new threshold count.
>> Fail count - like the other counters, indicates how many times
>> resource allocation has failed.
>> Threshold count - indicates the peak usage this resource has reached
>> (an application might not be able to poll on thousands of such
>> resource entries).
>> So based on the fail count and threshold count, it can tune the limits
>> further.
>
> So, regardless of the specific resource in question, implementing
> adaptive resource distribution requires more than simple thresholds
> and failcnts.

Maybe yes, but it is difficult to go through the whole design and shape it up right now. This is infrastructure being built with a few initial capabilities; I see it as a starting point rather than an end point.

> The very minimum would be a way to exert reclaim
> pressure and then a way to measure how much lack of a given resource
> is affecting the workload. Maybe it can adaptively lower the limits
> and then watch how often allocation fails but that's highly unlikely
> to be an effective measure as it can't do anything to hoarders and the
> frequency of allocation failure doesn't necessarily correlate with the
> amount of impact the workload is getting (it's not a measure of
> usage).

It can always kill the hoarding process(es) that hold up resources without using them. Such processes will eventually get restarted, but they will not be able to hoard as much, because they have been on the radar for hoarding and their limits have been reduced.

>
> This is what I'm wary about. The kernel-userland interface here is
> cut pretty low in the stack leaving most of arbitration and management
> logic in the userland, which seems to be what people wanted and that's
> fine, but then you're trying to implement an intelligent resource
> control layer which straddles across kernel and userland with those
> low level primitives which inevitably would increase the required
> interface surface as nobody has enough information.

We might be able to get the information as we go along.
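To make the fail-count/threshold idea concrete, here is a minimal Python sketch of the kind of user-space agent described above. Everything in it is hypothetical: the cgroupfs mount point and the file names rdma.failcnt and rdma.max_usage are illustrative placeholders (no such interface exists in the posted patches), and the tuning policy is only a stand-in.

```python
# Illustrative sketch only: a central user-space agent that tunes a
# cgroup's RDMA resource limit from two counters, a fail count and a
# peak-usage ("threshold") watermark.  The cgroupfs layout, the file
# names and the policy below are all hypothetical.
import os

CGROUP_ROOT = "/sys/fs/cgroup/rdma"  # hypothetical mount point

def read_counter(cgroup: str, name: str) -> int:
    """Read one integer counter file from a cgroup directory."""
    with open(os.path.join(CGROUP_ROOT, cgroup, name)) as f:
        return int(f.read())

def next_limit(failcnt: int, watermark: int, limit: int) -> int:
    """One tuning step: grow the limit when allocations are failing;
    shrink it toward the observed peak when it is far oversized, which
    also caps how much an idle hoarder can keep reserved."""
    if failcnt > 0:
        return limit + max(limit // 4, 1)  # workload hit the limit
    if watermark < limit // 2:
        return max(2 * watermark, 1)       # reclaim unused headroom
    return limit                           # leave it alone

def tune(cgroup: str, limit: int) -> int:
    """Compute the next limit for one cgroup from its counters."""
    failcnt = read_counter(cgroup, "rdma.failcnt")      # allocation failures
    watermark = read_counter(cgroup, "rdma.max_usage")  # peak usage seen
    return next_limit(failcnt, watermark, limit)
```

Such an agent would run tune() periodically for every cgroup and write the result back to a limit file. The sketch also illustrates the limitation raised above: a failcnt plus a peak-usage watermark is a crude feedback signal that says nothing about how much the workload is actually being hurt.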
Such an arbitration and management layer outside the kernel (instead of inside it) has more visibility: it can see multiple systems that are part of a single cluster, with processes spread across cgroups on each such system, while logic inside the kernel can only manage the processes of a single node across its cgroups.

> Just to illustrate the point, please think of the alsa interface. We
> expose hardware capabilities pretty much as-is leaving management and
> multiplexing to userland and there's nothing wrong with it. It fits
> better that way; however, we don't then go try to implement cgroup
> controller for PCM channels. To do any high-level resource
> management, you gotta do it where the said resource is actually
> managed and arbitrated.
>
> What's the allocation frequency you're expecting? It might be better
> to just let allocations themselves go through the agent that you're
> planning.

In that case we might need to build FUSE-style infrastructure. The frequency of RDMA resource allocation is certainly lower than that of read/write calls.

> You sure can use cgroup membership to identify who's asking
> tho. Given how the whole thing is architectured, I'd suggest thinking
> more about how the whole thing should turn out eventually.

Yes, I agree. At this point it is a software solution that provides resource isolation in a simple manner, with scope to become adaptive in the future.

> Thanks.
>
> --
> tejun