From mboxrd@z Thu Jan 1 00:00:00 1970
From: Haggai Eran
Subject: Re: [PATCH 5/7] devcg: device cgroup's extension for RDMA resource.
Date: Tue, 8 Sep 2015 11:22:56 +0300
Message-ID: <55EE9AE0.5030508@mellanox.com>
References: <1441658303-18081-1-git-send-email-pandit.parav@gmail.com>
 <1441658303-18081-6-git-send-email-pandit.parav@gmail.com>
In-Reply-To: <1441658303-18081-6-git-send-email-pandit.parav@gmail.com>
To: Parav Pandit <pandit.parav@gmail.com>, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-rdma@vger.kernel.org, tj@kernel.org, lizefan@huawei.com,
 hannes@cmpxchg.org, dledford@redhat.com
Cc: corbet@lwn.net, james.l.morris@oracle.com, serge@hallyn.com,
 ogerlitz@mellanox.com, matanb@mellanox.com, raindel@mellanox.com,
 akpm@linux-foundation.org, linux-security-module@vger.kernel.org
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101
 Thunderbird/38.2.0
MIME-Version: 1.0
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: 7bit

On 07/09/2015 23:38, Parav Pandit wrote:
> +/* RDMA resources from device cgroup perspective */
> +enum devcgroup_rdma_rt {
> +	DEVCG_RDMA_RES_TYPE_UCTX,
> +	DEVCG_RDMA_RES_TYPE_CQ,
> +	DEVCG_RDMA_RES_TYPE_PD,
> +	DEVCG_RDMA_RES_TYPE_AH,
> +	DEVCG_RDMA_RES_TYPE_MR,
> +	DEVCG_RDMA_RES_TYPE_MW,

I didn't see memory windows in dev_cgroup_files in patch 3. Is this
resource type actually used?

> +	DEVCG_RDMA_RES_TYPE_SRQ,
> +	DEVCG_RDMA_RES_TYPE_QP,
> +	DEVCG_RDMA_RES_TYPE_FLOW,
> +	DEVCG_RDMA_RES_TYPE_MAX,
> +};

> +struct devcgroup_rdma_tracker {
> +	int limit;
> +	atomic_t usage;
> +	int failcnt;
> +};

Have you considered using struct res_counter instead of open-coding the
limit/usage/failcnt triple?
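Something along these lines is what I had in mind. This is an untested
sketch against the old <linux/res_counter.h> interface (removed upstream
in 3.19 if I remember correctly, so it would need to be revived); the
devcgroup_rdma_* wrapper names are made up for illustration:

#include <linux/res_counter.h>

struct devcgroup_rdma {
	struct res_counter tracker[DEVCG_RDMA_RES_TYPE_MAX];
};

/* Link each counter to the parent cgroup's counter so that charges
 * propagate up the hierarchy.
 */
static void devcgroup_rdma_init(struct devcgroup_rdma *rdma,
				struct devcgroup_rdma *parent)
{
	int i;

	for (i = 0; i < DEVCG_RDMA_RES_TYPE_MAX; i++)
		res_counter_init(&rdma->tracker[i],
				 parent ? &parent->tracker[i] : NULL);
}

static int devcgroup_rdma_try_charge(struct devcgroup_rdma *rdma,
				     enum devcgroup_rdma_rt type, int num)
{
	struct res_counter *fail_at;

	/* Charges this level and every ancestor, and fails at the first
	 * level whose limit would be exceeded.
	 */
	return res_counter_charge(&rdma->tracker[type], num, &fail_at);
}

static void devcgroup_rdma_uncharge(struct devcgroup_rdma *rdma,
				    enum devcgroup_rdma_rt type, int num)
{
	res_counter_uncharge(&rdma->tracker[type], num);
}

You would also get usage, limit, and failcnt reporting through
res_counter_read_u64() (RES_USAGE, RES_LIMIT, RES_FAILCNT), and
res_counter_charge() gives the whole-hierarchy enforcement I would
expect below.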
> + * RDMA resource limits are hierarchical, so the highest configured limit of
> + * the hierarchy is enforced. Allowing resource limit configuration to default
> + * cgroup allows fair share to kernel space ULPs as well.

In what way is the highest configured limit of the hierarchy enforced?
I would expect all the limits along the hierarchy to be enforced, not
just the highest one.

> +int devcgroup_rdma_get_max_resource(struct seq_file *sf, void *v)
> +{
> +	struct dev_cgroup *dev_cg = css_to_devcgroup(seq_css(sf));
> +	int type = seq_cft(sf)->private;
> +	u32 usage;
> +
> +	if (dev_cg->rdma.tracker[type].limit == DEVCG_RDMA_MAX_RESOURCES) {
> +		seq_printf(sf, "%s\n", DEVCG_RDMA_MAX_RESOURCE_STR);
> +	} else {
> +		usage = dev_cg->rdma.tracker[type].limit;

If this is the resource limit, don't name it 'usage'.

> +		seq_printf(sf, "%u\n", usage);
> +	}
> +	return 0;
> +}

> +int devcgroup_rdma_get_max_resource(struct seq_file *sf, void *v)
> +{
> +	struct dev_cgroup *dev_cg = css_to_devcgroup(seq_css(sf));
> +	int type = seq_cft(sf)->private;
> +	u32 usage;
> +
> +	if (dev_cg->rdma.tracker[type].limit == DEVCG_RDMA_MAX_RESOURCES) {
> +		seq_printf(sf, "%s\n", DEVCG_RDMA_MAX_RESOURCE_STR);

I'm not sure hiding the actual number is good, especially in the
show_usage case.

> +	} else {
> +		usage = dev_cg->rdma.tracker[type].limit;
> +		seq_printf(sf, "%u\n", usage);
> +	}
> +	return 0;
> +}

> +void devcgroup_rdma_uncharge_resource(struct ib_ucontext *ucontext,
> +				      enum devcgroup_rdma_rt type, int num)
> +{
> +	struct dev_cgroup *dev_cg, *p;
> +	struct task_struct *ctx_task;
> +
> +	if (!num)
> +		return;
> +
> +	/* get cgroup of ib_ucontext it belong to, to uncharge
> +	 * so that when its called from any worker tasks or any
> +	 * other tasks to which this resource doesn't belong to,
> +	 * it can be uncharged correctly.
> +	 */
> +	if (ucontext)
> +		ctx_task = get_pid_task(ucontext->tgid, PIDTYPE_PID);
> +	else
> +		ctx_task = current;
> +	dev_cg = task_devcgroup(ctx_task);
> +
> +	spin_lock(&ctx_task->rdma_res_counter->lock);

Don't you need an rcu read lock and rcu_dereference() to access
ctx_task->rdma_res_counter here? (See the reader-side sketch below.)

> +	ctx_task->rdma_res_counter->usage[type] -= num;
> +
> +	for (p = dev_cg; p; p = parent_devcgroup(p))
> +		uncharge_resource(p, type, num);
> +
> +	spin_unlock(&ctx_task->rdma_res_counter->lock);
> +
> +	if (type == DEVCG_RDMA_RES_TYPE_UCTX)
> +		rdma_free_res_counter(ctx_task);
> +}
> +EXPORT_SYMBOL(devcgroup_rdma_uncharge_resource);

> +int devcgroup_rdma_try_charge_resource(enum devcgroup_rdma_rt type, int num)
> +{
> +	struct dev_cgroup *dev_cg = task_devcgroup(current);
> +	struct task_rdma_res_counter *res_cnt = current->rdma_res_counter;
> +	int status;
> +
> +	if (!res_cnt) {
> +		res_cnt = kzalloc(sizeof(*res_cnt), GFP_KERNEL);
> +		if (!res_cnt)
> +			return -ENOMEM;
> +
> +		spin_lock_init(&res_cnt->lock);
> +		rcu_assign_pointer(current->rdma_res_counter, res_cnt);

Don't you need the task lock to update rdma_res_counter here?
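I would expect something like the following untested sketch: publish the
pointer under task_lock(), and pin it with RCU on the reader side. The
function names here are made up; only task_lock()/task_unlock() and the
RCU primitives are existing kernel APIs.

#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/slab.h>

/* Allocate and publish the per-task counter under task_lock(), so a
 * concurrent reader sees either NULL or a fully initialised counter.
 */
static int task_rdma_res_counter_alloc(struct task_struct *task)
{
	struct task_rdma_res_counter *res_cnt;

	res_cnt = kzalloc(sizeof(*res_cnt), GFP_KERNEL);
	if (!res_cnt)
		return -ENOMEM;

	spin_lock_init(&res_cnt->lock);

	task_lock(task);
	rcu_assign_pointer(task->rdma_res_counter, res_cnt);
	task_unlock(task);

	return 0;
}

/* Reader side, matching the uncharge path quoted above: dereference
 * the pointer inside an RCU read-side critical section before taking
 * its lock, since it may be replaced or freed concurrently.
 */
static void task_rdma_res_uncharge(struct task_struct *ctx_task,
				   enum devcgroup_rdma_rt type, int num)
{
	struct task_rdma_res_counter *res_cnt;

	rcu_read_lock();
	res_cnt = rcu_dereference(ctx_task->rdma_res_counter);
	if (res_cnt) {
		spin_lock(&res_cnt->lock);
		res_cnt->usage[type] -= num;
		spin_unlock(&res_cnt->lock);
	}
	rcu_read_unlock();
}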
> +	}
> +
> +	/* synchronize with migration task by taking lock, to avoid
> +	 * race condition of performing cgroup resource migration
> +	 * in non atomic way with this task, which can leads to leaked
> +	 * resources in older cgroup.
> +	 */
> +	spin_lock(&res_cnt->lock);
> +	status = try_charge_resource(dev_cg, type, num);
> +	if (status)
> +		goto busy;
> +
> +	/* single task updating its rdma resource usage, so atomic is
> +	 * not required.
> +	 */
> +	current->rdma_res_counter->usage[type] += num;
> +
> +busy:
> +	spin_unlock(&res_cnt->lock);
> +	return status;
> +}
> +EXPORT_SYMBOL(devcgroup_rdma_try_charge_resource);

Regards,
Haggai