Date: Wed, 17 Jun 2015 17:47:51 +0800
From: juncheng bai
To: Ilya Dryomov
Cc: idryomov@redhat.com, Alex Elder, Josh Durgin, Guangliang Zhao, jeff@garzik.org, yehuda@hq.newdream.net, Sage Weil, elder@inktank.com, linux-kernel@vger.kernel.org, Ceph Development
Subject: Re: [PATCH RFC] storage:rbd: make the size of request is equal to the size of the object
Message-ID: <55814247.4090301@unitedstack.com>

On 2015/6/17 16:24, Ilya Dryomov wrote:
> On Wed, Jun 17, 2015 at 6:04 AM, juncheng bai wrote:
>> Hi.
>> Yeah, you are right: even with the default max_segments the request
>> size can reach the object size, because the bi_phys_segments of a
>> bio can be recounted. It is just a possibility, not a guarantee.
>>
>> I want to fully understand bi_phys_segments; I hope you can give me
>> some more information, thanks.
>>
>> The test information is shown below.
>>
>> The systemtap script:
>>
>> global greq=0;
>>
>> probe kernel.function("bio_attempt_back_merge")
>> {
>>     greq = pointer_arg(2);
>> }
>>
>> probe kernel.function("bio_attempt_back_merge").return
>> {
>>     printf("after req addr:%p req segments:%d req offset:%lu req length:%lu\n",
>>            greq,
>>            @cast(greq, "request")->nr_phys_segments,
>>            @cast(greq, "request")->__sector * 512,
>>            @cast(greq, "request")->__data_len);
>> }
>>
>> probe kernel.function("blk_mq_start_request")
>> {
>>     printf("req addr:%p nr_phys_segments:%d, offset:%lu len:%lu\n",
>>            pointer_arg(1),
>>            @cast(pointer_arg(1), "request")->nr_phys_segments,
>>            @cast(pointer_arg(1), "request")->__sector * 512,
>>            @cast(pointer_arg(1), "request")->__data_len);
>> }
>>
>> Test command:
>>
>> dd if=/dev/zero of=/dev/rbd0 bs=4M count=2 oflag=direct seek=100
>>
>> Case one:
>>
>> blk_queue_max_segments(q, 256);
>>
>> The output of stap:
>>
>> after req addr:0xffff880ff60a08c0 req segments:73 req offset:419430400 req length:2097152
>> after req addr:0xffff880ff60a08c0 req segments:73 req offset:419430400 req length:2097152
>> after req addr:0xffff880ff60a0a80 req segments:186 req offset:421527552 req length:1048576
>> req addr:0xffff880ff60a08c0 nr_phys_segments:73, offset:419430400 len:2097152
>> req addr:0xffff880ff60a0a80 nr_phys_segments:186, offset:421527552 len:1048576
>> req addr:0xffff880ff60a0c40 nr_phys_segments:232, offset:422576128 len:1048576
>>
>> after req addr:0xffff880ff60a0c40 req segments:73 req offset:423624704 req length:2097152
>> after req addr:0xffff880ff60a0c40 req segments:73 req offset:423624704 req length:2097152
>> after req addr:0xffff880ff60a0e00 req segments:186 req offset:425721856 req length:1048576
>> req addr:0xffff880ff60a0c40 nr_phys_segments:73, offset:423624704 len:2097152
>> req addr:0xffff880ff60a0e00 nr_phys_segments:186, offset:425721856 len:1048576
>> req addr:0xffff880ff60a0fc0 nr_phys_segments:232, offset:426770432 len:1048576
>>
>> Case two:
>>
>> blk_queue_max_segments(q, segment_size / PAGE_SIZE);
>>
>> The output of stap:
>>
>> after req addr:0xffff88101c9a0000 req segments:478 req offset:419430400 req length:4194304
>> req addr:0xffff88101c9a0000 nr_phys_segments:478, offset:419430400 len:4194304
>>
>> after req addr:0xffff88101c9a0000 req segments:478 req offset:423624704 req length:4194304
>> req addr:0xffff88101c9a0000 nr_phys_segments:478, offset:423624704 len:4194304
>>
>> 1. The max_sectors and max_segments settings together decide the
>>    maximum size of a request.
>> 2. We have already set max_sectors to the object size, so we should
>>    also set max_segments high enough that bios can actually merge
>>    into requests of that size.
>
> Yeah, I also tried to explain this in the commit description [1].
> Initially I had BIO_MAX_PAGES in there, and realistically I still
> think it's enough for most cases, but the discussion with you made me
> consider the readv/writev case, so I changed it in my patch to
> max_hw_sectors (i.e. segment_size / SECTOR_SIZE) - this ensures that
> max_segments will never be a limiting factor, even in theory.
>
> [1] https://github.com/ceph/ceph-client/commit/2d8006795564fbc0fa68d75758f605fe9f7a108e
>
> Thanks,
>
>                 Ilya

Yeah, I agree with you. Thanks.
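For the record, here is a minimal sketch of the limits setup we
converged on, modeled on your commit [1]. rbd_dev, rbd_obj_bytes() and
the queue q follow the names used in rbd_init_disk(), so treat it as an
illustration rather than the exact patch:

/* The RBD object size in bytes (4M by default). */
u64 segment_size = rbd_obj_bytes(&rbd_dev->header);

/* A single request never grows past one object. */
blk_queue_max_hw_sectors(q, segment_size / SECTOR_SIZE);

/*
 * Allow one segment per sector: a bio_vec can never cover less than
 * a sector, so max_segments can no longer be the limiting factor,
 * even for worst-case readv/writev bios.
 */
blk_queue_max_segments(q, segment_size / SECTOR_SIZE);

With the default 4M object size and 4K pages this allows 8192 segments
per request, well above the segment_size / PAGE_SIZE = 1024 that case
two above needed.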
----
juncheng bai