From: "Darrick J. Wong" <djwong@kernel.org>
To: Ritesh Harjani <ritesh.list@gmail.com>
Cc: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>,
	willy@infradead.org, brauner@kernel.org, david@fromorbit.com,
	chandan.babu@oracle.com, akpm@linux-foundation.org,
	linux-fsdevel@vger.kernel.org, hare@suse.de,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-xfs@vger.kernel.org, mcgrof@kernel.org,
	gost.dev@samsung.com, p.raghav@samsung.com
Subject: Re: [PATCH v4 00/11] enable bs > ps in XFS
Date: Fri, 26 Apr 2024 22:05:17 -0700
Message-ID: <20240427050517.GC360898@frogsfrogsfrogs>
In-Reply-To: <87y18zxvpd.fsf@gmail.com>

On Sat, Apr 27, 2024 at 10:12:38AM +0530, Ritesh Harjani wrote:
> "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes:
> 
> > From: Pankaj Raghav <p.raghav@samsung.com>
> >
> > This is the fourth version of the series that enables block size > page size
> > (Large Block Size) in XFS. The context and motivation can be found in the
> > cover letter of RFC v1 [1]. We also recorded a talk about this work at LPC [3],
> > for anyone who would like more background.
> >
> > This series does not split a folio during truncation, even though we have
> > an API to do so, because of some issues with writeback. This is not a
> > blocker; folio splitting can be added as a future improvement once the
> > base patches are upstream (see patch 7).
> >
> > A lot of emphasis has been put on testing using kdevops. The testing has
> > been split into regression and progression.
> >
> > Regression testing:
> > In regression testing, we ran the whole test suite to check for
> > regressions on existing profiles due to the page cache changes.
> >
> > No regressions were found with the patches applied on top.
> >
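A baseline regression pass like the one described above is typically driven through fstests (here via kdevops); a minimal sketch of such a run, with illustrative device names and paths rather than the actual kdevops configuration used by the series, might look like this:

  # contents of xfstests' local.config (hypothetical devices)
  export TEST_DEV=/dev/vdb
  export TEST_DIR=/mnt/test
  export SCRATCH_DEV=/dev/vdc
  export SCRATCH_MNT=/mnt/scratch

  # run the "auto" group against the default 4k block size profile
  ./check -g auto
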
> > Progression testing:
> > For progression testing, we tested 8k, 16k, 32k and 64k block sizes.
> > To compare against existing support, an ARM VM with a 64k base page size
> > (without our patches) was used as a reference, so that failures actually
> > caused by LBS support on a 4k base page size system could be identified.
> >
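The LBS profiles could be exercised on a 4k page size machine by reformatting with a larger block size between runs, for example through fstests' MKFS_OPTIONS; again a sketch, not the kdevops setup used by the series:

  # re-run the same test group once per large block size (illustrative)
  for bs in 8k 16k 32k 64k; do
      MKFS_OPTIONS="-b size=$bs" ./check -g auto
  done
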
> > Some tests assume block size < page size and need to be fixed. I have a
> > tree with fixes for xfstests here [6], which I will be sending to the list
> > soon. Part of this has already been upstreamed to fstests.
> >
> > No new failures were found with the LBS support.
> 
> I just did some portability testing: I created an XFS filesystem with a
> 16k block size on an x86 VM (4k page size) and created some files along
> with their checksums. I then moved the disk to a Power VM with a 64k page
> size and mounted it there. The mount succeeded and all file checksums
> passed.
> 
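A rough sketch of what that first leg could look like; the device name, mount point, and file size are illustrative and were not part of the original message:

  mkfs.xfs -f -b size=16k /dev/vdb     # 16k blocks on the 4k page size x86 VM
  mount /dev/vdb /mnt1
  dd if=/dev/urandom of=/mnt1/file-1.img bs=1M count=10
  ( cd /mnt1 && sha256sum *.img > checksums )
  umount /mnt1
  # move the disk to the 64k page size Power VM, mount it, and run: sha256sum -c checksums
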
> Then I did the reverse: I created a filesystem on the Power VM with a 64k
> block size and created 10 files with 10MB of random data each. I then
> hot-unplugged the device from the Power VM, plugged it into the x86 VM,
> and mounted it there.
> 
> <Logs of the 2nd operation>
> ~# mount /dev/vdk /mnt1/
> [   35.145350] XFS (vdk): EXPERIMENTAL: Filesystem with Large Block Size (65536 bytes) enabled.
> [   35.149858] XFS (vdk): Mounting V5 Filesystem 91933a8b-1370-4931-97d1-c21213f31f8f
> [   35.227459] XFS (vdk): Ending clean mount
> [   35.235090] xfs filesystem being mounted at /mnt1 supports timestamps until 2038-01-19 (0x7fffffff)
> ~# cd /mnt1/
> ~# sha256sum -c checksums 
> file-1.img: OK
> file-2.img: OK
> file-3.img: OK
> file-4.img: OK
> file-5.img: OK
> file-6.img: OK
> file-7.img: OK
> file-8.img: OK
> file-9.img: OK
> file-10.img: OK
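The Power-side setup for this second leg could have looked roughly like the sketch below (illustrative commands; the x86-side verification is the run quoted above):

  mkfs.xfs -f -b size=64k /dev/vdk     # 64k blocks on the 64k page size Power VM
  mount /dev/vdk /mnt1
  for i in $(seq 1 10); do
      dd if=/dev/urandom of=/mnt1/file-$i.img bs=1M count=10
  done
  ( cd /mnt1 && sha256sum file-*.img > checksums )
  umount /mnt1                         # hot-unplug, attach to the x86 VM, mount, verify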
> 
> So thanks for the nice portability this series offers :)

Yessss, it is awesome to see this coming together after many years!

--D

> -ritesh
> 
> 

Thread overview: 47+ messages
2024-04-25 11:37 [PATCH v4 00/11] enable bs > ps in XFS Pankaj Raghav (Samsung)
2024-04-25 11:37 ` [PATCH v4 01/11] readahead: rework loop in page_cache_ra_unbounded() Pankaj Raghav (Samsung)
2024-04-25 11:37 ` [PATCH v4 02/11] fs: Allow fine-grained control of folio sizes Pankaj Raghav (Samsung)
2024-04-25 18:07   ` Hannes Reinecke
2024-04-26 15:09   ` Darrick J. Wong
2024-04-25 11:37 ` [PATCH v4 03/11] filemap: allocate mapping_min_order folios in the page cache Pankaj Raghav (Samsung)
2024-04-25 19:04   ` Hannes Reinecke
2024-04-26 15:12   ` Darrick J. Wong
2024-04-28 20:59     ` Pankaj Raghav (Samsung)
2024-04-25 11:37 ` [PATCH v4 04/11] readahead: allocate folios with mapping_min_order in readahead Pankaj Raghav (Samsung)
2024-04-25 18:53   ` Matthew Wilcox
2024-04-25 11:37 ` [PATCH v4 05/11] mm: do not split a folio if it has minimum folio order requirement Pankaj Raghav (Samsung)
2024-04-25 20:10   ` Matthew Wilcox
2024-04-26  0:47     ` Luis Chamberlain
2024-04-26 23:46       ` Luis Chamberlain
2024-04-28  0:57         ` Luis Chamberlain
2024-04-29  3:56           ` Luis Chamberlain
2024-04-29 14:29             ` Zi Yan
2024-04-30  0:31               ` Luis Chamberlain
2024-04-30  0:49                 ` Luis Chamberlain
2024-04-30  2:43                 ` Zi Yan
2024-04-30 19:27                   ` Luis Chamberlain
2024-05-01  4:13                     ` Matthew Wilcox
2024-05-01 14:28                       ` Matthew Wilcox
2024-04-26 15:49     ` Pankaj Raghav (Samsung)
2024-04-25 11:37 ` [PATCH v4 06/11] filemap: cap PTE range to be created to i_size in folio_map_range() Pankaj Raghav (Samsung)
2024-04-25 20:24   ` Matthew Wilcox
2024-04-26 12:54     ` Pankaj Raghav (Samsung)
2024-04-25 11:37 ` [PATCH v4 07/11] iomap: fix iomap_dio_zero() for fs bs > system page size Pankaj Raghav (Samsung)
2024-04-26  6:22   ` Christoph Hellwig
2024-04-26 11:43     ` Pankaj Raghav (Samsung)
2024-04-27  5:12       ` Christoph Hellwig
2024-04-29 21:02         ` Pankaj Raghav (Samsung)
2024-04-27  3:26     ` Matthew Wilcox
2024-04-27  4:52       ` Christoph Hellwig
2024-04-25 11:37 ` [PATCH v4 08/11] xfs: use kvmalloc for xattr buffers Pankaj Raghav (Samsung)
2024-04-26 15:18   ` Darrick J. Wong
2024-04-28 21:06     ` Pankaj Raghav (Samsung)
2024-04-25 11:37 ` [PATCH v4 09/11] xfs: expose block size in stat Pankaj Raghav (Samsung)
2024-04-26 15:15   ` Darrick J. Wong
2024-04-25 11:37 ` [PATCH v4 10/11] xfs: make the calculation generic in xfs_sb_validate_fsb_count() Pankaj Raghav (Samsung)
2024-04-26 15:16   ` Darrick J. Wong
2024-04-25 11:37 ` [PATCH v4 11/11] xfs: enable block size larger than page size support Pankaj Raghav (Samsung)
2024-04-26 15:18   ` Darrick J. Wong
2024-04-27  4:42 ` [PATCH v4 00/11] enable bs > ps in XFS Ritesh Harjani
2024-04-27  5:05   ` Darrick J. Wong [this message]
2024-04-29 20:39   ` Pankaj Raghav (Samsung)
