From: Sebastian Roller <sebastian.roller@gmail.com>
To: Chris Murphy <lists@colorremedies.com>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: All files are damaged after btrfs restore
Date: Tue, 9 Mar 2021 18:02:34 +0100
Message-ID: <CALS+qHPDYFU2mrzudR8w057Vo33NZ=YsRWJUYmAFUih1pWbz-w@mail.gmail.com>
In-Reply-To: <CAJCQCtRWRq-AR1+hF03W0q+bG3sO618p6GzTtN1EWCJijzKe9g@mail.gmail.com>

> > Would it make sense to just try restore -t on every root I got
> > from btrfs-find-root, with all of the snapshots?
>
> Yes but I think you've tried this and you only got corrupt files or
> files with holes, so that suggests very recent roots are just bad due
> to the corruption, and older ones are pointing to a mix of valid and
> stale blocks and it just ends up in confusion.
>
> I think what you're after is 'btrfs restore -f'
>
>        -f <bytenr>
>            only restore files that are under specified subvolume root
> pointed by <bytenr>
>
> You can get this value from each 'tree root' a.k.a. the root of roots
> tree, what the super calls simply 'root'. That contains references for
> all the other trees' roots. For example:
>
>     item 12 key (257 ROOT_ITEM 0) itemoff 12936 itemsize 439
>         generation 97406 root_dirid 256 bytenr 30752768 level 1 refs 1
>         lastsnap 93151 byte_limit 0 bytes_used 2818048 flags 0x0(none)
>         uuid 4a0fa0d3-783c-bc42-bee1-ffcbe7325753
>         ctransid 97406 otransid 7 stransid 0 rtransid 0
>         ctime 1615103595.233916841 (2021-03-07 00:53:15)
>         otime 1603562604.21506964 (2020-10-24 12:03:24)
>         drop key (0 UNKNOWN.0 0) level 0
>     item 13 key (257 ROOT_BACKREF 5) itemoff 12911 itemsize 25
>         root backref key dirid 256 sequence 2 name newpool
>
>
>
> The name of this subvolume is newpool, the subvolid is 257, and its
> address is bytenr 30752768. That's the value to plug into btrfs
> restore -f
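
In other words, roughly like this (a sketch -- /dev/sdf1 and the
target directory are placeholders, and 30752768 is just the bytenr
from the example item above):

  # dump the root of roots and look for ROOT_ITEM / ROOT_BACKREF entries
  btrfs inspect-internal dump-tree -t 1 /dev/sdf1 | less
  # dry run (-D), ignore errors (-i), verbose (-v), only this subvolume (-f)
  btrfs restore -D -i -v -f 30752768 /dev/sdf1 /mnt/recovery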

I found 12 of these 'tree roots' on the volume. All the snapshots are
under the same tree root, which seems to be the subvolume I put the
snapshots into. So for the snapshots there is only one option to use
with btrfs restore -r. But I also found the data I'm looking for under
some of the other tree roots. One of them is clearly the subvolume the
backup went to (the source of the snapshots). But there is also a very
old snapshot (4 years old) that has a tree root of its own. The files
I restored from there are different, judging by checksums. They are
also corrupted, but differently. I have to do some more hexdumps to
figure out whether they are any better.
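
For the comparison I'll probably do something along these lines (the
two paths are placeholders for the same file restored from the two
different tree roots):

  sha256sum backup-root/somefile oldsnap-root/somefile
  # list the first differing byte offsets, if any
  cmp -l backup-root/somefile oldsnap-root/somefile | head
  # and look at a region by hand
  hexdump -C oldsnap-root/somefile | less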

> The thing is, it needs an intact chunk tree, i.e. not damaged and not
> too old, in order to translate that logical address into a physical
> device and physical address.
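
(Assuming the chunk tree is still readable at all, it can at least be
dumped for a quick sanity check -- /dev/sdf1 is a placeholder; tree id
3 is the chunk tree:)

  btrfs inspect-internal dump-tree -t 3 /dev/sdf1 | head -n 40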

> > > OK so you said there's an original and backup file system, are they
> > > both in equally bad shape, having been on the same controller? Are
> > > they both btrfs?
> >
> > The original / live file system was not btrfs but xfs. It is in a
> > different but equally bad state as the backup. We used bcache with a
> > write-back cache on an SSD which is now completely dead (it is not
> > recognized by any server anymore). To get the file system mounted I
> > ran xfs_repair. After that only 6% of the data was left, nearly all
> > of it in lost+found. I'm now trying to sort these files by type,
> > since the data itself looks OK. Unfortunately the surviving files
> > seem to be the oldest ones.
>
> Yeah writeback means the bcache device must survive and be healthy
> before any repair attempts should be made, even restore attempts. It
> also means you need hardware isolation, one SSD per HDD. Otherwise one
> SSD failing means the whole thing falls apart. The mode to use for
> read caching is writethrough.

Hmm. Lesson learned there. This server hosts the home directories of
our personal workstations, on which we work with quite large files
(> 5 GiB). So combining write-back caching with 2x 10 Gbit Ethernet
was quite good -- regarding performance.
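
Next time around the cache mode can at least be flipped per backing
device via sysfs (a sketch -- bcache0 is a placeholder for the actual
backing device):

  cat /sys/block/bcache0/bcache/cache_mode
  echo writethrough > /sys/block/bcache0/bcache/cache_mode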

>
> >         backup 0:
> >                 backup_tree_root:       122583415865344 gen: 825256     level: 2
> >                 backup_chunk_root:      141944043454464 gen: 825256     level: 2
>
>
> >         backup 1:
> >                 backup_tree_root:       122343302234112 gen: 825253     level: 2
> >                 backup_chunk_root:      141944034426880 gen: 825251     level: 2
>
> >         backup 2:
> >                 backup_tree_root:       122343762804736 gen: 825254     level: 2
> >                 backup_chunk_root:      141944034426880 gen: 825251     level: 2
>
> >         backup 3:
> >                 backup_tree_root:       122574011269120 gen: 825255     level: 2
> >                 backup_chunk_root:      141944034426880 gen: 825251     level: 2
>
> OK this is interesting. There's two chunk trees to choose from. So is
> the restore problem because older roots point to the older chunk tree
> which is already going stale, and just isn't assembling blocks
> correctly anymore? Or is it because the new chunk tree is bad?

Is there a way to choose which chunk tree is used for operations like
btrfs restore?
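
(For reference, backup root entries like the ones quoted above can be
dumped from the superblock -- the device name is a placeholder:)

  btrfs inspect-internal dump-super -f /dev/sdf1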

> On 72 TB, the last thing I want to recommend is chunk-recover. That'll
> take forever but it'd be interesting to know which of these chunk
> trees is good. The chunk tree is in the system block group. It's
> pretty tiny so it's a small target for being overwritten...and it's
> cow. So there isn't a reason to immediately start overwriting it. I'm
> thinking maybe the new one got interrupted by the failure and the old
> one is intact.

I already ran chunk-recover. It took two days to finish. But I used
btrfs-progs version 4.14 and it failed:

root@hikitty:/mnt$ btrfs rescue chunk-recover /dev/sdf1
Scanning: DONE in dev0
checksum verify failed on 99593231630336 found E4E3BDB6 wanted 00000000
checksum verify failed on 99593231630336 found E4E3BDB6 wanted 00000000
checksum verify failed on 124762809384960 found E4E3BDB6 wanted 00000000
checksum verify failed on 124762809384960 found E4E3BDB6 wanted 00000000
checksum verify failed on 124762809384960 found E4E3BDB6 wanted 00000000
checksum verify failed on 124762809384960 found E4E3BDB6 wanted 00000000
bytenr mismatch, want=124762809384960, have=0
open with broken chunk error
Chunk tree recovery failed

I could try again with a newer version(?), because with version 4.14
btrfs restore also failed.
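
If it's worth a retry, a current btrfs-progs can be built from git
without touching the distro package (rough sketch, assumes the usual
build dependencies are installed):

  git clone https://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
  cd btrfs-progs
  ./autogen.sh && ./configure --disable-documentation && make
  ./btrfs --version   # run the freshly built binary from the source tree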

> btrfs insp dump-t -t 1 /dev/sdi1
>
> And you'll need to look for a snapshot name in there, find its bytenr,
> and let's first see if just using that works. If it doesn't then maybe
> combining it with the next most recent root tree will work.

I am working backwards right now, using btrfs restore -f in
combination with -t. No success so far.
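
Roughly like this (a sketch -- the tree root bytenrs are the ones from
the superblock backups above, 30752768 is just the example subvolume
bytenr from earlier, and device/target are placeholders; -D keeps it a
dry run):

  for TREE in 122583415865344 122574011269120 122343762804736 122343302234112; do
      echo "=== trying tree root $TREE ==="
      btrfs restore -D -i -v -t "$TREE" -f 30752768 /dev/sdf1 /mnt/recovery
  done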

Sebastian
