* dm-cache: please check/repair metadata
@ 2016-12-08  1:36 Ian Pilcher
  2016-12-08 10:23 ` Zdenek Kabelac
  0 siblings, 1 reply; 7+ messages in thread
From: Ian Pilcher @ 2016-12-08  1:36 UTC (permalink / raw)
  To: dm-devel

I have a dm-cache device that throws the following error when I try to
create it:

  device-mapper: cache: 253:20: unable to switch cache to write mode until repaired.
  device-mapper: cache: 253:20: switching cache to read-only mode
  device-mapper: table: 253:20: cache: Unable to get write access to metadata, please check/repair metadata.
  device-mapper: ioctl: error adding target to table


How can I go about checking/repairing the metadata?

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================


* Re: dm-cache: please check/repair metadata
  2016-12-08  1:36 dm-cache: please check/repair metadata Ian Pilcher
@ 2016-12-08 10:23 ` Zdenek Kabelac
  2016-12-08 14:53   ` Ian Pilcher
  0 siblings, 1 reply; 7+ messages in thread
From: Zdenek Kabelac @ 2016-12-08 10:23 UTC (permalink / raw)
  To: Ian Pilcher, dm-devel

On 8 Dec 2016 at 02:36, Ian Pilcher wrote:
> I have a dm-cache device that throws the following error when I try to
> create it:
>
>  device-mapper: cache: 253:20: unable to switch cache to write mode until
> repaired.
>  device-mapper: cache: 253:20: switching cache to read-only mode
>  device-mapper: table: 253:20: cache: Unable to get write access to metadata,
> please check/repair metadata.
>  device-mapper: ioctl: error adding target to table
>
> How can I go about checking/repairing the metadata?
>


Hi

I'm sorry, but you've provided so little information that there is no way
to give you a useful hint.

We need to know the kernel version, OS (distribution), lvm2 version, the
version of the cache_check tools, and likely more (dmesg/messages logs).

You also need to attach the output of 'dmsetup table', 'dmsetup status',
and 'lsblk'.
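
Something along these lines would gather most of it (a rough sketch only;
the exact tool options and package names may differ per distribution, so
adjust as needed):

   # versions of kernel, lvm2 and the cache tools
   uname -r
   lvm version
   cache_check --version    # from device-mapper-persistent-data (Fedora/RHEL)
   # device-mapper tables and block device layout
   dmsetup table
   dmsetup status
   lsblk
   # recent kernel messages around the failure
   dmesg | tail -n 100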

You may get better results by opening a bug in Bugzilla at:
https://bugzilla.redhat.com/enter_bug.cgi?product=LVM%20and%20device-mapper


Regards

Zdenek


* Re: dm-cache: please check/repair metadata
  2016-12-08 10:23 ` Zdenek Kabelac
@ 2016-12-08 14:53   ` Ian Pilcher
  2016-12-18  0:34     ` Ian Pilcher
  0 siblings, 1 reply; 7+ messages in thread
From: Ian Pilcher @ 2016-12-08 14:53 UTC (permalink / raw)
  To: Zdenek Kabelac, dm-devel

On 12/08/2016 04:23 AM, Zdenek Kabelac wrote:
> I'm sorry, but you've provided so little information that there is no way
> to give you a useful hint.

Actually you did.  :-)  I was just asking what tool(s) I should be using
to "check/repair" the metadata.  (Google is oddly silent on the
subject.)

Your reference to "cache_check" was the hint I needed to get started.

Running cache_repair against the metadata device gives me this error:

   transaction_manager::new_block() couldn't allocate new block

I strongly suspect that my metadata device is too small.  It was sized
with the algorithm that I posted to this list about a year ago:

   https://www.redhat.com/archives/dm-devel/2015-November/msg00221.html

Looking at the source code for cache_metadata_size, I see that it adds
additional space for "hints", which the old algorithm didn't account
for.

Assuming that my suspicion is correct, is there any straightforward way
to recover this cache device?  I do need to reclaim the storage used by
the origin device, so I'm guessing that my best course of action will be
to simply recreate the cache device with a sufficiently large metadata
device.

Thanks!

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================


* Re: dm-cache: please check/repair metadata
  2016-12-08 14:53   ` Ian Pilcher
@ 2016-12-18  0:34     ` Ian Pilcher
  2016-12-18 12:44       ` Zdenek Kabelac
  0 siblings, 1 reply; 7+ messages in thread
From: Ian Pilcher @ 2016-12-18  0:34 UTC (permalink / raw)
  To: dm-devel

On 12/08/2016 08:53 AM, Ian Pilcher wrote:
> Running cache_repair against the metadata device gives me this error:
>
>   transaction_manager::new_block() couldn't allocate new block
>
> I strongly suspect that my metadata device is too small.  It was sized
> with the algorithm that I posted to this list about a year ago:
>
>   https://www.redhat.com/archives/dm-devel/2015-November/msg00221.html
>
> Looking at the source code for cache_metadata_size, I see that it adds
> additional space for "hints", which the old algorithm didn't account
> for.

I finally got around to testing my hypothesis, and I can confirm that
the size of the metadata device is indeed the problem.  With a larger
metadata device, cache_repair succeeds, and I am able to assemble the
cache device.
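
For the record, the recovery was essentially along these lines (the device
names are placeholders and the exact invocation is reconstructed from memory,
so double-check against the cache_check/cache_repair man pages):

   # create a new, sufficiently large metadata device first, then:
   cache_check /dev/mapper/old-cache-meta                  # confirm the damage
   cache_repair -i /dev/mapper/old-cache-meta -o /dev/mapper/new-cache-meta
   cache_check /dev/mapper/new-cache-meta                  # verify the result
   # finally, reload the cache target pointing at the new metadata device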

So I obviously need to change the formula that I'm using to calculate
the size of the metadata device, which raises the question ... what is the
CANONICAL formula for doing this?

lvmcache(7) says, "The size of this LV should be 1000 times smaller
than the cache data LV, with a minimum size of 8MiB."  But this is
definitely *not* the formula used by cache_metadata_size, and
cache_metadata_size seems to assume that hints will never be larger
than 4 bytes.
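
For a concrete (made-up) example, here is what the lvmcache(7) rule of thumb
gives for 512 GiB of cache data:

   # lvmcache(7) rule of thumb: data size / 1000, with a minimum of 8 MiB
   data_mib=$((512 * 1024))        # 512 GiB of cache data, expressed in MiB
   meta_mib=$((data_mib / 1000))
   [ "$meta_mib" -lt 8 ] && meta_mib=8
   echo "rule-of-thumb metadata size: ${meta_mib} MiB"     # ~524 MiB here

A fixed ratio like that obviously cannot track a per-chunk cost, since the
result of a per-chunk formula depends on the chunk size, which is presumably
why the two disagree.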

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================


* Re: dm-cache: please check/repair metadata
  2016-12-18  0:34     ` Ian Pilcher
@ 2016-12-18 12:44       ` Zdenek Kabelac
  2016-12-18 15:51         ` Ian Pilcher
  0 siblings, 1 reply; 7+ messages in thread
From: Zdenek Kabelac @ 2016-12-18 12:44 UTC (permalink / raw)
  To: Ian Pilcher, dm-devel

On 18 Dec 2016 at 01:34, Ian Pilcher wrote:
> On 12/08/2016 08:53 AM, Ian Pilcher wrote:
>> Running cache_repair against the metadata device gives me this error:
>>
>>   transaction_manager::new_block() couldn't allocate new block
>>
>> I strongly suspect that my metadata device is too small.  It was sized
>> with the algorithm that I posted to this list about a year ago:
>>
>>   https://www.redhat.com/archives/dm-devel/2015-November/msg00221.html
>>
>> Looking at the source code for cache_metadata_size, I see that it adds
>> additional space for "hints", which the old algorithm didn't account
>> for.
>
> I finally got around to testing my hypothesis, and I can confirm that
> the size of the metadata device is indeed the problem.  With a larger
> metadata device, cache_repair succeeds, and I am able to assemble the
> cache device.
>
> So I obviously need to change the formula that I'm using to calculate
> the size of the metadata device, which raises the question ... what is the
> CANONICAL formula for doing this?
>
> lvmcache(7) says, "The size of this LV should be 1000 times smaller
> than the cache data LV, with a minimum size of 8MiB."  But this is
> definitely *not* the formula used by cache_metadata_size, and
> cache_metadata_size seems to assume that hints will never be larger
> than 4 bytes.

Hi

I'm not exactly sure what you are doing - are you maintaining your own
dm-cache volumes?  (Writing your own volume manager instead of using lvm2 -
what is lvm2 missing or doing wrong?)
Are you also going to write your own recovery support?

Otherwise it is normally lvm2's business to maintain the proper size of
cached LVs (and even to resize them when needed via monitoring).
This will be necessary once we support online resize of cache pools &
cached volumes.

From the lvm2 source code, it seems to use about 44 bytes per cache chunk,
plus some transaction overhead for the metadata (this could possibly be
lowered for the 'smq' policy...)
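
As a very rough back-of-the-envelope sketch (the fixed overhead constant here
is just a guess for illustration, not the exact value lvm2 uses):

   # ~44 bytes per cache chunk, plus some fixed transaction overhead
   cache_bytes=$((512 * 1024 * 1024 * 1024))   # example: 512 GiB of cache data
   chunk_bytes=$((256 * 1024))                 # example: 256 KiB chunk size
   nr_chunks=$((cache_bytes / chunk_bytes))
   meta_bytes=$((nr_chunks * 44 + 4 * 1024 * 1024))
   echo "metadata device: at least $((meta_bytes / 1024 / 1024)) MiB"   # ~92 MiB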

Regards

Zdenek


* Re: dm-cache: please check/repair metadata
  2016-12-18 12:44       ` Zdenek Kabelac
@ 2016-12-18 15:51         ` Ian Pilcher
  2016-12-18 19:37           ` Zdenek Kabelac
  0 siblings, 1 reply; 7+ messages in thread
From: Ian Pilcher @ 2016-12-18 15:51 UTC (permalink / raw)
  To: Zdenek Kabelac, dm-devel

> I'm not exactly sure what you are doing - are you maintaining your own
> dm-cache volumes?  (Writing your own volume manager instead of using lvm2 -
> what is lvm2 missing or doing wrong?)

Yup.  I'm continuing to use my "zodcache" system for assembling
dm-cache devices.

As to what lvm2 is missing or doing wrong ... It's not that there's
anything "wrong" with it.  It's just that I prefer to have a "simpler"
setup with cached PVs/VGs, rather than cached LVs.  I've also always
found the fact that LVM cache puts both the fast and slow devices into
the same VG to be counterintuitive.

These are really matters of personal preference, though.  I want to be
very, very clear that I don't think that there's anything wrong with
LVM.  (To be perfectly honest, my preferences are probably colored by
the fact that I first used bcache, so I'm used to its "model".)

> Are you also going to write your own recovery support?

Until a few days ago I wasn't aware that such a thing might even be
necessary.  :-)

> Otherwise it is normally lvm2's business to maintain the proper size of
> cached LVs (and even to resize them when needed via monitoring).
> This will be necessary once we support online resize of cache pools
> & cached volumes.

I didn't realize that LVM did that.  That's cool.

> From the lvm2 source code, it seems to use about 44 bytes per cache chunk,
> plus some transaction overhead for the metadata (this could possibly be
> lowered for the 'smq' policy...)

I came across this in one of my Google searches.

 
http://people.redhat.com/msnitzer/patches/upstream/dm-cache-for-v3.13/dm-cache-variable-hints.patch

I wonder if I should just assume 128-byte hints.  It would be a bit
wasteful, but hopefully it would be future-proof.
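
Just as rough arithmetic (example geometry only - 512 GiB of cache data in
256 KiB chunks), the extra metadata space that sizing for 128-byte hints
would need over 4-byte hints works out to:

   nr_chunks=$((512 * 1024 * 1024 * 1024 / (256 * 1024)))
   extra_bytes=$((nr_chunks * (128 - 4)))
   echo "~$((extra_bytes / 1024 / 1024)) MiB of extra hint space"   # ~248 MiB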

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================


* Re: dm-cache: please check/repair metadata
  2016-12-18 15:51         ` Ian Pilcher
@ 2016-12-18 19:37           ` Zdenek Kabelac
  0 siblings, 0 replies; 7+ messages in thread
From: Zdenek Kabelac @ 2016-12-18 19:37 UTC (permalink / raw)
  To: Ian Pilcher, dm-devel

On 18 Dec 2016 at 16:51, Ian Pilcher wrote:
>> I'm not exactly sure what are you doing, are you maintaining your own dm
>> cache volumes ?  (Writing your own volume manager and not using lvm2 -
>> what is lvm2 missing/doing wrong ?)
>
> Yup.  I'm continuing to use my "zodcache" system for assembling
> dm-cache devices.
>
> As to what lvm2 is missing or doing wrong ... It's not that there's
> anything "wrong" with it.  It's just that I prefer to have a "simpler"
> setup with cached PVs/VGs, rather than cached LVs.  I've also always
> found the fact that LVM cache puts both the fast and slow devices into
> the same VG to be counterintuitive.

It's not counterintuitive - lvm2 gives you FULL control over where to place
the cache data and metadata...

ATM lvm2 has no notion of what is 'slow' and what is 'fast'.
For the 'cache', both data & metadata are supposed to be on the 'fast' device.

Please check 'man lvmcache' for full description.

> These are really matters of personal preference, though.  I want to be
> very, very clear that I don't think that there's anything wrong with
> LVM.  (To be perfectly honest, my preferences are probably colored by
> the fact that I first used bcache, so I'm used to its "model".)
>
>> Are you also going to write your own recovery support?
>
> Until a few days ago I wasn't aware that such a thing might even be
> necessary.  :-)

Well, errors do happen, and having repair logic at hand is always useful.

There are also quite a few types of failures - so I probably just need to
wish you the best of luck rediscovering all of this.

Unless, that is, your 'zodcache' is meant as a quick 'experimental' sort of
device where you mostly do not care about the data becoming unusable.  In
that case, plain activation of a 'dm' device via a couple of ioctl calls
surely can't be beaten by lvm2 - but that's not where lvm2 usage is targeted.

Regards

Zdenek

