All the mail mirrored from lore.kernel.org
* Can anything be done about do_rootfs speed?
@ 2013-08-27 22:27 Paul D. DeRocco
  2013-08-27 23:25 ` Gary Thomas
  0 siblings, 1 reply; 14+ messages in thread
From: Paul D. DeRocco @ 2013-08-27 22:27 UTC
  To: yocto

I got really tired of waiting for builds on my lowly dual-core Atom
machine, so I went out and bought a nice fast machine with an i7-4770K and
a Samsung SSD. Full builds are now screechingly fast, over 10x compared to
the Atom.

But when making a tiny change, I still have to wait for do_rootfs. Since
this is a single task, it runs on a single core, so it only runs maybe
three times as fast. For my Gumstix stuff, it takes five minutes instead
of something like 15. That's a meaningful improvement, but it still seems
long when the task implementing the minor change took, oh, five seconds.
Since it's the one task that _always_ gets executed, it seems like a
bottleneck that should be addressed.

Is there any way, in the future, of breaking do_rootfs into multiple
threads, so they can take advantage of multiple cores? Or has something
like this been tried already, and found not to produce much of a speedup?
Or is the process intrinsically sequential?

-- 

Ciao,               Paul D. DeRocco
Paul                mailto:pderocco@ix.netcom.com 
 



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Can anything be done about do_rootfs speed?
  2013-08-27 22:27 Can anything be done about do_rootfs speed? Paul D. DeRocco
@ 2013-08-27 23:25 ` Gary Thomas
  2013-08-28  0:10   ` Paul D. DeRocco
  0 siblings, 1 reply; 14+ messages in thread
From: Gary Thomas @ 2013-08-27 23:25 UTC
  To: yocto

On 2013-08-27 16:27, Paul D. DeRocco wrote:
> I got really tired of waiting for builds on my lowly dual-core Atom
> machine, so I went out and bought a nice fast machine with an i7-4770K and
> a Samsung SSD. Full builds are now screechingly fast, over 10x compared to
> the Atom.
>
> But when making a tiny change, I still have to wait for do_rootfs. Since
> this is a single task, it runs on a single core, so it only runs maybe
> three times as fast. For my Gumstix stuff, it takes five minutes instead
> of something like 15. That's a meaningful improvement, but it still seems
> long when the task implementing the minor change took, oh, five seconds.
> Since it's the one task that _always_ gets executed, it seems like a
> bottleneck that should be addressed.
>
> Is there any way, in the future, of breaking do_rootfs into multiple
> threads, so they can take advantage of multiple cores? Or has something
> like this been tried already, and found not to produce much of a speedup?
> Or is the process intrinsically sequential?
>

As far as I understand, the 'do_rootfs' step in building an image is basically
equivalent to running "${PKG_MGR} install <all_required_packages>", where PKG_MGR
is your package management method of choice - ipk or rpm.  This seems to me to
be a very single-threaded process.

Perhaps you should think more about how you are using this.  If you don't need
to rebuild the whole image every time, maybe you can use the package management
tools instead?  For example, I routinely build images as well but I also try to
use 'opkg' as much as possible to manage package updates, etc.   This is a huge
time saver, especially when making small or incremental changes.  I only rely
on the full image builds when I want to "checkpoint" the state of the system.

-- 
------------------------------------------------------------
Gary Thomas                 |  Consulting for the
MLB Associates              |    Embedded world
------------------------------------------------------------



* Re: Can anything be done about do_rootfs speed?
  2013-08-27 23:25 ` Gary Thomas
@ 2013-08-28  0:10   ` Paul D. DeRocco
  2013-08-28  7:20     ` Martin Jansa
  2013-08-28 10:55     ` Gary Thomas
  0 siblings, 2 replies; 14+ messages in thread
From: Paul D. DeRocco @ 2013-08-28  0:10 UTC
  To: 'Gary Thomas'; +Cc: yocto

> From: Gary Thomas
> 
> As far as I understand, the 'do_rootfs' step in building an 
> image is basically
> equivalent to running "${PKG_MGR} install 
> <all_required_packages>", where PKG_MGR
> is your package management method of choice - ipk or rpm.  
> This seems to me to
> be a very single-threaded process.

If there's a way to command the package manager to install a package
without enforcing dependencies (Is that what opkg --nodeps does?), then
couldn't the package manager be invoked on one package at a time in n
threads, just like the other tasks are now run? I don't really have any
sense of how long it takes to install the packages, as opposed to building
the final tarball or hddimage and applying the permissions from the pseudo
database, which would certainly be single-threaded.

> Perhaps you should think more about how you are using this.  
> If you don't need
> to rebuild the whole image every time, maybe you can use the 
> package management
> tools instead?  For example, I routinely build images as well 
> but I also try to
> use 'opkg' as much as possible to manage package updates, 
> etc.   This is a huge
> time saver, especially when making small or incremental 
> changes.  I only rely
> on the full image builds when I want to "checkpoint" the 
> state of the system.

I'd like to try that, but I'm not sure how. If I've tweaked one recipe,
how do I get it to build it and package it, and then stop? Do I use
"bitbake -c package"? And then do I use "opkg -d" to manually install it
directly onto my SD card? If my rootfs is a loop mounted hddimage in a
FAT16 file (as it is on my Atom project), do I loop mount it on my build
system and install into that?

Installing directly to the card would be nice because copying the whole
damn rootfs to the card takes an annoying amount of time, too.

-- 

Ciao,               Paul D. DeRocco
Paul                mailto:pderocco@ix.netcom.com 




* Re: Can anything be done about do_rootfs speed?
  2013-08-28  0:10   ` Paul D. DeRocco
@ 2013-08-28  7:20     ` Martin Jansa
  2013-08-28  8:36       ` Paul D. DeRocco
  2013-08-28 10:55     ` Gary Thomas
  1 sibling, 1 reply; 14+ messages in thread
From: Martin Jansa @ 2013-08-28  7:20 UTC
  To: Paul D. DeRocco; +Cc: yocto


On Tue, Aug 27, 2013 at 05:10:42PM -0700, Paul D. DeRocco wrote:
> > From: Gary Thomas
> > 
> > As far as I understand, the 'do_rootfs' step in building an 
> > image is basically
> > equivalent to running "${PKG_MGR} install 
> > <all_required_packages>", where PKG_MGR
> > is your package management method of choice - ipk or rpm.  
> > This seems to me to
> > be a very single-threaded process.
> 
> If there's a way to command the package manager to install a package
> without enforcing dependencies (Is that what opkg --nodeps does?), then
> couldn't the package manager be invoked on one package at a time in n
> threads, just like the other tasks are now run? I don't really have any
> sense of how long it takes to install the packages, as opposed to building
> the final tarball or hddimage and applying the permissions from the pseudo
> database, which would certainly be single-threaded.
> 
> > Perhaps you should think more about how you are using this.  
> > If you don't need
> > to rebuild the whole image every time, maybe you can use the 
> > package management
> > tools instead?  For example, I routinely build images as well 
> > but I also try to
> > use 'opkg' as much as possible to manage package updates, 
> > etc.   This is a huge
> > time saver, especially when making small or incremental 
> > changes.  I only rely
> > on the full image builds when I want to "checkpoint" the 
> > state of the system.
> 
> I'd like to try that, but I'm not sure how. If I've tweaked one recipe,
> how do I get it to build it and package it, and then stop? Do I use
> "bitbake -c package"? And then do I use "opkg -d" to manually install it
> directly onto my SD card? If my rootfs is a loop mounted hddimage in a
> FAT16 file (as it is on my Atom project), do I loop mount it on my build
> system and install into that?
> 
> Installing directly to the card would be nice because copying the whole
> damn rootfs to the card takes an annoying amount of time, too.

Are you sure that you're not building some unnecessary IMAGE_FSTYPES?
The last time someone asked me why it takes so long, I added some debug
output to do_rootfs and found that only half of the time went to opkg
installing packages; the rest went to the various IMAGE_FSTYPES.

e.g. tar.bz2 takes very long without pbzip2 or lbzip2
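In that spirit, trimming IMAGE_FSTYPES in local.conf to only the artifacts you actually flash avoids the extra work entirely; a sketch, with made-up types:

```
# local.conf sketch: build only what you flash (types here are examples)
IMAGE_FSTYPES = "ext3"
# rather than accumulating several, e.g.:
# IMAGE_FSTYPES += "tar.bz2 hddimg jffs2"
```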

-- 
Martin 'JaMa' Jansa     jabber: Martin.Jansa@gmail.com



* Re: Can anything be done about do_rootfs speed?
  2013-08-28  7:20     ` Martin Jansa
@ 2013-08-28  8:36       ` Paul D. DeRocco
  2013-08-28  8:49         ` Samuel Stirtzel
  0 siblings, 1 reply; 14+ messages in thread
From: Paul D. DeRocco @ 2013-08-28  8:36 UTC
  To: 'Martin Jansa'; +Cc: yocto

> From: Martin Jansa
> 
> Are you sure that you're not building some unnecessary IMAGE_FSTYPES?

No, I'm not sure.

> The last time someone asked me why it takes so long, I added some debug
> output to do_rootfs and found that only half of the time went to opkg
> installing packages; the rest went to the various IMAGE_FSTYPES.
> 
> e.g. tar.bz2 takes very long without pbzip2 or lbzip2

Is there a standard way to use those in a build? Do I replace bzip2 with a
link to one of those? Or does Yocto build its own bzip2?

-- 

Ciao,               Paul D. DeRocco
Paul                mailto:pderocco@ix.netcom.com 




* Re: Can anything be done about do_rootfs speed?
  2013-08-28  8:36       ` Paul D. DeRocco
@ 2013-08-28  8:49         ` Samuel Stirtzel
  2013-08-28 11:30           ` Samuel Stirtzel
  0 siblings, 1 reply; 14+ messages in thread
From: Samuel Stirtzel @ 2013-08-28  8:49 UTC
  To: Paul D. DeRocco; +Cc: yocto@yoctoproject.org

2013/8/28 Paul D. DeRocco <pderocco@ix.netcom.com>:
>> From: Martin Jansa
>>
>> Are you sure that you're not building some unnecessary IMAGE_FSTYPES?
>
> No, I'm not sure.
>
>> The last time someone asked me why it takes so long, I added some debug
>> output to do_rootfs and found that only half of the time went to opkg
>> installing packages; the rest went to the various IMAGE_FSTYPES.
>>
>> e.g. tar.bz2 takes very long without pbzip2 or lbzip2
>
> Is there a standard way to use those in a build? Do I replace bzip2 with a
> link to one of those? Or does Yocto build its own bzip2?
>

Hi,

look/grep for IMAGE_FSTYPES; if there is a += "tar.bz2" or multiple
identical += lines, then you can be sure that do_rootfs is wasting
time.


A virtual package manager which composes the package database in a
multi-threaded way could be a silver bullet here.
Also pigz [1] and pbzip2 [2] could save some minutes / seconds
depending on the image size.


[1] http://zlib.net/pigz/
[2] http://compression.ca/pbzip2/



-- 
Regards
Samuel



* Re: Can anything be done about do_rootfs speed?
  2013-08-28  0:10   ` Paul D. DeRocco
  2013-08-28  7:20     ` Martin Jansa
@ 2013-08-28 10:55     ` Gary Thomas
  2013-08-28 11:29       ` Paul Barker
                         ` (2 more replies)
  1 sibling, 3 replies; 14+ messages in thread
From: Gary Thomas @ 2013-08-28 10:55 UTC
  To: Paul D. DeRocco; +Cc: yocto

On 2013-08-27 18:10, Paul D. DeRocco wrote:
>> From: Gary Thomas
>>
>> As far as I understand, the 'do_rootfs' step in building an
>> image is basically
>> equivalent to running "${PKG_MGR} install
>> <all_required_packages>", where PKG_MGR
>> is your package management method of choice - ipk or rpm.
>> This seems to me to
>> be a very single-threaded process.
>
> If there's a way to command the package manager to install a package
> without enforcing dependencies (Is that what opkg --nodeps does?), then
> couldn't the package manager be invoked on one package at a time in n
> threads, just like the other tasks are now run? I don't really have any
> sense of how long it takes to install the packages, as opposed to building
> the final tarball or hddimage and applying the permissions from the pseudo
> database, which would certainly be single-threaded.
>
>> Perhaps you should think more about how you are using this.
>> If you don't need
>> to rebuild the whole image every time, maybe you can use the
>> package management
>> tools instead?  For example, I routinely build images as well
>> but I also try to
>> use 'opkg' as much as possible to manage package updates,
>> etc.   This is a huge
>> time saver, especially when making small or incremental
>> changes.  I only rely
>> on the full image builds when I want to "checkpoint" the
>> state of the system.
>
> I'd like to try that, but I'm not sure how. If I've tweaked one recipe,
> how do I get it to build it and package it, and then stop? Do I use
> "bitbake -c package"? And then do I use "opkg -d" to manually install it
> directly onto my SD card? If my rootfs is a loop mounted hddimage in a
> FAT16 file (as it is on my Atom project), do I loop mount it on my build
> system and install into that?

Not quite - you build the packages (recipes) on your build host, but manage
them directly on your target hardware.  Of course, this assumes that your
build host and target hardware are network connected.

You can [re]build any single package like this:
   % bitbake <recipe-name>
The '-c' option is used to select a particular build phase, not what you
are looking for here.

In order to use the package management on your target device, you need
to export the package set.  This is typically done using HTTP, so you'll
need some sort of "web server".  If you don't already have one running
on your build host (or wherever you want to host the packages), I'd suggest
using 'lighttpd' which is very simple to set up.  Once you have it running,
simply export the package set.  For example, I run lighttpd on my build host
and export the packages for a particular machine/build like this:
   % ln -s ${BUILD}/tmp/deploy/ipk /var/www/lighttpd/BOARD-feeds
As you can see, I use 'ipk' packaging, which on the board is handled by the
'opkg' tool.
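If you need a starting point for lighttpd itself, a minimal configuration along these lines serves the feed directory (the paths and port are assumptions, not from this thread):

```
# minimal lighttpd.conf sketch (hypothetical paths)
server.document-root = "/var/www/lighttpd"
server.port          = 80
dir-listing.activate = "enable"   # allow browsing the feed directories
```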

One key thing is on your build host you must remember to update/rebuild the
package database(s) whenever you make any changes, be it building new packages
or just rebuilding extant ones.  This is done using a special recipe:
   % bitbake package-index
I typically mix these like this:
   % bitbake some-recipe && bitbake package-index
Notice that you can't put them both on the same command, e.g.
   % bitbake some-recipe package-index
as the 'package-index' recipe can only be built when all other recipes are
complete and that won't happen if you try them both at the same time.

In the case of 'ipk' packages, you'll need to set up the board to make use of
the exported package sets.  There are a number of ways to do this, but in the
case of 'ipk' you can do it all in a single file.  Here's an example on my
SabreLite (i.MX6 ARM system):
   root@sabrelite:~# cat /etc/opkg/base-feeds.conf
   src/gz poky_am-all http://192.168.1.125/sabrelite-feeds/all
   src/gz poky_am-armv7a-vfp-neon http://192.168.1.125/sabrelite-feeds/armv7a-vfp-neon
   src/gz poky_am-sabrelite http://192.168.1.125/sabrelite-feeds/sabrelite

This file tells 'opkg' where to find the various packages which have been broken down
into board specific, architecture specific and general packages.  This is how Poky/Yocto
is setting things up, so this file is just making those connections.  In the example
above, 192.168.1.125 is the IP address of my build host (which you can specify using
any DNS or IP notation) and 'sabrelite-feeds' is the link to my board specific packages,
set up as above.

To use this set up on the board, first you need to update the board's copy of the
package databases.  This is used to figure out what packages are available, what they
contain/provide and what the package dependencies are.
   root@sabrelite:~# opkg update
Once the databases are up to date, you can install/remove/... as needed.
   root@sabrelite:~# opkg install some-new-package

I'm sure there are many details I've left out, so feel free to ask questions
as needed.  I also explicitly left out any discussion of 'rpm' packaging as I
don't use that on my targets and really don't know the details.  Hopefully
someone will document all of this in great detail some day. (in fact I filed a
bug to this end many years ago...)

>
> Installing directly to the card would be nice because copying the whole
> damn rootfs to the card takes an annoying amount of time, too.
>

In my mind, the key is to do this level of copying as rarely as possible and
just use the package management tools the rest of the time.  It can be a huge
time saver, for example my build host is at my main desk in the US and many of
my target boards are actually in the UK.  It's quite painful to transfer large
(complete system) images "across the pond" whereas the 'opkg update;opkg install xxx'
runs very quickly and keeps the whole process manageable :-)

-- 
------------------------------------------------------------
Gary Thomas                 |  Consulting for the
MLB Associates              |    Embedded world
------------------------------------------------------------



* Re: Can anything be done about do_rootfs speed?
  2013-08-28 10:55     ` Gary Thomas
@ 2013-08-28 11:29       ` Paul Barker
  2013-09-02 12:26       ` Burton, Ross
  2013-10-02 16:37       ` Trevor Woerner
  2 siblings, 0 replies; 14+ messages in thread
From: Paul Barker @ 2013-08-28 11:29 UTC
  To: Gary Thomas; +Cc: Yocto discussion list, Paul D. DeRocco

On 28 August 2013 11:55, Gary Thomas <gary@mlbassoc.com> wrote:
>
> Not quite - you build the packages (recipes) on your build host, but manage
> them directly on your target hardware.  Of course, this assumes that your
> build host and target hardware are network connected.
>

I threw together a minimal Raspberry Pi system image for a hobbyist
group I run and do exactly this. I wrote a recipe to create the
required feed config, it may be a useful template for others:

https://bitbucket.org/homebrewtech/meta-mmmpi/src/master/recipes-core/mmmpi-feed/mmmpi-feed.bb

You probably need to change the distro name, list of architectures and
the URLs, but the structure is fine.

-- 
Paul Barker

Email: paul@paulbarker.me.uk
http://www.paulbarker.me.uk



* Re: Can anything be done about do_rootfs speed?
  2013-08-28  8:49         ` Samuel Stirtzel
@ 2013-08-28 11:30           ` Samuel Stirtzel
  2013-09-04 23:50             ` Paul D. DeRocco
  0 siblings, 1 reply; 14+ messages in thread
From: Samuel Stirtzel @ 2013-08-28 11:30 UTC
  To: Paul D. DeRocco; +Cc: yocto@yoctoproject.org

2013/8/28 Samuel Stirtzel <s.stirtzel@googlemail.com>:
> 2013/8/28 Paul D. DeRocco <pderocco@ix.netcom.com>:
>>> From: Martin Jansa
>>>
>>> Are you sure that you're not building some unnecessary IMAGE_FSTYPES?
>>
>> No, I'm not sure.
>>
>>> The last time someone asked me why it takes so long, I added some debug
>>> output to do_rootfs and found that only half of the time went to opkg
>>> installing packages; the rest went to the various IMAGE_FSTYPES.
>>>
>>> e.g. tar.bz2 takes very long without pbzip2 or lbzip2
>>
>> Is there a standard way to use those in a build? Do I replace bzip2 with a
>> link to one of those? Or does Yocto build its own bzip2?
>>
>
> Hi,
>
> look/grep for IMAGE_FSTYPES; if there is a += "tar.bz2" or multiple
> identical += lines, then you can be sure that do_rootfs is wasting
> time.
>
>
> A virtual package manager which only composes the package database in
> a multi-threaded way could be seen as a silver bullet here.
> Also pigz [1] and pbzip2 [2] could save some minutes / seconds
> depending on the image size.
>
>
> [1] http://zlib.net/pigz/
> [2] http://compression.ca/pbzip2/
>

Forgot to mention something important...

If you change:

COMPRESS_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}.rootfs.${type} >
${IMAGE_NAME}.rootfs.${type}.gz"
COMPRESS_CMD_bz2 = "bzip2 -f -k ${IMAGE_NAME}.rootfs.${type}"

in [...]/meta/classes/image_types.bbclass, then you can try pbzip2 / pigz.
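For instance, the swapped-in lines might look like this (a sketch; pigz and pbzip2 are drop-in replacements that accept the same flags, but verify against your copy of image_types.bbclass):

```
COMPRESS_CMD_gz = "pigz -f -9 -c ${IMAGE_NAME}.rootfs.${type} > ${IMAGE_NAME}.rootfs.${type}.gz"
COMPRESS_CMD_bz2 = "pbzip2 -f -k ${IMAGE_NAME}.rootfs.${type}"
```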



-- 
Regards
Samuel



* Re: Can anything be done about do_rootfs speed?
  2013-08-28 10:55     ` Gary Thomas
  2013-08-28 11:29       ` Paul Barker
@ 2013-09-02 12:26       ` Burton, Ross
  2013-10-02 16:37       ` Trevor Woerner
  2 siblings, 0 replies; 14+ messages in thread
From: Burton, Ross @ 2013-09-02 12:26 UTC
  To: Gary Thomas; +Cc: yocto@yoctoproject.org, Paul D. DeRocco

On 28 August 2013 11:55, Gary Thomas <gary@mlbassoc.com> wrote:
>
> In the case of 'ipk' packages, you'll need to set up the board to make use
> of
> the exported package sets.  There are a number of ways to do this, but in
> the
> case of 'ipk' you can do it all in a single file.  Here's an example on my
> SabreLite (i.MX6 ARM system):
>   root@sabrelite:~# cat /etc/opkg/base-feeds.conf
>   src/gz poky_am-all http://192.168.1.125/sabrelite-feeds/all
>   src/gz poky_am-armv7a-vfp-neon
> http://192.168.1.125/sabrelite-feeds/armv7a-vfp-neon
>   src/gz poky_am-sabrelite http://192.168.1.125/sabrelite-feeds/sabrelite

meta-oe has a distro-feed-configs recipe that can do this bit for you
if you set a few variables in your configuration.

Ross



* Re: Can anything be done about do_rootfs speed?
  2013-08-28 11:30           ` Samuel Stirtzel
@ 2013-09-04 23:50             ` Paul D. DeRocco
  2013-09-05  6:56               ` Nicolas Dechesne
  0 siblings, 1 reply; 14+ messages in thread
From: Paul D. DeRocco @ 2013-09-04 23:50 UTC
  To: 'Samuel Stirtzel'; +Cc: yocto

> From: Samuel Stirtzel
> 
> If you change:
> 
> COMPRESS_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}.rootfs.${type} >
> ${IMAGE_NAME}.rootfs.${type}.gz"
> COMPRESS_CMD_bz2 = "bzip2 -f -k ${IMAGE_NAME}.rootfs.${type}"
> 
> in [...]/meta/classes/image_types.bbclass, then you can try 
> pbzip2 / pigz.

This raises a question that has come up for me before: is there a way to
override something in a .bbclass, without just editing the file and
therefore losing it whenever new metadata is released? I really have
little sense of the order in which things are read or obeyed in the whole
bitbake process. Those variables look like they're read by the
get_imagecmds() script; is there an opportunity for a recipe to change the
value of COMPRESS_CMD_gz or _bz2 after it's defined, but before that
script gets called?

-- 

Ciao,               Paul D. DeRocco
Paul                mailto:pderocco@ix.netcom.com 




* Re: Can anything be done about do_rootfs speed?
  2013-09-04 23:50             ` Paul D. DeRocco
@ 2013-09-05  6:56               ` Nicolas Dechesne
  0 siblings, 0 replies; 14+ messages in thread
From: Nicolas Dechesne @ 2013-09-05  6:56 UTC
  To: Paul D. DeRocco; +Cc: Yocto list discussion


On Thu, Sep 5, 2013 at 1:50 AM, Paul D. DeRocco <pderocco@ix.netcom.com> wrote:

> > From: Samuel Stirtzel
> >
> > If you change:
> >
> > COMPRESS_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}.rootfs.${type} >
> > ${IMAGE_NAME}.rootfs.${type}.gz"
> > COMPRESS_CMD_bz2 = "bzip2 -f -k ${IMAGE_NAME}.rootfs.${type}"
> >
> > in [...]/meta/classes/image_types.bbclass, then you can try
> > pbzip2 / pigz.
>
> This raises a question that has come up for me before: is there a way to
> override something in a .bbclass, without just editing the file and
> therefore losing it whenever new metadata is released?
>

yes, you can ;-)


>  I really have
> little sense of the order in which things are read or obeyed in the whole
> bitbake process. Those variables look like they're read by the
> get_imagecmds() script; is there an opportunity for a recipe to change the
> value of COMPRESS_CMD_gz or _bz2 after it's defined, but before that
> script gets called?
>

There are two kinds of 'files': configuration files and recipes (.bb,
.bbclass, .bbappend). Configuration files are read in the order defined in
meta/conf/bitbake.conf (see toward the end, where all the .conf files are
included). That order dictates "who overrides what". Then you can use the
various operators (=, ?=, +=, ...).

For recipes, my understanding is that they are parsed line by line from
the beginning of the .bb file to the end. When the parser encounters a
'require'/'include' or an 'inherit', it parses the corresponding file at
that point.
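A small illustration of how those operators combine (a sketch; the variable name is made up):

```
# BitBake assignment operators, in parse order
FOO ?= "default"   # weak default: used only if nothing else sets FOO
FOO  = "hard"      # plain assignment: overrides the ?= default
FOO += "extra"     # appends with a separating space
# effective value: "hard extra"
```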

In your specific case, COMPRESS_CMD_gz is set in
meta/classes/image_types.bbclass, which is inherited by image.bbclass. So
if you want to override it, you need to do it *after* the 'inherit image'
in your own image .bb file (or 'inherit core-image', if that's what you
use).

You can see how bitbake parsed a variable with:

   bitbake -e <image> | grep ^VARIABLE

E.g. in my image, after adding:

   inherit core-image
   COMPRESS_CMD_gz = "my_own_gz"

I get:

   $ bitbake -e my-generic-image | grep ^COMPRESS_CMD_gz
   COMPRESS_CMD_gz="my_own_gz"



* Re: Can anything be done about do_rootfs speed?
  2013-08-28 10:55     ` Gary Thomas
  2013-08-28 11:29       ` Paul Barker
  2013-09-02 12:26       ` Burton, Ross
@ 2013-10-02 16:37       ` Trevor Woerner
  2013-10-02 19:20         ` Gary Thomas
  2 siblings, 1 reply; 14+ messages in thread
From: Trevor Woerner @ 2013-10-02 16:37 UTC
  To: Gary Thomas; +Cc: yocto@yoctoproject.org

On 28 August 2013 06:55, Gary Thomas <gary@mlbassoc.com> wrote:
> Hopefully
> someone will document all of this in great detail some day. (in fact I filed
> a
> bug to this end many years ago...)

It would appear that
https://bugzilla.yoctoproject.org/show_bug.cgi?id=1088 can now be
closed as a result of commit 581778c52493b662f449bbbed36453f161501c18
in git://git.yoctoproject.org/yocto-docs.



* Re: Can anything be done about do_rootfs speed?
  2013-10-02 16:37       ` Trevor Woerner
@ 2013-10-02 19:20         ` Gary Thomas
  0 siblings, 0 replies; 14+ messages in thread
From: Gary Thomas @ 2013-10-02 19:20 UTC
  To: Trevor Woerner; +Cc: yocto@yoctoproject.org

On 2013-10-02 10:37, Trevor Woerner wrote:
> On 28 August 2013 06:55, Gary Thomas <gary@mlbassoc.com> wrote:
>> Hopefully
>> someone will document all of this in great detail some day. (in fact I filed
>> a
>> bug to this end many years ago...)
>
> It would appear that
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=1088 can now be
> closed as a result of commit 581778c52493b662f449bbbed36453f161501c18
> in git://git.yoctoproject.org/yocto-docs.

Technically, yes, but the section on IPK based systems is incredibly
thin.  I would be hard pressed to figure this out if I didn't know how
to do it already.  I can't really comment on the RPM section as I don't
use that method.

-- 
------------------------------------------------------------
Gary Thomas                 |  Consulting for the
MLB Associates              |    Embedded world
------------------------------------------------------------



end of thread, other threads:[~2013-10-02 19:20 UTC | newest]
