== IceCC and OE ==
It is possible to compile on a cluster of machines with OE, and quite easily so. To do so, add the following to your local.conf:


ICECC_PARALLEL_MAKE = "-j 24"
INHERIT += "icecc"


The general consensus is that ICECC_PARALLEL_MAKE should be between 2x and 4x the number of cores your CPU has. Too many jobs and your PC will be unusable; too few and you won't keep the cluster busy.
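For example, on a hypothetical 8-core build host, the middle of that range works out to roughly 24 parallel jobs:

# Hypothetical 8-core host: 8 cores x 3 = 24 parallel jobs
ICECC_PARALLEL_MAKE = "-j 24"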


=== Icecc and sstate interaction ===


Inheriting icecc will change the hashes for all tasks, so you cannot share sstate between builds that have icecc inherited and those that don't. The recommended solution is to add support for icecream to all builds as follows:


INHERIT += "icecc"
ICECC_DISABLED ??= "1"


Now, users that want to use icecream can enable it by setting ICECC_DISABLED = "0", but everyone can still share sstate.
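For example, a developer who wants distributed compiles then only needs this in their own local.conf:

# Opt in to icecream for this build; other builds keep the default "1" and still share sstate
ICECC_DISABLED = "0"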


=== Recipe build failures ===


Many recipes may fail to build under icecream for a variety of reasons. If you find a recipe that does so, you can add it to ICECC_USER_PACKAGE_BL in local.conf.
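For example (the recipe name below is a placeholder, not a known failure), a local.conf entry might look like:

# Compile this recipe locally rather than distributing it via icecream
ICECC_USER_PACKAGE_BL += "some-failing-recipe"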


We try to keep the system blacklist (ICECC_SYSTEM_PACKAGE_BL) up to date, but packages sometimes get missed (or aren't part of oe-core and don't get tested at all).


Note that any recipe that disables PARALLEL_MAKE by setting it to "" will automatically have icecream disabled.
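For example, a recipe that sets the following in its .bb file will always compile locally:

# An empty PARALLEL_MAKE also disables icecream for this recipe
PARALLEL_MAKE = ""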
 
=== Dedicating a scheduler ===
By default, icecream will pseudo-randomly assign one of the compile nodes to be the scheduler (the node that decides where all the compiles get distributed). In some cases, it may be desirable
to dedicate a specific computer to be the scheduler. You may want to do this if:
* The cluster is large. The CPU time required to do scheduling can be noticeable.
* The cluster has a lot of mobile devices (e.g. laptops). If one of these happens to be assigned as the scheduler and then goes offline, it can take a while to recover.
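
A minimal sketch of pointing the compile nodes at a dedicated scheduler, assuming a Debian/SUSE-style /etc/icecc/icecc.conf and a placeholder hostname:

# /etc/icecc/icecc.conf on each compile node
ICECC_NETNAME="oe"
# Set explicitly so the daemons don't have to find the scheduler by broadcast
ICECC_SCHEDULER_HOST="scheduler.example.com"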


=== Network Bandwidth Considerations ===
Icecream performance is very dependent on the speed of the network between the cluster nodes. As a rule of thumb, 1 Gbps or better is ideal, and with anything less than 100 Mbps it's probably not worth using at all. Wifi is of dubious value at any speed.


=== Credits ===
Icecream support was originally implemented by zecke, improved by Ifaistos, and most recently by Joshua Watt (JPEW).


=== Utilities ===
There are several useful utilities when dealing with icecream:
* [https://github.com/icecc/icemon icemon] - a GUI tool for monitoring the cluster
* [https://github.com/JPEWdev/icecream-sundae icecream-sundae] - a command-line tool for monitoring the cluster


[[Category:User]]
[[Category:FAQ]]


=== More Information ===
* [https://www.youtube.com/watch?v=VpK27pI64jQ&list=PLbzoR-pLrL6ol7Cf1g_4rsCda23OiLh8d&index=26 Sweeten Your Yocto Build Times with Icecream] - Joshua Watt, Embedded Linux Conference North America 2019 ([https://drive.google.com/file/d/1SKI9P86fx-IHLoststTn9kh22xaln6AA/view?usp=sharing slides])
