[OE-core] [yocto] RFC: Reference updater filesystem
Jens Rehsack
rehsack at gmail.com
Mon Nov 30 18:20:38 UTC 2015
> 2015-11-30 14:10 GMT-02:00 Jens Rehsack <rehsack at gmail.com>:
>>
>>> On 23.11.2015 at 22:15, Mariano Lopez <mariano.lopez at intel.com> wrote:
>>>
>>> There has been interest in an image-based software updater in the Yocto Project. The proposed solution for an image-based updater is to use Stefano Babic's software updater (http://sbabic.github.io/swupdate). This software does a binary copy, so at least two partitions are needed; these would be the rootfs and the maintenance partition. The rootfs is the main partition used to boot during normal device operation, while the maintenance partition is used to update the main one.
>>>
>>> To update the system, the user has to connect to the device and boot into the maintenance partition; once there, the software updater will copy the new image to the rootfs partition. A final reboot into the rootfs is necessary to complete the upgrade.
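>>>
>>> A minimal sketch of that flow (device name and image path are only illustrative, not part of the proposal):
>>>
>>>   # running from the maintenance partition
>>>   ROOTFS_DEV=/dev/mmcblk0p4                      # example rootfs device
>>>   gunzip -c /tmp/new-image.ext4.gz | dd of="$ROOTFS_DEV" bs=4M
>>>   sync                                           # flush before rebooting
>>>   reboot                                         # boot back into the new rootfs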
>>>
>>> As mentioned before, the software will copy an image to the partition, so everything in that partition will be wiped out, including custom configurations. To avoid the loss of configuration I explored three different solutions:
>>> 1. Use a separate partition for the configuration.
>>> a. The pro of this method is that the partition is not touched during the update.
>>> b. The con is that the configuration is not directly in the rootfs (example: /etc).
>>>
>>> 2. Do the backup during the update.
>>> a. The pro is that the configuration is directly in the rootfs.
>>> b. The con is that if the update fails, the configuration would most likely be lost.
>>>
>>> 3. Have an OverlayFS for the rootfs or for the partition that holds the configuration.
>>> a. The pro is that the configuration is "directly" in the rootfs.
>>> b. The con is that a custom init is needed to guarantee the overlay is mounted before the boot process continues.
>>>
>>> With the above information, I'm proposing to use a separate partition for the configuration; this is more reliable and doesn't require big changes to the current architecture.
>>>
>>> So, the idea is to have 4 partitions on the media:
>>> 1. boot. This is the usual boot partition.
>>> 2. data. This will hold the configuration files. Not modified by updates.
>>> 3. maintenance. This partition will be used to update the rootfs.
>>> 4. rootfs. Partition used for normal operation.
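>>>
>>> A wic kickstart (.wks) sketch of that layout might look as follows (sizes are in MB; labels, sources and sizes are placeholders, not a tested image description):
>>>
>>>   part /boot --source bootimg-partition --fstype=vfat --label boot --size 64
>>>   part /data --fstype=ext4 --label data --size 128
>>>   part /maint --source rootfs --fstype=ext4 --label maintenance --size 512
>>>   part / --source rootfs --fstype=ext4 --label rootfs --size 512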
>>
>> That's what we currently have implemented and have had running in the field for a while, with a few small differences:
>>
>> 1) We don't use Stefano Babic's software updater, but our own script, which handles the initial software flash and later updates in a similar way - https://github.com/rdm-dev/meta-jens/tree/jethro/recipes-rdm/prd
>> 2) We have integrated the updater with an update service which can download the new image and install it based on a manifest (signature support comes with the next update) - https://github.com/rdm-dev/meta-jens/tree/jethro/recipes-rdm/system-image // http://www.netbsd.org/~sno/talks/nrpm/Moo-at-System-Image-Update.pdf
>> 3) We use
>>
>> boot
>> rootfs
>> maintfs
>> data
>>
>> This layout allows us to extend data to fill the entire storage, with known sizes for boot, rootfs and maintfs.
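>>
>> Roughly, growing data at first boot looks like this (partition numbers and device names are examples only):
>>
>>   # grow the last partition (data) to fill the device, then the filesystem
>>   parted -s /dev/mmcblk0 resizepart 4 100%   # data is partition 4 here
>>   resize2fs /dev/mmcblk0p4                   # extend ext4 to the new size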
>>
>> 4) Overlayfs with all services is implemented (update-wise, when coming from 3.10 to 3.18, or coming from 3.0 with unionfs to overlay ...) - https://github.com/rdm-dev/meta-jens/tree/jethro/recipes-core/initoverlay
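>>
>> The core of such an init is a mount like this (paths are examples, not the recipe's actual layout; syntax is for the builtin overlay of 3.18+ kernels):
>>
>>   # put a writable overlay from the data partition over the ro rootfs
>>   mkdir -p /data/overlay/upper /data/overlay/work
>>   mount -t overlay overlay \
>>     -o lowerdir=/,upperdir=/data/overlay/upper,workdir=/data/overlay/work \
>>     /mnt/newroot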
>>
>> Feel free to use that solution if you want.
> On 30.11.2015 at 17:54, Daniel. <danielhilst at gmail.com> wrote:
>
> Hi,
>
> Hey Jens, I was looking for an image upgrade solution and a factory
> reset solution using overlayfs. The idea is to have two partitions: one
> read-only with the factory image, the other to hold the changes made
> over time. The factory reset feature should be triggered by a hidden
> button that can be pressed with the help of a paper clip. I was
> thinking of using an init ramdisk to wipe out the rw partition, making
> the rootfs as clean as after an image installation. The upgrader tool
> should re-flash a new image to the rootfs; the old rootfs is lost. The
> configuration changes that have been held by overlayfs would be wiped
> out too; I hadn't thought about that, it is something to take into
> account.
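>
> Something like this in the init ramdisk would do the wipe (the button
> check and the device name are made up for the example):
>
>   # factory reset: recreate the filesystem on the rw overlay partition
>   if button_pressed; then              # hypothetical GPIO helper
>       mkfs.ext4 -F /dev/mmcblk0p2      # drops all overlay changes
>   fi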
>
> Are you using overlayfs? How is it going? What difficulties have you found?
Yes, we do. All the difficulties we found are solved in the referenced initoverlay
recipe. Maybe one fine day I'll write a blog post regarding that topic ;)
> Another solution would be using the Smart package manager to upgrade the
> rootfs, but this doesn't address my need for a factory reset.
How does that fit into a read-only rootfs?
> Please tell me more about your experience with overlayfs :)
It's more stable than unionfs. There was little effort in updating
systems from 3.10 with the overlay patch to 4.1 with overlay built into the
kernel (work dirs must be created - https://github.com/rdm-dev/meta-jens/blob/jethro/recipes-core/initoverlay/initoverlay/migrate2overlay.sh#L28-L30)
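
In essence the migration only has to add the work dirs next to the
existing upper dirs, roughly like this (directory names are illustrative,
not the recipe's actual layout):

  # overlay in 4.x kernels needs a workdir on the same fs as the upperdir
  for d in etc var home; do
      mkdir -p "/data/overlay/$d/work"
  done
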
Cheers
--
Jens Rehsack - rehsack at gmail.com