[OE-core] [PATCH V2] sqlite3: Revert ad601c7962 from 3.14.1 amalgamation package
Jussi Kukkonen
jussi.kukkonen at intel.com
Thu Nov 10 18:32:13 UTC 2016
On 10 November 2016 at 19:53, Patrick Ohly <patrick.ohly at intel.com> wrote:
> On Thu, 2016-10-13 at 13:16 -0700, Jianxun Zhang wrote:
> > It turns out this change between 3.12.2 and 3.13 introduces
> > a 2% increase in build time, based on the statistical data in
> > bz10367.
>
> Let me add that this patch increased build performance in Ostro even
> more: apparently one big impact of the sqlite performance issue is on
> pseudo. Ostro depends fairly heavily on pseudo because of meta-swupd and
> xattrs on all files.
>
> When this patch and others recently landed in Ostro, total build times
> dropped from 4:46h (build #508,
> https://ostroproject.org/jenkins/job/build_intel-corei7-64/2763/console)
> to 2:07h (build #510,
> https://ostroproject.org/jenkins/job/build_intel-corei7-64/2831/console).
>
> That could also be because of other improvements, perhaps even in our CI
> hardware, so take those numbers with a large chunk of salt.
>
> However, in local builds with and without this patch (i.e. everything
> else the same) I also see big differences for pseudo-heavy operations:
>
> $ buildstats-diff --diff-attr walltime --min-val 60 with-patch/ without-patch/
> Ignoring tasks less than 01:00.0 (60.0s)
> Ignoring differences less than 00:02.0 (2.0s)
>
> PKG                                   TASK                       ABSDIFF   RELDIFF  WALLTIME1 -> WALLTIME2
> ...
> bundle-ostro-image-swupd-qa-bundle-b  do_rootfs                    78.1s   +115.4%    67.7s ->  145.8s
> bundle-ostro-image-swupd-qa-bundle-a  do_rootfs                    80.3s   +116.8%    68.8s ->  149.1s
> bundle-ostro-image-swupd-qa-bundle-a  do_image                    106.8s   +291.9%    36.6s ->  143.3s
> bundle-ostro-image-swupd-qa-bundle-b  do_image                    107.9s   +298.2%    36.2s ->  144.1s
> bundle-ostro-image-swupd-mega         do_image                    244.4s    +74.2%   329.2s ->  573.6s
> bundle-ostro-image-swupd-world-dev    do_rootfs                   246.7s   +207.2%   119.1s ->  365.8s
> bundle-ostro-image-swupd-world-dev    do_image                    269.2s    +83.5%   322.6s ->  591.7s
> bundle-ostro-image-swupd-mega         do_rootfs                   272.6s   +246.1%   110.8s ->  383.3s
> ostro-image-swupd                     do_rootfs                   676.1s   +808.1%    83.7s ->  759.8s
> bundle-ostro-image-swupd-world-dev    do_copy_bundle_contents    1339.5s  +2957.6%    45.3s -> 1384.8s
> bundle-ostro-image-swupd-qa-bundle-b  do_copy_bundle_contents    1475.0s  +3147.8%    46.9s -> 1521.9s
> bundle-ostro-image-swupd-qa-bundle-a  do_copy_bundle_contents    1503.9s  +3283.0%    45.8s -> 1549.8s
>
> Cumulative walltime:
> 6070.9s +326.9% 30:57.3 (1857.3s) -> 2:12:08.2 (7928.2s)
>
yikes.
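
As a side note, numbers like these can be sanity-checked without the full
buildstats-diff tool, since bitbake's buildstats are just one text file per
task. A minimal sketch in Python (the "Elapsed time:" line and the
<buildstats>/<recipe>/<task> file layout are assumptions based on current
bitbake output; the example paths are hypothetical):

    # compare_task.py - print ABSDIFF/RELDIFF for one task, in the same
    # style as the buildstats-diff output above.
    import re
    import sys

    def walltime(path):
        # Each buildstats task file is expected to contain a line such as
        # "Elapsed time: 67.70 seconds".
        with open(path) as f:
            for line in f:
                m = re.match(r"Elapsed time:\s*([0-9.]+)", line)
                if m:
                    return float(m.group(1))
        raise ValueError("no 'Elapsed time' line in %s" % path)

    # e.g. python compare_task.py with-patch/<recipe>/do_rootfs \
    #                             without-patch/<recipe>/do_rootfs
    t1, t2 = walltime(sys.argv[1]), walltime(sys.argv[2])
    print("%.1fs %+.1f%% %.1fs -> %.1fs"
          % (t2 - t1, (t2 - t1) * 100.0 / t1, t1, t2))
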
> So it really seems that the sqlite change is a very relevant
> improvement.
>
> That leads me to a bigger question: has upstream been notified about
> this?
>
> Our observation may also be relevant to other sqlite users. Besides, not
> getting this fixed upstream means that we'll have to do the same tricky
> revert for the next upstream version update.
>
Completely true. It may also be worthwhile to profile what pseudo is
really doing. The feeling I had was that there could be something going
wrong there, as recipes that install lots of files not only take very
long but also create _very_ large database files: the increases do not
seem even close to linear, as one would expect them to be. Seebs gave
some advice about this in the bug
(https://bugzilla.yoctoproject.org/show_bug.cgi?id=10367#c11).
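
One cheap way to start looking at the database growth is to dump the size
and per-table row counts of pseudo's database after builds of different
sizes. A schema-agnostic sketch (the files.db path is just an example of
where pseudo keeps its state under the work directory):

    # db_stats.py - report file size and per-table row counts of a
    # pseudo sqlite database.
    import os
    import sqlite3
    import sys

    db_path = sys.argv[1]  # e.g. <workdir>/pseudo/files.db
    print("size: %.1f MiB" % (os.path.getsize(db_path) / (1024.0 * 1024.0)))

    conn = sqlite3.connect(db_path)
    # Enumerate tables from sqlite_master so no schema knowledge is needed.
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        count = conn.execute('SELECT COUNT(*) FROM "%s"' % table).fetchone()[0]
        print("%s: %d rows" % (table, count))
    conn.close()

Comparing those counts against the number of files a recipe installs
should show quickly whether the growth is linear or whether something is
ballooning.
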
Jussi
> --
> Best Regards, Patrick Ohly
>
> The content of this message is my personal opinion only and although
> I am an employee of Intel, the statements I make here in no way
> represent Intel's position on the issue, nor am I authorized to speak
> on behalf of Intel on this matter.