Is there a way to reduce recompile time?

Is there a way to reduce recompile time?

梁栋
I'm a Thunderbird developer working on the comm-release source tree.
I have already downloaded the source code and completed a full build, but
the problem is: every time I change the source code and rebuild, I spend
10-20 minutes linking libxul.so.
After some searching, I found there used to be a --disable-libxul option,
but that option can no longer be used.
I also tried adding "ac_add_options --enable-static" and "ac_add_options
--with-ccache=/usr/bin/ccache" to my .mozconfig, but nothing changed!
Is there any way to skip compiling libxul.so, or otherwise reduce
recompile time?
_______________________________________________
dev-builds mailing list
[hidden email]
https://lists.mozilla.org/listinfo/dev-builds

Re: Is there a way to reduce recompile time?

Gregory Szorc
libxul is the main library used by Gecko. It is possible to skip compiling it after changes by running `mach build <directory>` or `make -C <directory>`, but the changes won't be visible in Thunderbird; i.e., this mode is only useful as a compilation check.

If libxul takes 10-20 minutes to link, your system is likely running into hardware limitations (slow I/O, swapping, slow CPU, etc.). You can try reducing the resources required to build and link by disabling debug symbols ("ac_add_options --disable-debug-symbols"). Of course, debugging won't be easy if you go this route.
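For reference, a minimal .mozconfig along these lines might look like the following sketch (the object-directory path is illustrative; the ac_add_options/mk_add_options lines are the standard mozconfig directives):

```shell
# Sketch of a .mozconfig that disables debug symbols and enables ccache.
# The objdir path is illustrative. --disable-debug-symbols trades easy
# debugging for a faster, lighter link of libxul.
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/objdir-release
ac_add_options --disable-debug-symbols
ac_add_options --with-ccache=/usr/bin/ccache
```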





Re: Is there a way to reduce recompile time?

梁栋
In reply to this post by 梁栋
Thanks for your answer.
Maybe my PC is too slow, ^_^
Because I'm developing and debugging, I don't think I can add
"--disable-debug-symbols". I could build the code in two directories, one
for code changes without debug symbols, and debug in the other directory
if an error happens, but that would slow my development down.
Now I'm trying to change my .mozconfig to build mailnews into a separate
shared library (libmail.so) that wouldn't be linked into libxul, or maybe
I should first switch to a faster computer.
Thanks for your answer again; my appreciation is more than words can say!





Re: Is there a way to reduce recompile time?

ISHIKAWA,chiaki
In reply to this post by 梁栋
(Sorry for top posting.)

You may want to use the GNU gold linker if you aren't using it already.
(I am assuming that you are compiling on a POSIX-compliant system such as
Linux or Mac OS X; you mention .so, so Windows can be ruled out, I suppose.)

Using the GNU gold linker has sped up linking very much for me.
It is indispensable now.

Also, you might want to use the -gsplit-dwarf compiler option, which
makes GCC store most of the debug information in a file separate from
the main object file: GCC turns source.c into two files, source.o and
source.dwo. source.dwo contains the bulk of the debug information,
although source.o still contains some. The GNU gold linker then only
needs to handle source.o, leaving source.dwo alone, so I/O during
linking is reduced very much: faster link times!

Using the GNU gold linker with -gsplit-dwarf cut down my link time very
much under 32-bit Linux, AND the process uses much less virtual memory:
actually, without -gsplit-dwarf and the GNU gold linker, I began running
out of the 32-bit address space of 32-bit Debian GNU/Linux while linking
a full DEBUG version of TB a couple of years ago, so Thunderbird could
not be built under 32-bit Linux any more.

Then I learnt of GNU gold.

With gcc -gsplit-dwarf and the GNU gold linker, I could link Thunderbird
under 32-bit Linux, and linking libxul went down from a dozen minutes to
a few minutes! Unbelievable at first, but it is true.
(I now use 64-bit Linux for main development for other reasons, but I
occasionally compile 32-bit TB under 32-bit Linux to test a local patch,
so I am sure that TB can still be linked with -gsplit-dwarf and the GNU
gold linker under 32-bit Linux today. At least I could do so early this
month.)

Also, if you are on Linux or a POSIX-variant OS, you may want to install
ccache to avoid unnecessary compilation. That helps when one needs to
shuffle patch queues with the hg qpush/qpop commands: they update the
source files' modification times, forcing recompilation even if the
content is the same as when you last compiled them. :-(
ccache helps here by looking at the preprocessed source file to decide
whether a true compilation and generation of a new binary is necessary;
if the content is unchanged, ccache picks up the old cached binary
instantly.

CAUTION: the official ccache does not support -gsplit-dwarf.
There is a hacked version that supports -gsplit-dwarf; see
https://bugzilla.samba.org/show_bug.cgi?id=10005 for details.
You must use this hacked version to take advantage of -gsplit-dwarf.
Guess who wrote the hacked version :-)
The last two patches in the bug's entries must be applied to the main
ccache source tree, since they have not been merged yet.
I should clean up these two patches so that they make it into the
official ccache repository, but I did not have much time this spring.

Hope this helps.

CI


Re: Is there a way to reduce recompile time?

netmana
Linking is the process of resolving symbols from object files against
external libraries, and it should not take that long. When you have a
chance, try compiling the project on a RAM filesystem like tmpfs. I use
one most of the time, even though I have an SSD; tmpfs speeds up
compilation significantly.
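Setting that up is a one-liner on Linux; a sketch (the mount point and size are illustrative, and root privileges are required):

```shell
# Mount an 8 GB RAM-backed filesystem and build into it.
sudo mkdir -p /mnt/objdir
sudo mount -t tmpfs -o size=8G tmpfs /mnt/objdir
# Then point MOZ_OBJDIR in .mozconfig at a path under /mnt/objdir.
# Note: the contents vanish on unmount or reboot.
```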

Re: Is there a way to reduce recompile time?

ISHIKAWA,chiaki
In reply to this post by ISHIKAWA,chiaki
On 2015/04/28 23:30, Toan Pham wrote:
> Linking is a process of resolving symbols from object files with
> external libraries, and it should not take that long.  When you have a
> chance, try to compile the project under a ram filesystem like tmpfs.
> I use it most of the time, even when I have an SSD drive.  The tmpfs
> will speed up compilation significantly.
>

With a full debug build of Thunderbird (without -gsplit-dwarf), I think
libxul.so becomes close to 1 GB in size.
So creating it is a heavy workload in terms of I/O, and in terms of
memory pressure too, since an ordinary linker's data structures for
handling the large number of symbols simply run out of 32-bit address
space during linking.

Currently, with -gsplit-dwarf, libxul.so is 357.2 MB. It is rather large.

I may want to set aside 2 GB of RAM for tmpfs given this size of
libxul.so, but as of now linking libxul runs in a reasonable time using
the GNU gold linker and the -gsplit-dwarf option to GCC.

Thank you for the suggestion; if something goes wrong with linking or
builds, I may try creating a large tmpfs in RAM.

At the same time, I have to report that I monitored memory usage during
the build and saw that most memory is used as cache/buffers during
linking. Hence I am not sure whether a RAM-backed tmpfs would bring much
speedup in my environment. My CPU is probably 1-1.5 generations behind
the latest, which may explain the slow speed.

TIA



Re: Is there a way to reduce recompile time?

Joshua Cranmer 🐧
In reply to this post by ISHIKAWA,chiaki
Linking effectively requires building a list of used files,
concatenating the used sections together, dropping redundant COMDATs,
and then patching offsets into the binary, which requires scanning all
of the .o files at least once. The key, then, is to make sure that the
scanning process never touches disk; in other words, you need enough RAM
that the .o files stay resident in the file system cache from when they
were last touched in a compile-edit-rebuild cycle. Outside of ensuring
you have at least 8GB of RAM [1], the only recommendations I can give
for speeding up are using split-dwarf (which removes the ginormous debug
information from the linking equation) and using gold, at least on
Linux. I don't think ramfs-style builds help if the filesystem is
already resident in cache.

[1] To be clear, 8GB of RAM isn't the minimum necessary to build mozilla
code (that number appears to be ~4GB). It's just that if you have less
than 8GB of RAM, you're using an underpowered machine to begin with and
therefore should expect that your build times are going to be slow as a
result. Buying more RAM is a relatively cheap investment and it solves
problems much more effectively than trying to convince people to
radically overhaul build systems.
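One quick way to see whether the page cache is large enough to keep the .o files resident is to read the standard fields of /proc/meminfo (Linux-only sketch):

```shell
# Show total RAM and how much of it is currently acting as page cache.
# A "Cached" value comfortably larger than your object directory means
# relinks should not need to touch the disk for .o files.
grep -E '^(MemTotal|Cached):' /proc/meminfo
```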

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist


Re: Is there a way to reduce recompile time?

Mike Hommey
On Tue, Apr 28, 2015 at 05:24:16PM -0500, Joshua Cranmer 🐧 wrote:


(note gold is the default for local builds)

Re: Is there a way to reduce recompile time?

梁栋
In reply to this post by Joshua Cranmer 🐧
Actually, in the current Mozilla build system, ld.gold is the default
linker if your OS already has it. The following is copied from
mozilla/configure, starting at line 10029 (my code version is TB beta
34-b1):

if test "$GNU_CC" -a -n "$MOZ_FORCE_GOLD"; then
    if $CC -Wl,--version 2>&1 | grep -q "GNU ld"; then
        GOLD=$($CC -print-prog-name=ld.gold)
        case "$GOLD" in
        /*)
            ;;
        *)
            GOLD=$(which $GOLD)
            ;;
        esac
        if test -n "$GOLD"; then
            mkdir -p $_objdir/build/unix/gold
            rm -f $_objdir/build/unix/gold/ld
            ln -s "$GOLD" $_objdir/build/unix/gold/ld
            if $CC -B $_objdir/build/unix/gold -Wl,--version 2>&1 | grep -q "GNU gold"; then
                LDFLAGS="$LDFLAGS -B $_objdir/build/unix/gold"
            else
                rm -rf $_objdir/build/unix/gold
            fi
        fi
    fi
fi



Re: Is there a way to reduce recompile time?

ISHIKAWA,chiaki

Dear Liang,

I am not entirely sure whether configure in my environment picks up GNU
gold automatically. I actually create a ~/bin/ld shell script that
invokes the GNU gold linker explicitly, and put ~/bin near the beginning
of my PATH environment variable, before /usr/bin, so that the "ld" seen
by configure is always my ~/bin/ld and invokes GNU gold no matter what
the /usr/bin/ld symlink points to.
(This trick is also useful for OTHER programs whose configure scripts
don't look for GNU gold seriously.)

You might want to check that the GNU gold on your computer, under
whatever filename, *IS* invoked during the build of TB.
(I suppose major Linux distributions install the GNU gold linker as
ld.gold by default, but you may want to check just in case. Which
distribution of Linux do you use, assuming you use Linux?)

If your link time is still in the 10-20 minute range using the GNU gold
linker, you might want to add more RAM.
How much RAM does your computer have?
I think 8GB is the bare minimum for comfortable linking.
(I am assuming that you use a 64-bit OS.)

Also, what is your CPU? I am just curious.
With four cores allocated to a virtual machine, my build time after
modifying several files is not that bad, including the not-so-long link
time (don't forget to pass -gsplit-dwarf to CC), and this is inside
VirtualBox.

If you have several disk drives, try spreading things out: the source
files on one disk, the object tree on another, and so on, on different
I/O channels (i.e., not multiplexed on one bus) if possible, although
this is easier said than done when your disks are almost full already.
Also, try using the fastest disk for the object files. [You may need to
experiment here.]

That is all I can think of for speeding up link time significantly.

CI

Re: Is there a way to reduce recompile time?

梁栋
In reply to this post by 梁栋

Dear ishikawa,
ありがとうございますお忙しいところをお返事のメール,
Your answer is perfectly solved my problems. Thank you very much.
Now my TB re-build time has reduce to almost one minute and the time
used in link libxul.so is nearly 20~50 seconds,  this is all because of
the build options " -gsplit-dwarf " .
My cpu info is follow:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
stepping : 9
microcode : 0x17
cpu MHz : 1600.000
cache size : 6144 KB

and my memory info:
MemTotal:        8067636 kB
MemFree:         2164108 kB
Buffers:          564716 kB
Cached:          3872940 kB

my linux distribution is Ubuntu 14.04.2 LTS

These facts proved that "-gsplit-dwarf" is an very useful option.
About the discussion of gold-ld, i believe in the link process of
libxul, TB build system use gold-ld, but in compile process , TB build
system just use ld, so I use command "ln -s /usr/bin/gold /usr/bin/ld"
to replace ld.
And about the use of tmpfs, i think this will not save time after
pratice, because my cpu usage is almost 100% in the all build process,
so i think the cpu frequency is the limit factor of build time but not
disk I/O.
_______________________________________________
dev-builds mailing list
[hidden email]
https://lists.mozilla.org/listinfo/dev-builds

Re: Is there an way to reduce recompile time?

梁栋
In reply to this post by 梁栋
On 2015年04月30日 18:53, ishikawa wrote:

> On 2015年04月30日 10:51, dliang wrote:
>> Actually, in the current Mozilla build system, gold is the default linker if
>> ld.gold already exists on your OS. The following code is copied from
>> mozilla/configure (my code version is tb-beta 34-b1):
>>
>> if test "$GNU_CC" -a -n "$MOZ_FORCE_GOLD"; then
>>     if $CC -Wl,--version 2>&1 | grep -q "GNU ld"; then
>>         GOLD=$($CC -print-prog-name=ld.gold)
>>         case "$GOLD" in
>>         /*)
>>             ;;
>>         *)
>>             GOLD=$(which $GOLD)
>>             ;;
>>         esac
>>         if test -n "$GOLD"; then
>>             mkdir -p $_objdir/build/unix/gold
>>             rm -f $_objdir/build/unix/gold/ld
>>             ln -s "$GOLD" $_objdir/build/unix/gold/ld
>>             if $CC -B $_objdir/build/unix/gold -Wl,--version 2>&1 | grep -q "GNU gold"; then
>>                 LDFLAGS="$LDFLAGS -B $_objdir/build/unix/gold"
>>             else
>>                 rm -rf $_objdir/build/unix/gold
>>             fi
>>         fi
>>     fi
>> fi
>>
>> On 2015年04月29日 07:07, Mike Hommey wrote:
>>> On Tue, Apr 28, 2015 at 05:24:16PM -0500, Joshua Cranmer ? wrote:
>>>> On 4/28/2015 4:28 PM, ISHIKAWA, Chiaki wrote:
>>>>> On 2015/04/28 23:30, Toan Pham wrote:
>>>>>> Linking is a process of resolving symbols from object files with
>>>>>> external libraries, and it should not take that long.  When you have a
>>>>>> chance, try to compile the project under a ram filesystem like tmpfs.
>>>>>> I use it most of the time, even when I have an SSD drive.  The tmpfs
>>>>>> will speed up compilation significantly.
>>>>>>
>>>>>
>>>>> With a full debug build of Thunderbird (without -gsplit-dwarf), I think
>>>>> libxul.so becomes close to 1 GB in size.
>>>>> Creating it therefore puts a heavy workload on I/O, and on memory too,
>>>>> since an ordinary linker's data structures for handling that many symbols
>>>>> can simply run out of 32-bit address space during linking.
>>>>>
>>>>> Currently, with -gsplit-dwarf, libxul.so is 357.2 MB. That is still rather large.
>>>>>
>>>>> I may want to set aside 2 GB of RAM for a tmpfs given this size of
>>>>> libxul.so, but as of now, linking libxul runs in a reasonable time
>>>>> using the GNU gold linker and the -gsplit-dwarf option to GCC.
>>>>>
>>>>> Thank you for the suggestion; if something goes wrong with
>>>>> linking or builds, I may try creating a large tmpfs in RAM.
>>>>>
>>>>> At the same time, I should report that I monitored memory usage during
>>>>> the build and saw that most memory is used as cache/buffers during linking.
>>>>> Hence I am not sure that a RAM-based tmpfs would bring much speedup in
>>>>> my environment. My CPU is probably 1 to 1.5 generations behind the latest
>>>>> one; that may explain the slow speed.
>>>>
>>>> Linking effectively requires building a list of used files, concatenating
>>>> the used sections together, dropping redundant COMDATs, and then patching
>>>> offsets into the binary, which requires scanning all of the .o files at
>>>> least once--the key, then, is to make sure that the scanning process never
>>>> touches disk. In other words, you need enough RAM that the .o files stay
>>>> resident in the file system cache from when they were last touched in a
>>>> compile-edit-rebuild cycle. Outside of ensuring you have at least 8GB of RAM
>>>> [1], the only recommendation I can give for speeding up is using split-dwarf
>>>> (removes the ginormous debug information from the linking equation) and
>>>> using gold, at least on Linux. I don't think ramfs style builds speed up
>>>> builds if the filesystem is already resident in cache.
>>>
>>> (note gold is the default for local builds)
>>>
>
> Dear Liang,
>
> I am not entirely sure whether configure in my environment picks up GNU gold
> automatically.
> Actually, I created a ~/bin/ld shell script that invokes the GNU gold linker
> explicitly, and put ~/bin near the beginning of the PATH environment variable,
> before /usr/bin, so that the "ld" found by configure is always my ~/bin/ld and
> invokes GNU gold no matter what /usr/bin/ld symlinks to.
> (This trick is also useful for OTHER programs whose configure scripts do not
> look for GNU gold seriously.)
>
> You might want to check that the GNU gold on your computer, under whatever
> filename, really *IS* invoked during the build of TB.
> (I suppose major Linux distributions install the GNU gold linker as ld.gold
> by default, but you may want to check just in case. Which distribution of
> Linux do you use, assuming you use Linux?)
>
> If your link time is still in the 10-20 minute range using the GNU gold
> linker, you might want to add RAM.
> How much RAM does your computer have?
> I think 8 GB is the bare minimum for comfortable linking.
> (I am assuming that you use a 64-bit OS.)
>
> Also, what is your CPU? I am just curious.
> With four cores allocated to a virtual machine, my build time after modifying
> several files is not that bad, including the not-so-long link time (don't
> forget to add -gsplit-dwarf to the CC options), and this is inside VirtualBox.
>
> If you have several disk drives, try spreading the load: source files on one
> disk, the object tree on another, and so on, over different I/O channels
> (i.e., not multiplexed on one bus) if possible, although this is easier said
> than done when your disks are already almost full.
> Also, try using the fastest disk for object files. [You may need to
> experiment here.]
>
> That is all I can think of for speeding up link time significantly.
>
> CI
>

Dear ishikawa,
Thank you for taking the time to reply while you are busy.
Your answer perfectly solved my problems. Thank you very much.
Now my TB rebuild time has been reduced to almost one minute, and the time
spent linking libxul.so is roughly 20-50 seconds, all because of the
build option "-gsplit-dwarf".
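For reference, the options discussed in this thread can be collected in a .mozconfig, which is sourced as shell. This is only a sketch: -gsplit-dwarf and the ccache path come from the thread, while the exact placement (the CFLAGS/CXXFLAGS exports and MOZ_MAKE_FLAGS) is my assumption, not verified against this tree:

```shell
# Illustrative .mozconfig sketch -- variable placement is an assumption.
# -gsplit-dwarf keeps the bulky DWARF data out of the .o files the linker reads.
export CFLAGS="$CFLAGS -gsplit-dwarf"
export CXXFLAGS="$CXXFLAGS -gsplit-dwarf"

# ccache, as tried earlier in the thread:
ac_add_options --with-ccache=/usr/bin/ccache

# Parallel build; adjust -j to your core count (an i5-3470 has 4 cores):
mk_add_options MOZ_MAKE_FLAGS="-j4"
```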
My CPU info is as follows:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
stepping : 9
microcode : 0x17
cpu MHz : 1600.000
cache size : 6144 KB

and my memory info:
MemTotal:        8067636 kB
MemFree:         2164108 kB
Buffers:          564716 kB
Cached:          3872940 kB

My Linux distribution is Ubuntu 14.04.2 LTS.

These facts prove that "-gsplit-dwarf" is a very useful option.
About the gold discussion: I believe the TB build system uses gold when
linking libxul, but during compilation it just uses ld, so I ran
"ln -s /usr/bin/gold /usr/bin/ld" to replace ld.
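A less invasive alternative to symlinking over /usr/bin/ld is the ~/bin/ld wrapper ishikawa described earlier in the thread. A minimal sketch, assuming gold is installed as ld.gold:

```shell
# Sketch of the ~/bin/ld trick: a wrapper that always invokes gold,
# placed ahead of /usr/bin in PATH so every "ld" lookup finds it first.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/ld" <<'EOF'
#!/bin/sh
exec ld.gold "$@"
EOF
chmod +x "$HOME/bin/ld"

export PATH="$HOME/bin:$PATH"   # add to ~/.bashrc to make this permanent
command -v ld                   # now resolves to $HOME/bin/ld
```

This avoids touching system files, so going back to BFD ld is just a matter of removing the wrapper.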
And about tmpfs: after trying it, I think it will not save time, because my
CPU usage is almost 100% throughout the build, so I think CPU speed, not
disk I/O, is the limiting factor for build time.
_______________________________________________
dev-builds mailing list
[hidden email]
https://lists.mozilla.org/listinfo/dev-builds

Re: Is there an way to reduce recompile time?

ISHIKAWA,chiaki
In reply to this post by 梁栋
On 2015年05月06日 10:04, dliang wrote:

> Dear ishikawa,
> Thank you for taking the time to reply while you are busy.

You are very welcome.
どういたしまして (you're welcome) :-)

> Your answer perfectly solved my problems. Thank you very much.

From reading the following, I think you have a reasonably powerful CPU.
As in my prior experience, a slow disk and 8 GB of memory put a limit on I/O
throughput, so -gsplit-dwarf was very helpful.
I now use -gsplit-dwarf all the time.

Anyway, welcome to the exciting world of developers who compile TB locally :-)

CI

> Now my TB rebuild time has been reduced to almost one minute, and the time
> spent linking libxul.so is roughly 20-50 seconds, all because of the
> build option "-gsplit-dwarf".
> My CPU info is as follows:
> processor : 3
> vendor_id : GenuineIntel
> cpu family : 6
> model : 58
> model name : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
> stepping : 9
> microcode : 0x17
> cpu MHz : 1600.000
> cache size : 6144 KB
>
> and my memory info:
> MemTotal:        8067636 kB
> MemFree:         2164108 kB
> Buffers:          564716 kB
> Cached:          3872940 kB
>
> My Linux distribution is Ubuntu 14.04.2 LTS.
>
> These facts prove that "-gsplit-dwarf" is a very useful option.
> About the gold discussion: I believe the TB build system uses gold when
> linking libxul, but during compilation it just uses ld, so I ran
> "ln -s /usr/bin/gold /usr/bin/ld" to replace ld.
> And about tmpfs: after trying it, I think it will not save time, because my
> CPU usage is almost 100% throughout the build, so I think CPU speed, not
> disk I/O, is the limiting factor for build time.
>