mac: cores not saturating during compile?
-
i've got a 20-core mac.
when i compile from the command line with -j20, according to Activity Monitor the cores just sit there doing mostly nothing. so i bumped it up to 80, and they still do mostly nothing. is this... normal? how can i get the cores to put their backs into it? to be clear, i DO see 80 instances of "clang" spun up in the process list.
-
Some of the time will be spent waiting for source files to be read from disk, so a 1:1 ratio of compilation tasks to cores will likely leave idle time, at least initially. Increasing the number of tasks still runs into this limit until enough data has been read to keep the compiler stages fed.
If this is an m1 mac, half of the cores are low power and reserved for background tasks. https://arstechnica.com/gadgets/2021/05/apples-m1-is-a-fast-cpu-but-m1-macs-feel-even-faster-due-to-qos/
I don't know if M1 systems come in a true 20-core configuration, or 10 cores x 2 threads. If this is a hyperthreaded CPU that is really 10 cores with 2 threads per core, the cores' ability to perform independent computation is limited by their shared hardware; every other core showing as idle would suggest this is at least partly responsible. Actually, this is relevant even for full cores: the ability to use CPU caches effectively is limited by other cores contending for memory bandwidth and trampling the cache.
Another possibility if this is a battery powered device, or running really hot, is that performance is being throttled to conserve power.
Raising the scheduling priority with nice(1) might improve performance (note that raising it means a negative niceness, which requires root). The same goes for marking jobs for foreground rather than background processing; I don't know how to trigger that one.
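A quick sketch of how nice(1) interacts with priority; the make invocations are illustrative only:

```shell
# nice(1): positive niceness LOWERS priority (any user may do this);
# negative niceness RAISES it and requires root. Illustrative only:
#   sudo nice -n -5 make -j20   # raise priority (needs root)
#   nice -n 10 make -j20        # lower priority (always allowed)
# Harmless demonstration that the wrapper itself works:
nice -n 10 true && echo "nice wrapper ok"
```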
-
this is an intel iMac, not battery powered.
can i tell "make" to "increase the priority" or otherwise script this?
if the problem were disk bandwidth, wouldn't we see see-saw type spikes in the cpu history graph?
-
@davecotter You should check how many compiler instances are actually running in Activity Monitor.
-
@davecotter said in mac: cores not saturating during compile?:
if the problem were disk bandwidth, wouldn't we see see-saw type spikes in the cpu history graph?
Yes and no. If your files are really small, the peaks are too short to show up in the CPU history graph. Adding more threads will only add contention on the drive.
If the CPU were overheating, the frequency would be reduced, but the CPU would still show 100% utilization at that reduced frequency. So this is most likely not the problem.
The best way to speed up compilation in this case (and also see some cores at 100%) is a so-called unity build. Basically, you create a handful of cpp files, each of which #includes several of the cpp files you originally intended to compile. That way the unity cpp files are large enough to saturate a CPU core for a meaningful stretch. However, it is not advisable to manage unity files by hand. See this link https://onqtam.com/programming/2018-07-07-unity-builds/ towards the bottom for a list of tools that support unity builds in different environments. (We are using FASTBuild with a setup based on a qmake project file, https://github.com/SimonSchroeder/QMake2Fastbuild. This also lets you throw precompiled headers, caching, and distributed builds into the mix, all in one tool.)
-
there is a clue above: disk bandwidth.
indeed, when i build locally, with an SSD, the CPUs are totally saturated, which is awesome. i only see the problem when building on a different computer, connected over the network to the server hosting the SSD with the source code and obj folder.
the network is gigabit capable, but real world throughput is about 600 Mbit.
i suppose therein lies the problem?
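for what it's worth, the back-of-envelope numbers support that; the 600 Mbit/s figure is from above, while the SSD speeds are typical values, not measurements:

```shell
# Back-of-envelope: real-world network throughput vs a local SSD.
NET_MBIT=600
echo "network: $(( NET_MBIT / 8 )) MB/s"   # 600 Mbit/s = 75 MB/s
echo "typical SATA SSD: ~500 MB/s; NVMe: several GB/s"
```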