| commit | 33d63516f712bc81c6c15f8348272c390b05719e | |
|---|---|---|
| author | Googler <noreply@google.com> | Fri Dec 02 00:27:22 2016 +0000 |
| committer | Irina Iancu <elenairina@google.com> | Fri Dec 02 07:45:20 2016 +0000 |
| tree | b97135b719bdac3531171463d34a08653fc4f982 | |
| parent | 67f1a85466b60dd44b9ea1d5a5be543d8b25a89c | |
Prune modules when building modules themselves to reduce build times and shorten critical paths.

When the inputs of a module M are reduced to a set S, then that same set S also needs to be supplied to compile anything that uses M. To do this, input discovery is divided into two stages. For each CppCompileAction, the first stage discovers the necessary modules (M above). These are then added as inputs to ensure that they are built. The second stage then finds all the modules (S above) that are required to use those and also adds them as inputs.

For now, the new behavior is guarded by a new flag --experimental_prune_more_modules.

This is currently implemented by reading the .d files of used modules and adding all their module dependencies. There are two noteworthy alternatives:

1. Hack up input discovery to understand modules, e.g. if a modular header is hit, continue scanning from all its headers. However, this seems very brittle, and a lot of additional information would have to be passed to the input discovery.

2. Directly pass the results from input discovery of one CppCompileAction to another. However, this seems to tightly couple the execution of different CppCompileActions and might lead to a mess of different states, more memory consumption, etc.

With the current implementation, there is a bit of runtime overhead from reading the .d files (many times). This could potentially be improved by caching the results. However, even without this caching, the runtime overhead is limited (<10%) for all builds I have tried (builds where all the compile results are already in the executor's cache).

--
MOS_MIGRATED_REVID=140793217
{Fast, Correct} - Choose two
Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software, and thus it has been designed to handle build problems present in Google's development environment, including:

* A massive, shared code repository, in which all software is built from source. Bazel has been built for speed, using both caching and parallelism to achieve this. Bazel is critical to Google's ability to continue to scale its software development practices as the company grows.

* An emphasis on automated testing and releases. Bazel has been built for correctness and reproducibility, meaning that a build performed on a continuous build machine or in a release pipeline will generate bitwise-identical outputs to those generated on a developer's machine.

* Language and platform diversity. Bazel's architecture is general enough to support many different programming languages within Google, and can be used to build both client and server software targeting multiple architectures from the same underlying codebase.
Find more background about Bazel in our FAQ.