---
layout: contribute
title: Why is it so difficult to write Bazel rules?
---

# Why is it so difficult to write Bazel rules?

We have heard feedback from various people multiple times that Bazel rules are
hard to write. There is no single root cause, but it’s due to a combination of
historical circumstances and intrinsic complexity in the problem domain. This
document attempts to give a high-level overview of the specific issues that we
believe to be the main contributors.

* Assumption: Aim for Correctness, Throughput, Ease of Use & Latency
* Assumption: Large Scale Repositories
* Assumption: BUILD-like Description Language
* Intrinsic: Remote Execution and Caching are Hard
* Historic: Hard Separation between Loading, Analysis, and Execution is
  Outdated, but still affects the API
* Intrinsic: Using Change Information for Correct and Fast Incremental Builds
  requires Unusual Coding Patterns
* Intrinsic: Avoiding Quadratic Time and Memory Consumption is Hard

## Assumption: Aim for Correctness, Throughput, Ease of Use & Latency

We assume that the build system needs to be first and foremost correct with
respect to incremental builds, i.e., for a given source tree, the output of the
same build should always be the same, regardless of what the output tree looks
like. To a first approximation, this means Bazel needs to know every single
input that goes into a given build step, such that it can rerun that step if any
of the inputs change. There are limits to how correct Bazel can get, as it leaks
some information such as the date / time of the build, and ignores certain types
of changes such as changes to file attributes. Sandboxing helps ensure
correctness by preventing reads to undeclared input files. Besides the intrinsic
limits of the system, there are a few known correctness issues, most of which
are related to Fileset or the C++ rules, which are both hard problems. We have
long-term efforts to fix these.

The second goal of the build system is to have high throughput; we are
constantly pushing the boundaries of what can be done within the current
machine allocation for a remote execution service. If the remote execution
service gets overloaded, nobody can get work done.

Ease of use comes next, i.e., of multiple correct approaches with the same (or
similar) footprint on the remote execution service, we choose the one that is
easier to use.

For the purpose of this document, latency denotes the time it takes from
starting a build to getting the intended result, whether that is a test log from
a passing or failing test, or an error message that a BUILD file has a typo.

Note that these goals often overlap: latency depends on the throughput of the
remote execution service just as correctness is relevant for ease of use.


## Assumption: Large Scale Repositories

The build system needs to operate at the scale of large repositories, where
large scale means that the repository does not fit on a single hard drive, so it
is impossible to do a full checkout on virtually all developer machines. A
medium-sized build will need to read and parse tens of thousands of BUILD files,
and evaluate hundreds of thousands of globs. While it is theoretically possible
to read all BUILD files on a single machine, we have not yet been able to do so
within a reasonable amount of time and memory. As such, it is critical that
BUILD files can be loaded and parsed independently.


## Assumption: BUILD-like Description Language

For the purpose of this document, we assume a configuration language that is
roughly similar to BUILD files, i.e., declaration of library and binary rules
and their interdependencies. BUILD files can be read and parsed independently,
and we avoid even looking at source files whenever we can (except for
existence).

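As a hypothetical illustration of this style (the target names here are made
up), a BUILD file declares only targets and their interdependencies; source
files are listed but never read during loading:

```python
# Hypothetical BUILD file. Loading this file requires no access to the
# contents of the .java sources; only their existence matters (via glob).
java_library(
    name = "util",
    srcs = glob(["util/*.java"]),
)

java_binary(
    name = "server",
    srcs = ["Server.java"],
    deps = [":util"],  # dependency edge in the build graph
)
```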

## Intrinsic: Remote Execution and Caching are Hard

Remote execution and caching improve build times in large repositories by
roughly two orders of magnitude compared to running the build on a single
machine. However, the scale at which it needs to perform is staggering: Google's
remote execution service is designed to handle a huge number of requests per
second, and the protocol carefully avoids unnecessary roundtrips as well as
unnecessary work on the service side.

At this time, the protocol requires that the build system knows all inputs to a
given action ahead of time; the build system then computes a unique action
fingerprint, and asks the scheduler for a cache hit. If a cache hit is found,
the scheduler replies with the digests of the output files; the files themselves
are addressed by digest later on. However, this imposes restrictions on the
Bazel rules, which need to declare all input files ahead of time.
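
A minimal sketch of the fingerprinting idea (hypothetical function names; the
actual protocol hashes structured messages, not concatenated strings): the
fingerprint is derived from the command line, the content digests of all
declared inputs, and the declared output paths, so a cache lookup can happen
before any work is done.

```python
import hashlib

def action_fingerprint(command, input_digests, output_paths):
    # Hash everything that can affect the action's result: the exact
    # command line, the digest of every declared input file, and the
    # declared output paths. Equal fingerprints imply a safe cache hit.
    h = hashlib.sha256()
    for arg in command:
        h.update(arg.encode())
        h.update(b"\0")  # separator so ["ab","c"] != ["a","bc"]
    for path in sorted(input_digests):
        h.update(path.encode())
        h.update(input_digests[path].encode())
    for path in sorted(output_paths):
        h.update(path.encode())
    return h.hexdigest()
```

Because the fingerprint depends only on declared inputs, any undeclared input
an action secretly reads would silently poison the cache, which is why Bazel
insists on complete input declarations.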


## Historic: Hard Separation between Loading, Analysis, and Execution is Outdated, but still affects the API

Technically, it is sufficient for a rule to know the input and output files of
an action just before the action is sent to remote execution. However, the
original Bazel code base had a strict separation of loading packages, then
analyzing rules using a configuration (command-line flags, essentially), and
only then running any actions. This distinction is still part of the rules API
today, even though the core of Bazel no longer requires it (more details below).

That means that the rules API requires a declarative description of the rule
interface (what attributes it has, types of attributes). There are some
exceptions where the API allows custom code to run during the loading phase to
compute implicit names of output files and implicit values of attributes. For
example, a java_library rule named `foo` implicitly generates an output named
`libfoo.jar`, which can be referenced from other rules in the build graph.
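
What that loading-phase computation amounts to can be sketched as follows
(hypothetical helper, not the actual API): the implicit output name is a pure
function of the rule's kind and name, so it can be computed before any
analysis runs.

```python
def implicit_outputs(rule_kind, name):
    # Hypothetical sketch: implicit output names depend only on the rule's
    # kind and attributes, so other rules can reference them at loading
    # time, long before the generating action is even constructed.
    if rule_kind == "java_library":
        return ["lib%s.jar" % name]
    return []
```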

Furthermore, the analysis of a rule cannot read any source files or inspect the
output of an action; instead, it needs to generate a partial directed bipartite
graph of build steps and output file names that is only determined from the rule
itself and its dependencies.


## Intrinsic: Using Change Information for Correct and Fast Incremental Builds requires Unusual Coding Patterns

Above, we argued that in order to be correct, Bazel needs to know all the input
files that go into a build step in order to detect whether that build step is
still up-to-date. The same is true for package loading and rule analysis, and we
have designed [Skyframe](http://www.bazel.io/docs/skyframe.html) to handle this
in general. Skyframe is a graph library and evaluation framework that takes a
goal node (such as ‘build //foo with these options’), and breaks it down into
its constituent parts, which are then evaluated and combined to yield the
result. As part of this process, Skyframe reads packages, analyzes rules, and
executes actions.

For each node, Skyframe tracks exactly which other nodes it used to compute its
own output, all the way from the goal node down to the input files (which are
also Skyframe nodes). Having this graph explicitly represented in memory allows
the build system to identify exactly which nodes are affected by a given change
to an input file (including creation or deletion of an input file), doing the
minimal amount of work to restore the output tree to its intended state.
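
The following toy evaluator (a sketch under simplifying assumptions, not
Skyframe's actual API) shows the core idea: recording reverse dependencies
while evaluating makes precise invalidation possible later.

```python
class Graph:
    """Toy Skyframe-style evaluator: each node records which nodes read it,
    so a change to an input dirties exactly its transitive reverse deps."""

    def __init__(self, functions):
        self.functions = functions  # node key -> function(lookup) -> value
        self.values = {}            # node key -> cached value
        self.rdeps = {}             # node key -> set of nodes that read it

    def evaluate(self, key):
        if key in self.values:
            return self.values[key]
        def lookup(dep):
            # Record the dependency edge before computing the dep.
            self.rdeps.setdefault(dep, set()).add(key)
            return self.evaluate(dep)
        self.values[key] = self.functions[key](lookup)
        return self.values[key]

    def invalidate(self, key):
        # Dirty this node and everything that transitively read it.
        for rdep in self.rdeps.pop(key, set()):
            self.invalidate(rdep)
        self.values.pop(key, None)
```

After invalidating a changed file node, re-requesting the goal node recomputes
only the dirtied subgraph; everything else is served from cache.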

As part of this, each node performs a dependency discovery process; i.e., each
node can declare dependencies, and then use the contents of those dependencies
to declare even further dependencies. In principle, this maps well to a
thread-per-node model. However, medium-sized builds contain hundreds of
thousands of Skyframe nodes, and that many threads is not feasible with current
Java technology (and for historical reasons, we’re currently tied to using
Java, so no lightweight threads and no continuations).

Instead, Bazel uses a fixed-size thread pool. However, that means that if a node
declares a dependency that isn’t available yet, we may have to abort that
evaluation and restart it (possibly in another thread) when the dependency is
available. This, in turn, means that nodes should not do this excessively; a
node that declares N dependencies serially can potentially be restarted N times,
costing O(N^2) time. Instead, we aim for up-front bulk declaration of
dependencies, which sometimes requires reorganizing the code, or even splitting
a node into multiple nodes to limit the number of restarts.
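
A toy model of this restart behavior (hypothetical API, not Skyframe's): with
serial declaration, each missing dependency aborts the node function, which
then re-runs from the top once the dependency is available; bulk declaration
needs only one restart.

```python
class Missing(Exception):
    """Raised when a requested dependency has not been computed yet."""

class Evaluator:
    """Toy restart-based evaluator: a node function re-runs from scratch
    whenever it requests a dependency that is not ready yet."""

    def __init__(self):
        self.done = {}
        self.restarts = 0

    def get(self, dep):
        if dep not in self.done:
            raise Missing([dep])
        return self.done[dep]

    def get_all(self, deps):
        missing = [d for d in deps if d not in self.done]
        if missing:
            raise Missing(missing)
        return [self.done[d] for d in deps]

    def evaluate(self, fn):
        while True:
            try:
                return fn(self)
            except Missing as m:
                self.restarts += 1
                for d in m.args[0]:
                    # Stand-in for actually computing the dependency.
                    self.done[d] = "value of %s" % d

def serial(env):
    # Declares dependencies one at a time: one restart per dependency.
    return [env.get("dep%d" % i) for i in range(10)]

def bulk(env):
    # Declares all dependencies up front: a single restart.
    return env.get_all(["dep%d" % i for i in range(10)])
```

Since each restart re-runs all the work done so far, N serial declarations
cost O(N^2) total work, which is why bulk declaration matters.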

Note that this technology isn’t currently available in the rules API; instead,
the rules API is still defined using the legacy concepts of loading, analysis,
and execution phases. However, a fundamental restriction is that all accesses to
other nodes have to go through the framework so that it can track the
corresponding dependencies. Regardless of the language in which the build system
is implemented or in which the rules are written (they don’t have to be the
same), rule authors must not use standard libraries or patterns that bypass
Skyframe. For Java, that means avoiding `java.io.File` as well as any form of
reflection, and any library that does either. Libraries that support dependency
injection of these low-level interfaces still need to be set up correctly for
Skyframe.

This strongly suggests not exposing rule authors to a full language runtime in
the first place. The danger of accidental use of such APIs is just too big:
several Bazel bugs in the past were caused by rules using unsafe APIs, even
though the rules were written by the Bazel team, i.e., by the domain experts.


## Intrinsic: Avoiding Quadratic Time and Memory Consumption is Hard

To make matters worse, apart from the requirements imposed by Skyframe, the
historical constraints of using Java, and the outdatedness of the rules API,
accidentally introducing quadratic time or memory consumption is a fundamental
problem in any build system based on library and binary rules. There are two
very common patterns that introduce quadratic memory consumption (and therefore
quadratic time consumption).

1. Chains of Library Rules
Consider the case of a chain of library rules: A depends on B, which depends on
C, and so on. Then, we want to compute some property over the transitive closure
of these rules, such as the Java runtime classpath, or the C++ linker command
for each library. Naively, we might take a standard list implementation;
however, this already introduces quadratic memory consumption: the first library
contains one entry on the classpath, the second two, the third three, and so
on, for a total of 1+2+3+...+N = O(N^2) entries.
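
A quick way to see the blow-up (a sketch, with made-up jar names): if every
library in a chain of N stores its full transitive classpath as a flat list,
the total number of stored entries is 1 + 2 + ... + N = N(N+1)/2.

```python
def total_classpath_entries(n):
    # Chain of n libraries, each depending on the previous one. Each
    # library's classpath is a full copy of its dependency's classpath
    # plus one new entry, so library i stores i+1 entries.
    total = 0
    previous = []
    for i in range(n):
        current = previous + ["lib%d.jar" % i]  # copies all earlier entries
        total += len(current)
        previous = current
    return total
```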

2. Binary Rules Depending on the Same Library Rules
Consider a set of binaries that depend on the same library rules; for example,
you might have a number of test rules that test the same library code. Let’s say
out of N rules, half the rules are binary rules, and the other half library
rules. Now consider that each binary makes a copy of some property computed over
the transitive closure of library rules, such as the Java runtime classpath, or
the C++ linker command line. For example, it could expand the command line
string representation of the C++ link action. N/2 copies of N/2 elements is
O(N^2) memory.


### Custom Collections Classes to Avoid Quadratic Complexity

Bazel is heavily affected by both of these scenarios, so we introduced a set of
custom collection classes that effectively compress the information in memory by
avoiding the copy at each step. Almost all of these data structures have set
semantics, so we called the class NestedSet. The majority of changes to reduce
Bazel’s memory consumption over the past several years were changes to use
NestedSet instead of whatever was previously used.

Unfortunately, usage of NestedSet does not automatically solve all the issues;
in particular, even just iterating over a NestedSet in each rule re-introduces
quadratic time consumption. NestedSet also has some helper methods to facilitate
interoperability with normal collections classes; accidentally passing a
NestedSet to one of these methods leads to copying behavior, and reintroduces
quadratic memory consumption.
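
The underlying idea can be sketched as follows (this is an illustrative toy,
not Bazel's actual NestedSet implementation): each set stores only its direct
entries plus references to the transitive sets of its dependencies, so a chain
of N libraries stores O(N) entries in total, and flattening is deferred to the
one place that truly needs a flat list.

```python
class NestedSet:
    """Toy nested set: direct entries plus shared references to the
    transitive sets of dependencies. No copying when building the graph."""

    def __init__(self, direct=(), transitive=()):
        self.direct = list(direct)
        self.transitive = list(transitive)  # shared, never copied

    def to_list(self):
        # Flattening copies everything, so do it once where the flat
        # representation is needed (e.g. when writing a command line),
        # never per rule during analysis.
        seen, out = set(), []
        def visit(nset):
            for t in nset.transitive:
                visit(t)
            for item in nset.direct:
                if item not in seen:
                    seen.add(item)
                    out.append(item)
        visit(self)
        return out
```

This also illustrates the pitfall in the paragraph above: calling `to_list()`
in every rule of a chain reintroduces exactly the quadratic behavior the
structure was designed to avoid.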