commit | 3d78dc7e3a927612c23a6d1542bc38aad5a9703e | [log] [tgz] |
---|---|---|
author | Googler <noreply@google.com> | Fri Feb 19 09:48:31 2021 -0800 |
committer | Copybara-Service <copybara-worker@google.com> | Fri Feb 19 09:49:34 2021 -0800 |
tree | 40446a4f0b2b384026e0acce873f41fad8a6f489 | |
parent | 5a8a92489342d59ea74077174e29d8f10ac2989c [diff] |
Further overhaul Rules concept doc

The main goals here:

*   Make the part that all rule implementers need to understand shorter. This
    is done largely by pushing more esoteric material into an "Advanced topics"
    section further down, and deprecated syntax into a "Deprecated features"
    section at the bottom.
*   Have a single through-line of examples flowing through the main section.
    The examples now use "example" instead of "metal": I want to use the
    general idea of a C++-like programming language with sources and headers,
    compiled libraries, and linked binaries, but I don't want to confuse
    people familiar with how Metal actually works, or to dig into those
    details myself. Adding a "hdrs" attribute to this example for headers also
    makes it straightforward to demonstrate that rule-specific providers can
    provide both generated files (e.g. compilation outputs for linking) and
    source files (e.g. headers for further compilation) to their consumers.

Other significant changes by section:

*   mdformat was used to rewrap text throughout.
*   Functions are consistently referred to just by name, without the trailing
    open-close parentheses.
*   "Implementation function": Don't describe this as the "actual logic" of
    the rule; the stuff that happens at build time is also "actual". Instead,
    describe a bit more precisely what happens in the analysis phase and how
    that relates to the adjacent phases of the build. Mention what
    implementation functions take and return more up front. This replaces a
    list which mixed up things done by the rule context specifically (access
    attributes, create actions) with things done by the implementation
    function in general (return providers) with a briefer prose description,
    pushing some of the detail down to subsections. "Targets", "Files",
    "Outputs", "Actions", and "Providers" are made top-level sections of
    "Implementation function".
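To make the "hdrs" point concrete, a hedged Starlark sketch (ExampleInfo,
example_library, the file extensions, and the placeholder action are all
illustrative inventions, not taken from the doc being edited):

```starlark
# Illustrative provider whose fields carry both generated files
# (compiled libraries) and source files (headers) to consumers.
ExampleInfo = provider(
    doc = "Hypothetical provider for the example language.",
    fields = {
        "transitive_libs": "depset of compiled library Files (generated)",
        "transitive_hdrs": "depset of header Files (sources)",
    },
)

def _example_library_impl(ctx):
    lib = ctx.actions.declare_file(ctx.label.name + ".lib")
    # Placeholder action; a real rule would run a compiler here.
    ctx.actions.write(output = lib, content = "")
    return [
        DefaultInfo(files = depset([lib])),
        ExampleInfo(
            transitive_libs = depset(
                [lib],
                transitive = [d[ExampleInfo].transitive_libs for d in ctx.attr.deps],
            ),
            transitive_hdrs = depset(
                ctx.files.hdrs,
                transitive = [d[ExampleInfo].transitive_hdrs for d in ctx.attr.deps],
            ),
        ),
    ]

example_library = rule(
    implementation = _example_library_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = [".example"]),
        "hdrs": attr.label_list(allow_files = [".header"]),
        "deps": attr.label_list(providers = [ExampleInfo]),
    },
)
```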
*   "Targets": Make the wording a bit briefer, and pull up the part about the
    syntax for accessing providers from a target.
*   "Files": A digression about how file Targets work is pruned. It was a
    really long way to say "a file target's default outputs are just that
    File", and it doesn't fit well with the rest of the section; generally,
    when you want the default outputs from a dependency, you use ctx.files.
    Briefly mention ctx.file and ctx.executable, so the latter can be used in
    an idiomatic way in a later example.
*   "Outputs": ctx.actions.write doesn't create a non-predeclared output; it
    takes a File object as an input and returns None. Be a bit more explicit
    that declare_file and declare_directory return File objects.
*   "Actions": Briefly mention ctx.actions.args. Condense a long list of
    paragraphs about the constraints on action inputs and outputs into briefer
    prose that's more consistent with the style of the rest of this page.
*   "Providers": In the lead-in to this section, mention more concisely that
    rules can only read from their immediate dependencies and that
    intermediate dependencies may need to forward information. Mention up
    front that `DefaultInfo` is added implicitly (because the rest of that
    material gets split into two subsections). Divide the rest into "Default
    outputs", "Runfiles", "Coverage configuration", and "Custom providers".
*   "Default outputs": This pulls up the part about what default outputs are
    from the section on selecting outputs (OutputGroupInfo etc.), which is
    pushed down to the "Advanced topics" section.
*   "Runfiles": The subsection on "Runfiles symlinks" is pushed down to
    "Advanced features". The subsection on "Runfiles location" is pushed down
    to be a subsection of "Executable and test rules", since it's about what
    happens when those executable outputs are run by "blaze run" or "blaze
    test" (and leaving it here breaks up the flow of this section too much).
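The declare_file / ctx.actions.write relationship can be sketched as (a
minimal hypothetical rule implementation, not the doc's actual example):

```starlark
def _impl(ctx):
    # declare_file returns a File object; the action that produces the
    # file must still be registered separately.
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    # ctx.actions.write registers that action. It takes the File as its
    # `output` argument and returns None; it does not create or return
    # a new (non-predeclared) output itself.
    ctx.actions.write(output = out, content = "Hello")
    return [DefaultInfo(files = depset([out]))]
```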
*   "Coverage configuration": The bit about InstrumentedFilesInfo is pulled up
    here. Even rules that don't support coverage instrumentation for their
    sources at all need to deal with this, since they may have something in
    their runtime dependencies (in particular, binaries included via data)
    that does. The remainder is pushed down to "Advanced topics".
*   "Custom providers": This is split into its own subsection.
*   "Executable rules and test rules": Add an example of binary and test rule
    definitions. This gets at the conventional library/binary/test pattern,
    and the fact that binary/test might have exactly the same implementation
    function. (It doesn't get into detail on how they might share logic with
    the implementation function of the library rule, nor does it show how they
    might share logic in defining their attribute schemas; that seems to veer
    off the "how rules work" topic too much to be worth the extra length.)
    Link to the documentation about which attributes are added to executable
    and test rules, instead of listing the attributes added to test rules.
*   "Runfiles location": Pulled down to be a subsection of "Executable and
    test rules"; also mention that this applies to "test" as well as "run".
*   "Requesting output files": Parts about default outputs are moved up to the
    "Default outputs" section. The advice about not using OutputGroupInfo or
    DefaultInfo in lieu of more structured rule-specific providers is reworded
    for brevity. Also pull in the material about predeclared outputs, since
    most rules don't use those.
*   "Code coverage instrumentation": This is adjusted for brevity, and the
    material about the InstrumentedFilesInfo provider is moved up.

RELNOTES: None.
PiperOrigin-RevId: 358424440
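The binary/test pattern sharing one implementation function can be sketched
as (all names and the placeholder action are hypothetical; the doc's actual
example may differ):

```starlark
def _example_binary_impl(ctx):
    executable = ctx.actions.declare_file(ctx.label.name)
    # Placeholder; a real rule would register a link action here.
    ctx.actions.write(
        output = executable,
        content = "#!/bin/sh\nexit 0\n",
        is_executable = True,
    )
    return [DefaultInfo(executable = executable)]

example_binary = rule(
    implementation = _example_binary_impl,
    executable = True,
)

# A test rule may reuse exactly the same implementation function;
# only the rule definition differs.
example_test = rule(
    implementation = _example_binary_impl,
    test = True,
)
```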