This page covers remote caching, setting up a server to host the cache, and running builds using the remote cache.
A remote cache is used by a team of developers and/or a continuous integration (CI) system to share build outputs. If your build is reproducible, the outputs from one machine can be safely reused on another machine, which can make builds significantly faster.
Bazel breaks a build into discrete steps, which are called actions. Each action has inputs, output names, a command line, and environment variables. Required inputs and expected outputs are declared explicitly for each action.
You can set up a server to be a remote cache for build outputs, which are these action outputs. These outputs consist of a list of output file names and the hashes of their contents. With a remote cache, you can reuse build outputs from another user's build rather than building each new output locally.
To use remote caching:

* Set up a server as the cache's backend
* Configure the Bazel build to use the remote cache
The remote cache stores two types of data:

* The action cache, which is a map of action hashes to action result metadata
* A content-addressable store (CAS) of output files
Note that the remote cache additionally stores the stdout and stderr for every action. Inspecting Bazel's stdout/stderr is therefore not a good signal for estimating cache hits.
Once a server is set up as the remote cache, you can use the cache in multiple ways:

* Read from and write to the remote cache
* Read from and/or write to the remote cache except for specific targets
* Only read from the remote cache
When you run a Bazel build that can read and write to the remote cache, the build follows these steps:

1. Bazel creates the graph of targets that need to be built and a list of required actions. Each action has declared inputs and output filenames.
2. Bazel checks your local machine for existing build outputs and reuses any that it finds.
3. Bazel checks the remote cache for existing build outputs. If an output is found, Bazel retrieves it. This is a cache hit.
4. For required actions whose outputs were not found, Bazel executes the actions locally and creates the required build outputs.
5. New build outputs are uploaded to the remote cache.
You need to set up a server to act as the cache's backend. An HTTP/1.1 server can treat Bazel's data as opaque bytes, so many existing servers can be used as a remote caching backend. Bazel's HTTP Caching Protocol is what supports remote caching.
You are responsible for choosing, setting up, and maintaining the backend server that will store the cached outputs. When choosing a server, consider:

* Networking speed. For example, if your team is in the same office, you may want to run your own local server.
* Security. The remote cache will contain your binaries and so needs to be secure.
* Ease of management. For example, Google Cloud Storage is a fully managed service.
There are many backends that can be used for a remote cache. Some options include:

* nginx
* bazel-remote
* Google Cloud Storage
* Other HTTP/1.1 servers
nginx is an open source web server. With its WebDAV module, it can be used as a remote cache for Bazel. On Debian and Ubuntu you can install the `nginx-extras` package. On macOS nginx is available via Homebrew:
```
brew tap denji/nginx
brew install nginx-full --with-webdav
```
Below is an example configuration for nginx. Note that you will need to change `/path/to/cache/dir` to a valid directory where nginx has permission to read and write. You may need to change the `client_max_body_size` option to a larger value if you have larger output files. The server will require other configuration, such as authentication.

Example configuration for the `server` section in `nginx.conf`:
```
location /cache/ {
  # The path to the directory where nginx should store the cache contents.
  root /path/to/cache/dir;
  # Allow PUT
  dav_methods PUT;
  # Allow nginx to create the /ac and /cas subdirectories.
  create_full_put_path on;
  # The maximum size of a single file.
  client_max_body_size 1G;
  allow all;
}
```
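With a configuration like the one above in place, you can point Bazel at the nginx server using the `--remote_cache` flag. The host and port below are placeholders for illustration; use whatever your nginx server actually listens on:

```
# Sketch: assumes nginx serves the WebDAV cache under /cache/ on your.host:8080.
build --remote_cache=http://your.host:8080/cache/
```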
bazel-remote is an open source remote build cache that you can use on your infrastructure. It has been successfully used in production at several companies since early 2018. Note that the Bazel project does not provide technical support for bazel-remote.
This cache stores contents on disk and also provides garbage collection to enforce an upper storage limit and clean unused artifacts. The cache is available as a Docker image{: .external} and its code is available on GitHub{: .external}. Both the REST and gRPC remote cache APIs are supported.
Refer to the GitHub{: .external} page for instructions on how to use it.
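As a rough sketch of how it can be run, the published Docker image can be started with a host directory mounted as the cache storage. The image name, port mapping, and flag below follow the bazel-remote documentation at the time of writing and may change, so verify them against the GitHub page:

```
# Sketch: run bazel-remote with a 5 GiB on-disk cache (verify flags against the README).
docker run -v /path/to/cache/dir:/data -p 9090:8080 \
    buchgr/bazel-remote-cache --max_size=5

# Then point Bazel at it, for example:
# build --remote_cache=http://your.host:9090
```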
Google Cloud Storage{: .external} is a fully managed object store which provides an HTTP API that is compatible with Bazel's remote caching protocol. It requires that you have a Google Cloud account with billing enabled.
To use Cloud Storage as the cache:
1. Create a storage bucket{: .external}. Ensure that you select a bucket location that's closest to you, as network bandwidth is important for the remote cache.

2. Create a service account for Bazel to authenticate to Cloud Storage. See Creating a service account{: .external}.

3. Generate a secret JSON key and then pass it to Bazel for authentication. Store the key securely, as anyone with the key can read and write arbitrary data to/from your GCS bucket.

4. Connect to Cloud Storage by adding the following flags to your Bazel command:

   * Pass the URL of your bucket using `--remote_cache=https://storage.googleapis.com{{ '<var>' }}/bucket-name{{ '</var>' }}`, where bucket-name is the name of your storage bucket.
   * Pass the authentication key using `--google_credentials={{ '<var>' }}/path/to/your/secret-key{{ '</var>' }}.json`, or pass `--google_default_credentials` to use Application Authentication{: .external}.

5. You can configure Cloud Storage to automatically delete old files. To do so, see Managing Object Lifecycles{: .external}.
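Put together, a minimal `.bazelrc` configuration for a Cloud Storage cache might look like the following sketch; the bucket name and key path are placeholders:

```
# Sketch of .bazelrc entries for a GCS-backed remote cache.
# Replace my-bucket and the key path with your own values.
build --remote_cache=https://storage.googleapis.com/my-bucket
build --google_credentials=/path/to/your/secret-key.json
```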
You can set up any HTTP/1.1 server that supports PUT and GET as the cache's backend. Users have reported success with caching backends such as Hazelcast{: .external}, Apache httpd{: .external}, and AWS S3{: .external}.
Bazel supports HTTP Basic Authentication as of version 0.11.0. You can pass a username and password to Bazel via the remote cache URL. The syntax is `https://username:password@hostname.com:port/path`. Note that HTTP Basic Authentication transmits the username and password in plaintext over the network, so it is critical to always use it with HTTPS.
Bazel supports remote caching via HTTP/1.1. The protocol is conceptually simple: binary data (BLOB) is uploaded via PUT requests and downloaded via GET requests. Action result metadata is stored under the path `/ac/` and output files are stored under the path `/cas/`.
For example, consider a remote cache running under `http://localhost:8080/cache`. A Bazel request to download action result metadata for an action with the SHA256 hash `01ba4719...` will look as follows:
```
GET /cache/ac/01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b HTTP/1.1
Host: localhost:8080
Accept: */*
Connection: Keep-Alive
```
A Bazel request to upload an output file with the SHA256 hash `15e2b0d3...` to the CAS will look as follows:
```
PUT /cache/cas/15e2b0d3c33891ebb0f1ef609ec419420c20e320ce94c65fbc8c3312448eb225 HTTP/1.1
Host: localhost:8080
Accept: */*
Content-Length: 9
Connection: Keep-Alive

0x310x320x330x340x350x360x370x380x39
```
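Because the protocol is plain HTTP, you can exercise a cache backend manually, for example with curl. The following is a sketch against the example cache above; the file name and hash are placeholders for illustration:

```
# Upload a blob to the CAS under its SHA256 hash (placeholder file and hash).
curl -X PUT --data-binary @myfile.txt \
    http://localhost:8080/cache/cas/15e2b0d3c33891ebb0f1ef609ec419420c20e320ce94c65fbc8c3312448eb225

# Download it again by the same hash.
curl http://localhost:8080/cache/cas/15e2b0d3c33891ebb0f1ef609ec419420c20e320ce94c65fbc8c3312448eb225
```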
Once a server is set up as the remote cache, you need to add flags to your Bazel command to use it. See the list of configurations and their flags below. You may also need to configure authentication, which is specific to your chosen server.
You may want to add these flags to a `.bazelrc` file so that you don't need to specify them every time you run Bazel. Depending on your project and team dynamics, you can add the flags to a `.bazelrc` file that is personal to you, checked into your project's workspace and shared with the team, or used by your CI system.
Be careful about who has the ability to write to the remote cache. You may want only your CI system to be able to write to it.
Use the following flag to read from and write to the remote cache:
```
build --remote_cache=http://{{ '<var>' }}your.host:port{{ '</var>' }}
```
Besides `HTTP`, the following protocols are also supported: `HTTPS`, `grpc`, `grpcs`.
Use the following flag in addition to the one above to only read from the remote cache:
```
build --remote_upload_local_results=false
```
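For example, one common setup (a sketch, not the only option) lets the CI system populate the cache while developers only read from it; the URL is a placeholder:

```
# Shared .bazelrc checked into the workspace: everyone reads from the cache (placeholder URL).
build --remote_cache=http://your.host:port
build --remote_upload_local_results=false

# CI-only configuration: allow uploads so CI populates the cache.
# build --remote_upload_local_results=true
```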
To exclude specific targets from using the remote cache, tag the target with `no-remote-cache`. For example:
```
java_library(
    name = "target",
    tags = ["no-remote-cache"],
)
```
Deleting content from the remote cache is part of managing your server. How you delete content from the remote cache depends on the server you have set up as the cache. When deleting outputs, either delete the entire cache, or delete old outputs.
The cached outputs are stored as a set of names and hashes. When deleting content, there's no way to distinguish which output belongs to a specific build.
You may want to delete content from the cache to:

* Create a clean cache after the cache was poisoned
* Reduce the amount of storage used by deleting old outputs
The remote HTTP cache supports connecting over unix domain sockets. The behavior is similar to curl's `--unix-socket` flag. Use the following to configure a unix domain socket:
```
build --remote_cache=http://{{ '<var>' }}your.host:port{{ '</var>' }}
build --remote_cache_proxy=unix:/{{ '<var>' }}path/to/socket{{ '</var>' }}
```
This feature is unsupported on Windows.
Bazel can use a directory on the file system as a remote cache. This is useful for sharing build artifacts when switching branches and/or working on multiple workspaces of the same project, such as multiple checkouts. Since Bazel does not garbage-collect the directory, you might want to automate a periodic cleanup of this directory. Enable the disk cache as follows:
```
build --disk_cache={{ '<var>' }}path/to/build/cache{{ '</var>' }}
```
You can pass a user-specific path to the `--disk_cache` flag using the `~` alias (Bazel will substitute the current user's home directory). This comes in handy when enabling the disk cache for all developers of a project via the project's checked-in `.bazelrc` file.
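For instance, a checked-in `.bazelrc` could enable a per-user disk cache like this sketch; the directory name is just an illustration:

```
# Sketch: per-user disk cache under the home directory (directory name is arbitrary).
build --disk_cache=~/.cache/bazel-disk-cache
```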
Input file modification during a build
When an input file is modified during a build, Bazel might upload invalid results to the remote cache. You can enable change detection with the `--experimental_guard_against_concurrent_changes` flag. There are no known issues and it will be enabled by default in a future release. See issue #3360{: .external} for updates. Generally, avoid modifying source files during a build.
Environment variables leaking into an action
An action definition contains environment variables. This can be a problem for sharing remote cache hits across machines. For example, environments with different `$PATH` variables won't share cache hits. Only environment variables explicitly whitelisted via `--action_env` are included in an action definition. Bazel's Debian/Ubuntu package used to install `/etc/bazel.bazelrc` with a whitelist of environment variables including `$PATH`. If you are getting fewer cache hits than expected, check that your environment doesn't have an old `/etc/bazel.bazelrc` file.
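To keep action definitions identical across machines, you can pin the whitelisted variables to fixed values. A minimal sketch (the exact `PATH` value is only an example, not a recommendation):

```
# Sketch: whitelist PATH with a fixed value so action hashes match across machines.
build --action_env=PATH=/bin:/usr/bin:/usr/local/bin
```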
Bazel does not track tools outside a workspace
Bazel currently does not track tools outside a workspace. This can be a problem if, for example, an action uses a compiler from `/usr/bin/`. Then, two users with different compilers installed will wrongly share cache hits because the outputs are different but they have the same action hash. See issue #4558{: .external} for updates.
Incremental in-memory state is lost when running builds inside docker containers

Bazel uses a client/server architecture even when running in a single Docker container. On the server side, Bazel maintains in-memory state which speeds up builds. When running builds inside Docker containers, such as in CI, this in-memory state is lost and Bazel must rebuild it before it can use the remote cache.
Your Build in a Datacenter: The Bazel team gave a talk{: .external} about remote caching and execution at FOSDEM 2018.
Faster Bazel builds with remote caching: a benchmark: Nicolò Valigi wrote a blog post{: .external} in which he benchmarks remote caching in Bazel.