BuildGrid is made up of a number of components which work together to provide client-agnostic remote caching and remote execution functionality. These components can be deployed independently if only a subset of the services is needed for your use case.

For detail on the APIs provided by the services, see Resources.

digraph buildgrid_overview {

     graph [fontsize=14 fontname="Verdana" compound=true];
     node [shape=box fontsize=10 fontname="Verdana"];
     edge [fontsize=10 fontname="Verdana"];

     label="BuildGrid Deployment Example";

     subgraph cluster_bgd_cas {
         label="CAS service";

         cas [
             label="CAS"
         ];
         bytestream [
             label="ByteStream"
         ];
     }

     subgraph cluster_bgd_ac {
         label="Action Cache service";

         action_cache [
             label="Action Cache"
         ];
     }

     subgraph cluster_bgd_execution {
         label="Execution service";

         execution [
             label="Execution"
         ];
         operations [
             label="Operations"
         ];
     }

     subgraph cluster_bgd_bots {
         label="Bots service";

         bots [
             label="Bots"
         ];
     }

     {cas execution operations bots} -> sql;
     {cas bytestream action_cache} -> s3;

     sql [
         label="PostgreSQL (configurable)"
     ];
     s3 [
         label="S3 (configurable)"
     ];
}

CAS

The CAS, or Content Addressable Storage, is a service which stores blobs and allows them to be retrieved using the “digest” of the blobs themselves. A digest here is a pair of the hash of the content and the size of the blob in bytes.

The CAS can be used to store and retrieve arbitrary blobs, but in BuildGrid it is chiefly used for input and output files, gRPC messages (such as the Actions sent by clients and the corresponding ActionResults), and the stdout/stderr from Action execution. In a remote-caching-only deployment, the CAS stores the actual cached blobs.
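Concretely, a digest can be computed as below (a minimal Python sketch, assuming SHA-256 as the hash function; `compute_digest` is an illustrative helper, not a BuildGrid API):

```python
import hashlib


def compute_digest(blob: bytes):
    """Return the (hash, size_bytes) pair that identifies a blob in CAS."""
    return hashlib.sha256(blob).hexdigest(), len(blob)


# Identical content always produces an identical digest, so storing the
# same blob twice is naturally deduplicated.
digest = compute_digest(b"hello")
```

Because the digest is derived purely from the content, any party holding the same bytes can compute the same key without coordination.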

BuildGrid’s CAS implementation supports a number of storage backends, and some more complex options.


In-Memory

This stores blobs in memory, which is fast but obviously limits both the number of blobs that can be stored and how large those blobs can be. This is probably most useful for testing, or as the cache part of a two-level CAS (see Cache + Fallback).

If adding a new blob results in the CAS being full, then old blobs are deleted on a least-recently-used basis.
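The least-recently-used eviction behaviour can be modelled with an OrderedDict (an illustrative toy, not BuildGrid's actual implementation; the class and its `max_size` capacity-in-bytes parameter are assumptions):

```python
from collections import OrderedDict


class LRUBlobStore:
    """Toy in-memory blob store that evicts least-recently-used blobs
    when adding a new blob would exceed its capacity."""

    def __init__(self, max_size: int):
        self.max_size = max_size          # capacity in bytes (assumed)
        self.current_size = 0
        self.blobs = OrderedDict()        # digest hash -> blob bytes

    def get(self, digest_hash: str):
        blob = self.blobs.get(digest_hash)
        if blob is not None:
            self.blobs.move_to_end(digest_hash)  # mark as recently used
        return blob

    def put(self, digest_hash: str, blob: bytes):
        if digest_hash in self.blobs:
            self.current_size -= len(self.blobs[digest_hash])
        self.blobs[digest_hash] = blob
        self.blobs.move_to_end(digest_hash)
        self.current_size += len(blob)
        # Evict the oldest (least recently used) blobs until we fit again.
        while self.current_size > self.max_size and self.blobs:
            _, evicted = self.blobs.popitem(last=False)
            self.current_size -= len(evicted)
```

Reads "touch" a blob, so frequently fetched blobs survive eviction longer than cold ones.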

Local Disk

This stores blobs in a directory on the CAS machine’s local disk. This is slower than the in-memory storage, but doesn’t have limitations on size and number of blobs.

There is currently no internal mechanism to clean up this storage, but work is ongoing to implement a cleanup command to work alongside Indexed CAS which will be able to handle this.
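On-disk storage can be pictured as files named by digest hash, typically fanned out across subdirectories to avoid one huge flat directory (a hypothetical sketch of the idea; the real on-disk layout is an implementation detail of BuildGrid):

```python
import hashlib
from pathlib import Path


def blob_path(root: Path, digest_hash: str) -> Path:
    # Fan out on the first two hex characters, e.g. <root>/2c/2cf24d...
    return root / digest_hash[:2] / digest_hash


def write_blob(root: Path, blob: bytes) -> str:
    """Store a blob under its content digest and return the digest hash."""
    digest_hash = hashlib.sha256(blob).hexdigest()
    path = blob_path(root, digest_hash)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(blob)
    return digest_hash


def read_blob(root: Path, digest_hash: str) -> bytes:
    return blob_path(root, digest_hash).read_bytes()
```

Since the filename is the content hash, re-writing an existing blob is harmless: the bytes are identical by construction.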


Redis

This stores blobs in a Redis key/value store. This also has no enforced limits on blob counts and sizes, though it is probably somewhat unwise to use it for very large blobs.


S3

This storage backend stores blobs using the AWS S3 API. It should be compatible with anything which exposes the S3 API, from AWS itself to other object storage implementations like Ceph or Swift.

There is currently no internal mechanism to clean up this storage, but work is ongoing to implement a cleanup command to work alongside Indexed CAS which will be able to handle this.

Cache + Fallback

This is an implementation of BuildGrid’s storage API which handles writing blobs to multiple other storage implementations. It is used to provide a cache layer for speed over the top of a slower but persistent storage, such as S3.

Indexed CAS

Indexed CAS is a storage implementation which maintains an index of the storage’s contents, and hands the reading/writing off to another backend.

This index is used to speed up requests like FindMissingBlobs, by looking up blobs in the index rather than in a slower storage.

The index will also be used for handling cleanup of storages which don’t have a built-in mechanism for cleanup/expiry of blobs, since it can track when blobs were last accessed.
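At its core, the FindMissingBlobs speedup amounts to a fast membership lookup against the index instead of the backing storage (an illustrative sketch; in reality the index is a database, and these names are assumptions):

```python
def find_missing_blobs(index, requested_hashes):
    """Return the digest hashes the client still needs to upload,
    consulting only the (fast) index rather than the backing storage.

    `index` models the Indexed CAS index as a set of known digest hashes.
    """
    return [h for h in requested_hashes if h not in index]
```

The same index can record last-access timestamps per entry, which is what makes LRU-style cleanup of backends like S3 or local disk possible.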


ByteStream

The ByteStream service is a generic API for reading and writing bytes to/from a resource. BuildGrid uses it to read and write blobs in the CAS, so a ByteStream service should be deployed in the same server as the CAS. It is also used by BuildGrid’s LogStream service to handle reading and writing streams of logs; any LogStream service likewise needs a ByteStream service in the same server to function correctly.

Action Cache

The Action Cache is a key/value store which maps Action digests to their corresponding ActionResults. Internally it stores only the digest of the result, and handles retrieving the full ActionResult message from the CAS.

BuildGrid’s Action Cache can be configured to store this mapping either in-memory or using the S3 API.
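Conceptually the Action Cache behaves like a dictionary from Action digests to ActionResult digests, with the full message resolved from CAS on a hit (a toy model for illustration; the class and method names are assumptions, not BuildGrid's API):

```python
class ActionCache:
    """Toy Action Cache: maps an Action digest to its ActionResult digest,
    resolving the full result message from a CAS-like blob store."""

    def __init__(self, cas):
        self.cas = cas          # digest hash -> serialized message (dict stands in for CAS)
        self.results = {}       # action digest hash -> result digest hash

    def update_action_result(self, action_digest: str, result_digest: str):
        self.results[action_digest] = result_digest

    def get_action_result(self, action_digest: str):
        result_digest = self.results.get(action_digest)
        if result_digest is None:
            return None                    # cache miss: the Action must be executed
        return self.cas[result_digest]     # cache hit: fetch full message from CAS
```

Storing only the digest keeps the cache small and lets the (potentially large) result message live in whichever CAS backend is configured.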

Write-Once Action Cache

BuildGrid also has an Action Cache which only allows a given key to be written once. This was added for testing purposes, but may be useful anywhere that an immutable cache of Action results is needed.


Operations

The Operations service is used to inspect the state of Actions currently being executed by BuildGrid. It also handles cancellation of requested Actions, and is normally deployed in the same place as the Execution service, since some tools expect it to be accessible at the same endpoint. The Operations service can be used either to inspect a single Operation (GetOperation) or to list all Operations that BuildGrid knows about (ListOperations).

Note that BuildGrid currently maintains knowledge of all past Operations, so listing the Operations can get quite long. To deal with this, Operations are returned in paginated responses, with each ListOperationsResponse containing a next_page_token to get the next page of results.
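A client can drain the paginated listing with a loop like the following (a sketch; `list_operations` is a stand-in callable for the gRPC ListOperations method, assumed to return a page of operations plus the next_page_token, with an empty token on the final page):

```python
def collect_all_operations(list_operations):
    """Follow next_page_token until the service reports no further pages."""
    operations = []
    page_token = ""                       # empty token requests the first page
    while True:
        page, page_token = list_operations(page_token)
        operations.extend(page)
        if not page_token:                # empty token marks the final page
            return operations
```

Each iteration hands the previous response's next_page_token back to the service, which is exactly the ListOperationsResponse contract described above.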

ListOperations Filtering and Sorting

You can filter the output of ListOperations by passing a string to the filter parameter. A filter string looks like the following:

  • completed_time > 2020-07-30T14:30:00 & stage = COMPLETED

The supported parameters are:

  • name (the operation name without the instance name prefix)

  • stage (the stage of execution the Action has reached, as used in the examples above, e.g. QUEUED or COMPLETED)

  • queued_time (an ISO-8601 timestamp indicating the time the Action was queued)

  • start_time (an ISO-8601 timestamp indicating the time work on the Action began)

  • completed_time (an ISO-8601 timestamp indicating the time work on the Action completed)

The supported operators are: =, !=, >, >=, <, <=

You can also use a special sort_order parameter to adjust the order in which results are returned, like this:

  • completed_time > 2020-07-30T14:30:00 & sort_order = completed_time

Any of the filtering parameters above can be used as values for sort_order. By default, sort_order indicates ascending order. You can use (asc) or (desc) at the end of the value to explicitly call out ascending or descending order, like this:

  • completed_time > 2020-07-30T14:30:00 & sort_order = completed_time(asc)

  • completed_time > 2020-07-30T14:30:00 & sort_order = completed_time(desc)

You can use multiple sort_order keys in the filter string. Each subsequent sort_order key breaks ties among elements sorted by previous keys.

  • completed_time > 2020-07-30T14:30:00 & sort_order = stage & sort_order = queued_time

The default filter is:

  • stage != COMPLETED & sort_order = queued_time
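Since a filter is just an &-separated string of conditions and sort_order keys, it can be assembled programmatically (a sketch; `build_filter` is a hypothetical helper for illustration, not part of BuildGrid):

```python
def build_filter(conditions, sort_orders=()):
    """Join filter conditions and sort_order keys into a ListOperations
    filter string of the form shown above."""
    parts = list(conditions)
    parts += [f"sort_order = {key}" for key in sort_orders]
    return " & ".join(parts)


# Reproduce the default filter: unfinished Operations, oldest-queued first.
default_filter = build_filter(["stage != COMPLETED"], sort_orders=["queued_time"])
```

The resulting string is passed as the filter parameter of the ListOperations request.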


Execution

The Execution service implements the execution part of the Remote Execution API. It receives Execute requests containing Action digests, and schedules the Actions for execution. Actions are prioritized first by their priority value, where smaller integers mean higher priority, and then by how long the Action has been queued.
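The described ordering (smaller priority integer first, ties broken by longest time in the queue) matches a heap keyed on priority and enqueue order — a simplified sketch, not BuildGrid's scheduler:

```python
import heapq
import itertools


class ActionQueue:
    """Toy scheduler queue: lower `priority` integers are served first;
    ties go to the Action that has been queued longest (FIFO)."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # monotonic counter stands in for enqueue time

    def push(self, action_name: str, priority: int):
        # (priority, enqueue_order) gives the compound sort key described above.
        heapq.heappush(self._heap, (priority, next(self._counter), action_name))

    def pop(self) -> str:
        _, _, action_name = heapq.heappop(self._heap)
        return action_name
```

A monotonic counter is used instead of wall-clock timestamps so that two Actions queued in the same instant still dequeue in a deterministic order.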

BuildGrid’s Execution service has a pluggable scheduling component. Currently there are two scheduler implementations: in-memory and SQL-based. The SQL scheduler is tested with SQLite and PostgreSQL, but should theoretically work with any database supported by SQLAlchemy. Production BuildGrid deployments should use the SQL scheduler with PostgreSQL, to provide a reliable and persistent job queue.


Bots

The Bots service implements the Remote Workers API. It handles assigning queued Actions to workers, and receiving the updates they report on execution progress.

If the Execution service is using an in-memory scheduler, the Bots service needs to be deployed in the same server. However, using an SQL scheduler allows the Bots service to be independently deployed, as long as it uses the same database as the Execution service.


LogStream

The LogStream service implements the LogStream API. In a BuildGrid context, this provides a mechanism for workers to stream logs to interested clients whilst the build is in progress. The client doesn’t necessarily need to be the tool which made the Execute request; the resource name used to read the stream can be obtained using the Operations API.

The LogStream service only handles creating the actual stream resource; reading from and writing to the stream uses the ByteStream API. This means that any config including a LogStream service also needs a ByteStream service to function correctly.

Use of the LogStream service isn’t limited to streaming build logs from a BuildBox worker; the buildbox-tools repository provides tooling for writing to a stream generically, which could be reused for other purposes. The LogStream service is also completely independent of the rest of BuildGrid (except for the ByteStream service used for read/write access), and so can be used in situations with no need for the rest of the remote execution/caching functionality. An example LogStream-only deployment is provided in this docker-compose example.