A SQL index implementation. It can be pointed at either a remote SQL server or a local SQLite database.

class(connection_string: str, automigrate: bool = False, window_size: int = 1000, inclause_limit: int = -1, connection_timeout: int = 5, **kwargs)


session(*, sqlite_lock_immediately: bool = False, reraise: bool = False) → Any

Context manager for convenience use of sessions. Automatically commits when the context ends and rolls back failed transactions.

Setting sqlite_lock_immediately causes the session to be yielded only after the SQLite database has been locked for exclusive use.

Setting reraise to True causes this to reraise any exceptions after rollback so they can be handled directly by client logic.
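The commit/rollback/reraise semantics described above can be sketched with a minimal, self-contained stand-in (the `_FakeSession` class here is hypothetical; the real session object comes from the underlying SQL library):

```python
from contextlib import contextmanager

class _FakeSession:
    """Hypothetical stand-in recording whether commit or rollback ran."""
    def __init__(self):
        self.committed = False
        self.rolled_back = False
    def commit(self):
        self.committed = True
    def rollback(self):
        self.rolled_back = True

@contextmanager
def session(*, reraise: bool = False):
    # Sketch of the documented behaviour: commit on clean exit,
    # roll back on failure, and optionally reraise the exception.
    s = _FakeSession()
    try:
        yield s
        s.commit()
    except Exception:
        s.rollback()
        if reraise:
            raise

# Successful transactions are committed automatically.
with session() as ok:
    pass
assert ok.committed

# Failed transactions are rolled back; with reraise=False the error is swallowed.
with session() as failed:
    raise RuntimeError("boom")
assert failed.rolled_back

# With reraise=True the exception propagates to client logic after rollback.
try:
    with session(reraise=True) as strict:
        raise ValueError("boom")
except ValueError:
    pass
assert strict.rolled_back
```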

has_blob(digest: build.bazel.remote.execution.v2.remote_execution_pb2.Digest) → bool

Return True if the blob with the given instance/digest exists.

get_blob(digest: build.bazel.remote.execution.v2.remote_execution_pb2.Digest) → Optional[BinaryIO]

Return a file-like object containing the blob.

If the blob isn’t present in storage, return None.
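The has_blob/get_blob contract — blobs keyed by (hash, size) digests, with None for a missing blob — can be illustrated with a hypothetical in-memory stand-in (the `InMemoryIndex` class and its `put` helper are not part of this API; they exist only for the example):

```python
import hashlib
from io import BytesIO
from typing import Optional, Tuple

Digest = Tuple[str, int]  # (hash, size_bytes), standing in for the proto message

class InMemoryIndex:
    """Hypothetical in-memory sketch of the has_blob/get_blob contract."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> Digest:
        # Helper for the example: store data under its content digest.
        digest = (hashlib.sha256(data).hexdigest(), len(data))
        self._blobs[digest] = data
        return digest

    def has_blob(self, digest: Digest) -> bool:
        return digest in self._blobs

    def get_blob(self, digest: Digest) -> Optional[BytesIO]:
        # A missing blob yields None rather than raising.
        data = self._blobs.get(digest)
        return BytesIO(data) if data is not None else None

idx = InMemoryIndex()
d = idx.put(b"hello")
assert idx.has_blob(d)
assert idx.get_blob(d).read() == b"hello"
assert idx.get_blob(("deadbeef", 8)) is None
```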

delete_blob(digest: build.bazel.remote.execution.v2.remote_execution_pb2.Digest) → None

Delete a blob from the index, if it is present.

TODO: This method will be promoted to StorageABC in a future commit.

begin_write(digest: build.bazel.remote.execution.v2.remote_execution_pb2.Digest) → BinaryIO

Return a file-like object to which a blob’s contents can be written.

commit_write(digest: build.bazel.remote.execution.v2.remote_execution_pb2.Digest, write_session: BinaryIO) → None

Commit the write operation. write_session must be an object returned by begin_write.

The storage object is not responsible for verifying that the data written to the write_session actually matches the digest. The caller must do that.
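The begin_write/commit_write protocol, including the caller-side digest verification the last paragraph requires, can be sketched as follows (the `WriteSketch` class is a hypothetical stand-in, not the real storage object):

```python
import hashlib
from io import BytesIO

class WriteSketch:
    """Hypothetical stand-in for the begin_write/commit_write protocol."""
    def __init__(self):
        self._committed = {}

    def begin_write(self, digest) -> BytesIO:
        # Hand the caller a file-like object to write the blob into.
        return BytesIO()

    def commit_write(self, digest, write_session: BytesIO) -> None:
        # Note: no verification happens here; that is the caller's job.
        self._committed[digest] = write_session.getvalue()

storage = WriteSketch()
data = b"some blob contents"
digest = (hashlib.sha256(data).hexdigest(), len(data))

session = storage.begin_write(digest)
session.write(data)

# Caller-side verification, as required by commit_write's contract.
assert hashlib.sha256(session.getvalue()).hexdigest() == digest[0]
assert len(session.getvalue()) == digest[1]

storage.commit_write(digest, session)
assert storage._committed[digest] == data
```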

missing_blobs(digests: List[build.bazel.remote.execution.v2.remote_execution_pb2.Digest]) → List[build.bazel.remote.execution.v2.remote_execution_pb2.Digest]

Return a list of the digests that are not present in CAS.
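The semantics are a simple set difference over the requested digests; a minimal sketch, using plain (hash, size) tuples in place of the Digest messages:

```python
def missing_blobs(present, digests):
    """Return the subset of `digests` not held in CAS, preserving query order."""
    return [d for d in digests if d not in present]

# Hypothetical index contents and query batch.
present = {("hash-a", 3), ("hash-b", 5)}
query = [("hash-a", 3), ("hash-c", 7), ("hash-b", 5)]
assert missing_blobs(present, query) == [("hash-c", 7)]
```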

bulk_update_blobs(blobs: List[Tuple[build.bazel.remote.execution.v2.remote_execution_pb2.Digest, bytes]]) → List[google.rpc.status_pb2.Status]

Given a list of (digest, value) tuples, add all of the blobs to CAS. Return a list of Status objects corresponding to the result of uploading each blob.

Unlike in commit_write, the storage object will verify that each of the digests matches the provided data.
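The per-blob verification described above can be sketched as follows — a hedged stand-in that uses plain strings in place of google.rpc Status objects and SHA-256 as an assumed digest function:

```python
import hashlib

def bulk_update(blobs):
    """Sketch of bulk_update_blobs: verify each digest, record a per-blob status."""
    store, statuses = {}, []
    for (digest_hash, digest_size), data in blobs:
        # Unlike commit_write, each blob is checked against its digest here.
        if (hashlib.sha256(data).hexdigest() == digest_hash
                and len(data) == digest_size):
            store[(digest_hash, digest_size)] = data
            statuses.append("OK")
        else:
            statuses.append("INVALID_ARGUMENT")
    return store, statuses

good = b"hello"
good_digest = (hashlib.sha256(good).hexdigest(), len(good))
bad_digest = ("0" * 64, 5)  # wrong hash for the data below

store, statuses = bulk_update([(good_digest, good), (bad_digest, b"oops!")])
assert statuses == ["OK", "INVALID_ARGUMENT"]
assert good_digest in store and bad_digest not in store
```

A partial failure thus poisons only the offending blob: valid entries in the same batch are still stored.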

bulk_read_blobs(digests: List[build.bazel.remote.execution.v2.remote_execution_pb2.Digest]) → Dict[str, BinaryIO]

Given an iterable container of digests, return a {hash: file-like object} dictionary corresponding to the blobs represented by the input digests.

least_recent_digests() → Iterator[build.bazel.remote.execution.v2.remote_execution_pb2.Digest]

Generator that iterates through the digests in least-recently-used (LRU) order.

get_total_size() → int

Return the sum of the sizes of all blobs in the index.
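The bookkeeping behind least_recent_digests() and get_total_size() can be sketched with an ordered map that tracks access recency and blob sizes (the `LRUIndex` class and its `touch` method are hypothetical; the real index keeps this state in SQL):

```python
from collections import OrderedDict

class LRUIndex:
    """Hypothetical sketch of LRU iteration and total-size accounting."""
    def __init__(self):
        self._entries = OrderedDict()  # digest -> size_bytes, oldest first

    def touch(self, digest, size):
        # Record an access: the digest becomes the most recently used.
        self._entries[digest] = size
        self._entries.move_to_end(digest)

    def least_recent_digests(self):
        # Yield digests oldest-first, suitable for LRU eviction.
        yield from self._entries

    def get_total_size(self):
        return sum(self._entries.values())

idx = LRUIndex()
idx.touch("digest-a", 10)
idx.touch("digest-b", 20)
idx.touch("digest-a", 10)  # "digest-a" becomes most recent
assert list(idx.least_recent_digests()) == ["digest-b", "digest-a"]
assert idx.get_total_size() == 30
```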