Forwards storage requests to a remote storage.

class RemoteStorage(remote: str, instance_name: str, channel_options: Sequence[Tuple[str, Any]] | None = None, credentials: ClientCredentials | None = None, retries: int = 0, max_backoff: int = 64, request_timeout: float | None = None)

Bases: StorageABC

start() → None
stop() → None
get_capabilities() → CacheCapabilities
has_blob(digest: Digest) → bool

Return True if the blob with the given instance/digest exists.

get_blob(digest: Digest) → IO[bytes] | None

Return a file-like object containing the blob. Most implementations will read the entire file into memory and return a BytesIO object. Eventually this should be corrected to handle files which cannot fit into memory.

The file-like object must be readable and seekable.

If the blob isn’t present in storage, return None.
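The get_blob() contract can be illustrated with a hypothetical in-memory storage; this is a sketch of the interface's behaviour, not the remote implementation, and the (hash, size) tuple below merely stands in for the REAPI Digest message.

```python
import hashlib
import io

# Hypothetical in-memory storage illustrating the get_blob() contract;
# a (hash, size) tuple stands in for the REAPI Digest message.
class InMemoryStorage:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes):
        digest = (hashlib.sha256(data).hexdigest(), len(data))
        self._blobs[digest[0]] = data
        return digest

    def get_blob(self, digest):
        # Return a readable, seekable file-like object, or None if absent.
        data = self._blobs.get(digest[0])
        return io.BytesIO(data) if data is not None else None

store = InMemoryStorage()
digest = store.put(b"hello, cas")
blob = store.get_blob(digest)
assert blob is not None and blob.read() == b"hello, cas"
blob.seek(0)  # the returned object must also be seekable
assert store.get_blob(("0" * 64, 0)) is None  # absent blob → None
```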

delete_blob(digest: Digest) → None

The REAPI doesn’t have a deletion method, so we can’t support deletion for remote storage.

bulk_delete(digests: List[Digest]) → List[str]

The REAPI doesn’t have a deletion method, so we can’t support bulk deletion for remote storage.
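One plausible way to express "deletion is unsupported" is for both methods to raise; this is only a sketch consistent with the note above, and the real class may signal the failure differently.

```python
# Hypothetical sketch: since the Remote Execution API exposes no deletion
# RPC, a remote backend can only signal that deletion is unsupported.
# Raising NotImplementedError is one option, not necessarily what the
# real implementation does.
class RemoteStorageSketch:
    def delete_blob(self, digest):
        raise NotImplementedError("REAPI does not support blob deletion")

    def bulk_delete(self, digests):
        raise NotImplementedError("REAPI does not support bulk deletion")

store = RemoteStorageSketch()
try:
    store.delete_blob(("0" * 64, 0))
except NotImplementedError:
    pass  # expected: deletion is not supported remotely
```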

commit_write(digest: Digest, write_session: IO[bytes]) → None

Store the contents for a digest.

The storage object is not responsible for verifying that the data written to the write_session actually matches the digest. The caller must do that.
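The verification the caller is expected to perform might look like the following sketch; the (hash, size) tuple is a stand-in for the REAPI Digest message, and contents_match is a hypothetical helper, not part of the API.

```python
import hashlib
import io

# Sketch of the check the *caller* must do before commit_write(), since
# commit_write() itself does not verify that the write_session contents
# match the digest. A (hash, size) tuple stands in for the Digest message.
def contents_match(digest, write_session) -> bool:
    write_session.seek(0)
    data = write_session.read()
    return (len(data) == digest[1]
            and hashlib.sha256(data).hexdigest() == digest[0])

payload = b"payload"
digest = (hashlib.sha256(payload).hexdigest(), len(payload))
assert contents_match(digest, io.BytesIO(payload))         # safe to commit
assert not contents_match(digest, io.BytesIO(b"tampered"))  # reject
```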

missing_blobs(digests: List[Digest]) → List[Digest]

Return a list of the digests whose blobs are not present in CAS.
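Conceptually this is a membership filter over the blobs already in storage, in the spirit of the REAPI FindMissingBlobs RPC; the sketch below uses plain hash strings as stand-ins for full Digest messages.

```python
# Hypothetical sketch of the missing_blobs() semantics: keep only the
# digests not already present. Hash strings stand in for Digest messages.
def missing_blobs(digests, present_hashes):
    return [d for d in digests if d not in present_hashes]

present = {"aaaa", "bbbb"}
assert missing_blobs(["aaaa", "cccc"], present) == ["cccc"]
```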

bulk_update_blobs(blobs: List[Tuple[Digest, bytes]]) → List[Status]

Given a container of (digest, value) tuples, add all the blobs to CAS. Return a list of Status objects corresponding to the result of uploading each of the blobs.

Unlike in commit_write, the storage object will verify that each of the digests matches the provided data.
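The per-blob verification and per-blob result can be sketched as follows; plain status strings stand in for the google.rpc Status messages the real method returns, and the (hash, size) tuple again stands in for a Digest.

```python
import hashlib

# Sketch of bulk_update_blobs(): verify each digest against its data and
# report one status per blob. Strings stand in for google.rpc Status.
def bulk_update_blobs(blobs, store):
    statuses = []
    for (digest_hash, size), data in blobs:
        if len(data) == size and hashlib.sha256(data).hexdigest() == digest_hash:
            store[digest_hash] = data
            statuses.append("OK")
        else:
            statuses.append("INVALID_ARGUMENT")  # digest does not match data
    return statuses

good = b"good"
good_digest = (hashlib.sha256(good).hexdigest(), len(good))
bad_digest = ("0" * 64, 3)  # wrong hash for b"bad"
store = {}
statuses = bulk_update_blobs([(good_digest, good), (bad_digest, b"bad")], store)
assert statuses == ["OK", "INVALID_ARGUMENT"]
assert store[good_digest[0]] == good  # only the valid blob was stored
```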

bulk_read_blobs(digests: List[Digest]) → Dict[str, bytes]

Given an iterable container of digests, return a {hash: bytes} dictionary corresponding to the blobs represented by the input digests. Each value holds the blob's complete contents as a bytes object.
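The result shape can be sketched as a dict keyed by digest hash; this sketch assumes digests not found in storage are simply absent from the result, and uses (hash, size) tuples as stand-ins for Digest messages.

```python
import hashlib

# Hypothetical sketch of bulk_read_blobs(): map each found digest's hash
# to its contents; assumes absent digests are omitted from the result.
def bulk_read_blobs(digests, store):
    return {h: store[h] for (h, _size) in digests if h in store}

data = b"contents"
h = hashlib.sha256(data).hexdigest()
store = {h: data}
result = bulk_read_blobs([(h, len(data)), ("0" * 64, 0)], store)
assert result == {h: data}  # the missing digest does not appear
```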