buildgrid.server.cas.storage.s3 module
S3Storage
A storage provider that stores data in an Amazon S3 bucket.
- class buildgrid.server.cas.storage.s3.HeadObjectResult(digest: buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2.Digest, version_id: str | None, last_modified: datetime.datetime, size: int)
Bases: object
- digest: Digest
- version_id: str | None
- last_modified: datetime
- size: int
- buildgrid.server.cas.storage.s3.publish_s3_object_metrics(s3_objects: list[buildgrid.server.cas.storage.s3.HeadObjectResult]) → None
- buildgrid.server.cas.storage.s3.s3_date_to_datetime(datetime_string: str) → datetime
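These helpers are used together when gathering object metadata: HeadObjectResult bundles the digest, S3 version, last-modified timestamp, and size of a stored object, and publish_s3_object_metrics emits metrics for a batch of such results. A minimal sketch, not taken from the BuildGrid sources, assuming a plain boto3 client, a hypothetical bucket name, and digest-hash object keys (real deployments may add the hash/path prefixes configured on S3Storage):

```python
import hashlib

import boto3

from buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2 import Digest
from buildgrid.server.cas.storage.s3 import HeadObjectResult, publish_s3_object_metrics

s3 = boto3.client("s3")
bucket = "my-cas-bucket"              # hypothetical bucket name

results = []
for data in (b"first blob", b"second blob"):
    digest = Digest(hash=hashlib.sha256(data).hexdigest(), size_bytes=len(data))
    # Assumed key layout: objects keyed directly by digest hash.
    response = s3.head_object(Bucket=bucket, Key=digest.hash)
    results.append(
        HeadObjectResult(
            digest=digest,
            version_id=response.get("VersionId"),    # only set on versioned buckets
            last_modified=response["LastModified"],  # boto3 returns a parsed datetime here
            size=response["ContentLength"],
        )
    )

# s3_date_to_datetime() is presumably intended for parsing a raw S3 date
# string when only the textual header value is available (assumption).
publish_s3_object_metrics(results)
```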
- class buildgrid.server.cas.storage.s3.S3Storage(bucket: str, page_size: int = 1000, s3_read_timeout_seconds_per_kilobyte: float | None = None, s3_write_timeout_seconds_per_kilobyte: float | None = None, s3_read_timeout_min_seconds: float = 120, s3_write_timeout_min_seconds: float = 120, s3_versioned_deletes: bool = False, s3_hash_prefix_size: int | None = None, s3_path_prefix_string: str | None = None, **kwargs: Any)
Bases: StorageABC
- TYPE: str = 'S3'
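A minimal construction sketch, using only parameters shown in the signature above; the bucket name is hypothetical, and the comments about what the tuning parameters control are assumptions rather than documented behaviour. Extra keyword arguments (**kwargs) are accepted by the constructor and are presumably used to configure the underlying S3 client, but that too is an assumption.

```python
from buildgrid.server.cas.storage.s3 import S3Storage

storage = S3Storage(
    bucket="my-cas-bucket",           # hypothetical bucket name
    page_size=1000,                   # batch size for paginated S3 calls (assumed)
    s3_read_timeout_min_seconds=120,
    s3_write_timeout_min_seconds=120,
    s3_versioned_deletes=False,
    s3_hash_prefix_size=2,            # e.g. "ab/abcdef..." style keys (assumed layout)
    s3_path_prefix_string=None,
)
```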
- has_blob(digest: Digest) → bool
Return True if the blob with the given instance/digest exists.
- get_blob(digest: Digest) → IO[bytes] | None
Return a file-like object containing the blob. Most implementations will read the entire file into memory and return a BytesIO object. Eventually this should be corrected to handle files which cannot fit into memory.
The file-like object must be readable and seekable.
If the blob isn’t present in storage, return None.
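A short usage sketch for the two lookup methods; `storage` is the S3Storage instance constructed in the earlier example and the payload is illustrative.

```python
import hashlib

from buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2 import Digest

data = b"hello, CAS"
digest = Digest(hash=hashlib.sha256(data).hexdigest(), size_bytes=len(data))

if storage.has_blob(digest):
    blob = storage.get_blob(digest)   # readable, seekable file-like object
    assert blob is not None
    contents = blob.read()
else:
    contents = None                   # the blob is not stored in the bucket
```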
- delete_blob(digest: Digest) → None
Delete the blob from storage if it’s present.
- bulk_delete(digests: list[buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2.Digest]) → list[str]
Delete a list of blobs from storage.
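A sketch of single and bulk deletion; `storage` is the S3Storage instance from the earlier example. Interpreting bulk_delete's return value as a list of error descriptions for deletions that failed is an assumption based on the signature, not something documented in this section.

```python
import hashlib

from buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2 import Digest

digests = [
    Digest(hash=hashlib.sha256(data).hexdigest(), size_bytes=len(data))
    for data in (b"first blob", b"second blob")
]

storage.delete_blob(digests[0])       # deletes the blob if it is present

errors = storage.bulk_delete(digests)
if errors:
    print("some deletions did not succeed:", errors)
```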
- commit_write(digest: Digest, write_session: IO[bytes]) → None
Store the contents for a digest.
The storage object is not responsible for verifying that the data written to the write_session actually matches the digest. The caller must do that.
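A sketch of committing a write. Passing an io.BytesIO buffer directly as the write session is an assumption made for illustration; the storage ABC may hand out its own write sessions elsewhere, which this section does not show. As noted above, the caller remains responsible for ensuring the data matches the digest.

```python
import hashlib
import io

from buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2 import Digest

data = b"payload to store"
digest = Digest(hash=hashlib.sha256(data).hexdigest(), size_bytes=len(data))

# An in-memory buffer used as the write session (assumption, see above).
storage.commit_write(digest, io.BytesIO(data))
```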
- missing_blobs(digests: list[buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2.Digest]) → list[buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2.Digest]
Return a list of the digests whose blobs are not present in CAS (see the combined sketch after bulk_update_blobs below).
- bulk_update_blobs(blobs: list[tuple[buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2.Digest, bytes]]) → list[buildgrid._protos.google.rpc.status_pb2.Status]
Given a container of (digest, value) tuples, add all the blobs to CAS. Return a list of Status objects corresponding to the result of uploading each of the blobs.
Unlike in commit_write, the storage object will verify that each of the digests matches the provided data.
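A sketch of the usual "find what is missing, then upload only that" flow, combining missing_blobs with bulk_update_blobs; `storage` is the earlier S3Storage instance and the payloads are illustrative.

```python
import hashlib

from buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2 import Digest

payloads = {
    hashlib.sha256(data).hexdigest(): data
    for data in (b"blob one", b"blob two", b"blob three")
}
digests = [Digest(hash=h, size_bytes=len(d)) for h, d in payloads.items()]

missing = storage.missing_blobs(digests)

# Upload only the blobs the CAS does not already have; each returned Status
# describes the outcome of the corresponding upload.
statuses = storage.bulk_update_blobs([(d, payloads[d.hash]) for d in missing])
for d, status in zip(missing, statuses):
    if status.code != 0:              # google.rpc code 0 means OK
        print(f"upload of {d.hash} failed: {status.message}")
```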
- bulk_read_blobs(digests: list[buildgrid._protos.build.bazel.remote.execution.v2.remote_execution_pb2.Digest]) → dict[str, bytes]
Given an iterable container of digests, return a {hash: contents} dictionary corresponding to the blobs represented by the input digests, with each blob's contents returned as bytes.
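A sketch of a bulk read, continuing from the digests built in the previous sketch; per the signature, the result maps blob hashes to their contents as bytes. Treating digests that are not found as simply absent from the returned dictionary is an assumption.

```python
found = storage.bulk_read_blobs(digests)
for d in digests:
    data = found.get(d.hash)
    if data is None:
        print(f"{d.hash} is not present in the CAS")
    else:
        print(f"{d.hash}: {len(data)} bytes")
```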