buildgrid.server.cas.storage.s3 module¶
S3Storage¶
A storage provider that stores data in an Amazon S3 bucket.
class buildgrid.server.cas.storage.s3.S3Storage(bucket, page_size=1000, **kwargs)¶
Bases: buildgrid.server.cas.storage.storage_abc.StorageABC
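A minimal construction sketch, assuming the storage only needs a bucket name and that blobs are identified by REAPI Digest messages (a content hash plus size in bytes). The bucket name, the make_digest helper, and the choice of SHA-256 are illustrative assumptions, not part of this page; the Digest import path follows the type name shown in the bulk_delete signature below and may differ depending on how the protos are packaged:

    import hashlib

    # Import path taken from the type name in the bulk_delete signature;
    # adjust it to wherever the REAPI protos live in your installation.
    from build.bazel.remote.execution.v2.remote_execution_pb2 import Digest
    from buildgrid.server.cas.storage.s3 import S3Storage

    # Hypothetical bucket name; extra keyword arguments, if any, would
    # configure the underlying S3 client.
    storage = S3Storage("my-cas-bucket")

    def make_digest(data: bytes) -> Digest:
        # REAPI digests pair a content hash with the blob size;
        # SHA-256 is assumed here as the digest function.
        return Digest(hash=hashlib.sha256(data).hexdigest(),
                      size_bytes=len(data))

The make_digest helper and storage object are reused in the sketches below.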
has_blob(digest)¶
Return True if the blob with the given instance/digest exists.
get_blob(digest)¶
Return a file-like object containing the blob. Most implementations will read the entire file into memory and return a BytesIO object. Eventually this should be corrected to handle files which cannot fit into memory.
The file-like object must be readable and seekable.
If the blob isn’t present in storage, return None.
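A minimal read-path sketch, reusing the storage object and the hypothetical make_digest helper from the construction example above:

    data = b"hello, CAS"
    digest = make_digest(data)

    if storage.has_blob(digest):
        blob = storage.get_blob(digest)  # readable, seekable file-like object
        contents = blob.read()
    else:
        contents = None  # get_blob would also return None for a missing blob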
delete_blob(digest)¶
Delete the blob from storage if it’s present.
bulk_delete(digests: List[build.bazel.remote.execution.v2.remote_execution_pb2.Digest]) → List[str]¶
Delete a list of blobs from storage.
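A deletion sketch under the same assumptions as above. Treating the returned strings as descriptions of blobs that could not be deleted is an assumption; this page only states the return type:

    # Deleting a single blob is a no-op if the blob is absent.
    storage.delete_blob(digest)

    # Bulk deletion returns List[str]; assumed here to describe failures.
    failed = storage.bulk_delete([digest])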
begin_write(digest)¶
Return a file-like object to which a blob’s contents could be written.
commit_write(digest, write_session)¶
Commit the write operation. write_session must be an object returned by begin_write.
The storage object is not responsible for verifying that the data written to the write_session actually matches the digest. The caller must do that.
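A minimal write-path sketch, again reusing the hypothetical make_digest helper. Because the storage does not verify the data against the digest, the caller writes exactly the bytes the digest describes:

    data = b"hello, CAS"
    digest = make_digest(data)

    # begin_write returns a file-like object bound to this blob.
    session = storage.begin_write(digest)
    session.write(data)

    # commit_write finalises the upload; matching data to digest is the
    # caller's responsibility.
    storage.commit_write(digest, session)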
is_cleanup_enabled()¶
missing_blobs(digests)¶
Return a container of the digests whose blobs are not present in CAS.
bulk_update_blobs(blobs)¶
Given a container of (digest, value) tuples, add all the blobs to CAS. Return a list of Status objects corresponding to the result of uploading each of the blobs.
Unlike in commit_write, the storage object will verify that each of the digests matches the provided data.
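A sketch of the usual upload flow: query missing_blobs first, then batch-upload only what is absent. Interpreting the returned Status objects as google.rpc Status messages whose code can be compared against OK is an assumption:

    from google.rpc import code_pb2

    candidates = [(make_digest(b"a"), b"a"), (make_digest(b"b"), b"b")]

    # Ask the CAS which of these digests it does not already have.
    missing = storage.missing_blobs([digest for digest, _ in candidates])
    missing_hashes = {digest.hash for digest in missing}
    to_upload = [(digest, data) for digest, data in candidates
                 if digest.hash in missing_hashes]

    # Upload only the missing blobs; each entry gets a corresponding Status.
    statuses = storage.bulk_update_blobs(to_upload)
    all_ok = all(status.code == code_pb2.OK for status in statuses)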
bulk_read_blobs(digests)¶
Given an iterable container of digests, return a {hash: file-like object} dictionary corresponding to the blobs represented by the input digests.
Each file-like object must be readable and seekable.
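A batched-read sketch; the assumption that digests missing from storage are simply omitted from the returned dictionary is not stated on this page:

    digests = [make_digest(b"a"), make_digest(b"b")]

    blob_map = storage.bulk_read_blobs(digests)  # {hash: file-like object}
    for digest in digests:
        blob = blob_map.get(digest.hash)
        if blob is not None:
            contents = blob.read()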