buildgrid.server.cas.storage.index.redis module
A storage provider that uses Redis to maintain existence and expiry metadata for a storage backend.
- class buildgrid.server.cas.storage.index.redis.RedisIndex(redis: RedisProvider, storage: StorageABC, prefix: str | None = None)
Bases:
IndexABC
- TYPE: str = 'RedisIndex'
- start() None
- stop() None
- has_blob(digest: Digest) bool
Return True if the blob with the given instance/digest exists.
- get_blob(digest: Digest) IO[bytes] | None
Return a file-like object containing the blob. Most implementations will read the entire file into memory and return a BytesIO object. Eventually this should be corrected to handle files that cannot fit into memory.
The file-like object must be readable and seekable.
If the blob isn’t present in storage, return None.
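The contract above (readable, seekable, None when absent) can be illustrated with an in-memory stand-in. This is a sketch, not the real implementation; the Digest class below is a hypothetical stand-in for the REAPI proto message with `hash` and `size_bytes` fields:

```python
import io
from dataclasses import dataclass

@dataclass(frozen=True)
class Digest:
    # Hypothetical stand-in for the REAPI Digest proto message.
    hash: str
    size_bytes: int

class InMemoryStorage:
    """Illustrative stand-in honouring the get_blob contract."""
    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, digest: Digest, data: bytes) -> None:
        self._blobs[digest.hash] = data

    def get_blob(self, digest: Digest):
        data = self._blobs.get(digest.hash)
        # Return a readable, seekable file-like object, or None if absent.
        return io.BytesIO(data) if data is not None else None

storage = InMemoryStorage()
d = Digest("abc123", 5)
storage.put(d, b"hello")
blob = storage.get_blob(d)
assert blob.read() == b"hello"
blob.seek(0)                      # seekable, per the contract
assert blob.read(2) == b"he"
assert storage.get_blob(Digest("missing", 0)) is None
```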
- delete_blob(digest: Digest) None
Delete the blob from storage if it’s present.
- commit_write(digest: Digest, write_session: IO[bytes]) None
Store the contents for a digest.
The storage object is not responsible for verifying that the data written to the write_session actually matches the digest. The caller must do that.
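Since the storage does not verify that the written data matches the digest, a caller-side check might look like the following sketch. The Digest class and `verify_session` helper are hypothetical; the digest fields follow the REAPI convention of a content hash (SHA-256 here, by assumption) plus a size in bytes:

```python
import hashlib
import io
from dataclasses import dataclass

@dataclass(frozen=True)
class Digest:
    # Hypothetical stand-in for the REAPI Digest proto message.
    hash: str        # hex digest of the content (SHA-256 assumed)
    size_bytes: int  # content length in bytes

def verify_session(digest: Digest, write_session: io.BytesIO) -> bool:
    """Caller-side check that the session contents match the digest."""
    write_session.seek(0)
    data = write_session.read()
    write_session.seek(0)  # leave the session rewound for commit_write
    return (len(data) == digest.size_bytes
            and hashlib.sha256(data).hexdigest() == digest.hash)

payload = b"example blob"
good = Digest(hashlib.sha256(payload).hexdigest(), len(payload))
bad = Digest("0" * 64, len(payload))
session = io.BytesIO(payload)
assert verify_session(good, session)
assert not verify_session(bad, session)
# Only after this check passes should the caller call commit_write.
```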
- bulk_delete(digests: list[Digest]) list[str]
Delete a list of blobs from storage.
- missing_blobs(digests: list[Digest]) list[Digest]
Return a list of the digests whose blobs are not present in CAS.
- bulk_update_blobs(blobs: list[tuple[Digest, bytes]]) list[Status]
Given a container of (digest, value) tuples, add all the blobs to CAS. Return a list of Status objects corresponding to the result of uploading each of the blobs.
The storage object is not responsible for verifying that the data for each blob actually matches the digest. The caller must do that.
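Preparing the (digest, value) tuples on the caller side might look like the sketch below. The Digest class and `make_blob_tuples` helper are hypothetical, and SHA-256 is assumed as the hashing scheme:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Digest:
    # Hypothetical stand-in for the REAPI Digest proto message.
    hash: str
    size_bytes: int

def make_blob_tuples(values: list[bytes]) -> list[tuple[Digest, bytes]]:
    """Build the (digest, value) pairs expected by bulk_update_blobs."""
    return [(Digest(hashlib.sha256(v).hexdigest(), len(v)), v)
            for v in values]

blobs = make_blob_tuples([b"one", b"two"])
assert blobs[0][0].size_bytes == 3
assert blobs[0][0].hash == hashlib.sha256(b"one").hexdigest()
```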
- bulk_read_blobs(digests: list[Digest]) dict[str, bytes]
Given an iterable container of digests, return a {hash: blob data} dictionary mapping each hash to the bytes of the corresponding blob.
- least_recent_digests() Iterator[Digest]
Generator that iterates through the digests in least-recently-used (LRU) order.
- get_total_size() int
Return the sum of the sizes of all blobs within the index.
- get_blob_count() int
Return the number of blobs within the index.
- delete_n_bytes(n_bytes: int, dry_run: bool = False, protect_blobs_after: datetime | None = None, large_blob_threshold: int | None = None, large_blob_lifetime: datetime | None = None) int
Iterate through the Redis index using SCAN and delete any entries older than protect_blobs_after. The ordering of the deletions is undefined and cannot be assumed to be LRU. Large blobs can optionally be configured to have a separate lifetime.
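The policy described above (delete entries older than the cutoff, stopping once roughly n_bytes have been freed, with a separate lifetime for large blobs) can be sketched against a plain dict standing in for the Redis index. Everything here is illustrative, including the entry layout of hash to (size, last-accessed timestamp):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory index: hash -> (size_bytes, last_accessed).
now = datetime(2024, 1, 1, tzinfo=timezone.utc)
index = {
    "a": (100, now - timedelta(days=10)),
    "b": (5000, now - timedelta(days=2)),
    "c": (100, now - timedelta(hours=1)),
}

def delete_n_bytes(index, n_bytes, protect_blobs_after=None,
                   large_blob_threshold=None, large_blob_lifetime=None):
    """Sketch of the expiry policy; like SCAN, the iteration order
    here should not be relied upon to be LRU."""
    freed = 0
    for key, (size, accessed) in list(index.items()):
        if freed >= n_bytes:
            break
        cutoff = protect_blobs_after
        # Large blobs may use a separate (typically shorter) lifetime.
        if (large_blob_threshold is not None and size > large_blob_threshold
                and large_blob_lifetime is not None):
            cutoff = large_blob_lifetime
        if cutoff is None or accessed < cutoff:
            freed += size
            del index[key]
    return freed

freed = delete_n_bytes(index, 150,
                       protect_blobs_after=now - timedelta(days=7),
                       large_blob_threshold=1000,
                       large_blob_lifetime=now - timedelta(days=1))
assert set(index) == {"c"}  # "c" is newer than both cutoffs
assert freed == 5100        # "a" (100) + large blob "b" (5000)
```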