Züs Cache Preliminary Datasheet
This section introduces Züs Cache, the AWS-hosted deployment of the Züs storage platform available via the AWS Marketplace. It provides an end-to-end overview of the product’s architecture, key capabilities, configuration parameters, and benchmark results.
Züs Cache enables users to deploy a zero-trust, blockchain-secured storage cluster within their own AWS account, offering scalability, data ownership, and encrypted collaboration.
This datasheet serves as a technical reference for solution architects, DevOps engineers, and IT administrators who wish to understand the capabilities, performance characteristics, and operational parameters of Züs Cache on AWS before provisioning or onboarding the service.
High Level Architecture
ZS3Server – a gateway interface server that allows users to interact with the Züs storage cluster using standard S3-compatible APIs (see the client sketch after Figure 1). Check out the Github here → https://github.com/0chain/zs3server
Blobber – a storage server that can mount EBS volumes or connect to an EFS file system. Check out the Github here → https://github.com/0chain/eblobber

Figure 1: Züs storage architecture of a single cluster on AWS
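Because ZS3Server exposes standard S3-compatible APIs, any S3 client can target the cluster once it is pointed at the gateway endpoint. The sketch below uses the AWS CLI; the endpoint placeholder and the rootroot credentials are taken from the benchmark setup later in this datasheet and should be replaced with your own deployment's values.

# Point the AWS CLI at the ZS3Server gateway instead of AWS S3 (example credentials).
aws configure set aws_access_key_id rootroot
aws configure set aws_secret_access_key rootroot

# Create a bucket, then upload and download an object through the S3-compatible gateway.
aws s3 mb s3://demo --endpoint-url http://<zs3server-ip:port>
aws s3 cp ./report.pdf s3://demo/report.pdf --endpoint-url http://<zs3server-ip:port>
aws s3 cp s3://demo/report.pdf ./report-copy.pdf --endpoint-url http://<zs3server-ip:port>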
Recommended Allocation Parameters
| Settings | Description | Value | Notes |
| --- | --- | --- | --- |
| Data/Parity (erasure code) | Data and parity shards | 4 data, 1 parity or 8 data, 1 parity | Select 8/1 for higher throughput, lower latency; 4/1 for lower cost |
| EBS | Provisioned storage | 64 TB or 128 TB | Example: 4 data × 16 TB EBS volume = 64 TB for a 4/1 cluster |
| EFS | Usage-based storage | Unlimited | Automatically tiers to EFS-IA or Archive |
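For rough capacity planning, the raw blobber storage behind an allocation scales with (data + parity) / data, consistent with the EBS example above (four 16 TB data volumes plus one 16 TB parity volume for a 4/1 cluster). A small sketch of that arithmetic, with illustrative numbers:

# Estimated raw storage = data-shard capacity × (data + parity) / data  (erasure-code overhead)
usable_tb=64; data=4; parity=1
echo $(( usable_tb * (data + parity) / data ))   # 80 TB raw for a 4/1 cluster with 64 TB of data shards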
Performance Results
The objective is to benchmark single-node Züs cluster performance for different data/parity combinations and object sizes. These configurations and DevOps processes will be available on AWS Marketplace via Blimp.Software soon.
Test Setup
Eblobber
HW/instance type: c6i.2xlarge
Storage: Mounted EBS (gp3): 500 GiB; IOPS 3000; Throughput 1000 MiB/s

ZS3Server
Version used: https://github.com/0chain/zs3server/pull/177
HW/instance type: c5n.18xlarge
zs3server.json file:
{
  "encrypt": false,
  "compress": false,
  "max_batch_size": 40,
  "batch_wait_time": 70,
  "batch_workers": 72,
  "upload_workers": 72,
  "download_workers": 72,
  "max_concurrent_requests": 400
}

Allocation – created through zboxcli. Check out the Github here → https://github.com/0chain/zboxcli
Data/parity: 4/1 and 8/1
Size: 800 GB
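The 800 GB benchmark allocations were created with zboxcli. Below is a sketch of the corresponding command; the flag names and the --lock amount are assumptions, so check ./zbox newallocation --help in your zboxcli version for the exact options.

# Create an 800 GB allocation with 4 data and 1 parity shards (example values; flags assumed).
./zbox newallocation --data 4 --parity 1 --size 800000000000 --lock 10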
Test Procedure
Warp was used to benchmark performance. Instructions can be found here: https://docs.zus.network/zus-devops/zus-network-testing/performance-test/zs3server-warp-benchmark-testing
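For reference, one way to obtain the warp binary is via the Go toolchain (an assumption; prebuilt releases and the recommended setup are covered in the instructions linked above).

# Install MinIO's warp benchmarking tool (requires a recent Go toolchain).
go install github.com/minio/warp@latest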
To generate benchmark data, each test was run three times in a row. The commands used were as follows:
./warp put --host <zs3server-ip:port> --access-key rootroot --secret-key rootroot \
--bucket benchmark --duration 3m --concurrent 16 --obj.size 128MB
./warp get --host <zs3server-ip:port> --access-key rootroot --secret-key rootroot \
--bucket benchmark --duration 3m --concurrent 16 --obj.size 128MB --objects 32
./warp put --host <zs3server-ip:port> --access-key rootroot --secret-key rootroot \
--bucket benchmark --duration 3m --concurrent 16 --obj.size 512MB
./warp get --host <zs3server-ip:port> --access-key rootroot --secret-key rootroot \
--bucket benchmark --duration 3m --concurrent 16 --obj.size 512MB --objects 32

Warp Results
Results are generated based on the Warp configuration above.
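Warp prints a summary at the end of each run and also saves the raw request data, so runs can be re-analyzed later. A sketch, assuming warp's default compressed output files (check the warp documentation for the exact file naming):

# Re-analyze saved benchmark data from a previous warp run (file names are placeholders).
./warp analyze warp-get-<timestamp>.csv.zst
./warp analyze warp-put-<timestamp>.csv.zst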
GET

| Metric | 4/1 (128 MB) | 4/1 (512 MB) | 8/1 (128 MB) | 8/1 (512 MB) |
| --- | --- | --- | --- | --- |
| Throughput (GiB/s) | 2.5 | 2.5 | 4.6 | 4.6 |
| Latency (s) | 0.8 | 3.1 | 0.4 | 1.7 |
| TTFB (s) | 0.3 | 1.0 | 0.2 | 0.4 |
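As a rough sanity check, GET throughput is consistent with concurrency × object size ÷ latency for this closed-loop workload. For the 4/1, 128 MB case (treating MB and MiB as approximately equal):

# 16 concurrent requests × 128 MB ÷ 0.8 s ≈ 2.5 GiB/s, in line with the measured value.
echo "scale=1; 16 * 128 / 0.8 / 1024" | bc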
PUT

| Metric | 4/1 (128 MB) | 4/1 (512 MB) | 8/1 (128 MB) | 8/1 (512 MB) |
| --- | --- | --- | --- | --- |
| Throughput (GiB/s) | 1.6 | 1.9 | 2.1 | 2.4 |
| Latency (s) | 1.2 | 4.0 | 0.9 | 3.3 |