Architecture and Data Management
Zus storage operates on a distributed and decentralized model, ensuring high availability, fault tolerance, and optimized performance.
This document outlines how writes, reads, data repair, rollback mechanisms, file size constraints, encryption, and performance-based selection work within the Zus storage ecosystem.
1. Write Process – Data+1 Consensus
To ensure data reliability and durability, Zus enforces a data+1 consensus requirement for every write operation:
A write is considered successful only when all required data shards have been stored and at least one additional blobber has acknowledged the write.
This extra redundancy enhances fault tolerance and ensures that even if a blobber fails during the write process, the data can still be reconstructed.
This approach guarantees strong data consistency and durability while minimizing risks of partial writes.
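As a minimal sketch of this rule (the function and variable names below are illustrative assumptions, not the Zus SDK API), the consensus check reduces to comparing acknowledgements against the data-shard count plus one:

```go
package main

import "fmt"

// writeSucceeds reports whether a write has met the data+1 threshold:
// every data shard stored, plus one extra blobber acknowledgement.
// Names here are illustrative, not the Zus SDK API.
func writeSucceeds(acks, dataShards int) bool {
	return acks >= dataShards+1
}

func main() {
	// With 4 data shards, at least 5 acknowledgements are required.
	fmt.Println(writeSucceeds(4, 4)) // false: only the data shards confirmed
	fmt.Println(writeSucceeds(5, 4)) // true: data shards + 1 extra blobber
}
```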
2. Read Process – Data Shard Requirement
Zus follows an erasure coding model, where reads require retrieving only the minimum number of data shards needed to reconstruct a file:
The exact number of required shards depends on the erasure coding scheme chosen by the user during allocation setup.
Clients can tune redundancy and performance based on their desired balance between storage efficiency and fault tolerance.
This approach optimizes bandwidth usage and download speed while ensuring that files remain accessible even if some blobbers go offline.
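To illustrate the principle, the sketch below uses the open-source klauspost/reedsolomon library rather than the Zus SDK itself: with 4 data and 2 parity shards, any 4 of the 6 shards are enough to rebuild the file:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	// 4 data shards + 2 parity shards: any 4 of the 6 reconstruct the file.
	enc, err := reedsolomon.New(4, 2)
	if err != nil {
		panic(err)
	}

	original := []byte("example file contents distributed across six blobbers")

	shards, err := enc.Split(original) // split into data shards (last one padded)
	if err != nil {
		panic(err)
	}
	if err := enc.Encode(shards); err != nil { // compute the parity shards
		panic(err)
	}

	// Simulate two blobbers going offline.
	shards[1], shards[5] = nil, nil
	if err := enc.Reconstruct(shards); err != nil {
		panic(err)
	}

	var out bytes.Buffer
	if err := enc.Join(&out, shards, len(original)); err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(out.Bytes(), original)) // true
}
```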
3. Automatic Data Repair – Self-Healing Mechanism
To maintain long-term data durability, Zus includes an automated repair mechanism:
If blobbers holding shards go offline or become unreachable, the system automatically reconstructs the lost shards from those that remain.
The missing shards are redistributed to healthy blobbers, ensuring the allocation remains intact.
This self-healing mechanism prevents permanent data loss, keeping the storage network highly available and resilient.
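A minimal sketch of the repair loop follows, with the Blobber type and the rebuild callback as illustrative stand-ins for the real erasure-decoding logic:

```go
package main

import "fmt"

// Blobber models a storage node; the fields are illustrative, not the
// actual Zus data structures.
type Blobber struct {
	ID      string
	Healthy bool
	Shard   []byte
}

// repairAllocation sketches the self-healing loop: find blobbers whose
// shards are lost, rebuild each shard from the surviving ones (the
// erasure decoding is stubbed out via the rebuild callback), and place
// it on a healthy replacement node.
func repairAllocation(blobbers []Blobber, rebuild func(missing int) []byte) {
	for i, b := range blobbers {
		if b.Healthy {
			continue
		}
		shard := rebuild(i) // reconstruct from the remaining shards
		blobbers[i] = Blobber{ID: b.ID + "-replacement", Healthy: true, Shard: shard}
		fmt.Printf("repaired shard %d onto %s\n", i, blobbers[i].ID)
	}
}

func main() {
	blobbers := []Blobber{
		{ID: "b0", Healthy: true, Shard: []byte("s0")},
		{ID: "b1", Healthy: false}, // offline: its shard must be rebuilt
	}
	repairAllocation(blobbers, func(int) []byte { return []byte("rebuilt") })
}
```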
4. Rollback Mechanism – Handling Partial Commits
To prevent data inconsistency during failed writes, Zus employs an atomic rollback mechanism:
If a write operation partially succeeds (i.e., some blobbers accept the data while others fail), the system reverts to the last known consistent state.
This ensures strong consistency guarantees, preventing corruption, orphaned data, or incomplete writes.
Users are assured that their stored data always reflects a coherent and valid state, regardless of network failures.
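A minimal sketch of the rollback flow, assuming an illustrative blobber interface rather than the actual SDK types: if any blobber rejects a commit, every blobber that already accepted it is reverted to the previous allocation root:

```go
package main

import "fmt"

// blobber is an illustrative interface, not the actual SDK type.
type blobber interface {
	Commit(root string) error
	Rollback(prevRoot string)
}

// commitAll sketches the atomic commit: if any blobber rejects the
// write, every blobber that already accepted it is reverted to the
// previous allocation root, restoring the last consistent state.
func commitAll(blobbers []blobber, newRoot, prevRoot string) error {
	var accepted []blobber
	for _, b := range blobbers {
		if err := b.Commit(newRoot); err != nil {
			for _, a := range accepted {
				a.Rollback(prevRoot) // undo the partial write
			}
			return fmt.Errorf("write rolled back: %w", err)
		}
		accepted = append(accepted, b)
	}
	return nil
}

type fakeBlobber struct {
	id   string
	fail bool
}

func (f *fakeBlobber) Commit(root string) error {
	if f.fail {
		return fmt.Errorf("%s unreachable", f.id)
	}
	fmt.Printf("%s committed %s\n", f.id, root)
	return nil
}

func (f *fakeBlobber) Rollback(prev string) {
	fmt.Printf("%s rolled back to %s\n", f.id, prev)
}

func main() {
	bs := []blobber{&fakeBlobber{id: "b0"}, &fakeBlobber{id: "b1", fail: true}}
	fmt.Println(commitAll(bs, "root-v2", "root-v1"))
}
```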
5. File Count and File Size Limits
Zus offers flexibility in terms of file storage:
There are no hard limits on the number of files within an allocation.
The maximum file size is capped by the allocation size itself.
If an allocation is 1000 GB, a single file can be as large as 1000 GB, provided no other files consume space.
This model enables large-scale data storage, allowing users to store and manage massive files efficiently.
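The constraint reduces to simple arithmetic, sketched below with illustrative names (not the SDK API):

```go
package main

import "fmt"

// maxUploadSize sketches the only size constraint: a single file may
// occupy whatever space remains in the allocation. Names are
// illustrative, not the SDK API.
func maxUploadSize(allocationGB, usedGB int64) int64 {
	return allocationGB - usedGB
}

func main() {
	fmt.Println(maxUploadSize(1000, 0))   // 1000: one file can fill the allocation
	fmt.Println(maxUploadSize(1000, 250)) // 750: other files already use 250 GB
}
```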
6. Encryption and Secure File Sharing
Zus supports end-to-end encryption (E2EE) to protect data at rest and in transit:
Clients can encrypt their data before upload, ensuring that even storage providers (blobbers) cannot access raw content.
Proxy re-encryption technology allows users to securely share encrypted files without exposing their encryption keys.
This ensures confidentiality and controlled access, even when files are being shared between multiple users.
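The sketch below shows only the encrypt-before-upload step, using standard AES-GCM as a simplified stand-in; Zus's actual proxy re-encryption scheme additionally supports issuing re-encryption keys for sharing, which is not shown here:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptForUpload seals a file with AES-GCM before it leaves the
// client, so blobbers only ever see ciphertext. This is a simplified
// stand-in for Zus's proxy re-encryption, which additionally supports
// issuing re-encryption keys for sharing.
func encryptForUpload(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the owner can decrypt after download.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // toy key; a real client derives this from its wallet
	ciphertext, err := encryptForUpload(key, []byte("private file contents"))
	fmt.Println(len(ciphertext), err) // only this ciphertext is uploaded
}
```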
7. Data Verification – Merkle Tree Proofs
To maintain data integrity and verifiability, Zus utilizes Merkle Tree-based validation:
Each uploaded file is split into fixed-size chunks, with a Merkle Tree constructed over these chunks.
Each blobber stores its respective Merkle Tree, allowing efficient proof-of-storage verification.
During downloads, Merkle proofs are checked against the root hash stored on the blockchain:
Ensures no tampering or modification has occurred.
Guarantees that retrieved data matches the originally uploaded version.
This cryptographic auditability enhances trust in storage integrity.
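A minimal sketch of the scheme: build a binary Merkle tree over fixed-size chunks and compare the recomputed root with the on-chain root hash (the chunk size and hashing layout here are illustrative assumptions):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// merkleRoot builds a binary Merkle tree over the given chunks and
// returns the root hash. An odd node is promoted to the next level
// unchanged; real layouts vary, and this one is only illustrative.
func merkleRoot(chunks [][]byte) []byte {
	level := make([][]byte, len(chunks))
	for i, c := range chunks {
		h := sha256.Sum256(c)
		level[i] = h[:]
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i])
				continue
			}
			h := sha256.Sum256(append(level[i], level[i+1]...))
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	file := []byte("a file split into fixed-size chunks for upload")
	const chunkSize = 8

	var chunks [][]byte
	for i := 0; i < len(file); i += chunkSize {
		end := i + chunkSize
		if end > len(file) {
			end = len(file)
		}
		chunks = append(chunks, file[i:end])
	}

	onChainRoot := merkleRoot(chunks) // recorded at upload time
	// A downloader recomputes the root from the chunks it received and
	// compares it with the root hash stored on the blockchain.
	fmt.Println(bytes.Equal(merkleRoot(chunks), onChainRoot)) // true: untampered
}
```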
8. Performance-Based Blobber Selection
To optimize download performance, Zus dynamically selects the best-performing blobbers based on real-time response metrics:
During initial download requests, the system tracks blobber response times.
Clients prioritize faster-performing blobbers for subsequent read operations.
This approach balances speed, redundancy, and reliability, ensuring users receive the best available performance.
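A minimal sketch of the selection step, with illustrative types rather than the SDK's internal bookkeeping: record per-blobber latency, then rank fastest-first for subsequent reads:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// blobberStats records response times observed on earlier requests;
// the type is illustrative, not the SDK's internal bookkeeping.
type blobberStats struct {
	ID      string
	Latency time.Duration
}

// rankBlobbers orders blobbers fastest-first so subsequent reads
// request shards from the best performers.
func rankBlobbers(stats []blobberStats) []blobberStats {
	sort.Slice(stats, func(i, j int) bool { return stats[i].Latency < stats[j].Latency })
	return stats
}

func main() {
	ranked := rankBlobbers([]blobberStats{
		{"b0", 120 * time.Millisecond},
		{"b1", 40 * time.Millisecond},
		{"b2", 75 * time.Millisecond},
	})
	fmt.Println(ranked[0].ID) // b1: the fastest blobber is tried first
}
```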
9. Batch Processing of Write Markers
Each data commit generates a Write Marker, a cryptographic proof of successful storage:
The Write Marker includes:
Client ID
Blobber ID
Allocation ID
Size of data committed
Allocation root hash (post-commit)
Timestamp and signature
These Write Markers serve as proof-of-storage, ensuring blobbers are eligible for payment.
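Expressed as an illustrative Go struct (field names and types are assumptions, not the exact on-chain schema), a Write Marker looks like this:

```go
package storage

// WriteMarker mirrors the fields listed above; names and types are
// illustrative, not the exact on-chain schema.
type WriteMarker struct {
	ClientID       string // who wrote the data
	BlobberID      string // which blobber stored it
	AllocationID   string // the allocation the write belongs to
	Size           int64  // bytes committed in this write
	AllocationRoot string // allocation root hash after the commit
	Timestamp      int64  // when the commit happened
	Signature      string // client signature over the fields above
}
```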
Write Marker Chain – Batch Submission and Verification
To optimize blockchain interactions, Zus employs a Write Marker Chain mechanism:
Instead of submitting each Write Marker individually, blobbers batch multiple markers into a single Write Marker Chain.
Each batch contains:
A sequence of write markers.
A cumulative root hash of the allocation.
A batched root hash, computed by hashing all root hashes in the batch.
The blockchain verifies the integrity of the entire chain, ensuring:
Proof-of-storage validation for blobbers.
Reduced transaction costs by minimizing on-chain interactions.
Efficient cryptographic auditing, allowing individual verification of each write operation.
This batch submission approach significantly enhances scalability, performance, and economic efficiency for both users and storage providers.
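A minimal sketch of the batched root computation, assuming (for illustration only) that the per-commit root hashes are concatenated in order and hashed once:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// batchRootHash sketches the batched root: the per-commit allocation
// root hashes are hashed together in order, so one on-chain submission
// covers the whole chain. The exact concatenation and hash layout are
// assumptions for illustration.
func batchRootHash(rootHashes [][]byte) []byte {
	h := sha256.New()
	for _, r := range rootHashes {
		h.Write(r)
	}
	return h.Sum(nil)
}

func main() {
	r1 := sha256.Sum256([]byte("allocation root after commit 1"))
	r2 := sha256.Sum256([]byte("allocation root after commit 2"))
	fmt.Printf("%x\n", batchRootHash([][]byte{r1[:], r2[:]})) // submitted once for both commits
}
```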
10. Geolocation-Aware Allocation Strategy
Zus optimizes storage allocation based on geolocation diversity, ensuring:
Global redundancy by distributing shards across multiple regions.
Enhanced data availability, reducing risks of localized outages.
Optimized performance, ensuring reads and writes are routed to the closest, best-performing blobbers.
For reads, Zus prioritizes the best-performing blobbers based on latency, reliability, and response times, and minimizes data retrieval time by choosing blobbers best positioned for the user’s location.