Architecture and Data Management

Zus storage operates on a distributed and decentralized model, ensuring high availability, fault tolerance, and optimized performance.

This document outlines how writes, reads, data repair, rollback mechanisms, file size constraints, encryption, and performance-based selection work within the Zus storage ecosystem.

1. Write Process – Data+1 Consensus

To ensure data reliability and durability, Zus enforces a data+1 consensus requirement for every write operation:

  • A write is considered successful only when all required data shards have been stored and at least one additional blobber has acknowledged the write.

  • This extra redundancy enhances fault tolerance and ensures that even if a blobber fails during the write process, the data can still be reconstructed.

  • This approach guarantees strong data consistency and durability while minimizing risks of partial writes.
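
As a rough illustration, the data+1 rule reduces to a simple count over blobber acknowledgements. The sketch below uses a hypothetical `writeSucceeded` helper, not the gosdk API:

```go
package main

import "fmt"

// writeSucceeded applies the data+1 consensus rule: a write stands only if
// at least dataShards+1 blobbers acknowledge the commit.
func writeSucceeded(acks []bool, dataShards int) bool {
	count := 0
	for _, ok := range acks {
		if ok {
			count++
		}
	}
	return count >= dataShards+1
}

func main() {
	// 4 data shards: five acknowledgements out of six blobbers are enough.
	acks := []bool{true, true, true, true, true, false}
	fmt.Println(writeSucceeded(acks, 4)) // true
}
```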

2. Read Process – Data Shard Requirement

Zus follows an erasure coding model, where reads require retrieving only the minimum number of data shards needed to reconstruct a file:

  • The exact number of required shards depends on the erasure coding scheme chosen by the user during allocation setup.

  • Clients can tune redundancy and performance based on their desired balance between storage efficiency and fault tolerance.

  • This approach optimizes bandwidth usage and download speed while ensuring that files remain accessible even if some blobbers go offline.
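
To make the shard arithmetic concrete, here is a minimal Reed-Solomon sketch using the open-source klauspost/reedsolomon library (chosen for illustration; the 4 data / 2 parity split is an assumption, not a default): a file striped across six blobbers remains readable from any four shards.

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	const dataShards, parityShards = 4, 2
	enc, err := reedsolomon.New(dataShards, parityShards)
	if err != nil {
		log.Fatal(err)
	}

	// Split a file into 4 data shards, then compute 2 parity shards.
	file := []byte("example payload striped across six blobbers")
	shards, err := enc.Split(file)
	if err != nil {
		log.Fatal(err)
	}
	if err := enc.Encode(shards); err != nil {
		log.Fatal(err)
	}

	// Simulate two blobbers being offline: any 4 of the 6 shards suffice.
	shards[1], shards[5] = nil, nil
	if err := enc.ReconstructData(shards); err != nil {
		log.Fatal(err)
	}

	// Reassemble the original bytes from the recovered data shards.
	var out bytes.Buffer
	if err := enc.Join(&out, shards, len(file)); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.String())
}
```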

3. Automatic Data Repair – Self-Healing Mechanism

To maintain long-term data durability, Zus includes an automated repair mechanism:

  • If parity blobbers go offline or become unreachable, the system automatically reconstructs lost data shards.

  • The missing shards are redistributed to healthy blobbers, ensuring the allocation remains intact.

  • This self-healing mechanism prevents permanent data loss, keeping the storage network highly available and resilient.
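
Continuing the Reed-Solomon sketch above, a repair pass can be modeled as reconstructing whatever is missing and reporting which shard indices must be re-uploaded to healthy blobbers. `repairShards` is a hypothetical helper, not the actual repair protocol:

```go
package repair

import "github.com/klauspost/reedsolomon"

// repairShards rebuilds any missing data or parity shards in place and
// returns the indices that were missing, so the caller can re-upload those
// shards to healthy blobbers.
func repairShards(enc reedsolomon.Encoder, shards [][]byte) ([]int, error) {
	var missing []int
	for i, s := range shards {
		if len(s) == 0 {
			missing = append(missing, i)
		}
	}
	// Reconstruct restores both data and parity shards from the survivors.
	if err := enc.Reconstruct(shards); err != nil {
		return nil, err
	}
	return missing, nil
}
```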

4. Rollback Mechanism – Handling Partial Commits

To prevent data inconsistency during failed writes, Zus employs an atomic rollback mechanism:

  • If a write operation partially succeeds (i.e., some blobbers accept the data while others fail), the system reverts to the last known consistent state.

  • This ensures strong consistency guarantees, preventing corruption, orphaned data, or incomplete writes.

  • Users are assured that their stored data always reflects a coherent and valid state, regardless of network failures.
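
A minimal sketch of the idea, assuming a hypothetical `finalizeWrite` helper and a per-blobber `rollback` callback (the real protocol coordinates this through versioned allocation roots):

```go
package rollback

// commitResult is the outcome reported by one blobber for a write (illustrative).
type commitResult struct {
	BlobberID string
	OK        bool
}

// finalizeWrite applies the data+1 rule: if too few blobbers committed,
// every blobber that did commit is told to revert to the previous
// allocation root, so no partial state survives.
func finalizeWrite(results []commitResult, dataShards int, rollback func(blobberID string)) bool {
	ok := 0
	for _, r := range results {
		if r.OK {
			ok++
		}
	}
	if ok >= dataShards+1 {
		return true // commit stands
	}
	for _, r := range results {
		if r.OK {
			rollback(r.BlobberID) // revert partial commits to the last consistent root
		}
	}
	return false
}
```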

5. File Count and File Size Limits

Zus offers flexibility in terms of file storage:

  • There are no hard limits on the number of files within an allocation.

  • The maximum file size is capped by the allocation size itself.

    • If an allocation is 1000 GB, a single file can be as large as 1000 GB, provided no other files consume space.

  • This model enables large-scale data storage, allowing users to store and manage massive files efficiently.
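
The capacity arithmetic is straightforward; a hypothetical helper might look like this (real accounting would also cover parity overhead and metadata, which this ignores):

```go
package alloc

import "errors"

// maxUploadSize returns the largest single file that still fits in an
// allocation: the allocation size minus whatever is already used.
func maxUploadSize(allocationSize, usedSize int64) (int64, error) {
	if usedSize > allocationSize {
		return 0, errors.New("allocation over capacity")
	}
	return allocationSize - usedSize, nil
}
```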

6. Encryption and Secure File Sharing

Zus supports end-to-end encryption (E2EE) to protect data at rest and in transit:

  • Clients can encrypt their data before upload, ensuring that even storage providers (blobbers) cannot access raw content.

  • Proxy re-encryption technology allows users to securely share encrypted files without exposing their encryption keys.

  • This ensures confidentiality and controlled access, even when files are being shared between multiple users.
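
As a sketch of the client-side step only: the snippet below seals data with AES-256-GCM before upload, so blobbers only ever hold ciphertext. It does not model the proxy re-encryption scheme described above, which is what enables key-safe sharing.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptBeforeUpload seals a file with AES-256-GCM; the random nonce is
// prepended to the ciphertext so the owner can decrypt later.
func encryptBeforeUpload(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 32 bytes for AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	ct, err := encryptBeforeUpload(key, []byte("secret file contents"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("uploading %d bytes of ciphertext\n", len(ct))
}
```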

7. Data Verification – Merkle Tree Proofs

To maintain data integrity and verifiability, Zus utilizes Merkle Tree-based validation:

  • Each uploaded file is split into fixed-size chunks, with a Merkle Tree constructed over these chunks.

  • Each blobber stores its respective Merkle Tree, allowing efficient proof-of-storage verification.

  • During downloads, Merkle proofs are checked against the root hash stored on the blockchain:

    • Ensures no tampering or modification has occurred.

    • Guarantees that retrieved data matches the originally uploaded version.

  • This cryptographic auditability enhances trust in storage integrity.
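
The verification idea in miniature: chunk the file, hash the chunks into a binary Merkle tree, and compare recomputed roots. This is a generic SHA-256 construction for illustration; the exact chunking and tree format used on-chain may differ.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// merkleRoot hashes fixed-size chunks into leaves, then pairs hashes level
// by level (duplicating the last node on odd-sized levels) up to the root.
func merkleRoot(data []byte, chunkSize int) []byte {
	var level [][]byte
	for i := 0; i < len(data); i += chunkSize {
		end := i + chunkSize
		if end > len(data) {
			end = len(data)
		}
		h := sha256.Sum256(data[i:end])
		level = append(level, h[:])
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			j := i + 1
			if j == len(level) {
				j = i // duplicate the last node on odd-sized levels
			}
			h := sha256.Sum256(append(level[i], level[j]...))
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	data := []byte("file contents split into fixed-size chunks")
	root := merkleRoot(data, 8)
	// A downloader recomputes the root and compares it with the on-chain hash.
	fmt.Println(bytes.Equal(root, merkleRoot(data, 8))) // true: no tampering
}
```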

8. Performance-Based Blobber Selection

To optimize download performance, Zus dynamically selects the best-performing blobbers based on real-time response metrics:

  • During initial download requests, the system tracks blobber response times.

  • Clients prioritize faster-performing blobbers for subsequent read operations.

  • This approach balances speed, redundancy, and reliability, ensuring users receive the best available performance.
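
A minimal sketch of latency-based ranking (the `blobberStats` type is an assumption; real clients would smooth measurements over time rather than trust a single sample):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// blobberStats records the observed download latency for one blobber.
type blobberStats struct {
	ID      string
	Latency time.Duration
}

// rankBlobbers orders blobbers so the fastest responders are tried first
// on subsequent reads.
func rankBlobbers(stats []blobberStats) []string {
	sort.Slice(stats, func(i, j int) bool { return stats[i].Latency < stats[j].Latency })
	ids := make([]string, len(stats))
	for i, s := range stats {
		ids[i] = s.ID
	}
	return ids
}

func main() {
	stats := []blobberStats{
		{"blobber-a", 120 * time.Millisecond},
		{"blobber-b", 40 * time.Millisecond},
		{"blobber-c", 85 * time.Millisecond},
	}
	fmt.Println(rankBlobbers(stats)) // [blobber-b blobber-c blobber-a]
}
```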

9. Batch Processing of Write Markers

Each data commit generates a Write Marker, a cryptographic proof of successful storage:

  • The Write Marker includes:

    • Client ID

    • Blobber ID

    • Allocation ID

    • Size of data committed

    • Allocation root hash (post-commit)

    • Timestamp and signature

  • These Write Markers serve as proof-of-storage, ensuring blobbers are eligible for payment.
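
In Go terms, a Write Marker carries roughly the following fields (names and types are illustrative, not necessarily the exact gosdk definitions):

```go
package writemarker

// WriteMarker mirrors the fields listed above: a signed receipt that one
// blobber stored a given amount of data for an allocation.
type WriteMarker struct {
	ClientID       string `json:"client_id"`
	BlobberID      string `json:"blobber_id"`
	AllocationID   string `json:"allocation_id"`
	Size           int64  `json:"size"`            // bytes committed in this write
	AllocationRoot string `json:"allocation_root"` // allocation root hash after the commit
	Timestamp      int64  `json:"timestamp"`
	Signature      string `json:"signature"` // client's signature over the fields above
}
```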

Write Marker Chain – Batch Submission and Verification

To optimize blockchain interactions, Zus employs a Write Marker Chain mechanism:

  • Instead of submitting each Write Marker individually, blobbers batch multiple markers into a single Write Marker Chain.

  • Each batch contains:

    • A sequence of write markers.

    • A cumulative root hash of the allocation.

    • A batched root hash, computed by hashing all root hashes in the batch.

  • The blockchain verifies the integrity of the entire chain, ensuring:

    • Proof-of-storage validation for blobbers.

    • Reduced transaction costs by minimizing on-chain interactions.

    • Efficient cryptographic auditing, allowing individual verification of each write operation.

This batch submission approach significantly enhances scalability, performance, and economic efficiency for both users and storage providers.
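
Reusing the illustrative `WriteMarker` type above, the batched root hash can be sketched as a fold over the per-marker allocation roots (the production chain-hash format may differ):

```go
package writemarker

import (
	"crypto/sha256"
	"encoding/hex"
)

// batchedRootHash folds the allocation root of every marker in a batch into
// one hash, so a single on-chain value commits to the whole sequence.
func batchedRootHash(markers []WriteMarker) string {
	h := sha256.New()
	for _, wm := range markers {
		h.Write([]byte(wm.AllocationRoot))
	}
	return hex.EncodeToString(h.Sum(nil))
}
```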

10. Geolocation-Aware Allocation Strategy

Zus optimizes storage allocation based on geolocation diversity, ensuring:

  • Global redundancy by distributing shards across multiple regions.

  • Enhanced data availability, reducing risks of localized outages.

  • Optimized performance, ensuring reads and writes are routed to the closest, best-performing blobbers.

For reads, Zus prioritizes the best-performing blobbers based on latency and reliability, minimizing data retrieval time by selecting the blobbers that respond fastest from the user's location.
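
One way to sketch region-aware selection: prefer low latency, but spread picks across distinct regions first (the types and two-pass heuristic here are assumptions for illustration):

```go
package geo

import "sort"

// candidate describes a blobber by region and measured latency (illustrative).
type candidate struct {
	ID      string
	Region  string
	Latency int // milliseconds, lower is better
}

// pickDiverse selects n blobbers, favoring low latency while spreading the
// selection across as many distinct regions as possible.
func pickDiverse(cands []candidate, n int) []candidate {
	sort.Slice(cands, func(i, j int) bool { return cands[i].Latency < cands[j].Latency })
	seen := map[string]bool{}
	var picked []candidate
	// First pass: the fastest blobber from each region.
	for _, c := range cands {
		if len(picked) == n {
			return picked
		}
		if !seen[c.Region] {
			seen[c.Region] = true
			picked = append(picked, c)
		}
	}
	// Second pass: fill any remaining slots by raw latency.
	for _, c := range cands {
		if len(picked) == n {
			break
		}
		dup := false
		for _, p := range picked {
			if p.ID == c.ID {
				dup = true
				break
			}
		}
		if !dup {
			picked = append(picked, c)
		}
	}
	return picked
}
```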

