
Storage

In the Züs network, storage is provided by specialized entities called blobbers.
Blobber: A blobber is responsible for storing data in exchange for rewards.
Our design relies on the use of signed markers, as described on the Züs Payment architecture page under the Token Pools and Markers header. Critically for storage, when the blobber redeems these markers on the blockchain, they also serve as a public commitment to store the data provided. Our protocol was first outlined in a DAPPCON paper [8] and a related technical report [3].
For a blobber, several distinct amounts of storage must be understood:
  • capacity: The total storage that a blobber physically offers.
  • staked capacity: The amount of capacity that is also backed by staked tokens, either from the blobber or delegates.
  • free capacity: The amount of staked capacity that has not already been purchased by a client.
  • purchased capacity: Of the staked capacity, the amount that clients have currently purchased, whether they are storing anything or not.
  • used storage: Of the purchased capacity, the amount that the client is currently using.
A stake pool stores tokens backing a specific blobber’s offer of storage. After the offer of storage expires, the stake pool tokens return to the delegates. Note that a stake pool is actually a collection of delegate pools, where each pool represents the tokens belonging to a specific delegate.
A blobber may offer additional capacity at any time. However, capacity can only be lowered if it has not already been staked. Similarly, while a delegate can request to unstake tokens at any time, the request can only be granted when it would not drop the staked capacity below the purchased capacity.
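As a rough illustration of how these quantities relate, the sketch below models the capacity checks in Go. The type and method names (Blobber, StakePool, FreeCapacity, CanUnstake, ReduceCapacity) are assumptions for this sketch, not the storage smart contract's actual schema.

```go
package storage

import "errors"

// Illustrative types only; the storage smart contract defines its own schema.
type DelegatePool struct {
	DelegateID string
	Staked     uint64 // tokens staked by this delegate
}

// A stake pool is a collection of delegate pools backing one blobber's offer.
type StakePool struct {
	Delegates []DelegatePool
}

type Blobber struct {
	Capacity          uint64 // total storage physically offered
	StakedCapacity    uint64 // capacity backed by staked tokens
	PurchasedCapacity uint64 // staked capacity already purchased by clients
}

// FreeCapacity is the staked capacity that has not yet been purchased.
func (b *Blobber) FreeCapacity() uint64 {
	return b.StakedCapacity - b.PurchasedCapacity
}

// CanUnstake reports whether removing this much staked capacity would drop
// the staked capacity below what clients have already purchased.
func (b *Blobber) CanUnstake(capacity uint64) bool {
	return capacity <= b.StakedCapacity &&
		b.StakedCapacity-capacity >= b.PurchasedCapacity
}

// ReduceCapacity only succeeds if the capacity being dropped is not staked.
func (b *Blobber) ReduceCapacity(newCapacity uint64) error {
	if newCapacity < b.StakedCapacity {
		return errors.New("cannot reduce capacity below staked capacity")
	}
	b.Capacity = newCapacity
	return nil
}
```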

Initializing Blobber

When a blobber registers to provide storage, it specifies its total storage capacity, its pricing for both reads and writes, and the duration (max offer duration) for which its pricing is valid. The duration starts from the timestamp of the transaction where the offer of storage was first made.
Note that the blobber cannot offer storage immediately. First, there must be enough tokens staked to guarantee service, as discussed in Service Providers, Staking and Delegates. These tokens do not have to be staked by the blobber itself, although we expect that the blobber will provide at least some of the stake. Other clients may serve as delegates, staking tokens on behalf of the blobber and sharing in the rewards offered.
The blobber goes through the same process when it wishes to expand or reduce the storage that it offers, which increases or decreases the amount of staked tokens needed. A blobber can specify a capacity of 0 if it wishes to stop providing storage altogether.
It should be noted that a blobber cannot abandon its existing storage agreements. The blobber must maintain those allocations until the user releases them, or until the duration of the storage offer elapses.
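The sketch below summarizes these registration terms as a Go struct. The field names and types are assumptions for illustration; the actual transaction format is defined by the storage smart contract.

```go
package storage

import "time"

// Illustrative registration terms for a blobber offering storage.
type BlobberTerms struct {
	Capacity         uint64        // total storage offered, in bytes
	ReadPrice        uint64        // price charged for reads, in token units
	WritePrice       uint64        // price charged for writes, in token units
	MaxOfferDuration time.Duration // how long these prices remain valid,
	                               // measured from the offer transaction's timestamp
}

// A blobber that wants to stop offering storage registers with Capacity = 0,
// but must still honor existing allocations until they expire or are released.
```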

New Allocation

An allocation is a collection of data associated with a client, and may potentially be stored with many blobbers. To set up a new allocation, a client specifies the price range that they are willing to pay for reads and writes, the size of storage they require, and the duration that they need for the storage (specified as an expire parameter).
For each allocation, a client must have two token pools of funds that blobbers can draw on to be rewarded for the work they have done. The pools are:
  • A write pool, used to pay blobbers for any data uploaded and stored.
  • A read pool, used to pay blobbers for any reads of the data stored.
The read pool is associated with the client's wallet so that the client can read from any blobber. The write pool is tied to the allocation and its specific set of blobbers. When requesting a new allocation, the client must specify the following (a sketch of such a request appears after this list):
  • The price ranges that it is willing to pay for both reads and writes.
  • The size of data that it needs to store.
  • The expiration time for when that storage will no longer be required.
  • The service-level agreement (SLA) parameter challenge completion time.
  • (Optionally) A list of preferred blobbers.
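A minimal sketch of such a request, using illustrative field names rather than the storage smart contract's exact schema:

```go
package storage

import "time"

// Illustrative price range accepted by the client.
type PriceRange struct {
	Min, Max uint64
}

// Illustrative new-allocation request; names and types are assumptions.
type NewAllocationRequest struct {
	ReadPriceRange          PriceRange    // prices the client will accept for reads
	WritePriceRange         PriceRange    // prices the client will accept for writes
	Size                    uint64        // bytes of storage required
	Expiration              time.Time     // when the storage is no longer needed
	ChallengeCompletionTime time.Duration // SLA: maximum time to complete a challenge
	PreferredBlobbers       []string      // optional list of preferred blobber IDs
}
```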

Challenge Pools

Blobbers and their delegates receive rewards for reads immediately. Writes, however, are paid through challenges, in which the blobber must prove that it is storing the data it is being paid to store. Token rewards for writes are instead transferred to a challenge pool. Tokens in this pool are not made immediately available to the blobber or its delegates; they may receive those rewards after passing a challenge proving that they are storing the data they claim.
Outsourcing attacks, where the blobber stores its data with another storage provider, are of particular concern. Our protocol ensures that the content provided for verification is 64 kB, while the content required to produce it is the full file fragment. The process is illustrated in Figure 2 below. The file is divided into n 64 kB fragments, one per storage server. Each 64 kB fragment is further divided into 64-byte chunks, so that each 64 kB block contains 1024 chunks addressable by an index from 1 to 1024. The data at each index across the blocks is treated as a continuous message and hashed, and the resulting 1024 hashes serve as the leaf hashes of a Merkle tree. The root of this Merkle tree is used to roll up the file hashes further to the directory/allocation level. The Merkle proof provides the path from the leaf to the file root and from the file root to the allocation level. In this model, to pass a challenge for a file at a given index (between 1 and 1024), a dishonest blobber first needs to download all the content and perform the chaining to construct the leaf hash. This approach discourages blobbers from outsourcing the content and faking a challenge response. (A sketch of this leaf construction appears after the challenge phases below.)
There are three hashes for a file stored with a blobber. The actual file hash is used by a client to verify the checksum of a downloaded file; this is the hash of the original file. The content hash is the hash of the data fragments after they have been erasure coded by the client; it may be used to verify the data uploaded to the blobber. Finally, the challenge hash (the Merkle root) is used to verify challenges, and is ultimately stored in the AllocationRoot field of the write marker.
Figure 3 below shows a high-level view of the payment process. A client must have funds committed to a write pool before uploading data. Then, when uploading files to the blobber, the client must include write markers along with the files. Critically, the challenge hash is used to build the AllocationRoot field of the write marker; thus, when the blobber commits those markers to the blockchain, it serves as the blobber's commitment to the stored data. Redeeming the markers transfers the tokens to the challenge pool. When the blobber is challenged to prove that the data is stored correctly and successfully passes the challenge, the tokens are transferred from the challenge pool to the blobber.
When a new block is produced, miners will slash the stake of blobbers who either failed a challenge or have not responded within the allowed time. Every block also provides a new challenge based on the VRF (discussed in Mining on the Züs Blockchain). Ten validators are selected from other blobbers to verify the challenge (though the blockchain may be configured to require more or fewer validators). Critically, the validators do not need any pre-existing knowledge of the data stored, since it can be verified against the write marker stored by the challenged blobber. At a high level, the challenge protocol involves three phases:
1. Using the VRF result, a single block of a file stored by one specific blobber is selected. We refer to this stage as the challenge issuance. Specifically, we use the VRF to randomly select a partition of the blobbers, then randomly select a blobber from that partition, then a random non-empty allocation stored by that blobber, then a random file in that allocation, and finally a random block within that file. At each step, the VRF provides the random seed.
Figure 2: Fixed Merkle Tree
Figure 3: Write Payment Overview
2. In the justification phase, the blobber broadcasts the data to the validators along with the metadata needed to verify the challenge.
3. Finally, in the judgment phase, the validators share their results. For a more detailed discussion of the challenge protocol, see the figures above.
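To make the chaining behind the fixed Merkle tree concrete, the sketch below derives the 1024 leaf hashes and the challenge hash from a stored fragment. The use of SHA-256, the handling of a short trailing block, and the function names are assumptions for illustration; the blobber's actual implementation may differ.

```go
package storage

import "crypto/sha256"

const (
	chunkSize      = 64                     // bytes per chunk
	blockSize      = 64 * 1024              // bytes per 64 kB block
	leavesPerBlock = blockSize / chunkSize  // 1024 leaf indices
)

// fixedMerkleLeaves computes the 1024 leaf hashes for one blobber's fragment.
// Leaf i hashes the concatenation of chunk i taken from every 64 kB block of
// the fragment, so a blobber cannot answer a challenge for index i without
// holding the entire fragment. (Sketch only: a short trailing block is
// ignored here; production code handles padding and streams the data.)
func fixedMerkleLeaves(fragment []byte) [][]byte {
	leaves := make([][]byte, leavesPerBlock)
	for i := 0; i < leavesPerBlock; i++ {
		h := sha256.New()
		for off := 0; off < len(fragment); off += blockSize {
			start := off + i*chunkSize
			end := start + chunkSize
			if end > len(fragment) {
				break
			}
			h.Write(fragment[start:end])
		}
		leaves[i] = h.Sum(nil)
	}
	return leaves
}

// merkleRoot folds the leaves pairwise up to a single challenge hash.
func merkleRoot(nodes [][]byte) []byte {
	for len(nodes) > 1 {
		var next [][]byte
		for i := 0; i < len(nodes); i += 2 {
			h := sha256.New()
			h.Write(nodes[i])
			if i+1 < len(nodes) {
				h.Write(nodes[i+1])
			} else {
				h.Write(nodes[i]) // duplicate the odd node out
			}
			next = append(next, h.Sum(nil))
		}
		nodes = next
	}
	return nodes[0]
}
```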

Write Marker Format

Write markers contain the following fields:
• FileMetaRoot
• AllocationRoot
• PreviousAllocationRoot
• AllocationID
• Size
• BlobberID
• Timestamp
• ClientID
• Signature
• FileID – a unique ID within an allocation for a file for its lifetime.
• Operation – used for repairs and GDPR reporting (discussed in Blobber Support for General Data Protection Regulation).
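The sketch below collects these fields into a Go struct. The field types are assumptions for illustration; the actual marker encoding is defined by the blobber and the storage smart contract.

```go
package storage

// Illustrative write marker layout; field types are assumptions.
type WriteMarker struct {
	FileMetaRoot           string // root of the file metadata tree
	AllocationRoot         string // rolls the challenge hashes up to the allocation level
	PreviousAllocationRoot string // allocation root before this write
	AllocationID           string
	Size                   int64 // bytes added (or removed) by this write
	BlobberID              string
	Timestamp              int64
	ClientID               string
	Signature              string // client's signature over the marker
	FileID                 string // unique within the allocation for the file's lifetime
	Operation              string // used for repairs and GDPR reporting
}
```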

Updating Allocation

A client can change the size or expiration of an allocation. If extending the allocation (by increasing either of these values), the client must negotiate new terms. However, if reducing both of these values, the client may continue to use the existing terms of the allocation.
Extending Allocation
For a client to extend their allocation, they must have sufficient tokens in their write pool and the blobbers must have sufficient storage capacity. Otherwise the operation will fail.
The client will continue to pay the original rate for their first allocation, but will pay the new rate for the extended period.
Reducing/Closing Allocation
When the client reduces its allocation, it may reclaim some of its tokens, though after a delay that allows blobbers to claim tokens for the services they have already provided. Note that any tokens in the challenge pool are not returned to the client; once they leave the write pool, they are considered to have been paid to the blobber and its delegates.
The client may cancel an allocation at any time, though they pay a penalty for doing so. If the client cancels the allocation, then the allocation is finished and the blobbers may stop storing the client’s data.

Adding/Removing Blobbers

Occasionally, a blobber may need to be replaced. This replacement might be triggered by the client who owns the data, or it might be the result of repeated failed challenges (which the client can observe). In either case, the process is initiated by the client. First, the client writes a transaction to update their allocation to add a new blobber. At this point, the new blobber will accept writes (though it might need to queue them up). However, the new blobber won't respond to reads until it has been able to sync up the data. The client must acquire the data to give to the new blobber. The client might already have the data cached locally. If not, they must acquire it, either by reading from the old blobber if it is still available, or by reconstructing the data from the other blobbers. The client then uploads the data to the new blobber. Note that while the client must pay for these writes, they may have previously recovered tokens from failed challenges if the old blobber was not performing adequately.
Figure 4: Sequence diagram for concurrent writes
After the new blobber has been able to sync up, it writes a transaction to cash in its write markers, effectively declaring itself online. At this point, the new blobber is now available for reads and challenges. Finally, the client writes a transaction to the blockchain to drop the old blobber. The old blobber will no longer be selected for reads or writes, and may safely discard the data. However, it may still redeem outstanding markers.

Concurrent Writes

In this section, we address concurrent uploads for the same client; specifically, we detail an approach for a single client with multiple devices uploading data to the same allocation. Note that the blobbers are ordered. This order can be adjusted periodically, but all client devices must agree on the order.

Concurrent Writes Sequence

Figure 4 above shows the steps of the process.

First, the client device (1) sends one or more files to the first blobber. Once all files have been uploaded, (2) the client sends a write marker to the first blobber. The blobber then (3) stores the file and verifies that:
• The marker is valid.
• The system state for the marker does not match a stale system state. (If it does, the blobber should notify the device of the most recent system state.)
• There is not already a pending marker.
If everything appears to be valid, the write marker and file are stored in a pending state. No additional write markers will be accepted for that client until the process completes. The blobber (4) notifies the client device that the marker and file were accepted (in a pending state).
The process is repeated (5-8) for the 2nd blobber, and every other blobber in sequence. By contacting the blobbers sequentially in order, we reduce the risk of deadlocks; if a device fails to have its write marker accepted by the first blobber, it won’t attempt to send write markers to the other blobbers. Once the client device has received confirmation from all blobbers, it (9,11) sends a commit message to all blobbers. The device can send out the commit messages to all blobbers concurrently without a problem. The blobbers then (10,12) commit the file and write marker. Additional write markers may now be accepted.
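The sketch below models the blobber-side checks from steps (3) and (7), reusing the WriteMarker struct sketched earlier. The BlobberState interface and its method names are assumptions standing in for the blobber's real signature, state, and lock handling.

```go
package storage

import "errors"

// BlobberState abstracts the state a blobber consults when deciding whether
// to accept a write marker; the method names are assumptions for this sketch.
type BlobberState interface {
	VerifySignature(wm WriteMarker) bool
	CurrentAllocationRoot(allocationID string) string
	HasPendingMarker(allocationID string) bool
	StorePending(wm WriteMarker)
}

// acceptWriteMarker mirrors the three checks in the sequence above: the
// marker must be valid, must not reference a stale system state, and there
// must be no other pending marker for the allocation.
func acceptWriteMarker(b BlobberState, wm WriteMarker) error {
	if !b.VerifySignature(wm) {
		return errors.New("invalid write marker")
	}
	if wm.PreviousAllocationRoot != b.CurrentAllocationRoot(wm.AllocationID) {
		return errors.New("stale system state; fetch the latest allocation root")
	}
	if b.HasPendingMarker(wm.AllocationID) {
		return errors.New("a write marker is already pending for this allocation")
	}
	b.StorePending(wm) // file and marker are held in a pending state until commit
	return nil
}
```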

Concurrent Write Discussion

Blobber timeouts

If a blobber times out, the device can skip the blobber and contact the next blobber. However, there are some restrictions:
• A minimum of M blobbers must accept the marker. M is configurable, but should be more than 2/3 of the blobbers.
• If a device fails to lock enough blobbers, it should release the lock on all blobbers.

Uploads while there is a pending write marker

If a client device uploads data while there is another pending write marker, the file can be accepted in a pending state, but the device must upload a marker later.

Livestreaming

Züs provides support for livestreaming, allowing a client to upload audio/video data to the Züs network on a continuous basis so that other clients can watch it continuously. We use the M3U8 format for our livestreaming. The client providing the data divides the livestream into chunks of a specified duration (configured to one second at the time of this writing) and uploads them to the blobbers. The client viewing the livestream downloads the chunks locally and allows the viewer to watch the livestream.

Videos

For other videos, our files may be much larger than the files provided for livestreaming. In order to allow the viewer to jump around in the video file, the client viewing the data can download 64 kB data blocks from within the file without needing to download the entire file. Once downloaded, these are converted into a byte stream.

Uploading write markers concurrently

In order to speed up the process of sending write markers, we can start off in sequential mode and then shift to a concurrent mode once it appears that a device is acquiring the locks. The idea is to start off in sequential mode, but to become progressively more cautious if any blobbers time out. For conciseness, we refer to the write markers as "locks" in this section. Some variables:
• N – the number of locks required to switch to concurrent mode. Initially 1.
• n – the number of locks acquired. Initially 0.
• M – the minimum number of locks needed before the write markers may be confirmed.
• T – the total number of blobbers for the allocation.
• i – the index of the blobber in the ordered list of blobbers. Initially 0.
The client device starts in sequential mode.

Sequential write marker mode

The device first requests a lock from blobber i. If the lock is received, n and i are incremented by one. If n = N, it appears that the device will acquire the locks from the rest of the blobbers, and the device may switch to concurrent write marker mode. Otherwise, the device must repeat this process for the next blobber.
If the lock is denied, the device should release any locks acquired with other blobbers. The device may attempt to retry the process later.
If the request times out, the number of locks required to switch modes doubles (N = N ∗ 2), and i is incremented by one. The device then moves on to the next blobber and attempts to acquire the lock.

Concurrent write marker mode

When the device appears to be acquiring the locks for the blobbers, it may speed up the process. Once entering this mode, the device requests locks from all remaining blobbers.
After M locks are received, the device may send confirmation messages to all blobbers. (While another device might lock a single blobber, it will not send commit messages and will eventually release its lock). If more than T − M blobbers deny the lock, the device should release any locks with other blobbers and retry later.
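The sketch below puts the sequential and concurrent modes together from the client's point of view. The helper callbacks (requestLock, releaseAll) and the simplification of issuing the concurrent requests in a plain loop are assumptions for this sketch.

```go
package storage

// Possible outcomes of a lock (write marker) request to a blobber.
type lockResult int

const (
	lockGranted lockResult = iota
	lockDenied
	lockTimedOut
)

// acquireLocks walks the ordered blobber list in sequential mode, doubling
// the threshold N whenever a blobber times out, and switches to concurrent
// mode once n locks have been acquired. It returns true if at least
// minLocks (M) blobbers granted the lock, so commits may be sent.
func acquireLocks(blobbers []string, minLocks int,
	requestLock func(blobber string) lockResult,
	releaseAll func()) bool {

	need := 1    // N: locks required before switching to concurrent mode
	granted := 0 // n: locks acquired so far

	for i := 0; i < len(blobbers); i++ {
		switch requestLock(blobbers[i]) {
		case lockGranted:
			granted++
			if granted >= need {
				// Concurrent mode: request locks from all remaining blobbers.
				denied := 0
				for _, b := range blobbers[i+1:] {
					switch requestLock(b) {
					case lockGranted:
						granted++
					case lockDenied:
						denied++
					}
				}
				if granted >= minLocks && denied <= len(blobbers)-minLocks {
					return true // safe to send commit messages
				}
				releaseAll()
				return false
			}
		case lockDenied:
			releaseAll() // another device holds the locks; retry later
			return false
		case lockTimedOut:
			need *= 2 // be more cautious before going concurrent
		}
	}
	if granted >= minLocks {
		return true
	}
	releaseAll()
	return false
}
```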

Reading from Allocation

Similar to how writes are handled, clients write special read markers to pay blobbers for providing data. Token Pools and Markers details the philosophy behind markers in more depth. Read markers contain the following fields:
• ClientID – the reader of the file.
• ClientPublicKey
• BlobberID
• AllocationID
• OwnerID – the owner of the allocation.
• Timestamp
• ReadCounter – used to prevent the read marker being redeemed multiple times.
• Signature
When the ReadCounter is incremented, the price is determined by multiplying the increase in ReadCounter by the size of the block and the read price. The blobber is paid immediately when the read marker is redeemed.
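As a sketch of this pricing rule, assuming a 64 kB read block and a read price expressed in tokens per byte (both assumptions for illustration):

```go
package storage

// Assumed block size for reads in this sketch.
const readBlockSize = 64 * 1024 // bytes

// readPayment returns the tokens owed for a redeemed read marker: the
// increase in ReadCounter multiplied by the block size and the read price.
func readPayment(prevCounter, newCounter int64, readPricePerByte uint64) uint64 {
	blocksRead := uint64(newCounter - prevCounter)
	return blocksRead * readBlockSize * readPricePerByte
}
```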

Blobber Support for General Data Protection Regulation

In an effort to give people greater control over their personal data, the European Union introduced the General Data Protection Regulation (GDPR). The Züs network includes functionality to produce privacy reports about the usage of a customer's data on request. With Züs, each blobber stores usage statistics in a local database. Therefore, the Züs network promises a best effort, relying on the blobbers to report accurate results. This feature is optional, and a gdpr boolean flag allows smart contracts to find blobbers that support it. For blobbers that do support it, the feature is enabled for all users by default. Of course, blobbers might charge a slightly higher price for this service.

Repair Protocol

Züs includes a consensus mechanism to accept an operation on the client side. The process is as follows:
1. The client sends the operation request to all blobbers, who store the request temporarily.
2. If consensus is met, the client sends a commit request and a write marker to all blobbers.
3. All blobbers apply the operation upon receiving the commit.
Züs uses 10/16 erasure coding, so as long as at least 10 successful commits are received, the operation is successful. However, we wish to repair any shares of the data that did not complete successfully in order to maintain our redundancy.
If the commit operation fails on so many blobbers that fewer shares remain than the minimum needed to reconstruct the data, the client may undo the operation. For this process, we use the FileID field of the write marker; see Write Marker Format for more details on the format. The FileID field is unique (within the allocation) for the file for its lifetime. Renaming a file therefore requires the client to successfully acquire locks, preventing any discrepancy in the FileID. By reviewing the write markers of each blobber and backtracking until we find the same FileMetaRoot for all blobbers, we know the point from which the repair can start. For example, if one blobber has FileMetaRoots h1, h2, h3, h4, h5, and another has h1, h2, and h3, then the repair must send h4 and then h5 to the second blobber.
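The backtracking step can be sketched as follows, assuming the lagging blobber's write marker history is a prefix of the reference history (as in the h1–h5 versus h1–h3 example); rolling back divergent markers is out of scope for this sketch.

```go
package storage

// missingRoots returns the FileMetaRoots that still need to be applied to a
// lagging blobber, given the reference history from an up-to-date blobber.
// It backtracks past any tail entries that do not match the reference.
func missingRoots(reference, behind []string) []string {
	i := len(behind)
	for i > 0 && (i > len(reference) || reference[i-1] != behind[i-1]) {
		i--
	}
	return reference[i:]
}

// Example: missingRoots([]string{"h1","h2","h3","h4","h5"},
//                       []string{"h1","h2","h3"}) returns ["h4","h5"].
```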

Proxy Re-Encryption

Proxy re-encryption (PRE) allows a user to store confidential data in the cloud without having to trust the storage provider. The data is encrypted with keys held by the data owner; when they wish to share their data, they derive a re-encryption key from their own key pair and the receiver's public key. This re-encryption key allows the data to be re-encrypted for the receiver's public key without ever decrypting it. As a result, the cloud provider can convert the data without being given an opportunity to read the confidential data. We use the approach outlined in Selvi et al. [11].
Figure 5 shows how data is uploaded when using proxy re-encryption. The client first (1) erasure codes the data into fragments, with one fragment per blobber. For each blobber, the client then (2,5) generates a public/private key pair (if it does not already have a keypair associated with that storage provider). The client then (3,6) encrypts the corresponding fragment with the public key, and (4,7) sends the encrypted data to the storage provider.
For data transfer, the client requesting the data must first request the data from the client that owns the data. (For convenience, we will refer to the client owning the data as the seller and the client requesting the data as the buyer, even if the seller does not actually request any compensation for allowing access to their data.) Figure 6 shows an overview of this process. The buyer (1) requests data from the seller, specifying its public key and the details of the data desired. For each storage provider, the seller then (2,4) calculates the re-encryption key from the buyer's public key and the keypair associated with the blobber. The seller then (3,5) sends the re-encryption key and the ID of the buyer to the corresponding storage provider. The storage provider retains this information.
Figure 5: Uploading Data with Proxy Re-Encryption
Once the initial phase is complete, the seller (6) sends a confirmation to the buyer including the list of blobbers. The buyer then (7,10) requests the data from each storage provider, specifying its ID and the data requested. Each storage provider (8,11) re-encrypts the data with the re-encryption key, sending the results to the buyer. The buyer (9,12) decrypts the fragments and (13) reconstructs the original data.
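The upload flow of Figure 5 can be sketched as below. The PREScheme and blobberClient interfaces and the erasureCode callback are stand-ins for the Selvi et al. construction and the blobber API; none of these names come from the actual Züs implementation.

```go
package storage

// PREScheme is an illustrative stand-in for a proxy re-encryption scheme.
type PREScheme interface {
	GenerateKeyPair() (publicKey, privateKey []byte)
	Encrypt(publicKey, plaintext []byte) []byte
}

// blobberClient is an illustrative stand-in for the upload API of a blobber.
type blobberClient interface {
	Upload(encryptedFragment []byte) error
}

// uploadWithPRE erasure codes the data into one fragment per blobber,
// generates a keypair per blobber, encrypts each fragment with the
// corresponding public key, and uploads it.
func uploadWithPRE(pre PREScheme, erasureCode func(data []byte, n int) [][]byte,
	data []byte, blobbers []blobberClient) error {

	fragments := erasureCode(data, len(blobbers))
	for i, b := range blobbers {
		// In practice the keypair for a given blobber is generated once and
		// reused; the private key is kept to derive re-encryption keys later.
		pub, _ := pre.GenerateKeyPair()
		encrypted := pre.Encrypt(pub, fragments[i])
		if err := b.Upload(encrypted); err != nil {
			return err
		}
	}
	return nil
}
```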