Backup, Recovery and Replication

Recovery

This section provides a comprehensive guide for setting up recovery between two ZS3Servers.

This setup uses the Blimp UI for deployment, Visual Studio Code for configuration, and MinIO Client (mc) for management.

Whether you're new to ZS3Server or an experienced user, this guide offers detailed instructions for every stage of the process.

Step 1: Deploy ZS3Server1

  1. Go to "Manage Allocations," select "S3 Setup," and click on "Create New Allocation."

Fig1: Manage Allocations
Fig2: Create new allocation
  2. Add blobbers and click "Confirm." Review and confirm the details, then pay from your existing balance. Once the allocation is created, the S3 server setup opens automatically; create your server instance by adding its IP address.

Fig3: Server setup
  3. Obtain the IP address of your server, either from your hosting provider or by running the ifconfig command in the terminal. For example, the server logs may show an IP address such as 65.109.152.43.

Fig4: IP Address
  4. Add your IP address and click "Generate Script" in the Blimp UI, then enter the password for the S3 deployment.

Fig5: S3 Server Setup
Fig6: S3 deployment password
  5. Now copy the script and run it in your server terminal.

Fig7: Copy Script
Fig8: Run Script

Step 2: Deploy ZS3Server2

  1. Go to "Standard Allocation" in the Blimp UI. Create a new allocation and name it allocation22.

Fig9: Create new standard allocation
  2. Click on "Confirm" and select the blobbers, then confirm again. This will create a standard allocation.

Fig9: Confirm Details
  3. In the Blimp UI, copy the newly created standard allocation ID.

Fig10: Copy Standard Allocation ID
  4. On your server, create a new .zcn2 folder in your home directory ($HOME) with a blimp folder inside it. In the blimp folder, create the allocation.txt file and paste the allocation ID into it.

Fig11: Paste allocation ID
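Rather than pasting the ID by hand, the step above can be scripted. The helper below is a sketch: save_allocation_id is a hypothetical name, and the 64-hex-character ID format is an assumption based on typical Züs allocation IDs.

```shell
# Sketch of a helper that validates an allocation ID before writing it to
# ~/.zcn2/blimp/allocation.txt. Assumes IDs are 64 lowercase hex characters.
save_allocation_id() {
  alloc_id="$1"
  # reject empty strings and anything that is not lowercase hex
  case "$alloc_id" in
    ""|*[!0-9a-f]*) echo "invalid allocation ID" >&2; return 1 ;;
  esac
  # reject wrong lengths
  [ "${#alloc_id}" -eq 64 ] || { echo "invalid allocation ID length" >&2; return 1; }
  mkdir -p "$HOME/.zcn2/blimp"
  printf '%s\n' "$alloc_id" > "$HOME/.zcn2/blimp/allocation.txt"
  echo "allocation ID saved"
}
```

This catches a truncated or mistyped paste before it silently breaks the second server's startup.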
  5. Copy the docker-compose.yml, config.yaml, wallet.json, and zs3server.json files from the .zcn folder to the .zcn2 folder.

Fig12: Copy wallet.json
Fig13: Copy zs3server.json
  6. Edit the ports in docker-compose.yml. Go to line 55 of the .zcn2/docker-compose.yml file and make sure the port mapping reads 9002:9000, so the second server does not clash with the first.

    Now run the following commands to start the Docker containers defined in .zcn2/docker-compose.yml:

    cd ~/.zcn2/
    docker-compose up -d
Fig14: Copy Ports
  • In the .zcn/docker-compose.yml file, change the first port from 9002 to 9000, making the mapping 9000:9000.

Fig15: Paste ports

Now run the following commands to start the Docker containers defined in the docker-compose.yml:

cd ~/.zcn/
docker-compose up -d
Fig16: Run commands
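For reference, the end state of the two compose files should look roughly like this (service layout and exact line numbers may differ in your generated files):

```yaml
# .zcn/docker-compose.yml  -- ZS3Server1, reachable on host port 9000
ports:
  - "9000:9000"

# .zcn2/docker-compose.yml -- ZS3Server2, reachable on host port 9002
ports:
  - "9002:9000"   # host port 9002 maps to the container's internal 9000
```

Both containers listen on 9000 internally; only the host-side ports differ, which is what lets the two servers share one machine.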

Step 3: Install MinIO Client (mc)

Follow the installation guide provided in the ZS3Server documentation.

1. macOS Homebrew

Install the mc package using Homebrew:

brew install minio/stable/mc
mc --help

2. GNU/Linux

Binary Download

wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
./mc --help

3. Microsoft Windows

Binary Download

Platform: Microsoft Windows
Architecture: amd64
URL: https://dl.min.io/client/mc/release/windows-amd64/mc.exe

Download mc.exe from the URL above, then verify the installation:

mc.exe --help

Step 4: Create Alias for ZS3Server1

Run the following command to create an alias for zs3server1:

mc alias set zs3server1 http://65.109.152.43:9000 shahnawaz rootroot --api S3v2
Fig17: Create Alias for ZS3Server1

Step 5: Create Alias for ZS3Server2

Run the following command to create an alias for zs3server2:

mc alias set zs3server2 http://65.109.152.43:9002 shahnawaz rootroot --api S3v2
Fig18: Create Alias for ZS3Server2
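Both commands follow mc's general alias syntax; shahnawaz and rootroot above are example credentials, and --api S3v2 pins the signature version used throughout this guide:

```shell
mc alias set <ALIAS> <URL> <ACCESS_KEY> <SECRET_KEY> --api <SIGNATURE>
```

The alias name is what you type in later commands (zs3server1/allocation21 and so on), so choose something short and memorable.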

Step 6: Create Bucket and Copy Data in ZS3Server1

  1. Create a bucket in zs3server1 with allocation21:

    mc mb zs3server1/allocation21
  2. Verify the bucket:

    mc ls zs3server1
  3. Copy data to the created bucket:

    mc cp --recursive ./ zs3server1/allocation21
Fig19: Create bucket and copy data for ZS3Server1

Step 7: Create Bucket and Copy Data in ZS3Server2

  1. Create a bucket in zs3server2 with allocation22:

    mc mb zs3server2/allocation22
  2. Verify the bucket:

    mc ls zs3server2
Fig20: Create bucket and copy data for ZS3Server2

Step 8: Set Up Replication

Run the following command to create replication between the buckets:

mc mirror zs3server1/allocation21/ zs3server2/allocation22/ --remove --watch
Fig21: Set up replication

With the --remove and --watch flags set, any deletions in allocation21 will automatically replicate to allocation22.

Step 9: Perform Disaster Recovery

  1. Shut down zs3server1 and clean up its deployment.

  2. Create a new allocation (allocation23) using the Blimp UI with a standard allocation.

  3. Update the allocation.txt file in .zcn with the new allocation23 ID.

  4. Set up an alias for the new server:

    mc alias set zs3server3 http://65.109.152.43:9000 shahnawaz rootroot --api S3v2
  5. Create a new bucket:

    mc mb zs3server3/allocation23
  6. Restore data to the new allocation:

    mc mirror zs3server2/allocation22/ zs3server3/allocation23/
Fig22: Perform Disaster Recovery

Step 10: Verify Recovery

Check that the files have been recovered to the new allocation:

mc ls zs3server3/allocation23

Replication

This section outlines the steps to set up replication between two ZS3Servers.

Replication ensures that data from one server is mirrored to another for backup and redundancy.

Follow the steps below to configure and initiate replication.

Video Tutorial

Prerequisites

  • For running two ZS3Servers on the same machine:

    • Copy the contents of the .zcn folder to .zcn2.

    • Update allocation.txt and zs3server.json in .zcn2.

    • Update the docker-compose.yml file to use unique ports.

  • Ensure both ZS3Servers are configured using MinIO Client aliases:

    • Alias for the first server (zcn):

      mc alias set primary http://<HOST_IP>:9000 root root --api S3v2
    • Alias for the second server (zcn2):

      mc alias set secondary http://<HOST_IP>:9002 root root --api S3v2

Step 1: Verify Configuration

  1. Navigate to the .zcn2 folder:

    cd .zcn2/
  2. Open the terminal and run:

    ls
    cat docker-compose.yml
    • Verify the ports are correctly set to 9002:9000.

Fig23: Verify Ports
  3. Check the standard allocation ID:

cat blimp/allocation.txt
  • Compare this ID with the allocation ID in the Blimp dashboard to ensure consistency.

Fig24: Check Allocation ID
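The two checks above can be collected into a small script. This is a sketch; verify_zcn2 is a hypothetical helper name, and the paths match the steps above:

```shell
# Sketch: confirm the 9002:9000 port mapping in docker-compose.yml and print
# the allocation ID so it can be compared against the Blimp dashboard.
verify_zcn2() {
  dir="${1:-$HOME/.zcn2}"
  grep -q '9002:9000' "$dir/docker-compose.yml" || {
    echo "expected 9002:9000 mapping not found" >&2
    return 1
  }
  echo "ports OK"
  echo "allocation ID: $(cat "$dir/blimp/allocation.txt")"
}
```

Run it as verify_zcn2 (or verify_zcn2 /path/to/.zcn2) before starting the containers; a wrong port mapping fails fast instead of surfacing later as a bind error.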

Step 2: Configure Aliases

  1. Use the MinIO Client (mc) to configure aliases for both servers:

Format:

mc alias set primary http://<HOST_IP>:9000 miniouser miniopassword --api S3v2
mc alias set secondary http://<HOST_IP>:9002 miniouser miniopassword --api S3v2

Example:

mc alias set primary http://65.109.152.43:9000 root root --api S3v2
mc alias set secondary http://65.109.152.43:9002 root root --api S3v2
  2. Run the alias commands, using zcn and zcn2 as the alias names (these names are used in the examples that follow):

mc alias set zcn http://<HOST_IP>:9000 root root --api S3v2
mc alias set zcn2 http://<HOST_IP>:9002 root root --api S3v2

Step 3: Initiate Replication

  1. Use the following command to start replication:

    ./mc mirror primary/<BUCKET_PREFIX>/ secondary/<BUCKET_PREFIX>/ --remove --watch
    • Replace <BUCKET_PREFIX> with the appropriate bucket names.

  2. Example:

    ./mc mirror zcn/test2-s3/ zcn2/mbtest2 --remove --watch
  • test2-s3 is a folder in the S3 allocation.

Fig25: test2-s3
  • mbtest2 will be created in the standard allocation on zcn2.

Fig26: mbtest2
  • All data from test2-s3 will be replicated into mbtest2.

Step 4: Verify Replication

  1. Check the contents of the buckets:

    mc ls zcn
    mc ls zcn2
  2. Create new buckets if necessary:

    mc mb zcn/test2-s3
    mc mb zcn2/mbtest2
  3. Confirm that the data is mirrored correctly:

./mc mirror zcn/test2-s3/ zcn2/mbtest2 --remove --watch

Backup and Restore with Restic

Use Restic for lightweight, versioned, and secure backups to your ZS3 bucket.

Step 1: Install Restic

sudo apt update -y
sudo apt install restic -y
sudo restic self-update

Step 2: Set Environment Variables

export AWS_ACCESS_KEY_ID=<ACCESS_KEY>
export AWS_SECRET_ACCESS_KEY=<SECRET_KEY>
export RESTIC_REPOSITORY="s3:<ZS3_URL>"
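To avoid re-exporting these in every session, the variables can be kept in a small env file that is sourced before running restic. The file name and the RESTIC_PASSWORD_FILE line are suggestions, not part of the original setup (RESTIC_PASSWORD_FILE is a standard restic variable that lets scheduled jobs run without an interactive password prompt):

```shell
# ~/.restic-env -- source with:  . ~/.restic-env   (keep permissions at 600)
export AWS_ACCESS_KEY_ID=<ACCESS_KEY>
export AWS_SECRET_ACCESS_KEY=<SECRET_KEY>
export RESTIC_REPOSITORY="s3:<ZS3_URL>"
# Optional: store the repository password in a file for non-interactive runs
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"
```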

Step 3: Initialize the Repository

restic init

Step 4: Run a Backup

restic -r s3:<ZS3_URL> --verbose backup <path-to-directory-to-backup>

Step 5: List Snapshots

restic snapshots

Step 6: Restore Snapshot

restic restore latest --target ~/ --verbose

Step 7: Automate via Crontab (Optional)

Use crontab -e to add periodic backup jobs.
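For example, an entry along these lines runs a nightly backup at 02:00; the env-file path (a file exporting the Step 2 variables) and the /data backup directory are illustrative:

```shell
# crontab -e entry: source the restic credentials, then back up /data nightly
0 2 * * * . $HOME/.restic-env && restic backup /data >> $HOME/restic-backup.log 2>&1
```

Redirecting output to a log file is important under cron, since there is no terminal to show errors.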
