Use Case - FUSE-based S3FS Mountpoint
Use S3FS to interact with your bucket with filesystem commands.
After deploying your S3 server, you can integrate S3FS to use your allocation as a Data Lake, with S3 support for seamless migration and lower egress costs compared to other providers. S3FS lets you operate on files and directories in an S3 bucket as if they were on a local file system, which means you can use it in pretty much any case where you would use a regular filesystem directory (logging, database storage, regular file storage, or a mount directory for Docker containers for data storage or logging). Moreover, if you have an application that depends on a filesystem, you can utilize the power of Züs Storage with zero code changes.
Here's a step-by-step guide on how to set this up:
Register at https://blimp.software
Deploy zs3server
as described on this page. Once deployed, you can find your URL, access key, and secret key in the "Use CLI" section of Blimp's S3 operations page, as shown in the image below.
Example of this command (with the credentials highlighted in double square brackets):
In this example:
The URL: https://blimpsogec.zus.network
The Access Key: devyetii
The Secret Key: Admin@123
Connect to the machine you'll be using to mount S3FS. This can be your local machine or any remote machine you have access to. If it's a remote machine, you can use SSH to connect to it as follows:
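For example (the username and host address here are placeholders for your own remote machine):

```bash
# Connect to the remote machine that will host the S3FS mount
ssh ubuntu@<remote-machine-ip>
```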
Create a path to mount your S3 directory using the command below:
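For example, to create a mount directory at /mnt/s3fs (the path used in the rest of this guide's examples):

```bash
# Create the directory the bucket will be mounted on
sudo mkdir -p /mnt/s3fs
```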
You can create any mount path, but make sure you remember it, as we'll be using it in Step 7.
Prepare your environment variables as follows:
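One possible set of exports, using the example credentials from Step 2 (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the standard variables read by aws-cli; replace the values with your own):

```bash
# Credentials and endpoint from the "Use CLI" section in Blimp
export AWS_ACCESS_KEY_ID=devyetii
export AWS_SECRET_ACCESS_KEY=Admin@123
export S3_ENDPOINT=https://blimpsogec.zus.network
```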
Create a bucket on your ZS3 server as follows:
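For example, using aws-cli against the ZS3 endpoint (the bucket name mybucket and the endpoint URL are just the example values from Step 2):

```bash
# Create a bucket on the ZS3 server via the S3-compatible API
aws --endpoint-url https://blimpsogec.zus.network s3 mb s3://mybucket
```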
aws-cli
installation and usage are explained in detail here.
Install S3FS on your machine. Provided below are the commands to install it on an Ubuntu machine. If you're using another Linux distribution or a different OS, you can refer to this link
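On Ubuntu, the installation typically looks like this:

```bash
# Install s3fs-fuse from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y s3fs
```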
Prepare the password file that you'll be using with S3FS to mount your bucket on the mount point. Run the following command to write the contents of your password file, which will be stored in $HOME/.passwd-s3fs.
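A minimal way to write it (s3fs expects the file in ACCESS_KEY:SECRET_KEY format and requires it to be readable only by its owner):

```bash
# Store the credentials in ACCESS_KEY:SECRET_KEY format
echo "<your-access-key-from-step2>:<your-secret-key-from-step2>" > $HOME/.passwd-s3fs

# s3fs requires the password file to be readable only by the owner
chmod 600 $HOME/.passwd-s3fs
```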
You should replace <your-access-key-from-step2>
and <your-secret-key-from-step2>
with the corresponding values.
Mount your bucket to the created directory path using s3fs
with the command below:
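A typical invocation is sketched below; the use_path_request_style option is commonly needed for path-style, non-AWS S3 endpoints such as zs3server, so adjust it if your deployment differs:

```bash
# Mount the bucket on the mount directory via the ZS3 endpoint
s3fs <bucket-name> <mount-directory> \
  -o passwd_file=$HOME/.passwd-s3fs \
  -o url=<your-zs3server-url-from-step2> \
  -o use_path_request_style
```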
You should replace the placeholders in the command as follows:
<bucket-name>
should be replaced by the bucket name you used to create the bucket in step 6.
<mount-directory>
should be replaced by the mount directory you created in step 4.
<your-zs3server-url-from-step2>
should be replaced by the URL shown in Step 2.
You can use the command below to unmount the Züs storage from <directory_path>:
umount -l <directory_path>
Check if your bucket is mounted successfully to the S3FS mount directory as follows:
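For example, you can filter the mounted filesystems for s3fs:

```bash
# Show mount details for the s3fs mount point
df -h | grep s3fs
```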
This command displays the mounting path details.
If there is no output, recheck all the steps to ensure they were executed correctly.
Once you mount the Züs storage, any files or directories created, copied, or deleted within the mount will be reflected in your allocation files in Blimp, and vice versa.
After mounting your bucket to the S3FS mountpoint, you can perform any typical file operation on this mount directory, and it will automatically be reflected in your ZS3 bucket and, hence, in your allocation data shown in Blimp.
For example, write a file and save it to the mount directory (assuming it's mounted at /mnt/s3fs):
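A minimal example (the file name is only illustrative):

```bash
# Write a small text file directly into the mounted bucket
echo "hello from s3fs" > /mnt/s3fs/hello.txt
```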
To copy a file:
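For instance (file names are illustrative):

```bash
# Copy a local file into the mounted bucket
cp ~/report.pdf /mnt/s3fs/report.pdf
```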
To move a file:
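For instance:

```bash
# Move a local file into the mounted bucket
mv ~/archive.tar.gz /mnt/s3fs/archive.tar.gz
```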
To delete a file:
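For instance:

```bash
# Delete a file from the mounted bucket
rm /mnt/s3fs/hello.txt
```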
If you encounter any issues working with your mountpoint (on Ubuntu), you can check syslog
as follows:
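For example, to show the most recent s3fs-related entries:

```bash
# Show the latest s3fs messages in the system log
grep s3fs /var/log/syslog | tail -n 50
```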
You can also enable debug logging on S3FS as follows:
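One way to do this is sketched below; it remounts the bucket in the foreground with s3fs and libcurl debug output printed to the terminal:

```bash
# Remount with verbose debug output (run in the foreground)
s3fs <bucket-name> <mount-directory> \
  -o passwd_file=$HOME/.passwd-s3fs \
  -o url=<your-zs3server-url-from-step2> \
  -o use_path_request_style \
  -o dbglevel=info -f -o curldbg
```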
You can configure your Postgres DB deployment (using Docker) to use the mounted S3FS directory as the storage directory for the DB.
If you're using docker run
to deploy your Postgres DB, attach the S3FS directory as a mount directory to your deployed container as follows:
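A sketch of such a command (the container name, password, image tag, and in-container data path are assumptions to adapt to your setup):

```bash
# Run Postgres with its data directory backed by the S3FS mount
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v <path-in-your-s3fs-mount-directory>:/var/lib/postgresql/data \
  postgres:15
```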
If you're using docker-compose
you can configure your Postgres DB service to use the mounted S3FS directory as the storage directory for the DB as follows:
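A minimal sketch of such a service definition, written out here with a shell heredoc so it stays copy-pasteable (the image tag and password are assumptions):

```bash
# Write a minimal docker-compose.yml that stores Postgres data on the S3FS mount
cat > docker-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - <path-in-your-s3fs-mount-directory>:/var/lib/postgresql/data
EOF

# Start the service
docker compose up -d
```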
In both examples, you should replace <path-in-your-s3fs-mount-directory>
with some path in your S3FS mount directory, for example: /mnt/s3fs/data/postgres