ZeroFS: S3-backed ZPOOL Via NBD Service on FreeBSD (Proof of Concept, Part 2) #
Introduction #
ZeroFS turns S3 storage into a high-performance, always-encrypted, globally accessible, and scalable file system, with no FUSE drivers required. There's also an official ZeroFS GitHub repo.
ZeroFS makes S3 storage feel like a real filesystem. Built on SlateDB, it provides file-level access via NFS and 9P and block-level access via NBD. Fast enough to compile code on, with clients already built into your OS. No FUSE drivers, no kernel modules, just mount and go.
This post walks through a FreeBSD proof of concept using ZeroFS to serve network block devices (NBD), then creating a ZPOOL from those devices with the help of the FreeBSD GEOM NBD Client. It is part 2 in a short series.
It differs from part 1 in the use of Minio as the S3 provider, because Garage doesn't support conditional writes and the latest versions of ZeroFS generate an error with a Garage backend.
Additionally, ZeroFS is developing at a rapid pace, with frequent changes to its configuration, so part 1 may already be considered obsolete.
If you want to experiment with ZeroFS on FreeBSD to provide NBD devices for a zpool, follow along.
Acknowledgements #
This proof of concept was made possible with input from Ryan Moeller, author of the FreeBSD GEOM NBD Client, and Pierre Barre, author of ZeroFS, who both made the necessary changes to get things working correctly.
Prerequisites #
A FreeBSD 14.3 virtual host with ZFS. We'll install a temporary S3 service using Minio and run the ZeroFS binary. In this guide, this host's IP is 10.0.0.10.
A second host on the same network to access NFS and NBD shares. For this guide, this host's IP is 10.0.0.20.
Outline #
Install an S3 service on the first virtual host, using Minio as the S3 provider, then configure credentials and a bucket with the appropriate access rights.
Then install ZeroFS, adjust its configuration file, and add a small shell script to start the service.
On the other host, install the FreeBSD GEOM NBD Client, mount the NFS share to create the NBD backing files, unmount the NFS share, connect to the NBD shares with the FreeBSD GEOM NBD Client, and create a zpool using the connected devices.
Preparation #
Use the latest pkg repository on your virtual host:
mkdir -p /usr/local/etc/pkg/repos
echo 'FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest" }' > /usr/local/etc/pkg/repos/FreeBSD.conf
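To confirm the switch to the latest branch took effect, one option is to inspect the resolved pkg configuration; the grep pattern below is just a convenience to narrow the output:
pkg -vv | grep -A 3 'FreeBSD: {'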
Update and force-upgrade installed packages:
pkg update
pkg upgrade -f
Install a few handy tools:
pkg install -y ca_root_nss curl openssl tmux
First host: install Minio #
Create ZFS datasets for Minio:
zfs create -o mountpoint=/mnt/data zroot/data
zfs create -o mountpoint=/mnt/data/minio zroot/data/minio
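If you want to double-check the new datasets and their mountpoints before continuing, something like the following should do:
zfs list -r -o name,mountpoint zroot/data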
Install minio and minio-client:
pkg install minio minio-client
Set minio as owner on the ZFS dataset:
chown -R minio:minio /mnt/data/minio
Generate a password for the Minio admin user, for example:
openssl rand -hex 32
Configure RC entries for minio, setting minio_root_password to the password generated in the previous step:
sysrc minio_disks="/mnt/data/minio"
sysrc minio_address=":9000"
sysrc minio_root_user="admin"
sysrc minio_root_password="02fc91742b4399b88ec4bd59a1787a6537dba1df65799d5f6e6a79d4568b7c69"
sysrc minio_root_access="on"
sysrc minio_logfile="/var/log/minio.log"
Enable and start Minio:
service minio enable
service minio start
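Optionally, confirm that Minio started and is listening on port 9000 (the log path matches the minio_logfile value set above):
service minio status
sockstat -4 -l | grep 9000
tail /var/log/minio.log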
Create a directory for the minio-client configuration:
mkdir -p /root/.minio-client
Create /root/.minio-client/config.json so minio-client can perform administrative actions:
{
  "version": "10",
  "aliases": {
    "gcs": {
      "url": "https://storage.googleapis.com",
      "accessKey": "YOUR-ACCESS-KEY-HERE",
      "secretKey": "YOUR-SECRET-KEY-HERE",
      "api": "S3v2",
      "path": "dns"
    },
    "local": {
      "url": "http://127.0.0.1:9000",
      "accessKey": "YOUR-ACCESS-KEY-HERE",
      "secretKey": "YOUR-SECRET-KEY-HERE",
      "api": "S3v4",
      "path": "auto"
    },
    "myminio": {
      "url": "http://127.0.0.1:9000",
      "accessKey": "admin",
      "secretKey": "02fc91742b4399b88ec4bd59a1787a6537dba1df65799d5f6e6a79d4568b7c69",
      "api": "S3v4",
      "path": "auto"
    },
    "play": {
      "url": "https://play.min.io",
      "accessKey": "Q3AM3UQ867SPQQA43P2F",
      "secretKey": "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
      "api": "S3v4",
      "path": "auto"
    },
    "s3": {
      "url": "https://s3.amazonaws.com",
      "accessKey": "YOUR-ACCESS-KEY-HERE",
      "secretKey": "YOUR-SECRET-KEY-HERE",
      "api": "S3v4",
      "path": "dns"
    }
  }
}
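With the config file in place, a quick way to check that the myminio alias and admin credentials actually work is to query the server status, using the same --insecure and --config-dir flags as the rest of this guide:
minio-client --insecure admin info \
myminio \
--config-dir "/root/.minio-client/"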
Create the files bucket in the myminio instance:
minio-client --insecure mb \
--ignore-existing \
--with-lock \
myminio/files \
--config-dir "/root/.minio-client/"
Generate a password for the Minio zerofs user, for example:
openssl rand -hex 32
Create the zerofs user in the myminio instance, adjusting the password to match the one created in the previous step:
minio-client --insecure admin user add \
myminio \
zerofs f839343afd282fd1a348450385791b9a664e054e09f4e4c648d920366fe8ee13 \
--config-dir "/root/.minio-client/"
Create a Minio policy file /tmp/zerofs.json with the following contents:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::files/*"
    }
  ]
}
Add the Minio policy file to the myminio instance:
/usr/local/bin/minio-client --insecure admin policy create \
myminio \
zerofs \
"/tmp/zerofs.json" \
--config-dir "/root/.minio-client/"
Link the imported policy to the Minio zerofs user:
/usr/local/bin/minio-client --insecure admin policy attach \
myminio \
zerofs \
--user "zerofs" \
--config-dir /root/.minio-client/
Finally, apply the policy:
/usr/local/bin/minio-client --insecure admin user policy \
myminio \
zerofs \
--config-dir /root/.minio-client/
Minio should now be set up correctly, with the Minio zerofs user having full access to the myminio/files bucket.
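As a final sanity check, asking Minio for the zerofs user's details should show the user enabled with the zerofs policy attached (assuming your minio-client version provides the admin user info subcommand):
minio-client --insecure admin user info \
myminio \
zerofs \
--config-dir "/root/.minio-client/"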
First host: install ZeroFS #
ZeroFS is in the ports tree. You must be using version 0.15.1 or higher for this guide.
To install:
pkg install zerofs
To build from ports:
pkg install git pkgconf gmake rust
echo "DEFAULT_VERSIONS+=ssl=openssl" > /etc/make.conf
git clone --depth=1 -b main https://git.freebsd.org/ports.git /usr/ports
cd /usr/ports/filesystems/zerofs/
make install
Building ZeroFS will take a little while.
Create the ZeroFS configuration file:
cd /usr/local/etc
zerofs init
Then edit the generated file /usr/local/etc/zerofs.toml to match the following (includes enabling IPv6):
[cache]
dir = "${HOME}/.cache/zerofs"
disk_size_gb = 10.0
memory_size_gb = 1.0

[storage]
url = "s3://files/zerofs-data"
encryption_password = "${ZEROFS_PASSWORD}"

[servers.nfs]
addresses = ["0.0.0.0:2049", "[::1]:2049"]

[servers.ninep]
addresses = ["0.0.0.0:5564", "[::1]:5564"]
unix_socket = "/tmp/zerofs.9p.sock"

[servers.nbd]
addresses = ["0.0.0.0:10809", "[::1]:10809"]
unix_socket = "/tmp/zerofs.nbd.sock"

[aws]
secret_access_key = "${AWS_SECRET_ACCESS_KEY}"
access_key_id = "${AWS_ACCESS_KEY_ID}"
# Optional AWS S3 settings (uncomment to use):
endpoint = "http://127.0.0.1:9000" # For S3-compatible services
default_region = "global"
allow_http = "true" # For non-HTTPS endpoints
Because there is no rc service/variables yet, use a small wrapper script to start ZeroFS with the right environment.
Create /root/bin/start-zerofs.sh with the correct credentials for your system, and set a new value for ZEROFS_PASSWORD:
#!/bin/sh
# Encryption password for the ZeroFS data (replace with your own secret)
ZEROFS_PASSWORD=test-secret-test
export ZEROFS_PASSWORD
# S3 credentials for the Minio zerofs user created earlier
AWS_ACCESS_KEY_ID=zerofs
export AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=f839343afd282fd1a348450385791b9a664e054e09f4e4c648d920366fe8ee13
export AWS_SECRET_ACCESS_KEY
# Start ZeroFS with the configuration file created above
/usr/local/bin/zerofs run -c /usr/local/etc/zerofs.toml
Make it executable:
chmod +x /root/bin/start-zerofs.sh
Start ZeroFS (a tmux session is handy to keep it running):
/root/bin/start-zerofs.sh
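If you'd rather not keep a foreground terminal open, one option is to run the script in a detached tmux session and then confirm the NFS, 9P, and NBD listeners are up (ports as configured in zerofs.toml above):
tmux new-session -d -s zerofs /root/bin/start-zerofs.sh
sockstat -l | grep -E '2049|5564|10809'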
Second host: install FreeBSD GEOM client #
On the second host, install git:
pkg install git
Then clone the FreeBSD GEOM NBD Client git repo, build, and install:
git clone https://github.com/ryan-moeller/kernel-nbd-client
cd kernel-nbd-client
make
make install
Load the kernel module:
kldload geom_nbd.ko
Check if loaded:
kldstat
Id Refs Address                Size Name
 1   16 0xffffffff80200000  1f41500 kernel
 2    1 0xffffffff82142000   5e9328 zfs.ko
 3    1 0xffffffff8272c000     7808 cryptodev.ko
 4    1 0xffffffff82e10000     58c0 geom_nbd.ko
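Optionally, if you want the module loaded automatically at boot, appending it to kld_list should do the trick:
sysrc kld_list+="geom_nbd"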
Second host: mount ZeroFS over NFS to create NBD devices #
Create a mount point for a temporary NFS mount, which we'll use to create the NBD backing files:
mkdir -p /mnt/files
Mount the NFS share (replace the IP with your ZeroFS host):
mount -t nfs -o nfsv3,nolockd,bg,hard,tcp,port=2049,mountport=2049,rsize=1048576,wsize=1048576 10.0.0.10:/ /mnt/files
Create the NBD backing files inside /mnt/files/.nbd, which will already exist in the NFS share:
truncate -s 2G /mnt/files/.nbd/nbd0
truncate -s 2G /mnt/files/.nbd/nbd1
truncate -s 2G /mnt/files/.nbd/nbd2
truncate -s 2G /mnt/files/.nbd/nbd3
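A quick listing confirms the four 2G backing files exist before unmounting:
ls -lh /mnt/files/.nbd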
Unmount the NFS share:
umount /mnt/files
You may see a benign error because there are no RPC services on the ZeroFS host:
umount /mnt/files
umount: 10.0.0.10: MOUNTPROG: RPC: Remote system error - Connection refused
Second host: mount NBD devices and create ZPOOL #
Initialise gnbd and check the NBD exports on the ZeroFS host:
gnbd load
gnbd exports -p 10809 10.0.0.10
nbd0
nbd1
nbd2
nbd3
Use gnbd to connect each device:
gnbd connect -n nbd0 10.0.0.10
gnbd connect -n nbd1 10.0.0.10
gnbd connect -n nbd2 10.0.0.10
gnbd connect -n nbd3 10.0.0.10
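Before creating the pool, it's worth confirming that the GEOM devices showed up as /dev/nbd0 through /dev/nbd3:
ls -l /dev/nbd*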
Create a RAID10-style mirrored zpool over the NBD devices (which may be overkill):
zpool create testpool mirror /dev/nbd0 /dev/nbd1 mirror /dev/nbd2 /dev/nbd3
Check the zpool exists:
zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
testpool 3.75G 468K 3.75G - - 0% 0% 1.00x ONLINE -
zroot 47.5G 4.07G 43.4G - - 0% 8% 1.00x ONLINE -
Check the zpool layout:
zpool status testpool
  pool: testpool
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        testpool      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            nbd0      ONLINE       0     0     0
            nbd1      ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            nbd2      ONLINE       0     0     0
            nbd3      ONLINE       0     0     0

errors: No known data errors
Create a ZFS dataset on the testpool zpool:
zfs create -o mountpoint=/mnt/testdata testpool/testdata
Add some files to the newly created ZFS dataset:
cp -a /usr/local/share/doc /mnt/testdata
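To see the copied data landing on the NBD-backed pool, check the dataset usage:
zfs list -r testpool
df -h /mnt/testdata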
Perform any other testing you need.
When you're finished testing, export the zpool before disconnecting:
zpool export testpool
Then disconnect the NBD devices:
gnbd disconnect nbd3
gnbd disconnect nbd2
gnbd disconnect nbd1
gnbd disconnect nbd0
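Optionally, if you are completely done with NBD on this host, the kernel module can be unloaded as well:
kldunload geom_nbd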
Conclusion #
If everything went smoothly, you now have a working proof of concept: S3-backed storage serving network block devices via ZeroFS, assembled into a zpool.
Questions, suggestions, or improvements? Reach out on Mastodon.