ZeroFS: S3-backed NFS service on FreeBSD (proof of concept, part 1) #
Introduction #
ZeroFS turns S3 storage into a high-performance, always-encrypted, globally accessible, and scalable file system, with no FUSE drivers required. There's also an official ZeroFS GitHub repo.
ZeroFS makes S3 storage feel like a real filesystem. Built on SlateDB, it
provides file-level access via NFS and 9P and block-level access via NBD. Fast
enough to compile code on, with clients already built into your OS. No FUSE
drivers, no kernel modules, just mount and go.
This post walks through a FreeBSD proof of concept using ZeroFS for NFS (with a brief note on 9P). It’s part 1 in a short series.
Part 2 will follow once FreeBSD has better NBD support, enabling ZFS pools backed by S3.
If you want to experiment with ZeroFS on FreeBSD to provide NFS and 9P shares, follow along.
Prerequisites #
A FreeBSD 14.3 virtual host with ZFS. We'll install a temporary S3 service using Garage and run the ZeroFS binary. In this guide, that host's IP is 10.0.0.10.
A second host on the same network for NFS-client testing.
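To confirm the basics on the virtual host before starting, you can check the FreeBSD version and that the ZFS pool is available (the pool name zroot is assumed here, matching the dataset commands later on):
freebsd-version -ku
zpool list zroot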
Outline #
Install an S3 service on the virtual host, using Garage as the S3 provider, and configure credentials and a bucket with access rights.
Then install ZeroFS and configure a shell script to start it, making an NFS share available, with its contents stored inside the SlateDB database in the S3 bucket.
Finally, mount the NFS share on another host, write files, unmount, remount, and perform any other testing.
Preparation #
Use the latest pkg repository on your virtual host:
mkdir -p /usr/local/etc/pkg/repos
echo 'FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest" }' > /usr/local/etc/pkg/repos/FreeBSD.conf
Update and force-upgrade installed packages:
pkg update
pkg upgrade -f
Install a few handy tools:
pkg install -y ca_root_nss curl openssl tmux
Install Garage #
Create ZFS datasets for Garage:
zfs create -o mountpoint=/opt zroot/opt
zfs create -o mountpoint=/opt/garagemetadata zroot/opt/garagemetadata
zfs create -o mountpoint=/mnt/data zroot/data
zfs create -o mountpoint=/mnt/data/garage zroot/data/garage
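If you want to double-check the new datasets and their mountpoints:
zfs list -o name,mountpoint -r zroot/opt zroot/data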
Install garage and minio-client:
pkg install garage minio-client
Generate three internal passwords using:
openssl rand -hex 32
Create /usr/local/etc/garage.toml (replace SECRET1, SECRET2, and SECRET3 with the values generated above):
metadata_dir = "/opt/garagemetadata"
data_dir = "/mnt/data/garage"
db_engine = "lmdb"
replication_mode = "none"
rpc_bind_addr = "127.0.0.1:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "SECRET1"
bootstrap_peers = []
[s3_api]
s3_region = "garage"
api_bind_addr = "0.0.0.0:3900"
root_domain = ".s3.garage"
[s3_web]
bind_addr = "0.0.0.0:3902"
root_domain = ".web.garage"
index = "index.html"
[admin]
api_bind_addr = "0.0.0.0:3903"
admin_token = "SECRET2"
metrics_token = "SECRET3"
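If you prefer to script this step, a small sketch like the following (assuming the file above was saved with the literal SECRET1, SECRET2, and SECRET3 placeholders) generates the passwords and substitutes them in place with BSD sed:
SECRET1=$(openssl rand -hex 32)
SECRET2=$(openssl rand -hex 32)
SECRET3=$(openssl rand -hex 32)
sed -i '' -e "s/SECRET1/$SECRET1/" -e "s/SECRET2/$SECRET2/" -e "s/SECRET3/$SECRET3/" /usr/local/etc/garage.toml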
Register RC variables:
sysrc garage_config="/usr/local/etc/garage.toml"
sysrc garage_log_file="/var/log/garage.log"
Enable and start Garage:
service garage enable
service garage start
Verify it’s running:
tail -f /var/log/garage.log
or
garage status
Garage still needs a layout and a user. Get the Garage node id with:
/usr/local/bin/garage status | tail -1 | awk '{ print $1 }'
Assign a layout (substitute your zone name, capacity, and node ID):
/usr/local/bin/garage layout assign -z <name> -c <sizeGB> <nodeid>
For example:
/usr/local/bin/garage layout assign -z testzerofs -c 20GB 84d82b9a0310eb03
Apply the layout:
/usr/local/bin/garage layout apply --version 1
Confirm the layout:
/usr/local/bin/garage layout show
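These layout steps can also be scripted; this sketch captures the node ID into a shell variable and assumes the single-node example zone and capacity used above:
NODE_ID=$(/usr/local/bin/garage status | tail -1 | awk '{ print $1 }')
/usr/local/bin/garage layout assign -z testzerofs -c 20GB "$NODE_ID"
/usr/local/bin/garage layout apply --version 1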
Create an administrative key (the credentials below are an example only; run the command yourself and use the Key ID and Secret key it prints rather than copying these values):
/usr/local/bin/garage key create admin
Key name: admin
Key ID: GKd998a885674ec7e6eaaa3587
Secret key: 088946b12c01af906fb7e2ab1ee231cbe64dd1ca4728458628516c89c3374a1f
Can create buckets: false
Key-specific bucket aliases:
Authorized buckets:
You’ll use the Key ID and Secret Key with ZeroFS.
Create a bucket and grant access:
/usr/local/bin/garage bucket create files
/usr/local/bin/garage bucket allow --read --write --owner files --key admin
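To confirm the grant took effect, Garage can show the bucket details, including which keys are authorized:
/usr/local/bin/garage bucket info files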
Optionally, create /root/.minio-client/config.json so minio-client can list the bucket contents. Adjust the garage alias to match your admin key (Key ID → accessKey, Secret key → secretKey):
{
"version": "10",
"aliases": {
"gcs": {
"url": "https://storage.googleapis.com",
"accessKey": "YOUR-ACCESS-KEY-HERE",
"secretKey": "YOUR-SECRET-KEY-HERE",
"api": "S3v2",
"path": "dns"
},
"local": {
"url": "http://127.0.0.1:9000",
"accessKey": "YOUR-ACCESS-KEY-HERE",
"secretKey": "YOUR-SECRET-KEY-HERE",
"api": "S3v4",
"path": "auto"
},
"garage": {
"url": "http://127.0.0.1:3900",
"accessKey": "GKd998a885674ec7e6eaaa3587",
"secretKey": "088946b12c01af906fb7e2ab1ee231cbe64dd1ca4728458628516c89c3374a1f",
"api": "S3v4",
"path": "auto"
},
"play": {
"url": "https://play.min.io",
"accessKey": "Q3AM3UQ867SPQQA43P2F",
"secretKey": "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG",
"api": "S3v4",
"path": "auto"
},
"s3": {
"url": "https://s3.amazonaws.com",
"accessKey": "YOUR-ACCESS-KEY-HERE",
"secretKey": "YOUR-SECRET-KEY-HERE",
"api": "S3v4",
"path": "dns"
}
}
}
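With that config in place you can browse the bucket through the garage alias (the FreeBSD port installs the client as minio-client rather than mc):
minio-client ls garage
minio-client ls garage/files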
Install ZeroFS #
ZeroFS is in the ports tree, though a pre-built pkg may or may not be available.
To install:
pkg install zerofs
To build from ports:
pkg install git pkgconf gmake rust
echo "DEFAULT_VERSIONS+=ssl=openssl" > /etc/make.conf
git clone --depth=1 -b main https://git.freebsd.org/ports.git /usr/ports
cd /usr/ports/filesystems/zerofs/
make install
Building ZeroFS will take a little while.
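Either way, you can confirm the package is registered and locate the binary before continuing:
pkg info -x zerofs
which zerofs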
Because there’s no rc service/variables yet, use a small wrapper script to start ZeroFS with the right environment.
Create /root/bin/start-zerofs.sh (set AWS_ACCESS_KEY_ID to your Garage admin Key ID and AWS_SECRET_ACCESS_KEY to the Secret key):
#!/bin/sh
SLATEDB_CACHE_DIR=/tmp/zerofs-cache
export SLATEDB_CACHE_DIR
SLATEDB_CACHE_SIZE_GB=5
export SLATEDB_CACHE_SIZE_GB
ZEROFS_ENCRYPTION_PASSWORD=test-secret-test
export ZEROFS_ENCRYPTION_PASSWORD
AWS_ENDPOINT=http://127.0.0.1:3900
export AWS_ENDPOINT
AWS_ACCESS_KEY_ID=GKd998a885674ec7e6eaaa3587
export AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=088946b12c01af906fb7e2ab1ee231cbe64dd1ca4728458628516c89c3374a1f
export AWS_SECRET_ACCESS_KEY
AWS_ALLOW_HTTP=true
export AWS_ALLOW_HTTP
AWS_DEFAULT_REGION=garage
export AWS_DEFAULT_REGION
ZEROFS_MEMORY_CACHE_SIZE_GB=1
export ZEROFS_MEMORY_CACHE_SIZE_GB
ZEROFS_NFS_HOST=0.0.0.0
export ZEROFS_NFS_HOST
ZEROFS_NFS_HOST_PORT=2049
export ZEROFS_NFS_HOST_PORT
ZEROFS_9P_HOST=0.0.0.0
export ZEROFS_9P_HOST
ZEROFS_9P_PORT=5564
export ZEROFS_9P_PORT
/usr/local/bin/zerofs s3://files
Make it executable:
chmod +x /root/bin/start-zerofs.sh
Create the cache directory:
mkdir -p /tmp/zerofs-cache
Start ZeroFS (a tmux session is handy to keep it running):
/root/bin/start-zerofs.sh
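For example, to keep it running in a detached tmux session (the session name zerofs is just a suggestion):
tmux new-session -d -s zerofs /root/bin/start-zerofs.sh
tmux attach -t zerofs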
Test mounting the NFS share #
On another host, create a mount point:
mkdir -p /mnt/files
Mount the NFS export (replace the IP with your ZeroFS host):
mount -t nfs -o nfsv3,nolockd,bg,hard,tcp,port=2049,mountport=2049,rsize=1048576,wsize=1048576 10.0.0.10:/ /mnt/files
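If the mount succeeds, the export shows up like any other NFS filesystem:
df -h /mnt/files
mount -t nfs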
Create some test files:
cd /mnt/files
echo $(openssl rand -base64 1000) > test1.txt
echo $(openssl rand -hex 1000) > test2.txt
echo "This is my sample text" > test3.txt
(Or copy in larger files such as an ISO, photos, or videos.)
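Before unmounting, it can be useful to record checksums so you can verify the data after remounting (sha256 is part of the FreeBSD base system; the output path below is just an example):
sha256 /mnt/files/test*.txt > /tmp/zerofs-checksums.txt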
Unmount:
umount /mnt/files
You may see a benign error because there are no RPC services on the ZeroFS host:
umount /mnt/files
umount: 10.0.0.10: MOUNTPROG: RPC: Remote system error - Connection refused
Re-mount:
mount -t nfs -o nfsv3,nolockd,bg,hard,tcp,port=2049,mountport=2049,rsize=1048576,wsize=1048576 10.0.0.10:/ /mnt/files
Confirm contents:
ls -al /mnt/files
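If you recorded checksums earlier, compare them against the remounted files; no output from diff means the contents match:
sha256 /mnt/files/test*.txt | diff /tmp/zerofs-checksums.txt -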
For mount details:
nfsstat -m
Unmount again:
umount /mnt/files
To make the mount persistent, first enable the NFS client:
sysrc nfs_client_enable="YES"
Then add this to /etc/fstab (use tabs; substitute your host IP):
#
10.0.0.10:/ /mnt/files nfs rw,nolockd,bg,hard,tcp,port=2049,mountport=2049,rsize=1048576,wsize=1048576 0 0
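Before rebooting, you can sanity-check the fstab entry by mounting the share by its mount point alone and unmounting it again:
mount /mnt/files
umount /mnt/files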
Reboot:
shutdown -r now
After boot, check /mnt/files to confirm the NFS share is mounted and your files are present.
What about 9P? #
This post doesn’t cover mounting the 9P share. We need FreeBSD 15’s virtio_p9fs module for straightforward 9P client support. It may be possible to use vm-bhyve’s 9P features, but that’s outside this post’s scope.
Conclusion #
If everything went smoothly, you now have a working proof of concept: S3-backed storage serving an NFS export via ZeroFS.
Questions, suggestions, or improvements? Reach out on Mastodon.