Micropod-sampler: A Minimal Viable FreeBSD-based Container Virtual Data Center PoC

Introduction

The Micropod-sampler Ansible playbook builds a minimal viable OCI container-based virtual data center.

The host is FreeBSD. The containers are FreeBSD, too.

The tooling is buildah and podman, which provide a Docker-like container experience on FreeBSD.

The following playbook will configure everything needed to run a small virtual data center with:

  • minio for S3
  • consul for service orchestration
  • nomad for job management
  • traefik-consul for routing (not in use)
  • nginx-s3 nomad job to load website from S3 bucket in minio

Prerequisites

You will need a FreeBSD 14.0+ virtual machine or host.

The package stream must be configured to use latest packages:

sudo mkdir -p /usr/local/etc/pkg/repos/
echo 'FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest" }' | sudo tee /usr/local/etc/pkg/repos/FreeBSD.conf
sudo pkg update -f
sudo pkg upgrade
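To confirm the repository now tracks the latest stream, you can inspect the configuration pkg is actually using (a quick check; the exact output layout varies by pkg version):

```shell
# Print the effective repository configuration; the url shown for the
# FreeBSD repository should end in /latest rather than /quarterly.
pkg -vv | grep -A 4 'FreeBSD: {'
```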

You will need SSH keys configured on the host for automatic access, or pass in --ask-become-pass to the ansible-playbook commands.
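A one-time key setup might look like this (a sketch; replace the user and host placeholders with your own values):

```shell
# Generate an ed25519 key pair if one does not already exist,
# then install the public key on the target host:
[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
ssh-copy-id YOUR-USERNAME@YOUR-HOST-IP
```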

Run Playbook On Your Own Computer

Clone the git repo and change into the directory:

git clone https://codeberg.org/Honeyguide/micropod-sampler.git
cd micropod-sampler

Create a python virtual environment:

python3 -m venv .venv
source .venv/bin/activate
(.venv) .venv/bin/python3 -m pip install --upgrade pip
(.venv) .venv/bin/python3 -m pip install -r requirements.txt

Copy the hosts.sample file to hosts and edit to your needs:

cp inventory/hosts.sample inventory/hosts

Edit the file to replace the placeholder values: YOUR-USERNAME once, and YOUR-HOST-IP twice.

If the network interface is not vtnet0, adjust to the correct interface name:

[all:vars]
my_default_username=YOUR-USERNAME

# set to python3 or more specifically python3.9 or python3.12
[local]
localhost ansible_connection=local ansible_python_interpreter="env python3.12"

[servers]
server1 ansible_host=YOUR-HOST-IP ansible_port=22

[servers:vars]
my_network_interface=vtnet0
my_default_ip=YOUR-HOST-IP
bsd_packages_stream=latest
my_datacenter=micropod
external_dns_server=1.1.1.1
pf_ipv6_enable=yes
minio_name=myminio
minio_user=admin
minio_pass=s4mpl3p4ssw0rd
minio_bucket=mybucket
minio_ip_address=10.88.0.10
consul_ip_address=10.88.0.11
consul_gossip_key="oSPiRcrbp96JoWYO6SNzpAgItM14zlvZiw2OPT0UGmA="
nomad_ip_address=10.88.0.12
nomad_gossip_key="oSPiRcrbp96JoWYO6SNzpAgItM14zlvZiw2OPT0UGmA="
traefik_ip_address=10.88.0.13
nginx_s3_ip=10.88.0.20
nginx_s3_port=25000
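The placeholder substitution can be done in one pass with sed. The example below runs on a scratch file so it is safe to try anywhere; point the same expressions at inventory/hosts for real use (BSD sed wants sed -i '' for in-place edits, GNU sed plain sed -i). On the FreeBSD host itself, ifconfig -l lists the available interface names if you need to check what to put in my_network_interface.

```shell
# Demonstrate the substitution on a scratch copy (example values):
printf 'my_default_username=YOUR-USERNAME\nmy_default_ip=YOUR-HOST-IP\n' > /tmp/hosts.demo
sed -e 's/YOUR-USERNAME/alice/' -e 's/YOUR-HOST-IP/192.0.2.10/g' \
    /tmp/hosts.demo > /tmp/hosts.demo.out
cat /tmp/hosts.demo.out
```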

Run the script to provision your virtual machine or host:

(.venv) .venv/bin/ansible-playbook -i inventory/hosts site.yml

The output of a successful run will look like this:

TASK [add_nomad_jobs : Debug nomad job status] *************************************************************************
ok: [server1] => {
    "nomad_job_status.stdout_lines": [
        "ID        Type     Priority  Status   Submit Date",
        "nginx-s3  service  50        running  2024-08-04T17:15:46+02:00"
    ]
}

TASK [add_nomad_jobs : Check if nginx-s3 is performing correctly] ******************************************************
changed: [server1]

TASK [add_nomad_jobs : Debug nginx-s3 status] **************************************************************************
ok: [server1] => {
    "nginx_s3_status.stdout_lines": [
        "<html>",
        "<head>",
        "<title>S3 default page</title>",
        "</head>",
        "<body>",
        "<p>blank page</p>",
        "</body>",
        "</html>"
    ]
}

PLAY RECAP *************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
server1                    : ok=92   changed=36   unreachable=0    failed=0    skipped=5    rescued=0    ignored=0

This confirms that the nginx container, deployed as a nomad job, can serve the files from the minio S3 bucket.
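You can also check the page by hand from your own machine. This assumes the nginx-s3 job is reachable on the host at the port set by nginx_s3_port in the inventory (25000 in the example above); adjust if your routing differs:

```shell
# Fetch the default page served out of the minio bucket:
curl -s http://YOUR-HOST-IP:25000/
```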

What Next?

Take a look in roles/create_container_files/files/micropod/ to see how each Containerfile is set up, along with its entrypoint.sh and associated configuration files.

Build new containers based on how these are configured.

The playbook can be adapted to build and run additional containers.
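As a starting point, a new container might look like the following. This is a hypothetical sketch: the base image name freebsd-base, the directory, and the file contents are assumptions, not part of the playbook; mirror the real layout under roles/create_container_files/files/micropod/ instead.

```shell
mkdir -p mycontainer && cd mycontainer

# Hypothetical Containerfile; the base image name is an assumption.
cat > Containerfile <<'EOF'
FROM localhost/freebsd-base
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
CMD ["/bin/sh", "/usr/local/bin/entrypoint.sh"]
EOF

# A trivial entrypoint so the container does something visible:
printf '#!/bin/sh\necho "hello from mycontainer"\n' > entrypoint.sh

# Build and run it with the same tooling the playbook uses:
buildah build -t mycontainer .
podman run --rm localhost/mycontainer
```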