A Beginner's Guide to Building a Virtual Datacenter on FreeBSD with Ansible, Pot and More
Introduction
Back in 2020, a three-part blog series was published on building your own Virtual Datacenter (vDC).
While the detailed configuration instructions have since become outdated (the images offer many more options today than they did back then), you can read part 1, part 2 and part 3 as a refresher.
This guide introduces an ansible script to automate the provisioning of a vDC for your applications.
In addition to building a vDC with consul and nomad, it offers enhancements over the original blog series, such as monitoring and alerting, wireguard mesh networking, openldap and mariadb instances, and nginx nomad jobs.
This post is aimed at readers who are comfortable installing FreeBSD on servers or virtual machines, and who can rename the default network interface or find out how to do so.
You don't need to know much ansible; you can pick it up from the scripts.
Part 1: Understanding the Basics
1.1 FreeBSD And Containers
There are several container or jail management solutions for FreeBSD these days. We like pot.
1.2 Pot For Container Management
pot is a jail manager: a collection of shell scripts that combine standard FreeBSD features, such as ZFS and jails, into portable images that can run on other hosts.
1.3 Potluck Repository
The Potluck repository is a DockerHub-like repository of pot application images that can be downloaded and used without having to build your own.
Most of the potluck images have consul integration. You'll need a consul image running to make use of them.
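As an illustration, pulling a potluck image onto a pot-enabled host looks roughly like this; the image name and version below are placeholders, so check the potluck page of the image you want for the exact import command:
# import a published image, then list local pots
pot import -p consul-amd64-14_1 -t 2.2.2 -U https://potluck.honeyguide.net/consul
pot ls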
Part 2: The Ansible Pot Foundation Framework
2.1 What Is Installed?
The script will provision 1, 3 or 5 servers. Where 3 or 5 servers are used, a wireguard mesh network will connect them.
Two VLANs will be created off the default network interface, which must be named untrusted before you begin.
One VLAN will be for pot images, and the other will be for nomad jobs.
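To show the underlying FreeBSD mechanism only (the playbook creates and addresses the VLANs itself, and the IDs and addresses here are made up), VLAN sub-interfaces can be declared in /etc/rc.conf like this:
vlans_untrusted="10 20"
ifconfig_untrusted_10="inet 10.1.0.1 netmask 255.255.255.0"
ifconfig_untrusted_20="inet 10.101.0.1 netmask 255.255.255.0"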
In cluster environments with the wireguard mesh, all VLANs can see all other server VLANs over wireguard links. This makes it possible to build consul or nomad clusters over wireguard.
A pf firewall with a basic ruleset will be applied. One host will be a frontend host and accept HTTP requests.
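The ruleset applied by the playbook is more extensive, but a minimal pf.conf in the same spirit, assuming the untrusted interface and the usual SSH/HTTP/HTTPS ports, might look like:
ext_if = "untrusted"
set skip on lo0
block in log all
pass out quick keep state
pass in on $ext_if proto tcp to port { 22 80 443 } keep state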
For service discovery, a consul server or cluster will be set up, and for job orchestration, a nomad server or cluster is included. A nomad client will be installed directly on the servers.
A traefik-consul image is included, along with haproxy for the frontend public website and acme.sh for certificate registration.
Two nomad jobs for nginx pot jails are included: a public website and an intranet site linking to various services in the vDC.
An openldap instance is included, with a useful web GUI set up automatically, making it one of the easiest ways to get a fully functional openldap service running.
A mariadb instance is also included, but has no databases added at this stage.
For monitoring and alerting, the Beast-of-Argh pot image is used: a single pot collection of prometheus, alertmanager, promtail, loki, grafana and custom dashboards and alerts. (These applications are also available individually on the potluck website, albeit for more complex security environments.)
2.2 What Are The Applications Used For?
In the simplest vDC, consul is used for service discovery, while nomad is used for job orchestration. The traefik-consul image helps connect the two.
The monitoring and alerting component uses node_exporter for metrics, which are published via consul and pulled into the Beast-of-Argh image running prometheus. Additionally, syslog-ng sends logs to a loki syslog server.
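The Beast-of-Argh image ships with its own pre-built configuration, but the general shape of a prometheus scrape job that discovers targets registered in consul is roughly the following (the consul address is an example from this environment):
scrape_configs:
  - job_name: "consul-services"
    consul_sd_configs:
      - server: "10.1.0.10:8500"
    relabel_configs:
      - source_labels: [__meta_consul_service]
        target_label: job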
The Beast-of-Argh image is in daily use on live systems and is instrumental for keeping an eye on things. Custom dashboards are included.
wireguard is used to link servers in a cluster, or clients to the network for administration purposes.
pf is used to handle firewalling.
openldap is used for account management, and offers more functionality in combination with other pot images for mail or chat services.
mariadb is used as a general-purpose database for pot wordpress images, or for application databases. You might find postgresql better suited to your needs; that can be catered for with updates.
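When you are ready to use the mariadb instance, adding a database for an application is standard SQL; assuming remote access is permitted in your setup, connect to the mariadb pot (its IP appears in consul members, e.g. 10.1.0.13) with the root password from your hosts file. The database and user names below are just examples:
mysql -h 10.1.0.13 -u root -p
CREATE DATABASE wordpress;
CREATE USER 'wp'@'%' IDENTIFIED BY 'choose-a-password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wp'@'%';
FLUSH PRIVILEGES;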
We hope that this example setup with the basic components is sufficient to accelerate development of your own custom services hosted in the environment.
Part 3: Setting Up The Virtual Datacenter
3.1 Ensure The System Meets The Requirements
Your network interface must be named untrusted. The ansible script expects this name. The reason stems from practices in live pot environments, where it's considered a good idea.
You can configure this in /etc/rc.conf. Make sure to get the interface name correct, then reboot or restart networking.
As an example with a non-public IP, edit /etc/rc.conf so the default interface entries read:
ifconfig_igb0_name="untrusted"
ifconfig_untrusted="inet 192.168.1.1 netmask 255.255.255.0"
You must have ssh key access to the server as a user. This user must be a member of the wheel group and have sudo permissions without a password.
pkg install sudo
visudo
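# inside the sudoers file opened by visudo, add or uncomment: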
%wheel ALL=(ALL:ALL) NOPASSWD: ALL
You must have decided on the quarterly or latest package stream beforehand. This script will install from the already configured stream. pot images make use of quarterly packages.
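If you want to check or pin the stream explicitly, the standard FreeBSD mechanism is a repository override file; for example, to stay on quarterly, create /usr/local/etc/pkg/repos/FreeBSD.conf containing:
FreeBSD: {
  url: "pkg+https://pkg.FreeBSD.org/${ABI}/quarterly"
}
Running pkg -vv afterwards shows which repository url is active.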
3.2 Clone The git Repository
Clone the git repo:
pkg install git
git clone https://codeberg.org/Honeyguide/ansible-pot-foundation.git
cd ansible-pot-foundation
3.3 Copy hosts.sample To hosts File
Inside the correct inventory directory, copy the hosts.sample file to hosts and edit it to your needs.
For example, a 5 server cluster will use the file in inventory.five:
cd inventory.five
cp hosts.sample hosts
vi hosts
3.4 Configure Variables In hosts File
Replace the following values in your inventory hosts file with your data:
* REPLACE-WITH-USERNAME (username for your server)
* REPLACE-WITH-IP-HOSTNAME (IP or hostname of your server)
* REPLACE-WITH-EMAIL-ADDRESS (your email address)
* REPLACE-WITH-DOMAIN-NAME (your domain name)
* REPLACE-FRONTEND-IP (the IP address of your frontend host)
* REPLACE-GRAFANA-USER (username for grafana, e.g. admin)
* REPLACE-GRAFANA-PASSWORD (password for grafana user)
* REPLACE-WITH-LDAP-PASSWORD (ldap master password)
* REPLACE-DEFAULT-LDAP-USER (default openldap user)
* REPLACE-DEFAULT-LDAP-USER-PASSWORD (openldap user password)
* REPLACE-WITH-MYSQL-ROOT-PASSWORD (mariadb root password)
* REPLACE-WITH-PROM-SCRAPE-PASSWORD (password for prometheus stat scraping)
Some of these appear multiple times, for example REPLACE-WITH-EMAIL-ADDRESS, and may differ in value for different parts of the system. You might want one email address associated with acme.sh registration and another to receive system notices from alertmanager.
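For orientation, an ansible inventory in ini format generally looks like the snippet below. This is purely illustrative; the group names and variables the playbooks expect are defined in the repository's hosts.sample:
[servers]
server1 ansible_host=203.0.113.10 ansible_user=myuser
server2 ansible_host=203.0.113.11 ansible_user=myuser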
3.5 Run The site.yml Play
Install a full system by running:
ansible-playbook -i inventory.five/hosts site.yml
Make sure to specify the correct inventory. Single-server setups will use inventory.single, while cluster setups will use inventory.three or inventory.five.
The whole setup can take an hour to complete for new systems, and some errors are expected (and ignored) because they trigger tasks, such as missing keys which need to be created.
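If a run fails partway, re-running the same command is often enough, although whether every role is fully idempotent in your environment is something to verify. Standard ansible-playbook options help with troubleshooting: -vv adds verbosity, and --limit restricts a re-run to one host (bearing in mind that cluster steps may need all hosts present):
ansible-playbook -i inventory.five/hosts site.yml -vv
ansible-playbook -i inventory.five/hosts site.yml --limit server1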
3.6 If Problems Occur, Clean Environment And Try Again
You can remove the cloned pot images, persistent data, and wireguard setup with:
ansible-playbook -i inventory.five/hosts clean.yml
Part 4: Exploring The Demonstration vDC
4.1 This Is Great, Now What?
Command Line Activities
You can test some of the functionality in the base system. ssh to a server and try running the following commands as root:
# consul members
root@server1:~/bin # consul members
Node Address Status Type Build Protocol DC Partition Segment
server1_consul_server 10.1.0.10:8301 alive server 1.12.4 2 dc1 default <all>
server2_consul_server 10.2.0.10:8301 alive server 1.12.4 2 dc1 default <all>
server3_consul_server 10.3.0.10:8301 alive server 1.12.4 2 dc1 default <all>
server4_consul_server 10.4.0.10:8301 alive server 1.12.4 2 dc1 default <all>
server5_consul_server 10.5.0.10:8301 alive server 1.12.4 2 dc1 default <all>
server1_beast_server 10.1.0.100:8301 alive client 1.12.4 2 dc1 default <default>
server1_consul_client 10.1.0.1:8301 alive client 1.12.4 2 dc1 default <default>
server1_mariadb_server 10.1.0.13:8301 alive client 1.12.4 2 dc1 default <default>
server1_nomad_server 10.1.0.11:8301 alive client 1.12.4 2 dc1 default <default>
server1_openldap1_server 10.1.0.14:8301 alive client 1.12.4 2 dc1 default <default>
server1_traefikconsul_server 10.1.0.12:8301 alive client 1.12.4 2 dc1 default <default>
server2_consul_client 10.2.0.1:8301 alive client 1.12.4 2 dc1 default <default>
server2_nomad_server 10.2.0.11:8301 alive client 1.12.4 2 dc1 default <default>
server3_consul_client 10.3.0.1:8301 alive client 1.12.4 2 dc1 default <default>
server3_nomad_server 10.3.0.11:8301 alive client 1.12.4 2 dc1 default <default>
server4_consul_client 10.4.0.1:8301 alive client 1.12.4 2 dc1 default <default>
server4_nomad_server 10.4.0.11:8301 alive client 1.12.4 2 dc1 default <default>
server5_consul_client 10.5.0.1:8301 alive client 1.12.4 2 dc1 default <default>
server5_nomad_server 10.5.0.11:8301 alive client 1.12.4 2 dc1 default <default>
Or check the nomad tools in the /root/bin directory:
# bin/nomad-raft-status.sh
root@server1:~ # bin/nomad-raft-status.sh
Node ID Address State Voter RaftProtocol
nomad-server-clone.server4.global bf7c92a3-966b-e245-5ca2-2f544365b008 10.4.0.11:4647 leader true 3
nomad-server-clone.server2.global 13b42085-af1c-3e82-6210-2361556fe3cb 10.2.0.11:4647 follower true 3
nomad-server-clone.server5.global 4b336424-fe5b-ba84-6bc9-eb67dae5d2c6 10.5.0.11:4647 follower true 3
nomad-server-clone.server3.global ca64ecf5-a7b0-1910-8769-681835ad6dc1 10.3.0.11:4647 follower true 3
nomad-server-clone.server1.global 2f13c1c4-43a0-1464-1b78-3cccdb35e616 10.1.0.11:4647 follower true 3
And check on nomad jobs:
# bin/nomad-job-status.sh
root@server1:~ # bin/nomad-job-status.sh
ID Type Priority Status Submit Date
publicwebsite service 50 running 2023-05-04T22:53:16+02:00
stdwebsite service 50 running 2023-05-04T22:53:16+02:00
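A couple of other standard commands are worth trying from the same shell: consul catalog services lists everything registered for service discovery, and nomad job status shows the detail of a single job.
consul catalog services
nomad job status publicwebsite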
Graphical Interfaces
Wireguard Access
To access the intranet site, you need to connect to the wireguard network as a roadwarrior client.
Two wireguard roadwarrior clients are configured on the primary server. Using the keys created, you can add a wireguard link to the server.
root@server1:/usr/local/etc/wireguard # ls -al
total 26
drwx------ 2 root wheel 11 May 4 22:48 .
drwxr-xr-x 24 root wheel 42 May 1 21:27 ..
-rw------- 1 root wheel 45 May 4 22:48 admin_one_preshared.key
-rw------- 1 root wheel 45 May 4 22:48 admin_one_private.key
-rw------- 1 root wheel 45 May 4 22:48 admin_one_public.key
-rw------- 1 root wheel 45 May 4 22:48 admin_two_preshared.key
-rw------- 1 root wheel 45 May 4 22:48 admin_two_private.key
-rw------- 1 root wheel 45 May 4 22:48 admin_two_public.key
-rw------- 1 root wheel 45 May 4 22:48 server_private.key
-rw------- 1 root wheel 45 May 4 22:48 server_public.key
-rw------- 1 root wheel 1283 May 4 22:48 wg0.conf
Get the contents of admin_one_preshared.key, admin_one_private.key, admin_one_public.key and server_public.key and configure a wireguard connection on your client host:
[Interface]
PrivateKey = { contents admin_one_private.key }
Address = 10.254.1.2/32
ListenPort = 51820
DNS = 1.1.1.1
MTU = 1360
[Peer]
PublicKey = { contents server_public.key }
PresharedKey = { contents admin_one_preshared.key }
Endpoint = { server1 public IP }:51820
AllowedIPs = 10.0.0.0/8
PersistentKeepalive = 25
Then connect wireguard to join the environment.
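If you saved the configuration above as, say, vdc-admin.conf and have wireguard-tools installed on the client, one way to bring the tunnel up is wg-quick (GUI clients can import the same file); the filename is arbitrary:
wg-quick up ./vdc-admin.conf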
Dashboards And More
You should now be able to access the intranet page at http://10.101.0.1:25080/, with links to the following services on different IPs within the environment:
- Alertmanager
- Grafana
- Prometheus
- Consul UI
- Nomad UI
- Traefik-Consul Dashboard
4.2 Build Your Own Applications!
You can now build your own applications, to be run via nomad jobs or as additional pot images. Monitoring and alerting are catered for, along with authentication via openldap and databases in mariadb.
Perhaps you'd like to add a mail server pot image, or run one or many wordpress sites via nomad jobs?
The basic foundation is in place for you to start experimenting with, or to adapt to your needs.
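The two bundled nginx jobs are the best template to copy, but for orientation, a nomad job that runs a pot image via the nomad-pot-driver has roughly the following shape. The image URL, pot name, tag and command are placeholders, and the exact config keys should be checked against the driver's documentation and the bundled job files:
job "mysite" {
  datacenters = ["dc1"]
  type        = "service"
  group "web" {
    task "www" {
      driver = "pot"
      config {
        # placeholder values - copy the real ones from the bundled nginx jobs
        image   = "https://potluck.honeyguide.net/nginx-nomad/"
        pot     = "nginx-nomad-amd64-13_1"
        tag     = "1.1.20"
        command = "/usr/local/bin/cook"
      }
      resources {
        cpu    = 200
        memory = 64
      }
    }
  }
}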
4.3 Potential Applications For The vDC
Multiple live environments make use of a setup similar to this one, using pot images on FreeBSD. The infrastructure that hosts this site, as well as the Potluck image repository, works in the same way, just with more pot image applications and nomad jobs in the mix.
Part 5: Expanding Knowledge And Resources
5.1 Useful Links
- Pot homepage and github
- Potluck site and github
- Hashicorp Consul
- Hashicorp Nomad
- Traefik
- Wireguard
- Prometheus and Alertmanager
- Grafana and Grafana Loki
5.2 Advanced Topics
You can check out the other pot images on the Potluck site and integrate them manually, or with updates to the ansible playbooks.
You can also try your hand at building pot images within your environment. Check out the sources on github for ideas.
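A hands-on way to start is to create a pot manually on one of the servers and experiment inside it before scripting anything. The pot name and base version below are only examples, and depending on your setup you may first need to fetch the matching base with pot create-base:
# create a jail from a base, start it, open a shell inside, then clean up
pot create -p mytest -t single -b 13.2
pot start mytest
pot term mytest
pot stop mytest
pot destroy -p mytest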
5.3 Feedback
We’d love to get some feedback, or even better, pull requests on github. Let us know how the setup performs in your environment.
Summary
The ansible pot foundation vDC is an easy-to-deploy, functional environment to host your applications. It can help reduce the time taken to set up an environment for evaluating FreeBSD with pot containers.