How To Set Up a Minio Cluster From Potluck, Complete With Nextcloud, Monitoring And Alerting
Introduction
Let’s build a two-server minio storage system, complete with alerting and monitoring. Then, to illustrate a practical application, we’ll include a nextcloud pot image that uses minio S3 as file storage.
Hint: If you do not want to go through the complete step-by-step guide, you can also skip directly to the end of this article and download the Minio-Sampler.
Requirements
You need two servers connected via a high-speed network, minimum 1Gb/s.
Each server needs the following basic configuration:
- Two x 2TB SATA/SSD drives, configured as a ZFS mirror for ZROOT
- Four x 4TB SATA drives, each of which becomes a minio disk
- 64GB to 128GB memory
Drive Arrays
Configure ZROOT as a ZFS mirror array of two disks, which includes the default zroot & related datasets.
Two additional datasets, srv and data, are created on this mirror array. srv is where pot images will be run from, and data is for storing persistent data.
Hint: the pot default is to use /opt for running pots.
With the 4 extra disks on each server, minio will operate with erasure coding across 8 drives and 2 servers.
For the purposes of illustration, the minio hosts below are on a firewalled internal network in the 10.100.1.0 range. Where external access is required, such as to the nextcloud web frontend, a suitable forwarding/proxy rule is required on a firewall host.
Part 1: Minio Installation
We’ll assume FreeBSD-13.1 is already installed, with a ZFS mirror of 2 drives for zroot, with dataset zroot/srv mounted on /mnt/srv, and dataset zroot/data mounted on /mnt/data.
1. Preparing HOSTS File and DNS
On Both Servers
Add the IP and hostname for the minio hosts to /etc/hosts:
10.100.1.3 minio1
10.100.1.4 minio2
Make sure your internal DNS servers are listed in /etc/resolv.conf, or use public resolvers such as 8.8.8.8, 1.1.1.1, or 9.9.9.9:
nameserver 1.1.1.1
nameserver 9.9.9.9
2. Configure ssh Access
On Both Servers
As root, run ssh-keygen and accept the defaults.
Edit /root/.ssh/config and add:
Host minio1
HostName 10.100.1.3
StrictHostKeyChecking no
User %%username%%
IdentityFile ~/.ssh/id_rsa
Port 22
ServerAliveInterval 20
Host minio2
HostName 10.100.1.4
StrictHostKeyChecking no
User %%username%%
IdentityFile ~/.ssh/id_rsa
Port 22
ServerAliveInterval 20
Add the pubkey from minio1 to /root/.ssh/authorized_keys on minio2.
Add the pubkey from minio2 to /root/.ssh/authorized_keys on minio1.
Connect from each server to the other, and accept the host keys.
Remote root access can be removed shortly after setup; it’s only needed to scp files from one host to the other.
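As a quick check (assuming the keys above are in place), a command on the peer should run without a password prompt, for example from minio1:
ssh root@minio2 hostname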
3. Set Up Self-Signed Certificates For Use By minio
On minio1
Run the following commands:
mkdir -p /usr/local/etc/ssl/CAs
cd /usr/local/etc/ssl/CAs
openssl genrsa -out rootca.key 8192
openssl req -sha256 -new -x509 -days 3650 -key rootca.key -out rootca.crt -subj "/C=US/ST=None/L=City/O=Organisation/CN=minio1"
rsync -avz rootca.key root@minio2:/usr/local/etc/ssl/CAs/
rsync -avz rootca.crt root@minio2:/usr/local/etc/ssl/CAs/
Create /usr/local/etc/ssl/openssl.conf:
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 10.100.1.3
DNS.1 = minio1
Run the following commands:
cd /usr/local/etc/ssl
openssl genrsa -out private.key 4096
chown minio:minio /usr/local/etc/ssl/private.key
chmod 0644 /usr/local/etc/ssl/private.key
openssl req -new -key private.key -out public.crt -subj "/C=US/ST=None/L=City/O=Organisation/CN=minio1"
openssl x509 -req -in public.crt -CA CAs/rootca.crt -CAkey CAs/rootca.key -CAcreateserial -out public.crt -days 3650 -sha256 -extfile openssl.conf
chown minio:minio /usr/local/etc/ssl/public.crt
chmod 0644 /usr/local/etc/ssl/public.crt
Create a certificate bundle for nginx:
cat public.crt CAs/rootca.crt >> bundle.pem
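Optionally, you can confirm that the signed certificate chains back to the local CA; this check is not part of the original steps, but is harmless:
cd /usr/local/etc/ssl
openssl verify -CAfile CAs/rootca.crt public.crt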
On minio2
Create /usr/local/etc/ssl/openssl.conf:
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 10.100.1.4
DNS.1 = minio2
Run the following commands:
cd /usr/local/etc/ssl
openssl genrsa -out private.key 4096
chown minio:minio /usr/local/etc/ssl/private.key
chmod 0644 /usr/local/etc/ssl/private.key
openssl req -new -key private.key -out public.crt -subj "/C=US/ST=None/L=City/O=Organisation/CN=minio2"
openssl x509 -req -in public.crt -CA CAs/rootca.crt -CAkey CAs/rootca.key -CAcreateserial -out public.crt -days 3650 -sha256 -extfile openssl.conf
chown minio:minio /usr/local/etc/ssl/public.crt
chmod 0644 /usr/local/etc/ssl/public.crt
Create a certificate bundle for nginx:
cat public.crt CAs/rootca.crt >> bundle.pem
4. Preparing The SATA Drives
On Both Servers
mkdir -p /mnt/minio
gpart create -s GPT ada1
gpart add -t freebsd-zfs -l minio-disk1 ada1
zpool create -m /mnt/minio/disk1 minio-disk1 ada1p1
gpart create -s GPT ada2
gpart add -t freebsd-zfs -l minio-disk2 ada2
zpool create -m /mnt/minio/disk2 minio-disk2 ada2p1
gpart create -s GPT ada3
gpart add -t freebsd-zfs -l minio-disk3 ada3
zpool create -m /mnt/minio/disk3 minio-disk3 ada3p1
gpart create -s GPT ada4
gpart add -t freebsd-zfs -l minio-disk4 ada4
zpool create -m /mnt/minio/disk4 minio-disk4 ada4p1
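The four pools should now be online and mounted under /mnt/minio; a quick way to confirm:
zpool status minio-disk1 minio-disk2 minio-disk3 minio-disk4
zpool list | grep minio-disk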
5. Installation and Configuration of minio
On Both Servers
pkg install -y minio
service minio enable
sysrc minio_certs="/usr/local/etc/ssl"
sysrc minio_env="MINIO_ACCESS_KEY=access12345 MINIO_SECRET_KEY=password1234567890"
sysrc minio_disks="https://minio{1...2}:9000/mnt/minio/disk{1...4}"
service minio start
Access minio at https://10.100.1.3:9000 or https://10.100.1.4:9000 and log in with the same credentials used during setup.
You will need to accept the self-signed certificate!
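If you prefer to verify the cluster from the command line before continuing, a minimal sketch using the minio-client package (assuming the access key and secret set above; the alias name local is arbitrary):
pkg install -y minio-client
minio-client alias set local https://minio1:9000 access12345 password1234567890 --insecure
minio-client admin info local --insecure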
Part 2: Monitoring
In order to run the beast-of-argh monitoring solution, or the nextcloud nomad image, we need to set up several pot images on the minio1 host, specifically:
- Consul server
- Nomad server
- Mariadb server
Then we can use the beast-of-argh image, which makes use of consul, node_exporter and syslog-ng.
We can also use nomad to schedule a nextcloud instance after setting up a bucket in minio.
6. Installation and Configuration of ZFS Datasets For This Project
Reminder: zroot/srv is mounted on /mnt/srv, and zroot/data on /mnt/data.
On minio1
zfs create -o mountpoint=/mnt/srv zroot/srv
zfs create -o mountpoint=/mnt/srv/pot zroot/srv/pot
zfs create -o mountpoint=/mnt/data zroot/data
zfs create -o mountpoint=/mnt/data/jaildata zroot/data/jaildata
zfs create -o mountpoint=/mnt/data/jaildata/traefik zroot/data/jaildata/traefik
zfs create -o mountpoint=/mnt/data/jaildata/beast zroot/data/jaildata/beast
zfs create -o mountpoint=/mnt/data/jaildata/mariadb zroot/data/jaildata/mariadb
mkdir -p /mnt/data/jaildata/mariadb/var_db_mysql
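A quick listing confirms the datasets and mountpoints look as expected:
zfs list -r -o name,mountpoint zroot/srv zroot/data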
7. Install Client-Side consul and nomad With Configuration
On Both Servers
Install the consul, nomad and node_exporter packages:
pkg install -y consul nomad node_exporter
Create necessary directories:
mkdir -p /usr/local/etc/consul.d
chown consul:wheel /usr/local/etc/consul.d
chmod 750 /usr/local/etc/consul.d
Generate a gossip key with consul keygen and save it for later use:
consul keygen
BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=
Create a user for node_exporter and configure node_exporter for later use:
pw useradd -n nodeexport -c 'nodeexporter user' -m -s /usr/bin/nologin -h -
service node_exporter enable
sysrc node_exporter_args="--log.level=warn"
sysrc node_exporter_user=nodeexport
sysrc node_exporter_group=nodeexport
service node_exporter restart
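node_exporter should now be serving metrics on port 9100; a simple check from the host itself:
fetch -qo - http://localhost:9100/metrics | head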
On minio1:
Configure /usr/local/etc/consul.d/agent.json for consul, using the gossip key created above:
{
"bind_addr": "10.100.1.3",
"server": false,
"node_name": "minio1",
"datacenter": "myminiotest",
"log_level": "WARN",
"data_dir": "/var/db/consul",
"verify_incoming": false,
"verify_outgoing": false,
"verify_server_hostname": false,
"verify_incoming_rpc": false,
"encrypt": "BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=",
"enable_syslog": true,
"leave_on_terminate": true,
"start_join": [
"{{ consul_ip }}"
],
"telemetry": {
"prometheus_retention_time": "24h"
},
"service": {
"name": "node-exporter",
"tags": ["_app=host-server", "_service=node-exporter", "_hostname=minio1", "_datacenter=myminiotest"],
"port": 9100
}
}
Check permissions:
chown -R consul:wheel /usr/local/etc/consul.d/
Don’t start the consul service yet.
Configure /usr/local/etc/nomad/client.hcl for nomad:
bind_addr = "10.100.1.3"
datacenter = "myminiotest"
advertise {
# This should be the IP of THIS MACHINE and must be routable by every node
# in your cluster
http = "10.100.1.3"
rpc = "10.100.1.3"
}
client {
enabled = true
options {
"driver.raw_exec.enable" = "1"
}
servers = ["10.100.1.11"]
}
plugin_dir = "/usr/local/libexec/nomad/plugins"
consul {
address = "127.0.0.1:8500"
client_service_name = "minio1"
auto_advertise = true
client_auto_join = true
}
tls {
http = false
rpc = false
verify_server_hostname = false
verify_https_client = false
}
telemetry {
collection_interval = "15s"
publish_allocation_metrics = true
publish_node_metrics = true
prometheus_metrics = true
disable_hostname = true
}
enable_syslog=true
log_level="WARN"
syslog_facility="LOCAL1"
Remove unnecessary files and set permissions:
rm -r /usr/local/etc/nomad/server.hcl
chown nomad:wheel /usr/local/etc/nomad/client.hcl
chmod 644 /usr/local/etc/nomad/client.hcl
chown root:wheel /var/tmp/nomad
chmod 700 /var/tmp/nomad
mkdir -p /var/log/nomad
touch /var/log/nomad/nomad.log
Finally, configure the nomad sysrc entries:
sysrc nomad_user="root"
sysrc nomad_group="wheel"
sysrc nomad_env="PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/sbin:/bin"
sysrc nomad_args="-config=/usr/local/etc/nomad/client.hcl"
sysrc nomad_debug="YES"
Don’t start the nomad service yet.
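The guide doesn’t show it explicitly, but the nomad service also needs to be enabled (without starting it yet) so that the restart in step 10 works; assuming the rc script from the FreeBSD package:
service nomad enable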
On minio2
Configure /usr/local/etc/consul.d/agent.json for consul, using the gossip key created above:
{
"bind_addr": "10.100.1.4",
"server": false,
"node_name": "minio2",
"datacenter": "myminiotest",
"log_level": "WARN",
"data_dir": "/var/db/consul",
"verify_incoming": false,
"verify_outgoing": false,
"verify_server_hostname": false,
"verify_incoming_rpc": false,
"encrypt": "BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=",
"enable_syslog": true,
"leave_on_terminate": true,
"start_join": [
"{{ consul_ip }}"
],
"telemetry": {
"prometheus_retention_time": "24h"
},
"service": {
"name": "node-exporter",
"tags": ["_app=host-server", "_service=node-exporter", "_hostname=minio2", "_datacenter=myminiotest"],
"port": 9100
}
}
Check permissions:
chown -R consul:wheel /usr/local/etc/consul.d/
Don’t start the consul service yet.
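Likewise, enable (but don’t start) the consul service on both servers so the restart in step 9 succeeds; this assumes the FreeBSD package’s rc script, which defaults to /usr/local/etc/consul.d as the config directory:
service consul enable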
8. Installation and Configuration of pot
On minio1
Install pot from packages:
pkg install -y pot potnet
Create a pot.conf, or copy pot.conf.sample to pot.conf, and edit accordingly:
vi /usr/local/etc/pot.conf
POT_ZFS_ROOT=zroot/srv/pot
POT_FS_ROOT=/mnt/srv/pot
POT_CACHE=/var/cache/pot
POT_TMP=/tmp
POT_NETWORK=10.192.0.0/10
POT_NETMASK=255.192.0.0
POT_GATEWAY=10.192.0.1
POT_EXTIF=vtnet1 # em0, igb0, ixl0, vtnet0 etc
Initialise pot and enable the service:
pot init -v
service pot enable
9. Installation and Configuration of consul Pot Image
Next you will download and run the consul pot image.
On minio1:
pot import -p consul-amd64-13_1 -t 2.0.27 -U https://potluck.honeyguide.net/consul
pot clone -P consul-amd64-13_1_2_0_27 -p consul-clone -N alias -i "vtnet1|10.100.1.10"
pot set-env -p consul-clone -E DATACENTER=myminiotest -E NODENAME=consul -E IP=10.100.1.10 -E BOOTSTRAP=1 -E PEERS="1.2.3.4" -E GOSSIPKEY="BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=" -E REMOTELOG="10.100.1.99"
pot set-attr -p consul-clone -A start-at-boot -V True
pot start consul-clone
On Both Servers
service consul restart
service node_exporter restart
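At this point both host agents should have joined the consul server pot; a quick membership check from either host against the local agent:
consul members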
10. Installation and Configuration of nomad Pot Image
Install the nomad pot image.
On minio1:
pot import -p nomad-server-amd64-13_1 -t 2.0.21 -U https://potluck.honeyguide.net/nomad-server
pot clone -P nomad-server-amd64-13_1_2_0_21 -p nomad-server-clone -N alias -i "vtnet1|10.100.1.11"
pot set-env -p nomad-server-clone -E NODENAME=nomad -E DATACENTER=myminiotest -E IP="10.100.1.11" -E CONSULSERVERS="10.100.1.10" -E BOOTSTRAP=1 -E GOSSIPKEY="BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=" -E REMOTELOG="10.100.1.99" -E IMPORTJOBS=0
pot set-attr -p nomad-server-clone -A start-at-boot -V True
pot start nomad-server-clone
On Both Servers
service nomad restart
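Once nomad is up, you can check that the client has registered with the nomad server pot; the -address flag points at the server pot created in step 10:
nomad node status -address=http://10.100.1.11:4646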
11. Installation and Configuration of traefik Pot Image
Install the traefik-consul pot image.
On minio1
pot import -p traefik-consul-amd64-13_1 -t 1.3.5 -U https://potluck.honeyguide.net/traefik-consul
pot clone -P traefik-consul-amd64-13_1_1_3_5 -p traefik-consul-clone -N alias -i "vtnet1|10.100.1.12"
pot set-env -p traefik-consul-clone -E NODENAME=traefik -E DATACENTER=myminiotest -E IP="10.100.1.12" -E CONSULSERVERS="10.100.1.10" -E GOSSIPKEY="BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=" -E REMOTELOG="10.100.1.99"
pot set-attr -p traefik-consul-clone -A start-at-boot -V True
pot mount-in -p traefik-consul-clone -d /mnt/data/jaildata/traefik -m /var/log/traefik
pot start traefik-consul-clone
12. Installation and Configuration of mariadb Pot Image
Install the mariadb pot image.
On minio1
pot import -p mariadb-amd64-13_1 -t 2.0.12 -U https://potluck.honeyguide.net/mariadb
pot clone -P mariadb-amd64-13_1_2_0_12 -p mariadb-clone -N alias -i "vtnet1|10.100.1.14"
pot mount-in -p mariadb-clone -d /mnt/data/jaildata/mariadb/var_db_mysql -m /var/db/mysql
pot set-env -p mariadb-clone -E DATACENTER=myminiotest -E NODENAME=mariadb -E IP="10.100.1.14" -E CONSULSERVERS="10.100.1.10" -E GOSSIPKEY="BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=" -E DBROOTPASS=myrootpass -E DBSCRAPEPASS=myscrapepass -E DUMPSCHEDULE="5 21 * * *" -E DUMPUSER="root" -E DUMPFILE="/var/db/mysql/full_mariadb_backup.sql" -E REMOTELOG="10.100.1.99"
pot set-attr -p mariadb-clone -A start-at-boot -V True
pot start mariadb-clone
13. Installation and Configuration of Beast-Of-Argh Pot Image
On minio1
You can also install the beast-of-argh monitoring solution, which includes grafana, prometheus, alertmanager and the loki log monitoring system. It also acts as a syslog server with syslog-ng.
pot import -p beast-of-argh-amd64-13_1 -t 0.0.29 -U https://potluck.honeyguide.net/beast-of-argh/
pot clone -P beast-of-argh-amd64-13_1_0_0_29 -p beast-clone -N alias -i "vtnet1|10.100.1.99"
pot mount-in -p beast-clone -d /mnt/data/jaildata/beast -m /mnt
pot set-env -p beast-clone -E DATACENTER=myminiotest -E NODENAME=beast -E IP="10.100.1.99" -E CONSULSERVERS="10.100.1.10" -E GOSSIPKEY="BBtPyNSRI+/iP8RHB514CZ5By3x1jJLu4SqTVzM4gPA=" -E GRAFANAUSER=admin -E GRAFANAPASSWORD=password -E SCRAPECONSUL="10.100.1.10" -E SCRAPENOMAD="10.100.1.11" -E TRAEFIKSERVER="10.100.1.12" -E SMTPHOSTPORT="127.0.0.1:25" -E SMTPFROM="your@example.com" -E ALERTADDRESS="you@example.com" -E SMTPUSER="username" -E SMTPPASS="password" -E REMOTELOG="10.100.1.99"
pot set-attr -p beast-clone -A start-at-boot -V True
pot start beast-clone
Once started, add your minio IPs to /mnt/prometheus/targets.d/minio.yml as follows:
pot term beast-clone
cd /mnt/prometheus/targets.d/
For minio:
vi minio.yml
- targets:
  - 10.100.1.3:9000
  - 10.100.1.4:9000
  labels:
    job: minio
For mariadb:
vi mysql.yml
- targets:
  - 10.100.1.14:9104
  labels:
    job: mysql
The servers should be polled via consul, but if you need to add them manually, you can edit mytargets.yml:
#- targets:
#  - 10.0.0.2:9100
#  - 10.0.0.3:9100
#  labels:
#    job: jobtype1
Finally, reload prometheus with:
service prometheus reload
14. Install syslog-ng
You need to replace syslogd with syslog-ng on both servers.
On Both Servers
pkg install -y syslog-ng
Set up /usr/local/etc/syslog-ng.conf as follows:
@version: "3.38"
@include "scl.conf"
# options
options {
chain_hostnames(off);
use_dns (no);
dns-cache(no);
use_fqdn (no);
keep_hostname(no);
flush_lines(0);
threaded(yes);
log-fifo-size(2000);
stats_freq(0);
time_reopen(120);
ts_format(iso);
};
# local sources
source src {
system();
internal();
};
# Add additional log files here
#source s_otherlogs {
# file("/var/log/service/service.log");
# file("/var/log/service2/access.log");
#};
# destinations
destination messages { file("/var/log/messages"); };
destination security { file("/var/log/security"); };
destination authlog { file("/var/log/auth.log"); };
destination maillog { file("/var/log/maillog"); };
destination lpd-errs { file("/var/log/lpd-errs"); };
destination xferlog { file("/var/log/xferlog"); };
destination cron { file("/var/log/cron"); };
destination debuglog { file("/var/log/debug.log"); };
destination consolelog { file("/var/log/console.log"); };
destination all { file("/var/log/all.log"); };
destination newscrit { file("/var/log/news/news.crit"); };
destination newserr { file("/var/log/news/news.err"); };
destination newsnotice { file("/var/log/news/news.notice"); };
destination slip { file("/var/log/slip.log"); };
destination ppp { file("/var/log/ppp.log"); };
#destination console { file("/dev/console"); };
destination allusers { usertty("*"); };
# pot settings
destination loghost {
tcp(
"10.100.1.99"
port(514)
disk-buffer(
mem-buf-size(134217728) # 128MiB
disk-buf-size(2147483648) # 2GiB
reliable(yes)
dir("/var/log/syslog-ng-disk-buffer")
)
);
};
# log facility filters
filter f_auth { facility(auth); };
filter f_authpriv { facility(authpriv); };
filter f_not_authpriv { not facility(authpriv); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_ftp { facility(ftp); };
filter f_kern { facility(kern); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_news { facility(news); };
filter f_security { facility(security); };
filter f_user { facility(user); };
filter f_uucp { facility(uucp); };
filter f_local0 { facility(local0); };
filter f_local1 { facility(local1); };
filter f_local2 { facility(local2); };
filter f_local3 { facility(local3); };
filter f_local4 { facility(local4); };
filter f_local5 { facility(local5); };
filter f_local6 { facility(local6); };
filter f_local7 { facility(local7); };
# log level filters
filter f_emerg { level(emerg); };
filter f_alert { level(alert..emerg); };
filter f_crit { level(crit..emerg); };
filter f_err { level(err..emerg); };
filter f_warning { level(warning..emerg); };
filter f_notice { level(notice..emerg); };
filter f_info { level(info..emerg); };
filter f_debug { level(debug..emerg); };
filter f_is_debug { level(debug); };
# program filters
filter f_ppp { program("ppp"); };
filter f_all {
level(debug..emerg) and not (program("devd") and level(debug..info) ); };
log {
source(src);
filter(f_notice);
filter(f_not_authpriv);
destination(messages);
};
log { source(src); filter(f_kern); filter(f_debug); destination(messages); };
log { source(src); filter(f_lpr); filter(f_info); destination(messages); };
log { source(src); filter(f_mail); filter(f_crit); destination(messages); };
log { source(src); filter(f_security); destination(security); };
log { source(src); filter(f_auth); filter(f_info); destination(authlog); };
log { source(src); filter(f_authpriv); filter(f_info); destination(authlog); };
log { source(src); filter(f_mail); filter(f_info); destination(maillog); };
log { source(src); filter(f_lpr); filter(f_info); destination(lpd-errs); };
log { source(src); filter(f_ftp); filter(f_info); destination(xferlog); };
log { source(src); filter(f_cron); destination(cron); };
log { source(src); filter(f_is_debug); destination(debuglog); };
log { source(src); filter(f_emerg); destination(allusers); };
log { source(src); filter(f_ppp); destination(ppp); };
log { source(src); filter(f_all); destination(loghost); };
# turn on sending otherlogs too
#log { source(s_otherlogs); destination(loghost); };
Then run:
service syslogd onestop
service syslogd disable
service syslog-ng enable
sysrc syslog_ng_flags="-R /tmp/syslog-ng.persist"
service syslog-ng restart
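If syslog-ng refuses to start, a syntax-only check of the configuration can help pinpoint the problem:
syslog-ng -s -f /usr/local/etc/syslog-ng.conf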
15. Log In to grafana, prometheus, alertmanager
You can now log in to grafana, prometheus or alertmanager via the respective front ends (running on the beast-of-argh IP, 10.100.1.99 in this guide):
- grafana: http://ip:3000
- prometheus: http://ip:9090
- alertmanager: http://ip:9093
Part 3: nextcloud with minio storage
16. Create Minio Bucket For Nextcloud
Set Up Bucket Via Web Frontend
Log in to the minio interface and create a new bucket called mydata. Accept the default options.
Alternatively Set Up Bucket Via Command Line
On minio1
You can also set up a bucket via the command line by first setting an alias:
minio-client alias set sampler https://10.100.1.3:9000 sampler samplerpasswordislong --api S3v4 --insecure --config-dir /root/.minio-client/
and then creating a bucket:
minio-client mb --insecure --config-dir /root/.minio-client/ --with-lock sampler/mydata
The --insecure flag is required with self-signed certificates.
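You can confirm the bucket exists with a listing against the same alias:
minio-client ls --insecure --config-dir /root/.minio-client/ sampler/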
17. Create mariadb Database For nextcloud
nextcloud requires a database and user to be set up prior to installation.
On minio1
Log in to minio1 and set up a database for nextcloud the quick way:
pot term mariadb-clone
mysql
DROP DATABASE IF EXISTS nextcloud;
CREATE DATABASE nextcloud;
CREATE USER 'nextcloud'@'10.100.1.%' IDENTIFIED BY 'mynextcloud1345swdwfr3t34rw';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'10.100.1.%';
FLUSH PRIVILEGES;
QUIT
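To confirm the new user can connect from the 10.100.1.0/24 network, you can test from inside the mariadb pot (or any host with a mysql client); the address here is the mariadb pot IP used above:
mysql -h 10.100.1.14 -u nextcloud -p'mynextcloud1345swdwfr3t34rw' nextcloud -e 'SELECT 1;'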
18. Setting a Custom nextcloud Configuration
You will create 3 custom nextcloud config files to copy in with the nextcloud image, specifically:
- objectstore.config.php.in
- mysql.config.php.in
- custom.config.php.in
On minio1
In /root/nomadjobs on minio1, create a custom config.php file called objectstore.config.php.in as follows:
<?php
$CONFIG = array (
'objectstore' =>
array (
'class' => 'OC\\Files\\ObjectStore\\S3',
'arguments' => array (
'bucket' => 'mydata', // your bucket name
'autocreate' => true,
'key' => 'sampler', // your key
'secret' => 'samplerpasswordislong', // your secret
'use_ssl' => true,
'region' => '',
'hostname' => '10.100.1.3',
'port' => '9000',
'use_path_style' => true,
),
),
);
Then create mysql.config.php.in:
<?php
$CONFIG = array (
'dbtype' => 'mysql',
'version' => '',
'dbname' => 'nextcloud',
'dbhost' => '10.100.1.14:3306',
'dbtableprefix' => 'oc_',
'dbuser' => 'nextcloud',
'dbpassword' => 'mynextcloud1345swdwfr3t34rw',
'mysql.utf8mb4' => true,
);
As well as creating custom.config.php.in:
<?php
$CONFIG = array (
'trusted_domains' =>
array (
0 => 'nextcloud.minio1',
1 => '10.100.1.3:9000',
),
'datadirectory' => '/mnt/nextcloud',
'config_is_read_only' => false,
'loglevel' => 1,
'logfile' => '/mnt/nextcloud/nextcloud.log',
'memcache.local' => '\OC\Memcache\APCu',
'filelocking.enabled' => false,
'overwrite.cli.url' => '',
'overwritehost' => '',
'overwriteprotocol' => 'https',
'installed' => false,
'mail_from_address' => 'nextcloud',
'mail_smtpmode' => 'smtp',
'mail_smtpauthtype' => 'PLAIN',
'mail_domain' => 'minio1',
'mail_smtphost' => '',
'mail_smtpport' => '',
'mail_smtpauth' => 1,
'maintenance' => false,
'theme' => '',
'twofactor_enforced' => 'false',
'twofactor_enforced_groups' =>
array (
),
'twofactor_enforced_excluded_groups' =>
array (
0 => 'no_2fa',
),
'updater.release.channel' => 'stable',
'ldapIgnoreNamingRules' => false,
'ldapProviderFactory' => 'OCA\\User_LDAP\\LDAPProviderFactory',
'encryption_skip_signature_check' => true,
'encryption.key_storage_migrated' => false,
'allow_local_remote_servers' => true,
'mail_sendmailmode' => 'smtp',
'mail_smtpname' => 'nextcloud@minio1',
'mail_smtppassword' => '',
'mail_smtpsecure' => 'ssl',
'app.mail.verify-tls-peer' => false,
'app_install_overwrite' =>
array (
0 => 'camerarawpreviews',
1 => 'keeweb',
2 => 'calendar',
),
'apps_paths' =>
array (
0 =>
array (
'path' => '/usr/local/www/nextcloud/apps',
'url' => '/apps',
'writable' => true,
),
1 =>
array (
'path' => '/usr/local/www/nextcloud/apps-pkg',
'url' => '/apps-pkg',
'writable' => false,
),
),
);
19. Add nomad Job for nextcloud in nomad Dashboard
Open the nomad dashboard in your browser. Click “run job” and paste the following:
job "nextcloud" {
datacenters = ["samplerdc"]
type = "service"
group "group1" {
count = 1
network {
port "http" {
static = 10443
}
}
task "nextcloud1" {
driver = "pot"
service {
tags = ["nginx", "www", "nextcloud"]
name = "nextcloud-server"
port = "http"
check {
type = "tcp"
name = "tcp"
interval = "60s"
timeout = "30s"
}
}
config {
image = "https://potluck.honeyguide.net/nextcloud-nginx-nomad"
pot = "nextcloud-nginx-nomad-amd64-13_1"
tag = "0.64"
command = "/usr/local/bin/cook"
args = ["-d","/mnt/nextcloud","-s","10.100.1.3:9000"]
copy = [
"/root/nomadjobs/objectstore.config.php.in:/root/objectstore.config.php",
"/root/nomadjobs/mysql.config.php.in:/root/mysql.config.php",
"/root/nomadjobs/custom.config.php.in:/root/custom.config.php",
"/path/to/minio/rootca.crt:/root/rootca.crt"
]
mount = [
"/mnt/data/jaildata/nextcloud/nextcloud_www:/usr/local/www/nextcloud",
"/mnt/data/jaildata/nextcloud/storage:/mnt/nextcloud"
]
port_map = {
http = "80"
}
}
resources {
cpu = 1000
memory = 2000
}
}
}
}
Accept the options and run the job.
There will be a short wait before the nextcloud image is live; it can take up to 10 minutes.
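You can also follow progress from the command line on minio1, pointing at the nomad server pot:
nomad job status -address=http://10.100.1.11:4646 nextcloud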
20. Configure nextcloud
Once the nextcloud image is live, you can log in at the public frontend (via haproxy or similar) and configure it there, or configure it via the command line.
On minio1
pot ls
(find the full name of the nextcloud pot)
pot term $nextcloud-full-name-randomised...
su -m www -c 'cd /usr/local/www/nextcloud/; php occ maintenance:install \
--database "mysql" \
--database-host "10.200.1.15" \
--database-port "3306" \
--database-name "nextcloud" \
--database-user "nextcloud" \
--database-pass "mynextcloud1345swdwfr3t34rw" \
--database-table-space "oc_" \
--admin-user "sampler" \
--admin-pass "sampler123" \
--data-dir "/mnt/nextcloud"'
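Once the install completes, occ can confirm the instance state from the same shell inside the pot:
su -m www -c 'cd /usr/local/www/nextcloud/; php occ status'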
Alternatively: Try the Minio-Sampler Environment
If you’d like to skip configuring all the services above manually, and simply run an example cluster and apps, please give Minio-Sampler a try!
Make sure to review the detailed installation instructions, and install vagrant, virtualbox, and packer for your platform.
To get started:
git clone https://codeberg.org/Honeyguide/minio-sampler.git
cd minio-sampler
export PATH=$(pwd)/bin:$PATH
(optional: sudo chmod 775 /tmp)
Edit config.ini (set a free IP on your LAN) and then run:
minsampler init mysample
cd mysample
minsampler packbox
minsampler startvms
This process will take up to 2 hours to finish, at which point you can access a shell:
vagrant ssh minio1
sudo su -
./preparenextcloud.sh # not working!
Or open http://AC.CE.SS.IP in your browser to get the introduction page with links to minio, grafana, prometheus and nextcloud.
Pot jails use 10.200.1.0/24 in the sampler.
If for some reason the nextcloud nomad job doesn’t start, or wasn’t added:
vagrant ssh minio1
sudo su -
cat nomadjobs/nextcloud.nomad
Copy the text. In the nomad dashboard, select “run job” and paste.