Disclaimer
These descriptions do not claim to guarantee 100% protection against data loss. This service can only be one part of a backup concept; it is up to the customer to decide how much effort to invest.
Overview
For testing purposes the ZFS pool name does not matter and can be changed at any time. In normal use the ZFS pool name must be the customer ID, which you receive by registering via e-mail.
Domain names follow this scheme: minio.customerid.hdd-housing.eu
and syncthing.customerid.hdd-housing.eu
The file /customerid/lxcbackups/userids
is used for user creation in the server-side LXC.
Deduplication is not allowed because its hardware demands are too high.
The server runs Proxmox 7.
Change ZFS poolname:
zpool export myexamplezfspool
zpool import myexamplezfspool mycustomerid
# the pool named myexamplezfspool will be renamed to mycustomerid during import.
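To confirm the rename, the pool can be checked after the import (a quick verification, not part of the original steps):
zpool list mycustomerid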
Prepare HDD / local test setup
Proxmox Setup
Proxmox ZFS on Linux
A ZFS pool can be built with WWN or partuuid identifiers, so that the pool can be imported even from a USB enclosure if needed. For normal service the disk ID is sufficient.
Content of the file /customerid/lxcbackups/userids on the ZFS pool:
#User IDs starting at 101000 are needed because the server uses unprivileged LXC (with the default Proxmox idmap, host UID 101000 corresponds to UID 1000 inside the container)
adduser minio --uid 101000
adduser syncthing --uid 101001
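The minio and syncthing datasets used in the next step have to exist before they can be chowned; a minimal sketch for creating them, assuming the pool is named customerid:
zfs create customerid/minio
zfs create customerid/syncthing
zfs create customerid/lxcbackups
#assumption: lxcbackups is also a dataset; a plain directory on the pool would work as well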
Chown datasets to user:
chown minio:minio /customerid/minio
chown syncthing:syncthing /customerid/syncthing
There are many ways to build a ZFS pool. Some examples:
- One HDD split into 5 partitions (raidz1): data redundancy but no tolerance for an HDD failure; only 20% of the disk space is lost, at the cost of some performance
- One HDD without data redundancy and no disk space loss
- One HDD with the ZFS property copies set to 2: data redundancy but no tolerance for an HDD failure, and 50% of the disk space is lost (a sketch follows after this list)
- Two HDDs in a mirror: data redundancy and tolerance for an HDD failure, but 50% of the disk space is lost
- Three or more HDDs in raidz1: data redundancy and tolerance for an HDD failure, for bigger pools
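The copies=2 variant from the list above is not demonstrated in the sections below; a minimal sketch, reusing the example WWN from the sections that follow (adjust ashift and pool name as needed):
zpool create -o ashift=12 -O compression=zstd -O copies=2 poolname /dev/disk/by-id/wwn-0x50014ee20c6324e6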
Create a ZFS pool with 2 HDDs as a mirror
Get the WWNs of both HDDs:
ls -l /dev/disk/by-id/
#find the right HDD, for this example: ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD -> ../../sdj
#so the first HDD we are looking for is currently sdj
#find the WWN that points to sdj, here:
#wwn-0x50014ee20c6324e6 -> ../../sdj
#find the second HDD, for this example: ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6FN513C -> ../../sdk
#so the second HDD is currently sdk
#find the WWN that points to sdk, here:
#wwn-0x50014ee20c629629 -> ../../sdk
Create the ZFS pool with the found WWNs:
zpool create -o ashift=12 -O compression=zstd poolname mirror /dev/disk/by-id/wwn-0x50014ee20c6324e6 /dev/disk/by-id/wwn-0x50014ee20c629629
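To check that both disks really form a mirror (a quick verification, not part of the original steps):
zpool status poolname
#both WWNs should appear under a single mirror-0 vdev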
Create a ZFS pool on a single disk with no data redundancy
Find WWN of the HDD:
ls -l /dev/disk/by-id/
#find the right HDD, for this example: ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD -> ../../sdj
#the letter of our example HDD is currently sdj
#find WWN of sdj
#wwn-0x50014ee20c6324e6 -> ../../sdj
Create ZFS Pool with found WWN:
zpool create -o ashift=12 -O compression=zstd poolname /dev/disk/by-id/wwn-0x50014ee20c6324e6
Create a ZFS pool on a single disk with 5 partitions for data redundancy
Example where the HDD is in a USB enclosure
Install parted and partition HDD:
apt install parted
parted /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD mklabel gpt
parted /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD mkpart zfs 0% 20%
parted /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD mkpart zfs 20% 40%
parted /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD mkpart zfs 40% 60%
parted /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD mkpart zfs 60% 80%
parted /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD mkpart zfs 80% 100%
Find the letter of the disk:
ls -l /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD
Result:
/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4DF9NJD -> ../../sdh
Find all partuuids of sdh:
ls -l /dev/disk/by-partuuid/ | grep sdh
Result (just one displayed as example):
2c49c49f-4221-324e-afca-23bedbb06677 -> ../../sdh1 #2c49c49f-4221-324e-afca-23bedbb06677 is the partuuid of the first partition
Create ZFS pool (adjust ashift if needed):
zpool create -o ashift=12 -O compression=zstd poolname raidz1 /dev/disk/by-partuuid/<partuuid1> /dev/disk/by-partuuid/<partuuid2> /dev/disk/by-partuuid/<partuuid3> /dev/disk/by-partuuid/<partuuid4> /dev/disk/by-partuuid/<partuuid5>
Minio S3 Storage Server
Install Minio:
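The install step itself is not spelled out in this document; a minimal sketch using the official static server binary (the download URL is the upstream amd64 build; credentials and console port are placeholders to adapt):
wget https://dl.min.io/server/minio/release/linux-amd64/minio -O /usr/local/bin/minio
chmod +x /usr/local/bin/minio
#run as the minio user created above, storing data on the minio dataset
sudo -u minio env MINIO_ROOT_USER=changeme MINIO_ROOT_PASSWORD=change-this-long-secret \
  /usr/local/bin/minio server /customerid/minio --console-address :9001
For unattended operation a systemd unit analogous to the Syncthing one below would typically be used.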
Syncthing
Multiple instances are possible, but only "syncthing" and "syncthing2" to "syncthing9" are allowed as user and host names. Further adaptations are needed for each additional instance (see the sketch after this list):
- Username
- Dataset name and ownership (chown)
- .service file
- User=
- --home
- --gui-address (increase the port by one)
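As referenced in the list above, a sketch of the changes for a second instance (the UID and GUI port are assumptions that simply continue the pattern of the first instance):
adduser syncthing2 --uid 101002
zfs create customerid/syncthing2
chown syncthing2:syncthing2 /customerid/syncthing2
#in /etc/systemd/system/syncthing2.service:
#User=syncthing2
#--home=/poolname/syncthing2
#--gui-address=0.0.0.0:8385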
Install syncthing:
apt install curl apt-transport-https ca-certificates
curl -o /usr/share/keyrings/syncthing-archive-keyring.gpg https://syncthing.net/release-key.gpg
echo "deb [signed-by=/usr/share/keyrings/syncthing-archive-keyring.gpg] https://apt.syncthing.net/ syncthing stable" | tee /etc/apt/sources.list.d/syncthing.list
apt update
apt install syncthing
Create and adapt this file for every instance you want: /etc/systemd/system/syncthing.service
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for syncthing
Documentation=man:syncthing(1)
After=network.target
StartLimitIntervalSec=60
StartLimitBurst=4
[Service]
User=syncthing
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart --logflags=0 --home=/poolname/syncthing --gui-address=0.0.0.0:8384
Restart=on-failure
RestartSec=1
SuccessExitStatus=3 4
RestartForceExitStatus=3 4
# Hardening
ProtectSystem=full
PrivateTmp=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
NoNewPrivileges=true
# Elevated permissions to sync ownership (disabled by default),
# see https://docs.syncthing.net/advanced/folder-sync-ownership
#AmbientCapabilities=CAP_CHOWN CAP_FOWNER
[Install]
WantedBy=multi-user.target
Copy the Syncthing .service file(s) from /etc/systemd/system/ to the ZFS pool:
mkdir /customerid/lxcbackups/systemd
cp -a /etc/systemd/system/syncthing*.service /customerid/lxcbackups/systemd/
systemctl daemon-reload
systemctl start syncthing.service
systemctl status syncthing.service
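If the service should also start automatically after a reboot (not covered above), it can additionally be enabled:
systemctl enable syncthing.service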
How to Back Up with Restic to MinIO
https://blog.min.io/back-up-restic-minio/
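A minimal sketch of the restic workflow against the MinIO endpoint named above (bucket name, credentials and backup path are placeholders; see the linked article for details):
export AWS_ACCESS_KEY_ID=myaccesskey
export AWS_SECRET_ACCESS_KEY=mysecretkey
restic -r s3:https://minio.customerid.hdd-housing.eu/restic-backups init
restic -r s3:https://minio.customerid.hdd-housing.eu/restic-backups backup /path/to/data
restic -r s3:https://minio.customerid.hdd-housing.eu/restic-backups snapshots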
Troubleshooting
Change a pool to use WWN device names
zpool export poolname ; sleep 5 ; zpool import -d /dev/disk/by-id poolname ; sleep 5 ; zpool list -v poolname
Technical details
Default values for Sanoid (ZFS snapshots)
For the whole pool:
frequently = 0
hourly = 72
daily = 30
weekly = 8
monthly = 6
yearly = 0
autosnap = yes
autoprune = yes
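For reference, these defaults correspond roughly to the following Sanoid template (a sketch of /etc/sanoid/sanoid.conf; the actual server-side configuration may differ):
[customerid]
        use_template = default
        recursive = yes

[template_default]
        frequently = 0
        hourly = 72
        daily = 30
        weekly = 8
        monthly = 6
        yearly = 0
        autosnap = yes
        autoprune = yes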
Changes to these values are possible on request; just write an e-mail.