tested on focal
on all nodes
The v9 module won’t always build, but it does against a kernel.org longterm kernel or Ubuntu’s equivalent.
uname -r
dpkg -l | grep headers
ls -lF /lib/modules/*/build
ls -lF /lib/modules/*/source
#apt install linux-generic
#cd /usr/src/linux/
#make olddefconfig && echo OK
#make prepare && echo OK

apt install software-properties-common apt-transport-https ca-certificates
#add-apt-repository ppa:npalix/coccinelle
#apt install coccinelle
add-apt-repository ppa:linbit/linbit-drbd9-stack
apt install drbd-dkms
	--> postfix: local only, at least until you set up DMA
	--> postfix: FQDN as found in /etc/hosts // otherwise force it by adding the domain
	...and fix your freaking /etc/hosts by adding the domain even to the cluster entries...

dkms status
ls -lF /lib/modules/*/updates/dkms/
dpkg -l | grep drbd	#drbd-utils gets pulled in as well

apt install linstor-controller linstor-satellite linstor-client python-linstor
dpkg -l | grep linstor
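Before pulling in drbd-dkms, it pays to confirm the headers check above actually passes. A minimal sketch, assuming only that a usable headers tree exposes a Makefile under /lib/modules/VERSION/build; the second argument (an alternate modules root) exists only so the check can be exercised against a fake tree:

```shell
# check that kernel headers are in place, so the DKMS build has a chance
headers_ok() {
    moddir="${2:-/lib/modules}/$1"
    # the build symlink points at the headers tree when they are installed
    [ -e "$moddir/build/Makefile" ]
}

if headers_ok "$(uname -r)"; then
    echo "headers found, drbd-dkms should build"
else
    echo "no headers for $(uname -r), install linux-headers-$(uname -r) first"
fi
```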
on all nodes
systemctl enable --now linstor-satellite	#the satellite has to run on every node
systemctl enable --now linstor-controller
systemctl status linstor-controller
linstor node list
#LS_CONTROLLERS=lin1,lin2,lin3 linstor node list

vi /etc/linstor/linstor-client.conf
	[global]
	controllers=pro5s1,pro5s2
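The controller takes a moment to come up, so scripts that fire `linstor` right after `systemctl enable --now` can fail. A minimal sketch of a polling helper; the probe command and retry count are parameters only so it can be tested without a live controller:

```shell
# retry a probe command once per second until it succeeds or retries run out
wait_for() {
    cmd=$1
    tries=${2:-30}
    while [ "$tries" -gt 0 ]; do
        $cmd >/dev/null 2>&1 && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

#wait_for "linstor node list" && echo "controller up"
```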
Now it’s time to check whether you are LVM2/thin or ZFS ready
dpkg -l | grep lvm2
lsmod | grep zfs
on a single node
cat /etc/hosts
linstor node create pro5s1 x.x.x.x
linstor node create pro5s2 x.x.x.x
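The /etc/hosts complaint above, in practice: the cluster entries should carry the FQDN as well as the short name. A hypothetical fragment (names, addresses and domain are made up):

```
127.0.0.1	localhost
10.0.0.1	pro5s1.example.com	pro5s1
10.0.0.2	pro5s2.example.com	pro5s2
```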
all nodes
pvcreate /dev/xvdb
vgcreate vdisks /dev/xvdb
pvs
vgs
controller only
linstor storage-pool create lvm lin1 vpool vdisks
linstor storage-pool create lvm lin2 vpool vdisks
linstor storage-pool create lvm lin3 vpool vdisks
#linstor storage-pool delete lin3 vpool
linstor storage-pool list
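The three per-node commands above are identical apart from the node name, so a loop does. A sketch kept as a dry run with `echo` (drop it to actually create the pools):

```shell
# generate the storage-pool create command for every node
for n in lin1 lin2 lin3; do
    echo linstor storage-pool create lvm "$n" vpool vdisks
done
```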
and check on every node
lvs
or at once
#linstor physical-storage list
#linstor physical-storage create-device-pool \
#	--pool-name vdisks LVMTHIN lin1 /dev/xvdb \
#	--storage-pool vpool
all nodes
Assuming three disks, and making use of whatever space is left on the first one
cfdisk /dev/sda
	NEW / ALL REMAINING SPACE / type be (Solaris boot)
and creating the ZFS pool for every node
zpool create tank1 /dev/sda3 /dev/sdb /dev/sdc	#on the first node
zpool create tank2 /dev/sda3 /dev/sdb /dev/sdc	#on the second node
zpool list
zfs list
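Before handing a pool to LINSTOR it is worth confirming it is healthy. A minimal sketch written as a filter so it can be tested without ZFS present; feed it the output of `zpool list -H -o name,health`:

```shell
# fail on the first pool that does not report ONLINE
check_pools() {
    while read -r name health; do
        if [ "$health" != "ONLINE" ]; then
            echo "pool $name is $health"
            return 1
        fi
    done
    echo "all pools ONLINE"
}

#zpool list -H -o name,health | check_pools
```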
controller only
create a LINSTOR storage pool WITH THE SAME NAME across the farm (the backing ZFS pools may be named differently)
linstor storage-pool create zfs pro5s1 vdiskpool tank1
linstor storage-pool create zfs pro5s2 vdiskpool tank2
linstor storage-pool list
linstor storage-pool list-properties pro5s1 vdiskpool
linstor storage-pool list-properties pro5s2 vdiskpool
#linstor storage-pool delete NODE-NAME POOL-NAME
linstor resource-group create vdiskrg --storage-pool vdiskpool --place-count 2
linstor resource-group list
#linstor resource-group delete RES-GROUP
now create the VG
linstor volume-group create vdiskrg
linstor volume-group list vdiskrg
linstor volume-definition list
#linstor volume-group delete RG-NAME 0
and some resources within, the quick way
linstor resource-group spawn-resources vdiskrg vdisk0 10G
or the right way
linstor resource-definition create vdisk0
linstor volume-definition create vdisk0 10G
#linstor volume-definition set-size backups 0 100G
linstor resource create pro5s1 vdisk0 --storage-pool vdiskpool
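The manual flow above, wrapped up for a second resource. Names and size are hypothetical, and the `echo` keeps it a dry run (drop it to actually run the commands):

```shell
# emit the manual-placement sequence for one resource on two nodes
res=vdisk1
size=10G
echo linstor resource-definition create "$res"
echo linstor volume-definition create "$res" "$size"
for n in pro5s1 pro5s2; do
    echo linstor resource create "$n" "$res" --storage-pool vdiskpool
done
```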
and check
linstor resource-definition list
linstor resource list
linstor resource list-properties pro5s1 GUEST-NAME
#linstor resource-definition delete vres0
#linstor resource delete pro5s1 vres0
to be continued…
https://launchpad.net/~linbit/+archive/ubuntu/linbit-drbd9-stack
https://www.linbit.com/linbit-software-download-page-for-linstor-and-drbd-linux-driver/
https://github.com/LINBIT/linstor-server/tags
https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/
https://www.linbit.com/drbd-user-guide/linstor-guide-1_0-en/
https://www.linbit.com/replicating-storage-volumes-on-scaleway-arm-with-linstor/
https://www.admin-magazine.com/Articles/Storage-cluster-management-with-LINSTOR
https://abelog.tech/archives/94
[DRBD-user] linstor resource create fails https://lists.linbit.com/pipermail/drbd-user/2020-January/025424.html