SETTING UP DRBD MANUALLY

without drbd-manage or linstor

on Slackware 14.2 and current

INSTALL & SYSPREP

if needed, build DRBD from source for Slackware or for CentOS

NTP is probably not strictly required for this type of cluster, but set it up anyway so the nodes' clocks agree

make sure the nodes can reach each other by hostname as well as by IP (and validate the SSH host key fingerprints); they probably do not need to reach themselves, but set that up too
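As a sketch, assuming the 10.1.1.22x addresses used in the DRBD configuration below, the name resolution and reachability check could look like this on every node:

```shell
# append the node addresses to /etc/hosts on every node
# (addresses taken from the DRBD configuration below)
cat >> /etc/hosts <<EOF
10.1.1.221  storage1
10.1.1.222  storage2
10.1.1.223  storage3
EOF

# quick reachability check, including the local node itself
for h in storage1 storage2 storage3; do
        ping -c 1 -W 1 $h
done
```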

BLOCK DEVICE ARCHITECTURE

three nodes, on which we are going to use 10G partitions.

storage1 -- storage2 -- storage3
sda2        sda2        sda2
sda3        sda3        sda3
sda4        sda4        sda4

and those will be setup as such,

storage1:sda2 + storage2:sda3 --> drbd1
storage2:sda2 + storage3:sda3 --> drbd2
storage3:sda2 + storage1:sda3 --> drbd3

BLOCK DEVICE SETUP

all nodes

cfdisk /dev/sda

then create sda2, sda3 and sda4, e.g. 10G each (sda4 stays unused as a spare for now).

slackpkg install parted #official
fdisk -l /dev/sda
partprobe
ls -lF /dev/sda2 /dev/sda3 /dev/sda4

DRBD SETUP

storage1

cat > /etc/drbd.conf <<EOF
global {
        usage-count yes;
        udev-always-use-vnr;
}

common {
        net {
                protocol C;
                fencing resource-only;
                allow-two-primaries yes;
        }
        disk {
                read-balancing when-congested-remote;
        }
}

resource r1 {
        device    /dev/drbd1;
        meta-disk internal;

        on storage1 {
                disk      /dev/sda2;
                address   10.1.1.221:7701;
        }
        on storage2 {
                disk      /dev/sda3;
                address   10.1.1.222:7701;
        }
}

resource r2 {
        device    /dev/drbd2;
        meta-disk internal;

        on storage2 {
                disk      /dev/sda2;
                address   10.1.1.222:7702;
        }
        on storage3 {
                disk      /dev/sda3;
                address   10.1.1.223:7702;
        }
}

resource r3 {
        device    /dev/drbd3;
        meta-disk internal;

        on storage3 {
                disk      /dev/sda2;
                address   10.1.1.223:7703;
        }
        on storage1 {
                disk      /dev/sda3;
                address   10.1.1.221:7703;
        }
}
EOF

scp /etc/drbd.conf storage2:/etc/drbd.conf
scp /etc/drbd.conf storage3:/etc/drbd.conf
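Before loading anything, the configuration can be sanity-checked on each node; `drbdadm dump` parses the file and prints it back in canonical form, aborting with a parse error on bad syntax:

```shell
# parse /etc/drbd.conf and print the canonical configuration;
# a syntax error aborts with a non-zero exit code
drbdadm dump all
```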

FIRST SHOT (--force) & ACTIVE/PASSIVE

all nodes

lsmod | egrep 'drbd|lru_cache'
rmmod drbd_transport_tcp
rmmod drbd
rmmod lru_cache

modprobe drbd
modprobe drbd_transport_tcp

drbdadm down all
drbdadm create-md all
drbdadm up all

storage1

the backing devices start out Inconsistent, which is why the initial synchronization has to be forced from one side

ls -lF /dev/drbd1
ls -lF /dev/drbd3
drbdadm primary --force r1
drbdadm secondary r3
drbdadm status

storage2

ls -lF /dev/drbd1
ls -lF /dev/drbd2
drbdadm primary --force r2
drbdadm secondary r1
drbdadm status

storage3

ls -lF /dev/drbd2
ls -lF /dev/drbd3
drbdadm primary --force r3
drbdadm secondary r2
drbdadm status

on any node

watch the volumes synchronize live

watch cat /proc/drbd #v8
watch drbdadm status #v9

once the volumes are synchronized, compare the sizes of the backing partition and the DRBD device

fdisk -l /dev/sda2 /dev/drbd1

for a 10G partition such as sda2, the difference is exactly and persistently 364544 bytes (the internal metadata)
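That number matches the internal metadata size formula from the DRBD documentation, ms = ceil(cs / 2^18) * 8 + 72 in 512-byte sectors, where cs is the backing device size in sectors. A quick check of the arithmetic for a 10 GiB partition:

```shell
# DRBD internal metadata size, in 512-byte sectors:
#   ms = ceil(cs / 2^18) * 8 + 72, cs = backing device size in sectors
cs=$((10 * 1024 * 1024 * 1024 / 512))          # 10 GiB backing partition
ms=$(( ((cs + 262143) / 262144) * 8 + 72 ))    # metadata size in sectors
echo $((ms * 512))                             # -> 364544 bytes
```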

READY TO GO

vi /etc/rc.d/rc.local

/sbin/modprobe drbd
#v9 /sbin/modprobe drbd_transport_tcp
#/etc/init.d/drbd start
/usr/local/sbin/drbdadm up all
echo

vi /etc/rc.d/rc.local_shutdown

#/etc/init.d/drbd stop
/usr/local/sbin/drbdadm down all
#v9 /sbin/rmmod drbd_transport_tcp
/sbin/rmmod drbd
echo

ACCEPTANCE

check which protocol a resource is using, on v8

cat /proc/drbd

or on v9 (as long as debugfs is mounted)

cat /sys/kernel/debug/drbd/resources/<resource>/connections/<connection>/<volume>/proc_drbd
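That path lives in debugfs, which is normally mounted at /sys/kernel/debug; a small sketch to mount it if it is missing:

```shell
# mount debugfs if it is not already available, then list the resources
mountpoint -q /sys/kernel/debug || mount -t debugfs none /sys/kernel/debug
ls /sys/kernel/debug/drbd/resources/
```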

OPERATIONS & MONITORING

see operations

START FROM SCRATCH

all nodes

drbdadm down all
rm -rf /etc/drbd.d/
rm -f /etc/drbd.conf
#also remove the added lines from /etc/rc.d/rc.local and rc.local_shutdown

TODO

does net {fencing resource-only;} do anything if I do not set handlers {fence-peer somecommand;}?

drbd-utils/scripts/stonith_admin-fence-peer.sh https://github.com/LINBIT/drbd-utils/blob/master/scripts/stonith_admin-fence-peer.sh

[DRBD-user] Trying to Understanding crm-fence-peer.sh https://lists.linbit.com/pipermail/drbd-user/2019-January/024759.html

RESOURCES

LINBIT DRBD kernel module https://github.com/LINBIT/drbd

DRBD userspace utilities (for 9.0, 8.4, 8.3) https://github.com/LINBIT/drbd-utils

“read-balancing” with 8.4.1+ https://www.linbit.com/en/read-balancing/

v9

DRBD 9.0 Manual Pages https://docs.linbit.com/man/v9/

drbd.conf - DRBD Configuration Files https://docs.linbit.com/man/v9/drbd-conf-5/

v8.4

DRBD 8.4 https://github.com/LINBIT/drbd-8.4

drbd.conf - Configuration file for DRBD’s devices https://docs.linbit.com/man/v84/drbd-conf-5/

How to Install DRBD on CentOS Linux https://linuxhandbook.com/install-drbd-linux/

How to Setup DRBD 9 on Ubuntu 16 https://www.globo.tech/learning-center/setup-drbd-9-ubuntu-16/

more

LINSTOR SDS server https://github.com/LINBIT/linstor-server

ops

CLI management tool for DRBD. Like top, but for DRBD resources. https://github.com/LINBIT/drbdtop

troubles

[DRBD-user] drbd-dkms fails to build under proxmox 6 https://lists.linbit.com/pipermail/drbd-user/2019-August/025208.html

[DRBD-user] Problems compiling kernel module https://lists.linbit.com/pipermail/drbd-user/2016-June/022391.html

