This guide relates to and depends on other guides:
those guides are just too painful, so for this PoC I will make it simpler:
see DRBD. No stonith nor fencing handler (handler { fence-peer somecommand; }) there.
see TGT. Enable the daemon, including at startup (a sketch follows below), and start clean,
cat > /etc/tgt/targets.conf <<-EOF
default-driver iscsi
ignore-errors no
EOF
ls -l /etc/tgt/
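to enable the daemon at startup and start it now (a sketch assuming systemd and the service name tgt, as on Debian/Ubuntu; adjust the unit name to your distribution),

systemctl enable --now tgt
systemctl status tgt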
what resource agent classes are available?
crm ra classes
what does the ocf class contain?
crm ra list ocf | grep scsi
what does the heartbeat provider contain?
ls -l /usr/lib/ocf/resource.d/heartbeat/ | grep -i scsi
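to see what parameters a given agent accepts, query its meta-data with crm ra info, e.g. for the agents used below,

crm ra info ocf:heartbeat:iSCSITarget
crm ra info ocf:heartbeat:iSCSILogicalUnit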
yes, disabling stonith and ignoring quorum loss is bad practice, but for this PoC,
crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore
floating ip1,
crm configure primitive ip1 ocf:heartbeat:IPaddr2 params ip=10.8.8.31 cidr_netmask=24 op monitor interval=10s
target target1,
crm configure primitive target1 ocf:heartbeat:iSCSITarget params iqn=iqn.2018-10.su.os3:dark1 tid=1 op monitor interval=10s
lun lun1,
crm configure primitive lun1 ocf:heartbeat:iSCSILogicalUnit params target_iqn=iqn.2018-10.su.os3:dark1 lun=1 path=/dev/drbd1 op monitor interval=10s
group those together,
crm configure group group1 ip1 target1 lun1
note that the group is ordered: resources start in the listed order and stop in reverse. You can change the order with crm configure edit or by editing the CIB directly.
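for instance, a minimal sketch of reordering the group members (crm configure edit opens the definition in your editor; the new order shown is just an example),

crm configure edit group1
# the definition looks like:
#   group group1 ip1 target1 lun1
# reorder the members on that line, then save and quit, e.g.
#   group group1 target1 lun1 ip1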
check the configuration,
crm configure show
crm_verify --live-check
check on which node the resources are running,
crm status
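alternatively, crm_mon gives a one-shot view of the same information,

crm_mon -1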
and on the relevant node,
tgt-admin -s
then remotely, against ip1,
iscsiadm -m discovery -t st -p 10.8.8.31
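and, optionally, log in to the discovered target to verify it really serves the LUN (a sketch; IQN and portal match the resources defined above),

iscsiadm -m node -T iqn.2018-10.su.os3:dark1 -p 10.8.8.31 --login
iscsiadm -m session
iscsiadm -m node -T iqn.2018-10.su.os3:dark1 -p 10.8.8.31 --logout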
put some node in standby,
crm_standby -G
crm_standby -v on
and the resources that lived there should now show as Started on some other node,
crm status
tgt-admin -s
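to take the node out of standby again,

crm_standby -v off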
or move the group (and hence its LUN) by specifying the target node manually, e.g.
crm_resource -r group1 -M -H dark1
#crm_resource --resource group1 --move --node dark1
also,
crm resource migrate group1 dark2
#crm resource migrate group2 dark3
#crm resource migrate group3 dark1
check,
crm status
revert back to normal,
crm resource migrate group1 dark1
#crm resource migrate group2 dark2
#crm resource migrate group3 dark3
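note that migrate works by adding a location constraint; once you are done moving things around, you may want to clear those constraints (a sketch using the crmsh unmigrate alias),

crm resource unmigrate group1
#crm resource unmigrate group2
#crm resource unmigrate group3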
Now duplicate the single resources to create their homologues,
crm configure edit

primitive ip2 IPaddr2 \
    params ip=10.8.8.32 cidr_netmask=24 \
    op monitor interval=10s
primitive ip3 IPaddr2 \
    params ip=10.8.8.33 cidr_netmask=24 \
    op monitor interval=10s
primitive lun2 iSCSILogicalUnit \
    params target_iqn="iqn.2018-10.su.os3:dark2" lun=1 path="/dev/drbd2" \
    op monitor interval=10s
primitive lun3 iSCSILogicalUnit \
    params target_iqn="iqn.2018-10.su.os3:dark3" lun=1 path="/dev/drbd3" \
    op monitor interval=10s
primitive target2 iSCSITarget \
    params iqn="iqn.2018-10.su.os3:dark2" tid=2 \
    op monitor interval=10s
primitive target3 iSCSITarget \
    params iqn="iqn.2018-10.su.os3:dark3" tid=3 \
    op monitor interval=10s
group group2 target2 lun2 ip2 \
    meta target-role=Stopped
group group3 target3 lun3 ip3 \
    meta target-role=Stopped

crm status
First things first, make sure ALL your block devices are alright,
drbdadm status
Then start the new groups,
crm resource start group2
crm status
crm resource start group3
crm status
and proceed with further resource migrations, as shown above.
a small helper script to check everything at a glance,

vi /root/STATUS

#!/bin/ksh
#/usr/sbin/drbdadm status
/usr/sbin/drbd-overview
print ''
print TGT: \\c
tgt-admin -s >/dev/null && echo UP || echo DOWN
print ''
crm status

chmod +x /root/STATUS
to troubleshoot, take a resource out of Pacemaker's control and drive the OCF resource agent by hand,

crm resource unmanage lun1

export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_target_iqn="iqn.2018-10.su.os3:dark3.lun1"
export OCF_RESKEY_lun=1
export OCF_RESKEY_path="/dev/drbd1"
#export OCF_TRACE_RA=1

/usr/lib/ocf/resource.d/heartbeat/iSCSILogicalUnit start
echo $?
tgt-admin -s
/usr/lib/ocf/resource.d/heartbeat/iSCSILogicalUnit stop
echo $?

crm resource manage lun1
note that each OCF_RESKEY_* variable maps to one of the primitive's params; the op monitor interval=10s part is an operation definition, not an agent parameter.