For the Master and the Minions, make sure static name resolution maps the local hostname to both the IPv4 and the IPv6 loopback addresses, e.g.,

vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 host.example.local host
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 host.example.local host
For the Minion, make sure the salt hostname resolves (the default name Minions use to reach the Master daemon),
ping salt
Note: if using static name resolution, you cannot check with the host command, which queries DNS directly and ignores /etc/hosts.
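Instead of host, getent can be used to verify the resolution, since it follows the NSS order from /etc/nsswitch.conf and therefore sees static /etc/hosts entries too (assuming the Master answers to the salt name):

```shell
# 'host' and 'dig' query DNS servers directly and skip /etc/hosts;
# getent resolves through NSS, so static entries show up as well
getent hosts salt
```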
Proceed with Salt component installation,
yum install systemd systemd-python python-hashlib
wget "https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm"
rpm -ivh salt-repo-latest-2.el7.noarch.rpm
yum clean expire-cache

#yum install salt-master
#cp -pi /etc/salt/master /etc/salt/master.dist
yum install salt-minion
cp -pi /etc/salt/minion /etc/salt/minion.dist
yum install salt-ssh
#yum install salt-syndic
#yum install salt-cloud
#yum install salt-api
work around the minion-2016.11.5 FIPS bug,
rpm -qa | grep domex
rpm -e --nodeps python2-pycryptodomex
yum install python-crypto
Ready to go,
#systemctl start salt-master
#systemctl status salt-master
systemctl start salt-minion
systemctl status salt-minion
and check (make sure firewalls are not blocking TCP ports 4505 and 4506 on the Master, so the Minions can reach it),
netstat -antupe | egrep '450[56]'
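If firewalld is running on the Master (the RHEL/CentOS 7 default), the publisher and return ports can be opened along these lines, a sketch assuming the default zone:

```shell
# the Salt Master listens on TCP 4505 (publisher) and 4506 (returns)
firewall-cmd --permanent --add-port=4505-4506/tcp
firewall-cmd --reload
# verify the ports are now open
firewall-cmd --list-ports
```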
On the master node, print all the fingerprints,
salt-key -F master
On the master node, print all pub keys,
salt-key --list all (or -L)
On the minion nodes, print the local key fingerprint,
salt-call --local key.finger
On the master node, check that it corresponds to known minions,
salt-key --finger <minion id>
Accept all the pending minion keys at once,
salt-key -A
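To be more careful than a blanket -A, keys can also be reviewed and accepted one at a time (minion1 below is a placeholder for an actual Minion ID):

```shell
# list pending (unaccepted) keys only
salt-key -l pre
# accept a single minion key by id
salt-key -a minion1
```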
Now check that the communications are working against all minions from the master,
salt '*' test.ping
Prepare the folder that is going to contain modules and state files,
mkdir -p /srv/salt/

vi /etc/salt/master

extension_modules: /srv/salt

systemctl restart salt-master.service
Create sample state files,
vi /srv/salt/network_packages.sls

network_packages:
  pkg.installed:
    - pkgs:
      - rsync
      - lftp
      - curl

vi /srv/salt/specific_shit.sls

specific_shit:
  file.directory:
    - name: /opt/my_new_directory
    - user: root
    - group: root
    - mode: 755
Create a top state file,
vi /srv/salt/top.sls

base:
  '*':
    - network_packages
  'minion1':
    - specific_shit
And apply (assuming only redhat hosts for now),
salt '*' state.apply
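A single state can also be applied by name, and test=True performs a dry run that reports what would change without touching the Minions:

```shell
# dry run: show pending changes only
salt '*' state.apply network_packages test=True
# apply just that state for real
salt '*' state.apply network_packages
```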
Either use Minion IDs, advanced targeting, or node groups.
Configure node groups using advanced targeting against grains (-G), e.g.,
vi /etc/salt/master

nodegroups:
  linux: 'G@kernel:linux'
  redhat: 'G@os_family:redhat'
  rhel: 'G@os:redhat'
  centos: 'G@os:centos'
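Once defined (and the Master restarted), node groups are targeted with -N, and the same grain matches also work ad hoc with -G:

```shell
# target a node group defined in /etc/salt/master
salt -N redhat test.ping
# equivalent ad hoc grain targeting, no node group needed
salt -G 'os_family:redhat' test.ping
```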
e.g.,
salt '*' test.ping
salt '*' cmd.run 'uname -a'
salt '*' disk.usage
salt '*' cmd.run 'ls -l /etc'
salt '*' cmd.exec_code python 'import sys; print sys.version'
salt '*' pkg.install vim
salt '*' network.interfaces
salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo
see what other functions are available,
salt '*' sys.doc | less
Check the status of the SaltStack cluster,
salt '*' state.apply
salt '*' saltutil.sync_modules
salt '*' saltutil.sync_all
Search for service names,
salt minion1 service.get_all
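With a name taken from that list, the service execution module can then check and control the unit; sshd below is just an example service name:

```shell
# query and restart a service on a given minion
salt minion1 service.status sshd
salt minion1 service.restart sshd
```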