5 Sysadmin Tips for using SaltStack

Mike R
Feb 15, 2023

I’ve been using Salt for several years to manage my company’s infrastructure and networks. It’s a great product and makes any sysadmin’s life much easier.

Here are the 5 most important tips I’ve learned from that experience.

1. Back up all files managed by Salt

This one was a trial-by-fire lesson. I rolled out some user home directory changes via Salt and ended up mangling our most important user account (.ssh permissions, keys, etc.), which had a limited but real impact on operations.

Fortunately, we had a backup script that saved these files to an external disk, so we were able to restore the user files. But a hard lesson was learned: anything touched by Salt should be backed up somewhere, and Salt makes this extremely easy.

Enable file backups on the Salt minion: edit the minion config file (/etc/salt/minion) and uncomment backup_mode: minion, then restart the minion service. Any time Salt modifies a file, a timestamped copy of the original will be placed in the /var/cache/salt/minion/file_backup directory.
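A minimal sketch of the change (the config path and backup location are from the Salt docs; the service name can vary by distro):

# /etc/salt/minion
backup_mode: minion

systemctl restart salt-minion

Salt’s file module also ships functions to inspect and restore these backups; for example, with a hypothetical managed file:

salt 'target' file.list_backups /etc/myapp.conf
salt 'target' file.restore_backup /etc/myapp.conf 0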

2. Always use “test=true” when running ad hoc state.sls or state.highstate

If you aren’t using a CI/build process that tests every code change, make sure to always run highstate with test=true first, as this only simulates the changes rather than applying them. This is a must in a production environment, where a tiny code change can have severe consequences.

salt 'target' state.highstate test=true

Running with test=true shows the potential changes (highlighted in yellow), which you can review before applying.
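A hypothetical test run (output abridged; the exact format varies by Salt version):

salt 'web01' state.highstate test=true

web01:
----------
          ID: manage_sshd_config
    Function: file.managed
        Name: /etc/ssh/sshd_config
      Result: None
     Comment: The file /etc/ssh/sshd_config is set to be changed
     Changes: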

3. Use Salt’s Python flexibility

Salt is basically a Python-based backbone for your entire infrastructure, giving you all the tools you need to extend and improve your monitoring, deployment, configuration, and so on.

Because it’s an entirely Python-driven event framework, it’s very easy to extend Salt to serve your special use cases.

Here’s an example of a custom grain that returns a grain called “mynetwork”.

You can see I’m using Salt’s is_linux utility, which tells me whether the target is a Linux host.

Create a new folder called _grains in your Salt repo, then add a new file, network.py:

#!/usr/bin/python
from __future__ import print_function

import socket
import logging

from salt.utils.platform import is_linux

log = logging.getLogger(__name__)

# map each network to its member hosts
networks = {}
networks["network1"] = ["host1", "host2", "host3"]
networks["network2"] = ["host4", "host5"]


def get_network():
    """Determine which network a host is in."""
    if not is_linux():
        return {}

    hostname = socket.gethostname()

    # EC2 instances report a serial number starting with "ec2"
    with open("/sys/devices/virtual/dmi/id/product_serial") as content:
        serial = content.read()

    network = ""
    for nw in networks:
        if hostname in networks[nw]:
            network = nw

    if not network and serial.startswith("ec2"):
        network = "ec2"

    if not network:
        network = "unknown"

    return {"mynetwork": network}


if __name__ == "__main__":
    print(get_network())
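Custom grains have to be synced out before the minions pick them up; the standard way is:

salt \* saltutil.sync_grains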

Now when you run

salt \* grains.get mynetwork

you will see the dynamic data pulled from this Python file.

4. Install salt-minions via Python pip, not via package repositories

Installing and maintaining salt-minions can be painful if you have a fleet of servers with various OS and Python versions. I found it easier to install all Salt-related packages using pip.

The beauty of this is that everything lives inside a virtual environment, so there is zero risk of version issues or collisions with external Python packages.
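In its simplest form that’s just a two-liner (a sketch; the full bootstrap script below also handles Python version detection, proxies, and the systemd unit):

python3 -m venv /opt/salt
/opt/salt/bin/pip install salt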

Whenever I want to upgrade my minions/agents, all I need to do is run:

/opt/salt/bin/pip install salt --upgrade

To do this, I use a simple bash script that bootstraps a minion and installs the Salt packages into a virtual environment located in /opt/salt:

#!/bin/bash

# salt minion installer

export PYTHONIOENCODING=utf8
VENVPATH="/opt/salt"


# get latest py3 version
[ -f /bin/python3 ] && PYPATH=/bin/python3
[ -f /bin/python3.6 ] && PYPATH=/bin/python3.6
[ -f /bin/python3.7 ] && PYPATH=/bin/python3.7
[ -f /bin/python3.8 ] && PYPATH=/bin/python3.8
[ -f /bin/python3.9 ] && PYPATH=/bin/python3.9
[ -f /bin/python3.10 ] && PYPATH=/bin/python3.10

[ -z "${PYPATH}" ] && { echo "No python3 detected, exiting"; exit 1; }

# upgrade pip
$PYPATH -m pip install --upgrade pip --proxy <PROXY IP>:3128

# create venv
[ -d "${VENVPATH}/bin" ] || { cd "/opt"; $PYPATH -m venv salt; }

# install pkgs
[ -f "${VENVPATH}/bin/salt" ] || /opt/salt/bin/pip3 install salt pyinotify dictor --proxy <PROXY>:3128

ln -sf $VENVPATH/bin/salt-minion /usr/bin/salt-minion
ln -sf $VENVPATH/bin/salt-call /usr/bin/salt-call


echo "
[Unit]
Description=The Salt Minion
Documentation=man:salt-minion(1) file:///usr/share/doc/salt/html/contents.html https://docs.saltstack.com/en/latest/contents.html
After=network.target salt-master.service

[Service]
KillMode=process
Type=notify
NotifyAccess=all
LimitNOFILE=8192
ExecStart=/opt/salt/bin/salt-minion

[Install]
WantedBy=multi-user.target
" >> /usr/lib/systemd/system/salt-minion.service

systemctl daemon-reload

mkdir -p /etc/salt

echo "
master: saltmaster
id: $(hostname)
" >> /etc/salt/minion

echo "<MASTER IP> saltmaster" >> /etc/hosts

This script automatically creates a Python virtual environment in /opt/salt.

It then installs the salt package (which provides salt-minion and salt-call) into this virtualenv, symlinks the binaries, creates a systemd unit to start the Salt minion, writes the minion config file, and adds the master IP to /etc/hosts.
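Note the script doesn’t start the minion itself; that’s a standard systemd step once everything is in place:

systemctl enable salt-minion
systemctl start salt-minion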

Once your minion’s service starts and it connects to the master (and you accept its key), you can run a highstate to configure it. The best part is that upgrading the minion agent itself is very easy; from the Salt master, simply run:

salt \* cmd.run "/opt/salt/bin/pip install salt --upgrade"

Once the agents are upgraded, restart the salt-minion service to pick up the new version:

salt \* service.restart salt-minion

Check that the minions are on the latest version:

salt \* grains.get saltversion

5. Use Salt’s SDB to store and hand out credentials

To store application credentials, secrets, keys, etc., you can use a dedicated secrets platform like HashiCorp Vault, which is built to hand out secrets via API calls.

The problem is that this involves a relatively complex setup and configuration of Vault or a similar solution. If all you need is to hand out credentials to your formulas, you can use Salt’s excellent built-in credential storage: SDB (Simple Database).

Here’s a simple example: I’m using salt-cloud to manage my AWS instances (starting and stopping EC2 instances), and I need to store the AWS key somewhere on my Salt master so that the master can connect to AWS and authenticate.

On your Salt master, create a new file at /etc/salt/master.d/cred:

vim /etc/salt/master.d/cred


saltcred:
  driver: yaml
  files:
    - /etc/salt/.cred.yaml

Here we are adding an SDB data structure called “saltcred”, which we can then reference via our pillar data. The data itself comes from a credentials YAML file.

Create a new .cred.yaml file in /etc/salt (note the leading dot, which makes it a hidden file; that’s not real secrecy, but it helps if the file isn’t obvious).

Here I am storing several credentials that I will use in my Salt formulas, i.e. EC2 via salt-cloud, Restic backups, and Stunnel:

cat /etc/salt/.cred.yaml


---
ec2_start_stop:
  id: ABC111111
  key: YxhY3zxxxxxxxxxxxxxxxxxxxxx

restic_os_backup_s3:
  aws_access_key: ZZZZXXX1111etc
  aws_access_key_id: AJJJXXX1111etc
  restic_pw: someSecretPassword

stunnel_psk:
  pw: "someSecretPSK"

Secure this YAML file:

chmod 600 /etc/salt/.cred.yaml
chown root:root /etc/salt/.cred.yaml

In your master’s pillar file, provide pillar data for your EC2 creds:

ec2_cred: {{ salt['sdb.get']('sdb://saltcred/ec2_start_stop') | yaml_encode }}

To get a specific subkey, i.e.

ec2_start_stop/id

use a colon:

ec2_cred: {{ salt['sdb.get']('sdb://saltcred/ec2_start_stop:id') | yaml_encode }}

>> ABC111111

Salt will access the SDB data structure called “saltcred” and parse it to find a key called “ec2_start_stop”.

This credential is stored in pillar, which means it is only accessible to the targeted minion and is encrypted in transit.
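You can also sanity-check a lookup from the master’s command line with Salt’s standard SDB runner:

salt-run sdb.get sdb://saltcred/ec2_start_stop:id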

Now, in your cloud configuration (or a regular state formula), you can access this cred at runtime:

{% set cred = pillar.get('ec2_cred') -%}

ec2ohio:
  driver: ec2
  vpcid: vpc-1e0axxx
  id: {{ cred["id"] }}
  key: {{ cred["key"] }}

The “ec2_cred” here is the pillar key we defined above in the master’s pillar file; it is simply a dictionary containing the ID and key for AWS authentication.
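The same pattern works in a regular state formula. Here’s a hypothetical sketch using the Restic credentials from the same file, assuming a pillar key restic_cred defined the same way as ec2_cred above (the target path and mode are made up for illustration):

{% set restic = pillar.get('restic_cred', {}) -%}

/etc/restic/env:
  file.managed:
    - mode: '600'
    - contents: |
        AWS_ACCESS_KEY_ID={{ restic['aws_access_key_id'] }}
        AWS_SECRET_ACCESS_KEY={{ restic['aws_access_key'] }}
        RESTIC_PASSWORD={{ restic['restic_pw'] }}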
