
Getting Started with SaltStack Config: Working with Pillars

SaltStack is a configuration management and automation tool used to manage server deployments. In this blog post, we will show you how to get started with SaltStack Config, specifically the usage of Pillars. We will cover the basics of working with Pillars, including a use case and how to use Pillars in state files. Stay tuned for future posts in this series, where we will go into more detail on using SaltStack to manage and automate your deployments!

What is a Pillar in SaltStack?

A Pillar is a way to store data that is specific to a Minion or a group of Minions. This data can be used in Salt states. For example, you might have a Pillar file that contains sensitive data, such as passwords or API keys. The Salt Master only delivers this data to the Minions that are targeted for it, keeping it hidden from all other Minions.

What is the difference between a Grain and Pillar Data?

Grains are collected directly from the machine that runs a Minion and presented to SaltStack through the Grains interface. The Grains interface presents information about the operating system, domain name, IP address, kernel, OS type, memory usage, and many other properties of the underlying system.

Grains are specific to a single machine, may differ from machine to machine, and can only be used on the machine they were collected from. Pillar data, on the other hand, is generated and/or stored centrally on the Salt Master and can be assigned to one or many Minions, which use it when they apply a specific state.

You can combine Grains and Pillar data by using Grains inside Pillar files. For example, if you want to install a specific type of package on different operating system distributions, you don’t need a separate state file for each operating system. Instead of maintaining many state files, you can reduce them to one state file and one Pillar: the state file performs the package installation but receives the name of the package from the Pillar.

Inside the Pillar, you use Grains that expose operating system information to select the package name that matches the operating system.

The example below shows a Pillar that contains different web server packages, depending on the Linux distribution.

pkgs:
  {% if grains['os_family'] == 'RedHat' %}
  apache: httpd
  {% elif grains['os_family'] == 'Debian' %}
  apache: apache2
  {% elif grains['os'] == 'Arch' %}
  apache: apache
  {% endif %}
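To show how a state file could consume this Pillar, here is a minimal sketch; the state ID and the idea of storing it as, say, webserver.sls are hypothetical, not part of the original example:

```yaml
# Hypothetical state file, e.g. /srv/salt/webserver.sls
install web server:
  pkg.installed:
    - name: {{ pillar['pkgs']['apache'] }}
```

Because the package name comes from the Pillar, this single state file works unchanged on RedHat, Debian, and Arch systems.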

How to use Pillars?

When you start working with Pillars to store data, you need to know where Pillar files are stored. This location is defined in the master configuration file, which is located at /etc/salt/master by default.

In this file you will find a setting called pillar_roots.

pillar_roots:
  base:
    - /srv/pillar
  dev:
    - /srv/pillar/dev
  prod:
    - /srv/pillar/prod
  __env__:
    - /srv/pillar/others

This setting will be commented out with # when you open your master configuration for the first time. Remove the # characters and restart the master service (for example, with systemctl restart salt-master) for the setting to take effect.

The next thing you will notice are the attributes nested under pillar_roots. These are the settings for different environments. We will cover the usage of environments in another blog post in this series. For now, it is important to understand that a Pillar location is defined for each environment inside the master configuration. The default environment is called “base”, with a default location for Pillar files in /srv/pillar. If you are using more environments, you can add their locations here as well.

How does Salt know which Pillar must be applied to which Minion?

As I have mentioned earlier, you can store sensitive data in a Pillar that is only accessible by a specific Minion or group of Minions. To avoid spreading your sensitive data across your entire network, Pillar files are organized and structured in Pillar top files. These top files are not the same as state top files, but they work in a similar way.

For each environment, you need to define a Pillar top file that matches Salt Pillar data to Salt Minions.

For example, if you want all of your Salt-managed machines to be aware of general settings and attributes, you can use the “*” character. This will allow you to assign the “default” Pillar to all Minions.

base:
  '*':
    - default
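As an illustration, such a “default” Pillar would live at /srv/pillar/default.sls in the base environment. The keys below are purely hypothetical examples of general settings every machine might need:

```yaml
# Hypothetical /srv/pillar/default.sls
ntp_server: pool.ntp.org
timezone: UTC
admin_contact: admin@corp.local
```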

When you refresh your Salt Pillar data, all Minions will be given the default Pillar. If you want to initiate a refresh of all Salt pillars, use this command:

salt '*' saltutil.refresh_pillar

How to structure and use Pillar data?

The first step is to create a Pillar file, for example one called “my_ftp_credentials.sls”.

The second step is to add “my_ftp_credentials” to the Pillar top file of the environment (note that the .sls extension is omitted in the top file).

base:
  'my_machine1':
    - my_ftp_credentials

In the example above, the Minion with the ID “my_machine1” is the only Minion that receives this sensitive data.

The third step is to update your Pillar with some data. Edit the Pillar file “my_ftp_credentials.sls” and add some key:value pairs.

ftpusername: myFTPUser
ftppassword: strongFTPpassword
ftpFQDN: myFTPserver.corp.local

The last step is to use the Pillar data inside your state file.

sync directory:
  cmd.run:
    - name: lftp -c "open -u {{ pillar['ftpusername'] }},{{ pillar['ftppassword'] }}
           -p 22 sftp://{{ pillar['ftpFQDN'] }};mirror -c -R /local /remote"
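As a side note, states frequently read Pillar data through salt['pillar.get'], which accepts a fallback value so the state still renders when a key is missing from the Pillar. A small sketch of that pattern; the state ID and the fallback value are illustrative:

```yaml
show ftp user:
  cmd.run:
    - name: echo {{ salt['pillar.get']('ftpusername', 'anonymous') }}
```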

To help you understand the concept of different environments, consider using the same Pillar file and state file in a different environment. Imagine you have a production and a staging environment with separate FTP servers. The Pillar file in the production environment could contain a different ftpFQDN and username/password than the one in staging. That way, you only need to update the Pillar file, not the state file. Or, the other way around: if the Pillar files are already defined for each environment, you can modify and test a new version of your “sync directory” state file in the staging environment, and once it has been tested successfully, promote it to production without passing sensitive user data between the environments.
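To sketch this with the pillar_roots shown earlier, each environment would carry its own top file and its own copy of the Pillar file; the paths follow the earlier pillar_roots example, while the host name and value below are hypothetical:

```yaml
# /srv/pillar/prod/top.sls
prod:
  'my_machine1':
    - my_ftp_credentials

# /srv/pillar/prod/my_ftp_credentials.sls
ftpFQDN: ftp-prod.corp.local
```

The staging environment would mirror this layout under its own Pillar root, with its own credentials, so the state file never has to change between environments.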