Hiera, R10K, and the end of manifests as we know them

Last week, we started using Hiera. We're going to do a lot more today. First, we'll add what we have to version control, then we'll integrate it with r10k, and we'll wrap up by migrating more content out of our manifests into Hiera. Along the way, we'll explain how Hiera works. I also encourage you to review the Puppet Labs docs and the source code as needed.

Version Control

Before we do anything else, we need to take the existing hiera data and put it in version control. Just like our modules and manifests, hiera data will be edited to match our feature implementations and to define the infrastructure in use; it is just as important as our module code. I've created a GitHub repository called hiera-tutorial and will reference that, but any git-based repository will work for our purposes. Create a directory for the local repo, add our existing content, and push it to the origin:

[rnelson0@puppet ~]$ cd git
[rnelson0@puppet git]$ mkdir hiera-tutorial
[rnelson0@puppet git]$ cd hiera-tutorial
[rnelson0@puppet hiera-tutorial]$ cp -r /etc/puppet/data/* ./
[rnelson0@puppet hiera-tutorial]$ ls
global.yaml  puppet_role
[rnelson0@puppet hiera-tutorial]$ git init
Initialized empty Git repository in /home/rnelson0/git/hiera-tutorial/.git/
[rnelson0@puppet hiera-tutorial]$ git add .
[rnelson0@puppet hiera-tutorial]$ git commit -m 'Initial commit to hiera repo'
[master (root-commit) 4293c1c] Initial commit to hiera repo
 3 files changed, 8 insertions(+), 0 deletions(-)
 create mode 100644 global.yaml
 create mode 100644 puppet_role/puppet.yaml
[rnelson0@puppet hiera-tutorial]$ git remote add origin git@github.com:rnelson0/hiera-tutorial.git
[rnelson0@puppet hiera-tutorial]$ git push origin master
Counting objects: 6, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (6/6), 451 bytes, done.
Total 6 (delta 0), reused 0 (delta 0)
To git@github.com:rnelson0/hiera-tutorial.git
* [new branch]      master -> master

In a bit, we'll make a decision about whether we want to use branches in our hiera data. If you decide to do that, create the production branch and push the contents to it, then change the primary branch at GitHub and remove the now-defunct master branch. Even if you stick with a single branch, you can use this process to change the branch name, for example to data.

[rnelson0@puppet hiera-tutorial]$ git checkout -b production
Switched to a new branch 'production'
[rnelson0@puppet hiera-tutorial]$ git branch -d master
Deleted branch master (was 0e43028).
[rnelson0@puppet hiera-tutorial]$ git push origin production
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rnelson0/hiera-tutorial.git
 * [new branch]      production -> production
<Change the primary branch at GitHub now>
[rnelson0@puppet hiera-tutorial]$ git push origin :master
To git@github.com:rnelson0/hiera-tutorial.git
 - [deleted]         master

With our existing content preserved, we can move on to the next step.

Integration with R10K

Now we need to edit the r10k installation manifest. We have already configured r10k to populate environments with modules based on the branches of our puppet and module repos. We will add a source for hiera, which lets us make changes to hiera via git and have r10k push the changes around instead of manually copying files ourselves. It also lets us choose between a monolithic hiera configuration and environment-specific configurations.

Because the initial configuration of r10k was performed as root, move the r10k_installation.pp manifest from root's home directory to ours, change its ownership, and add the hiera source.

[rnelson0@puppet hiera-tutorial]$ cd
[rnelson0@puppet ~]$ sudo mv /root/r10k_installation.pp ./
[rnelson0@puppet ~]$ sudo chown rnelson0.rnelson0 r10k_installation.pp
[rnelson0@puppet ~]$ vi r10k_installation.pp

Here are the contents of the file. A source for hiera has been added – plus a comma after the existing puppet subsection – and the modulepath/manifestdir settings that were deprecated in Puppet 3.6.1 have been removed:

class { 'r10k':
  version => '1.2.0',
  sources => {
    'puppet' => {
      'remote'  => 'https://github.com/rnelson0/puppet-tutorial.git',
      'basedir' => "${::settings::confdir}/environments",
      'prefix'  => false,
    },
    'hiera' => {
      'remote'  => 'https://github.com/rnelson0/hiera-tutorial.git',
      'basedir' => "${::settings::confdir}/data",
      'prefix'  => false,
    }
  },
  manage_modulepath => false,
}

Apply the manifest. You’ll see a small change to the hiera configuration file:

[rnelson0@puppet ~]$ sudo puppet apply r10k_installation.pp
Notice: Compiled catalog for puppet.nelson.va in environment production in 0.86 seconds
Notice: /Stage[main]/Main/Ini_setting[manifestdir]/ensure: created
Notice: /Stage[main]/R10k::Config/File[r10k.yaml]/content: content changed '{md5}3831ea2606d05e88804d647d56d2e12b' to '{md5}62730aa21170be02c455406043ef268e'
Notice: /Stage[main]/R10k::Config/Ini_setting[R10k Modulepath]/ensure: created
Notice: Finished catalog run in 0.57 seconds
[rnelson0@puppet ~]$ cat /etc/r10k.yaml
:cachedir: /var/cache/r10k
:sources:
  puppet:
    prefix: false
    basedir: /etc/puppet/environments
    remote: "https://github.com/rnelson0/puppet-tutorial.git"
  hiera:
    prefix: false
    basedir: /etc/puppet/data
    remote: "https://github.com/rnelson0/hiera-tutorial.git"

:purgedirs:
  - /etc/puppet/environments

We'll also make a change to the datadir value in /etc/hiera.yaml (or /etc/puppet/hiera.yaml; the two should be symlinked). You have two choices here. The first example below lets us create a hiera branch matching each of our feature branches, giving us a hiera configuration per environment. The disadvantage is that you MUST branch for each environment, otherwise catalog compilation will fail for those dynamic environments. The second examples, commented out, point directly at the production branch. This relies on the :hierarchy: setting in /etc/hiera.yaml including the environment (the final commented-out line would be for a single data branch; there's more than one way to do this). The advantage is that you never need to branch your hiera repo. Choose one of these settings and make it so. I've gone with the former in my public repo, but the last option at work.

  # One branch per environment
  :datadir: '/etc/puppet/data/%{environment}'

  # Make sure /etc/hiera.yaml's :hierarchy: includes "%{environment}" statements
  #:datadir: '/etc/puppet/data/production'
  # If you set the branch to 'data' you can tell r10k to use ${::settings::confdir} and this datadir
  #:datadir: '/etc/puppet/data'
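If you go with the second approach, the whole hiera.yaml ends up looking something like this sketch (the hierarchy levels shown are the ones used later in this article; adjust them to taste, and note the "%{environment}" level):

```yaml
---
:backends:
  - yaml
:yaml:
  :datadir: '/etc/puppet/data/production'
:hierarchy:
  - defaults
  - puppet_role/%{puppet_role}
  - "%{clientcert}"
  - "%{environment}"
  - global
```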

Restart the puppetmaster service. Any time you make changes to /etc/hiera.yaml, you will need to restart puppetmaster for the changes to take effect; you do NOT need to restart it when the contents of datadir change, however. Finally, re-run r10k and you'll see the differences in the contents of /etc/puppet/data:

[rnelson0@puppet hiera-tutorial]$ ls /etc/puppet/data/puppet_role
puppet.yaml
[rnelson0@puppet hiera-tutorial]$ sudo r10k deploy environment -p
[rnelson0@puppet hiera-tutorial]$ ls /etc/puppet/data/puppet_role
ls: cannot access /etc/puppet/data/puppet_role: No such file or directory
[rnelson0@puppet hiera-tutorial]$ ls /etc/puppet/data/production/
.git/        global.yaml  puppet_role/

If everything went well, run puppet on the master and on server01, and everything should work as it did before. Verify that before continuing. If it does not, some things to look for: the comma between the existing puppet and the new hiera source in the r10k manifest, whether you added modulepath/manifestdir back to /etc/puppet/puppet.conf after migrating to environmentpath (you should not have), and whether you restarted the puppetmaster service.

Converting manifests

The next thing we need to do is convert our existing manifests/site.pp entries into hiera definitions. Last week, we converted the server role into a hiera definition. Let’s look at the definition for the puppet node:

node 'puppet.nelson.va' {
  include ::base
  notify { "Generated from our notify branch": }

  # PuppetDB
  include ::puppetdb
  include ::puppetdb::master::config

  # Hiera
  package { ['hiera', 'hiera-puppet']:
    ensure => present,
  }

  class { '::mcollective':
    client             => true,
    middleware         => true,
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }

  user { 'root':
    ensure => present,
  } ->
  mcollective::user { 'root':
    homedir     => '/root',
    certificate => 'puppet:///modules/site_mcollective/client_certs/root.pem',
    private_key => 'puppet:///modules/site_mcollective/private_keys/root.pem',
  }

  mcollective::plugin { 'puppet':
    package => true,
  }
}

We need to create a few profiles and roles, starting with the profiles. There will be five of them, including a new profile that ensures the puppet-server package is installed and adds some firewall rules for it (we should have been tracking this earlier, but I forgot!). I've only included the comment header in the first file, but you should be sure to include a similar header in each file:

[rnelson0@puppet profile]$ cat manifests/puppetdb.pp
# == Class: profile::puppetdb
#
# PuppetDB profile
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include profile::puppetdb
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class profile::puppetdb {
  include ::puppetdb
  include ::puppetdb::master::config
}
[rnelson0@puppet profile]$ cat manifests/hiera.pp
# Comments go here
class profile::hiera {
  package { ['hiera', 'hiera-puppet']:
    ensure => present,
  }
}
[rnelson0@puppet profile]$ cat manifests/mcollective/all.pp
# Comments go here
class profile::mcollective::all {
  class { '::mcollective':
    client             => true,
    middleware         => true,
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }

  mcollective::plugin { 'puppet':
    package => true,
  }
}
[rnelson0@puppet profile]$ cat manifests/mcollective/users.pp
# Comments go here
class profile::mcollective::users {
  user { 'root':
    ensure => present,
  } ->
  mcollective::user { 'root':
    homedir     => '/root',
    certificate => 'puppet:///modules/site_mcollective/client_certs/root.pem',
    private_key => 'puppet:///modules/site_mcollective/private_keys/root.pem',
  }
}
[rnelson0@puppet profile]$ cat manifests/puppet_master.pp
# Comments go here
class profile::puppet_master {
  package {'puppet-server':
    ensure => present,
  }

  firewall { '100 allow agent checkins':
    dport  => 8140,
    proto  => 'tcp',
    action => 'accept',
  }

  firewall { '110 sinatra web hook':
    dport  => 80,
    proto  => 'tcp',
    action => 'accept',
  }
}

The corresponding role is very simple:

class role::puppet {
  include profile::base  # All roles should have the base profile
  include profile::puppet_master
  include profile::puppetdb
  include profile::hiera
  include profile::mcollective::users
  include profile::mcollective::all
}

We still have a definition for puppet in the site manifest. You can reduce the site.pp file to just a few lines now:

Package {
  allow_virtual => true,
}

node default {
  hiera_include('classes')
}

Commit/push changes and re-deploy with r10k. Whoops, we forgot to update hiera. That’s okay, now you know what the error message looks like when you skip this step:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find data item classes in
 any Hiera data file and no default supplied at /etc/puppet/environments/production/manifests/site.pp:6 on nod
e puppet.nelson.va

Hopefully, you remember how to fix this from last week. If not, navigate to your hiera repo’s puppet_role/ directory. Create a yaml file for the puppet puppet_role (or whatever puppet_role you decided upon for it) and add the role::puppet class to it:

[rnelson0@puppet puppet_role]$ cat > puppet.yaml
---
classes:
  - role::puppet

Commit/push/r10k and run puppet again. You should see the firewall rules from profile::puppet_master and maybe an mcollective update; if you already had the rule in place from our earlier work then you may not see anything other than a successful checkin.

Defines – A Special Case

If you review the site.pp file, you'll notice that it uses a function called hiera_include. This includes the listed classes, and each of those classes looks up its parameters via hiera. However, there are some other hiera functions you may need. A common example is the case of defines with node-specific information. Above, in profile::mcollective::users, we use the core user type and the mcollective::user define. As the values for these are the same for every node, we simply provided all the correct key/value pairs in the module. When we need the values to be specific to a node (or environment, or network – anything more specific than "all"), we can't populate the define's attributes in the module.

Since we don't have anything like that in our example setup, we'll have to make something up. Let's create a definition for a DHCP server – probably a helpful thing to have in a lab network anyway. Here's an example of a way to do that with profiles and the site manifest, which we will then convert to hiera plus roles and profiles:

<snippet of manifests/site.pp>
node /dhcp/ {
  include ::profile::dhcp_server

  ::dhcp::server::subnet { '10.0.0.0':
    broadcast   => '10.0.0.255',
    netmask     => '255.255.255.0',
    routers     => '10.0.0.1',
    range_begin => '10.0.0.100',
    range_end   => '10.0.0.150',
    dns_servers => ['10.0.0.1', '10.0.0.2'],
    domain_name => 'nelson.va',
    other_opts  => ['option ntp-servers 10.0.0.1'],
  }
}


[rnelson0@puppet profile]$ cat > manifests/dhcp_server.pp
class profile::dhcp_server {
  include ::dhcp::server

  firewall { '100 Allow DHCP requests':
    proto  => 'udp',
    sport  => [67, 68],
    dport  => [67, 68],
    state  => 'NEW',
    action => 'accept',
  }
}

The define above is ::dhcp::server::subnet. You cannot hiera_include a define, which means that if you convert this to a role as-is, every DHCP server will serve the same scope. If you've never had the pleasure of two servers serving the same scope, I can guarantee you that you don't want it! We'll want to move that data out of the module and into hiera, but how? We'll use a combination of hiera_hash, which builds a hash from hiera data, and create_resources, which can instantiate resources – including defines – using a provided hash to populate the needed values. The PuppetLabs documentation shows a simple example of how this works with a flat manifest and with an array, but how would we go about doing this with hiera?

First, let’s take a look at where to put the data in hiera. If we look at the :hierarchy: portion of hiera.yaml, we’ve got a few options:

:hierarchy:
  - defaults
  - puppet_role/%{puppet_role}
  - "%{clientcert}"
  - "%{environment}"
  - global
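To make the lookup behavior concrete, here is a small Python sketch – not Hiera's actual implementation, and the data below is invented for illustration – contrasting the priority lookup used by hiera/hiera_include with the merge behavior of hiera_hash, which we'll use shortly:

```python
# Toy model of Hiera lookups. Each hierarchy level is a (name, data)
# pair, listed highest priority first, mirroring the :hierarchy: order.
hierarchy = [
    ("puppet_role/dhcp", {"classes": ["role::dhcp"]}),
    ("dhcp.nelson.va",   {"dhcp_subnet": {"10.0.0.0": {"routers": "10.0.0.1"}}}),
    ("production",       {}),
    ("global",           {"classes": ["role::base"],
                          "dhcp_subnet": {"192.168.0.0": {"routers": "192.168.0.1"}}}),
]

def hiera(key):
    """Priority lookup (hiera, hiera_include): first match wins outright."""
    for _name, data in hierarchy:
        if key in data:
            return data[key]
    raise KeyError("Could not find data item %s in any Hiera data file" % key)

def hiera_hash(key):
    """Merge lookup (hiera_hash): combine the key's hash from every level.
    The default merge is shallow: a top-level key from a higher-priority
    level replaces the same key from a lower-priority one wholesale."""
    merged = {}
    for _name, data in reversed(hierarchy):  # lowest priority first
        merged.update(data.get(key, {}))
    return merged

print(hiera("classes"))                   # -> ['role::dhcp'] (first match only)
print(sorted(hiera_hash("dhcp_subnet")))  # -> ['10.0.0.0', '192.168.0.0']
```

This is exactly why hiera_hash suits defines: every level of the hierarchy can contribute entries, rather than the highest-priority level winning outright.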

The puppet_role level will potentially apply to multiple nodes, so that's out. Next is the clientcert value. Each node (and mcollective user!) has its own certificate, which can be seen by running puppet cert list --all on the master (optionally, use awk to grab just the important part):

[rnelson0@puppet puppet_role]$ sudo puppet cert list --all | awk '{print $2}'
"agent1.nelson.va"
"puppet.nelson.va"
"root"
"server01.nelson.va"

This should match the FQDN in most environments, but could also be the short hostname (i.e. agent1). This makes it a likely choice for node-specific elements. You could also add to the hierarchy; a popular option is a level that combines environment and clientcert/fqdn, as in "%{environment}/%{clientcert}". I'll assume the simple "%{clientcert}" level is being used, with the value dhcp.nelson.va. In that node's yaml file, we need to create a hash called dhcp_subnet containing all the values provided to the define, with a top-level key that matches the title of the resource – in this case, the subnet. All the other attributes sit underneath that key. Let's define the node yaml, plus the puppet_role yaml for dhcp:

[rnelson0@puppet hiera-tutorial]$ cat > dhcp.nelson.va.yaml
---
dhcp_subnet:
  '10.0.0.0':
    broadcast   : '10.0.0.255'
    netmask     : '255.255.255.0'
    routers     : '10.0.0.1'
    range_begin : '10.0.0.100'
    range_end   : '10.0.0.150'
    dns_servers :
      - '10.0.0.1'
      - '10.0.0.2'
    domain_name : 'nelson.va'
    other_opts  :
      - 'option ntp-servers 10.0.0.1'

[rnelson0@puppet hiera-tutorial]$ cat > puppet_role/dhcp.yaml
---
classes:
  - role::dhcp

To glue everything together, we now need to create the dhcp role and import this information. We’ll use hiera_hash to import the above hash, then create_resources to instantiate a ::dhcp::server::subnet with the hash’s contents:

[rnelson0@puppet role]$ cat manifests/dhcp.pp
class role::dhcp {
  include profile::base  # All roles should have the base profile
  include profile::dhcp_server

  create_resources(::dhcp::server::subnet, hiera_hash('dhcp_subnet'))
}
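There is no magic to create_resources: it walks the hash, treating each top-level key as a resource title and its nested hash as that resource's attributes. A Python sketch of the idea (these are invented stand-ins, not Puppet internals):

```python
declared = []

def dhcp_server_subnet(title, **attributes):
    # Stand-in for the ::dhcp::server::subnet define: record what would
    # be declared instead of actually configuring a DHCP server.
    declared.append((title, attributes))

def create_resources(define, resources):
    """Sketch of create_resources(): declare one resource per title."""
    for title, attributes in resources.items():
        define(title, **attributes)

# Same shape as the dhcp_subnet hash in the yaml above (abbreviated).
dhcp_subnet = {
    "10.0.0.0": {
        "netmask":     "255.255.255.0",
        "range_begin": "10.0.0.100",
        "range_end":   "10.0.0.150",
    },
}

create_resources(dhcp_server_subnet, dhcp_subnet)
print(declared[0][0])  # -> 10.0.0.0
```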

Commit/push/r10k the changes. Since we haven't created a DHCP node yet, you'll have to deploy a VM from a template, as we did last week, give it a hostname of 'dhcp' (numbered instances are far less likely with DHCP servers, but 'dhcp01' works just as well – as long as the hiera yaml filename matches!), and checkin/sign/checkin with puppet. It should receive all the configuration changes required to be a DHCP server for the network 10.0.0.0/24, assigning leases between 10.0.0.100 and 10.0.0.150.

Some other common uses for defines are creating users and managing apache vhost definitions. In both cases, you want the definitions to live in hiera, not in the module. This abstracts the data away from the service definition and allows you to reuse your code very efficiently.

Summary

Building on last week’s roles, profiles, and hiera introduction, we’ve examined how to use version control and r10k to manage hiera, how to convert simple node definitions into roles managed via hiera, and how to provide node-specific configuration on top of the role’s classes. If you have any other node definitions in regular *.pp manifests, take the time to convert them all to hiera. From here on out, we’ll assume that’s where your data lies.

At this point, I'll be taking a bit of a break from the blog for the summer. I'm happy to say that my wife has taken a new job and we will have closed on a new house around the time this article is published. We'll be involved in moving and settling into our new house for a few weeks, but I'll be bringing more puppet-ey goodness in the fall. This seems like a great time to roll up all the changes, so I've added a tag, v0.5, to all the repos associated with this series – a snapshot of the moment this article was finished (6/23/2014).

In the fall, we’ll start working on scaling up our setup. In the meantime, enjoy your ability to deploy new VMs and quickly apply policy to them. Have a great summer, everyone!

I Survived #VirtualDesignMaster Challenge 1!

This week has been pretty exciting. It's getting closer to the move and things are starting to seem real – which means more of my time is going toward it. Somehow, in the midst of all that, I managed to complete my design proposal for Virtual Design Master's first challenge a whopping 30 minutes before the due date. On Thursday night, all the contestants defended their designs. To my surprise, I survived! I am thankful for some critical reviews from Jason Shiplett and some friends on IRC. We lost a few competitors, as is the nature of the challenge, but everyone's designs are amazing. Check them out at http://www.virtualdesignmaster.com/.

This week's challenge is about constraints. We have some physical constraints – we have to use the same vendors, everything needs to fit in 21U, and, oh, by the way, it's on the moon – plus a unique requirement I haven't seen anywhere else: IPv6 only. That's going to be tough. But they weren't done with the constraints yet: we have to use someone else's design from challenge 1! Everyone on Team Beta has to work off the design by Daemon Behr (@VMUG_Vancouver). I'm very honored that my design (@rnelson0) was chosen as the one Team Alpha has to work from.

If you are available next Thursday at 9PM Eastern, tune in at http://www.virtualdesignmaster.com/live/ to see the results of challenge 2!

Intro to Roles and Profiles with Puppet and Hiera

If you’ve been following along with the Puppet series, our next task is to start using roles and profiles. If you’re just visiting, feel free to review the series to get caught up. Today, we will discuss the roles and profiles pattern, start implementing it as well as a custom fact, and deploy a webserver on a node managed by puppet. Finally, we’ll move some of our configuration from the site manifest into Hiera.

NOTE: A small note on security. I’ve been running through this series as ‘root’ and earlier said, “Well, just be more secure in production.” That’s lame. This blog covers security as well as virtualization and automation so I’m going to live up to that. For now, I’ve added a local user with useradd, updated sudoers, and cloned all the repos so that I can show best practices, which will include doing most work as my user and then sudo/su to run a few commands as root. Later, we’ll manage local users via puppet.

[root@puppet git]# useradd rnelson0 -c "Rob Nelson"
[root@puppet git]# passwd rnelson0
Changing password for user rnelson0.
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.

[root@puppet ~]# cat > /etc/sudoers.d/puppetadmins
rnelson0        ALL=(ALL)       ALL

<Login as rnelson0>
[rnelson0@puppet ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rnelson0/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/rnelson0/.ssh/id_rsa.
Your public key has been saved in /home/rnelson0/.ssh/id_rsa.pub.
...
<Import key into github>
...
[rnelson0@puppet ~]$ cd git
[rnelson0@puppet git]$ git clone git@github.com:rnelson0/puppet-tutorial
Initialized empty Git repository in /home/rnelson0/git/puppet-tutorial/.git/
remote: Counting objects: 848, done.
remote: Compressing objects: 100% (579/579), done.
remote: Total 848 (delta 190), reused 841 (delta 186)
Receiving objects: 100% (848/848), 395.47 KiB, done.
Resolving deltas: 100% (190/190), done.
[rnelson0@puppet git]$ git clone git@github.com:rnelson0/rnelson0-base
Initialized empty Git repository in /home/rnelson0/git/rnelson0-base/.git/
remote: Reusing existing pack: 35, done.
remote: Total 35 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (35/35), 10.11 KiB, done.
Resolving deltas: 100% (10/10), done.
[rnelson0@puppet git]$ git clone git@github.com:rnelson0/site_mcollective.git
Initialized empty Git repository in /home/rnelson0/git/site_mcollective/.git/
remote: Reusing existing pack: 31, done.
remote: Total 31 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (31/31), 12.36 KiB, done.
Resolving deltas: 100% (5/5), done.

Provision Server01

At the beginning of this series, I described a set of 10 nodes we would use for learning simply called server01 through server10. If you did not already provision those nodes, you need to deploy at least one now. You can use a different name and IP but I will be referencing server01 and 10.0.0.51 throughout this article. Since it has been some time, be sure to either update your kickstart template or run ‘yum update -y‘ and reboot after deployment to avoid version mismatches and install the latest security patches. Be sure to add a local user and update sudoers as we have above so that you can manage the node securely.

Roles and Profiles

We're going to look into implementing the roles and profiles pattern. This pattern is very popular, though it is perhaps poorly named. In this pattern, you define a number of profiles, each of which specifies a set of resources, and then define roles that are collections of individual profiles. Ideally, a role has many profiles, and each node definition references a single role. If a node requires more than one role, you should define a new role containing the union of the profiles in those roles. Some people believe that each node should instead have a single profile that denotes the roles it should have, and hence that the pattern is named backwards. However, I will use the pattern as described, mostly because the majority of documentation on the internet assumes the one role/many profiles arrangement, and implementing it the other way around leads to some confusion.

Before we begin, create a role and a profile module. I’ve created two repos on Github (called simply role and profile), two corresponding modules on the server with puppet module generate as we did above, and pushed the new files up to Github. Here are the commands, with the output truncated:

[rnelson0@puppet rnelson0-custom_facts]$ cd ..
[rnelson0@puppet git]$ puppet module generate --modulepath `pwd` rnelson0-role
[rnelson0@puppet git]$ puppet module generate --modulepath `pwd` rnelson0-profile
[rnelson0@puppet git]$ cd rnelson0-role/
[rnelson0@puppet rnelson0-role]$ git init
[rnelson0@puppet rnelson0-role]$ git add .
[rnelson0@puppet rnelson0-role]$ git commit -m "first commit"
[rnelson0@puppet rnelson0-role]$ git remote add origin git@github.com:rnelson0/rnelson0-role.git
[rnelson0@puppet rnelson0-profile]$ git add .
[rnelson0@puppet rnelson0-profile]$ git commit -m "first commit"
[rnelson0@puppet rnelson0-profile]$ git remote add origin git@github.com:rnelson0/rnelson0-profile.git
[rnelson0@puppet rnelson0-profile]$ git push -u origin master

Let's build two example profiles: a base profile, and a very simple web server that requires nothing more than apache. We can build the base profile by looking at init.pp, grabbing the ssh and ntp settings, and putting them in manifests/base.pp within the profile module.

[rnelson0@puppet rnelson0-profile]$ cat > manifests/base.pp
# == Class: profile::base
#
# Base profile
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include profile::base
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class profile::base {
  include ::motd

  # SSH server and client
  class { '::ssh::server':
    options => {
      'PermitRootLogin'          => 'yes',
      'Protocol'                 => '2',
      'SyslogFacility'           => 'AUTHPRIV',
      'PasswordAuthentication'   => 'yes',
      'GSSAPIAuthentication'     => 'yes',
      'GSSAPICleanupCredentials' => 'yes',
      'AcceptEnv'                => 'LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT LC_IDENTIFICATION LC_ALL LANGUAGE XMODIFIERS',
      'Subsystem'                => '      sftp    /usr/libexec/openssh/sftp-server',
      'Banner'                   => '/etc/issue.net',
    },
  }
  class { '::ssh::client':
    options => {
      'Host *' => {
        'SendEnv'                   => 'LANG LC_*',
        'HashKnownHosts'            => 'yes',
        'GSSAPIAuthentication'      => 'yes',
        'GSSAPIDelegateCredentials' => 'no',
      },
    },
  }

  class { '::ntp':
    servers => [ '0.pool.ntp.org', '2.centos.pool.ntp.org', '1.rhel.pool.ntp.org'],
  }
}
CTRL-D

Let's create a simple profile for the webserver that applies puppetlabs' apache class. Note that we have to use the fully qualified name of the class, ::apache; if we leave off the colons, the name resolves within the profile module itself and creates a circular reference.

[rnelson0@puppet rnelson0-profile]$ cat > manifests/apache.pp
# == Class: profile::apache
#
# Apache profile
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include profile::apache
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class profile::apache {
  class {'::apache': }
}
CTRL-D

Finally, let’s create a role for the webserver called role::webserver:

[rnelson0@puppet rnelson0-profile]$ cd ../rnelson0-role/
[rnelson0@puppet rnelson0-role]$ cat > manifests/webserver.pp
# == Class: role::webserver
#
# Webserver role
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include role::webserver
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class role::webserver {
  include profile::apache
  include profile::base  # All roles should have the base profile
}

The roles and profiles will become handy once we define a fact that we can use to match a node to its role.

Custom Facts

What is a fact, in Puppet's parlance? A fact is a piece of information about a puppet node that is collected by the master (or by the local system, when using puppet apply). Facts appear as top-scope variables that can be accessed in a manifest as $factname. Some examples are osfamily (RedHat, Debian, etc.), timezone, is_virtual, fqdn, and network information in the form ipaddress_<int>. You can see facts on a system with puppet by running facter (builtin facts only) or facter -p (includes puppet-defined facts). You must be root to see all facts; regular users only see some of them. Add the name of a fact if you just want that result.

[rnelson0@puppet ~]$ facter | wc
     74     244    3061
[rnelson0@puppet ~]$ sudo facter | wc
     85     300    3416
[rnelson0@puppet ~]$ facter timezone
GMT

You can use these facts in a manifest. A common use is to choose an action based on the OS family – Red Hat uses .rpm, Debian uses .deb, Solaris uses .pkg, and so on. When writing modules for public consumption – and I encourage you to design with this in mind even if the module is private – it is good practice to provide multi-platform support, and this is a great way to do so. You can use any of the facts this way, for instance to use the IP address of eth0 as a default value, to check the free RAM before taking an action, or to detect whether the node is running on a virtual platform.
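As a quick illustration (a generic sketch; these resources are not part of this series' modules), a manifest might branch on the osfamily fact like so:

```puppet
# Pick the right package name for the platform; $::osfamily is a
# built-in fact. Illustrative only.
case $::osfamily {
  'RedHat': { $apache_package = 'httpd' }
  'Debian': { $apache_package = 'apache2' }
  default:  { fail("Unsupported osfamily: ${::osfamily}") }
}

package { $apache_package:
  ensure => present,
}
```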

What if you want the master to know something about nodes that isn't provided by facter? That's where custom facts come in. We can create a fact of our own choosing by writing some Ruby code and adding it to the master; on the next checkin, agents will receive the fact and start providing it in their reports. I'm going to walk us through the process, and you can find more details in the Puppet Labs custom facts documentation.

NOTE: If you started with Puppet about the same time as I started this series, you may have v1.7.x of facter. The current version for CentOS as of this writing is 2.0.1-1. Be sure to update with “yum update -y” or your distribution’s equivalent to ensure you match the latest and greatest before continuing. If you must stay at a previous version, you may run into some issues that you’ll have to debug on your own.

Requirements

We have a few requirements to use custom facts. The first is to ensure that pluginsync is enabled on the master(s). Make sure this configuration option is set in /etc/puppet/puppet.conf, and if not, set it and restart the puppetmaster service:

[main]
pluginsync = true

We also have to comply with the module structure <modulepath>/<module>/lib/facter/<customfact>.rb. Let’s create a new module and repo called custom_facts and add our fact in a file called roles.rb. If you have upgraded to puppet 3.5.x, you’ll notice that puppet module generate now creates metadata for you!
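Once the module is generated and the fact file is in place, the relevant part of the tree should look like this:

```
rnelson0-custom_facts/
└── lib/
    └── facter/
        └── roles.rb
```

Pluginsync only distributes files under lib/, so the lib/facter path matters.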

[rnelson0@puppet git]$ puppet module generate --modulepath `pwd` rnelson0-custom_facts
We need to create a metadata.json file for this module.  Please answer the
following questions; if the question is not applicable to this module, feel free
to leave it blank.

Puppet uses Semantic Versioning (semver.org) to version modules.
What version is this module?  [0.1.0]
-->

Who wrote this module?  [rnelson0]
-->

What license does this module code fall under?  [Apache 2.0]
-->

How would you describe this module in a single sentence?
--> Custom facts for roles and profiles

Where is this module's source code repository?
--> https://github.com/rnelson0/rnelson0-custom_facts

Where can others go to learn more about this module?  [https://github.com/rnelson0/rnelson0-custom_facts]
-->

Where can others go to file issues about this module?  [https://github.com/rnelson0/rnelson0-custom_facts/issues]
-->

----------------------------------------
{
  "name": "rnelson0-custom_facts",
  "version": "0.1.0",
  "author": "rnelson0",
  "summary": "Custom facts for roles and profiles",
  "license": "Apache 2.0",
  "source": "https://github.com/rnelson0/rnelson0-custom_facts",
  "project_page": "https://github.com/rnelson0/rnelson0-custom_facts",
  "issues_url": "https://github.com/rnelson0/rnelson0-custom_facts/issues",
  "dependencies": [
    {
      "name": "puppetlabs-stdlib",
      "version_range": ">= 1.0.0"
    }
  ]
}
----------------------------------------

About to generate this metadata; continue? [n/Y]
--> y

Notice: Generating module at /home/rnelson0/git/rnelson0-custom_facts...
Notice: Populating ERB templates...
Finished; module generated in rnelson0-custom_facts.
rnelson0-custom_facts/tests
rnelson0-custom_facts/tests/init.pp
rnelson0-custom_facts/Rakefile
rnelson0-custom_facts/metadata.json
rnelson0-custom_facts/spec
rnelson0-custom_facts/spec/spec_helper.rb
rnelson0-custom_facts/spec/classes
rnelson0-custom_facts/spec/classes/init_spec.rb
rnelson0-custom_facts/README.md
rnelson0-custom_facts/manifests
rnelson0-custom_facts/manifests/init.pp
[rnelson0@puppet git]$ cd rnelson0-custom_facts/
[rnelson0@puppet rnelson0-custom_facts]$ mkdir -p lib/facter
[rnelson0@puppet rnelson0-custom_facts]$ touch lib/facter/roles.rb
[rnelson0@puppet rnelson0-custom_facts]$ git init
Initialized empty Git repository in /home/rnelson0/git/rnelson0-custom_facts/.git/
[rnelson0@puppet rnelson0-custom_facts]$ git add .
[rnelson0@puppet rnelson0-custom_facts]$ git commit -m 'First commit of custom_facts module'
[master (root-commit) a6239d7] First commit of custom_facts module
 7 files changed, 191 insertions(+), 0 deletions(-)
 create mode 100644 README.md
 create mode 100644 Rakefile
 create mode 100644 lib/facter/roles.rb
 create mode 100644 manifests/init.pp
 create mode 100644 metadata.json
 create mode 100644 spec/classes/init_spec.rb
 create mode 100644 spec/spec_helper.rb
 create mode 100644 tests/init.pp
[rnelson0@puppet rnelson0-custom_facts]$ git remote add origin git@github.com:rnelson0/rnelson0-custom_facts.git
[rnelson0@puppet rnelson0-custom_facts]$ git push -u origin master
Counting objects: 15, done.
Compressing objects: 100% (10/10), done.
Writing objects: 100% (15/15), 3.77 KiB, done.
Total 15 (delta 0), reused 0 (delta 0)
To git@github.com:rnelson0/rnelson0-custom_facts.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.

Next, let’s update the Puppetfile in the puppet repo to reference our three new modules (I’m going to start dropping output that isn’t vital from here on out):

[rnelson0@puppet rnelson0-profile]$ cd ../puppet-tutorial/
[rnelson0@puppet puppet-tutorial]$ vi Puppetfile
...
mod "custom_facts",
  :git => "git://github.com/rnelson0/rnelson0-custom_facts"

mod "role",
  :git => "git://github.com/rnelson0/rnelson0-role"

mod "profile",
  :git => "git://github.com/rnelson0/rnelson0-profile"
[rnelson0@puppet puppet-tutorial]$ git commit -a -m 'Add custom_facts, role, and profile modules'
[rnelson0@puppet puppet-tutorial]$ git push origin

Commit and push the changes for all three repos. Don’t forget to set up your webhooks in every repo, especially the post-receive hook on the GitHub side!

Assigning Roles

Now that we have a role, how do we assign it? We can easily modify the site manifest and create a node definition for server01 with the correct role. That’s fine with one server, but what if we have multiple servers in that role? Imagine for the moment that all ten VMs, server01 through server10, will be web servers. If we strip the numbers from the hostname, we are left with server. We can define a role that all server## VMs use, creating a farm of that application type. You’ve probably seen this before with www1, www2, etc.; we’re just using a different string.

Let’s flesh out our custom fact to create a ‘role’ fact based on the hostname. In our example, the pattern is very simple, /([a-z]+)[0-9]+/, and we can discard the numbers. If there are no trailing numbers, we accept the hostname as it is. In a worst case scenario, we use ‘default’. There are some complex examples out there, for instance by using the entirety of an FQDN to codify a role, environment, and instance number in the short name and a location as the DNS suffix. Instead, we’ll use the hostname fact, a shorter regex, and only create a single fact. Here’s some ruby code that creates a fact called puppet_role:

# ([a-z]+)[0-9]+, i.e. www01 or logger22 have a puppet_role of www or logger
if Facter.value(:hostname) =~ /^([a-z]+)[0-9]+$/
  # Capture the match now; setcode runs lazily, by which time $1 may be clobbered
  role = $1
  Facter.add('puppet_role') do
    setcode do
      role
    end
  end

# ([a-z]+), i.e. www or logger have a puppet_role of www or logger
elsif Facter.value(:hostname) =~ /^([a-z]+)$/
  role = $1
  Facter.add('puppet_role') do
    setcode do
      role
    end
  end

# Fall back to 'default' if neither pattern matches
else
  Facter.add('puppet_role') do
    setcode do
      'default'
    end
  end
end
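The hostname-to-role mapping can be sanity-checked in plain Ruby, outside of Facter. Here role_for is a hypothetical helper mirroring the logic above:

```ruby
# Mirrors the fact logic: strip trailing digits from an all-lowercase
# hostname, pass a plain lowercase hostname through, else use 'default'.
def role_for(hostname)
  case hostname
  when /^([a-z]+)[0-9]+$/ then $1   # www01  -> www
  when /^([a-z]+)$/       then $1   # logger -> logger
  else 'default'                    # e.g. db-1, Server01
  end
end

puts role_for('server01')  # => server
puts role_for('logger')    # => logger
puts role_for('db-1')      # => default
```

Note that hostnames with hyphens or uppercase letters fall through to ‘default’; widen the character classes if your naming convention allows them.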

We have one other change to make. Our site manifest only has a node definition for the master node. Let’s create an empty definition for nodes that don’t have a specified block. Add this to the bottom of your site.pp:

node default {
}

Now, commit this change and redeploy your environments. Earlier, we enabled pluginsync. The master has to synchronize as well before any agents can receive the fact, so a simple puppet agent --test on the master should pull it down. You can test it afterward with facter:

[rnelson0@puppet rnelson0-custom_facts]$ sudo puppet agent --test
...
Info: Loading facts in /etc/puppet/environments/production/modules/custom_facts/lib/facter/roles.rb
...
[rnelson0@puppet rnelson0-custom_facts]$ sudo facter -p puppet_role
puppet

The next item to work on is the node server01. When you run the agent you should see the fact downloaded and the puppet_role fact populated:

[rnelson0@server01 ~]$ sudo puppet agent --test
...
Info: Loading facts in /var/lib/puppet/lib/facter/roles.rb
...
[rnelson0@server01 ~]$ sudo facter -p puppet_role
server

Putting it all together

Sweet. We’ve defined profiles, a role that uses those profiles, and a fact that can generate a puppet role for similarly named servers. How do we use these things we have created? Let’s go back to our site manifest and look at our node definitions:

[rnelson0@puppet puppet-tutorial]$ grep node manifests/site.pp
node 'puppet.nelson.va' {
node default {

We can now create a node definition for our 'server' nodes and include the webserver role. It’s three simple lines, which is just the way we like it:

node /^server\d+/ {
  include role::webserver
}

Deploy that on the master. With that simple statement, an agent noop run from server01 will show that puppet is now ready to install the components of the webserver role, from both the base (ssh/ntp/motd) and apache profiles.

[rnelson0@server01 ~]$ sudo puppet agent --test --noop
...
Notice: /Stage[main]/Motd/File[/etc/motd]/content:
...
Notice: Class[Ntp::Service]: Would have triggered 'refresh' from 1 events
...
Notice: /Stage[main]/Apache::Service/Service[httpd]: Would have triggered 'refresh' from 49 events
...

If everything looks good, run it again without the noop and afterward, you should be able to visit http://server01 and see an empty directory listing. If you have iptables enabled, you may need to stop it as we haven’t opened port 80 yet.

There is one last little bit to do. We went to all that work to create the fact puppet_role but did not use it. Let’s not let that effort go to waste! You can use this fact in many ways. In our example, we’ll update the Hiera hierarchy to load role-specific information. First, edit the hierarchy in /etc/hiera.yaml to look something like this:

:hierarchy:
  - defaults
  - puppet_role/%{puppet_role}
  - "%{clientcert}"
  - "%{environment}"
  - global

Hiera can examine puppet facts. In this case, it will look in the datadir (/etc/puppet/data in our setup) for a file called puppet_role/%{puppet_role}.yaml and use any data available in it. We’ll use hiera to define our classes and remove the specific node definitions from the site manifest. First, here’s the yaml:

[rnelson0@puppet rnelson0-profile]$ cat /etc/puppet/data/global.yaml
---
puppetmaster: 'puppet.nelson.va'
classes:
  - profile::base
[rnelson0@puppet rnelson0-profile]$ cat /etc/puppet/data/puppet_role/server.yaml
---
classes:
  - role::webserver

Second, update your site manifest by removing the server## definition and updating the default definition:

[rnelson0@puppet puppet-tutorial]$ git diff
diff --git a/manifests/site.pp b/manifests/site.pp
index 16d22b9..3757419 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -37,9 +37,6 @@ node 'puppet.nelson.va' {
   }
 }

-node /^server\d+/ {
-  include role::webserver
-}
-
 node default {
+  hiera_include('classes')
 }

The hiera_include function calls include and passes it the values found in the Hiera lookup for classes. Nodes without a matching puppet_role file or a specific node definition will receive only profile::base. Server01’s puppet_role has a corresponding Hiera file, so it will also receive role::webserver. Push the changes upstream, uninstall apache from server01, and re-run the agent:
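To make the merge behavior concrete, here is a rough Ruby sketch of what an array-merge lookup of classes does across our hierarchy. The data hash is illustrative; real Hiera handles YAML loading, scoping, and interpolation for us:

```ruby
# Simplified model of hiera_include('classes'): interpolate each hierarchy
# level with the node's facts, then collect 'classes' from every level hit.
hierarchy = ['defaults', 'puppet_role/%{puppet_role}', '%{clientcert}',
             '%{environment}', 'global']
data = {
  'global'             => { 'classes' => ['profile::base'] },
  'puppet_role/server' => { 'classes' => ['role::webserver'] },
}
facts = { 'puppet_role' => 'server' }

levels  = hierarchy.map { |h| h.gsub(/%\{(\w+)\}/) { facts[$1].to_s } }
classes = levels.flat_map { |l| data.fetch(l, {}).fetch('classes', []) }.uniq
puts classes.inspect  # server01 ends up with both role::webserver and profile::base
```

Because this is a merge lookup rather than a first-match lookup, a node picks up classes from every level of the hierarchy that defines the key.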

[rnelson0@server01 ~]$ sudo yum remove httpd
...
[rnelson0@server01 ~]$ sudo puppet agent --test
...
Notice: /Stage[main]/Apache/Package[httpd]/ensure: created
...

With the definition now created for the puppet_role of server, you can deploy the remaining server02-server10 and all of those nodes will receive the role::webserver class. Just right-click on your CentOS template, deploy a clone with the hostname server02 and an IP of 10.0.0.52, and after the first boot, run puppet agent --test and watch the node receive the specified role.

The other node in our system is the puppet master. You can create a role::puppetmaster, based on the existing node definition and our profile::base class, and assign that to the puppet_role of puppet. I’ll leave that process as an exercise for the reader.

Summary

Today, we created a custom fact and implemented the roles and profiles pattern through two modules. These three components are used together with YAML data in Hiera to allow us to deploy a single role to multiple servers of a like type. We explored simplifying the site manifest to rely on Hiera. In future sessions, we’ll expand on how to dynamically populate our hierarchy and eliminate all reliance on the site manifest.

I’d also like to thank Craig Dunn and Gary Larizza for their foundation work, via their blogs at http://www.craigdunn.org/ and http://garylarizza.com/. I’m very happy to be standing on their shoulders and I hope I’ve been able to provide some value on top of that. If you haven’t already, go read their sites, they’ve got a lot more to say about roles, profiles, and Puppet.

#vExpert, #VirtualDesignMaster, and other Stuff

My summer has been exciting. On Wednesday, I received notification that I was accepted as a vExpert for 2014! That’s pretty awesome, both as confirmation that hard work has paid off and encouragement to keep it up in the future. On Thursday night, the Virtual Design Master competition kicked off. This will hopefully keep me busy throughout the summer. I haven’t even gotten started on it, though, as I am on-call this week and things blew up right after the live start. Here’s hoping it settles down so I can work this weekend!

As if that wasn’t busy enough, my wife accepted a new job in June with a start date in August. We’ll be moving in support of that around the end of the month. With that in mind, I’m taking a summer break from the blog (but certainly not a vacation!). I have a few scheduled articles that will take me through the end of July and I’m hoping to have a guest author to cover August until VMworld. I plan to get back to blogging in early September.

Until then, here is a mix of the most popular articles and the ones I really enjoyed writing.

  • Puppet – There are two more articles to complete the intro portion. Now that you’re familiar with Puppet, we’ll look at closer integration with vSphere in the Fall.
  • Auto Deploy Deep Dive – I was hoping to present this at VMworld but it wasn’t meant to be! Check out the #vBrownBag presentation, too. They’re in the middle of a Cisco track and will be covering Docker on 7/23, good stuff.
  • The Philosophy of Ender’s Game – Now that the movie’s out on DVD and cable, it’s a good time to watch it again and do some critical analysis. Preferably on your tablet while piloting a quadcopter drone, both ideas that can be traced back to this novel. This wasn’t very popular, but it was one of my favorite articles to write. It’s always fun to wax philosophical.
  • Snapshots Management – Surprisingly, this recent article seems very popular. I shouldn’t be surprised, snapshots continue to be a wildly misunderstood tool that cause problems for even veteran vSphere admins.
  • InfoSec and Social Media – This article was a result of attending CPX 2014 and comparing it to VMworld 2013. I had fun writing it, soliciting feedback, and working to do the things I said I would.
  • Synology Multi-VLAN Setup – This remains a very popular article. I hope Synology makes VLAN configuration a little smoother in future DSM revisions, but until then, this will get you going.

Have a great summer!

Snapshots and Automated Emails

A common problem in virtualization is snapshots. The name “snapshot” makes us (novice or otherwise!) think of a picture in time, which sometimes leads to the belief that the snapshot is “taken” and then stored somewhere, though that’s not how snapshots really work.

In reality, snapshots create a pseudo-consistent state of the virtual disk at that point in time. Subsequent writes in a snapshotted state are redirected to delta files. If you are performing an upgrade, a snapshot is helpful, allowing you to restore the prior system state if there are problems. After a few days, the snapshot loses its value as a restore becomes increasingly unlikely because you would lose the application changes as well. Snapshots also play a role in backups, where they are used temporarily to provide the pseudo-consistent state for the backup utility before the snapshot is deleted.

When a snapshot is deleted, that delta is applied to the base virtual disk(s), playing back through the transactions. Large snapshots take a long time to delete and affect system performance until the consolidation is complete. They can also affect the VM during normal operation as the delta file size increases.

Because of this difference between what we picture and how snapshots work, it is not uncommon to find long-lived snapshots that are gigantic in size and have no actual benefit. Today, we’ll look at determining when you have a snapshot problem in your vSphere infrastructure and how to alert the owners.

PowerCLI to the rescue!

You can manage snapshots many ways, including by right-clicking on a VM in the Web or Thick client and looking at the snapshots for that VM. That’s fine if you have 2 or 3 VMs. Past that, you’ll find it is VERY slow. Let’s use PowerCLI instead. I’ve added a new cmdlet to my GitHub PowerCLI modules repository called Send-SnapshotReports. This builds on work by Alan Renouf, Luc Dekens, and others (links at the end) who can do a much better job of explaining it than I can. Here’s the gist, though:

  • Use Get-VM and Get-Snapshot to pull the list of snapshots over a certain age.
  • For each snapshot:
    • Grab some extra info about the snapshot, specifically who created it.
    • Attempt to link the owner to an email address.
    • Email that person and tell them they are horrible and they should feel horrible.
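The skeleton of that loop looks something like this. This is a simplified sketch, not the actual cmdlet; Get-SnapshotCreator and Resolve-OwnerEmail are hypothetical stand-ins for the vCenter event-log digging and address lookup the real code does:

```powershell
# Sketch only: find snapshots past retention and nag their owners.
$Retention = 14
$cutoff = (Get-Date).AddDays(-$Retention)

Get-VM | Get-Snapshot | Where-Object { $_.Created -lt $cutoff } | ForEach-Object {
    $owner = Get-SnapshotCreator $_        # hypothetical helper (event-log lookup)
    $to    = Resolve-OwnerEmail $owner     # hypothetical helper (directory lookup)
    Send-MailMessage -From 'vsphere@example.com' -To $to -SmtpServer 'relay.example.com' `
        -Subject "Old snapshot on $($_.VM)" `
        -Body "Snapshot '$($_.Name)' was created $($_.Created) and should be reviewed."
}
```

The real cmdlet adds the fallback address, retention override, and owner lookup switches described below.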

There’s a bit more to it if you want to pick at it, or provide some enhancements. Use Get-Help to get an accurate description of all the CLI arguments and some of the caveats. Create a snapshot of a VM and run the following command, filling in your information as needed. Keep in mind that you will need to connect to your vSphere server first.

PS C:\> Send-SnapshotReports -MailFrom rnelson0@gmail.com -MailDefault rnelson0@gmail.com -SMTPRelay relay.example.com -Retention 0 -LookupOwner

This will email the owner, or me, from my email address for all snapshots over 0 days in age (default: 14). You’ll need a valid SMTP relay as well (enhancements needed: SMTP on non-standard ports, TLS). This works great if you think you have a snapshot problem. If you don’t think you have a snapshot problem, or you’re lazy, or on vacation, it doesn’t work very well. It would be best if we could schedule this cmdlet to run on a frequent basis. We can schedule PowerCLI tasks, but it’s slightly tricky.

Scheduling a PowerCLI Task

Scheduling a task is fairly easy and, thanks to Magnus Andersson, there’s a graphical walkthrough of the process. His walkthrough is pretty good, but I ran into a few problems up front. I’ll describe the whole process, and you can reference the walkthrough if you need a visual aid.

The first thing you will need is a place to run the task. I’m using a Windows 2008R2 RDP server that I use to manage my network. I imagine this would look very similar in 2012, but you may find some slight variance. The second thing is that you’ll need an account to run this as. I would recommend a designated service account rather than an individual account, so that it continues to run if your account expires, is suspended, or you leave the job. The next guy after you will appreciate it! However… I have encountered some permissions issues with the files we will create below, so I’ll show you this process with an interactive login. If your service account doesn’t have rights for interactive login, you will have to do some legwork to iron out permission issues.

Log into the RDP server as the account that will run the PowerCLI task. Load the PowerCLI-Administrator-Cmdlets module from GitHub, or you won’t get far. We’re going to need to store credentials for the service account. Fire up PowerShell (ISE) and run the following commands:

$Cred = Get-Credential
$Cred.Password | ConvertFrom-SecureString | Set-Content C:\Users\ServiceAccount\Documents\WindowsPowerShell\vspherelocal.cred

The first command will open a dialog box where you can enter credentials for a vSphere login that has the necessary rights to view snapshots and tasks. Enter the username and password and hit enter. I’ve used the administrator@vsphere.local account. The second command will store the password as a securestring in the specified location. While a securestring is reasonably secure, don’t tempt fate; make sure it’s stored in a secure location.

Next, create a PowerShell script (.ps1) in the user’s Documents\WindowsPowerShell directory. Substitute the appropriate values for your environment below:

$Username = 'administrator@vsphere.local'
$Password = Get-Content 'C:\Users\ServiceAccount\Documents\WindowsPowerShell\vspherelocal.cred' | ConvertTo-SecureString
$Email = 'rnelson0@gmail.com'
$SMTPRelay = 'relay.example.com'
$vCenter = 'vcenter.nelson.va' 

$Credential = New-Object System.Management.Automation.PSCredential -ArgumentList $Username, $Password

Send-SnapshotReports -vSphereServer $vCenter -Credential $Credential -MailFrom $Email -MailDefault $Email -SMTPRelay $SMTPRelay -LookupOwner -Retention 7

We set some variables, including the password, by reading the securestring from disk. Next we create a proper credential out of the username/password tuple. Finally, we call the cmdlet Send-SnapshotReports with a few more arguments than before. The first two, vSphereServer and Credential, should be used together and are intended for sessions where the user is not already authenticated to the vSphere server prior to calling the cmdlet, such as this scheduled task. We also overrode the default 14 day retention and used a 7 day timeframe.

Run this script in PowerShell (ISE) and ensure everything works fine. PowerShell can be really picky about the permissions on the credential file and it’s easier to debug in an interactive shell. Assuming everything works, we just need to schedule it.

Flip back to Magnus’s tutorial and schedule the task. The process is accurate, but there is one tweak since we’re using a cmdlet: when adding arguments, all you need is the command, not the psconsolefile, i.e. -Command "& {C:\Users\ServiceAccount\Documents\WindowsPowerShell\SnapshotReminder.ps1}". Choose an appropriate interval. I’ve chosen daily, all days of the week, a happy medium between annoying people about snapshots and becoming so noisy they’ll disregard the emails entirely. Once the task is scheduled, right click on it to run it immediately. You should get another set of emails.

If, for some reason, you did not, it’s a real pain to track down where things are going wrong. The task’s history will likely say it completed successfully, because it successfully called Powershell with the correct arguments. We don’t write any output, so there’s nothing to see. We can add some output, though. Add this line to the top of your script:

Start-Transcript C:\snapshots.txt

Run the task again. All output, including errors, will be written to C:\snapshots.txt (assuming the account has permissions, so choose a location it can write!). When I ran into problems with permissions, it showed up in this file:

Get-Content : Access to the path 'C:\Users\ServiceAccount\Documents\WindowsPowerShell\vspherelocal.cred' is denied.

Once you iron out any kinks, you’ll have your daily reminder in place, and you can run it manually if necessary. Enjoy!

Links

Puppet Installables – MCollective

In our ongoing Puppet series, we just completed installing PuppetDB and Hiera. There’s one other installable that’s a bit more complicated than those two.

MCollective

The last component we’re going to install today is MCollective. While developed by Puppet Labs, MCollective isn’t directly tied to Puppet the way PuppetDB and Hiera are. It’s not a configuration management tool; it’s an orchestration framework. It does integrate quite well with Puppet and Facter, among other sources. Some things you can do with MCollective: query how many systems have 32GB of RAM, find how many systems are running a version of OpenSSL vulnerable to Heartbleed, or restart Apache on all servers in the Development environment. This installation is trickier than either PuppetDB or Hiera.

Note: As usual, keep in mind that we’re installing in a lab environment. When you move Puppet to production, everything will need to handle a larger scale. MCollective specifically is a good candidate for separating the service out among various servers with at least a middleware, server, and client node. Today we will settle for installing it all on the master.

As usual, we will install MCollective through Puppet, specifically the puppetlabs/mcollective module. If you’re using r10k, beware: there are a lot of dependencies, many of which have their own dependencies, to add here. As a nifty trick, you can run puppet module install puppetlabs/mcollective first, add the modules to the Puppetfile, then re-deploy with r10k; the module install will only pull down the modules and dependencies you don’t already have.

[root@puppet puppet-tutorial]# puppet module install puppetlabs/mcollective
Notice: Preparing to install into /etc/puppet/environments/production/modules ...
Notice: Downloading from https://forge.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/etc/puppet/environments/production/modules
└─┬ puppetlabs-mcollective (v1.1.3)
  ├─┬ garethr-erlang (v0.3.0)
  │ ├── puppetlabs-apt (v1.4.2)
  │ └── stahnma-epel (v0.0.6)
  ├─┬ puppetlabs-activemq (v0.2.0)
  │ └── puppetlabs-java (v1.1.0)
  ├── puppetlabs-java_ks (v1.2.3)
  ├── puppetlabs-rabbitmq (v3.1.0)
  └── richardc-datacat (v0.4.3)
[root@puppet puppet-tutorial]# vi Puppetfile
[root@puppet puppet-tutorial]# git diff
diff --git a/Puppetfile b/Puppetfile
index 11d979c..1775720 100644
--- a/Puppetfile
+++ b/Puppetfile
@@ -12,6 +12,15 @@ mod "yguenane/ygrpms", "0.1.0"
 mod "saz/motd"
 mod "puppetlabs/puppetdb"
 mod "puppetlabs/postgresql"
+mod "puppetlabs/mcollective"
+mod "garethr/erlang"
+mod "puppetlabs/apt"
+mod "stahnma/epel"
+mod "puppetlabs/activemq"
+mod "puppetlabs/java"
+mod "puppetlabs/java_ks"
+mod "puppetlabs/rabbitmq"
+mod "richardc/datacat"

 # For our r10k installer
 mod "zack/r10k", "1.0.2"
[root@puppet puppet-tutorial]# git commit -a -m 'Add mcollective module to puppetmaster'
### Checking puppet syntax, for science! ###

### Checking if puppet manifests are valid ###

### Checking if ruby template syntax is valid ###

Everything looks good.
[installables 7a49f85] Add mcollective module to puppetmaster
 1 files changed, 9 insertions(+), 0 deletions(-)
[root@puppet puppet-tutorial]# git push origin installables
Counting objects: 5, done.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 509 bytes, done.
Total 3 (delta 1), reused 0 (delta 0)
To https://rnelson0@github.com/rnelson0/puppet-tutorial
   f635c44..7a49f85  installables -> installables
[root@puppet puppet-tutorial]# r10k deploy environment -p
Faraday: you may want to install system_timer for reliable timeouts

Next, we need to add a few classes to the master, starting with a middleware node. We’re going to use ActiveMQ, the default middleware, though you can use RabbitMQ if you’d like.

  class { '::mcollective': 
    client             => true,
    middleware         => true,
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }

You’ll notice there are some certs referenced from a module called site_mcollective. This module does not exist yet; you’ll need to create and populate it. After creating the repo, use puppet module generate --modulepath <path> rnelson0-site_mcollective to create the module in our source tree, rename it, and initialize it as a git repo. Push the changes when you’re done. Alternatively, you can clone mine and start from there.

[root@puppet puppet-tutorial]# cd ..
[root@puppet git]# puppet module generate --modulepath `pwd` rnelson0-site_mcollective
Notice: Generating module at /root/git/rnelson0-site_mcollective
rnelson0-site_mcollective
rnelson0-site_mcollective/manifests
rnelson0-site_mcollective/manifests/init.pp
rnelson0-site_mcollective/Modulefile
rnelson0-site_mcollective/README
rnelson0-site_mcollective/spec
rnelson0-site_mcollective/spec/spec_helper.rb
rnelson0-site_mcollective/tests
rnelson0-site_mcollective/tests/init.pp
[root@puppet git]# mv rnelson0-site_mcollective/ site_mcollective/
[root@puppet git]# cd site_mcollective/
[root@puppet site_mcollective]# touch README.md
[root@puppet site_mcollective]# git init
Initialized empty Git repository in /root/git/site_mcollective/.git/
[root@puppet site_mcollective]# git add .
[root@puppet site_mcollective]# git commit -m "first commit"
[master (root-commit) b2b4026] first commit
 5 files changed, 97 insertions(+), 0 deletions(-)
 create mode 100644 Modulefile
 create mode 100644 README
 create mode 100644 README.md
 create mode 100644 manifests/init.pp
 create mode 100644 spec/spec_helper.rb
 create mode 100644 tests/init.pp
[root@puppet site_mcollective]# git remote add origin git@github.com:rnelson0/site_mcollective.git
[root@puppet site_mcollective]# git push -u origin master
Counting objects: 11, done.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (11/11), 1.90 KiB, done.
Total 11 (delta 0), reused 0 (delta 0)
To git@github.com:rnelson0/site_mcollective.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.

With the module created, we need to store some certificates in it.

Note: You should NEVER store certificates, private keys, or other sensitive information in a public repository. Anyone could grab that information and use it to impersonate your devices, attack them, or run up your credit card bill. I am ignoring my own advice because I’ve mitigated the vulnerability by having my puppet system in a protected lab, and it’s very difficult to write a tutorial and show the end result otherwise. This changes nothing, it’s still a really, really, REALLY bad practice. Use a private repo or an internal server with appropriate security controls!

Let me repeat: NEVER store certificates, private keys, or other sensitive information in a public repository!
Never do this. Never. Ever. NEVER! EVAR!

Using the puppet cert generate <user> command, generate a cert for the root user. In production, you’ll want to use your non-priv user.

[root@puppet site_mcollective]# puppet cert generate root
Notice: root has a waiting certificate request
Notice: Signed certificate request for root
Notice: Removing file Puppet::SSL::CertificateRequest root at '/var/lib/puppet/ssl/ca/requests/root.pem'
Notice: Removing file Puppet::SSL::CertificateRequest root at '/var/lib/puppet/ssl/certificate_requests/root.pem'

Combined with the keys Puppet has already generated, this gives us user and agent public keys ($ssldir/certs) and private keys ($ssldir/private_keys) that we need to copy into the files directory of the module.

[root@puppet site_mcollective]# mkdir -p files/certs
[root@puppet site_mcollective]# mkdir -p files/client_certs
[root@puppet site_mcollective]# mkdir -p files/private_keys
[root@puppet site_mcollective]# cp /var/lib/puppet/ssl/certs/puppet.nelson.va.pem files/certs
[root@puppet site_mcollective]# cp /var/lib/puppet/ssl/certs/ca.pem files/certs
[root@puppet site_mcollective]# cp /var/lib/puppet/ssl/certs/root.pem files/client_certs
[root@puppet site_mcollective]# cp /var/lib/puppet/ssl/private_keys/root.pem files/private_keys
[root@puppet site_mcollective]# cp /var/lib/puppet/ssl/private_keys/puppet.nelson.va.pem files/private_keys

After you commit/push the changes to site_mcollective, add the module to Puppetfile in the puppet repo.

[root@puppet site_mcollective]# cd ..
[root@puppet git]# cd puppet-tutorial/
[root@puppet puppet-tutorial]# vi Puppetfile
[root@puppet puppet-tutorial]# git diff
diff --git a/Puppetfile b/Puppetfile
index 1775720..461bac0 100644
--- a/Puppetfile
+++ b/Puppetfile
@@ -36,3 +36,6 @@ mod "puppetlabs/vcsrepo", "0.2.0"
 # Modules from Github
 mod "base",
   :git => "git://github.com/rnelson0/rnelson0-base"
+
+mod "site_mcollective",
+  :git => "git://github.com/rnelson0/site_mcollective"

Lastly, you’ll need to add the root user’s cert to the root user’s home directory. Let’s add that in the node definition.

[root@puppet puppet-tutorial]# git diff
diff --git a/manifests/site.pp b/manifests/site.pp
index 846ae00..4a02d21 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -21,4 +21,12 @@ node 'puppet.nelson.va' {
     ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
     ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
   }
+
+  user { 'root':
+    ensure => present,
+  } ->
+  mcollective::user { 'root':
+    homedir     => '/root',
+    certificate => 'puppet:///modules/site_mcollective/client_certs/root.pem',
+    private_key => 'puppet:///modules/site_mcollective/private_keys/root.pem',
+  }
 }

Update the files, commit/push your changes, and deploy. Run puppet, and MCollective should be available on the master. Assuming no errors, it’s time to install the puppet agent plug-in. As I mentioned earlier, PuppetLabs develops MCollective, but it’s not just a puppet component. There are a number of plug-ins available, which can be installed with the mcollective::plugin defined type that the mcollective module provides. Add this to the master’s node definition, commit/push, and run puppet again.

[root@puppet puppet-tutorial]# git diff
diff --git a/manifests/site.pp b/manifests/site.pp
index 869dbea..f077727 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -30,4 +31,8 @@ node 'puppet.nelson.va' {
     certificate => 'puppet:///modules/site_mcollective/client_certs/root.pem',
     private_key => 'puppet:///modules/site_mcollective/private_keys/root.pem',
   }
+
+  mcollective::plugin { 'puppet':
+    package => true,
+  }
 }

Finally, let’s test everything. We’ll use mco to ping, get an inventory, and view the puppet plug-in help, just to make sure we have base functionality. (We’ll worry about the connection_headers warning later.)

[root@puppet puppet-tutorial]# mco ping
warn 2014/04/23 14:15:27: activemq.rb:274:in `connection_headers' Connecting without STOMP 1.1 heartbeats, if you are using ActiveMQ 5.8 or newer consider setting plugin.activemq.heartbeat_interval
puppet                                   time=175.73 ms


---- ping statistics ----
1 replies max: 175.73 min: 175.73 avg: 175.73
[root@puppet puppet-tutorial]# mco inventory puppet
warn 2014/04/23 14:15:35: activemq.rb:274:in `connection_headers' Connecting without STOMP 1.1 heartbeats, if you are using ActiveMQ 5.8 or newer consider setting plugin.activemq.heartbeat_interval
Inventory for puppet:

   Server Statistics:
                      Version: 2.4.1
                   Start Time: Wed Apr 23 14:11:59 +0000 2014
                  Config File: /etc/mcollective/server.cfg
                  Collectives: mcollective
              Main Collective: mcollective
                   Process ID: 6200
               Total Messages: 9
...
[root@puppet puppet-tutorial]# mco help puppet

Schedule runs, enable, disable and interrogate the Puppet Agent

Usage: mco puppet [OPTIONS] [FILTERS] <ACTION> [CONCURRENCY|MESSAGE]
Usage: mco puppet <count|enable|status|summary>
Usage: mco puppet disable [message]
Usage: mco puppet runonce [PUPPET OPTIONS]
Usage: mco puppet resource type name property1=value property2=value
Usage: mco puppet runall [--rerun SECONDS] [PUPPET OPTIONS]

The ACTION can be one of the following:
...

Some other interesting plugins are service, package, and nrpe. You can use these to manipulate packages on your nodes (such as upgrading openssl), restart a service like apache, or run Nagios plugins remotely to verify the status of devices after a network issue. There’s plenty you can do with MCollective, and though I will cover it as we go, you may want to read ahead, especially if you have some mass updates to perform in the near term.
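
As a rough sketch of what those plugins look like in practice, here are a few commands using the generic mco rpc form, which works with any agent. This assumes the package, service, and nrpe agents are installed on your nodes; the package name, service name, and check name are just illustrative:

```
# Check and then upgrade a package across all nodes (package agent)
mco rpc package status package=openssl
mco rpc package update package=openssl

# Restart a service, limited to nodes that have the apache class (service agent)
mco rpc service restart service=httpd --with-class apache

# Run a Nagios check remotely on every node (nrpe agent)
mco rpc nrpe runcommand command=check_load
```

The --with-class filter is one of several discovery filters mco supports; you can also filter on facts or identities to target a subset of your nodes.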

As you scale up, you’re going to want to split things up into middleware, client, and server nodes. See here, under MCollective Terminology, to understand which nodes get which roles. Here are some example node definitions that use our site_mcollective module and the nelson.va domain; adjust accordingly.

# middleware
node 'puppet.nelson.va' {
  class { '::mcollective':
    middleware         => true,
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }
}

node 'mco-client.nelson.va' {
  class { '::mcollective':
    client             => true,
    middleware         => true,
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }

  mcollective::user { 'mcollective-user':
    certificate => 'puppet:///modules/site_mcollective/client_certs/mcollective-user.pem',
    private_key => 'puppet:///modules/site_mcollective/private_keys/mcollective-user.pem',
  }

  mcollective::plugin { 'puppet':
    package => true,
  }
}

node 'mc-server.nelson.va' {
  class { '::mcollective':
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }

  mcollective::actionpolicy { 'nrpe':
    default => 'deny',
  }

  mcollective::actionpolicy::rule { 'vagrant user can use nrpe agent':
    agent    => 'nrpe',
    callerid => 'cert=vagrant',
  }
}
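
For reference, the actionpolicy resources above render a policy file consumed by the actionpolicy authorization plugin. Assuming the plugin’s default policy directory, the result would look roughly like the following; the exact path and the tab-separated caller/actions/facts/classes fields come from the plugin, not from anything shown above:

```
# /etc/mcollective/policies/nrpe.policy (illustrative)
policy default deny
allow   cert=vagrant    *       *       *
```

With this in place, any caller other than cert=vagrant is denied access to the nrpe agent entirely.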

In such a deployment, you would log into the node mco-client.nelson.va and run mco from there. However, until we get into working on scaling up, I’ll assume that you’re using mco on the master.

Other Installables

Now that you have PuppetDB, Hiera, and MCollective installed, you’ve got all the tools you really need for a good Puppet system, plus the workflows we defined. We’ll start to use these tools next week, when we get back to defining manifests. These are also the most intrusive tools to add at a later date, as they require changes to your configuration and workflow. You’ll probably be comfortable with just these tools, but if you’re still in an installing mood, there are a few other tools to look at.

There are a number of consoles for Puppet, which give a graphical view into your Puppet system. The Foreman is a very popular OSS console that includes an ENC along with the GUI. Puppetboard is an OSS reporting tool that is maturing and picking up features from Puppet Dashboard, an older OSS tool that lost “official” PuppetLabs support but is still in somewhat active development. And of course, Puppet Enterprise includes the Puppet Enterprise Console. Any of these consoles can be helpful, but they aren’t something I plan to cover at this time.

We’ll also be discussing scaling up Puppet, but in the future. If you can’t wait, there’s a lot of information in Pro Puppet 2nd Ed., as well as plenty of internet articles, that deal with this subject.

I can’t think of any other tools that would have mass appeal. If I missed anything, drop me a line and I’ll update this article. Otherwise, tune in next week as we start to define manifests for our first agent.

What is a systems administrator?

I’ve seen a few topics recently where people seem to misunderstand what a systems administrator is. This is likely due to a combination of factors, including HR job classifications, some misunderstandings, and a simple lack of other terms to use. Here’s my definition:

Systems Administration requires that you administer a group of interconnected objects, i.e. a system. One of the important components of a modern system is the network, something that actual systems administrators know about. This does not mean they are experts on each system component, but they are familiar with the components and can perform basic and some intermediate troubleshooting without requiring assistance.

In many cases, it appears that what people are talking about is actually a server administrator, maybe even just a computer operator. If we had an IT union, the only thing I’d want from them is to prevent people (and HR!) from misrepresenting their actual responsibilities.

Is that a good definition of systems administration, and are there better terms we can use? What do you think?