Puppet Installables – PuppetDB and Hiera

Welcome back to our Puppet for vSphere Admins series. We started out deploying the puppet master and a few example manifests, then took a right turn into the land of Git and workflows. I know you’re anxious to get back to developing manifests, but we’ve got a few more things to install on the master first. PuppetDB, Hiera, and MCollective are powerful tools that most administrators will find of immense benefit. We could install them later, but we’d have to redo some of our work. Who wants to do that?

As I mentioned last week, I’ll assume you’re using r10k at some level, so I’ll mostly just reference “use r10k” unless there’s a specific gotcha. If you’re not using r10k, follow whatever workflow you’ve decided on to add modules, update manifests, and track all changes in your VCS.

Continue reading

Puppet and Git, 206: Git Hooks – Post-Receive

Welcome to our last class on Git usage. Now that we have a pre-commit hook in place, we’ll finish things up with a post-receive hook.

UPDATE: On 9/17/2014, Phil released Reaktor; much of this article is based on an early version of it. I haven’t had time to investigate, but it should be easier to install and far more functional!

Post-Receive Hook

I don’t know about you, but I’m already tired of having to run r10k manually. Having to ssh to the master, log in, and run commands is so dull. What can we do about it?

A post-receive event fires when a push is received by the repo that has the hook (i.e. when I ‘git push origin branch’, the ‘origin’ server fires the hook), so we can use a corresponding post-receive hook to deploy for us. This is slightly trickier than a pre-commit hook because of where the event fires. We don’t want to run it on our desktop/workstation VM, because that host would have to be the origin repo for everyone. We don’t want to run it on the puppet master either, because then our puppet master becomes the origin. (Technically, each repository is a fully sufficient origin repository on its own, but I’m assuming that you have a designated origin that’s backed up and therefore you won’t be doing the same for the other repo clones.) That leaves GitHub, which is already our designated origin. Because the post-receive event fires on the origin, we need to ensure that GitHub can talk to the puppet master, which is where r10k lives.
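To make the mechanics concrete before we get to the GitHub-specific design, here’s a minimal sketch of a post-receive hook for a self-hosted origin; the deploy user, master hostname, and sudo rights are all assumptions for illustration:

#!/bin/bash
# .git/hooks/post-receive on a self-hosted origin (sketch only).
# Git feeds the hook one line per updated ref: <old-sha> <new-sha> <refname>
while read oldrev newrev refname; do
  branch=${refname#refs/heads/}    # refs/heads/production -> production
  echo "Deploying environment '${branch}' via r10k..."
  # Assumes passwordless ssh and sudo rights for r10k on the master.
  ssh deploy@puppet.example.com "sudo r10k deploy environment ${branch} -p"
done

With GitHub holding the origin, we can’t drop a script into its hooks directory, so we need another way to get the same effect.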

There’s no one way to do this. My design is to implement a GitHub Web Hook, which results in GitHub sending a POST to a target URL (which means GitHub needs to be able to communicate with it!) that says, “Someone just committed a change, here are the details of the change.” There’s more detail on enabling the Web Hook here. If you’re using Atlassian Stash, as we do at work, hopefully the admins have installed a plugin for web hooks (ironically, I started this documentation in March and I’m still waiting on a web hook to be installed – but if I used GitHub…).
Continue reading

Puppet and Git, 205: Git Hooks – Pre-Commit

Welcome to Puppet and Git, class 205: Git Hooks. Since we finished up talking about workflows, let’s move on and explore what a git hook is and how you can use one to improve your workflow.

Git Hooks

What is a git hook? You can read some boring official documentation, but who does that? Instead, here’s a short summary: a git hook is a program that is called when a git event is triggered. These programs are usually simple shell scripts, and commits are among the most common events people attach them to. We’ll look at commits, specifically the event that fires before a commit is completed.
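Mechanically, a hook is just an executable file in a repo’s .git/hooks directory, named after the event it handles. A throwaway example you can try in any scratch repo (the echo is purely illustrative):

$ cat > .git/hooks/post-commit <<'EOF'
#!/bin/bash
# Fires after every successful commit and prints the new commit's short hash.
echo "post-commit: created $(git rev-parse --short HEAD)"
EOF
$ chmod +x .git/hooks/post-commit
$ git commit --allow-empty -m 'Test the hook'   # the echo shows up in your terminal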

Pre-Commit Hook

If you’ve been programming for more than, say, an hour, you’ve undoubtedly experienced the bad mojo that results from missing a semi-colon or curly brace or other piece of syntactical junk. And you may have even committed such a piece of broken code and pushed it upstream, just to watch the whole thing fall apart. By linting our code, we can determine whether it meets the syntactic requirements of the language. It’s important to note that linting doesn’t verify that your code does what it says it will do; it JUST verifies that the code will parse or compile. This would prevent the wonderful pattern of commits that looks something like:
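As a taste of where this is going, here’s a minimal sketch of a pre-commit hook that syntax-checks staged Puppet manifests; it assumes puppet is on the PATH of the machine you commit from, and any non-zero exit aborts the commit:

#!/bin/bash
# .git/hooks/pre-commit (sketch): validate the syntax of staged .pp files.
rc=0
for file in $(git diff --cached --name-only --diff-filter=ACM | grep '\.pp$'); do
  puppet parser validate "$file" || rc=1
done
exit $rc   # non-zero means the commit is rejected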
Continue reading

Puppet and Git, 204: r10k Workflow for Existing Module

We’ve installed and configured r10k, are using it for deployments, and have a workflow for new modules. More commonly, though, we will be working on existing modules, which calls for a slightly different workflow. We can examine this new workflow by modifying only the base module.

Workflow to Modify Existing Module

Unlike when adding a new module, the Puppetfile only needs to be modified to reflect the feature branch. This is where the workflows diverge: instead of requiring a merge, commit, and push, we can create a temporary branch and just delete it when we’re done. Only the module branch needs to be merged. We’ll show this by making a simple change: modifying Dave’s name. We have another Dave Smith who works here, so we’ll add a middle initial and the name of Dave’s organization to prevent confusion.

The feature branch is just called dave. We’ll work on the module repo first. Make sure you’re on master and check out the new branch. Dave’s description should be updated to “Dave G. Smith – IT Administrator”; everything will be alright unless they hire another Dave G. Smith over there. Commit the change and push it.
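A hedged sketch of the round trip, assuming the module repo is named base, the control repo is named puppet, and Dave’s description lives in manifests/init.pp (all of those are illustrative):

# Module repo: branch, edit, push.
$ cd base
$ git checkout master && git pull
$ git checkout -b dave
# edit manifests/init.pp: "Dave Smith" -> "Dave G. Smith - IT Administrator"
$ git commit -am 'Add middle initial and title to Dave Smith'
$ git push origin dave

# Control repo: temporary branch whose Puppetfile pins base at the feature branch.
$ cd ../puppet
$ git checkout -b dave
# in the Puppetfile, add :ref => 'dave' to the existing base entry
$ git commit -am 'Pin base to the dave branch for testing'
$ git push origin dave
$ sudo r10k deploy environment dave -p    # run on the master, then test with --environment=dave

Once the change checks out, merge the module’s dave branch into master and delete both dave branches; the Puppetfile on production never has to change.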
Continue reading

Puppet 3.6.1 Updates

If you’ve been following along with the Puppet series, your VMs probably started with Puppet 3.4.x or earlier. In the time since, Puppet has released versions up through v3.6.1 that bring a lot of improvements. However, if you simply upgrade your master and nodes, you’ll run into a few warnings about deprecations and upcoming changes. Let’s take a look at the issues and how to resolve them. As always, read the release notes so that you understand the changes, and test in a lab to ensure there is no negative impact.

Note: You MUST upgrade your nodes to v3.6.1 as well as the master, or you may receive fatal errors on the nodes. We haven’t gotten there yet, but if you have mcollective installed and configured, it’s a great way to upgrade your nodes at the same time.
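Until then, here’s a hedged sketch of a by-hand upgrade on EL-family nodes using the Puppet Labs yum repository; the package names assume the open source distribution:

# On each agent node:
$ sudo yum clean all && sudo yum update -y puppet

# On the master, the server side ships as a separate package:
$ sudo yum update -y puppet puppet-server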

Here’s the first item you’ll see:

[rnelson0@puppet ~]$ sudo puppet agent --test --noop
Warning: Setting modulepath is deprecated in puppet.conf. See http://links.puppetlabs.com/env-settings-deprecations
   (at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1067:in `each')
Warning: Setting manifestdir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
   (at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1071:in `each')

You can fix this by implementing directory environments. Here’s the diff I made:

[rnelson0@puppet ~]$ diff puppet.conf.org /etc/puppet/puppet.conf
15,16c15
<     modulepath = /etc/puppet/environments/$environment/modules:/opt/puppet/share/puppet/modules
<     manifestdir = /etc/puppet/environments/$environment/manifests
---
>     environmentpath = $confdir/environments
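If you actually do have global modules (the old modulepath above pointed at /opt/puppet/share/puppet/modules), add a basemodulepath key alongside environmentpath so those modules remain available in every environment. A sketch of the resulting settings, reusing the /opt path from the diff:

    environmentpath = $confdir/environments
    basemodulepath  = /opt/puppet/share/puppet/modules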

Now when you run another test, you may see some errors as it “fixes” itself. Run it a second time and you’ll see this:

[rnelson0@puppet ~]$ sudo puppet agent --test --noop
...
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/lib/ruby/site_ruby/1.8/puppet/type.rb:816:in `set_default')

This is a warning that a default is going to change in a future release. You can read about the issue here. As the link says, it’s easy to fix. In your puppet repo, add these lines to the top of manifests/site.pp:

Package {
  allow_virtual => true,
}

If you run puppet again, you’ll notice the warnings are gone!

One last note: if you get some spurious warnings, restart the puppet master service. In my lab I didn’t need to do this, but in production I had to. I assume it’s because I did something out of order, but I couldn’t identify what that was.
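For reference, that restart looks like the following, assuming the stock WEBrick service from the EL packages; if your master runs under Passenger, restart httpd instead:

$ sudo service puppetmaster restart   # or: sudo service httpd restart (Passenger)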

Puppet and Git, 203: r10k Workflow for New Module

Welcome back to our Puppet and Git 200-series classes. With r10k installed and configured, today we can focus on workflows. The first workflow is for a new module, either a brand new module you are creating or simply a “new to you” module, such as one imported from the Forge. In our classroom, we will add a single module from the Forge and update the base module to make use of it. This will give us a good understanding of the workflow for r10k.

Workflow To Add A New Module

The first step in our workflow is to decide on a module to add to our setup. If you have a particular module you want to use, feel free to substitute it below. I’ve chosen saz-motd, a very simple module whose effect is visible when installed but will not have a material impact on your nodes. We can see right now that there is no message of the day, so we’ll know when we’re done:

[root@puppet ~]# cat /etc/motd
[root@puppet ~]#

Note: We’ll add our module to a feature branch below. It’s a simple module, so this is fine. More complex modules, such as those that include additional facts and functions, should always be installed on the master first to ensure the plugins are synchronized, which means adding them to production. This was discussed on IRC, so I don’t have a link to documentation showing how this works; this is the closest I could find. I’ll bring it up again when we install such a module, but I wanted to note it now in case the module you chose provides custom facts/functions.
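To preview where we’re headed, here’s a hedged sketch of the loop we’ll walk through in the rest of this post; the branch name is illustrative:

# On your workstation clone of the puppet (control) repo:
$ git checkout -b motd
# add this line to the Puppetfile:
#   mod 'saz/motd'
$ git commit -am 'Add saz/motd from the Forge'
$ git push origin motd

# On the master, deploy the matching dynamic environment and test:
$ sudo r10k deploy environment motd -p
$ sudo puppet agent --test --noop --environment=motd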

Continue reading