Puppet rspec tests with Hiera data

Editor’s note: Please check out the much newer article Configuring Travis CI on a Puppet Module Repo for the new “best practices” around setting up rspec-puppet. You are encouraged to use the newer setup, though everything on this page will still work!

I’ve covered puppet unit tests with rspec and beyond before. What if you need to go even further and test data from hiera? There’s a way to do that with rspec as well, and it only requires a few extra lines in your spec config – plus the hiera data, of course.

Use hiera in your class

We’ve covered a number of ways to use hiera in your class. You can use hiera lookups (hiera(), hiera_hash(), etc.) and automatic parameter lookups with classes. We’ll look specifically at hiera_hash() and the create_resources() function it is commonly paired with. You cannot simply test that create_resources() was called; instead, you match the resources it generates, which means you need to know their resulting titles. Here’s a simple DHCP profile class I created:

class profile::dhcp {
  # DHCP service and host reservations
  include dhcp::server
  $dhcp_server_subnets = hiera_hash('dhcp_server_subnets', undef)
  if ($dhcp_server_subnets) {
    create_resources('dhcp::server::subnet', $dhcp_server_subnets)
  }

  $dhcp_server_hosts = hiera_hash('dhcp_server_hosts', undef)
  if ($dhcp_server_hosts) {
    create_resources('dhcp::server::host', $dhcp_server_hosts)
  }
}
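
The rest of the article builds this out; for a taste, the extra spec config and a matching test might look like the sketch below. The fixture path and the ‘10.0.0.0’ subnet title are assumptions, and they must line up with the hiera data you put in your fixtures.

# spec/spec_helper.rb: the few extra lines, pointing rspec-puppet at a
# hiera fixture (the path is an assumption; use your own layout)
require 'rspec-puppet'

RSpec.configure do |c|
  c.hiera_config = 'spec/fixtures/hiera/hiera.yaml'
end

# spec/classes/profile_dhcp_spec.rb: match the resources that
# create_resources() generates from the hiera data
require 'spec_helper'

describe 'profile::dhcp' do
  it { should contain_class('dhcp::server') }

  # each key of dhcp_server_subnets becomes a dhcp::server::subnet
  # resource; '10.0.0.0' is a hypothetical title from the fixture
  it { should contain_dhcp__server__subnet('10.0.0.0') }
end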

Continue reading

Updating your git fork from the original repo

Over the 12 days of #Commitmas, you may have submitted a Pull Request (PR) to Matt Brender’s 12-days-of-commitmas repository. I’ve done the same thing. Once the PR is merged, your fork is out of sync with the main repository. If you do more development on your fork, you may start to run into conflicts when you submit the next PR.

Adding additional remotes

When you set up your fork, github (bitbucket, etc.) helpfully provides you with the repository’s remote information in HTTPS or SSH format (see Jonathan Frappier’s article on forking for more information). After you clone your fork’s repo, you’ll have a single remote called origin. You can see your remotes with git remote -v.

[Figure 1: output of git remote -v, showing only the origin remote]

We can handle PRs through the github interface, but that doesn’t help us keep things in sync (you could delete your fork and make a new one, but that’s a lot of work and it wipes out any pending changes you’ve made, so we’ll consider it infeasible). The best way to manage this is to add the original repo as another remote. The remote can be called anything you want; we’ll call it upstream, a common convention among github users.
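
For example (the URL is illustrative; copy the HTTPS or SSH URL github displays on the original repository):

git remote add upstream https://github.com/<owner>/12-days-of-commitmas.git
git remote -v   # now lists both origin (your fork) and upstream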

[Figure 2: adding the upstream remote and verifying with git remote -v]

As you may remember, you provide the keyword origin to your push statement. That’s not a magic word; it refers to the remote you are pushing your commit history to. Most of the time you don’t have rights to push to someone else’s repo, but git push upstream <branch> is a legitimate git command. You can also use the upstream remote with other commands.

Fetch and Pull

You can fetch the latest information from the remote with git fetch upstream. This pulls down all of its references and branch names. You can then run git pull upstream master to fast-forward your current branch through all the changes in upstream’s master branch. I made sure I was in my own master branch before I began, but you could do this in any branch – handy if you started some edits based off master and a few PRs have been applied since.
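
Put together, the sequence looks like this:

git fetch upstream          # grab upstream's refs and branch names
git checkout master         # make sure you're on your own master branch
git pull upstream master    # fast-forward master through upstream's changes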

[Figure 3: git fetch upstream followed by git pull upstream master]

Once you’re done, don’t forget to push the changes to YOUR remote. You don’t technically need to, but if you work on another machine, you’d have to perform the whole clone/remote add/fetch/pull process again.
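
That’s a single push to your fork’s remote:

git push origin master      # bring your fork on github up to date as well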

[Figure 4: pushing the synced master branch to origin]

Now you’re able to see all the changes made in the repo and you’re able to base new branches off the up-to-date contents of your master branch!

Using git amend for quick corrections

Now that you know how to rewrite your git history with rebase, let’s take a look at amending your commits. An amend is much simpler and, in the case of typos or other minor corrections you discover immediately after a commit, sometimes much more appropriate.

For the setup, we’re going to add some markdown to the 12-days-of-commitmas repository’s README.md, but with incorrect tags. A markdown link has the format “[text to display](url://to/visit)”. It’s easy to get this backwards or to use the same brackets on both sides of the URL. Here, I’ve used square brackets for the URL and committed it anyway. Whoops!

[Figure 1: the commit with square brackets on both sides of the URL]
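
The workflow the rest of the article walks through boils down to correcting the file and folding the fix into the previous commit, roughly:

# fix the link syntax in README.md first, then:
git add README.md
git commit --amend --no-edit   # rewrites the last commit, reusing its message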

Continue reading

Using git rebase to rewrite history

As part of The 12 Days of #Commitmas, we’re all practicing our git-fu. I’ve learned a bit about rebasing that I’d like to show you now.

Why are we rebasing?

The definition, from man git-rebase, is “Forward-port local commits to the updated upstream head.” Clear as mud, if you ask me. Another way of putting it is that a rebase moves a branch to a new base commit. Let’s say you create a new branch from master on the 1st of the month and make two changes, then a week goes by. A half dozen changes have been made to master and your branch doesn’t have them. You can rebase your commits against the current master rather than the original master. Under the hood, git rewrites the project history to achieve all of this. All the commits are actually new, which means they have new checksums (we’ll look at why that matters in a moment).
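
In command form, that scenario is simply (the branch name is illustrative):

git checkout my-feature   # the branch created on the 1st
git rebase master         # replay its two commits on top of master's current head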

This is great for integrating your feature against the master and maintaining a linear history. There’s another reason we might want to rebase, though – maintaining a clear history.

Continue reading

Puppet 3.7.3 Updates

I took the plunge this weekend and updated my home lab to Puppet v3.7.3. Before you begin, check out the release notes. Puppet v4.0.0 is on the horizon, so there are a lot of new features and options available to you, along with bug fixes, but also some deprecations. It’s a great time to test some of the new features (like the future parser) while you can still turn them off, but as long as you’re coming from the 3.6.x series, your existing config should just work.
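
For example, the future parser is a one-line, reversible opt-in on the master:

# /etc/puppet/puppet.conf
[main]
  parser = future   # delete this line to switch back to the current parser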

I had one issue with the upgrade. I started getting errors that classes couldn’t be found, regardless of which class I included and whether it was via puppet agent or puppet apply. It turned out I had a manually installed puppet gem. Whoops! It was version 3.6.1; I simply removed it and everything started working. You really shouldn’t have that issue, unless you’ve been doing testing on your master like I’ve been doing. Don’t do that.

While I’ve only been using it for about 12 hours now, I have to say I love the changes in 3.7.0. The big performance improvement comes from persistent HTTPS connections. Previously, a new connection was initiated for (I believe) just about every file transferred, including plugins (facts, types, providers). This is a huge performance improvement for me, and I still have pretty small manifests. If you have really large catalogs, especially ones that transfer a lot of files, you should be really happy with this. The best part is, you don’t have to enable it – it’s turned on for you.

If you have the time this holiday season, give Puppet 3.7.3 a shot in your lab. You should be pleasantly surprised and prepared to upgrade production after the holiday!

DZone Research’s Guide to Continuous Delivery

Josh Gray posted a Random Thoughts on DevOps article last month that included a reading list. I’ve read the two novels on there, but I had not read DZone Research’s Guide to Continuous Delivery (it’s free with an email address; you’ll get a few marketing emails, but they seem to respect unsubscribe requests). It’s a pretty short guide, something you can read on your lunch break. It does have some overlap with other DevOps guides, but if you’re unfamiliar with Continuous Integration and Continuous Delivery, as ideas or as implementations, it’s definitely worthwhile.

Even if you are familiar with CD, page 24 is awesome: a CD “Maturity Checklist”. If this is all new to you, it takes the previous 23 pages and distills all the advice and generalities down to some pretty specific and relatable items. If you are already on your CD journey, you can see where you are and where you need to go.

There’s also a nice glossary on page 35. As I’ve said before, it’s a real benefit to everyone when you’re speaking the same language.

I recommend taking a few moments to read over this brief guide. Josh also recommended their Guide to Enterprise Integration, which I haven’t read yet, but I’ll take his word for it.

Publishing Forge Modules

Last week, we looked at an advanced spec helper, puppetlabs_spec_helper, and generated some tests with it. We also looked at the rake targets available with the helper, and you may have noticed the build target: “Build puppet module package”. Prior to that, we created a new certs module that is code only, no data, for distributing certificate files to web servers. It seems like a good opportunity to see how this works so we can upload a module to the forge.

Forge Modules

The Puppet Forge is a central repository for shared modules, written by Puppet Labs or by the community. It’s the puppet analog to perl’s CPAN or python’s PyPI – tell puppet you want a module and it fetches it from the forge. As the modules are shared, rather than specific to a user’s installation, be sure to use sound fundamentals to create a portable module. Your role and profile modules, which likely reference umpteen other modules and the files and templates they contain, are not good candidates for the forge, but a utility module like the certs module is a good candidate.
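
Both directions run through the puppet module subcommand; the output filename below is a placeholder for whatever author and version your module metadata declares:

puppet module install puppetlabs-stdlib   # fetch a released module from the forge
puppet module build certs                 # package the certs module for upload
# produces certs/pkg/<author>-certs-<version>.tar.gz, which you then
# upload through the forge website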

Continue reading

Apply to be a vExpert 2015 candidate

If you haven’t already, you should apply to be a vExpert 2015 candidate. There are three forms you can use.

You have until Dec 12th to apply (next Friday) and results will be announced February 5th. There will also be a Q2 application period (est. March 15th deadline, based on 2014) if you miss this one for some reason. If you’re part of the virtualization community and unsure whether you should apply, then you should apply. This is not a technical certification or a technical award; it’s a community award. Writing blogs, participating in VMUGs, using Twitter – these are all activities that vExperts take part in.

And if you don’t apply, you might find me recommending you at the beginning of next week. Hop to it!

Beyond rspec-puppet: puppetlabs_spec_helper

Editor’s note: Please check out the much newer article Configuring Travis CI on a Puppet Module Repo for the new “best practices” around setting up rspec-puppet. You are encouraged to use the newer setup, though everything on this page will still work!

We recently discussed test-driven development for puppet modules in the context of rspec-puppet. That’s a nice, simple introduction to testing, but it doesn’t provide everything we need. Rspec-puppet is limited in the matchers it provides (notably, there are no negation tests) and in its ability to test dependencies (when a module includes another module), both of which will be necessary eventually. The next step is puppetlabs_spec_helper, a project by Puppet Labs that provides us with more full-fledged specification tests.
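
Once it’s installed (the Installation section below covers getting a modern ruby first), wiring it into a module takes two lines:

# Rakefile: pulls in the helper's rake targets (spec, lint, build, and friends)
require 'puppetlabs_spec_helper/rake_tasks'

# spec/spec_helper.rb: sets up rspec-puppet and the module's fixtures
require 'puppetlabs_spec_helper/module_spec_helper'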

Installation

The biggest requirement for puppetlabs_spec_helper is a ruby version of 1.9 or higher. CentOS 6.5, however, only includes v1.8.7. There are numerous ways to upgrade ruby, most of which are horrible. We’ll look at using the Ruby Version Manager, or RVM, to upgrade to 1.9.3. This can be done with puppet via the maestrodev/rvm module. After adding the module to your master, create a class or modify an existing one to provide RVM and some puppet and rspec gems.
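
A sketch of such a class, using the resource types from the maestrodev/rvm README (the exact parameters and the title format for rvm_gem are assumptions; verify them against the module version you install):

class profile::ruby193 {
  # maestrodev/rvm installs and manages RVM itself
  include rvm

  # install MRI 1.9.3 and make it the system default
  rvm_system_ruby { 'ruby-1.9.3':
    ensure      => present,
    default_use => true,
  }

  # spec-related gems installed into that ruby; the 'rubyversion/gem'
  # title format follows the module's README
  rvm_gem { 'ruby-1.9.3/puppetlabs_spec_helper':
    ensure  => present,
    require => Rvm_system_ruby['ruby-1.9.3'],
  }
}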

Continue reading

Pull Requests aren’t just for Code anymore

Pull requests (PRs) are an interface to discuss proposed changes to be integrated into a project. As a sysadmin, you might typically hear about developers using PRs to manage code in a public repository. Even if you don’t know how to code, you can still contribute with PRs to your favorite project.

As a frequent user of r10k, but someone unfamiliar with ruby, I can’t contribute very much to the inner workings of the program. However, as a user, I’m in a good position to provide feedback on the user experience. To that end, I forked the repository on github and created some branches to update the documentation and (hopefully!) improve it for other users. Afterward, I submitted PRs and worked with Adrien Thebo, the project maintainer, to fine-tune them until they were correctly implemented. The results of that PR are here, and the other PRs are merged or still being edited.

As I’ve noted before, documentation matters. If you can’t or aren’t willing to contribute code on a project, improving the documentation is a great way to give back to the community. Give it a shot!