Visible Ops Phase One: Stabilize The Patient and Modify First Response

The first phase of implementing Visible Ops is Stabilize The Patient and Modify First Response. The first order of business is to reduce unplanned work significantly – to 25% or less of total work – so the organization can focus on more than just firefighting. Once the environment is stabilized, effort can instead be spent on proactive work that stops fires before they start.

The chapter, like all chapters in Visible Ops, begins with an Issues and Indicators section, describing the issues with both a formal statement and a narrative example. The issues are familiar to anyone in IT, from how self-inflicted problems create unplanned work to the inconvenient, but predictable, timing of system failures during times of urgency. The narrative example provided helps to relate the formal statement to the experiences we all share.

Continue reading

Book Review: The Visible Ops Handbook

A few months ago, I purchased The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps, by Kevin Behr, Gene Kim, and George Spafford and published by the IT Process Institute (ITPI). Those names may sound familiar, as these are the same authors of The Phoenix Project. Where The Phoenix Project is a novel that teaches us general concepts through storytelling, The Visible Ops Handbook is a set of instructions to help us implement the changes that are now so vital to the DevOps movement – it’s all about executing! The Visible Ops methodology it teaches is one of many precursors to DevOps, and it’s a wonderful foundation that I feel should be explored by all.

Visible Ops has a nice structure. There are five sections describing the four steps of Visible Ops (70 pages) followed by a few appendices (30 pages). The content is a result of studies done by the ITPI (contemporary equivalents of today’s State of DevOps reports) and of the implementation of Visible Ops by IP Services before the book went to print. In those 100 pages, the lessons learned from the studies are codified using ITIL terminology. The writing is very accessible, but also very dense, and is worth referencing repeatedly as you work through the included steps. The target audience is decidedly technical, but everyone in an IT organization can benefit from the material.

Continue reading

Hiera-fy your Hiera setup

In 2014, we set up our puppet environment, and we’ve spent the first half of 2015 improving the configuration. In that time, we installed hiera, were introduced to it through the role/profile pattern, focused on separating the data from the code and moving it into hiera, and most recently worked on an improved controlrepo that modified the hiera layout. We have been using hiera the whole time, and there’s still a lot we can do to improve how we use it.

Manage Hiera with Puppet

Our initial hiera.yaml was simple and static. With our improved controlrepo layout, the new hiera.yaml file is more dynamic. A problem still remains: we are configuring hiera manually! You may have a hiera.yaml in your controlrepo or even a bootstrap.pp file for your initial puppet master. We have also been managing the hiera package manually in profile::hiera. This addresses the problem in the short term but adds to our administrative overhead – any time we update the hiera config, we need to change these files as well as the master itself.
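Bringing hiera under Puppet's own management might look something like the sketch below. The file path assumes an open-source master, and the template name and notified service are hypothetical – adjust them to match your own layout:

```puppet
# profile/manifests/hiera.pp -- illustrative sketch only
class profile::hiera {
  # Manage the gem so it is no longer installed by hand
  package { 'hiera':
    ensure   => installed,
    provider => 'gem',
  }

  # Render hiera.yaml from a template kept in the controlrepo, so the
  # config on the master always matches what is in version control.
  file { '/etc/puppet/hiera.yaml':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('profile/hiera.yaml.erb'),
    require => Package['hiera'],
    notify  => Service['httpd'],  # or whatever service runs your master
  }
}
```

With this in place, a change to the template in the controlrepo propagates to the master on the next agent run instead of requiring a manual edit.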

Continue reading

Opinionated Test Driven Development

There are many technical reasons to choose (or not choose!) to use Test Driven Development. In my opinion, one of the most important benefits isn’t technical at all – it’s that it forces you to have an opinion! When you decide what you are going to test, you are forming an opinion on what you want your code to do. It might be processing an input, transforming an input, or providing output based on current state. You have an opinion on the end state but leave the implementation to be determined.

I liken this to having an outline for writing. If you start writing without an outline, the end result never seems to match what you thought you’d end up with. If you follow an outline, however sparse, the result matches what you thought you’d end up with – if not, you didn’t follow the outline. Sure, it requires some revisions here and there to make it all tidy, but at least your first draft is legible.

When considering whether you should employ Test Driven Development, keep this in mind as a non-technical benefit!

Configuring an R10k webhook on your Puppet Master

Now that we have a unified controlrepo, we need to set up an r10k webhook. I have chosen to implement the webhook from zack/r10k. There are other webhooks out there – I’m a huge fan of Reaktor – but I chose zack/r10k because I’m already using the module and because it is recommended by Puppet Labs. It’s an approved module, to boot!

Update: The zack/r10k module has migrated to puppet/r10k, which should be used instead. I’ve commented out sections that are incompatible with the most recent versions of the module, but as this article is now 2 years old, there may be other changes in surrounding modules you will become aware of, too.

Module Setup

The first step is to make sure the module is installed along with its dependencies. There are no conditional dependencies in a Puppet module’s metadata.json, so you can skip puppetlabs/pe_gem and gentoo/portage if you’d like. On the other hand, there are no ill side effects from having the modules present unless you were to use them for some reason. This is an opportunity to up the version on some pinned modules as well, such as stdlib, as long as you do not increment the major version. If the major version increases, there’s a significant chance your code will have some breakage; it’s best to do that in a separate branch.
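In Puppetfile terms, the setup described above might look like this sketch. The version pins are illustrative only – check the Forge for the releases current in your environment:

```ruby
# Puppetfile (excerpt) -- illustrative pins
mod 'puppetlabs/stdlib', '4.5.1'   # bump within the same major version
mod 'puppetlabs/concat', '1.2.0'
mod 'zack/r10k',         '2.7.4'

# Optional dependencies that can be skipped if unused:
# mod 'puppetlabs/pe_gem'
# mod 'gentoo/portage'
```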

I encountered a bug with zack/r10k v2.7.3 (#162). This bug is fixed in v2.7.4. Be sure to upgrade!

Continue reading

Preventing Git-astrophe – Judicious use of the force flag

I’d like to tell a tale of a git-astrophe that I caused in the hope that others can learn from my mistakes. Git is awesome but also very feature-ful, which can lead to learning about some of those features at the worst times. In this episode, I abused my knowledge of git rebase, learned how the -f flag to git push works, and narrowly avoided learning about git reflog/fsck in any great detail.

Oftentimes, you will need to rebase your feature branch against master (or production – in this case, a puppet controlrepo) before submitting a pull request for someone else to review. This isn’t just a chance to rewrite your commit history to be tidy, but to re-apply the changes in your branch against an updated main branch.

For instance, you created branch A from production on Monday morning, at the same time as your coworker created a branch B. Your coworker finished up her work on the branch quickly and submitted a PR that was merged on Monday afternoon. It took you until Tuesday morning to have your PR ready. At this time, it is generally advisable to rebase against the updated production to ensure your branch behaves as desired after applying B’s changes. Atlassian has a great tutorial on rebasing, if you are not familiar with the concept.
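The Monday/Tuesday scenario above can be replayed in a throwaway repo; branch names and file names here are illustrative:

```shell
# Demonstrate the rebase flow in a disposable repository
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email 'you@example.com'
git config user.name 'you'

echo base > site.pp
git add . && git commit -qm 'initial commit'
main=$(git symbolic-ref --short HEAD)  # master or main, depending on git

git checkout -qb feature_a             # Monday morning: your branch A
echo a > module_a
git add . && git commit -qm 'A: my work'

git checkout -q "$main"                # Monday afternoon: B's PR is merged
echo b > module_b
git add . && git commit -qm 'B: coworker work'

git checkout -q feature_a              # Tuesday: replay A onto updated main
git rebase -q "$main"

# The rebase rewrote feature_a's history, so a previously pushed copy has
# diverged. Prefer --force-with-lease over a bare -f so you only overwrite
# a remote branch that still looks the way you last saw it:
#   git push --force-with-lease origin feature_a
```

After the rebase, feature_a contains both B's merged work and your commits replayed on top of it, which is exactly the state you want reviewed in the PR.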

Continue reading

Improved r10k deployment patterns

In previous articles, I’ve written a lot about r10k (again, again, and again), the role/profile pattern, and hiera (refactoring modules and rspec test data). I have kept each of these in a separate repository (to wit: controlrepo, role, profile, and hiera). Keeping them separate makes for an awkward workflow, but on the other hand, there is great separation between the various components. In some shops, granular permissions are required: the Puppet admins have access to the controlrepo and all developers have access to role/profile/hiera repos. There may even be multiple repos for different orgs. If you have a great reason to keep your repositories separate, you should continue to do so. If not, let’s take a look at how we can improve our r10k workflow by combining at least these four repositories into a single controlrepo.

Starting Point

To ensure we are all on the same page, here are the relevant portions of my Puppetfile:

Continue reading

Deploying MySQL with Puppet, without disabling SELinux

I’ve been vocal in the past about the need to not disable SELinux. Very vocal. However, SELinux can be difficult to work with. I was reminded of how difficult while deploying MySQL recently. Let’s take a look at how to iron out the SELinux configuration for MySQL, and how to deploy it with Puppet. I will be using CentOS 6.6 in this article. The package names and SELinux information may vary if you use another distribution.

MySQL Design

Let’s review the design of the MySQL installation before continuing. For the most part, it’s a standard install; we’re not doing any elaborate tuning. All the passwords will be ‘password’ (clearly you should change this in production!). All the anonymous users (@localhost, root@localhost, etc.) will have a password set. An additional ‘replication’ user is created so multiple databases can be replicated, and example replication settings are included. The test databases are removed and a single user/database pair of wikiuser/wikidb will be created. We won’t do anything with the database; it’s just an example that can be duplicated as needed.
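Using the puppetlabs/mysql module, that design can be sketched roughly as follows. The ‘password’ values and the wikiuser/wikidb pair come from the design above; treat the exact parameters as an approximation and check the module’s documentation against the version you have installed:

```puppet
# Illustrative sketch of the design using puppetlabs/mysql
class { '::mysql::server':
  root_password           => 'password',  # change this in production!
  remove_default_accounts => true,        # drops anonymous users and test DBs
  override_options        => {
    'mysqld' => {
      'server_id' => '1',
      'log_bin'   => 'mysql-bin',         # example replication settings
    },
  },
}

# Replication user so multiple databases can be replicated
mysql_user { 'replication@%':
  password_hash => mysql_password('password'),
}
mysql_grant { 'replication@%/*.*':
  user       => 'replication@%',
  table      => '*.*',
  privileges => ['REPLICATION SLAVE'],
}

# The example user/database pair; duplicate as needed
mysql::db { 'wikidb':
  user     => 'wikiuser',
  password => 'password',
  host     => 'localhost',
  grant    => ['ALL'],
}
```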

Continue reading

Introducing ‘puppetinabox’: bootstrap a lab setup with Puppet

Over the past few days, I’ve been working on a project. I call it puppetinabox, and it includes a consolidated controlrepo with a Puppetfile of curated forge modules, plus a config repository. If you have an existing Linux OS template and basic skills with linux and git, you can quickly deploy a network managed by puppet and r10k. This curated collection can be used as is, to learn Puppet or stand up a basic network, or you can fork the code and start customizing your network, using it as the foundation for a more complex configuration.

This project grew out of the 12 Days of Commitmas and my own desire to puppetize my home network. Over the holiday break, there was a lot of streaming media going on and I started to see failures on my services. Refreshing the services, which were far too tightly coupled on mostly one VM, was looking pretty daunting. Puppetizing the network was slightly daunting, but far preferable to doing it by hand. I decided to put what I had learned and practiced to good use and thus, puppetinabox was born!

Puppetinabox includes everything required to provide puppet, puppetdb, r10k, dns and dhcp services, a yum repository, and a build server. The only thing you need to provide is a gateway device and you have a functional network. If you already have a working network and you want to puppetize it, never fear – you can tweak the provided material to fit or use puppetinabox to start refreshing the services one at a time.

I created the puppetinabox GitHub organization to house all the repositories. You can read the documentation here. I’d love to hear what you think, whether this will help you or what improvements might make it better. Let me know in comments, on twitter, or as a GitHub issue. If anyone is graphically inclined, I could really use a nice looking avatar for the organization, too!

If you’d like some more information, I presented on the 1/15/2015 vBrownBag DevOps Series. Watch the video and check out the slides!

Puppet rspec tests with Hiera data

Editor’s note: Please check out the much newer article Configuring Travis CI on a Puppet Module Repo for the new “best practices” around setting up rspec-puppet. You are encouraged to use the newer setup, though everything on this page will still work!

I’ve covered puppet unit tests with rspec and beyond before. What if you need to go even further and test data from hiera? There’s a way to do that with rspec as well, and it only requires a few extra lines to your spec config – plus the hiera data, of course.

Use hiera in your class

We’ve covered a number of ways to use hiera in your class. You can use hiera lookups (hiera(), hiera_hash(), etc.) and automatic parameter lookups with classes. We’ll look specifically at hiera_hash() and the create_resources() it is commonly paired with. You cannot simply test that create_resources() was called because you need to know the resulting title of the generated resource. Here’s a simple DHCP profile class I created:

class profile::dhcp {
  # DHCP service and host reservations
  include dhcp::server
  $dhcp_server_subnets = hiera_hash('dhcp_server_subnets', undef)
  if ($dhcp_server_subnets) {
    create_resources('dhcp::server::subnet', $dhcp_server_subnets)
  }

  $dhcp_server_hosts = hiera_hash('dhcp_server_hosts', undef)
  if ($dhcp_server_hosts) {
    create_resources('dhcp::server::host', $dhcp_server_hosts)
  }
}
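As a preview, the extra spec configuration largely amounts to pointing rspec-puppet at a hiera config file via `let(:hiera_config)`. The fixture path and subnet title below are hypothetical, standing in for your own hiera data:

```ruby
# spec/classes/dhcp_spec.rb -- sketch only
require 'spec_helper'

describe 'profile::dhcp' do
  # Tell rspec-puppet where to find hiera data for lookups
  let(:hiera_config) { 'spec/fixtures/hiera/hiera.yaml' }

  it { is_expected.to compile }

  # With dhcp_server_subnets defined in the fixture data, we can assert
  # the generated resource by its title rather than asserting that
  # create_resources() itself was called:
  it { is_expected.to contain_dhcp__server__subnet('192.168.1.0') }
end
```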

Continue reading