Publishing Forge Modules

Last week, we looked at an advanced spec helper, puppetlabs_spec_helper, and generated some tests with it. We also looked at the rake targets available with the helper, and you may have noticed the build target: “Build puppet module package”. Prior to that, we created a new certs module that is code-only, no data, for distributing certificate files to web servers. This seems like a good opportunity to see how the build target works so we can upload a module to the forge.

Forge Modules

The Puppet Forge is a central repository for shared modules, written by Puppet Labs or by the community. It’s the puppet analog to perl’s CPAN or python’s PyPI – tell puppet you want a module and it fetches it from the forge. Because forge modules are shared, rather than specific to a user’s installation, be sure to use sound fundamentals to create a portable module. Your role and profile modules, which likely reference umpteen other modules and the files and templates they contain, are not good candidates for the forge, but a utility module like the certs module is a good candidate.
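The whole cycle can be sketched from the command line with the `puppet module` subcommands (the module name below is illustrative – substitute your own forge username and module):

```shell
# Find and install a module from the forge into your modulepath
puppet module search certs
puppet module install rnelson0-certs

# From inside your own module's directory, build a distributable package.
# This creates pkg/<username>-<module>-<version>.tar.gz, which you can
# then upload to the forge through its web interface.
puppet module build
```

The `build` target from the puppetlabs_spec_helper rake tasks wraps the same packaging step, so either path gets you a tarball ready for upload.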


Apply to be a vExpert 2015 candidate

If you haven’t already, you should apply to be a vExpert 2015 candidate. There are three forms you can use:

You have until Dec 12th (next Friday) to apply, and results will be announced February 5th. There will also be a Q2 application period (est. March 15th deadline, based on 2014) if you miss this one for some reason. If you’re part of the virtualization community and unsure whether you should apply, then you should apply. This is not a technical certification, it’s a community award. Writing blogs, participating in VMUGs, using Twitter – these are all activities that vExperts take part in.

And if you don’t apply, you might find me recommending you at the beginning of next week. Hop to it!

Beyond rspec-puppet: puppetlabs_spec_helper

Editor’s note: Please check out the much newer article Configuring Travis CI on a Puppet Module Repo for the new “best practices” around setting up rspec-puppet. You are encouraged to use the newer setup, though everything on this page will still work!

We recently discussed test-driven development for puppet modules in the context of rspec-puppet. That’s a nice, simple introduction to testing, but it doesn’t provide everything we need. Rspec-puppet has a limited set of matchers (notably, there are no negation tests) and cannot test dependencies (when a module includes another module), both of which will be necessary eventually. The next step is puppetlabs_spec_helper, a project by Puppet Labs that provides us with more full-fledged specification tests.
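The plumbing for the helper is minimal: a module’s Rakefile and spec/spec_helper.rb each need a single require, and specs build on that. A sketch is below; the class name in the example spec is illustrative, and the exact matchers available depend on your rspec-puppet version:

```ruby
# Rakefile -- provides spec, spec_prep, spec_clean, lint, and build targets
require 'puppetlabs_spec_helper/rake_tasks'

# spec/spec_helper.rb -- wires up rspec-puppet and fixture management
require 'puppetlabs_spec_helper/module_spec_helper'

# spec/classes/certs_spec.rb -- an example class spec
require 'spec_helper'

describe 'certs' do
  it { should contain_class('certs') }
end
```

With that in place, `rake spec` fetches any fixtures defined in .fixtures.yml, runs the specs, and cleans up afterward.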

Installation

The biggest requirement for puppetlabs_spec_helper is a ruby version of 1.9 or higher. CentOS 6.5, however, only includes v1.8.7. There are numerous ways to upgrade ruby, most of which are horrible. We’ll look at using the Ruby Version Manager, or RVM, to upgrade to 1.9.3. This can be done with puppet via the maestrodev/rvm module. After adding the module to your master, create a class or modify an existing one to provide RVM and some puppet and rspec gems.
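As a sketch, such a class might look like the following. The class name is illustrative, and the resource parameters should be checked against the maestrodev/rvm module’s README for your version:

```puppet
# Install RVM, a modern ruby, and the gems needed for module testing
class profile::ruby193 {
  include ::rvm

  # Install ruby 1.9.3 system-wide and make it the default interpreter
  rvm_system_ruby { 'ruby-1.9.3':
    ensure      => 'present',
    default_use => true,
  }

  # Gems for writing and running module specs
  rvm_gem { ['puppet', 'rspec-puppet', 'puppetlabs_spec_helper']:
    ensure       => 'present',
    ruby_version => 'ruby-1.9.3',
    require      => Rvm_system_ruby['ruby-1.9.3'],
  }
}
```

Classify your development node with this class and, after the next agent run, `ruby --version` should report 1.9.3 and the rake targets from puppetlabs_spec_helper will be available.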


Pull Requests aren’t just for Code anymore

Pull requests (PRs) are an interface to discuss proposed changes to be integrated into a project. As a sysadmin, you might typically hear about developers using PRs to manage code in a public repository. Even if you don’t know how to code, you can still contribute with PRs to your favorite project.

As a frequent user of r10k, but someone unfamiliar with ruby, I can’t contribute very much to the inner workings of the program. However, as a user, I’m in a good position to provide feedback on the user experience. To that end, I forked the repository on github and created some branches to update the documentation to (hopefully!) improve it for other users. Afterward, I submitted PRs and worked with Adrien Thebo, the project maintainer, to fine-tune the PRs until they were correctly implemented. The results of that PR are here and the other PRs are merged or still being edited.

As I’ve noted before, documentation matters. If you can’t or aren’t willing to contribute code on a project, improving the documentation is a great way to give back to the community. Give it a shot!

Toyota Production Systems (Lean) Terminology

I found a great article about Toyota Production Systems (TPS) terminology. TPS is also known as Lean and is the basis of The Goal, The Phoenix Project, and DevOps. I’ll be using the terminology a lot in the future, so take a moment to read up on the terms. A shared language helps ensure effective communication. We’ve already discussed Kanban; here are some other terms to focus on:

  • Andon
  • Kaizen – notice it’s for everyone, not specialists.
  • Nemawashi
  • Muda
  • Mura
  • Muri
  • Set-Up Time
  • Tataki Dai

DevOps: The Dev doesn’t mean what you think it means

In past discussions of DevOps, I’ve said that the Dev doesn’t stand for Developers. That probably seems odd, since in many instances DevOps is described as Developers + Ops. DevOps is a software development methodology, hence the Dev means development. But what does that actually mean?

Development is the business side of your product pipeline, as opposed to Ops, which is the customer side. The business side entails not just your software developers, but Product, Sales, and QA (and you could even argue Marketing). These organizations help identify the product requirements and the customers who will use the product. You need a product that your customers want before the software developers can start developing. This whole side of the business needs to work in synchrony to provide the most value. Development without a product nets you nothing, and a product without customers nets you increased inventory costs.

This also affects your feedback loop between Ops and Dev. The operations side of the house needs to provide feedback not just to the software developers, but to let Product and Sales know how the customer’s needs were met and QA needs to know about quality issues that slipped through. If you only talk to the developers, your feedback loop isn’t complete and you’re not implementing DevOps properly.

Celebrating your developers and ignoring the rest of development is like exercising your arms and legs but ignoring your core.

Puppet Forge Module rnelson0/certs

I just published my first puppet module on the forge, rnelson0/certs. It provides a single define that installs a pair of SSL files (.crt and .key) from a specified external location to the managed node. This is designed for use with apache::vhost defines that allow you to provide the name of SSL files to the vhost, but requires the files to already exist on the node. I hope you find it useful. Report any issues via the GitHub issues tracker. Thanks!
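As a sketch of the intended usage (the define and parameter names here are illustrative – check the module’s README for the actual interface), the define deploys the pair of files, and an apache::vhost then references them by path:

```puppet
# Deploy the certificate pair for a site from a puppet fileserver mount
certs::vhost { 'www.example.com':
  source_path => 'puppet:///site_certs',
}

# The vhost expects the files to already exist on the node, so require
# the certs::vhost resource to guarantee ordering
apache::vhost { 'www.example.com':
  port     => '443',
  docroot  => '/var/www/example',
  ssl      => true,
  ssl_cert => '/etc/ssl/certs/www.example.com.crt',
  ssl_key  => '/etc/ssl/certs/www.example.com.key',
  require  => Certs::Vhost['www.example.com'],
}
```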

Writing for #vDM30in30

As November rushes to its conclusion, it’s time to start some introspection about #vDM30in30. Here are some observations I’ve made, in no particular order:

  • I was asked how I write so many articles in such a short time. My most successful pattern is to identify 2-3 related concepts I want to write about and jot 5 or 6 sentences describing them in Evernote. When it’s time to write an article, I go to Evernote, pick a concept, and give myself one hour to write. At the end of the hour, I start revising, with no time limit on that phase. I’ve been able to finish most articles in 60-90 minutes this way. The concepts stay focused, the editing is good, and I’m prevented from obsessing about perfection to the point that I never publish at all. This results in shorter articles, so it won’t hold true for longer technical pieces, but I like this pattern.
  • As a consequence of this pattern, my editing process is getting tighter and tighter. Write and finish, then edit, edit, edit. I’ve noticed fewer errors in my writing overall – hopefully that’s the reality of it.
  • Writing and editing is easy. Ideas are difficult. Having a stock to rely on isn’t a bad thing. I hope to end this effort with a dozen or so ideas banked for the future.
  • Thirty articles in thirty days is rough. Even with a decent process in place and much shorter articles, I don’t think I want to do this again anytime soon. I’ll stick to my once or twice a week schedule, thanks!
  • Fear is a killer. By publishing rapidly, I’ve overcome most of my fear. Previously, I would sit on a completed article for days, sometimes weeks, for fear of how it would be received. Now, I am more focused on writing for my own goals – I appreciate it when an article is well received, but it’s not the primary focus during writing and editing.
  • Even though I’m less concerned about the reception, of course I’ve looked at page view statistics. Whether I’ve published 0, 1, or 4 articles a day, page views – aggregate and per new article – seem to remain fairly consistent. There doesn’t appear to be a downside to publishing multiple articles a day. I also didn’t see any significant correlation between the day of the week or the time of publication and the number of views. This isn’t something I’ll worry about in the future.
  • I wrote about the writing process itself a few times. I found this useful to myself and I hope others find it helpful as well. The 30in30 exercise, after all, was about improving my writing.

While I said I don’t want to do this again, it has been a worthwhile exercise and I think I benefited a lot. I hope the readers enjoyed it, as well! Even though this 30in30 challenge is ending, it’s not too late to start your own 30in30 challenge.

Happy Thanksgiving!

When Good Hypotheses Go Bad

I’ve written recently about the necessity of hypotheses, whether you’re writing or troubleshooting. When you craft a hypothesis, it’s based on some preconceived notion you have that you plan to test. When your hypotheses are tested, sometimes they are found wanting. It’s tempting to discard your failed hypotheses and simply move on to the next, but even a failed hypothesis can have a purpose.

Imagine for a moment that you’re sitting in front of a user’s computer, helping them out with some pesky problem. Suddenly it’s the end of the day, you’ve tried everything in your repertoire and you’re calling it quits when the user looks at you and says, “I thought it was kinda weird you tried all that. Bob did everything you did last week and he couldn’t figure it out either.” Gee, thanks, Bob! There’s not even a ticket from last week, nor did he mention talking with this user. How many hours did you just waste that you could have saved if you knew none of it would work?

Bob spent hours crafting and testing his hypotheses, but he discarded all of them, straight to the circular file. You then proceeded to craft and test many of the same hypotheses which, of course, failed again. If only there was some way we could learn from our failures… Wait, there is!

Let’s take a quick look at another example, a scientific hypothesis. A researcher crafts a hypothesis and spends $100,000 to gather preliminary data that can be submitted for a grant worth $2,000,000. If the preliminary data looks good, great – well on the way to two million in funding. If it doesn’t pan out and the hypothesis is shot, $100,000 just went down the drain. That’s the nature of science. But…

A few years go by. Another researcher comes up with the same brilliant idea and sets out to collect some preliminary data for around $100,000. Whoops, the hypothesis isn’t that brilliant, it doesn’t work out, and the scientist has wasted time and money. Now science is out $200,000 on this failed hypothesis. If only she had known that someone else had tried this before, but there was nothing in the literature to indicate that anyone had. She publishes her data in a journal, and the next scientist who strikes on the same idea can see what the results will look like before investing time and money in it. Good money isn’t thrown after bad anymore.

You can help those after you (including future-you) if you take some time to record your hypotheses and how they failed. You don’t necessarily have to go into great detail – scientific papers obviously require more rigor, but often just a sentence or two will do. “Traceroutes were failing at the firewall, but a packet capture on the data port showed the traffic leaving the firewall,” or, “The AC fan wouldn’t start and the capacitor looked like it might be bad, but I swapped it out for my spare cap and it still won’t start.” If it’s a really spectacular failure – something that was ohhhhh-so-close to working, or a real subtle one – maybe it’s worthy of a full blog article.

Make sure to store this information somewhere it will be found by someone who is likely to need it. In Bob’s case, this is what the ticketing system was there for, so that others can see his previous work on an asset or for a user. At home, you might keep a journal or put a note in the margins of the AC manual. For public consumption, you might write a blog article or submit your research results to a journal. Anywhere that will help prevent someone in the future from having to waste resources to rediscover the failed hypothesis.

Try and make this part of your habit when researching and troubleshooting. State your hypothesis, test the hypothesis, and record any failures before proceeding with successes. Don’t be a Bob!

DevOps for the SysAdmin

Last Thursday, I was proud to present the following slidedeck with Byron Schaller at the Indianapolis VMUG meeting. There was one edit I wanted to make afterward, which of course is when I have the best editing thoughts 🙂

The Dev in DevOps stands for software development, not software developers.

It’s a small and subtle difference that has a huge impact.

If you haven’t spoken at your local VMUG, please give it a shot. It’s incredibly rewarding: it grows your speaking abilities, your ability to internalize and then vocalize a subject, and your participation in the VMUG. Most VMUGs struggle to find speakers. Please volunteer and give back to the community.