30 in 30 Blog Writing Begins!

Last year, I participated in vDM30in30, a spin-off of (and shout-out to) National Novel Writing Month (NaNoWriMo), but focused on bloggers: the goal is to write 30 blog posts in 30 days. It comes from virtualdesignmaster.com, but it’s not just for people in the virtualization community. It’s for anyone who has a blog – or has always wanted to start one – and wants to try for 30 blog posts in November.

Some people do one post a day, some just write whenever the mood strikes. Some schedule all 30 posts ahead of time, some schedule a few or none. Some posts are really long, others just a paragraph or two. Some people write all 30 posts or even more; others don’t (I only got to 25 last year). It’s whatever you want it to be. What’s important is that you’re practicing your writing skills, getting into the habit of writing, and sharing with others. All you need to do is tag your blog posts with the category vDM30in30 and use the hashtag #vDM30in30 if you publicize them on social media.

So, are you with me?

Learn more about vDM30in30 here and keep up with everyone’s posts by tracking the hashtag. You can also tweet your participation @discoposse to be added to the vDM30in30 list.

Configuring Travis CI on a Puppet Module Repo

Recently we looked at enabling Travis CI on the Controlrepo. Today, we’re going to do the same for a module repo. We’ll use much of the same logic and files, tweaking things a bit to fit the slightly different file layout and perhaps adjusting the test matrix. If you have not registered for Travis CI yet (for public or private repositories), go ahead and take care of that before continuing.

The first challenge is to decide whether you’re going to enable Travis CI on an existing module or a new one. Since a new module is probably easier, let’s get the hard stuff – an existing module – out of the way first.

Set up an existing module

I have an existing module, rnelson0/certs, which has no CI but does have working rspec tests, making it a great candidate for today’s efforts. Let’s make sure the tests actually work; it’s easy to make incorrect assumptions:

[Figure 1: rspec test run for the certs module]

Continue reading

Configuring Travis CI on your Puppet Controlrepo

Continuous Integration (CI) is an important technique in modern software development. For every change, a CI system runs a suite of tests to ensure the whole system – not just the changed portion – still “works”, or more specifically, still passes the defined tests. We are going to look at Travis CI, a cloud-based Continuous Integration service that you can connect to your GitHub repositories. This is valuable because it’s free (for “best effort” access; there are paid plans as well) and helps you guarantee that code you check in will work with Puppet. This isn’t a substitute or replacement for rspec-puppet; it’s another layer of testing that improves the quality of our work.

There are plenty of other CI systems out there – Jenkins and Bamboo are popular – but that would involve setting up the CI system as well as configuring our repo to use CI. Please feel free to investigate these CI systems, but they’ll remain beyond the scope of this blog for the time being. Please share any guides you may have in the comments, though!

Travis CI works by spinning up a VM or Docker instance, cloning our git repo (using tokenized authentication), and running the command(s) we provide. Each entry in our test matrix runs on a separate node, so we can test different OS, Ruby, or Puppet versions to our heart’s content. The results of the matrix are visible through GitHub and show us red if any test failed and green if all tests passed. We’ll look at some details of how this works as we set up Travis CI.
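To make the matrix concrete, here’s a minimal sketch of what a controlrepo’s .travis.yml might look like. The Ruby versions, Puppet gem versions, and the rake spec task are illustrative assumptions – match them to whatever your own Gemfile and Rakefile actually provide:

```yaml
# Hypothetical .travis.yml sketch - versions and tasks are assumptions,
# not prescriptions; adjust them to your own Gemfile and Rakefile.
language: ruby
script: bundle exec rake spec
matrix:
  include:
    # Each entry below runs on its own node with its own Ruby/Puppet pair
    - rvm: 1.9.3
      env: PUPPET_GEM_VERSION="~> 3.8"
    - rvm: 2.1
      env: PUPPET_GEM_VERSION="~> 4.2"
notifications:
  email: false
```

Each `include` entry becomes one build in the matrix, so testing an additional Puppet or Ruby version is just one more stanza.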

From a workflow perspective, you’ll continue to create branches on your controlrepo and submit PRs. The only additional step is that when a PR is ready for review, you’ll want to wait for Travis CI to complete first. If it’s red, investigate the failure and remediate it; don’t review code until everything is green, because it won’t work anyway. This is mostly a time saver – unless you’re watching your CI run, which of course makes it slower!

Continue reading

Minimum Viable Configuration (MVC)

In my PuppetConf talk, I discussed a concept I call “Minimum Viable Configuration”, or MVC. It is similar to the Minimum Viable Product (MVP), in which you develop and deploy just the core features required to determine whether there’s a market fit for your anticipated customer base. The MVC, however, is targeted at your developers: it’s the minimum amount of customization required for developers to be productive with the languages and tools your organization uses. This can include everything from preferred IDEs to language plugins, build tools, and so on.

A Minimum Viable Configuration may not appear necessary to many, especially those who have been customizing their own environments for years or decades. The MVC is really targeted at your team, or at the organization as a whole. You may have a great customized IDE setup for writing Puppet or PowerShell code, but others on your team may just be starting. The MVC lets the organization share that accumulated wealth, making full use of the tens or hundreds of combined years of experience on the team. A novice developer can sit down and be productive with any language or tool covered by the MVC by standing on the shoulders of their teammates.

The MVC truly is the minimum customization required to get started – for instance, a .vimrc file that sets the tabstop to 2 characters and provides enhanced color coding and syntax checking for various languages – but it still allows users to add their own customizations. If you enforce the minimum but don’t limit further customization, new hires can not only check their email on day one, they can actually delve through the codebase and start making changes on day one. You can also tie it into any Vagrant images you might maintain.
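As a purely illustrative example, the baseline .vimrc described above might be as small as this (the specific settings are assumptions about your team’s conventions):

```vim
" Hypothetical MVC baseline .vimrc - users layer their own settings on top
syntax on                  " color coding per filetype
filetype plugin indent on  " language-aware indentation and syntax plugins
set tabstop=2              " display tab characters as 2 columns
set shiftwidth=2           " indent operations move by 2 columns
set expandtab              " insert spaces rather than tab characters
```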

Your MVC will change over time, of course. Use your configuration management tool, like Puppet, to manage the MVC. When the baseline is updated, all the laptops and shared nodes can be updated quickly to the new standard. You can see an example of a Minimum Viable Configuration for Linux in PuppetInABox’s role::build and the related profiles (build, rcfiles::vim, rcfiles::bash). You can easily develop similar roles and profiles for other languages or operating systems.
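As a sketch of what such a profile might look like in Puppet – the class name, package name, and file paths here are hypothetical, loosely modeled on PuppetInABox’s rcfiles::vim profile:

```puppet
# Hypothetical profile distributing the team's baseline vim configuration.
class profile::rcfiles::vim {
  package { 'vim-enhanced':
    ensure => installed,
  }

  # Place the shared .vimrc in /etc/skel so every newly created user
  # starts from the team baseline (and can customize from there).
  file { '/etc/skel/.vimrc':
    ensure  => file,
    mode    => '0644',
    source  => 'puppet:///modules/profile/vimrc',
    require => Package['vim-enhanced'],
  }
}
```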

I feel the MVC can be a very powerful tool for teams that work with an evolving variety of tools and languages, that hire novices and grow expertise internally, and especially for organizations exposing Operations teams to development practices (i.e. DevOps). What do you think about the MVC? Are you using something similar now, or is there another way to address the issue?

#PuppetConf 2015 Wrap-Up

I mentioned over the spring/summer that I was headed to PuppetConf 2015, which happened last week. It was a blast! I highly recommend that if you use Puppet, you find a way to make it to PuppetConf 2016 which will be held in San Diego.

There were a lot of great events, official and unofficial, throughout the week. I met a ton of people – way too many to mention individually – and made a lot of friends. I live-tweeted three of the event days, which are storified; here are some highlights:

Contributor’s Summit: This is a great opportunity to become involved in the community. You can contribute docs, code, or commentary – I’m serious about that last one; far more time was spent on design than on coding. A few of us – Henrik, Felix, Vanessa, and myself – sat down to attack HI-118 and created something. Plenty of other people and groups created their own things. I saw lots of ways to do the same things, and also the awesome puppet-retrospec, which creates rspec-puppet tests for all .pp code in a module. It’s very naive at this point, but it’s better than having no tests!

Sessions, Day One: At the keynote, Puppet’s new Application Orchestration was unleashed. This is seriously awesome. Define your application’s microservices, then assign nodes to provide the services. Need multiple nodes for a service? Assign more than one. Want a node to provide more than one service – say, a single SQL server that serves more than one database? Assign multiple services to that node. It’s pretty simple but pretty powerful. Of course, we only got to check out some demos on the Exhibit Floor, but it’s very promising.

I attended a number of sessions, of course:

  • State of the Puppet Community – Kara Sowles & Meg Hartley, Puppet Labs
  • 200,000 Lines Later: Our Journey to Manageable Puppet Code – David Danzilio, Constant Contact
  • Infrastructure Security: How Hard Could it Be, Right? – Ben Hughes, Etsy
  • Identity: LGBTQ in Tech – Daniele Sluijters, Spotify
  • Hacking Types and Providers – Introduction and Hands-On – Felix Frank, mpex GmbH

Sessions, Day Two: Today’s keynote showed a bit more of the Application Orchestrator, but also focused on the speed and capabilities of some C++ prototypes of facter and puppet. They’re blazing fast. I also spoke, on Puppetizing Your Organization! That was terrifying but rewarding. If you have something to share, PuppetConf is the place; it’s an extremely friendly audience. Here are the sessions I attended:

  • Thriving in Bureaucratic Environments – Ashley Hathaway, IBM Watson
  • Application Modeling Patterns – David Lutterkort & Ryan Coleman, Puppet Labs
  • Building Communities – Byron Miller, HomeAway.com

After the last session, I had to head home immediately and take a red-eye. I missed out on the pub crawl and some of the after activities, but I had a great time while I was there. Hello to everyone I met there, thanks to everyone who contributed to my presentation and made it that much better, and especially thanks to everyone who showed up to my talk! Hopefully I’ll see you all in San Diego next October!

Update: I forgot to mention how great the Oregon Convention Center was. This was by far one of the best-organized conferences I’ve been to, with absolutely the best catered food.