Installing Jenkins and RVM

Update: Contrary to the module readme as of 12/1/2016, this WILL result in a VM running Jenkins 2, rather than version 1.

It’s time to get started installing Jenkins for use with Puppet. I’ll be installing this in my home lab on a VM called jenkins (so inventive, I know) as an all-in-one server for simplicity. I don’t know Jenkins really well, so I don’t know what I should be doing for masters and build servers, but my home lab needs are small anyway. I gave it 2 cores and 1 GB of RAM, which I think is on the low end for Jenkins. My initial exploration led me to believe I’ll need to install RVM as well as Jenkins, since my base OS image is CentOS 7 (Ruby 2.0.0) and the current Puppet 4 AIO build is against Ruby 2.1. I’m using Puppet and the rtyler/jenkins module to set up Jenkins, since I don’t know anything about its innards. Down below, you’ll see that I installed RVM by hand, after which it occurred to me that there’s a perfectly cromulent maestrodev/rvm module that I’ve used in the past – something I may swap in later, but I already wrote my manifest, so I’m going to share it anyway!

I used Jenkins jobs to experiment a lot with this setup, such as enabling RVM on a job and seeing what errors showed up in the log, but I’m going to hold off on showing that magic until the next article. I will still explain where portions of the manifest came from, just without the actual errors.

Before we go any further, make sure you have the jenkins module and its dependencies added to your Puppetfile, .fixtures.yml, and wherever else you might need to track them.
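For reference, the Puppetfile side is one mod line per module. Here is a sketch – the module list below is illustrative, and you should confirm the actual dependency list against the jenkins module’s metadata.json:

```
# Puppetfile (r10k) – entries are illustrative; check the module's metadata.json
mod 'rtyler/jenkins'
mod 'puppetlabs/stdlib'  # a common dependency; verify it is actually required
```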


Getting started with Jenkins and Puppet

If you’ve followed my blog, you’ve seen that I enjoy using Travis CI to run my Puppet rspec tests on the controlrepo or against component modules. When you create a PR, Travis starts a build and adds links to the builds in the PR notes. When it’s complete, the PR is updated to let you know whether the build was successful or not. You can also configure your repo to prevent merges unless the build succeeds. The best part is that it “just works” – you never have to worry about upgrading Travis or patching it or really anything other than making sure you enabled it. It’s a pretty awesome system, especially since it is absolutely free for open source projects. I do love Travis!

But, Travis isn’t always available and isn’t always the answer. Travis only runs against PRs or (typically) merges into the default branch. It won’t run on a schedule – scheduled runs, even when your code never changed, can help catch changes in any dynamic dependencies you might have (whether you should have any is a different topic!). It runs on a limited subset of OSes that Travis supports. You have to use GitHub and public repos to hit the free tier – if your controlrepo is private, there’s a different Travis site to use with no free plan, though there are a limited number of trial builds to get you started. After that, it can be pretty pricey, starting at one concurrent build for $69/month. This is great for a business, but most of us can’t afford $828 a year for the home network. It’s also, by itself, somewhat limited in how you can integrate it. It can be part of a pipeline, but it just receives a change and sends a status back to GitHub; it won’t notify another system itself. You have to build that yourself.

There are a ton of other Continuous Integration and Continuous Delivery systems out there, though. They can be cheaper, have better integrations, run on your own hardware/cloud, and don’t have to require GitHub. Among the myriad options available, I chose to look at Jenkins, with the rtyler/jenkins Puppet module. I can run it on my own VM in my home lab (hardware/cloud), and it can integrate with GitHub private repos ($0) or BitBucket (doesn’t require GitHub). CloudBees also discussed a new Puppet Enterprise Pipeline plugin at PuppetConf 2016, which is really appealing to me (better integrations). I can also have it run builds on a schedule, so if dependencies change out from underneath me, I’m alerted when that happens, not when I run my first test a month after the underlying change.

I’m very new to Jenkins and I’m going to do a lot of things wrong while testing it, but I’m going to start blogging about my experiences as I go, right or wrong, and then try to do a wrap-up article once I learn more and have a fully working solution. I’ve been inspired to do this by Julia Evans and her wonderful technical articles about the exploration of technology. That goes against my normal style of figuring it all out first, but I so love to see the exploration of others and hope you will, too! As we go on this journey, please feel free to educate or correct me in the comments and on Twitter, as always.

Puppet Tech Debt: Moving Rspec Tests

Now that we have shaved the yak of generate-puppetfile, it’s time to move my rspec tests to the top of the controlrepo, as discussed on Thursday. To do so, we need to move not just the spec tests but also the Rakefile, .rspec, and .rubocop.yml files, create a facsimile metadata.json, and of course generate a new .fixtures.yml.

Moving the files is pretty simple. In my controlrepo, only the profile class has tests, so I’m going to take those as-is. We do want to make sure that we start with a clean environment; we don’t want to copy a bunch of fixtures or other temp files. We can preview what needs removing with git clean -ffdxn; if you don’t want to re-download your gems, manually remove just the relevant directories rather than letting git clean take everything. Then we can do the file shuffle:

$ git clean -ffdxn
Would remove .bundle/
Would remove Gemfile.lock
Would remove coverage/
Would remove dist/profile/spec/fixtures/manifests/
Would remove dist/profile/spec/fixtures/modules/
Would remove vendor/
$ rm -fR dist/profile/spec/fixtures/manifests/ dist/profile/spec/fixtures/modules/
$ git mv dist/profile/spec ./
$ git mv dist/profile/{Rakefile,.rspec,.rubocop.yml} ./
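
With the tests at the top level, the new .fixtures.yml uses symlink fixtures pointing back into dist/. A sketch for the profile and role modules (the #{source_dir} variable is expanded by puppetlabs_spec_helper):

```
fixtures:
  symlinks:
    profile: "#{source_dir}/dist/profile"
    role: "#{source_dir}/dist/role"
```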


Release 0.10.0 of generate-puppetfile

As I discussed on Thursday, I am looking to re-architect the layout of my controlrepo’s rspec tests. Of course, there were yaks to shave, first. To that end, I’ve released version 0.10.0 of generate-puppetfile (rubygem, GitHub project) with the added ability to run generate-puppetfile --create-fixtures at the top of your controlrepo and generate a .fixtures.yml that sets up symlinks for all of the modules contained in your controlrepo. This functionality is triggered by the existence of an environment.conf file in the current directory that contains a modulepath stanza. All the paths in the modulepath are explored, and a symlink-type fixture is created for each module found. The previous functionality when run inside a module directory is still preserved, of course.

Because the project has not reached version 1.0.0 yet, I have renamed the option --fixtures to --create-fixtures. This is in preparation for a feature request for an --only-fixtures option.

Let’s look at how to use this new feature. Before we start, we must be at the top level of the controlrepo, and we need environment.conf to have a modulepath stanza.
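For illustration, a controlrepo environment.conf with a modulepath stanza generally looks something like this (a hedged sketch – the directory names are assumptions, not necessarily the file from this post):

```
# environment.conf – directory names are illustrative
modulepath = dist:modules:$basemodulepath
```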

Rspec trick: Getting at output you can’t see

I was having a problem yesterday with a specific rspec test that was failing for my Puppet tool generate-puppetfile, and I couldn’t understand why. I was expecting an exit status of 0 but was receiving 1, so obviously I had an issue, but I couldn’t see the accompanying error message. When I attempted to run the command myself, it succeeded, so I was pretty sure the issue was with either argument handling by my program or, more likely, the way I wrote my rspec test (spoiler: it was in my rspec!). Here’s what the rspec test looked like to start with:

  context 'when creating fixtures' do
    # NB: a block returns only its final expression, so args here evaluates
    # to just '--create-fixtures' – the 'rnelson0/certs' string is discarded
    let :args do
        'rnelson0/certs'
        '--create-fixtures'
    end

    its(:exitstatus) { is_expected.to eq(0) }
    it 'should create .fixtures.yml' do
      # NB: without an expect(...), this example can never fail;
      # File.exists? is also deprecated in favor of File.exist?
      File.exists? './.fixtures.yml'
    end
  end
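
One generic way to surface output a test swallows – not necessarily the trick this post goes on to describe – is to capture $stdout and $stderr around the call and inspect them afterward. A minimal sketch in plain Ruby:

```ruby
require 'stringio'

# Temporarily swap $stdout/$stderr for StringIO objects so that anything
# written while the block runs can be inspected afterward.
def capture_output
  old_stdout, old_stderr = $stdout, $stderr
  $stdout = StringIO.new
  $stderr = StringIO.new
  yield
  [$stdout.string, $stderr.string]
ensure
  $stdout, $stderr = old_stdout, old_stderr
end

out, err = capture_output do
  puts 'normal output'
  warn 'the hidden error message'
end
puts "stdout was: #{out.inspect}" # => stdout was: "normal output\n"
puts "stderr was: #{err.inspect}" # => stderr was: "the hidden error message\n"
```

In an rspec test, a helper like this wrapped around the command under test lets you include the captured text in the failure message instead of guessing why the exit status was 1.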


Puppet Tech Debt Day 3, excluding OS testing

When using rspec-puppet-facts, there’s one minor limitation: it tests all the supported operating systems, even if a class is designed for a specific OS or family. You can easily skip rspec-puppet-facts in a specific test, though that defeats the purpose for your general-purpose operating systems (if your OS type isn’t in facterdb – currently a certainty with network devices – you have to work around it anyway). But what if you want to keep using the facts and just exclude one or two compatible OSes? We were bandying this about on the Puppet Slack yesterday and came up with a solution. Thanks to Daniel Schaaff for determining the syntax for this pattern!

Here’s what the per-OS portion of a spec test file looks like once you add rspec-puppet-facts:

on_supported_os.each do |os, facts|
  context "on #{os}" do
    let(:facts) do
      facts
    end

    it { is_expected.to contain_class('profile::unattendedupgrades')}
    it { is_expected.to contain_class('profile::linux::apt_source_list')}
  end
end

Obviously you don’t expect, or want, apt-related resources to apply to non-Debian OSes. We can filter that out using the osfamily fact. This lets us keep the on_supported_os.each pattern in our spec tests but preserve the functionality we want. Here’s what that looks like:

on_supported_os.each do |os, facts|
  context "on #{os}" do
    let(:facts) do
      facts
    end

    if facts[:osfamily] == 'Debian'
      it { is_expected.to contain_class('profile::unattendedupgrades')}
      it { is_expected.to contain_class('profile::linux::apt_source_list')}
    end
  end
end

You can apply this wherever you want in your tests. If the class profile::unattendedupgrades were to apply to all OSes, move it out of the if block. You can also limit by kernel, by whether SELinux is enabled, or by a custom fact you generated.

Update: I came up with this pattern for my linux-only classes, to future proof against adding Windows as a supported OS:

describe 'profile::access_request', :type => :class do
  on_supported_os.each do |os, facts|
    next unless facts[:kernel] == 'Linux' # skip non-Linux fact sets, e.g. Windows
    # ... the usual "on #{os}" context and expectations follow ...
  end
end

Burnout and Vacation Time

As the holidays and the accompanying vacation time inch closer, it’s a good reminder of the service vacation provides in avoiding burnout. Burnout is a serious problem in our industry and something we should all strive to avoid in ourselves and others. Of course, we rarely think we are burnt out until it’s too late. There are some common signs to look out for – are you constantly shuffling from project to project or thought to thought, unable to catch up? When you do sit down with a task, is it easy to become distracted, putting you even further behind? Are you feeling despondent or even depressed when work is on your mind, maybe even thinking about rage-quitting someday soon? You might be getting burned out. This tends to be pretty common for many of us when November rolls around, since we probably haven’t taken a good vacation since the last holiday season.

That makes the holidays a great time to take a vacation. You need it, and you probably are going to lose some or all of that time if you don’t use it now. If you haven’t scheduled it already, you should talk to your manager and get it scheduled now. Take time for yourself and back away from the edge of the burnout cliff. And the sooner you talk to your boss, the better chance you can actually take the time you want, since all of your coworkers probably want to take vacation at the same time and someone has to draw the short straw!

I suggest that you also work on ensuring that in 2017, you do a better job of planning your vacation throughout the year. One or two days here and there usually doesn’t do it; you need a long enough period of time that you can dump the weight from your shoulders and stand up tall for a while before you go back to work. Take a week off in April or June or September instead of hoarding vacation days. If you can, travel somewhere, but even a stay-cation helps a lot if it’s a whole week.

I also recommend touching a computer as little as possible. We’re nerds, we’re going to touch a computer at some point – and technically you probably can’t avoid it if you want to drive somewhere, watch some streaming movies, or order some food – but we benefit when we aren’t surrounding ourselves with the things that are causing us stress in the first place. Instead of using the vacation to catch up on all the new literature about Product X, pick up a good novel, watch a TV series you skipped when it was new, or maybe pick up an entirely new hobby. Build a chicken coop, or watch someone else build one! Anything that’s NOT work.

Above all, take care of yourself. Enjoy the whole year, not just the holiday season!

Puppet Tech Debt Day 2: Adjusting Rspec Tests

Yesterday was our 14th anniversary, so I didn’t have time to write a blog post, but I did look into a tech debt issue: rspec tests. In addition to adding rspec-puppet-facts, I found a program called onceover that offers two concepts I want to look into.

First, there is a rake task to generate fixtures. I have a similar feature in generate-puppetfile, but it’s not as polished as I’d like – it often requires some manual touch-up afterward. Onceover’s rake task does not have that issue. I hope to be able to grab the rake task without being forced to use the rest of the tool or affecting the existing test setup. Maybe I’ll be interested in the rest of it someday, but not right now, and it’s great when you’re not forced to make a forklift upgrade like that.

The second item is centralizing rspec-puppet tests in the controlrepo rather than inside each module itself. That will change the relevant portions of my controlrepo layout from:

.
├── dist
│   ├── profile
│   │   ├── files
│   │   ├── lib
│   │   ├── manifests
│   │   ├── metadata.json
│   │   ├── Rakefile
│   │   ├── spec
│   │   │   ├── classes
│   │   │   ├── fixtures
│   │   │   │   ├── hieradata
│   │   │   │   └── hiera.yaml
│   │   │   └── spec_helper.rb
│   │   ├── templates
│   │   └── tests
│   └── role
│       ├── manifests
│       ├── metadata.json
│       ├── Rakefile
│       ├── README.md
│       ├── spec
│       └── tests
├── environment.conf
├── Gemfile
├── hiera
├── hiera.yaml
├── manifests
│   └── site.pp
├── Puppetfile
└── r10k_installation.pp

To:

.
├── dist
│   ├── profile
│   │   ├── files
│   │   ├── lib
│   │   ├── manifests
│   │   ├── metadata.json
│   │   ├── Rakefile
│   │   ├── templates
│   │   └── tests
│   └── role
│       ├── manifests
│       ├── metadata.json
│       ├── Rakefile
│       ├── README.md
│       └── tests
├── environment.conf
├── Gemfile
├── hiera
├── hiera.yaml
├── manifests
│   └── site.pp
├── Puppetfile
├── r10k_installation.pp
└── spec
    ├── classes
    ├── fixtures
    ├── hieradata
    ├── hiera.yaml
    └── spec_helper.rb
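
With this layout, a single top-level spec/spec_helper.rb replaces the per-module copies. The usual boilerplate is short – a sketch assuming the puppetlabs_spec_helper and rspec-puppet-facts gems are in the Gemfile:

```
# spec/spec_helper.rb – shared by every spec test in the controlrepo
require 'puppetlabs_spec_helper/module_spec_helper'
require 'rspec-puppet-facts'
include RspecPuppetFacts
```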

I haven’t done this yet but did talk to some others who already do this and are benefiting from the simplified test setup. I’m looking forward to trying it out soon.

November Goal: Pay down Puppet Tech Debt Part 1

It is getting close to the time of the year when the pace of feature-driven change slows down – people want stability when they are on vacation and especially when they’re holding the pager and others are on vacation, and Lord help anyone who negatively affects a Black Friday sale. This is a great time to work on your technical debt. First, you need to identify where it lies!

I expect to spend most of this week identifying areas at work where there are pain points specifically related to tech debt and whether it is better to keep paying the interest or if it is time to pay the whole thing down. I have identified a few candidates related to Puppet already, mostly from lessons learned at PuppetConf.

  • Convert tests to use rspec-puppet-facts. A long list of custom facts in each spec test becomes untenable pretty quickly. Preliminary tests show that I need to choose whether tests are based on Windows or Linux, as mixing and matching in the same tests would break most of them, and I’m leaning toward Linux. This does mean that some tests will not use rspec-puppet-facts and will keep their own fact lists.
  • Convert params patterns to Data in Modules.
  • Try out octocatalog-diff – some unexpected string conversions have been painful before.
  • Get a BitBucket-Jenkins-Puppet workflow working and document. This looks promising, does anyone else have workflow guides I can follow?
  • Update my Puppet Workflow documentation. This isn’t paying down any actual tech debt, but I think it goes hand-in-hand with the above item and revisiting it should provide some clarity to what we do and maybe highlight some room for improvement.

I’m sure there will be more to come. I will try and blog about my progress throughout the vDM30in30 challenge.