Installing Jenkins and RVM

Update: Contrary to the module readme as of 12/1/2016, this WILL result in a VM running Jenkins 2, rather than version 1.

It’s time to get started installing Jenkins for use with Puppet. I’ll be installing this in my home lab on a VM called jenkins (so inventive, I know) as an all-in-one server for simplicity. I don’t know Jenkins very well, so I don’t know what I should be doing for masters and build servers, but my home lab needs are small anyway. I gave it 2 cores and 1 GB of RAM, which I think is on the low end for Jenkins. My initial exploration led me to believe I’ll need to install Jenkins as well as RVM, since my base OS image is CentOS 7 (Ruby 2.0.0) and the current Puppet 4 AIO build is against Ruby 2.1. I’m using Puppet and the rtyler/jenkins module to set up Jenkins, since I don’t know anything about its innards. Down below, you’ll see that I installed RVM by hand, after which it occurred to me that there’s a perfectly cromulent maestrodev/rvm module that I’ve used in the past – something I may change out later, but I already wrote my manifest so I’m going to share it anyway!
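
The full manifest comes later in the post; before we get there, here is a minimal sketch of what using rtyler/jenkins looks like (my own illustration, not the manifest I’m sharing, and the plugin is only an example – check the module’s README for current parameters):

# Sketch only: install Jenkins with the rtyler/jenkins module's defaults
# and add a plugin. The plugin name here is just an example.
include ::jenkins

jenkins::plugin { 'git': }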

I used Jenkins jobs to experiment a lot with this setup, such as enabling RVM on a job and seeing what errors showed up in the log, but I’m going to hold off on showing that magic until the next article. I will still explain where portions of the manifest came from, just without the actual errors.

Before we go any further, make sure you have the jenkins module and its dependencies added to your Puppetfile, .fixtures.yml, and wherever else you might need to track it.
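
As a rough sketch (the dependency list is abbreviated and from memory – confirm it against the jenkins module’s metadata.json), the Puppetfile entries look something like this:

# Puppetfile sketch: rtyler/jenkins plus a few of its dependencies.
# Check the module's metadata.json for the authoritative list and versions.
mod 'rtyler/jenkins'
mod 'puppetlabs/stdlib'
mod 'puppetlabs/apt'
mod 'puppetlabs/java'

The same modules get matching entries in .fixtures.yml, as forge_modules or repositories depending on how you track them.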

Continue reading

Getting started with Jenkins and Puppet

If you’ve followed my blog, you’ve seen that I enjoy using Travis CI to run my Puppet rspec tests on the controlrepo or against component modules. When you create a PR, Travis starts a build and adds links to the builds in the PR notes. When it’s complete, the PR is updated to let you know whether the build was successful or not. You can also configure your repo to prevent merges unless the build succeeds. The best part is that it “just works” – you never have to worry about upgrading Travis or patching it or really anything other than making sure you enabled it. It’s a pretty awesome system, especially since it is absolutely free for open source projects. I do love Travis!

But Travis isn’t always available and isn’t always the answer. Travis only runs against PRs or (typically) merges into the default branch. It won’t run on a schedule; scheduled builds can catch breakage from any dynamic dependencies you might have even when your own code hasn’t changed (whether you should have any is a different topic!). It only runs on the limited set of OSes that Travis supports. You have to use GitHub and public repos to hit the free tier – if your controlrepo is private, there’s a separate Travis site to use with no free plan, though a limited number of trial builds gets you started. After that, it can be pretty pricey, starting at one concurrent build for $69/month. That’s fine for a business, but most of us can’t afford over $800 a year for the home network. It’s also, by itself, somewhat limited in how you can integrate it. It can be part of a pipeline, but it just receives a change and sends a status back to GitHub; it won’t notify another system itself. You have to build that yourself.

There are a ton of other Continuous Integration and Continuous Delivery systems out there, though. They can be cheaper, have better integrations, run on your own hardware/cloud, and don’t have to require GitHub. Among the myriad options available, I chose to look at Jenkins, with the rtyler/jenkins Puppet module. I can run this on my own VM in my home lab (hardware/cloud), and it can integrate with GitHub private repos ($0) or BitBucket (doesn’t require GitHub). CloudBees also discussed a new Puppet Enterprise Pipeline plugin at PuppetConf 2016, which is really appealing to me (better integrations). I can also have it run builds on a schedule, so if dependencies change out from underneath me, I’m alerted when that happens, not when I run my first test a month after the underlying change.

I’m very new to Jenkins and I’m going to do a lot of things wrong while testing it, but I’m going to start blogging about my experiences as I go, right or wrong, and then try to do a wrap-up article once I learn more and have a fully working solution. I’ve been inspired to do this by Julia Evans and her wonderful technical articles that are as much about the exploration of technology as the results. That goes against my normal style of figuring it all out first, but I so love to see the exploration of others and hope you will, too! As we go on this journey, please feel free to educate or correct me in the comments and on Twitter, as always.

Puppet Tech Debt: Moving Rspec Tests

Now that we have shaved the yak of generate-puppetfile, it’s time to move my rspec tests to the top of the controlrepo, as discussed on Thursday. To do so, we need to move not just the spec tests but also the Rakefile, .rspec, and .rubocop.yml files, create a facsimile metadata.json, and, of course, generate a new .fixtures.yml.

Moving the files is pretty simple. In my controlrepo, I only have a profile class with tests, so I’m going to take those as-is. We do want to make sure that we start with a clean environment; we don’t want to copy a bunch of fixtures or other temp files. We could use git clean -ffdx to wipe everything, but if you don’t want to redownload your gems, run it with -n first (a dry run) and manually clean up just the relevant directories. Then we can do the file shuffle:

$ git clean -ffdxn
Would remove .bundle/
Would remove Gemfile.lock
Would remove coverage/
Would remove dist/profile/spec/fixtures/manifests/
Would remove dist/profile/spec/fixtures/modules/
Would remove vendor/
$ rm -fR dist/profile/spec/fixtures/manifests/ dist/profile/spec/fixtures/modules/
$ git mv dist/profile/spec ./
$ git mv dist/profile/{Rakefile,.rspec,.rubocop.yml} ./
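
The facsimile metadata.json isn’t shown in this excerpt; a minimal sketch of one (every value below is a placeholder) only needs the fields the spec tooling reads:

{
  "name": "yourname-controlrepo",
  "version": "0.1.0",
  "author": "yourname",
  "summary": "Facsimile metadata for controlrepo-level spec tests",
  "license": "Apache-2.0",
  "source": "https://example.com/yourname/controlrepo",
  "dependencies": []
}

If you use rspec-puppet-facts, it’s also worth adding an operatingsystem_support section so on_supported_os knows which factsets to generate.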

Continue reading

Release 0.10.0 of generate-puppetfile

As I discussed on Thursday, I am looking to re-architect the layout of my controlrepo’s rspec tests. Of course, there were yaks to shave first. To that end, I’ve released version 0.10.0 of generate-puppetfile (rubygem, GitHub project) with the added ability to run generate-puppetfile --create-fixtures at the top of your controlrepo and generate a .fixtures.yml that sets up symlinks for all of the modules contained in your controlrepo. This functionality is based on the presence of an environment.conf file in the current directory containing a modulepath stanza. All the paths in the modulepath are explored, and a symlink-type fixture is created for each module found. The previous behavior when run inside a module directory is preserved, of course.
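
For a controlrepo with a dist/ directory holding profile and role modules, the generated file would look something like this sketch (paths are illustrative – the real output depends on your modulepath):

# Sketch of a generated .fixtures.yml; the actual entries depend on the
# paths found in your environment.conf modulepath.
fixtures:
  symlinks:
    profile: "#{source_dir}/dist/profile"
    role: "#{source_dir}/dist/role"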

Because the project has not reached version 1.0.0 yet, I have renamed the option --fixtures to --create-fixtures. This is in preparation for a feature request asking for an --only-fixtures option.

Let’s look at how to use this new feature. Before we start, we must be at the top level of the controlrepo. Next, we need environment.conf to have a modulepath stanza. In my controlrepo, the file looks like this:

Continue reading

Rspec trick: Getting at output you can’t see

I was having a problem yesterday with a specific rspec test that was failing for my Puppet tool generate-puppetfile, and I couldn’t understand why. I was expecting an exit code of 0 but was receiving 1, so obviously I had an issue, but I couldn’t see the accompanying error message. When I attempted to run the command myself, it succeeded, so I was pretty sure the issue was with either argument handling by my program or, more likely, the way I wrote my rspec test (spoiler: it was in my rspec!). Here’s what the rspec test looked like to start with:

  context 'when creating fixtures' do
    let :args do
        'rnelson0/certs'
        '--create-fixtures'
    end

    its(:exitstatus) { is_expected.to eq(0) }
    it 'should create .fixtures.yml' do
      File.exists? './.fixtures.yml'
    end
  end
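
One aside about that let block, independent of the actual fix discussed in the full post: a Ruby block returns only its last expression, so :args here evaluates to just '--create-fixtures'. Compare:

# The original block evaluates two string literals and returns only the last:
let(:args) do
  'rnelson0/certs'
  '--create-fixtures'   # args == '--create-fixtures'
end

# Returning an array passes both values through:
let(:args) { ['rnelson0/certs', '--create-fixtures'] }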

Continue reading

Burnout and Vacation Time

As the holidays and the accompanying vacation time inch closer, it’s a good reminder of the service vacation provides in avoiding burnout. Burnout is a serious problem in our industry and something we should all strive to avoid in ourselves and others. Of course, we rarely think we are burnt out until it’s too late. There are some common signs to look out for – are you constantly shuffling from project to project or thought to thought and unable to catch up? When you do sit down with a task, is it easy to become distracted, putting you even further behind? Are you feeling despondent or even depressed when work is on your mind, maybe even thinking about rage-quitting someday soon? You might be getting burned out. This tends to be pretty common for many of us when November rolls around, since we probably haven’t taken a good vacation since the last holiday season.

That makes the holidays a great time to take a vacation. You need it, and you’re probably going to lose some or all of that time if you don’t use it now. If you haven’t scheduled it already, you should talk to your manager and get it scheduled now. Take time for yourself and back away from the edge of the burnout cliff. And the sooner you talk to your boss, the better the chance you can actually take the time you want, since all of your coworkers probably want to take vacation at the same time and someone has to draw the short straw!

I suggest that you also work on ensuring that in 2017, you do a better job of planning your vacation throughout the year. One or two days here and there usually isn’t enough; you need a long enough period of time that you can drop the weight from your shoulders and stand up tall for a while before you go back to work. Take a week off in April or June or September instead of hoarding vacation days. If you can, travel somewhere, but even a stay-cation helps a lot if it’s a whole week.

I also recommend touching a computer as little as possible. We’re nerds; we’re going to touch a computer at some point – and technically you probably can’t avoid it if you want to drive somewhere, watch some streaming movies, or order some food – but we benefit when we aren’t surrounding ourselves with the things that are causing us stress in the first place. Instead of using the vacation to catch up on all the new literature about Product X, pick up a good novel, watch a TV series you skipped when it was new, or maybe pick up an entirely new hobby. Build a chicken coop, or watch someone else build one! Anything that’s NOT work.

Above all, take care of yourself. Enjoy the whole year, not just the holiday season!

Puppet Tech Debt Day 2: Adjusting Rspec Tests

Yesterday was our 14th anniversary, so I didn’t have time to write a blog post, but I did look into a tech debt issue: rspec tests. In addition to adding rspec-puppet-facts, I found a program called onceover that offers two concepts I want to look into.

First, there is a rake task to generate fixtures. I have a similar feature in generate-puppetfile, but it’s not as polished as I’d like – it often requires some manual touch-up afterward. Onceover’s rake task does not have that issue. I hope to be able to grab the rake task without being forced to use the rest of the tool or affecting the existing test setup. Maybe I’ll be interested in the rest of it someday, but not right now, and it’s great when you’re not forced to make a forklift upgrade like that.

The second item is centralizing rspec-puppet tests in the controlrepo rather than inside each module itself. That will change the relevant portions of my controlrepo layout from:

.
├── dist
│   ├── profile
│   │   ├── files
│   │   ├── lib
│   │   ├── manifests
│   │   ├── metadata.json
│   │   ├── Rakefile
│   │   ├── spec
│   │   │   ├── classes
│   │   │   ├── fixtures
│   │   │   │   ├── hieradata
│   │   │   │   └── hiera.yaml
│   │   │   └── spec_helper.rb
│   │   ├── templates
│   │   └── tests
│   └── role
│       ├── manifests
│       ├── metadata.json
│       ├── Rakefile
│       ├── README.md
│       ├── spec
│       └── tests
├── environment.conf
├── Gemfile
├── hiera
├── hiera.yaml
├── manifests
│   └── site.pp
├── Puppetfile
└── r10k_installation.pp

To:

.
├── dist
│   ├── profile
│   │   ├── files
│   │   ├── lib
│   │   ├── manifests
│   │   ├── metadata.json
│   │   ├── Rakefile
│   │   ├── templates
│   │   └── tests
│   └── role
│       ├── manifests
│       ├── metadata.json
│       ├── Rakefile
│       ├── README.md
│       └── tests
├── environment.conf
├── Gemfile
├── hiera
├── hiera.yaml
├── manifests
│   └── site.pp
├── Puppetfile
├── r10k_installation.pp
└── spec
    ├── classes
    ├── fixtures
    ├── hieradata
    ├── hiera.yaml
    └── spec_helper.rb
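
With spec/ at the top level, the controlrepo’s spec/spec_helper.rb can stay the stock puppetlabs_spec_helper boilerplate, optionally wiring in rspec-puppet-facts – a minimal sketch rather than a copy of any real file:

# spec/spec_helper.rb sketch for controlrepo-level tests.
require 'puppetlabs_spec_helper/module_spec_helper'

# Optional: make on_supported_os available to the specs.
require 'rspec-puppet-facts'
include RspecPuppetFacts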

I haven’t done this yet but did talk to some others who already do this and are benefiting from the simplified test setup. I’m looking forward to trying it out soon.

November Goal: Pay down Puppet Tech Debt Part 1

It is getting close to the time of the year when the pace of feature-driven change slows down – people want stability when they are on vacation and especially when they’re holding the pager and others are on vacation, and Lord help anyone who negatively affects a Black Friday sale. This is a great time to work on your technical debt. First, you need to identify where it lies!

I expect to spend most of this week identifying areas at work where there are pain points specifically related to tech debt, and deciding whether it is better to keep paying the interest or whether it is time to pay the whole thing down. I have identified a few candidates related to Puppet already, mostly from lessons learned at PuppetConf.

  • Convert tests to use rspec-puppet-facts (there’s a short sketch after this list). A long list of custom facts in each spec test becomes untenable pretty quickly. Preliminary tests show that I need to choose whether tests are based on Windows or Linux, as mixing and matching in the same tests would break most of them, and I’m leaning toward Linux. This does mean that some tests will not use rspec-puppet-facts and will keep their own fact lists.
  • Convert params patterns to Data in Modules.
  • Try out octocatalog-diff – some unexpected string conversions have been painful before.
  • Get a BitBucket-Jenkins-Puppet workflow working and document it. This looks promising; does anyone else have workflow guides I can follow?
  • Update my Puppet Workflow documentation. This isn’t paying down any actual tech debt, but I think it goes hand-in-hand with the above item, and revisiting it should provide some clarity about what we do and maybe highlight some room for improvement.
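
For the rspec-puppet-facts item above, the basic pattern looks like this – a sketch against a hypothetical profile::base class, not one of my actual tests:

require 'spec_helper'

describe 'profile::base' do
  # on_supported_os reads operatingsystem_support from metadata.json and
  # yields a full factset per supported OS, replacing hand-maintained fact lists.
  on_supported_os.each do |os, os_facts|
    context "on #{os}" do
      let(:facts) { os_facts }

      it { is_expected.to compile.with_all_deps }
    end
  end
end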

I’m sure there will be more to come. I will try to blog about my progress throughout the vDM30in30 challenge.

#vDM30in30 in May!

Every year in November, NaNoWriMo occurs. For those of us who blog, a more recent challenge called vDM30in30 takes place at the same time: write 30 blog posts in 30 days. However, November can be a difficult time for writing, as the holidays and family can encroach on it. This has kept a number of people from participating in past challenges or led to people having to drop out before the month is out.

In response, this year we’d like to try two challenge events. In addition to the annual event in November, we’re launching a May event! It’s the same challenge – 30 blog posts in 30 days – but outside of the holiday season! Yes, we know, May has 31 days. It’s up to you if you want to write from May 1-30 or May 2-31, or maybe even write 31 posts in 31 days!

This challenge is entirely personal. The 30 blog posts can be about any subject you like, of any length. You can do one a day or clump them together. If you announce your posts on Twitter or Facebook, just add the hashtag #vDM30in30. The only goal is to push yourself to write frequently. Read more in the Q&A link below.

If you would like to participate, please contact Angelo Luciani or me on Twitter or use the comments below to let us know about your blog and social media contacts. We’ll put out a list of public participants and add you to a once-a-day summary post of all the participants.

Reference:

vDM30in30 2015 Retrospective

Today ends my vDM30in30 challenge. This makes the 30th post and goes out just a bit before the end of the day on November 30th, 2015. I hit the mark within the timeframe, yay! That’s an improvement over last year’s 25 posts. Writing 30 posts in 30 days was difficult for me, but rewarding. Let’s take a look at why I participated, what I did, and whether it helped me.

I participated in vDM30in30 this year, as in last year, to work on my writing skills. Specifically, I wanted to work on speed. I can write a really long blog post, no problem – some of my Puppet posts were over 5,000 words before I split them up – but it takes me FOREVER! I wanted to work on writing posts of the same length in a shorter duration, but without lowering the quality. This was about more than just the requirement to get 30 posts done in 30 days; it’s something that I think can benefit me elsewhere. Sometimes I spend 10 minutes writing a non-technical email that’s just a single paragraph, and I don’t think that’s really worthy of one sixth of an hour. I was sure I would gain in other ways, but everything was secondary to speed.

Well, not everything. Right before the challenge started, I joined the other participants in trying to encourage newcomers to take part. We succeeded, as we had a number of new participants in this year’s challenge! I’ve also spoken to a few people who missed out on the challenge but don’t want to wait until next November to participate, so they may be looking at running the same challenge in January! If anyone else is interested in joining them, let me know in the comments. Thanks to everyone who participated, new and existing participants alike; it was great to see this grow year over year!

Now, back to my challenge efforts. To work on speed, I used a number of tactics:

  • Varied topics. Much of my blog content is what I would consider deep technical content. I wasn’t certain that increasing speed there was feasible in the given timeframe, but I was certain that I could improve speed on that content if I improved speed on other content. I wrote about vSphere, Puppet, Travis CI, and Ruby bundler (all in a not-quite-as-deep manner), and I also branched out into an ode to snow, thoughts on Footloose and 2112, troubleshooting, note taking techniques, our pug Loki, and even got meta about post quality and what to do when the well runs dry.
  • Make November a month of projects. I participated in the challenge while continuing work on other projects (upgrading modules for Puppet 4 support, learning Travis CI, Commitmas), making each of these projects fodder for vDM30in30. This ties into the next item, as the project milestones often fell under the same time limits.
  • Set a (soft) timer. I often did this by deciding that I had X minutes available and a topic I thought could be done in X minutes, then writing and posting it immediately. I gave myself enough time to do proofreading, but I tried to keep to whatever time limit I set. Sometimes I’d have to stop writing because I had to leave the house, and hitting the Post button was difficult but necessary. Of course, I still wanted to keep the quality up, so I reserved the right to not hit post or to ditch the post entirely. I only made use of this once, and I just needed 5 minutes for proofing when I got back to the computer.
  • Use brainstorming sessions. My normal technique is to think of something I want to write about and then do it. Instead, I would spend 10-30 minutes thinking of what I wanted to write about and making a list in Evernote. By making the list ahead of time, I had a number of solid ideas to toss around in my head for a few days. When I sat down to write, I often had a rough outline or a list of points to emphasize already. This became especially important at the end of the journey when I started to run out of ideas. If I was going to rack my brain, I wanted to do it for 5 subjects, not just one!
  • Press the post button! Of course, none of the above techniques mattered if I didn’t post the article. I didn’t schedule a single post; every article was made live the moment it was finished. Getting over the fear of hitting post quickly became a secondary goal.

So, did this help me? Did I achieve what I set out to do? I hit the mark of 30 posts in 30 days and I certainly feel like I improved. I know that I’m proud of myself for following up on my pledge! But did I improve my speed while maintaining or improving the quality of my content? I need to hear from you! I appreciate any and all feedback here in the comments or on Twitter. Thank you!

You can see all of the vDM30in30 posts here, including those from 2014.