Puppet 3.6.1 Updates

If you’ve been following along with the Puppet series, your VMs probably started with Puppet 3.4.x or earlier. In the time since, Puppet has released versions up through v3.6.1, which bring a lot of improvements. However, if you simply upgrade your master and nodes, you’ll run into a few warnings about current and future deprecations. Let’s take a look at the issues and how to resolve them. As always, read the release notes so that you understand the changes, and test in a lab to ensure there is no negative impact.

Note: You MUST upgrade your nodes to v3.6.1 as well as the master, or you may receive fatal errors on the nodes. We haven’t gotten there yet, but if you have mcollective installed and configured, it’s a great way to upgrade your nodes at the same time.

Here’s the first item you’ll see:

[rnelson0@puppet ~]$ sudo puppet agent --test --noop
Warning: Setting modulepath is deprecated in puppet.conf. See http://links.puppetlabs.com/env-settings-deprecations
   (at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1067:in `each')
Warning: Setting manifestdir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
   (at /usr/lib/ruby/site_ruby/1.8/puppet/settings.rb:1071:in `each')

You can fix this by implementing environment directories. Here’s the diff I made:

[rnelson0@puppet ~]$ diff puppet.conf.org /etc/puppet/puppet.conf
15,16c15
<     modulepath = /etc/puppet/environments/$environment/modules:/opt/puppet/share/puppet/modules
<     manifestdir = /etc/puppet/environments/$environment/manifests
---
>     environmentpath = $confdir/environments
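
For reference, the resulting [main] stanza looks like this minimal sketch; the commented line is only needed if you do keep global modules outside the environments (e.g. under /opt):

```ini
[main]
    environmentpath = $confdir/environments
    # basemodulepath = /opt/puppet/share/puppet/modules
```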

If you actually do have global modules under /opt, add a basemodulepath key and value. Now when you run another test, you may see some errors as it “fixes” itself. Run it a second time and you’ll see this:

[rnelson0@puppet ~]$ sudo puppet agent --test --noop
...
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/lib/ruby/site_ruby/1.8/puppet/type.rb:816:in `set_default')

This is a warning about a default value that will change in a future release, not a deprecation of the feature itself. You can read about the issue here. As the link says, it’s easy to fix. In your puppet repo, add these lines to the top of manifests/site.pp:

Package {
  allow_virtual => true,
}

If you run puppet again, you’ll notice the warnings are gone!

One last note: if you get spurious warnings, restart the puppet master service. In my lab I didn’t need to do this, but in production I did. I assume it’s because I did something out of order, but I couldn’t identify what that was.

Puppet and Git, 203: r10k Workflow for New Module

Welcome back to our Puppet and Git 200-series classes. With r10k installed and configured, today we can focus on workflows. The first workflow is for a new module, either a brand new module you are creating or simply a “new to you” module, such as one imported from the Forge. In our classroom, we will add a single module from the Forge and update the base module to make use of it. This will give us a good understanding of the r10k workflow.

Workflow To Add A New Module

The first step in our workflow is to decide on a module to add to our setup. If you have a particular module you want to use, feel free to substitute it below. I’ve chosen saz-motd, a very simple module whose effect is visible when installed but which will not have a material impact on your nodes. We can see right now that there is no message of the day, so we’ll know when we’re done:

[root@puppet ~]# cat /etc/motd
[root@puppet ~]#

Note: We’ll add our module to a feature branch below. It’s a simple module, so this is fine. More complex modules, such as those that include additional facts and functions, should always be installed on the master first to ensure the plugins are synchronized, which means adding them to production. This was discussed on IRC, so I don’t have a link to documentation showing how this works; this is the closest I could find. I’ll mention it again when we install such a module, but I wanted to raise it now in case the module you chose provides custom facts or functions.
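
When we reach the Puppetfile step, adding the module boils down to a one-line entry. Here’s a sketch of the relevant lines (the forge URL is the public Forge API endpoint; your Puppetfile will have more entries):

```ruby
# Puppetfile (read by r10k) - sketch only
forge 'https://forgeapi.puppetlabs.com'

mod 'saz/motd'
```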

Continue reading

Social Media Tips

This past week I wrote an opinion piece on the InfoSec community, which included some tips on using social media. I’ve distilled that very long section to a bullet list and added a few items.

  • Investigate your company’s social media policies and make sure you comply with them.
  • Seek out the proper audience.
    • Facebook – Keeping in contact with friends and family, and sharing all of your information with the world
    • Twitter – Work communities
    • Blogs – Great for introducing yourself to the world and sharing what you have learned
    • Google+ – Overlaps with the above, but less popular than the others; its future is in doubt
  • Get control. Understand the security/privacy posture of your chosen platform.
  • Listen first.
  • Share only what you want.
    • Check with your spouse and family before sharing info about them!
  • Find dissenting voices, don’t let it become an echo chamber.
  • Respect people.
  • You’re going to be wrong, accept it gracefully.
  • Make sure your contributions have meaning. Focus on creating novel, useful content.
  • Recognize others and promote their content.
  • Retweets, favorites, likes, +1’s, etc. all mean different things. Use the right one.
  • Make time for real life.
  • Have fun!

Improving the InfoSec Social Media Community

While attending CPX 2014, I had a mini-epiphany. This twitter thread got me thinking, “Why is CPX so much different than VMworld?” There’s an obvious size difference – 1600 attendees vs 28,000 – which leads to fewer sessions and smaller parties, but that’s a given. “Why is the InfoSec community different than the Virtualization community?” This is the real concern: the cultural differences between the two communities that have the most overlap with my job responsibilities and personal interests. One notable difference is that in InfoSec, there aren’t many well-known practitioners of security, though there are heroes and rockstars. It also seems to be a less vocal community, and when it does speak, it’s in generalities and news, such as “5 Common Attack Vectors” or “Who Was Hacked This Weekend.” In Virtualization, there’s a lot of public recognition for people, even in niche topics, and the community gets down and dirty and shares very practical information in addition to higher-level concepts. So, why this startling difference?

Security Practitioners can be insular

Many of you reading this probably first visited this site for virtualization content – which makes sense, as my first posts were on PowerCLI and Auto Deploy. As such, you’re probably familiar with the drill for conferences: get caught up on your timeline by 7am, then prepare for it to be blown up all day long. Check out the feeds for Storage Field Day 5 (#SFD5), the OpenStack Summit (#openstacksummit), and of course, VMworld (#vmworld, #vmworld2013). Dozens, sometimes hundreds, of people tweet about each keynote, allowing those not attending the pleasure of knowing what’s going on in near-real time. You can sometimes even convince an attendee to ask your question of the presenter! This extends past the keynotes, which are sometimes streamed, to the individual sessions, which are frequently not streamed and sometimes never recorded or put online. Even if you attend, it’s still interesting to read because inevitably another attendee caught something you missed or saw it differently, giving you additional insight (who else learned from Twitter that Cisco wasn’t on the NSX announcement slide at VMworld 2013?). These interactions create a lot of content ancillary to, but just as important as, the conference agenda itself.
Continue reading

Puppet and Git, 202: r10k Setup – Conversion, Deployment

Welcome back! In our 201 class, we installed r10k, but we still haven’t used it. There are two tasks we need to complete. First, the existing repo is incompatible with r10k’s dynamic management of modules, so we’ll convert its contents to the proper format. Once that is done, we can deploy dynamic environments using r10k.

Convert existing repo

Clone the existing puppet repo, rnelson0/puppet-tutorial. As mentioned previously, you can do this on the puppet master, as I will, or on another machine. If you’re on the master, clone the repo into a different directory. After cloning it, check out a new branch called production:
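
In shell form, those steps are a short sketch like the following. A throwaway local repo stands in for the GitHub clone so the commands are self-contained; substitute the real clone URL in practice:

```shell
set -e
# Stand-in for github.com/rnelson0/puppet-tutorial (self-contained demo repo)
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" config user.email demo@example.com
git -C "$src" config user.name demo
git -C "$src" commit -q --allow-empty -m "initial commit"

# Clone into a separate working directory, as described above
work=$(mktemp -d)/puppet-tutorial
git clone -q "$src" "$work"
cd "$work"

# Create the branch that will become r10k's production environment
git checkout -q -b production
git branch --show-current
```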

Continue reading

Puppet and Git, 201: r10k Setup – Installation

I know you’re probably anxious to get started with managing your infrastructure, but we’re going to stay distracted by Git for a little longer. In the 100 series, we saw some examples of how to migrate your manifests and modules into Git and how to make changes to your manifests through branches. The setup is a little primitive, but acceptable for a lab – everything is either done by root or involves pushing changes as a user and pulling them as root, and changes are tested in production. I’d like to introduce you to a tool called r10k that will help us create dynamic branches for testing and decouple our workflow from direct access to the puppet master. In this 201 class, we’ll work on the first half by migrating our existing repo structure into r10k.

Review and Setup

If we review the puppet-tutorial repo’s master branch, we have a standard directory layout that you should be somewhat familiar with now:
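
From memory of the earlier classes, that layout is roughly the following sketch (the module list shown is illustrative; yours will differ):

```
puppet-tutorial/
├── manifests/
│   └── site.pp
└── modules/
    ├── base/
    ├── ntp/
    └── ssh/
```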

Continue reading

Check Point Experience 2014 Recap

Last week, I attended Check Point Experience 2014 (CPX2014) in Washington, D.C. Here are some quick highlights from the conference:

  • There were around 1400 attendees, up from 650 a mere two years ago.
  • Security people cannot properly capitalize VMware either.
  • They also use ‘on-premise’ and make people twitch.
  • There is some conflation between orchestration and automation, and even confusion on what constitutes one or the other.
  • Foreign language translations can be fun! This isn’t a slight against the speakers (I certainly cannot speak their language!), I just think it’s healthy to laugh about these things, especially when the correct word is obvious and the meaning stays intact. If we weren’t always so uptight about things…

There were two more significant lessons I learned at CPX 2014, however.

The first is that Check Point has a lot of products that make up what they are calling Software Defined Protection (SDP). It’s a neat idea, though some of the products are not GA and hence not usable at this time, leaving the definition somewhat nebulous as far as real-world examples go. However, it does define enforcement, control, and management layers (planes) and lays out products that work at each layer, plus pending integration with other tools and standards (a VMware-compatible virtual firewall, REST APIs, etc.). Taken together, SDP has the potential to affect design and implementation, with an end result of not just stronger security policies but also a shorter gap between malware creation and prevention.
Continue reading

Puppet and Git, 102: Feature Branches

In Puppet and Git 101, we looked at how to add our existing puppet code to our repo. We’re going to take a quick look at how to create a branch, add some code, commit it, and push it to our repo.

Create a Branch

For lack of something more significant to do right now, we’ll add a notify resource to the node definition for puppet.nelson.va. To do so, we will check out a new branch called, appropriately, notify. You can call your branches whatever you want; I suggest you simply be consistent in your naming scheme. At work, I use a combination of a ticket number and a one- or two-word description of the feature, separated by hyphens. Normally a branch is going to be short-lived and only exist locally (we’re going to make an exception to that for demo purposes), so the name would be moot, but it’s still a good habit to be in.

[root@puppet puppet]# git branch
* master
[root@puppet puppet]# git checkout -b notify
Switched to a new branch 'notify'
[root@puppet puppet]# git branch
  master
* notify
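
The remaining steps – add the code, commit, push – look roughly like this sketch. A temp repo stands in here so the example is self-contained; in your real repo you would edit the existing node definition rather than create site.pp from scratch:

```shell
set -e
# Self-contained stand-in for the real puppet repo
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
git checkout -q -b notify

# Add the notify resource to the node definition (node name from the article)
mkdir -p manifests
cat > manifests/site.pp <<'EOF'
node 'puppet.nelson.va' {
  notify { 'Hello from the notify branch': }
}
EOF

# Commit the change on the feature branch
git add manifests/site.pp
git commit -q -m "Add notify resource to puppet.nelson.va"
git log --oneline -1
# In the real repo, you would then publish it: git push origin notify
```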

Continue reading

Puppet and Git, 101: Git Basics

Now that we’ve set up a puppet master and puppetized template, created a sample manifest, and started creating our own module, it’s time to take a few moments to talk about using Puppet with a version control system (VCS). This article is mainly for those new to VCS in general or new to Git; those very familiar will want to skim or skip this article entirely.

So far, we have only added and removed a few lines in a couple of files, and we’ve treated it as just that. But it’s so much more. Writing code that represents an infrastructure state and using software to implement it is the root of two important IT movements: DevOps and the Software Defined Data Center (SDDC). You write code, and puppet creates the infrastructure according to your instructions. Need something changed? Update your code, and puppet takes care of the rest. What if you mess up? That’s where version control comes into play.

Version control, among other benefits, gives us the option to look at our code at points in time and to track changes over time, usually with some level of audit detail. If I make a change today and everything runs fine for a few days before blowing up, I can use version control to track the changes made to see if someone else made a change in the interval or perhaps go back to the version prior to my change. Without version control, you have no functional ability to audit your changes and revert the state of your code to a particular point in time.
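
A tiny demonstration of that audit-and-revert workflow, using a throwaway repo and a hypothetical config file:

```shell
set -e
# Throwaway repo for the demo
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo

# A known-good change, then a risky one
echo "PermitRootLogin no" > sshd_config
git add sshd_config
git commit -q -m "Safe baseline"
echo "PermitRootLogin yes" > sshd_config
git commit -q -am "Risky change"

# Audit trail: every change, in order
git log --oneline

# Undo the risky change with a new commit; the file returns to the baseline
git revert --no-edit HEAD
cat sshd_config
```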

There are a number of different version control systems that you can use. Subversion has been a popular VCS, though it has some long-standing limitations and has been losing favor for a while. Git is a newer distributed version control system (DVCS) that has gained massive popularity by addressing some of the limitations of non-distributed VCSes and encouraging public development via Github.com and other cloud DVCS providers. We’re going to focus on Git due to its popularity, the plethora of examples of Puppet + Git available on the internet, and the ability to leverage Github.

Continue reading

Manifest and Module Organization, Take One

In the last article, we learned how to import modules from the Puppet Forge. We created a very simple, but disorganized, site manifest. We need to create some organization, which will give us the ability to apply different settings to different nodes. Here’s the manifest we ended up with:

class { '::ntp':
  servers => [ '0.pool.ntp.org', '2.centos.pool.ntp.org', '1.rhel.pool.ntp.org'],
}

user { 'dave':
  ensure     => present,
  uid        => '507',
  gid        => '507',
  shell      => '/bin/bash',
  home       => '/home/dave',
  managehome => true,
}

group { 'dave':
  ensure => 'present',
  gid    => '507',
}

include ::ssh
::ssh::server::configline { 'PermitRootLogin': value => 'yes' }

The manifest includes two modules from the Puppet Forge and two directly managed resources, one user and one group. These resources, however, are going to be applied to every agent that connects. As we grow the manifests, we’re going to encounter resources that are only needed on certain agents – web servers, web apps, etc. Let’s take what we have and organize it better.
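
One possible reorganization is sketched below (the class name and node name are hypothetical examples, not from the article): collect the shared resources in a class that every node includes, and keep node-specific resources in node definitions. Note that node definitions don’t inherit from `node default`, which is why the shared class must be included explicitly in each node block.

```puppet
# Sketch only - 'common' and 'shell.nelson.va' are hypothetical names.
# Shared resources, applied everywhere via an include:
class common {
  class { '::ntp':
    servers => [ '0.pool.ntp.org', '2.centos.pool.ntp.org', '1.rhel.pool.ntp.org'],
  }
  include ::ssh
  ::ssh::server::configline { 'PermitRootLogin': value => 'yes' }
}

node default {
  include common
}

# Node-specific resources stay with the nodes that need them:
node 'shell.nelson.va' {
  include common
  user { 'dave':
    ensure     => present,
    uid        => '507',
    gid        => '507',
    shell      => '/bin/bash',
    home       => '/home/dave',
    managehome => true,
  }
  group { 'dave':
    ensure => 'present',
    gid    => '507',
  }
}
```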

Continue reading