Hyper-V for the vSphere Admin

Welcome to my inaugural post on rnelson0.com; I’m happy to be here and hope that I can provide some useful insight. Recently, I received a voucher for the Microsoft Hyper-V certification exam and decided to take that opportunity to give Hyper-V a proper test. With that in mind, Rob convinced me to write some articles on interesting or difficult concepts I come across, in the hope that I can help people down the road avoid the same technical landmines. I’m a vSphere admin by experience, so I’ll be comparing Hyper-V components to their vSphere equivalents to root this in familiar terms. Today, I’ll describe what Hyper-V is, the lab I’m running it in, and some prerequisites for installing it. Follow-up articles will provide greater detail (and pictures!) and cover installation and use.

Hyper-V is Microsoft’s virtualization product in their server line. In my opinion, starting with the Server 2012 release, Hyper-V is becoming competitive with vSphere for the small- to medium-scale environments in which I’ve worked. Veeam has a great article explaining the concept of Hyper-V. Veeam is a terrific backup product for virtual environments, and it supports both Hyper-V and vSphere. If you are an MCSA/MCSE or VCP, you should look into their NFR (not for resale) licenses to run in your home lab.

My lab consists of a single server, a Dell T320, with 48GB RAM, a dual port Intel Gigabit NIC, and 6x1TB WD Black drives in RAID 10. As I grow my lab and add enough hardware to set up clustering, I will revisit some topics. For now, the setup will not cover high availability or redundancy configuration.

MS server licenses are provided by my soon-to-expire TechNet subscription (RIP, TechNet). If this option isn’t available to you, Microsoft conveniently allows for a 180-day trial of their products now. I recommend you visit the TechNet Evaluation Center and download the Server 2012 R2 trial and a Windows 7 or 8 trial so you have some operating systems to play with.

In my environment, I used the Hyper-V feature on my Windows 8.1 workstation to deploy an OS to run the Microsoft MDT package. This is the Microsoft Deployment Toolkit, and if you have experience with SCCM, it should be familiar to you. Download it here and install it on a supported server OS. In the event something goes wrong, you want to be able to start over cleanly, and MDT provides this capability by allowing you to deploy an OS to bare metal via PXE. Until you get SCVMM running and templates built, this is very handy. Note: SCVMM is the Microsoft equivalent of vCenter; it stands for System Center Virtual Machine Manager and is part of System Center 2012, which you can also download from the TechNet downloads link I posted above. It requires some setup and I set it up purely for my own interest. I will write up how I used and configured it in another article, but for now we just need to get 2012 R2 installed on whatever hardware you want to use.

There are a couple of minor caveats. Hyper-V will not install inside an existing Hyper-V environment, but it will install inside a VMware VM (ESXi, Workstation, etc.). This allows you to use your existing vSphere environment so that you don’t lose anything that may currently be crucial. If you do not have a dedicated host available for Hyper-V, you can follow this article on nesting Hyper-V in ESXi. Additionally, you need to make sure you have virtualization support enabled in your BIOS. If the host was previously a vSphere server, it should already be set; otherwise, check the BIOS settings before continuing. Lastly, you want to have two drives available (one for the hypervisor OS, the other for VM data), either through your RAID array or partitioning, as I have not been able to install Hyper-V to a thumb drive like I have with vSphere. This functionality may also vary by your server and firmware capabilities.

There are two methods of installing Hyper-V. The first is the entirely free Microsoft Hyper-V Server, which is essentially just the hypervisor with a command prompt. The second is to install the full version of Server 2012 R2 and add the Hyper-V role. Although the full install has a larger attack surface, which increases our security concerns, I recommend it until we are more familiar with installing and using Hyper-V. It allows you to remote desktop into the host and create and modify the environment directly from there, which is useful when we don’t have a management VM up and running yet.

Thank you for checking out this article. I hope that I can provide some useful information for all the vSphere admins curious about Hyper-V, or for anyone who is interested in learning more about this technology. If you have any ideas for topics to cover or comments on the series, please use the comments or tweet them to @hawkbox.

Introducing Jason Crichton, aka @hawkbox

With summer upon us, I’ve taken a break from the blog. You’ll still see a few of my small posts pop out every so often, but no lengthy technical posts from me for a while. That doesn’t mean the blog is taking the summer off, though!

I’m proud to introduce a colleague and fellow Arsian, Jason Crichton, as a contributing author on my blog! Jason is going to write some articles over the summer about Hyper-V. For those of us (myself included!) who are only familiar with vSphere, Jason will help us compare the analogous features from each product, with articles most Wednesdays this summer. Here’s a little background for those of you who have not met Jason before:

My name is Jason Crichton, and I’m an IT professional like Rob, just crossing the 10-year mark of system administration this summer. I started in the trenches of the help desk and, through a bit of luck and a lot of hard work, now work as a Senior Systems Analyst for a relatively small multinational corporation. I tend to end up heavily involved in the virtualization, security, and operations aspects of the business. Recently I have moved into PowerShell tool development to improve the lives of our help desk staff. I find the willingness of people like Rob to put time and energy into sites like this incredibly valuable, so when he asked me to contribute, I was thrilled at the opportunity to give back myself.
When I’m not working with tech, I tend to be motorbiking with my wife Christina on whatever random trip we’ve been able to organize.
My professional experience can be viewed on LinkedIn. Additionally, you can follow me on Twitter at @hawkbox.
Please give Jason a warm welcome to the blogosphere! If you have any requests for Hyper-V topics, please let Jason or me know what you’d like to see covered. Thanks!

Saving the moon, #VirtualDesignMaster style

This week was another nail-biter in the Virtual Design Master competition. Challenge 2 required us to save the moon using someone else’s design, plus a few constraints: it must fit in 21U, we have to use the same vendors as the provided design (though different product lines are allowed), and, the big one, the moon base only has IPv6 networking. I understand IPv6 but certainly haven’t designed an IPv6-only network, so this was pretty scary and the research was very time-consuming.

There were a lot of great designs presented by the VDM competitors. Three of us had to work off of Daemon Behr’s design and six of us had to work off of mine from the previous challenge. It was fun to see how other people managed the same base project and morphed it into something that had their own fingerprints on it. Watch the results show and check out the designs (here’s mine). During the design and the judging, I learned a few things, in no particular order:

  • Vendors are inconsistent at stating the IPv6 support stance for their products. You might have to track down an SE on twitter or in person to get the scoop, or you might get vague information, conflicting information, or no information at all. I made the assumption that no mention == no support, which is tolerable for design, but a day or two in the lab could prove otherwise.
  • When you do find support statements, do not be surprised when there is no support for IPv6-only networking. vSphere itself supports IPv6 but doesn’t support IPv6 iSCSI with two of the three initiator types; VSAN and Log Insight have zero IPv6 support; vCO supports it. Only one initiator type out of three, and it needs to be 100% for IPv6-only to work.
  • Everything you choose is a risk. A or B? Both risks. The Giant will kill you anyway.
  • Well, the judges are not going to kill you. But regardless of which design decision you took, be prepared to defend it. Neither A nor B is wrong, so don’t think you’re “safe” because you chose the “right” one.
  • If budget isn’t an issue, do not undersize your solution. I used 12U out of 21U (I did have one good reason – with no redundant rack, one unit overheating could damage another touching it – but that’s a stretch) and said “…if the design scales up.” This is a last-ditch effort at saving humanity. There is no next-ditch effort. Get 21U of equipment, or have a better reason than I did for not using it all.
  • MB is not the same as GB. Whoops.
  • The competitors have a great sense of co-opetition. While we all did our independent research, we still shared ideas and encouragement along the way.
  • I’d better learn about NetApp snapshots, or Josh Odgers may ask me a third time. That would not be good!

Again, I was pleasantly surprised to find that I survived to the next round. On top of that, the judges said that my design was impressive! I’m still stunned, but also very proud. I must again thank those who reviewed my proposal and offered advice. I definitely wouldn’t have made it without your help.

Challenge 3 terrifies me. Between now and Tuesday at midnight, I have to learn OpenStack, build a design based on it, then actually lab it up and provide video of the proof of concept! I’ve always wanted to learn about OpenStack… Wish me and the other competitors luck!

Hiera, R10K, and the end of manifests as we know them

Last week, we started using Hiera. We’re going to do a lot more today. First, we’ll add what we have to version control, then we’ll integrate it with r10k, and we’ll wrap up by migrating more content out of manifests and into Hiera. Along the way, we’ll explain how Hiera works. I also encourage you to review the Puppet Labs docs and the source code as needed.

Version Control

Before we do anything else, we need to take the existing hiera data and put it in version control. Just like our modules and manifests, hiera data will be edited to match our feature implementations and to define the infrastructure in use; it is just as important as our module code. I’ve created a GitHub repository called hiera-tutorial and will reference that, but any git-based repository will work for our purposes. Create a directory for the local repo, add our existing content, and push it to the origin:

[rnelson0@puppet ~]$ cd git
[rnelson0@puppet git]$ mkdir hiera-tutorial
[rnelson0@puppet git]$ cd hiera-tutorial
[rnelson0@puppet hiera-tutorial]$ cp -r /etc/puppet/data/* ./
[rnelson0@puppet hiera-tutorial]$ ls
global.yaml  puppet_role
[rnelson0@puppet hiera-tutorial]$ git init
Initialized empty Git repository in /home/rnelson0/git/hiera-tutorial/.git/
[rnelson0@puppet hiera-tutorial]$ git add .
[rnelson0@puppet hiera-tutorial]$ git commit -m 'Initial commit to hiera repo'
[master (root-commit) 4293c1c] Initial commit to hiera repo
 3 files changed, 8 insertions(+), 0 deletions(-)
 create mode 100644 global.yaml
 create mode 100644 puppet_role/puppet.yaml
[rnelson0@puppet hiera-tutorial]$ git remote add origin git@github.com:rnelson0/hiera-tutorial.git
[rnelson0@puppet hiera-tutorial]$ git push origin master
Counting objects: 6, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (6/6), 451 bytes, done.
Total 6 (delta 0), reused 0 (delta 0)
To git@github.com:rnelson0/hiera-tutorial.git
* [new branch]      master -> master

In a bit, we’ll decide whether we want to use branches in our hiera data. If you decide to do so, create the production branch and push the contents to it, then change the primary branch at GitHub and remove the defunct master branch. Even if you stick with a single branch, you can use this process to change the branch name, for example to data.

[rnelson0@puppet hiera-tutorial]$ git checkout -b production
Switched to a new branch 'production'
[rnelson0@puppet hiera-tutorial]$ git branch -d master
Deleted branch master (was 0e43028).
[rnelson0@puppet hiera-tutorial]$ git push origin production
Total 0 (delta 0), reused 0 (delta 0)
To git@github.com:rnelson0/hiera-tutorial.git
 * [new branch]      production -> production
<Change the primary branch at GitHub now>
[rnelson0@puppet hiera-tutorial]$ git push origin :master
To git@github.com:rnelson0/hiera-tutorial.git
 - [deleted]         master

With our existing content preserved, we can move on to the next step.

Integration with R10K

Now we need to edit the r10k installation manifest. We have already configured r10k to populate environments with modules based on the branches of our puppet and module repos. We will add a source for hiera, which will allow us to make changes to hiera via git and have r10k push those changes around instead of us manually copying files. We will also be able to specify whether we have a monolithic hiera configuration or environment-specific configurations.

Because the initial configuration of r10k was performed as root, move the r10k_installation.pp manifest from root’s home directory to ours, change its ownership, and add the hiera location.

[rnelson0@puppet hiera-tutorial]$ cd
[rnelson0@puppet ~]$ sudo mv /root/r10k_installation.pp ./
[rnelson0@puppet ~]$ sudo chown rnelson0.rnelson0 r10k_installation.pp
[rnelson0@puppet ~]$ vi r10k_installation.pp

Here are the contents of the file. A source for hiera has been added – don’t forget the comma after the existing puppet subsection – and the modulepath/manifestdir settings that were deprecated in Puppet 3.6.1 have been removed:

class { 'r10k':
  version => '1.2.0',
  sources => {
    'puppet' => {
      'remote'  => 'https://github.com/rnelson0/puppet-tutorial.git',
      'basedir' => "${::settings::confdir}/environments",
      'prefix'  => false,
    },
    'hiera' => {
      'remote'  => 'https://github.com/rnelson0/hiera-tutorial.git',
      'basedir' => "${::settings::confdir}/data",
      'prefix'  => false,
    }
  },
  manage_modulepath => false,
}

Apply the manifest. You’ll see a small change to the r10k configuration file, /etc/r10k.yaml:

[rnelson0@puppet ~]$ sudo puppet apply r10k_installation.pp
Notice: Compiled catalog for puppet.nelson.va in environment production in 0.86 seconds
Notice: /Stage[main]/Main/Ini_setting[manifestdir]/ensure: created
Notice: /Stage[main]/R10k::Config/File[r10k.yaml]/content: content changed '{md5}3831ea2606d05e88804d647d56d2e12b' to '{md5}62730aa21170be02c455406043ef268e'
Notice: /Stage[main]/R10k::Config/Ini_setting[R10k Modulepath]/ensure: created
Notice: Finished catalog run in 0.57 seconds
[rnelson0@puppet ~]$ cat /etc/r10k.yaml
:cachedir: /var/cache/r10k
:sources:
  puppet:
    prefix: false
    basedir: /etc/puppet/environments
    remote: "https://github.com/rnelson0/puppet-tutorial.git"
  hiera:
    prefix: false
    basedir: /etc/puppet/data
    remote: "https://github.com/rnelson0/hiera-tutorial.git"

:purgedirs:
  - /etc/puppet/environments

We'll also make a change to the datadir value in /etc/hiera.yaml (or /etc/puppet/hiera.yaml; the two should be symlinked). You have two choices here. The first example below lets us create a hiera branch matching each of our feature branches, giving us a hiera configuration per environment. The disadvantage is that you MUST create a branch for every environment, otherwise catalog compilation will fail for those dynamic environments. The other examples, commented out, point at a fixed directory instead: the second points directly at the production branch and relies on your :hierarchy: setting in /etc/hiera.yaml including "%{environment}" levels, and the last would be for a single branch named data (there's more than one way to do this). The advantage is that you don't need to branch your hiera repo. Choose one of these settings and make it so. I've gone with the former in my public repo, but the last option at work.

  # One branch per environment
  :datadir: '/etc/puppet/data/%{environment}'

  # Make sure /etc/hiera.yaml's :hierarchy: includes "%{environment}" statements
  #:datadir: '/etc/puppet/data/production'
  # If you set the branch to 'data' you can tell r10k to use ${::settings::confdir} and this datadir
  #:datadir: '/etc/puppet/data'
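
For reference, here is roughly what the whole /etc/hiera.yaml looks like in my lab with the first option in place. Treat it as a sketch: if you customized your backends or hierarchy, keep your own values and only change the datadir.

[rnelson0@puppet ~]$ cat /etc/hiera.yaml
---
:backends:
  - yaml
:yaml:
  :datadir: '/etc/puppet/data/%{environment}'
:hierarchy:
  - defaults
  - puppet_role/%{puppet_role}
  - "%{clientcert}"
  - "%{environment}"
  - global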

Restart the puppetmaster service. Anytime you make changes to /etc/hiera.yaml, you will need to restart puppetmaster for the changes to take effect. You do NOT need to restart it when the contents of datadir are changed, however. Finally, re-run r10k and you’ll see the differences in the contents of /etc/puppet/data:

[rnelson0@puppet hiera-tutorial]$ ls /etc/puppet/data/puppet_role
server.yaml
[rnelson0@puppet hiera-tutorial]$ sudo r10k deploy environment -p
[rnelson0@puppet hiera-tutorial]$ ls /etc/puppet/data/puppet_role
ls: cannot access /etc/puppet/data/puppet_role: No such file or directory
[rnelson0@puppet hiera-tutorial]$ ls /etc/puppet/data/production/
.git/        global.yaml  puppet_role/

If everything went well, run puppet on the master and on server01; everything should work like it did before. Verify that before continuing. If it does not, some things to check: the comma between the existing puppet source and the new hiera source in the r10k manifest, that you did not add modulepath/manifestdir back to /etc/puppet/puppet.conf after migrating to environmentpath, and that you restarted the puppetmaster service.
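
If the catalog won't compile, the hiera command line tool is also a quick way to see what the master resolves. Point it at the same config file and pass the variables your hierarchy interpolates; the values below assume the server role we populated last week, so substitute your own role and environment.

[rnelson0@puppet ~]$ hiera -c /etc/hiera.yaml classes environment=production puppet_role=server
role::webserver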

Converting manifests

The next thing we need to do is convert our existing manifests/site.pp entries into hiera definitions. Last week, we converted the server role into a hiera definition. Let’s look at the definition for the puppet node:

node 'puppet.nelson.va' {
  include ::base
  notify { "Generated from our notify branch": }

  # PuppetDB
  include ::puppetdb
  include ::puppetdb::master::config

  # Hiera
  package { ['hiera', 'hiera-puppet']:
    ensure => present,
  }

  class { '::mcollective':
    client             => true,
    middleware         => true,
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }

  user { 'root':
    ensure => present,
  } ->
  mcollective::user { 'root':
    homedir     => '/root',
    certificate => 'puppet:///modules/site_mcollective/client_certs/root.pem',
    private_key => 'puppet:///modules/site_mcollective/private_keys/root.pem',
  }

  mcollective::plugin { 'puppet':
    package => true,
  }
}

We need to create a few profiles and roles, starting with the profiles. There will be five of them, including a new profile that ensures the puppet-server package is installed and adds some firewall rules for the master (we should have been tracking this earlier, but I forgot!). I’ve only included the full comment header in the first file, but you should be sure to include a header in each file:

[rnelson0@puppet profile]$ cat manifests/puppetdb.pp
# == Class: profile::puppetdb
#
# PuppetDB profile
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include profile::puppetdb
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class profile::puppetdb {
  include ::puppetdb
  include ::puppetdb::master::config
}
[rnelson0@puppet profile]$ cat manifests/hiera.pp
# Comments go here
class profile::hiera {
  package { ['hiera', 'hiera-puppet']:
    ensure => present,
  }
}
[rnelson0@puppet profile]$ cat manifests/mcollective/all.pp
# Comments go here
class profile::mcollective::all {
  class { '::mcollective':
    client             => true,
    middleware         => true,
    middleware_hosts   => [ 'puppet.nelson.va' ],
    middleware_ssl     => true,
    securityprovider   => 'ssl',
    ssl_client_certs   => 'puppet:///modules/site_mcollective/client_certs',
    ssl_ca_cert        => 'puppet:///modules/site_mcollective/certs/ca.pem',
    ssl_server_public  => 'puppet:///modules/site_mcollective/certs/puppet.nelson.va.pem',
    ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/puppet.nelson.va.pem',
  }

  mcollective::plugin { 'puppet':
    package => true,
  }
}
[rnelson0@puppet profile]$ cat manifests/mcollective/users.pp
# Comments go here
class profile::mcollective::users {
  user { 'root':
    ensure => present,
  } ->
  mcollective::user { 'root':
    homedir     => '/root',
    certificate => 'puppet:///modules/site_mcollective/client_certs/root.pem',
    private_key => 'puppet:///modules/site_mcollective/private_keys/root.pem',
  }
}
[rnelson0@puppet profile]$ cat manifests/puppet_master.pp
# Comments go here
class profile::puppet_master {
  package {'puppet-server':
    ensure => present,
  }

  firewall { '100 allow agent checkins':
    dport  => 8140,
    proto  => tcp,
    action => accept,
  }

  firewall { '110 sinatra web hook':
    dport  => 80,
    proto  => tcp,
    action => accept,
  }
}

The corresponding role is very simple:

class role::puppet {
  include profile::base  # All roles should have the base profile
  include profile::puppet_master
  include profile::puppetdb
  include profile::hiera
  include profile::mcollective::users
  include profile::mcollective::all
}

We still have a definition for puppet in the site manifest. You can reduce the site.pp file to just a few lines now:

Package {
  allow_virtual => true,
}

node default {
  hiera_include('classes')
}

Commit/push changes and re-deploy with r10k. Whoops, we forgot to update hiera. That’s okay, now you know what the error message looks like when you skip this step:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find data item classes in any Hiera data file and no default supplied at /etc/puppet/environments/production/manifests/site.pp:6 on node puppet.nelson.va

Hopefully, you remember how to fix this from last week. If not, navigate to your hiera repo’s puppet_role/ directory, create a yaml file for the puppet node’s puppet_role (or whatever puppet_role you decided upon for it), and add the role::puppet class to it:

[rnelson0@puppet puppet_role]$ cat > puppet.yaml
---
classes:
  role::puppet

Commit/push/r10k and run puppet again. You should see the firewall rules from profile::puppet_master and maybe an mcollective update; if you already had the rule in place from our earlier work then you may not see anything other than a successful checkin.
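
If you want to confirm the new rules outside of puppet's own report, take a quick look at the INPUT chain on the master. You should see ACCEPT entries for ports 8140 and 80; the firewall module stores the resource titles as iptables comments, which makes them easy to spot (output omitted here).

[rnelson0@puppet ~]$ sudo iptables -nL INPUT
...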

Defines – A Special Case

If you review the site.pp file, you’ll notice that it uses a function called hiera_include. This includes the defined classes, and each of those classes looks up its parameters via hiera. However, there are some other hiera functions you may need. A common example is the case of defines with node-specific information. Above, in profile::mcollective::users, we use the built-in user type and the mcollective::user define. As the values for these are the same on every node, we simply provided the key/value pairs we needed right in the module. When we need the values to be specific to a node (or environment, or network – anything more specific than “all”), we can’t populate the define’s attributes in the module.

Since we don’t have anything like that in our example setup, we’ll have to make something up. Let’s create a definition for a DHCP server; that’s probably helpful for a lab network. Here’s an example of a way to do that with a profile and the site manifest, which we will then convert to hiera, roles, and profiles:

<snippet of manifests/site.pp>
node /dhcp/ {
  include ::profile::dhcp_server

  ::dhcp::server::subnet { '10.0.0.0':
    broadcast   => '10.0.0.255',
    netmask     => '255.255.255.0',
    routers     => '10.0.0.1',
    range_begin => '10.0.0.100',
    range_end   => '10.0.0.150',
    dns_servers => ['10.0.0.1', '10.0.0.2'],
    domain_name => 'nelson.va',
    other_opts  => ['option ntp-servers 10.0.0.1'],
  }
}


[rnelson0@puppet profile]$ cat > manifests/dhcp_server.pp
class profile::dhcp_server {
  include profile::base
  include dhcp::server

  firewall { '100 Allow DHCP requests':
    proto  => 'udp',
    sport  => [67, 68],
    dport  => [67, 68],
    state  => 'NEW',
    action => 'accept',
  }
}

The define above is ::dhcp::server::subnet. You cannot hiera_include a define, which means that if you simply bake it into a role like this, every DHCP server will serve the same scope. If you’ve never had the pleasure of having two servers serving the same scope, I can guarantee you that you don’t want to! We want to move that data out of the manifest and into hiera, but how? We’ll use a combination of hiera_hash, which builds a hash from hiera data, and create_resources, which can instantiate resources, including defines, using a provided hash to populate the needed values. The Puppet Labs documentation shows a simple example of how this works with a flat manifest and a manifest with an array, but how would we go about doing this with hiera?

First, let’s take a look at where to put the data in hiera. If we look at the :hierarchy: portion of hiera.yaml, we’ve got a few options:

:hierarchy:
  - defaults
  - puppet_role/%{puppet_role}
  - "%{clientcert}"
  - "%{environment}"
  - global

The puppet_role will potentially apply to multiple nodes, so that’s out. Next is the clientcert value. Each node (and mcollective user!) has its own certificate, which can be seen by running puppet cert list --all on the master (optionally, use awk to only grab the important part):

[rnelson0@puppet puppet_role]$ sudo puppet cert list --all | awk '{print $2}'
"agent1.nelson.va"
"puppet.nelson.va"
"root"
"server01.nelson.va"

This should match the FQDN in most environments, but could also be the short hostname (e.g. agent1). This seems a likely choice for node-specific elements. You could also add to the hierarchy; popular options include a level that combines environment and clientcert/fqdn, as in “%{environment}/%{clientcert}”. I’ll assume the simple “%{clientcert}” level is being used with the value dhcp.nelson.va. In this yaml file, we need to create a hash called dhcp_subnet containing all the values provided to the define, with a top-level key that becomes the title of the declared resource. In this case, that’s the subnet, 10.0.0.0. All the other attributes sit underneath that key. Let’s define the node yaml, plus the puppet_role yaml for dhcp:

[rnelson0@puppet hiera-tutorial]$ cat > dhcp.nelson.va.yaml
---
dhcp_subnet:
  '10.0.0.0':
    broadcast   : '10.0.0.255'
    netmask     : '255.255.255.0'
    routers     : '10.0.0.1'
    range_begin : '10.0.0.100'
    range_end   : '10.0.0.150'
    dns_servers :
      - '10.0.0.1'
      - '10.0.0.2'
    domain_name : 'nelson.va'
    other_opts  :
      - 'option ntp-servers 10.0.0.1'

[rnelson0@puppet hiera-tutorial]$ cat > puppet_role/dhcp.yaml
---
classes:
  role::dhcp

To glue everything together, we now need to create the dhcp role and import this information. We’ll use hiera_hash to import the above hash, then create_resources to instantiate a ::dhcp::server::subnet with the hash’s contents:

[rnelson0@puppet role]$ cat manifests/dhcp.pp
class role::dhcp {
  include profile::base  # All roles should have the base profile
  include profile::dhcp_server

  create_resources(::dhcp::server::subnet, hiera_hash('dhcp_subnet'))
}

Commit/push/r10k the changes. Since we haven’t created a DHCP node yet, you’ll have to deploy a VM from a template, as we did last week, give it a hostname of ‘dhcp’ (numbered instances are far less likely with DHCP servers, but ‘dhcp01’ works just as well – as long as the hiera yaml filename matches!), and check in, sign the cert, and check in again with puppet. It should receive all the configuration changes required to be a DHCP server for the network 10.0.0.0/24, assigning leases between 10.0.0.100 and 10.0.0.150.
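
If the new node checks in but doesn't get a subnet, the hiera command line tool can show you what the master resolves for that certname. The variables below mirror our hierarchy; swap in your own environment and certificate name.

[rnelson0@puppet hiera-tutorial]$ hiera -c /etc/hiera.yaml --hash dhcp_subnet environment=production clientcert=dhcp.nelson.va
<the full 10.0.0.0 hash should come back; nil means the yaml filename or hierarchy level doesn't match>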

Some other common uses of this pattern are creating users and managing apache vhost definitions. In both cases, you want the data to live in hiera, not in the module. This successfully abstracts the data away from the service definition and allows you to reuse your code very efficiently.
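
As a sketch of the vhost case (the hostnames and docroots below are made up, and the parameter names come from the puppetlabs-apache vhost define), the node- or role-level yaml might carry something like this:

apache_vhosts:
  'blog.nelson.va':
    port: 80
    docroot: '/var/www/blog'
  'wiki.nelson.va':
    port: 80
    docroot: '/var/www/wiki'

A profile would then call create_resources('apache::vhost', hiera_hash('apache_vhosts')) exactly as role::dhcp did for subnets.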

Summary

Building on last week’s roles, profiles, and hiera introduction, we’ve examined how to use version control and r10k to manage hiera, how to convert simple node definitions into roles managed via hiera, and how to provide node-specific configuration on top of the role’s classes. If you have any other node definitions in regular *.pp manifests, take the time to convert them all to hiera. From here on out, we’ll assume that’s where your data lies.

At this point, I’ll be taking a bit of a break from the blog for the summer. I’m happy to say that my wife has taken a new job and we will have closed on a new house around the time this article is published. We’ll be busy moving and settling in for a few weeks, but I’ll be bringing more puppet-ey goodness in the fall. This seems like a great time to roll up all the changes, so I’ve added a tag, v0.5, to all the repos associated with this series, which is a snapshot in time of the moment this article was finished (6/23/2014).

In the fall, we’ll start working on scaling up our setup. In the meantime, enjoy your ability to deploy new VMs and quickly apply policy to them. Have a great summer, everyone!

I Survived #VirtualDesignMaster Challenge 1!

This week has been pretty exciting. It’s getting closer to the move and things are starting to seem real, which means the move is eating up more of my time. Somehow, in the midst of all that, I managed to complete my design proposal for Virtual Design Master’s first challenge a whopping 30 minutes before the due date. On Thursday night, all the contestants defended their designs. To my surprise, I survived! I am thankful for some critical reviews from Jason Shiplett and some friends on IRC. We lost a few competitors, as is the nature of the challenge, but everyone’s designs are amazing. Check them out at http://www.virtualdesignmaster.com/.

This week’s challenge is about constraints. We have some physical constraints – we have to use the same vendors, it needs to fit in 21U, and, oh, by the way, it’s on the moon – plus a unique requirement I haven’t seen anywhere else: IPv6 only. That’s going to be tough. But they weren’t done with the constraints yet. We have to use someone else’s design from challenge 1! Everyone on Team Beta has to work off the design by Daemon Behr (@VMUG_Vancouver). I’m very honored that my design (@rnelson0) was chosen as the design that Team Alpha has to work from.

If you are available next Thursday at 9PM Eastern, tune in at http://www.virtualdesignmaster.com/live/ to see the results of challenge 2!

Intro to Roles and Profiles with Puppet and Hiera

If you’ve been following along with the Puppet series, our next task is to start using roles and profiles. If you’re just visiting, feel free to review the series to get caught up. Today, we will discuss the roles and profiles pattern, start implementing it along with a custom fact, and deploy a webserver on a node managed by puppet. Finally, we’ll move some of our configuration from the site manifest into Hiera.

NOTE: A small note on security. I’ve been running through this series as ‘root’ and earlier said, “Well, just be more secure in production.” That’s lame. This blog covers security as well as virtualization and automation, so I’m going to live up to that. For now, I’ve added a local user with useradd, updated sudoers, and cloned all the repos so that I can show best practices, which will include doing most work as my user and using sudo/su to run a few commands as root. Later, we’ll manage local users via puppet.

[root@puppet git]# useradd rnelson0 -c "Rob Nelson"
[root@puppet git]# passwd rnelson0
Changing password for user rnelson0.
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.

[root@puppet ~]# cat > /etc/sudoers.d/puppetadmins
rnelson0        ALL=(ALL)       ALL
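
<Optionally, sanity-check the new drop-in's syntax before logging out of root; visudo -c validates a sudoers file>
[root@puppet ~]# visudo -cf /etc/sudoers.d/puppetadmins
/etc/sudoers.d/puppetadmins: parsed OK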

<Login as rnelson0>
[rnelson0@puppet ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rnelson0/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/rnelson0/.ssh/id_rsa.
Your public key has been saved in /home/rnelson0/.ssh/id_rsa.pub.
...
<Import key into github>
...
[rnelson0@puppet ~]$ cd git
[rnelson0@puppet git]$ git clone git@github.com:rnelson0/puppet-tutorial
Initialized empty Git repository in /home/rnelson0/git/puppet-tutorial/.git/
remote: Counting objects: 848, done.
remote: Compressing objects: 100% (579/579), done.
remote: Total 848 (delta 190), reused 841 (delta 186)
Receiving objects: 100% (848/848), 395.47 KiB, done.
Resolving deltas: 100% (190/190), done.
[rnelson0@puppet git]$ git clone git@github.com:rnelson0/rnelson0-base
Initialized empty Git repository in /home/rnelson0/git/rnelson0-base/.git/
remote: Reusing existing pack: 35, done.
remote: Total 35 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (35/35), 10.11 KiB, done.
Resolving deltas: 100% (10/10), done.
[rnelson0@puppet git]$ git clone git@github.com:rnelson0/site_mcollective.git
Initialized empty Git repository in /home/rnelson0/git/site_mcollective/.git/
remote: Reusing existing pack: 31, done.
remote: Total 31 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (31/31), 12.36 KiB, done.
Resolving deltas: 100% (5/5), done.

Provision Server01

At the beginning of this series, I described a set of 10 nodes we would use for learning, simply called server01 through server10. If you did not already provision those nodes, you need to deploy at least one now. You can use a different name and IP, but I will be referencing server01 and 10.0.0.51 throughout this article. Since it has been some time, be sure to either update your kickstart template or run 'yum update -y' and reboot after deployment to avoid version mismatches and install the latest security patches. Also add a local user and update sudoers, as we did above, so that you can manage the node securely.

Roles and Profiles

We’re going to look at implementing the roles and profiles pattern. This pattern is very popular, though it is perhaps poorly named. In this pattern, you define a number of profiles, each of which specifies the resources for one piece of functionality, and then define roles that are collections of individual profiles. Ideally, a role includes many profiles, and each node definition references a single role. If a node requires more than one role, you should define a new role that includes the union of the profiles from the two roles. Some people believe that each node should instead have a single profile that denotes the roles it should have, and hence that the pattern is named backwards. However, I will use the pattern as described, mostly because the majority of documentation on the internet assumes you follow the one-role/many-profiles pattern, and implementing it the other way around leads to some confusion.
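
To make that union-set idea concrete, here is a quick sketch. The profile names are hypothetical (we'll build real ones in a moment); the point is that a node needing both web and database duties gets one combined role, not two role assignments:

class role::webdb {
  include profile::base   # all roles include the base profile
  include profile::apache # everything role::webserver would carry...
  include profile::mysql  # ...plus everything role::database would carry (hypothetical profile)
}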

Before we begin, create a role module and a profile module. I’ve created two repos on GitHub (rnelson0-role and rnelson0-profile), generated two corresponding modules on the server with puppet module generate, and pushed the new files up to GitHub. Here are the commands, with the output truncated:

[rnelson0@puppet rnelson0-custom_facts]$ cd ..
[rnelson0@puppet git]$ puppet module generate --modulepath `pwd` rnelson0-role
[rnelson0@puppet git]$ puppet module generate --modulepath `pwd` rnelson0-profile
[rnelson0@puppet git]$ cd rnelson0-role/
[rnelson0@puppet rnelson0-role]$ git init
[rnelson0@puppet rnelson0-role]$ git add .
[rnelson0@puppet rnelson0-role]$ git commit -m "first commit"
[rnelson0@puppet rnelson0-role]$ git remote add origin git@github.com:rnelson0/rnelson0-role.git
[rnelson0@puppet rnelson0-profile]$ git add .
[rnelson0@puppet rnelson0-profile]$ git commit -m "first commit"
[rnelson0@puppet rnelson0-profile]$ git remote add origin git@github.com:rnelson0/rnelson0-profile.git
[rnelson0@puppet rnelson0-profile]$ git push -u origin master

Let’s build two example profiles: a base profile and a very simple web server profile. Our web server will require nothing more than apache. We can build the base profile by looking at our existing init.pp, grabbing the ssh and ntp settings, and putting them in manifests/base.pp of the profile module.

[rnelson0@puppet rnelson0-profile]$ cat > manifests/base.pp
# == Class: profile::base
#
# Base profile
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include profile::base
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class profile::base {
  include ::motd

  # SSH server and client
  class { '::ssh::server':
    options => {
      'PermitRootLogin'          => 'yes',
      'Protocol'                 => '2',
      'SyslogFacility'           => 'AUTHPRIV',
      'PasswordAuthentication'   => 'yes',
      'GSSAPIAuthentication'     => 'yes',
      'GSSAPICleanupCredentials' => 'yes',
      'AcceptEnv'                => 'LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT LC_IDENTIFICATION LC_ALL LANGUAGE XMODIFIERS',
      'Subsystem'                => '      sftp    /usr/libexec/openssh/sftp-server',
      'Banner'                   => '/etc/issue.net',
    },
  }
  class { '::ssh::client':
    options => {
      'Host *' => {
        'SendEnv'                   => 'LANG LC_*',
        'HashKnownHosts'            => 'yes',
        'GSSAPIAuthentication'      => 'yes',
        'GSSAPIDelegateCredentials' => 'no',
      },
    },
  }

  class { '::ntp':
    servers => [ '0.pool.ntp.org', '2.centos.pool.ntp.org', '1.rhel.pool.ntp.org'],
  }
}
CTRL-D

Let’s create a simple profile for the webserver that applies puppetlabs’ apache class. Note that we have to use the fully qualified name, ::apache; if we leave off the leading colons, Puppet’s relative namespacing will resolve apache to profile::apache (the class we are defining) and create a circular reference.

[rnelson0@puppet rnelson0-profile]$ cat > manifests/apache.pp
# == Class: profile::apache
#
# Apache profile
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include profile::apache
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class profile::apache {
  class {'::apache': }
}
CTRL-D

Finally, let’s create a role for the webserver called role::webserver:

[rnelson0@puppet rnelson0-profile]$ cd ../rnelson0-role/
[rnelson0@puppet rnelson0-role]$ cat > manifests/webserver.pp
# == Class: role::webserver
#
# Webserver role
#
# === Parameters
#
# None
#
# === Variables
#
# None
#
# === Examples
#
#  include role::webserver
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#
class role::webserver {
  include profile::apache
  include profile::base  # All roles should have the base profile
}

The roles and profiles will come in handy once we define a fact that we can use to match nodes to a role.

Custom Facts

What is a fact, in Puppet’s parlance? A fact is a piece of information about a node that facter gathers on that node and submits to the master (or uses locally when running puppet apply). Facts appear as top-scope variables that can be accessed in a manifest as $factname. Some examples are osfamily (RedHat, Debian, etc.), timezone, is_virtual, fqdn, and network information in the format ipaddress_<interface>. You can see facts on a system with puppet by running facter (built-in facts only) or facter -p (includes puppet-provided facts). You must be root to see all facts; regular users only see some of them. Add the name of a fact if you just want to see that result.

[rnelson0@puppet ~]$ facter | wc
     74     244    3061
[rnelson0@puppet ~]$ sudo facter | wc
     85     300    3416
[rnelson0@puppet ~]$ facter timezone
GMT

You can use these facts in a manifest. A common use is to determine an action to take based on the OS family – RedHat uses .rpm, Debian uses .deb, Solaris uses .pkg, and so on. When writing modules for public consumption – and I encourage you to design with this in mind even if the module is private – it is good practice to provide multi-platform support, and this is a great way to do so. You can use any of the facts this way, for instance to use the IP address of eth0 as a default value, to check the free RAM before taking an action, or to detect whether the node is running on a virtual platform.
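
For instance, a module might pick a package name based on the osfamily fact. This is a contrived sketch rather than anything we'll use in the series, but it shows the pattern:

case $::osfamily {
  'RedHat': { $apache_package = 'httpd' }
  'Debian': { $apache_package = 'apache2' }
  default:  { fail("Unsupported osfamily ${::osfamily}") }
}

package { $apache_package:
  ensure => present,
}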

What if you want the master to know something about nodes that isn’t provided by facter? That’s where custom facts come in. We can create a fact of our own choosing by writing some Ruby code and adding it to the master; on the next checkin, agents will receive the fact and start providing it in reports. I’m going to walk us through the process, and you can find more details here.

NOTE: If you started with Puppet about the same time as I started this series, you may have v1.7.x of facter. The current version for CentOS as of this writing is 2.0.1-1. Be sure to update with “yum update -y” or your distribution’s equivalent to ensure you match the latest and greatest before continuing. If you must stay at a previous version, you may run into some issues that you’ll have to debug on your own.

Requirements

We have a few requirements to use custom facts. The first is to ensure that pluginsync is enabled on the master(s). Make sure this configuration option is set in /etc/puppet/puppet.conf, and if not, set it and restart the puppetmaster service:

[main]
pluginsync = true

We also have to comply with the module structure <modulepath>/<module>/lib/facter/<customfact>.rb. Let’s create a new module and repo called custom_facts and add a fact in a file called roles.rb. If you have upgraded to puppet 3.5.x, you’ll notice that puppet module generate creates metadata for you during generation now!

[rnelson0@puppet git]$ puppet module generate --modulepath `pwd` rnelson0-custom_facts
We need to create a metadata.json file for this module.  Please answer the
following questions; if the question is not applicable to this module, feel free
to leave it blank.

Puppet uses Semantic Versioning (semver.org) to version modules.
What version is this module?  [0.1.0]
-->

Who wrote this module?  [rnelson0]
-->

What license does this module code fall under?  [Apache 2.0]
-->

How would you describe this module in a single sentence?
--> Custom facts for roles and profiles

Where is this module's source code repository?
--> https://github.com/rnelson0/rnelson0-custom_facts

Where can others go to learn more about this module?  [https://github.com/rnelson0/rnelson0-custom_facts]
-->

Where can others go to file issues about this module?  [https://github.com/rnelson0/rnelson0-custom_facts/issues]
-->

----------------------------------------
{
  "name": "rnelson0-custom_facts",
  "version": "0.1.0",
  "author": "rnelson0",
  "summary": "Custom facts for roles and profiles",
  "license": "Apache 2.0",
  "source": "https://github.com/rnelson0/rnelson0-custom_facts",
  "project_page": "https://github.com/rnelson0/rnelson0-custom_facts",
  "issues_url": "https://github.com/rnelson0/rnelson0-custom_facts/issues",
  "dependencies": [
    {
      "name": "puppetlabs-stdlib",
      "version_range": ">= 1.0.0"
    }
  ]
}
----------------------------------------

About to generate this metadata; continue? [n/Y]
--> y

Notice: Generating module at /home/rnelson0/git/rnelson0-custom_facts...
Notice: Populating ERB templates...
Finished; module generated in rnelson0-custom_facts.
rnelson0-custom_facts/tests
rnelson0-custom_facts/tests/init.pp
rnelson0-custom_facts/Rakefile
rnelson0-custom_facts/metadata.json
rnelson0-custom_facts/spec
rnelson0-custom_facts/spec/spec_helper.rb
rnelson0-custom_facts/spec/classes
rnelson0-custom_facts/spec/classes/init_spec.rb
rnelson0-custom_facts/README.md
rnelson0-custom_facts/manifests
rnelson0-custom_facts/manifests/init.pp
[rnelson0@puppet git]$ cd rnelson0-custom_facts/
[rnelson0@puppet rnelson0-custom_facts]$ mkdir -p lib/facter
[rnelson0@puppet rnelson0-custom_facts]$ touch lib/facter/roles.rb
[rnelson0@puppet rnelson0-custom_facts]$ git init
Initialized empty Git repository in /home/rnelson0/git/rnelson0-custom_facts/.git/
[rnelson0@puppet rnelson0-custom_facts]$ git add .
[rnelson0@puppet rnelson0-custom_facts]$ git commit -m 'First commit of custom_facts module'
[master (root-commit) a6239d7] First commit of custom_facts module
 7 files changed, 191 insertions(+), 0 deletions(-)
 create mode 100644 README.md
 create mode 100644 Rakefile
 create mode 100644 lib/facter/roles.rb
 create mode 100644 manifests/init.pp
 create mode 100644 metadata.json
 create mode 100644 spec/classes/init_spec.rb
 create mode 100644 spec/spec_helper.rb
 create mode 100644 tests/init.pp
[rnelson0@puppet rnelson0-custom_facts]$ git remote add origin git@github.com:rnelson0/rnelson0-custom_facts.git
[rnelson0@puppet rnelson0-custom_facts]$ git push -u origin master
Counting objects: 15, done.
Compressing objects: 100% (10/10), done.
Writing objects: 100% (15/15), 3.77 KiB, done.
Total 15 (delta 0), reused 0 (delta 0)
To git@github.com:rnelson0/rnelson0-custom_facts.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.

Before you commit your changes, let’s update the Puppetfile to reference our three new modules (I’m going to start dropping output that isn’t vital from here on out):

[rnelson0@puppet rnelson0-profile]$ cd ../puppet-tutorial/
[rnelson0@puppet puppet-tutorial]$ vi Puppetfile
...
mod "custom_facts",
  :git => "git://github.com/rnelson0/rnelson0-custom_facts"

mod "role",
  :git => "git://github.com/rnelson0/rnelson0-role"

mod "profile",
  :git => "git://github.com/rnelson0/rnelson0-profile"
[rnelson0@puppet puppet-tutorial]$ git commit -a -m 'Add custom_facts, role, and profile modules'
[rnelson0@puppet puppet-tutorial]$ git push origin

Commit and push the changes up for all three repos. Don’t forget to set up your webhooks in every repo, especially the post-receive hook on the Github side!

Assigning Roles

Now that we have a role, how do we assign it? We can easily modify the site manifest and create a node definition for server01 with the correct role. That’s fine with one server, but what if we have multiple servers in that role? Imagine for the moment that all ten VMs, server01 through server10, will be web servers. If we strip off the numbers from the hostname, we are left with server. We can define a role that all server## VMs use, creating a farm of that application type. You’ve probably seen this before with www1, www2, etc., we’re just using a different string.

Let’s flesh out our custom fact to create a role fact based on the hostname. In our example, the pattern is very simple, /([a-z]+)[0-9]+/, and we can discard the numbers. If there are no trailing numbers, we accept the hostname as it is. In the worst-case scenario, we use ‘default’. There are some complex examples out there, for instance using the entirety of an FQDN to codify a role, environment, and instance number in the short name with the location as the DNS suffix. Instead, we’ll use the hostname fact, a shorter regex, and only create a single fact. Here’s the ruby code for lib/facter/roles.rb, which creates a fact called puppet_role:

# ([a-z]+)[0-9]+, i.e. www01 or logger22 have a puppet_role of www or logger
if Facter.value(:hostname) =~ /^([a-z]+)[0-9]+$/
  Facter.add('puppet_role') do
    setcode do
      $1
    end
  end

# ([a-z]+), i.e. www or logger have a puppet_role of www or logger
elsif Facter.value(:hostname) =~ /^([a-z]+)$/
  Facter.add('puppet_role') do
    setcode do
      $1
    end
  end

# Set to hostname if no patterns match
else
  Facter.add('puppet_role') do
    setcode do
      'default'
    end
  end
end
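
Before pushing this through the whole pipeline, you can exercise the fact straight from your working copy: facter loads custom facts from any directory named in the FACTERLIB environment variable. The path below assumes you saved the code above as lib/facter/roles.rb in your checkout, and the output reflects my master’s hostname of puppet.

[rnelson0@puppet rnelson0-custom_facts]$ FACTERLIB=$(pwd)/lib/facter facter puppet_role
puppet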

We have one other change to make. Our site manifest only has a node definition for the master node. Let’s create an empty definition for nodes that don’t have a specified block. Add this to the bottom of your site.pp:

node default {
}

Now, commit this change and redeploy your environments. Earlier, we enabled pluginsync. The master has to synchronize as well before any clients can synchronize with it, so a simple puppet agent --test should allow the master to get the fact. You can test it afterward with facter:

[rnelson0@puppet rnelson0-custom_facts]$ sudo puppet agent --test
...
Info: Loading facts in /etc/puppet/environments/production/modules/custom_facts/lib/facter/roles.rb
...
[rnelson0@puppet rnelson0-custom_facts]$ sudo facter -p puppet_role
puppet

The next item to work on is the node server01. When you run the agent you should see the fact downloaded and the puppet_role fact populated:

[rnelson0@server01 ~]$ sudo puppet agent --test
...
Info: Loading facts in /var/lib/puppet/lib/facter/roles.rb
...
[rnelson0@server01 ~]$ sudo facter -p puppet_role
server

Putting it all together

Sweet. We’ve defined profiles, a role that uses those profiles, and a fact that can generate a puppet role for similarly named servers. How do we use these things we have created? Let’s go back to our site manifest and look at our node definitions:

[rnelson0@puppet puppet-tutorial]$ grep node manifests/site.pp
node 'puppet.nelson.va' {
node default {

We can create a node definition now for our ‘server‘ nodes and include the webserver role. It’s three simple lines, which is just the way we like it:

node /^server\d+/ {
  include role::webserver
}

Deploy that on the master. With that simple statement, an agent noop from server01 will show you that puppet is now ready to install the components of the webserver role, from the base (ssh/ntp/motd) and apache (apache) profiles.

[rnelson0@server01 ~]$ sudo puppet agent --test --noop
...
Notice: /Stage[main]/Motd/File[/etc/motd]/content:
...
Notice: Class[Ntp::Service]: Would have triggered 'refresh' from 1 events
...
Notice: /Stage[main]/Apache::Service/Service[httpd]: Would have triggered 'refresh' from 49 events
...

If everything looks good, run it again without the noop and, afterward, you should be able to visit http://server01 and see an empty directory listing. If you have iptables enabled, you may need to stop it temporarily, as we haven’t opened port 80 yet.
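
Assuming the CentOS 6 template from earlier in the series, that's a one-liner on the node. Treat it as a temporary lab measure; a better long-term fix is to manage the rule with the puppetlabs firewall module.

[rnelson0@server01 ~]$ sudo service iptables stop
...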

There is one last little bit to do. We went to all that work to create the fact puppet_role but did not use it. Let’s not let that effort go to waste! You can use this fact in many ways. In our example, we’ll update the Hiera hierarchy to load role-specific information. First, edit the hierarchy in /etc/hiera.yaml to look something like this:

:hierarchy:
  - defaults
  - puppet_role/%{puppet_role}
  - "%{clientcert}"
  - "%{environment}"
  - global

Hiera can examine puppet facts. In this case, it will look in the datadir (/etc/puppet/data in our setup) for a file called puppet_role/%{puppet_role}.yaml and use any data available in it. We’ll use hiera to define our classes and remove the specific node definitions from the site manifest. First, here’s the yaml:

[rnelson0@puppet rnelson0-profile]$ cat /etc/puppet/data/global.yaml
---
puppetmaster: 'puppet.nelson.va'
classes:
  profile::base
[rnelson0@puppet rnelson0-profile]$ cat /etc/puppet/data/puppet_role/server.yaml
---
classes:
  role::webserver

Second, update your site manifest by removing the server## definition and updating the default definition:

[rnelson0@puppet puppet-tutorial]$ git diff
diff --git a/manifests/site.pp b/manifests/site.pp
index 16d22b9..3757419 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -37,9 +37,6 @@ node 'puppet.nelson.va' {
   }
 }

-node /^server\d+/ {
-  include role::webserver
-}
-
 node default {
+  hiera_include('classes')
 }

The hiera_include function calls include and passes it the values found in the hiera lookup for classes. Nodes without a corresponding puppet_role file or specific node definition will receive profile::base. Server01’s puppet_role has a corresponding hiera file, so it will receive role::webserver. Push the changes upstream, uninstall apache from server01, and re-run the agent:

[rnelson0@server01 ~]$ sudo yum remove httpd
...
[rnelson0@server01 ~]$ sudo puppet agent --test
...
Notice: /Stage[main]/Apache/Package[httpd]/ensure: created
...

With the definition now created for the puppet_role of server, you can deploy the remaining server02 through server10 and all of the nodes will receive the role::webserver class. Just right-click on your CentOS template, name your VM and set the hostname to server02, and give it an IP of 10.0.0.52. After the first boot, run puppet agent --test and watch the node receive the specified role.

The other node in our system is the puppet master. You can create a role::puppetmaster, based on the existing node definition and our profile::base class, and assign that to the puppet_role of puppet. I’ll leave that process as an exercise for the user.

Summary

Today, we created a custom fact and implemented the roles and profiles pattern through two modules. These three components are used together with YAML data in Hiera to allow us to deploy a single role to multiple servers of a like type. We explored simplifying the site manifest to rely on Hiera. In future sessions, we’ll expand on how to dynamically populate our hierarchy and eliminate all reliance on the site manifest.

I’d also like to thank Craig Dunn and Gary Larizza for their foundation work, via their blogs at http://www.craigdunn.org/ and http://garylarizza.com/. I’m very happy to be standing on their shoulders and I hope I’ve been able to provide some value on top of that. If you haven’t already, go read their sites, they’ve got a lot more to say about roles, profiles, and Puppet.

#vExpert, #VirtualDesignMaster, and other Stuff

My summer has been exciting. On Wednesday, I received notification that I was accepted as a vExpert for 2014! That’s pretty awesome, both as confirmation that hard work has paid off and as encouragement to keep it up in the future. On Thursday night, the Virtual Design Master competition kicked off. This will hopefully keep me busy throughout the summer. I haven’t even gotten started on it, though, as I am on-call this week and things blew up right after the live start. Here’s hoping it settles down so I can work this weekend!

As if that wasn’t busy enough, my wife accepted a new job in June with a start date in August. We’ll be moving in support of that around the end of the month. With that in mind, I’m taking a summer break from the blog (but certainly not a vacation!). I have a few scheduled articles that will take me through the end of July and I’m hoping to have a guest author to cover August until VMworld. I plan to get back to blogging in early September.

Until then, here is a mix of the most popular articles and the ones I really enjoyed writing.

  • Puppet – There are two more articles to complete the intro portion. Now that you’re familiar with Puppet, we’ll look at closer integration with vSphere in the Fall.
  • Auto Deploy Deep Dive – I was hoping to present this at VMworld but it wasn’t meant to be! Check out the #vBrownBag presentation, too. They’re in the middle of a Cisco track and will be covering Docker on 7/23 – good stuff.
  • The Philosophy of Ender’s Game – Now that the movie’s out on DVD and cable, it’s a good time to watch it again and do some critical analysis. Preferably on your tablet while piloting a quadcopter drone, both ideas that can be traced back to this novel. This wasn’t very popular, but it was one of my favorite articles to write. It’s always fun to wax philosophical.
  • Snapshots Management – Surprisingly, this recent article seems very popular. I shouldn’t be surprised; snapshots continue to be a wildly misunderstood tool that causes problems for even veteran vSphere admins.
  • InfoSec and Social Media – This article was a result of attending CPX 2014 and comparing it to VMworld 2013. I had fun writing it, soliciting feedback, and working to do the things I said I would.
  • Synology Multi-VLAN Setup – This remains a very popular article. I hope Synology makes VLAN configuration a little smoother in future DSM revisions, but until then, this will get you going.

Have a great summer!