vRealize Orchestrator Workflows for Puppet Enterprise

Over the past three years, my Puppet for vSphere Admins series has meandered through a number of topics, mostly focused on the Puppet side and somewhat light on the vSphere side. That changed a bit with my article Making the Puppet vRealize Automation plugin work with vRealize Orchestrator, which described how to use the plugin's built-in workflows to perform some actions on your VMs. However, you had to invoke the workflows one by one, and they only worked on existing VMs. That is not good enough for automation! Today, we will start to look at how to integrate the Puppet Enterprise plugin into other workflows to provide end-to-end lifecycle management for your VMs.

What is the lifecycle of a VM? This can vary quite a bit, so the lifecycle we will work with today is made to be generic enough for everyone to use, but flexible enough that everyone can expand on it. It consists of:

  • Provisioning
    • Updating ancillary systems prior to VM creation (IPAM, DNS, etc.)
    • Deploying a Virtual Machine
    • Installing Puppet Enterprise on the VM
    • Using Puppet Enterprise to provision services on and configure the VM
    • Adding the new VM to a vCenter tag-based backup system
  • Decommissioning
    • Deleting the VM (which removes it from backups)
    • Purging the record from PE
    • Updating ancillary systems after VM removal (IPAM, DNS, etc.)

Continue reading

Connecting Puppetboard to Puppet Enterprise

Last week, I moved the home lab to Puppet Enterprise. One of the things I love about PE is the Console. However, I am a member of Vox Pupuli and we develop Puppetboard (the app AND the module), so it is convenient for me to use it and tie it into PE as well. Though the two overlap, each has functionality the other doesn't. I really love the count of agent runs by status on the Puppetboard Overview page, for instance. After my migration, however, my previously working Puppetboard just gave me HTTP 500 errors, and fixing it took some wrangling. Thanks to Tim Meusel for his assistance with the cert issue.
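
Before digging in, here's a rough sketch of what a profile wrapping the Vox Pupuli puppetboard module generally looks like. To be clear, this is not the manifest from this post, just an illustration: the parameter names, certificate paths, and hostnames below are assumptions from my memory of the module, so check its documentation before reusing them.

# Hypothetical sketch only; parameter names assume the voxpupuli/puppetboard
# module and the values are placeholders.
class profile::puppetboard (
  $puppetdb_host = 'puppet.example.com', # hypothetical PuppetDB host
  $puppetdb_port = 8081,
) {
  # Install Puppetboard itself, pointed at PuppetDB over SSL.
  class { 'puppetboard':
    manage_virtualenv   => true,
    puppetdb_host       => $puppetdb_host,
    puppetdb_port       => $puppetdb_port,
    puppetdb_ssl_verify => true,
    puppetdb_cert       => '/var/lib/puppetboard/ssl/puppetboard.cert.pem',
    puppetdb_key        => '/var/lib/puppetboard/ssl/puppetboard.key.pem',
  }

  # Serve it through Apache using the module's vhost class.
  class { 'puppetboard::apache::vhost':
    vhost_name => 'puppetboard.example.com',
    port       => 80,
  }
}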

First, let’s look at the existing manifest and hiera data for my profile::puppetboard class:

Continue reading

What goes in a Puppet Role or Profile?

The Roles and Profiles pattern by Craig Dunn is a very common pattern used by Puppet practitioners. I've written about it before. One of the most common questions I see is: what goes into a Role or Profile class? Craig's article provides some guidelines, specifically these two:

  • A role includes one or more profiles to define the type of server
  • A profile includes and manages modules to define a logical technical stack

Those are pretty helpful, but they're not an exhaustive list, nor do they describe what is prohibited in each type of class. While the main goal of the pattern is composition, I have my own guidelines I follow that may help others:

Roles

  • No parameters
  • Includes profile classes
  • [Rarely] Ordering of resources that come from two separate profiles
  • Contains nothing else.

Here’s an example role for an application server:

class role::appx {
  include profile::base
  include profile::apache
  include profile::appx
  Package<| tag == 'profile::apache' |> -> Package<| tag == 'profile::appx' |>
}

Profiles

  • Optional parameters
  • Includes component modules
  • Includes basic resource types (built-in or from component modules)
  • Calls functions, including hiera_*() and lookup()
  • Traverses and manipulates variables to process their data
  • Conditionals (limited)
  • Ordering of resources, within the profile
  • May include other profiles, but should do so sparingly
  • If the code is more than ~100 lines, consider separating the profile into its own module, or finding an existing component module that includes the functionality (100 is an arbitrary number; adjust it to whatever threshold tells you it's time to consider this option)

Here’s an example of a profile that calls other profiles:

class profile::base {

  # Include OS specific base profiles.
  case $::kernel {
    'linux': {
      include profile::base::linux
    }
    'windows': {
      include profile::base::windows
    }
    'JUNOS': {
      include profile::base::junos
    }
    default: {
      fail("Kernel: ${::kernel} not supported in ${module_name}")
    }
  }
}

Here’s an example of a more complex profile that has parameters and includes other component modules, basic resources, functions, iteration, conditionals, and even another profile:

class profile::base::linux (
  $yumrepo_url,
  $cron_purge          = true,
  $domain_join         = false,
  $metadata_expire     = 21600, # Default value for yum is 6 hours = 21,600 seconds
  $sudo_confs          = {},
  $manage_firewall     = true,  # Manage iptables
  $manage_puppet_agent = true,  # Manage the puppet-agent including upgrades
) {
  # Manage the basics, but allow users to override some management components with flags
  if $manage_firewall {
    include profile::linuxfw
  }
  if $manage_puppet_agent {
    include puppet_agent
  }

  include ntp
  include rsyslog::client
  include motd

  # SSH server and client
  include ssh::server
  include ssh::client

  # Sudo setup
  include sudo
  $sudo_confs.each |$group, $config| {
    sudo::conf { $group:
      * => $config,
    }
  }

  yumrepo { 'local-el':
    ensure          => present,
    descr           => 'Local EL - x86_64',
    baseurl         => $yumrepo_url,
    enabled         => 1,
    gpgcheck        => 0,
    metadata_expire => $metadata_expire,
  }
  Yumrepo['local-el'] -> Package<| |>

  # Ensure unmanaged cron entries are removed
  resources { 'cron':
    purge => $cron_purge,
  }

  if $domain_join {
    include profile::domain_join
  }
}
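
As an aside, because those class parameters resolve through Hiera's automatic parameter lookup, the data itself can live outside the code. A minimal sketch of what that data might look like follows; the repo URL and sudo rule are invented for illustration, and the keys inside each sudo_confs entry assume the sudo::conf defined type from the saz/sudo module.

# common.yaml (illustrative values only)
profile::base::linux::yumrepo_url: 'http://repo.example.com/el/7/x86_64'
profile::base::linux::cron_purge: true
profile::base::linux::domain_join: false
profile::base::linux::sudo_confs:
  admins:
    priority: 10
    content: '%admins ALL=(ALL) NOPASSWD: ALL'

With data like that in place, a plain include profile::base::linux picks up the values without any resource-like class declarations.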

Summary

The Roles and Profiles pattern is all about composition. The Style Guide helps you with layout and semantic choices. It doesn’t hurt to add your own rules about content and design, too. There’s no defined Best Practice here, but I hope these guidelines help you shape your own practice. Enjoy!

Migrating my home lab from Puppet OpenSource to Puppet Enterprise

I have been using Puppet Enterprise at work and Puppet OpenSource at home for a few years now. There's a lot to love about both products, but since work uses PE and new features tend to land there first, I have been thinking about trying PE at home as well. I don't have a large fleet or a large change velocity, so I think the conversion of the master might take some work but the agents will be easy. This will let me play with PE-only functions in my lab (specifically the PE Pipeline plugin for Jenkins) and reduce concern about drift between lab findings and work usage. It does have the downside that some of my blog articles, which I could previously assure readers would work with the FOSS edition, may not be as foolproof in the future. However, I rarely saw that as a problem going the other way in the past, with mcollective management being the only exception. I haven't written about mcollective much, so I think this is worth the tradeoff.

I am going to migrate my systems, rather than start fresh. This article was written as the migration happened, so if you pursue a similar migration, please read the whole article before starting – I did some things backwards or incompletely, and made a few mistakes that you could avoid. It was also written as something of a stream of consciousness. I've tried to edit it as best I can without losing that flow. I do hope you find it valuable.

Pre-Flight Checklist

Puppet Enterprise is a commercial product. You can use it for free with up to 10 nodes, though, which is perfect for my home network of 9 managed nodes. Beyond that, it costs somewhere around $200/node/year. Make sure you stay within that limit or pony up – we are responsible adults, we pay for the products we use. If you replace nodes, you can deactivate the old ones so they don't continue to eat into your license count.

Continue reading

Making the Puppet vRealize Automation plugin work with vRealize Orchestrator

I’m pretty excited about this post! I’ve been building up Puppet for vSphere Admins for a few years now but the final integration aspects between Puppet and vSphere/vCenter were always a little clunky and difficult to maintain without specific dedication to those integration components. Thanks to Puppet and VMware, that’s changed now.

Puppet announced version 2.0 of their Puppet Plugin for vRealize Automation this week. There's a nice video attached, but there's one problem – it's centered on vRealize Automation (vRA) and I am working with vRealize Orchestrator (vRO)! vRO is included with all licenses of vCenter, whereas vRA is a separate product that costs extra, and even though vRA relies on a vRO engine for a lot of its work, it abstracts away many of the configuration and implementation details that vRO-only users need to care about. This means that the vRA documentation and guides you find, for the Puppet plugin or otherwise, are often missing some of the important details needed to implement the same functionality – and sometimes won't work at all if they rely on vRA features not available to us.

Don’t worry, though, the Puppet plugin DOES work with vRO! We’ll look at a few workflows to install, run, and remove Puppet on nodes, and then discuss how we can use them within larger customized workflows. You must already have an installed vRealize Orchestrator 7.x instance configured to talk to your vCenter instance. I’m using vRO 7.0.0 with vCenter 6.0. If you’re using a newer version, some of the dialogs I show may look a little different. If you’re still on vRO 6.x, the configuration will look a LOT different (you may have to do some research to find the equivalent functionality), but the workflow examples should be accurate.

Puppet provides a User Guide for use with a reference implementation. I’ll be mostly repeating Part 2 when installing and configuring, but reality tends to diverge from reference so we’ll explore some non-reference details as well.

Continue reading

Automating Puppet tests with a Jenkins Job, version 1.1

Today, let’s build on version 1.0 of our Jenkins job. We are running builds against every commit, but when someone opens a pull request, they don’t get automated builds or feedback. If the PR submitter even knows about Jenkins, and has network access and a login, they can look at it to find out how the tests went, but most people aren’t going to have that visibility (especially if your Jenkins server is private, as in this example setup). We need to make sure Jenkins is aware of the pull request and that it updates the PR with the status. Our end goal is for each PR to trigger a Jenkins build that updates the PR with a successful check when done.

To get there, we will install and configure a new plugin and configure our job to use the plugin.

Continue reading

Automating Puppet tests with a Jenkins Job, version 1.0

As I’ve worked through setting up Jenkins and Puppet (and remembering my password!), I created a job to automate rspec tests on my Puppet controlrepo. I am sure I will go through many iterations of this as I learn more, so we’ll just call this version 1.0. The goal is that when I push a branch to my controlrepo on GitHub, Jenkins automagically runs the tests. Today, we will ensure that Jenkins is notified of activity on a GitHub repo, that it spins up a clean test environment without any leftover files that might inadvertently help the tests pass, and that it runs the tests. What it will NOT do is notify anyone – it won’t work off a Pull Request and provide feedback like Travis CI does, for instance. Hopefully, I will figure that out soon.

The example below uses GitHub. You can certainly make this work with BitBucket, GitLab, Mercurial, and tons of other source control systems and platforms, but you might need some additional Jenkins plugins. It should be pretty apparent where to change Git/GitHub to the system/platform you choose.

Creating A Job

From the main view of your Jenkins instance, click New Item. Call it whatever you want, choose Freestyle project as the type, and click OK. The next page is where we set up all the parameters for the job. There are tabs across the top AND you can scroll down; you’ll see the same selection items either way. Going from top to bottom, here are the settings we want:

Continue reading