Enterprise Linux 7.3 makes some backwards-incompatible changes to interface names

Today, I was caught off guard by a change in Enterprise Linux 7.3. Apparently, systemd was assigning interface names like eno16780032 based on “garbage” data. I’m not really a fan of ANY of the names the modern schemes generate (what was the problem with eth#?), but that’s beside the point. What hit me was that starting in 7.3, the garbage data is discarded, which results in a change in interface names. All this, in a point release. Here’s KB 2592561, which details the change. This applies to both Red Hat EL and CentOS, and presumably other members of the family based on RHEL 7.

The good news is that existing nodes that are updated are left alone. A file is generated to preserve the garbage data and your interface name. Unlike other udev rules, this one ONLY applies to existing systems that want to preserve the naming convention:

[root@kickstarted ~]# cat /etc/udev/rules.d/90-eno-fix.rules
# This file was automatically generated on systemd update
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:95:de:4e", NAME="eno16780032"

As you can see, it’s based on the MAC. That’s the bad news. If you plan to use the node as a template to deploy other VMs, the resulting VMs will effectively receive “new” interfaces based on the new MAC, resulting in ens192 instead of eno16780032. This definitely introduces at least one minor issue: the eno16780032 configuration is left intact and the interface is missing, so every call to systemctl restart network generates an error. It can also cause other issues for you if you have scripts, tools, provisioning systems, etc., that are predicting your nodes will have interface names like ens192. This is not difficult to remedy, thankfully.
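The remedy isn’t spelled out in the KB, but it usually amounts to renaming the leftover ifcfg file and rewriting the interface names inside it. Here’s a minimal sketch of that idea, not my exact fix: the $DEMO directory is a stand-in for /etc/sysconfig/network-scripts, and the interface names are the ones from this example.

```shell
# Sketch: rename the stale ifcfg file and rewrite its DEVICE/NAME entries.
# $DEMO stands in for /etc/sysconfig/network-scripts on a real node.
DEMO=$(mktemp -d)
printf 'DEVICE=eno16780032\nNAME=eno16780032\nBOOTPROTO=dhcp\nONBOOT=yes\n' \
  > "$DEMO/ifcfg-eno16780032"

old=eno16780032 new=ens192
mv "$DEMO/ifcfg-$old" "$DEMO/ifcfg-$new"
sed -i "s/$old/$new/g" "$DEMO/ifcfg-$new"
grep -E '^(DEVICE|NAME)=' "$DEMO/ifcfg-$new"
```

Deleting /etc/udev/rules.d/90-eno-fix.rules from the template before cloning also avoids carrying the old MAC-pinned rule into new VMs.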

Continue reading

Connecting Puppetboard to Puppet Enterprise

Last week, I moved the home lab to Puppet Enterprise. One of the things I love about PE is the Console. However, I am a member of Vox Pupuli and we develop Puppetboard (the app AND the module) so it is convenient for me to use it and tie it into PE as well. Though the two overlap, each has functionality the other doesn’t. I really love the count of agent runs by status on the Puppetboard Overview page, for instance. After my migration, however, my previously-working puppetboard just gave me HTTP 500 errors. Fixing it took some wrangling. Thanks to Tim Meusel for his assistance with the cert issue.

First, let’s look at the existing manifest and hiera data for my profile::puppetboard class:

Continue reading

What goes in a Puppet Role or Profile?

The Roles and Profiles pattern by Craig Dunn is a very common pattern used by Puppet practitioners. I’ve written about it before. One of the most common questions I see is, what goes into a Role or Profile class? Craig’s article provides some guidelines, specifically these two:

  • A role includes one or more profiles to define the type of server
  • A profile includes and manages modules to define a logical technical stack


Those are pretty helpful, but it’s not an exhaustive list, nor does it describe what is prohibited in each type of class. While the main goal of the pattern is composition, I have my own guidelines I follow that may help others:

Roles

  • No parameters
  • Includes profile classes
  • [Rarely] Ordering of resources that come from two separate profiles
  • Contains nothing else.

Here’s an example role for an application server:

class role::appX {
  include profile::base
  include profile::apache
  include profile::appX
  Package<| tag == 'profile::apache' |> -> Package<| tag == 'profile::appX' |>
}

Profiles

  • Optional parameters
  • Includes component modules
  • Includes basic resource types (built-in or from component modules)
  • Calls functions, including hiera_*() and lookup()
  • Traverses and manipulates variables to process their data
  • Conditionals (limited)
  • Ordering of resources, within the profile
  • May call other profiles, but should be used sparingly
  • If the code is >100 lines, consider separating the profile class into its own module, or finding an existing component module that includes the functionality (100 is a very arbitrary number; adjust it to whatever threshold tells you it is time to think about this option)

Here’s an example of a profile that calls other profiles:

class profile::base {

  # Include OS specific base profiles.
  case $::kernel {
    'linux': {
      include profile::base::linux
    }
    'windows': {
      include profile::base::windows
    }
    'JUNOS': {
      include profile::base::junos
    }
    default: {
      fail ("Kernel: ${::kernel} not supported in ${module_name}")
    }
  }
}

Here’s an example of a more complex profile that has parameters and includes other component modules, basic resources, functions, iteration, conditionals, and even another profile:

class profile::base::linux (
  $yumrepo_url,
  $cron_purge          = true,
  $domain_join         = false,
  $metadata_expire     = 21600, # Default value for yum is 6 hours = 21,600 seconds
  $sudo_confs          = {},
  $manage_firewall     = true,  # Manage iptables
  $manage_puppet_agent = true,  # Manage the puppet-agent including upgrades
) {
  # Manage the basics, but allow users to override some management components with flags
  if $manage_firewall {
    include profile::linuxfw
  }
  if $manage_puppet_agent {
    include puppet_agent
  }

  include ntp
  include rsyslog::client
  include motd

  # SSH server and client
  include ssh::server
  include ssh::client

  # Sudo setup
  include sudo
  $sudo_confs.each |$group, $config| {
    sudo::conf{ $group:
      * => $config,
    }
  }

  yumrepo {'local-el':
    ensure          => present,
    descr           => 'Local EL - x86_64',
    baseurl         => $yumrepo_url,
    enabled         => 1,
    gpgcheck        => 0,
    metadata_expire => $metadata_expire,
  }
  Yumrepo['local-el'] -> Package<| |>

  # Ensure unmanaged cron entries are removed
  resources { 'cron':
    purge => $cron_purge,
  }

  if $domain_join {
    include profile::domain_join
  }
}

Summary

The Roles and Profiles pattern is all about composition. The Style Guide helps you with layout and semantic choices. It doesn’t hurt to add your own rules about content and design, too. There’s no defined Best Practice here, but I hope these guidelines help you shape your own practice. Enjoy!

Migrating my home lab from Puppet OpenSource to Puppet Enterprise

I have been using Puppet Enterprise at work and Puppet OpenSource at home for a few years now. There’s a lot to love about both products, but since work uses PE and new features tend to land there first, I have been thinking about trying PE at home as well. I don’t have a large fleet or a large change velocity, so I think the conversion of the master might take some work but the agents will be easy. This will let me play with PE-only functions in my lab (specifically the PE Pipeline plugin for Jenkins) and reduces concern about drift between lab findings and work usage. It does have the downside that some of my blog articles, which I was always assured would work with the FOSS edition, may not be as foolproof in the future. However, I rarely saw that as a problem going the other way in the past, with mcollective management being the only exception. I haven’t written about mcollective much, so I think this is worth the tradeoff.

I am going to migrate my systems, rather than start fresh. This article was written as the migration happened, so if you pursue a similar migration, please read the whole article before starting – I did some things backwards or incompletely, and made a few mistakes that you could avoid. It was also written as something of a stream of consciousness. I’ve tried to edit it as best I can without losing that flow. I do hope you find it valuable.

Pre-Flight Checklist

Puppet Enterprise is a commercial product. You can use it for free with up to 10 nodes, though. This is perfect for my 9 managed-node home network. After that, it costs something around $200/node/year. Make sure you stay within that limit or pony up – we are responsible adults, we pay for the products we use. If you replace nodes, you can deactivate those nodes so they don’t continue to eat into your license count.

Since I have an existing Puppet OpenSource master, I wanted to preserve the Certificate Authority and PuppetDB contents. I copied the SSL directory to preserve the CA and used puppet db via puppet-client-tools to export the database. I found I had to use sudo su - because puppet db as root worked, but sudo puppet db as my user complained Error: Unknown Puppet subcommand 'db'. I assume this is because /opt/puppetlabs/bin/puppet-db, provided by the client tools package, is not in sudo’s secure_path. After creating the backups, I copied the files to my desktop since the server’s IP/hostname is being re-used, preventing a VM-to-VM transfer later.

sudo su -
tar czvf ~rnelson0/cabackup.tar.gz /etc/puppetlabs/puppet/ssl
yum install -y puppet-client-tools
puppet db export ~rnelson0/my-puppetdb-export.tar.gz
chown rnelson0.rnelson0 ~rnelson0/cabackup.tar.gz ~rnelson0/my-puppetdb-export.tar.gz

I also need to update my vSphere template, which includes puppet’s PC1 repo and the puppet-agent packages. I booted my centos template up and ran yum update while I was at it, then removed all the puppet FOSS packages from it before deploying the new target puppet master from the updated template.

The template updates and the new VM deploy take quite a while. In the meantime, I had to adjust my controlrepo. I was using jlambert121/puppet to manage the puppet master and agents, and that’s not fully compatible with Puppet Enterprise. Neither is puppetlabs/puppetdb. Starting with the Puppetfile and .fixtures.yml files, remove those modules. Next, all instances of the modules need to be removed from profiles and hiera data (they can technically stay in hiera, but they’ll clutter things up). I also have a few references directly to file locations. Since I was already on Puppet 4 and the locations between FOSS and PE are the same, I have no changes to make, but if you’re migrating from Puppet 3, it’s something to consider. Here’s what my git diff looks like:

$ git diff
diff --git a/.fixtures.yml b/.fixtures.yml
index cc079c7..f3fc171 100644
--- a/.fixtures.yml
+++ b/.fixtures.yml
@@ -27,9 +27,6 @@ fixtures:
     portage:
       repo: "gentoo/portage"
       ref: "2.3.0"
-    puppet:
-      repo: "jlambert121/puppet"
-      ref: "0.8.2"
     createrepo:
       repo: "palli/createrepo"
       ref: "1.1.0"
@@ -93,9 +90,6 @@ fixtures:
     postgresql:
       repo: "puppetlabs/postgresql"
       ref: "4.8.0"
-    puppetdb:
-      repo: "puppetlabs/puppetdb"
-      ref: "5.1.2"
     ruby:
       repo: "puppetlabs/ruby"
       ref: "0.5.0"

diff --git a/Puppetfile b/Puppetfile
index 25056eb..5b9ce0a 100644
--- a/Puppetfile
+++ b/Puppetfile
@@ -10,7 +10,6 @@ mod 'garethr/docker', '5.3.0'
 mod 'garethr/erlang', '0.3.0'
 mod 'gentoo/portage', '2.3.0'
 mod 'golja/gnupg', '1.2.3'
-mod 'jlambert121/puppet', '0.8.2'
 mod 'maestrodev/rvm', '1.13.1'
 mod 'palli/createrepo', '1.1.0'
 mod 'puppet/archive', '1.1.2'
@@ -34,7 +33,6 @@ mod 'puppetlabs/mysql', '3.10.0'
 mod 'puppetlabs/ntp', '6.0.0'
 mod 'puppetlabs/pe_gem', '0.2.0'
 mod 'puppetlabs/postgresql', '4.8.0'
-mod 'puppetlabs/puppetdb', '5.1.2'
 mod 'puppetlabs/ruby', '0.5.0'
 mod 'puppetlabs/stdlib', '4.14.0'
 mod 'puppetlabs/vcsrepo', '1.5.0'

diff --git a/dist/profile/manifests/base.pp b/dist/profile/manifests/base.pp
index 5316740..b0da54d 100644
--- a/dist/profile/manifests/base.pp
+++ b/dist/profile/manifests/base.pp
@@ -23,7 +23,6 @@ class profile::base {
   include ::ntp
   include ::rsyslog::client
   include ::motd
-  include puppet

   # Yum repository
   $yumrepo_url  = hiera('yumrepo_url')
@@ -46,17 +45,4 @@ class profile::base {
   if ($local_users) {
     create_resources('local_user', $local_users)
   }
 }
 
diff --git a/dist/profile/manifests/puppet_master.pp b/dist/profile/manifests/puppet_master.pp
index c694306..cd44ea5 100644
--- a/dist/profile/manifests/puppet_master.pp
+++ b/dist/profile/manifests/puppet_master.pp
@@ -12,7 +12,6 @@
 #
 class profile::puppet_master {
   include ::epel
-  include ::puppet

   include ::hiera

diff --git a/dist/role/manifests/puppet.pp b/dist/role/manifests/puppet.pp
index ae00505..7867575 100644
--- a/dist/role/manifests/puppet.pp
+++ b/dist/role/manifests/puppet.pp
@@ -13,5 +13,4 @@
 class role::puppet {
   include profile::base  # All roles should have the base profile
   include profile::puppet_master
-  include profile::puppetdb
 }

diff --git a/hiera/global.yaml b/hiera/global.yaml
index 601cc6a..fbc1058 100644
--- a/hiera/global.yaml
+++ b/hiera/global.yaml
@@ -37,7 +37,3 @@ ntp::servers:
   - '0.pool.ntp.org'
   - '2.centos.pool.ntp.org'
   - '1.rhel.pool.ntp.org'
-puppet::runmode: service
-puppet::env: production

diff --git a/hiera/puppet_role/puppet.yaml b/hiera/puppet_role/puppet.yaml
index 04e30f1..7244f4b 100644
--- a/hiera/puppet_role/puppet.yaml
+++ b/hiera/puppet_role/puppet.yaml
@@ -7,23 +7,6 @@ hiera::hierarchy:
   - 'global'
 hiera::datadir: '/etc/puppetlabs/code/environments/%%{::}{::environment}/hiera'
 hiera::puppet_conf_manage: false
-puppet::server: true
-puppet::server_version: '2.3.0-1.el7'
-puppet::server_reports:
-  - 'puppetdb'
-puppet::dns_alt_names:
-  - 'puppet'
-puppet::puppetdb_server: 'puppet.nelson.va'
-puppet::puppetdb: true
-puppet::manage_puppetdb: false
-puppet::manage_hiera: false
-puppet::firewall: true
-puppetdb::listen_address: '0.0.0.0'
-puppetdb::ssl_set_cert_paths: true
-puppetdb::master::config::restart_puppet: false
-puppetdb::node_ttl: 30d
-puppetdb::node_purge_ttl: 60d
-puppetdb::report_ttl: 30d
 r10k::version: '2.1.1'
 r10k::sources:
   puppet:

diff --git a/spec/classes/base_spec.rb b/spec/classes/base_spec.rb
index fc33e31..04f8460 100644
--- a/spec/classes/base_spec.rb
+++ b/spec/classes/base_spec.rb
@@ -9,8 +9,6 @@ describe 'profile::base', :type => :class do
       it { is_expected.to create_class('profile::base') }
       it { is_expected.to contain_class('profile::linuxfw') }
       it { is_expected.to contain_class('profile::symlinks') }
-      it { is_expected.to contain_class('puppet') }
       it { is_expected.to contain_class('motd') }
       it { is_expected.to contain_class('ntp') }
       it { is_expected.to contain_class('ssh::server') }

diff --git a/spec/classes/puppet_master_spec.rb b/spec/classes/puppet_master_spec.rb
index d3d2762..8c17ee0 100644
--- a/spec/classes/puppet_master_spec.rb
+++ b/spec/classes/puppet_master_spec.rb
@@ -14,20 +14,12 @@ describe 'profile::puppet_master', :type => :class do
     context 'with defaults for all parameters' do
       it { is_expected.to create_class('profile::puppet_master') }
       it { is_expected.to contain_class('epel') }
-      it { is_expected.to contain_class('puppet') }
-
-      # These resources are included based on hieradata
-      it { is_expected.to contain_class('puppet::server') }
-      it { is_expected.to contain_package('puppetserver').
-        with_ensure('latest')
-      }
-
       it { is_expected.to contain_class('hiera') }
       it { is_expected.to contain_class('r10k') }
       it { is_expected.to contain_class('r10k::webhook') }

# I also removed dist/profile/manifests/puppetdb.pp as profile::puppetdb was no longer required

After making sure all my tests passed, I merged my PR. There are limits to what I could anticipate at this point, but it’s a solid start. Refactoring always has risks, and I expected to – and did – revisit this with some tweaks later. As mentioned at the top of the article, please read all the way to the end before starting your own migration so you can integrate all those changes at once.

Installing Puppet Enterprise

Once the new VM is provisioned, it needs to be bootstrapped: Puppet Enterprise installed, r10k configured and working (you can use Code Manager, but I have reasons to stick with r10k for now), the CA and PuppetDB data restored, and probably a few other things I didn’t account for at the start. Let’s find out by getting PE installed.

Puppet’s PE installation instructions are very complete. You can install at the CLI or via a Web-based method. The initial steps are the same. First, download the latest PE to the master and expand it:

[root@puppet ~]# wget 'https://pm.puppetlabs.com/cgi-bin/download.cgi?dist=el&rel=7&arch=x86_64&ver=latest' -O pe.latest.tar.gz
[root@puppet ~]# tar xfz pe.latest.tar.gz

Change into the new directory and run the installer:

[root@puppet ~]# cd puppet-enterprise-2016.5.2-el-7-x86_64/
[root@puppet puppet-enterprise-2016.5.2-el-7-x86_64]# ./puppet-enterprise-installer

At this point, you can use the web-based or CLI method; I’ll let you choose, as it doesn’t matter in the end. You’ll need to provide at least a console_admin_password, which of course you should not share. If you’re doing a monolithic install, you’re ready to proceed; if not, there’s more configuration you need to change. Once you’re ready, proceed. Save the resulting custom-pe.conf file in case you need it again, such as for future upgrades. The install took about 8 minutes for me, so transfer your CA and PuppetDB backups while you wait.

Now, I’m not sure why, but the installer, which presumably knows whether it’s a monolithic install based on the choices made (or perhaps it needs a flag for this), says this at the end:

If this is a monolithic configuration, run 'puppet agent -t' to
complete the setup of this system.

Go ahead and do that, or follow the subsequent instructions for split installations. Also note that the PE installer does not automatically add iptables rules. If you have iptables enabled (you should!), then you’ll need to stop it to access the console; systemctl stop iptables works on EL7. The default user is admin with the password you set. There’s more information on the console, including how to set up other authentication sources such as Active Directory. If you don’t see the default username documented anywhere, you’re not alone; I had to go digging to determine that!

Once you log in, you’ll see your wonderful looking Console!

migrating-to-puppet-enterprise-fig-1

Now we have a working Puppet Enterprise master, but we need to do a lot of configuration. If you have the ability, now is a great time to take a snapshot in case any future steps go sideways.

Bootstrapping The Master

First, let’s import the PuppetDB data. This is fairly simple; it just requires a few commands that become apparent as you work through it. The authentication required is the Console admin user. You will most likely need to start a new session to receive the updated PATH value. Then puppet db works and you can try an import:

[root@puppet ~]# puppet db
Invalid arguments.

Usage:
  puppet-db [options] (--version | --help)
  puppet-db [options] export <path> [--anon=<profile>]
  puppet-db [options] import <path>
  puppet-db [options] status
[root@puppet ~]# puppet db import my-puppetdb-export.tar.gz
Error: ssl requires a token, please use `puppet access login` to retrieve a token (alternatively use 'cert' and 'key' for whitelist validation)
[root@puppet ~]# puppet access login
Enter your Puppet Enterprise credentials.
Username: admin
Password:

Access token saved to: /root/.puppetlabs/token
[root@puppet ~]# puppet db import my-puppetdb-export.tar.gz
[root@puppet ~]#

I can now see that I have 8 nodes that haven’t reported in hours, and if I click on one, I can get all the details about them I want, so this looks good.

migrating-to-puppet-enterprise-fig-2

The CA should be pretty simple, just extract the tar from /, or in the correct relative path depending on how you tarred it up:

[root@puppet ~]# cd /
[root@puppet /]# tar xzf /root/cabackup.tar.gz
[root@puppet /]# ls -la /etc/puppetlabs/puppet/ssl/ca/signed/yumrepo01.nelson.va.pem
-rw-r--r--. 1 pe-puppet pe-puppet 1956 Nov 20 22:44 /etc/puppetlabs/puppet/ssl/ca/signed/yumrepo01.nelson.va.pem

Unfortunately, after I did this, puppetdb failed to startup upon restart, and currently running services had started failing before the restart. I restored my snapshot and continued on. It turns out that it may be possible to import the CA, but that the files must be present before running the PE installer (steps 4 and 5). That article is for PE to PE migrations, so I am not 100% certain this would have worked with PO to PE, but I wasn’t willing to go back and start over at this point to find out. You may be able to adapt Alex Harden’s backup and migration scripts for this purpose if you want to give this a shot yourself.
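If you do try a restore along those lines, it’s worth confirming first that the backed-up CA cert and key are actually a pair. This sketch compares their RSA moduli; it generates a throwaway self-signed pair so it’s self-contained, and in practice you’d point the openssl commands at the ca_crt.pem and ca_key.pem from your backup instead.

```shell
# Verify that a cert and key belong together by comparing RSA moduli.
# A throwaway pair is generated here; substitute the backed-up CA files.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
  -keyout "$tmp/ca_key.pem" -out "$tmp/ca_crt.pem" 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in "$tmp/ca_crt.pem")
key_mod=$(openssl rsa -noout -modulus -in "$tmp/ca_key.pem")
[ "$cert_mod" = "$key_mod" ] && echo 'cert and key match'
```

A mismatch here would explain services failing to come up after a restore, before you waste a snapshot rollback on it.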

Next, we need to get r10k working, hiera configured, and the webhook running. There are a number of ways to do this; what I describe below is only one possible way to get it working. I’m using puppet/r10k and the opensource setup, even though I’m on Puppet Enterprise. You can also use Code Manager. I stick with r10k because I use r10k deploy module X, which currently does not have an equivalent in CM. The hiera setup comes from puppet/hiera. I have bootstrap files for both in my controlrepo, so the first thing to do is check that out. I’ll need an ssh key that has permissions to my repos, on GitHub in this case.

[root@puppet ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
<SNIP>
The key's randomart image is:
<SNIP>
[root@puppet ~]# cat /root/.ssh/id_rsa.pub
<SNIP>

Add this as an SSH key and then clone your repo:

[root@puppet ~]# git clone $REPO_URL
Cloning into 'controlrepo'...
The authenticity of host 'github.com (192.30.253.112)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.253.112' (RSA) to the list of known hosts.
remote: Counting objects: 2376, done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 2376 (delta 6), reused 1 (delta 1), pack-reused 2346
Receiving objects: 100% (2376/2376), 579.88 KiB | 0 bytes/s, done.
Resolving deltas: 100% (1082/1082), done.

You’ll have to accept the key from github.com, or wherever your repo is hosted. Now, I can access my bootstrap info. First is hiera. The hiera file goes in /etc/puppetlabs/code/hiera.yaml and requires a restart of the server service:

[root@puppet controlrepo]# cat hiera.yaml
---
:backends:
  - yaml

:logger: console

:hierarchy:
  - "clientcert/%{clientcert}"
  - "puppet_role/%{puppet_role}"
  - global

:yaml:
  :datadir: /etc/puppetlabs/code/environments/%{::environment}/hiera
[root@puppet controlrepo]# cp hiera.yaml /etc/puppetlabs/code/hiera.yaml
cp: overwrite ‘/etc/puppetlabs/code/hiera.yaml’? y
[root@puppet controlrepo]# ls -l /etc/puppetlabs/puppet/hiera.yaml
-rw-r--r--. 1 root root 207 Mar  4 23:31 /etc/puppetlabs/puppet/hiera.yaml
[root@puppet controlrepo]# systemctl restart pe-puppetserver
[root@puppet controlrepo]#

Next up is installing and configuring r10k. After the module is installed, the file r10k_installation.pp can be applied:

[root@puppet controlrepo]# puppet module install puppet-r10k
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└─┬ puppet-r10k (v4.2.0)
  ├─┬ gentoo-portage (v2.3.0)
  │ └── puppetlabs-concat (v2.2.0)
  ├── puppet-make (v1.1.0)
  ├── puppetlabs-gcc (v0.3.0)
  ├── puppetlabs-git (v0.5.0)
  ├── puppetlabs-inifile (v1.6.0)
  ├── puppetlabs-pe_gem (v0.2.0)
  ├── puppetlabs-ruby (v0.6.0)
  ├── puppetlabs-stdlib (v4.15.0)
  └── puppetlabs-vcsrepo (v1.5.0)
[root@puppet controlrepo]# cat r10k_installation.pp
class { 'r10k':
  version => '2.5.0',
  sources => {
    'puppet' => {
      'remote'  => '$REPO_URL',
      'basedir' => $::settings::environmentpath,
      'prefix'  => false,
    },
  },
  manage_modulepath => false,
}
[root@puppet controlrepo]# puppet apply r10k_installation.pp
Notice: Compiled catalog for puppet.nelson.va in environment production in 0.81 seconds
Notice: /Stage[main]/R10k::Install::Puppet_gem/File[/usr/bin/r10k]/ensure: created
Notice: /Stage[main]/R10k::Config/File[r10k.yaml]/ensure: defined content as '{md5}b505df8c46140c77dee693fa525c2aac'
Notice: Applied catalog in 1.09 seconds

To make sure r10k is working – configuration and ssh key – I can fetch a list of environments it will deploy:

[root@puppet controlrepo]# r10k deploy display --fetch
WARN     -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN     -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.
---
:sources:
- :name: :puppet
  :basedir: "/etc/puppetlabs/code/environments"
  :remote: $REPO_URL
  :environments:
  - domainjoin
  - octocatalog
  - production
  - puppet_agent

I’ve noted this warning in puppet/r10k PR342, to be fixed fairly soon.

This is based off the branches that exist in the :remote URL. It will obviously be different for you. I then deploy the production environment, which can take a little while, and check the status of the environment afterward:

[root@puppet controlrepo]# r10k deploy environment production -p
WARN     -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN     -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.

[root@puppet controlrepo]# r10k deploy display --detail
WARN     -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN     -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.
---
:sources:
- :name: :puppet
  :basedir: "/etc/puppetlabs/code/environments"
  :remote: $REPO_URL
  :environments:
  - :name: domainjoin
    :signature:
    :status: :absent
  - :name: octocatalog
    :signature:
    :status: :absent
  - :name: production
    :signature: 44a7d905a2f261e58b995c35d8308b152bad1749
    :status: :outdated
  - :name: puppet_agent
    :signature:
    :status: :absent
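
The environment list simply mirrors the branches of the control repo. A throwaway local repo, standing in for the real remote and using branch names matching my output, makes the branch-to-environment mapping concrete:

```shell
# r10k creates one environment per branch of the control repo.
# A disposable local repo stands in for the real remote here.
tmp=$(mktemp -d)
git init -q "$tmp/controlrepo"
cd "$tmp/controlrepo"
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m 'initial commit'
git branch domainjoin      # becomes the domainjoin environment
git branch puppet_agent    # becomes the puppet_agent environment
git for-each-ref --format='%(refname:short)' refs/heads/
```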

I then run a hiera command to see that it works:

[root@puppet controlrepo]# hiera ntp::servers ::environment=production --debug
DEBUG: 2017-03-04 23:56:11 +0000: Hiera YAML backend starting
DEBUG: 2017-03-04 23:56:11 +0000: Looking up ntp::servers in YAML backend
DEBUG: 2017-03-04 23:56:11 +0000: Ignoring bad definition in :hierarchy: 'clientcert/'
DEBUG: 2017-03-04 23:56:11 +0000: Ignoring bad definition in :hierarchy: 'puppet_role/'
DEBUG: 2017-03-04 23:56:11 +0000: Looking for data source global
DEBUG: 2017-03-04 23:56:11 +0000: Found ntp::servers in global
["0.pool.ntp.org", "2.centos.pool.ntp.org", "1.rhel.pool.ntp.org"]

I should be ready to run puppet now. I expect maybe an error or two from the catalog, but I should be able to get it to run at least.

[root@puppet ~]# puppet agent -t --noop
Notice: /File[/etc/puppetlabs/code/environments/production]/seluser: seluser changed 'unconfined_u' to 'system_u'
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Notice: /File[/opt/puppetlabs/puppet/cache/lib/augeas]/ensure: created
Notice: /File[/opt/puppetlabs/puppet/cache/lib/augeas/lenses]/ensure: created
Notice: /File[/opt/puppetlabs/puppet/cache/lib/augeas/lenses/fixedsudoers.aug]/ensure: defined content as '{md5}1492fda700091a906d27195bcdc40c90'
Notice: /File[/opt/puppetlabs/puppet/cache/lib/facter/apache_version.rb]/ensure: defined content as '{md5}751e89814b4eee452388b698276f7be3'
...
<SNIP>
...
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: File[/usr/local/bin/facter] is already declared in file /etc/puppetlabs/code/environments/production/dist/profile/manifests/symlinks.pp:18; cannot redeclare at /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/symlinks.pp:37 at /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/symlinks.pp:37:5 on node puppet.nelson.va
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Well, that’s not as exciting as I hoped, but it’s to be expected. I guess I don’t need the symlinks anymore, so I remove them from my profile::base. After I commit changes, I have to run r10k deploy environment production again since the webhook isn’t set up yet. I found one more issue, and removed the Package['puppetdb'] -> Service[webhook] ordering in profile::puppet_master. After these tweaks, I got to the point where a run would occur:

[root@puppet ~]# puppet agent -t --noop
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Applying configuration version '1488672453'
Notice: /Stage[main]/Motd/File[/etc/motd]/content:
...
<SNIP>
...
Notice: Class[R10k::Webhook]: Would have triggered 'refresh' from 7 events
Notice: Stage[main]: Would have triggered 'refresh' from 27 events
Notice: Applied catalog in 19.10 seconds

I actually intended to leave things here overnight, as it was dinner time, but since I was adjusting production in real time, guess what happened inside of 30 minutes? Yep, the master checked in with itself and completed a run. I have two problems now. First, I haven’t added a port 443 firewall rule, so I have to stop iptables to see the console again. The second issue is that the webhook won’t start:

[rnelson0@puppet ~]$ sudo puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for puppet.nelson.va
Info: Applying configuration version '1488735606'
Error: Systemd start for webhook failed!
journalctl log for webhook:
-- Logs begin at Sat 2017-03-04 22:39:43 UTC, end at Sun 2017-03-05 17:40:59 UTC. --
Mar 05 17:40:59 puppet webhook[2560]: /usr/local/bin/webhook:46:in `initialize': No such file or directory @ rb_sysopen - /etc/puppetlabs/puppetdb/ssl/public.pem (Errno::ENOENT)
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `open'
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `<main>'

Error: /Stage[main]/R10k::Webhook/Service[webhook]/ensure: change from stopped to running failed: Systemd start for webhook failed!
journalctl log for webhook:
-- Logs begin at Sat 2017-03-04 22:39:43 UTC, end at Sun 2017-03-05 17:40:59 UTC. --
Mar 05 17:40:59 puppet webhook[2560]: /usr/local/bin/webhook:46:in `initialize': No such file or directory @ rb_sysopen - /etc/puppetlabs/puppetdb/ssl/public.pem (Errno::ENOENT)
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `open'
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `<main>'

Notice: Applied catalog in 32.30 seconds

That cert no longer exists, but by default puppet/r10k will use the mcollective cert for the peadmin user, so I no longer need to specify a cert location. I can fix the firewall rule and this at the same time. Here’s the diff:

$ git diff
diff --git a/dist/profile/manifests/puppet_master.pp b/dist/profile/manifests/puppet_master.pp
index 74fcabd..85cf3d3 100644
--- a/dist/profile/manifests/puppet_master.pp
+++ b/dist/profile/manifests/puppet_master.pp
@@ -43,4 +43,10 @@ class profile::puppet_master {
     proto  => tcp,
     action => accept,
   }
+
+  firewall { '115 PE Console':
+    dport  => 443,
+    proto  => tcp,
+    action => accept,
+  }
 }
diff --git a/hiera/puppet_role/puppet.yaml b/hiera/puppet_role/puppet.yaml
index 7244f4b..d8f9954 100644
--- a/hiera/puppet_role/puppet.yaml
+++ b/hiera/puppet_role/puppet.yaml
@@ -15,8 +15,6 @@ r10k::sources:
     prefix: false
 r10k::manage_modulepath: false
 r10k::webhook::config::use_mcollective: false
-r10k::webhook::config::public_key_path: '/etc/puppetlabs/puppetdb/ssl/public.pem'
-r10k::webhook::config::private_key_path: '/etc/puppetlabs/puppetdb/ssl/private.pem'
 r10k::webhook::config::command_prefix: 'umask 0022;'
 r10k::webhook::user: 'root'
 r10k::webhook::group: 0
diff --git a/spec/classes/puppet_master_spec.rb b/spec/classes/puppet_master_spec.rb
index 8c17ee0..61c1b8a 100644
--- a/spec/classes/puppet_master_spec.rb
+++ b/spec/classes/puppet_master_spec.rb
@@ -20,6 +20,7 @@ describe 'profile::puppet_master', :type => :class do
       it { is_expected.to contain_class('r10k::webhook::config') }
       it { is_expected.to contain_firewall('105 puppetdb inbound') }
       it { is_expected.to contain_firewall('110 r10k web hook') }
+      it { is_expected.to contain_firewall('115 PE Console') }
       it { is_expected.to contain_cron('home_config deploy') }
       it { is_expected.to contain_file('/etc/puppetlabs/puppet/autosign.conf') }
     end

With this in place, the puppet run completes:

[rnelson0@puppet ~]$ sudo puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for puppet.nelson.va
Info: Applying configuration version '1488736224'
Notice: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]/content:
--- /etc/webhook.yaml   2017-03-05 00:10:55.948195709 +0000
+++ /tmp/puppet-file20170305-2794-1p80o5z       2017-03-05 17:51:00.463372988 +0000
@@ -14,9 +14,9 @@
 port: "8088"
 prefix: false
 prefix_command: "/bin/echo example"
-private_key_path: "/etc/puppetlabs/puppetdb/ssl/private.pem"
+private_key_path: "/var/lib/peadmin/.mcollective.d/peadmin-private.pem"
 protected: true
-public_key_path: "/etc/puppetlabs/puppetdb/ssl/public.pem"
+public_key_path: "/var/lib/peadmin/.mcollective.d/peadmin-cert.pem"
 r10k_deploy_arguments: "-pv"
 server_software: "WebHook"
 use_mco_ruby: false

Info: Computing checksum on file /etc/webhook.yaml
Info: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]: Filebucketed /etc/webhook.yaml to puppet with sum 0163db804d34fabfaae4103a6e22980f
Notice: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]/content: content changed '{md5}0163db804d34fabfaae4103a6e22980f' to '{md5}3832474b09421d12f3ae1283eaabffe5'
Info: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]: Scheduling refresh of Service[webhook]
Notice: /Stage[main]/Profile::Puppet_master/Firewall[115 PE Console]/ensure: created
Notice: /Stage[main]/R10k::Webhook/Service[webhook]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/R10k::Webhook/Service[webhook]: Unscheduling refresh on Service[webhook]
Notice: Applied catalog in 42.27 seconds
[rnelson0@puppet ~]$ sudo systemctl status webhook
● webhook.service - R10K Webhook Service
   Loaded: loaded (/usr/lib/systemd/system/webhook.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-03-05 17:51:27 UTC; 25s ago
 Main PID: 3420 (webhook)
   CGroup: /system.slice/webhook.service
           └─3420 /opt/puppetlabs/puppet/bin/ruby /usr/local/bin/webhook

And I can now reach the PE Console without having to stop iptables! I tested the webhook, just to make sure it was working properly:

[rnelson0@build03 controlrepo:test]$ git push origin test
Counting objects: 33, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (19/19), done.
Writing objects: 100% (21/21), 1.60 KiB | 0 bytes/s, done.
Total 21 (delta 12), reused 0 (delta 0)
remote: Resolving deltas: 100% (12/12), completed with 8 local objects.
To $REPO_URL
 * [new branch]      test -> test

[rnelson0@puppet ~]$ tail -f /var/log/webhook/access.log
[2017-03-05 18:00:04] DEBUG accept: 192.30.252.42:53369
[2017-03-05 18:00:04] DEBUG Rack::Handler::WEBrick is invoked.
[2017-03-05 18:00:04] INFO  authenticated: peadmin
[2017-03-05 18:01:20] INFO  message: triggered: umask 0022; r10k deploy environment test -pv

WARN     -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN     -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.
INFO     -> Deploying environment /etc/puppetlabs/code/environments/test
INFO     -> Environment test is now at 669433b892e0ee4963a723af7b40de3e4b4b044b
...
<SNIP>
...
 branch: test
[2017-03-05 18:01:20] DEBUG close: 192.30.252.42:53369
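Pushing a branch exercises the whole GitHub-to-webhook path, but you can also poke the webhook directly with curl. A hedged sketch: the /payload endpoint, port 8088, and the peadmin user are assumptions based on the webhook.yaml shown above, so check them against your own config. The command is only echoed here (a dry run); remove the leading echo to actually fire it:

```shell
# Manual webhook test (dry run). Endpoint, port, and user are assumptions
# drawn from the webhook.yaml diff above -- verify against your setup.
webhook="https://puppet.nelson.va:8088/payload"
echo curl -k -u peadmin \
  -H "Content-Type: application/json" \
  -d '{"ref": "refs/heads/test"}' \
  "$webhook"
```

If it works, you should see the same "authenticated: peadmin" and "triggered" lines in /var/log/webhook/access.log as with a real push.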

Before moving on, note that while the webhook is working, it has ONLY deployed two environments, production and test. The others are still absent (note: I have no idea why the status always shows as outdated for each environment, but the signature matches the latest commit hash, so they are not actually outdated):

  :environments:
  - :name: domainjoin
    :signature:
    :status: :absent
  - :name: octocatalog
    :signature:
    :status: :absent
  - :name: production
    :signature: 8f63c0e41decbd02e6ebd8307a178fe69aff9b61
    :status: :outdated
  - :name: puppet_agent
    :signature:
    :status: :absent
  - :name: test
    :signature: 669433b892e0ee4963a723af7b40de3e4b4b044b
    :status: :outdated

I use r10k deploy environment -p to deploy the remaining branches. That completes the master bootstrap.
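The status listing above lends itself to a quick mechanical check: pull out every environment whose status is :absent. A minimal sketch, assuming the YAML layout shown — here the sample data is inlined via a heredoc, but in practice you would pipe in the real status output:

```shell
# List environments r10k has not yet deployed, keyed off ':status: :absent'.
# The heredoc stands in for real status output, which uses this layout.
awk '/- :name:/ {name=$3} /:status: :absent/ {print name}' <<'EOF'
  :environments:
  - :name: domainjoin
    :signature:
    :status: :absent
  - :name: production
    :signature: 8f63c0e41decbd02e6ebd8307a178fe69aff9b61
    :status: :outdated
EOF
# prints: domainjoin
```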

Agent Updates

The agent updates are potentially much more complex. In my case, it mostly just worked, because I was not deploying mcollective on my nodes. PE deploys its own mcollective, and if you are doing that yourself you may run into issues. I also removed the class managing the puppet agent locally, the other likely source of conflict. It is entirely possible that some other per-profile changes are required, so vet your own setup closely.

I also have a whopping 9 nodes in my home lab, some of which are still running EL6 and are better replaced with fresh EL7 nodes. The following steps were performed by hand due to that low number. Were this a large fleet at work, or even just two dozen nodes, I would have been more interested in automating it or simply deploying fresh nodes everywhere. But as a one-time event, I settled on the manual route.

The first step was to rip out the puppet foss packages. rpm -qa | grep puppet gives me the list of what to uninstall on the nodes I want to preserve. It may vary based on the role assigned to a node. For instance, on my build node:

[rnelson0@build03 controlrepo:production]$ rpm -qa | grep puppet
puppet-agent-1.9.2-1.el7.x86_64
puppetlabs-release-pc1-1.1.0-4.el7.noarch
puppetdb-termini-4.2.4-1.el7.noarch
[rnelson0@build03 controlrepo:production]$ sudo yum remove -y $(rpm -qa | grep puppet | xargs)

To install PE, open the Console in your web browser and go to Nodes -> Unsigned Certificates (https://puppet/#/node_groups/certificates or similar). You’ll find a curl | bash command there. There is also a package repo on the PE master that you can add to a node and install packages from directly, but then you’re on the hook for tracking any changes to the install script that the curl | bash method would pick up automatically, so I’d advise against it. I pasted the line into the shell on build03:

[rnelson0@build03 ~]$ curl -k https://puppet.nelson.va:8140/packages/current/install.bash | sudo bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed connect to puppet.nelson.va:8140; No route to host

Well, that was unexpected! It turns out that jlambert121/puppet had been managing the iptables rule for inbound tcp/8140 on the master; with that module gone from the catalog, the rule went with it, so I need to restore it first. It’s a silly thing to have overlooked, but that’s life:

$ git diff
diff --git a/dist/profile/manifests/puppet_master.pp b/dist/profile/manifests/puppet_master.pp
index 85cf3d3..c97c20e 100644
--- a/dist/profile/manifests/puppet_master.pp
+++ b/dist/profile/manifests/puppet_master.pp
@@ -32,6 +32,12 @@ class profile::puppet_master {
     source => 'puppet:///modules/home_config/master/autosign.conf',
   }

+  firewall {'100 puppet agent inbound':
+    dport  => 8140,
+    proto  => tcp,
+    action => accept,
+  }
+
   firewall {'105 puppetdb inbound':
     dport  => 8080,
     proto  => tcp,
diff --git a/spec/classes/puppet_master_spec.rb b/spec/classes/puppet_master_spec.rb
index 61c1b8a..eede881 100644
--- a/spec/classes/puppet_master_spec.rb
+++ b/spec/classes/puppet_master_spec.rb
@@ -18,6 +18,7 @@ describe 'profile::puppet_master', :type => :class do
       it { is_expected.to contain_class('r10k') }
       it { is_expected.to contain_class('r10k::webhook') }
       it { is_expected.to contain_class('r10k::webhook::config') }
+      it { is_expected.to contain_firewall('100 puppet agent inbound') }
       it { is_expected.to contain_firewall('105 puppetdb inbound') }
       it { is_expected.to contain_firewall('110 r10k web hook') }
       it { is_expected.to contain_firewall('115 PE Console') }

Now everything should work!

[rnelson0@build03 ~]$ curl -k https://puppet.nelson.va:8140/packages/current/install.bash | sudo bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 20107  100 20107    0     0  17649      0  0:00:01  0:00:01 --:--:-- 17668[sudo] password for rnelson0:

Loaded plugins: fastestmirror
Cleaning repos: pe_repo
Cleaning up everything
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
pe_repo                                                                                                                                                                                                               | 2.5 kB  00:00:00
pe_repo/primary_db                                                                                                                                                                                                    |  25 kB  00:00:00
Determining fastest mirrors
 * base: mirror.cs.uwp.edu
 * epel: mirror.nexcess.net
 * extras: mirrors.gigenet.com
 * updates: bay.uchicago.edu
Error: No matching Packages to list
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.cs.uwp.edu
 * epel: mirror.nexcess.net
 * extras: mirrors.gigenet.com
 * updates: bay.uchicago.edu
Resolving Dependencies
--> Running transaction check
---> Package puppet-agent.x86_64 0:1.9.2-1.el7 will be installed
--> Finished Dependency Resolution
...
<SNIP>
...

This, too, can take a little while. The installer will start the puppet-agent which includes a first-run attempt. If you look at the PE Console, you’ll note the node still shows as Unreported. Take a look at the log, in my case journalctl, to see why:

[rnelson0@build03 ~]$ journalctl -xe -t puppet-agent
Mar 05 19:14:10 build03 puppet-agent[29949]: Unable to fetch my node definition, but the agent run will continue:
Mar 05 19:14:10 build03 puppet-agent[29949]: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]
Mar 05 19:14:10 build03 puppet-agent[29949]: Retrieving pluginfacts
Mar 05 19:14:10 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/facts.d]) Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to g
Mar 05 19:14:10 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/facts.d]) Could not evaluate: Could not retrieve file metadata for puppet:///pluginfacts: SSL_connect returned=1 errno=0 state=error: certificate verify fai
Mar 05 19:14:10 build03 puppet-agent[29949]: Retrieving plugin
Mar 05 19:14:12 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/lib]) Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get l
Mar 05 19:14:12 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/lib]) Could not evaluate: Could not retrieve file metadata for puppet:///plugins: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [un
Mar 05 19:14:12 build03 puppet-agent[29949]: Loading facts
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]
Mar 05 19:14:15 build03 puppet-agent[29949]: Not using cache on failed catalog
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not retrieve catalog; skipping run
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not send report: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]

Because I messed up the CA import, the certs don’t match up. We can stop the agent, clear out the certs, and try again. Since I have autosigning enabled, it will just work. If you don’t, you will also need to sign the new certs:

[rnelson0@build03 ~]$ sudo systemctl stop puppet
[rnelson0@build03 ~]$ sudo rm -fR $(sudo puppet config print ssldir)
[rnelson0@build03 ~]$ sudo systemctl start puppet
[rnelson0@build03 ~]$ journalctl -xe -t puppet-agent
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not retrieve catalog; skipping run
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not send report: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]
Mar 05 19:15:42 build03 puppet-agent[27685]: Caught TERM; exiting
Mar 05 19:16:41 build03 puppet-agent[30390]: Starting Puppet client version 4.9.3
Mar 05 19:16:46 build03 puppet-agent[30432]: (/File[/opt/puppetlabs/puppet/cache/lib/facter/aio_agent_build.rb]/ensure) defined content as '{md5}cdcc1ff07bc245c66cc1d46be56b3af5'
Mar 05 19:16:46 build03 puppet-agent[30432]: (/File[/opt/puppetlabs/puppet/cache/lib/facter/aio_agent_version.rb]/ensure) defined content as '{md5}d05c8cbf788f47d33efd46a935dda61e'

It’s working now! Check the PE Console shortly and you should see your node reporting successfully.

[Figure: migrating-to-puppet-enterprise-fig-3]

Now that we know we need to clean up the ssldir, we can repeat this process a little more smoothly on the other agents:

sudo rm -fR $(sudo puppet config print ssldir)
sudo yum remove -y $(rpm -qa | grep puppet | xargs)
curl -k https://puppet.nelson.va:8140/packages/current/install.bash | sudo bash
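With only a handful of nodes I ran those steps by hand, but the same three commands can be wrapped in a loop over ssh. A sketch with hypothetical host names — it echoes each remote command rather than executing it, so you can review the output before dropping the echo:

```shell
# Hypothetical inventory -- replace with your own node names.
master="puppet.nelson.va"
for h in web01 web02 db01; do
  # Dry run: print the per-host command; remove 'echo' to execute over ssh.
  # Single quotes keep $(...) from expanding locally.
  echo ssh -t "$h" \
    'sudo rm -fR $(sudo puppet config print ssldir);' \
    'sudo yum remove -y $(rpm -qa | grep puppet | xargs);' \
    "curl -k https://${master}:8140/packages/current/install.bash | sudo bash"
done
```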

In most cases, they should come up with Intentional Changes immediately in the PE Console. I lucked out, but a few may error out for the reasons mentioned earlier, and you’ll have to determine the causes of the failures. I’ve been exploring, but haven’t yet successfully implemented, octocatalog-diff, which should help find discrepancies between catalogs. You can also use the PE Console’s report view to find the problem. From the overview, click on the failed run’s timestamp; alternatively, click on the node’s name, then the Reports tab, then the timestamp of the failed run. Switch to the Events tab there and change the filter to Failed. Expand the failed resource for detailed information. Here’s what the master looked like when the webhook was failing earlier:

[Figure: migrating-to-puppet-enterprise-fig-4]

One final issue you may run into is if the agent isn’t running the same OS version as the master. When I run the installer on a CentOS 6 node, I receive this error:

The agent packages needed to support el-6-x86_64 are not present on your master. To add them, apply the pe_repo::platform::el_6_x86_64 class to your master node and then run Puppet. The required agent packages should be retrieved when puppet runs on the master, after which you can run the install.bash script again.

You can add this class to the master through the PE Console. Go to Nodes -> Classification -> Expand PE Infrastructure -> PE Master. Click on the Classes tab and start typing the listed classname in the Add new class box. Select the right class when it comes up and click Add Class. At the bottom, a new dialog pops up showing you have pending changes. Add as many new classes as you need and click Commit when you’re ready. Our master will need to run puppet agent to receive the new class, then you can try the installation again on your affected nodes.

[Figure: migrating-to-puppet-enterprise-fig-5]

[Figure: migrating-to-puppet-enterprise-fig-6]

[rnelson0@puppet ~]$ sudo puppet agent -t
[sudo] password for rnelson0:
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for puppet.nelson.va
Info: Applying configuration version '1488742359'
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64.repo]/ensure: defined content as '{md5}05805efce6320a80e4af1f9554d500eb'
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64.bash]/ensure: defined content as '{md5}f6fd234b3497b3baf3da9598ffc1d027'
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64-1.8.3]/ensure: created
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64]/ensure: created
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/Pe_staging::Deploy[puppet-agent-el-6-x86_64.tar.gz]/Pe_staging::File[puppet-agent-el-6-x86_64.tar.gz]/Exec[/opt/puppetlabs/server/data/staging/pe_repo-puppet-agent-1.8.3/puppet-agent-el-6-x86_64.tar.gz]/returns: executed successfully
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/Pe_staging::Deploy[puppet-agent-el-6-x86_64.tar.gz]/Pe_staging::Extract[puppet-agent-el-6-x86_64.tar.gz]/Exec[extract puppet-agent-el-6-x86_64.tar.gz]/returns: executed successfully
Notice: Applied catalog in 42.65 seconds

You will probably have some other cleanup to do that’s not quite as specific to the FOSS-to-PE software side. For instance, I have a kickstart server with an EL7 template that installs the PC1 repo and the puppet-agent. If I build a new template, I don’t want that, so I’ll have to clean it up. You may have other FOSS-isms in your setup. Hopefully you know where they are; otherwise you’ll stumble upon them later. If you can’t take care of them right this moment, at least open an issue to track them.

Summary

Migrating from Puppet OpenSource to Puppet Enterprise was not that bad. The plan was relatively straightforward, though I ran into a few issues here and there caused by misreading the instructions and by assumptions about my controlrepo contents, plus one or two legitimate issues with the documentation. I did not successfully migrate my CA, though in the end that didn’t cost me much, since I had to touch every node to replace the puppet agent anyway. In a few hours, I was able to get everything migrated over, and write a really long blog post (almost 5,000 words!) about the journey. It’s difficult to tell since I was multi-tasking, but I estimate the actual work took less than 4 hours. I hope this helps others who are interested in the same journey and that it saves you some missteps. Thanks!

Making the Puppet vRealize Automation plugin work with vRealize Orchestrator

I’m pretty excited about this post! I’ve been building up Puppet for vSphere Admins for a few years now but the final integration aspects between Puppet and vSphere/vCenter were always a little clunky and difficult to maintain without specific dedication to those integration components. Thanks to Puppet and VMware, that’s changed now.

Puppet announced version 2.0 of their Puppet Plugin for vRealize Automation this week. There’s a nice video attached, but there’s one problem – it’s centered on vRealize Automation (vRA) and I am working with vRealize Orchestrator (vRO)! vRO is included with all licenses of vCenter, whereas vRA is a separate product that costs extra, and even though vRA requires a vRO engine to perform a lot of its work, it abstracts a lot of the configuration and implementation details away that vRO-only users need to care about. This means that much of the vRA documentation and guides you find, for the Puppet plugin or otherwise, are always missing some of the important details needed to implement the same functionality – and sometimes won’t work at all if it relies on vRA functionality not available to us.

Don’t worry, though, the Puppet plugin DOES work with vRO! We’ll look at a few workflows to install, run, and remove puppet from nodes and then discuss how we can use them within larger customized workflows. You must already have an installed vRealize Orchestrator 7.x instance configured to talk to your vCenter instance. I’m using vRO 7.0.0 with vCenter 6.0. If you’re using a newer version, some of the dialogs I show may look a little different. If you’re still on vRO 6.x, the configuration will look a LOT different (you may have to do some research to find the equivalent functionality) but the workflow examples should be accurate.

Puppet provides a User Guide for use with a reference implementation. I’ll be mostly repeating Part 2 when installing and configuring, but reality tends to diverge from reference so we’ll explore some non-reference details as well.

Continue reading

Automating Puppet tests with a Jenkins Job, version 1.1

Today, let’s build on version 1.0 of our Jenkins job. We are running builds against every commit, but when someone opens a pull request, they don’t get automated builds or feedback. If the PR submitter even knows about Jenkins, and has network access and a login, they can look at it to find out how the tests went, but most people aren’t going to have that visibility (especially if your Jenkins server is private, as in this example setup). We need to make sure Jenkins is aware of the pull request and that it updates the PR with the status. Our end goal is for each PR to start a Jenkins build and update the PR with a successful check when done:

To get there, we will install and configure a new plugin and configure our job to use the plugin.

Continue reading

Automating Puppet tests with a Jenkins Job, version 1.0

As I’ve worked through setting up Jenkins and Puppet (and remembering my password!), I created a job to automate rspec tests on my puppet controlrepo. I am sure I will go through many iterations of this as I learn more, so we’ll just call this version 1.0. The goal is that when I push a branch to my controlrepo on GitHub, Jenkins automagically runs the test. Today, we will ensure that Jenkins is notified of activity on a GitHub repo, that it spins up a clean test environment without any left over files that may inadvertently assist, and run the tests. What it will NOT do is notify anyone – it won’t work off a Pull Request and provide feedback like Travis CI does, for instance. Hopefully, I will figure that out soon.

The example below is using GitHub. You can certainly make this work with BitBucket, GitLab, Mercurial, and tons of other source control systems and platforms, but you might need some additional Jenkins Plugins. It should be pretty apparent where to change Git/GitHub to the system/platform you chose.

Creating A Job

From the main view of your Jenkins instance, click New Item. Call it whatever you want, choose Freestyle project as the type, and click OK. The next page is going to be where we set up all the parameters for the job. There are tabs across the top AND you can scroll down; you’ll see the same selection items either way. Going from the top to the bottom, the settings that we want:

Continue reading