I have been using Puppet Enterprise at work and Puppet OpenSource at home for a few years now. There’s a lot to love about both products, but since work uses PE and new features tend to land there first, I have been thinking about trying PE at home as well. I don’t have a large fleet or a large change velocity, so I think the conversion of the master might take some work but the agents will be easy. This will let me play with PE-only features in my lab (specifically the PE Pipeline plugin for Jenkins) and reduce concern about drift between lab findings and work usage. It does have the downside that some of my blog articles, which I have always made sure would work with the FOSS edition, may not be as foolproof in the future. However, I rarely saw that as a problem going the other way in the past, with mcollective management being the only exception. I haven’t written about mcollective much, so I think this is worth the tradeoff.
I am going to migrate my systems, rather than start fresh. This article was written as the migration happened, so if you pursue a similar migration, please read the whole article before starting – I did some things backwards or incompletely, and made a few mistakes that you could avoid. It was also written as something of a stream of consciousness; I’ve tried to edit it as best I can without losing that flow. I do hope you find it valuable.
Pre-Flight Checklist
Puppet Enterprise is a commercial product. You can use it for free with up to 10 nodes, though, which is perfect for my home network of 9 managed nodes. After that, it costs somewhere around $200/node/year. Make sure you stay within that limit or pony up – we are responsible adults, we pay for the products we use. If you replace nodes, you can deactivate the old ones so they don’t continue to eat into your license count.
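When a node is retired, I clean it up on the master so it stops counting against the license. The command below reflects my understanding of the PE tooling for this, so double-check it against the documentation for your PE version; the certname is just an example:

# Run on the PE master. `puppet node purge` is my understanding of the PE command
# that deactivates a retired node in PuppetDB and revokes its certificate; verify
# against your PE version's documentation before relying on it.
puppet node purge oldnode.nelson.va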
Since I have an existing Puppet OpenSource master, I wanted to preserve the Certificate Authority and PuppetDB contents. I copied the SSL directory to preserve the CA and used puppet db (via puppet-client-tools) to export the database. I found I had to use sudo su -, because puppet db as root worked but sudo puppet db as my user complained Error: Unknown Puppet subcommand 'db'. I assume this is because /opt/puppetlabs/bin/puppet-db, provided by the client tools package, is not in the PATH that sudo uses. After creating the backups, I copied the files to my desktop, since the server’s IP/hostname is being re-used, preventing a VM-to-VM transfer later.
sudo su -
tar czvf ~rnelson0/cabackup.tar.gz /etc/puppetlabs/puppet/ssl
yum install -y puppet-client-tools
puppet db export ~rnelson0/my-puppetdb-export.tar.gz
chown rnelson0.rnelson0 ~rnelson0/cabackup.tar.gz ~rnelson0/my-puppetdb-export.tar.gz
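Before copying the archives off the server, it is worth a quick sanity check that they contain what you expect; for example:

# Sanity-check the backup archives before moving them off the server.
tar tzf ~rnelson0/cabackup.tar.gz | head
tar tzf ~rnelson0/my-puppetdb-export.tar.gz | head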
I also need to update my vSphere template, which includes puppet’s PC1 repo and the puppet-agent packages. I booted my CentOS template, ran yum update while I was at it, then removed all the puppet FOSS packages from it before deploying the new target puppet master from the updated template.
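For reference, the cleanup on the template looked roughly like this; the package names are what rpm -qa | grep puppet reported in my environment, so adjust for yours:

# Run inside the template VM. Package names are from my environment; check
# `rpm -qa | grep puppet` on yours before removing anything.
yum update -y
yum remove -y puppet-agent puppetlabs-release-pc1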
The template updates and the new VM deploy take quite a while. In the meantime, I had to adjust my controlrepo. I was using jlambert121/puppet to manage the puppet master and agents, and that’s not fully compatible with Puppet Enterprise. Neither is puppetlabs/puppetdb. Starting with the Puppetfile and .fixtures.yml files, remove those modules. Next, all instances of the modules need to be removed from profiles and hiera data (it can technically stay in hiera, but it’s going to clutter things up). I also have a few references directly to file locations that could have changed. Since I was already on Puppet 4 and the locations between FOSS and PE are the same, I have no changes to make, but if you’re migrating from Puppet 3, it’s something to consider. Here’s what my git diff looks like:
$ git diff
diff --git a/.fixtures.yml b/.fixtures.yml
index cc079c7..f3fc171 100644
--- a/.fixtures.yml
+++ b/.fixtures.yml
@@ -27,9 +27,6 @@ fixtures:
     portage:
       repo: "gentoo/portage"
       ref: "2.3.0"
-    puppet:
-      repo: "jlambert121/puppet"
-      ref: "0.8.2"
     createrepo:
       repo: "palli/createrepo"
       ref: "1.1.0"
@@ -93,9 +90,6 @@ fixtures:
     postgresql:
       repo: "puppetlabs/postgresql"
       ref: "4.8.0"
-    puppetdb:
-      repo: "puppetlabs/puppetdb"
-      ref: "5.1.2"
     ruby:
       repo: "puppetlabs/ruby"
       ref: "0.5.0"
diff --git a/Puppetfile b/Puppetfile
index 25056eb..5b9ce0a 100644
--- a/Puppetfile
+++ b/Puppetfile
@@ -10,7 +10,6 @@ mod 'garethr/docker', '5.3.0'
 mod 'garethr/erlang', '0.3.0'
 mod 'gentoo/portage', '2.3.0'
 mod 'golja/gnupg', '1.2.3'
-mod 'jlambert121/puppet', '0.8.2'
 mod 'maestrodev/rvm', '1.13.1'
 mod 'palli/createrepo', '1.1.0'
 mod 'puppet/archive', '1.1.2'
@@ -34,7 +33,6 @@ mod 'puppetlabs/mysql', '3.10.0'
 mod 'puppetlabs/ntp', '6.0.0'
 mod 'puppetlabs/pe_gem', '0.2.0'
 mod 'puppetlabs/postgresql', '4.8.0'
-mod 'puppetlabs/puppetdb', '5.1.2'
 mod 'puppetlabs/ruby', '0.5.0'
 mod 'puppetlabs/stdlib', '4.14.0'
 mod 'puppetlabs/vcsrepo', '1.5.0'
diff --git a/dist/profile/manifests/base.pp b/dist/profile/manifests/base.pp
index 5316740..b0da54d 100644
--- a/dist/profile/manifests/base.pp
+++ b/dist/profile/manifests/base.pp
@@ -23,7 +23,6 @@ class profile::base {
   include ::ntp
   include ::rsyslog::client
   include ::motd
-  include puppet
 
   # Yum repository
   $yumrepo_url = hiera('yumrepo_url')
@@ -46,17 +45,4 @@ class profile::base {
   if ($local_users) { create_resources('local_user', $local_users) }
 }
diff --git a/dist/profile/manifests/puppet_master.pp b/dist/profile/manifests/puppet_master.pp
index c694306..cd44ea5 100644
--- a/dist/profile/manifests/puppet_master.pp
+++ b/dist/profile/manifests/puppet_master.pp
@@ -12,7 +12,6 @@
 #
 class profile::puppet_master {
   include ::epel
-  include ::puppet
 
   include ::hiera
diff --git a/dist/role/manifests/puppet.pp b/dist/role/manifests/puppet.pp
index ae00505..7867575 100644
--- a/dist/role/manifests/puppet.pp
+++ b/dist/role/manifests/puppet.pp
@@ -13,5 +13,4 @@ class role::puppet {
   include profile::base # All roles should have the base profile
   include profile::puppet_master
-  include profile::puppetdb
 }
diff --git a/hiera/global.yaml b/hiera/global.yaml
index 601cc6a..fbc1058 100644
--- a/hiera/global.yaml
+++ b/hiera/global.yaml
@@ -37,7 +37,3 @@ ntp::servers:
   - '0.pool.ntp.org'
   - '2.centos.pool.ntp.org'
   - '1.rhel.pool.ntp.org'
-puppet::runmode: service
-puppet::env: production
diff --git a/hiera/puppet_role/puppet.yaml b/hiera/puppet_role/puppet.yaml
index 04e30f1..7244f4b 100644
--- a/hiera/puppet_role/puppet.yaml
+++ b/hiera/puppet_role/puppet.yaml
@@ -7,23 +7,6 @@ hiera::hierarchy:
   - 'global'
 hiera::datadir: '/etc/puppetlabs/code/environments/%%{::}{::environment}/hiera'
 hiera::puppet_conf_manage: false
-puppet::server: true
-puppet::server_version: '2.3.0-1.el7'
-puppet::server_reports:
-  - 'puppetdb'
-puppet::dns_alt_names:
-  - 'puppet'
-puppet::puppetdb_server: 'puppet.nelson.va'
-puppet::puppetdb: true
-puppet::manage_puppetdb: false
-puppet::manage_hiera: false
-puppet::firewall: true
-puppetdb::listen_address: '0.0.0.0'
-puppetdb::ssl_set_cert_paths: true
-puppetdb::master::config::restart_puppet: false
-puppetdb::node_ttl: 30d
-puppetdb::node_purge_ttl: 60d
-puppetdb::report_ttl: 30d
 r10k::version: '2.1.1'
 r10k::sources:
   puppet:
diff --git a/spec/classes/base_spec.rb b/spec/classes/base_spec.rb
index fc33e31..04f8460 100644
--- a/spec/classes/base_spec.rb
+++ b/spec/classes/base_spec.rb
@@ -9,8 +9,6 @@ describe 'profile::base', :type => :class do
   it { is_expected.to create_class('profile::base') }
   it { is_expected.to contain_class('profile::linuxfw') }
   it { is_expected.to contain_class('profile::symlinks') }
-  it { is_expected.to contain_class('puppet') }
   it { is_expected.to contain_class('motd') }
   it { is_expected.to contain_class('ntp') }
   it { is_expected.to contain_class('ssh::server') }
diff --git a/spec/classes/puppet_master_spec.rb b/spec/classes/puppet_master_spec.rb
index d3d2762..8c17ee0 100644
--- a/spec/classes/puppet_master_spec.rb
+++ b/spec/classes/puppet_master_spec.rb
@@ -14,20 +14,12 @@ describe 'profile::puppet_master', :type => :class do
   context 'with defaults for all parameters' do
     it { is_expected.to create_class('profile::puppet_master') }
     it { is_expected.to contain_class('epel') }
-    it { is_expected.to contain_class('puppet') }
-
-    # These resources are included based on hieradata
-    it { is_expected.to contain_class('puppet::server') }
-    it { is_expected.to contain_package('puppetserver').
-      with_ensure('latest')
-    }
-
     it { is_expected.to contain_class('hiera') }
     it { is_expected.to contain_class('r10k') }
     it { is_expected.to contain_class('r10k::webhook') }

# I also removed dist/profile/manifests/puppetdb.pp as profile::puppetdb was no longer required
After making sure all my tests passed, I merged my PR. There are limits to what I could anticipate at this point, but it’s a solid start. Refactoring always has risks, and I expected to – and did – revisit this with some tweaks later. As mentioned at the top of the article, please read all the way to the end before starting your own migration so you can integrate all those changes at once.
Installing Puppet Enterprise
Once the new VM is provisioned, it needs to be bootstrapped. It needs Puppet Enterprise installed, r10k configured (you can use Code Manager, but I have reasons to stick with r10k for now) and working, the CA and puppetdb data restored, and probably a few other things I didn’t account for at the start. Let’s find out by getting PE installed.
Puppet’s PE installation instructions are very complete. You can install at the CLI or via a Web-based method. The initial steps are the same. First, download the latest PE to the master and expand it:
[root@puppet ~]# wget 'https://pm.puppetlabs.com/cgi-bin/download.cgi?dist=el&rel=7&arch=x86_64&ver=latest' -O pe.latest.tar.gz
[root@puppet ~]# tar xfz pe.latest.tar.gz
Change into the new directory and run the installer:
[root@puppet ~]# cd puppet-enterprise-2016.5.2-el-7-x86_64/
[root@puppet puppet-enterprise-2016.5.2-el-7-x86_64]# ./puppet-enterprise-installer
At this point, you can use the web-based or CLI method; I’ll let you choose, it doesn’t matter in the end. You’ll need to provide at least a console_admin_password, which of course you should not share. If you’re doing a monolithic install, you’re ready to proceed; if not, there’s more configuration to change. Once you’re ready, proceed. Save the resulting custom-pe.conf file in case you need it later, particularly for upgrades. The install took about 8 minutes for me, so transfer your CA and puppetdb backups over while you wait.
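If you hang onto that answer file, a later install or upgrade can be run non-interactively. The sketch below is only how I understand it from the PE docs; the -c flag and the key names are assumptions you should verify against your PE release before relying on them:

# Hypothetical non-interactive run reusing a saved answer file. Verify the -c
# flag and the key names against the documentation for your PE version.
cat > custom-pe.conf <<'EOF'
"console_admin_password": "CHANGEME"
"puppet_enterprise::puppet_master_host": "puppet.nelson.va"
EOF
./puppet-enterprise-installer -c custom-pe.conf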
Now, I’m not sure why, but the installer, which presumably knows from the choices made that this is a monolithic install (perhaps it should grow a flag for this), says this at the end:
If this is a monolithic configuration, run 'puppet agent -t' to complete the setup of this system.
Go ahead and do that, or follow the subsequent instructions for split installations. Also note that the PE installer does not automatically add iptables rules. If you have iptables enabled (you should!), you’ll need to stop it to access the console; systemctl stop iptables works on EL7. The default user is admin, with the password you set. There’s more information on the console, including how to set up other authentication sources such as Active Directory. If you don’t see the default username documented there, you’re not alone; I had to go digging to determine that!
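If you’d rather not stop the firewall entirely, a temporary rule gets you into the console until the permanent rule is managed by Puppet later in this article; this is just a stopgap:

# Temporary, non-persistent hole for the PE Console; the proper firewall rule is
# added via profile::puppet_master further down.
iptables -I INPUT -p tcp --dport 443 -j ACCEPT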
Once you log in, you’ll see your wonderful looking Console!
Now we have a working Puppet Enterprise master, but we need to do a lot of configuration. If you have the ability, now is a great time to take a snapshot in case any future steps go sideways.
Bootstrapping The Master
First, let’s import the PuppetDB data. This is fairly simple; it just requires a few commands that become apparent as you work through it. The authentication required is the Console admin user. You will most likely need to start a new session to receive the updated PATH value. Then puppet db works and you can try an import:
[root@puppet ~]# puppet db
Invalid arguments.

Usage:
  puppet-db [options] (--version | --help)
  puppet-db [options] export <path> [--anon=<profile>]
  puppet-db [options] import <path>
  puppet-db [options] status

[root@puppet ~]# puppet db import my-puppetdb-export.tar.gz
Error: ssl requires a token, please use `puppet access login` to retrieve a token (alternatively use 'cert' and 'key' for whitelist validation)
[root@puppet ~]# puppet access login
Enter your Puppet Enterprise credentials.
Username: admin
Password:
Access token saved to: /root/.puppetlabs/token
[root@puppet ~]# puppet db import my-puppetdb-export.tar.gz
[root@puppet ~]#
I can now see that I have 8 nodes that haven’t reported in hours, and if I click on one, I can get all the details about them I want, so this looks good.
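If you prefer the CLI over the console for this check, puppet query (also part of the client tools) can list what PuppetDB now knows about. The PQL below is my usual quick check; treat it as an example and verify the syntax against the PuppetDB docs:

# List the certnames PuppetDB knows about after the import (PQL via puppet-query).
puppet query 'nodes[certname] {}'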
The CA should be pretty simple: just extract the tar from /, or from the correct relative path depending on how you tarred it up:
[root@puppet ~]# cd /
[root@puppet /]# tar xzf /root/cabackup.tar.gz
[root@puppet /]# ls -la /etc/puppetlabs/puppet/ssl/ca/signed/yumrepo01.nelson.va.pem
-rw-r--r--. 1 pe-puppet pe-puppet 1956 Nov 20 22:44 /etc/puppetlabs/puppet/ssl/ca/signed/yumrepo01.nelson.va.pem
Unfortunately, after I did this, puppetdb failed to start up upon restart, and currently running services had started failing even before the restart. I restored my snapshot and continued on. It turns out that it may be possible to import the CA, but the files must be present before running the PE installer (steps 4 and 5). That article is for PE-to-PE migrations, so I am not 100% certain this would have worked for PO to PE, but I wasn’t willing to go back and start over at this point to find out. You may be able to adapt Alex Harden’s backup and migration scripts for this purpose if you want to give this a shot yourself.
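For anyone who wants to attempt it, the order I would try next time looks roughly like this. It is untested on my part, so take it as a sketch of the idea rather than a verified procedure:

# Untested sketch: restore the old CA *before* running the PE installer so it can
# adopt the existing certificates, then install.
cd /
tar xzf /root/cabackup.tar.gz
cd /root/puppet-enterprise-2016.5.2-el-7-x86_64/
./puppet-enterprise-installer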
Next, we need to get r10k working, hiera configured, and the webhook working. There are a number of ways to do this; what I describe below is only one possible way. I’m using puppet/r10k and the open source setup, even though I’m on Puppet Enterprise. You can also use Code Manager. I stick with r10k because I use r10k deploy module X, which currently does not have an equivalent in CM. The hiera setup comes from puppet/hiera. I have bootstrap files for both in my controlrepo, so the first thing to do is check that out. I’ll need an ssh key that has permissions to my repos, on GitHub in this case.
[root@puppet ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
<SNIP>
The key's randomart image is:
<SNIP>
[root@puppet ~]# cat /root/.ssh/id_rsa.pub
<SNIP>
Add this as an SSH key and then clone your repo:
[root@puppet ~]# git clone $REPO_URL
Cloning into 'controlrepo'...
The authenticity of host 'github.com (192.30.253.112)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.253.112' (RSA) to the list of known hosts.
remote: Counting objects: 2376, done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 2376 (delta 6), reused 1 (delta 1), pack-reused 2346
Receiving objects: 100% (2376/2376), 579.88 KiB | 0 bytes/s, done.
Resolving deltas: 100% (1082/1082), done.
You’ll have to accept the key from github.com, or wherever your repo is hosted. Now I can access my bootstrap info. First is hiera. The hiera file goes in /etc/puppetlabs/code/hiera.yaml and requires a restart of the server service:
[root@puppet controlrepo]# cat hiera.yaml
---
:backends:
  - yaml
:logger: console
:hierarchy:
  - "clientcert/%{clientcert}"
  - "puppet_role/%{puppet_role}"
  - global
:yaml:
  :datadir: /etc/puppetlabs/code/environments/%{::environment}/hiera
[root@puppet controlrepo]# cp hiera.yaml /etc/puppetlabs/code/hiera.yaml
cp: overwrite ‘/etc/puppetlabs/code/hiera.yaml’? y
[root@puppet controlrepo]# ls -l /etc/puppetlabs/puppet/hiera.yaml
-rw-r--r--. 1 root root 207 Mar  4 23:31 /etc/puppetlabs/puppet/hiera.yaml
[root@puppet controlrepo]# systemctl restart pe-puppetserver
[root@puppet controlrepo]#
Next up is installing and configuring r10k. After the module is installed, the file r10k_installation.pp can be used:
[root@puppet controlrepo]# puppet module install puppet-r10k
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└─┬ puppet-r10k (v4.2.0)
  ├─┬ gentoo-portage (v2.3.0)
  │ └── puppetlabs-concat (v2.2.0)
  ├── puppet-make (v1.1.0)
  ├── puppetlabs-gcc (v0.3.0)
  ├── puppetlabs-git (v0.5.0)
  ├── puppetlabs-inifile (v1.6.0)
  ├── puppetlabs-pe_gem (v0.2.0)
  ├── puppetlabs-ruby (v0.6.0)
  ├── puppetlabs-stdlib (v4.15.0)
  └── puppetlabs-vcsrepo (v1.5.0)
[root@puppet controlrepo]# cat r10k_installation.pp
class { 'r10k':
  version => '2.5.0',
  sources => {
    'puppet' => {
      'remote'  => '$REPO_URL',
      'basedir' => $::settings::environmentpath,
      'prefix'  => false,
    },
  },
  manage_modulepath => false,
}
[root@puppet controlrepo]# puppet apply r10k_installation.pp
Notice: Compiled catalog for puppet.nelson.va in environment production in 0.81 seconds
Notice: /Stage[main]/R10k::Install::Puppet_gem/File[/usr/bin/r10k]/ensure: created
Notice: /Stage[main]/R10k::Config/File[r10k.yaml]/ensure: defined content as '{md5}b505df8c46140c77dee693fa525c2aac'
Notice: Applied catalog in 1.09 seconds
To make sure r10k is working – configuration and ssh key – I can fetch a list of environments it will deploy:
[root@puppet controlrepo]# r10k deploy display --fetch
WARN	 -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN	 -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.
---
:sources:
- :name: :puppet
  :basedir: "/etc/puppetlabs/code/environments"
  :remote: $REPO_URL
  :environments:
  - domainjoin
  - octocatalog
  - production
  - puppet_agent
I’ve noted this warning in puppet/r10k PR342, to be fixed fairly soon.
This is based on the branches that exist in the :remote URL; it will obviously be different for you. I then deploy the production environment, which can take a little while, and check the status of the environment afterward:
[root@puppet controlrepo]# r10k deploy environment production -p
WARN	 -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN	 -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.
[root@puppet controlrepo]# r10k deploy display --detail
WARN	 -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN	 -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.
---
:sources:
- :name: :puppet
  :basedir: "/etc/puppetlabs/code/environments"
  :remote: $REPO_URL
  :environments:
  - :name: domainjoin
    :signature:
    :status: :absent
  - :name: octocatalog
    :signature:
    :status: :absent
  - :name: production
    :signature: 44a7d905a2f261e58b995c35d8308b152bad1749
    :status: :outdated
  - :name: puppet_agent
    :signature:
    :status: :absent
I then run a hiera command to see that it works:
[root@puppet controlrepo]# hiera ntp::servers ::environment=production --debug
DEBUG: 2017-03-04 23:56:11 +0000: Hiera YAML backend starting
DEBUG: 2017-03-04 23:56:11 +0000: Looking up ntp::servers in YAML backend
DEBUG: 2017-03-04 23:56:11 +0000: Ignoring bad definition in :hierarchy: 'clientcert/'
DEBUG: 2017-03-04 23:56:11 +0000: Ignoring bad definition in :hierarchy: 'puppet_role/'
DEBUG: 2017-03-04 23:56:11 +0000: Looking for data source global
DEBUG: 2017-03-04 23:56:11 +0000: Found ntp::servers in global
["0.pool.ntp.org", "2.centos.pool.ntp.org", "1.rhel.pool.ntp.org"]
I should be ready to run puppet now. I expect maybe an error or two from the catalog, but I should be able to get it to run at least.
[root@puppet ~]# puppet agent -t --noop
Notice: /File[/etc/puppetlabs/code/environments/production]/seluser: seluser changed 'unconfined_u' to 'system_u'
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Notice: /File[/opt/puppetlabs/puppet/cache/lib/augeas]/ensure: created
Notice: /File[/opt/puppetlabs/puppet/cache/lib/augeas/lenses]/ensure: created
Notice: /File[/opt/puppetlabs/puppet/cache/lib/augeas/lenses/fixedsudoers.aug]/ensure: defined content as '{md5}1492fda700091a906d27195bcdc40c90'
Notice: /File[/opt/puppetlabs/puppet/cache/lib/facter/apache_version.rb]/ensure: defined content as '{md5}751e89814b4eee452388b698276f7be3'
... <SNIP> ...
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: File[/usr/local/bin/facter] is already declared in file /etc/puppetlabs/code/environments/production/dist/profile/manifests/symlinks.pp:18; cannot redeclare at /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/symlinks.pp:37 at /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/symlinks.pp:37:5 on node puppet.nelson.va
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Well, that’s not as exciting as I hoped, but it’s to be expected. I guess I don’t need the symlinks anymore, so I removed that from my profile::base. After I committed the change, I had to run r10k deploy environment production again since the webhook isn’t set up yet. I found one more issue and removed the Package['puppetdb'] -> Service[webhook] ordering in profile::puppet_master. After these tweaks, I got to the point where a run would occur:
[root@puppet ~]# puppet agent -t --noop
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Applying configuration version '1488672453'
Notice: /Stage[main]/Motd/File[/etc/motd]/content:
... <SNIP> ...
Notice: Class[R10k::Webhook]: Would have triggered 'refresh' from 7 events
Notice: Stage[main]: Would have triggered 'refresh' from 27 events
Notice: Applied catalog in 19.10 seconds
I actually intended to leave things here overnight, as it was dinner time, but since I was adjusting production in real time, guess what happened inside of 30 minutes? Yep, the master checked in with itself and completed a run. I have two problems now. First, I haven’t added a port 443 firewall rule, so I have to stop iptables to see the console again. The second issue is that the webhook won’t start:
[rnelson0@puppet ~]$ sudo puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for puppet.nelson.va
Info: Applying configuration version '1488735606'
Error: Systemd start for webhook failed!
journalctl log for webhook:
-- Logs begin at Sat 2017-03-04 22:39:43 UTC, end at Sun 2017-03-05 17:40:59 UTC. --
Mar 05 17:40:59 puppet webhook[2560]: /usr/local/bin/webhook:46:in `initialize': No such file or directory @ rb_sysopen - /etc/puppetlabs/puppetdb/ssl/public.pem (Errno::ENOENT)
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `open'
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `<main>'
Error: /Stage[main]/R10k::Webhook/Service[webhook]/ensure: change from stopped to running failed: Systemd start for webhook failed!
journalctl log for webhook:
-- Logs begin at Sat 2017-03-04 22:39:43 UTC, end at Sun 2017-03-05 17:40:59 UTC. --
Mar 05 17:40:59 puppet webhook[2560]: /usr/local/bin/webhook:46:in `initialize': No such file or directory @ rb_sysopen - /etc/puppetlabs/puppetdb/ssl/public.pem (Errno::ENOENT)
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `open'
Mar 05 17:40:59 puppet webhook[2560]: from /usr/local/bin/webhook:46:in `<main>'
Notice: Applied catalog in 32.30 seconds
That cert no longer exists, but by default puppet/r10k will use the mcollective cert for the peadmin user, so I no longer need to specify a cert location. I can fix the firewall rule and this at the same time. Here’s the diff:
$ git diff
diff --git a/dist/profile/manifests/puppet_master.pp b/dist/profile/manifests/puppet_master.pp
index 74fcabd..85cf3d3 100644
--- a/dist/profile/manifests/puppet_master.pp
+++ b/dist/profile/manifests/puppet_master.pp
@@ -43,4 +43,10 @@ class profile::puppet_master {
     proto  => tcp,
     action => accept,
   }
+
+  firewall { '115 PE Console':
+    dport  => 443,
+    proto  => tcp,
+    action => accept,
+  }
 }
diff --git a/hiera/puppet_role/puppet.yaml b/hiera/puppet_role/puppet.yaml
index 7244f4b..d8f9954 100644
--- a/hiera/puppet_role/puppet.yaml
+++ b/hiera/puppet_role/puppet.yaml
@@ -15,8 +15,6 @@ r10k::sources:
     prefix: false
 r10k::manage_modulepath: false
 r10k::webhook::config::use_mcollective: false
-r10k::webhook::config::public_key_path: '/etc/puppetlabs/puppetdb/ssl/public.pem'
-r10k::webhook::config::private_key_path: '/etc/puppetlabs/puppetdb/ssl/private.pem'
 r10k::webhook::config::command_prefix: 'umask 0022;'
 r10k::webhook::user: 'root'
 r10k::webhook::group: 0
diff --git a/spec/classes/puppet_master_spec.rb b/spec/classes/puppet_master_spec.rb
index 8c17ee0..61c1b8a 100644
--- a/spec/classes/puppet_master_spec.rb
+++ b/spec/classes/puppet_master_spec.rb
@@ -20,6 +20,7 @@ describe 'profile::puppet_master', :type => :class do
     it { is_expected.to contain_class('r10k::webhook::config') }
     it { is_expected.to contain_firewall('105 puppetdb inbound') }
     it { is_expected.to contain_firewall('110 r10k web hook') }
+    it { is_expected.to contain_firewall('115 PE Console') }
     it { is_expected.to contain_cron('home_config deploy') }
     it { is_expected.to contain_file('/etc/puppetlabs/puppet/autosign.conf') }
   end
With this in place, the puppet run completes:
[rnelson0@puppet ~]$ sudo puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for puppet.nelson.va
Info: Applying configuration version '1488736224'
Notice: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]/content:
--- /etc/webhook.yaml	2017-03-05 00:10:55.948195709 +0000
+++ /tmp/puppet-file20170305-2794-1p80o5z	2017-03-05 17:51:00.463372988 +0000
@@ -14,9 +14,9 @@
 port: "8088"
 prefix: false
 prefix_command: "/bin/echo example"
-private_key_path: "/etc/puppetlabs/puppetdb/ssl/private.pem"
+private_key_path: "/var/lib/peadmin/.mcollective.d/peadmin-private.pem"
 protected: true
-public_key_path: "/etc/puppetlabs/puppetdb/ssl/public.pem"
+public_key_path: "/var/lib/peadmin/.mcollective.d/peadmin-cert.pem"
 r10k_deploy_arguments: "-pv"
 server_software: "WebHook"
 use_mco_ruby: false
Info: Computing checksum on file /etc/webhook.yaml
Info: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]: Filebucketed /etc/webhook.yaml to puppet with sum 0163db804d34fabfaae4103a6e22980f
Notice: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]/content: content changed '{md5}0163db804d34fabfaae4103a6e22980f' to '{md5}3832474b09421d12f3ae1283eaabffe5'
Info: /Stage[main]/R10k::Webhook::Config/File[webhook.yaml]: Scheduling refresh of Service[webhook]
Notice: /Stage[main]/Profile::Puppet_master/Firewall[115 PE Console]/ensure: created
Notice: /Stage[main]/R10k::Webhook/Service[webhook]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/R10k::Webhook/Service[webhook]: Unscheduling refresh on Service[webhook]
Notice: Applied catalog in 42.27 seconds
[rnelson0@puppet ~]$ sudo systemctl status webhook
● webhook.service - R10K Webhook Service
   Loaded: loaded (/usr/lib/systemd/system/webhook.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-03-05 17:51:27 UTC; 25s ago
 Main PID: 3420 (webhook)
   CGroup: /system.slice/webhook.service
           └─3420 /opt/puppetlabs/puppet/bin/ruby /usr/local/bin/webhook
And, I can now reach my puppet console without having to stop iptables! I tested the webhook, just to make sure it was working properly:
[rnelson0@build03 controlrepo:test]$ git push origin test
Counting objects: 33, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (19/19), done.
Writing objects: 100% (21/21), 1.60 KiB | 0 bytes/s, done.
Total 21 (delta 12), reused 0 (delta 0)
remote: Resolving deltas: 100% (12/12), completed with 8 local objects.
To $REPO_URL
 * [new branch]      test -> test

[rnelson0@puppet ~]$ tail -f /var/log/webhook/access.log
[2017-03-05 18:00:04] DEBUG accept: 192.30.252.42:53369
[2017-03-05 18:00:04] DEBUG Rack::Handler::WEBrick is invoked.
[2017-03-05 18:00:04] INFO authenticated: peadmin
[2017-03-05 18:01:20] INFO message: triggered: umask 0022; r10k deploy environment test -pv
WARN	 -> The r10k configuration file at /etc/r10k.yaml is deprecated.
WARN	 -> Please move your r10k configuration to /etc/puppetlabs/r10k/r10k.yaml.
INFO	 -> Deploying environment /etc/puppetlabs/code/environments/test
INFO	 -> Environment test is now at 669433b892e0ee4963a723af7b40de3e4b4b044b
... <SNIP> ...
 branch: test
[2017-03-05 18:01:20] DEBUG close: 192.30.252.42:53369
Before moving on, I must not forget that while the webhook is working, it has ONLY deployed two environments, production and test. The others are still absent (note: I have no idea why the status is always outdated for each environment, but the signature matches the latest commit hash, so it is not actually outdated):
  :environments:
  - :name: domainjoin
    :signature:
    :status: :absent
  - :name: octocatalog
    :signature:
    :status: :absent
  - :name: production
    :signature: 8f63c0e41decbd02e6ebd8307a178fe69aff9b61
    :status: :outdated
  - :name: puppet_agent
    :signature:
    :status: :absent
  - :name: test
    :signature: 669433b892e0ee4963a723af7b40de3e4b4b044b
    :status: :outdated
I use r10k deploy environment -p to deploy the remaining branches. That completes the master bootstrap.
Agent Updates
The agent updates are potentially much more complex. In my case, it mostly just works, because I was not deploying mcollective on my nodes. PE deploys its own mcollective, and if you are managing it yourself you may run into issues. I also removed the class managing the puppet agent locally, the other likely source of conflict. It is entirely possible that some other per-profile changes are required, so vet your own setup closely.
I also have a whopping 9 nodes in my home lab, some of which are still running EL6 and are better replaced with fresh EL7 nodes. The following steps were performed by hand due to that low number. Were this a large fleet at work, or even just two dozen nodes, I would have been more interested in automating it or simply deploying fresh nodes everywhere. But as a one-time event, I settled on the manual route.
The first step was to rip out the puppet FOSS packages. rpm -qa | grep puppet gives me the list of what to uninstall on the nodes I want to preserve. It may vary based on the role assigned to a node. For instance, on my build node:
[rnelson0@build03 controlrepo:production]$ rpm -qa | grep puppet
puppet-agent-1.9.2-1.el7.x86_64
puppetlabs-release-pc1-1.1.0-4.el7.noarch
puppetdb-termini-4.2.4-1.el7.noarch
[rnelson0@build03 controlrepo:production]$ sudo yum remove -y $(rpm -qa | grep puppet | xargs)
To install PE, open the Console in your web browser and go to Nodes -> Unsigned Certificates (https://puppet/#/node_groups/certificates or similar). You’ll find a curl | bash command there. There is also a repo on the PE master that you can add to a node to install the packages that way, but then you’re relying on keeping up with any changes to the script passed to bash, so I’d advise against it. I pasted the line into the shell on my build node:
[rnelson0@build03 ~]$ curl -k https://puppet.nelson.va:8140/packages/current/install.bash | sudo bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed connect to puppet.nelson.va:8140; No route to host
Well, that was unexpected! It turns out that jlambert121/puppet had been creating the iptables rule for inbound tcp/8140 on the master, so with that module removed I need to add the rule myself. It’s a silly thing to have overlooked, but that’s life:
$ git diff
diff --git a/dist/profile/manifests/puppet_master.pp b/dist/profile/manifests/puppet_master.pp
index 85cf3d3..c97c20e 100644
--- a/dist/profile/manifests/puppet_master.pp
+++ b/dist/profile/manifests/puppet_master.pp
@@ -32,6 +32,12 @@ class profile::puppet_master {
     source => 'puppet:///modules/home_config/master/autosign.conf',
   }
 
+  firewall {'100 puppet agent inbound':
+    dport  => 8140,
+    proto  => tcp,
+    action => accept,
+  }
+
   firewall {'105 puppetdb inbound':
     dport  => 8080,
     proto  => tcp,
diff --git a/spec/classes/puppet_master_spec.rb b/spec/classes/puppet_master_spec.rb
index 61c1b8a..eede881 100644
--- a/spec/classes/puppet_master_spec.rb
+++ b/spec/classes/puppet_master_spec.rb
@@ -18,6 +18,7 @@ describe 'profile::puppet_master', :type => :class do
     it { is_expected.to contain_class('r10k') }
     it { is_expected.to contain_class('r10k::webhook') }
     it { is_expected.to contain_class('r10k::webhook::config') }
+    it { is_expected.to contain_firewall('100 puppet agent inbound') }
     it { is_expected.to contain_firewall('105 puppetdb inbound') }
     it { is_expected.to contain_firewall('110 r10k web hook') }
     it { is_expected.to contain_firewall('115 PE Console') }
Now everything should work!
[rnelson0@build03 ~]$ curl -k https://puppet.nelson.va:8140/packages/current/install.bash | sudo bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 20107  100 20107    0     0  17649      0  0:00:01  0:00:01 --:--:-- 17668
[sudo] password for rnelson0:
Loaded plugins: fastestmirror
Cleaning repos: pe_repo
Cleaning up everything
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
pe_repo                                                  | 2.5 kB  00:00:00
pe_repo/primary_db                                       |  25 kB  00:00:00
Determining fastest mirrors
 * base: mirror.cs.uwp.edu
 * epel: mirror.nexcess.net
 * extras: mirrors.gigenet.com
 * updates: bay.uchicago.edu
Error: No matching Packages to list
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.cs.uwp.edu
 * epel: mirror.nexcess.net
 * extras: mirrors.gigenet.com
 * updates: bay.uchicago.edu
Resolving Dependencies
--> Running transaction check
---> Package puppet-agent.x86_64 0:1.9.2-1.el7 will be installed
--> Finished Dependency Resolution
... <SNIP> ...
This, too, can take a little while. The installer starts the puppet agent, which kicks off a first run. If you look at the PE Console, you’ll note the node still shows as Unreported. Take a look at the log, journalctl in my case, to see why:
[rnelson0@build03 ~]$ journalctl -xe -t puppet-agent
Mar 05 19:14:10 build03 puppet-agent[29949]: Unable to fetch my node definition, but the agent run will continue:
Mar 05 19:14:10 build03 puppet-agent[29949]: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]
Mar 05 19:14:10 build03 puppet-agent[29949]: Retrieving pluginfacts
Mar 05 19:14:10 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/facts.d]) Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to g
Mar 05 19:14:10 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/facts.d]) Could not evaluate: Could not retrieve file metadata for puppet:///pluginfacts: SSL_connect returned=1 errno=0 state=error: certificate verify fai
Mar 05 19:14:10 build03 puppet-agent[29949]: Retrieving plugin
Mar 05 19:14:12 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/lib]) Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get l
Mar 05 19:14:12 build03 puppet-agent[29949]: (/File[/opt/puppetlabs/puppet/cache/lib]) Could not evaluate: Could not retrieve file metadata for puppet:///plugins: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [un
Mar 05 19:14:12 build03 puppet-agent[29949]: Loading facts
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]
Mar 05 19:14:15 build03 puppet-agent[29949]: Not using cache on failed catalog
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not retrieve catalog; skipping run
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not send report: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]
Because I messed up the CA import, the certs don’t match up. We can stop the agent, clear out the certs, and try again. Since I have autosigning enabled, it will just work. If you don’t, you will also need to sign the new certs:
[rnelson0@build03 ~]$ sudo systemctl stop puppet
[rnelson0@build03 ~]$ sudo rm -fR $(sudo puppet config print ssldir)
[rnelson0@build03 ~]$ sudo systemctl start puppet
[rnelson0@build03 ~]$ journalctl -xe -t puppet-agent
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not retrieve catalog; skipping run
Mar 05 19:14:15 build03 puppet-agent[29949]: Could not send report: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.nelson.va]
Mar 05 19:15:42 build03 puppet-agent[27685]: Caught TERM; exiting
Mar 05 19:16:41 build03 puppet-agent[30390]: Starting Puppet client version 4.9.3
Mar 05 19:16:46 build03 puppet-agent[30432]: (/File[/opt/puppetlabs/puppet/cache/lib/facter/aio_agent_build.rb]/ensure) defined content as '{md5}cdcc1ff07bc245c66cc1d46be56b3af5'
Mar 05 19:16:46 build03 puppet-agent[30432]: (/File[/opt/puppetlabs/puppet/cache/lib/facter/aio_agent_version.rb]/ensure) defined content as '{md5}d05c8cbf788f47d33efd46a935dda61e'
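If autosigning is not enabled in your environment, you will also need to sign each new request on the master before the agent can complete a run. On this Puppet 4-era version that looks like the following; the certname is just an example:

# On the master: list pending certificate requests, then sign the one you expect.
puppet cert list
puppet cert sign build03.nelson.va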
It’s working now! Check the PE Console shortly and you should see your node reporting successfully.
Now that we know we need to clean up the ssldir, we can repeat this process a little more smoothly on the other agents:
sudo rm -fR $(sudo puppet config print ssldir)
sudo yum remove -y $(rpm -qa | grep puppet | xargs)
curl -k https://puppet.nelson.va:8140/packages/current/install.bash | sudo bash
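Were this more than a handful of nodes, I would wrap those same steps in a loop over ssh rather than pasting them by hand. A rough sketch, with placeholder hostnames and assuming your user can run the commands via sudo on each node:

# Rough sketch only: hostnames are placeholders, adjust for your environment.
for node in build03 yumrepo01 kickstart01; do
  ssh -t "${node}.nelson.va" 'sudo rm -fR $(sudo puppet config print ssldir); \
    sudo yum remove -y $(rpm -qa | grep puppet | xargs); \
    curl -k https://puppet.nelson.va:8140/packages/current/install.bash | sudo bash'
done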
In most cases, they should come up with Intentional Changes immediately in the PE Console. I lucked out, but a few may error out for the reasons mentioned earlier, and you’ll have to determine the causes of the failures. I’ve been exploring, but haven’t successfully implemented, octocatalog-diff, which should help find discrepancies in the catalog (there’s a sketch of its basic usage below). You can also use the PE Console’s report view to find the problem. From the overview, click on the failed run’s timestamp; alternatively, click on the node’s name, then the Reports tab, then the timestamp of the failed run. Switch to the Events tab there and change the filter to Failed. Expand the failed resource for detailed information. Here’s what the master looked like when the webhook was failing earlier:
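As for octocatalog-diff, the basic invocation I’ve been experimenting with compares the catalog a node would get from two branches of the controlrepo. The flags are from my reading of its README, so treat this as a sketch rather than something I have working:

# Compare the catalog build03 would receive from the production and test branches.
# Flags per the octocatalog-diff README; verify against your checkout.
octocatalog-diff -n build03.nelson.va --from production --to test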
One final issue you may run into is if the agent isn’t running the same OS version as the master. When I run the installer on a CentOS 6 node, I receive this error:
The agent packages needed to support el-6-x86_64 are not present on your master. To add them, apply the pe_repo::platform::el_6_x86_64 class to your master node and then run Puppet. The required agent packages should be retrieved when puppet runs on the master, after which you can run the install.bash script again.
You can add this class to the master through the PE Console. Go to Nodes -> Classification -> expand PE Infrastructure -> PE Master. Click on the Classes tab and start typing the listed class name in the Add new class box. Select the right class when it comes up and click Add Class. At the bottom, a new dialog pops up showing you have pending changes. Add as many new classes as you need and click Commit when you’re ready. Our master will need to run puppet agent to receive the new class, then you can try the installation again on your affected nodes.
[rnelson0@puppet ~]$ sudo puppet agent -t
[sudo] password for rnelson0:
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for puppet.nelson.va
Info: Applying configuration version '1488742359'
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64.repo]/ensure: defined content as '{md5}05805efce6320a80e4af1f9554d500eb'
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64.bash]/ensure: defined content as '{md5}f6fd234b3497b3baf3da9598ffc1d027'
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64-1.8.3]/ensure: created
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/File[/opt/puppetlabs/server/data/packages/public/2016.5.2/el-6-x86_64]/ensure: created
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/Pe_staging::Deploy[puppet-agent-el-6-x86_64.tar.gz]/Pe_staging::File[puppet-agent-el-6-x86_64.tar.gz]/Exec[/opt/puppetlabs/server/data/staging/pe_repo-puppet-agent-1.8.3/puppet-agent-el-6-x86_64.tar.gz]/returns: executed successfully
Notice: /Stage[main]/Pe_repo::Platform::El_6_x86_64/Pe_repo::El[el-6-x86_64]/Pe_repo::Repo[el-6-x86_64 2016.5.2]/Pe_staging::Deploy[puppet-agent-el-6-x86_64.tar.gz]/Pe_staging::Extract[puppet-agent-el-6-x86_64.tar.gz]/Exec[extract puppet-agent-el-6-x86_64.tar.gz]/returns: executed successfully
Notice: Applied catalog in 42.65 seconds
You will probably have some other cleanup to do that’s not quite as specific to the FOSS->PE software side. For instance, I have a kickstart server with an EL7 template that installs the PC1 repo and the puppet-agent. If I build a new template, I don’t want that, so I’ll have to clean it up. You may have other FOSS-isms in your setup. Hopefully you know where these are; otherwise you’ll have to stumble upon them later. If you can’t take care of them right this moment, at least open an issue to track it.
Summary
Migrating from Puppet OpenSource to Puppet Enterprise was not that bad. The plan was relatively straightforward, though I ran into a few issues here and there from misreading instructions and from assumptions about my controlrepo contents, plus one or two legitimate issues with the documentation. I did not successfully migrate my CA, though in the end it didn’t cost me much since I had to touch all my nodes to replace the puppet agent anyway. In a few hours, I was able to get everything migrated over and write a really long blog post (almost 5,000 words!) about the journey. It’s difficult to tell since I was multi-tasking, but I estimate it took less than 4 hours to complete the actual work required. I hope this helps others who are interested in the same journey and that it saves you some missteps. Thanks!
Great article Rob!
I’ve always had great success with using https://forge.puppet.com/puppetlabs/puppet_agent to upgrade FOSS/PE agents.
It handles removing deprecated/old packages and unused settings in puppet.conf and related tools. I also use that to upgrade agents automatically whenever I bump my PE master’s versions.