Using Puppet Enterprise 2018’s new backup/restore features

I was pretty excited when I read about the new features in Puppet Enterprise 2018.1. There are a lot of cool new features and fixes, but the backup/restore feature stood out for me. Even with just 5 VMs at home, I don’t want to rock the boat when rebuilding my master by losing my CA or agent certs, and that goes double at work with far more managed nodes. All the little bootstrap requirements have changed since I started using PE in 2014; figuring out how to get everything running myself would be possible, but it would take a while and be out of date in a few months anyway. Then there is everything in PuppetDB that I do not want to lose, like collected facts/resources and run reports.

Not coincidentally, I still had a single CentOS 6 VM around because it was my all-in-one puppet master, and migrating to CentOS 7 was not something I looked forward to due to the anticipated work it would require. With the release of this feature, I decided to get off my butt and do the migration. It still took over a month to make it happen, between other work, and I want to share my experience in the hope it saves someone else a bit of pain.

Create your upgrade outline

I want to summarize the plan at a really high level, then dive in a bit deeper. Keep in mind that I have a single all-in-one master using r10k, and my plan does not address multi-master or split deployments. Both of those deployment models have significantly different upgrade paths; please be careful if you try to map this outline onto them without adjusting. For the all-in-one master, it’s pretty simple:

  • Backup old master
  • Deploy a new master VM running EL7
  • Complete any bootstrapping that isn’t part of the backup
  • Install the same version of PE
  • Restore the old master’s backup onto the new master
  • Run puppet
  • Point agents at the new master

I will cover the backup/restore steps at the end, so the first step to cover is deploying a new master. This part sounds simple, but if Puppet is currently part of your provisioning process and you only have one master, you’ve got a catch-22 – new deployments must talk to puppet to complete without errors, and if you deploy a new puppet master using the same process, it will either fail to communicate with itself since PE is not installed, or it will talk to a PE installation that does not reflect your production environment. We need the ability to provision without puppet, or we must be prepared for some manual effort in the deploy. With a single master, manual effort isn’t that burdensome, but it can still reduce accuracy, which is why I prefer a modified automated provisioning workflow.

A lot of bootstrapping – specifically hiera and r10k/code manager – should be handled by the restore. There were just a few things I needed to do:

  • Run ssh-keygen or install an existing key, and attach that key to the git system. You can avoid this by managing the ssh private/public keys via file resources, but you will not be able to pull new code until puppet processes those resources (see the sketch after this list).
  • SSH to your git server and accept its host key. You can avoid this with the sshkey resource, with the same restriction.
  • Check your VM’s default iptables/selinux posture. I suggest managing security policy via puppet, which should prevent remote agents from connecting before the first puppet run, but the wrong default policy can also prevent the master from communicating with itself.
  • Check that the hostname matches your expectations. All of /etc/hosts, /etc/hostname, and /etc/sysconfig/network should list the short name and FQDN properly, and hostname and hostname -f should return the expected values. /etc/resolv.conf may also need the search domain. Fix any issues before installing PE, as certs are generated during install, and a wrong hostname can cause cascading faults best addressed by starting over.
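
For illustration, here is a minimal Puppet sketch of the ssh pieces. The paths, git hostname, key type, and hiera keys are all hypothetical, and as noted above, these resources only help after the first successful puppet run:

# Minimal sketch with hypothetical paths, names, and hiera keys
file { '/root/.ssh/id_rsa':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0600',
  content => lookup('profile::master::deploy_key'), # protect this value, e.g. with eyaml
}

# Pre-accept the git server's host key so the first code pull is not interactive
sshkey { 'git.example.com':
  ensure => present,
  type   => 'ssh-rsa',
  key    => lookup('profile::master::git_host_key'),
}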

The restore should get the rest from the PE side of things. If your provisioning automation performs other work that you had to skip, make sure you address it now, too.

Installing PE is probably the one manual step you cannot avoid. You can go to https://support.puppet.com and find links to current and past PE versions. Make sure you get the EL7 edition and not the EL6 edition. I did not check with Support, but I assume that you must restore on the same version you backed up; I would not risk even a patch release difference.

Skipping past the restore for now brings us to running the agent: a simple puppet agent -t on the master, or waiting up to 30 minutes for a scheduled run to happen on its own.

The final step may not apply to your situation. In addition to refreshing the OS of the master, I switched to a new hostname. If you’re dropping your new master on top of the existing one’s hostname/IP, you can skip this step. I forked a new branch from production called mastermigration. The only change in this branch is to set the server value in /etc/puppetlabs/puppet/puppet.conf. There are a number of ways to do this; I went with a few ini_setting resources behind a manage_puppet_conf flag in my profile::base::linux. The value should only live in one of the sections main or agent, so I ensured it is present in main and absent elsewhere:

  if $manage_puppet_conf {
    # These settings are very useful during migration but are not needed most of the time.
    # ini_setting comes from the puppetlabs/inifile module.
    ini_setting { 'puppet.conf main server':
      ensure  => present,
      path    => '/etc/puppetlabs/puppet/puppet.conf',
      section => 'main',
      setting => 'server',
      value   => 'puppet.example.com',
    }
    ini_setting { 'puppet.conf agent server':
      ensure  => absent,
      path    => '/etc/puppetlabs/puppet/puppet.conf',
      section => 'agent',
      setting => 'server',
    }
  }

During the migration, I can just set profile::base::linux::manage_puppet_conf: true in hiera for the appropriate hosts, or globally, and they’ll point themselves at the new master. Later, I can set it to false if I don’t want to continue managing it. While there is no reason you cannot leave the flag enabled, keeping it false normally ensures that changing the server name here does not take effect unless you purposefully flip the flag; you could also parameterize the server name.
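
For example, a minimal hiera entry; whether it lives in a common layer or a more targeted one depends on your hierarchy:

# common.yaml, or a more targeted layer such as nodes/<fqdn>.yaml
profile::base::linux::manage_puppet_conf: true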

Now let’s examine the new feature that makes it go.

Backups and Restores

Puppet’s documentation on the backup/restore feature provides lots of detail. It will capture the CA and certs, all your currently deployed code, your PuppetDB contents including facts, and almost all of your PE config. About the only things missing are some gems, which you should hopefully be managing and installing with puppet anyway.

Using the new feature is pretty simple: puppet-backup create or puppet-backup restore <filename> will suffice for this effort. There are a few options for more fine-grained control, such as backup/restore of individual scopes with --scope=<scopes>[,<additionalscopes>...], e.g. --scope=certs.
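
In practice, that looks something like the following. I am assuming root privileges on the master, and the restore filename is whatever your earlier backup produced:

# Create a full backup of the master
sudo puppet-backup create
# Restore everything from a previously created backup file
sudo puppet-backup restore <filename>
# Back up only selected scopes, e.g. just the CA and certs
sudo puppet-backup create --scope=certs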

The backup will only capture the current PE edition’s files, so if you still have /etc/puppet on your old master from your PE 3 days, that will not be part of the backup. However, files in directories it does back up, like /etc/puppetlabs/puppet/puppet.conf.rpmsave, will persist. This will help reduce cruft, but not eliminate it; you will still need to police on-disk content. In particular, if you accidentally placed a large file in /etc/puppetlabs, say the PE install tarball, it will end up in your backup and can inflate the size a bit. If the backup seems exceptionally large, search for large files in that path.
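
A quick way to check for oversized files before creating the backup (standard GNU find on EL7):

# List any files over 100MB under /etc/puppetlabs
sudo find /etc/puppetlabs -type f -size +100M -exec ls -lh {} +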

The restore docs also specify two commands to run after a restore when Code Manager is used. If you use CM, make sure not to forget this step:

puppet access login
puppet code deploy --all --wait 

The backup and restore times depend mostly on the size of your PuppetDB. With ~120 agents and 14 days of reports, each process took less than 10 minutes and generated a ~1G tarball. Larger environments should expect the master to be offline a bit longer, if they want to retain their full history.

Lab it up

The backup/restore process is great, but it’s new, and some of us have very ancient systems lying around. I highly recommend testing this in the lab. My test looked like this:

  • Clone the production master to a VM on another hostname/IP
  • Run puppet-backup create
  • Fully uninstall PE (sudo /opt/puppetlabs/bin/puppet-enterprise-uninstaller -p -d -y)
  • Remove any remaining directories with puppet in them, excepting the PE 2018 install files, to ensure all cruft is gone
  • Disable and uninstall any r10k webhook or puppet-related services that aren’t provided by PE itself.
  • Reboot
  • Bootstrap (from above)
  • Install PE (sudo /opt/puppetlabs/bin/puppet-enterprise-installer) only providing an admin password for the console
  • Run puppet-backup restore <backup file>
  • Run puppet agent -t
  • Make sure at least one agent can check in with puppet agent -t --server=<lab hostname> (clone an agent too if need be)
  • Reboot
  • Make sure the master and agent can still check in, Console works, etc.
  • If possible, test any systems that use puppet to make sure they work with the new master
  • Identify any missing components/errors and repeat the process until none are observed

I mentioned that I used PE 3. My master had been upgraded all the way from version 3.7 to 2018.1.2. I’m glad I tested this, because there were some unexpected database settings that the restore choked on. I had to engage Puppet Support, who provided the necessary commands to update the database so I could get a useful backup. This also allowed me to identify all of my bootstrap items and, of course, gain familiarity and confidence with the process.

This became really important for me because, during my production migration, I ran into a bug in my provisioning system where the symptom presented itself through Puppet. Because I was very practiced with the backup/restore process, I was able to quickly determine PE was NOT the problem and correctly identify the faulty system. Though it took about 6 hours to do my “very quick” migration, only about an hour of that was actually spent on the Puppet components.

I also found a few managed files on the master where the code presumed the directory structure would already be there, which it turns out was not the case; I must have manually created some directories 4 years ago. I think the most common issues you will find at this point are dependencies and ordering, but there may be others. Either fix the code now or, if it would negatively affect the production server, prep a branch for merging just prior to the migration, with a plan to revert if you roll back.
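
As a minimal, hypothetical illustration, declare the parent directory and the ordering rather than assuming the directory already exists:

# Hypothetical paths: do not assume /etc/myapp was created by hand years ago
file { '/etc/myapp':
  ensure => directory,
}

file { '/etc/myapp/settings.conf':
  ensure  => file,
  content => epp('profile/myapp/settings.conf.epp'),
  require => File['/etc/myapp'],
}

The explicit require is mostly illustrative; Puppet autorequires a managed parent directory, but only when that parent is declared somewhere. On my old master, the directories simply pre-existed, so nothing ever declared them.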

I strongly encourage running through the process a few times and building the most complete checklist you can before moving on to production.

Putting it together

With everything I learned in the lab, my final outline looked like this:

  • Backup old master, export to another location
  • Deploy a new master VM running EL7 using an alternative workflow
  • Run ssh-keygen/install an existing key and attach that key to the git system
  • SSH to the git server and accept the key
  • Verify your VM’s default iptables/selinux posture; disable during bootstrap if required
  • Validate the hostname is correct
  • Install PE
  • Restore the backup
  • [Optional] Merge any code required for the new server; run r10k/CM to ensure it’s in place on the new master
  • Run puppet
  • Point agents at the new master

Yours may look slightly different. Please, spend the time in the lab to practice and identify any missing steps; it’s well worth it.

Summary

Refreshing any system of significant age is always possible, but it is often fraught with error-prone manual processes. Puppet Enterprise 2018.1 delivered a new backup/restore feature that automates much of the work. We put together a rough outline, refined it in the lab, and then used it to perform the migration in production with high confidence, accounting for the few components the backup did not include. I really appreciate this new feature and I look forward to refinements in the future. I hope that soon enough, migrations will be as simple and effective as in-place upgrades.

Contributing to a Political Campaign as a Nerd

As I promised in my previous politics article, I will continue not to advocate for specific politics and remain non-partisan on my blog. I do encourage everyone, regardless of beliefs or party, to participate in politics, because it affects you whether you participate or not. Participation in our democracy can only improve it.

Over the past year or two, I have come to feel far more strongly about my politics. I live in the United States, and it’s impossible to pay attention to the news and not feel some kind of way about our Republic. This year, I decided that I needed to contribute more directly, not just passively partake in politics. If you already follow me on twitter, you know that I wear my politics on my sleeve. Like many readers, I regularly vote. Like many readers, I donate to campaigns. That’s not enough for me anymore. So I decided to do more, and reached out to some local campaigns to find out what I could offer.

I will admit that this is scary. I have spent almost 20 years working in a relatively narrow field of expertise within IT. I had no experience with politics. Going from 20 years of experience to 0 is intimidating – but if I did it, so can you. If you read this and want to contribute, I want you to know that you will do just fine. As divisive and loud and argumentative and nasty as politics seems on the evening news, every group I have worked with over the past year has been welcoming, very graceful about my lack of knowledge and mistakes, and very accommodating of how much time I have available. Please, don’t let your fear keep you out. Reach out if you have any questions!

The expertise we have is sorely needed, though, especially in political campaigns. You will quickly find that most people involved in political campaigns are not computer experts in any way. Sure, they’re computer savvy, as most people are nowadays, but there are significant gaps in that knowledge that need to be filled. All you need to do is read the news and you will quickly hear about campaigns being hacked, crowdsourced analysis of what’s on a politician’s phone, and overwhelming numbers of twitter and facebook shenanigans. Your help is needed and will be welcomed.

I joined a campaign and I had no idea what I was getting into. I did not find many others in tech who have shared their experiences joining their first campaign. I hope this article helps fill this gap a little bit – and if any readers are in the same situation, I would encourage you to blog about it as well! Alright, let’s get volunteering!

As I hit publish on this, there are 85 days until Election Day in the US. It is NOT too late to volunteer! Your assistance will be welcomed up until the final moments on Election Day, and there are always future elections to prepare for.

What to expect

When you click Sign Up on a campaign web site, you’ll be offered some “normal” work – canvassing, phone banking, putting a sign in your yard, etc. All things technical are notably missing from the list. To offer your technical expertise, you will have to reach out to the campaign directly. Many campaigns or candidates have a listed phone number on their website. If not, try looking at the county or state party’s website for a phone number, and inform them that you would like to get in contact with the campaign.

You will have a chance at some point to talk to the candidate or a campaign manager and make sure that’s who you want to work for. Treat it like a job interview! The campaign will ask what you can do, and you get to ask the campaign about what they will do if elected. Be honest! When I first talked to my campaign, I explained that I had 20 years of IT experience but no campaign experience. I was willing to take on things unknown, but I wanted to make sure they knew it would be new to me. I found out they had an experienced webmaster who would be providing me assistance if I joined. I also asked a lot of pointed policy questions to ensure that I would be happy if this candidate was elected. Get your questions answered and let the campaign know whether you want to join and what you can contribute.

General tech tasks

There are so many areas you can contribute to the technology side of a campaign, regardless of where in technology you work. Here’s a very short, very incomplete list of items you can help with:

  • Setting up a free Slack and teaching people how to use it (EVERYONE uses slack nowadays!)
  • Setting up a website and analytics
  • Configuring multi-factor authentication on all services
  • Setting up apps on phones and tablets
  • Answering questions about how to use a computer, application, or service, even if you’re just functioning as Google as a Service for really busy people
  • Providing a sounding board for anything technical, including how technical people and companies may respond to something

Depending on your expertise, you may be able to meet some very specific needs. Surely what you know can be applied to a campaign, though I may not be able to tell you how. A lot of my expertise is in information security. Here are some examples of InfoSec advice you can provide:

  • Explain threat models. Make sure you know how they apply, too; the threat model of running for US Senator is much different than that of running for a local council seat. Everyone can benefit from making sure they don’t expose their financial details to the world, but fewer are worried about specific attacks by enemy nation-states.
  • Ensure services are registered to a well protected account that belongs to the campaign instead of a random gmail account that belongs to someone who may leave the campaign.
  • Make sure MFA is enabled everywhere possible.
  • Restrict access to services to those who need it, at the lowest permission level required.

Remember that you are advising a campaign, not running your own business, so you will probably “lose” some arguments in areas where you are objectively the expert, and that’s okay. Make sure everyone acknowledges the trade-offs being made, and do your best to minimize the potential fallout. If there’s a realistic chance of failure, prepare remediation plans so that you are ready if they are needed – the same kinds of things you do at work to cover your company’s butt.

Be aware of how much you can contribute. If you can only spend 2 hours a week with the campaign at odd times of the day, maybe you are not the best person to run their web site. That’s okay; just make the campaign aware of what you want to help with as well as your limits, and surely they can find a way to make it fit. If you can spend more hours, then I encourage you to take on more significant tasks. If your circumstances change, just let the campaign know!

There are tons of small things like this that you can contribute. Don’t worry if you can’t think of something now, if you reach out to the campaign I’m sure you can come up with something together!

Larger projects

In addition to these general tasks, you may be able to contribute to higher level projects to help the campaign. If you are a data scientist, a campaign needs you! Everyone needs to know which voters to target, and they’re hopefully looking for more ethical assistance than we have seen campaigns pursue in the past. Many campaigns can determine what kinds of voters they want to target, but they may lack the skills to find those voters within the mass of voter information available. Those who are great with analytics can help get data from the web sites to the voter analysis teams. Social media experts can help leverage Twitter, Facebook, Instagram, and other services to get messaging out effectively. Online advertising needs your expertise in marketing and advertising. Larger campaigns may need custom applications like HillaryBnB (an AirBnB-style app for canvassers).

Again, I had no experience with campaigns, so these are just a few of the efforts I’ve observed recently; it’s a very non-comprehensive list of options. Each campaign’s needs are different, so I suggest checking with the campaign to see what is needed, rather than trying to offer specific projects.

Tackling the unknown

Though you are volunteering because you have expertise, sometimes what a campaign needs does not line up exactly. It’s a good thing we are an industry that is constantly learning! Lean on your existing expertise to get going.

I decided to help out a campaign with Google AdWords, as the lack of such an advertising campaign was identified when I joined. Prior to June 1st, I had never used AdWords or done anything with advertising, online or otherwise. Yeah, it was intimidating. But I believe in my candidate, so I tackled it like any other tech I have learned. I found some technical articles about how AdWords works and tips for novices, scrounged up some YouTube videos so I could see it in action, and then set up an account and got to work. After almost 3 weeks, I am starting to figure it out, and the campaign is benefiting from my efforts. Find something and dig in. You can grow your technical skills and help advocate for your politics at the same time!

While I will not pretend to be authoritative on AdWords, I want to share some things I learned, beyond the simple mechanics that you can learn through the documentation and tutorials:

  • Create the AdWords account with a central campaign account. Add additional administrators with campaign-specific emails, rather than personal emails. This makes it more difficult for someone to leave the campaign and take down advertising.
  • There’s a pretty decent iPhone app for AdWords, and it gives you some views not available on the web site, but you cannot edit very much in it. I am sure there is one for Android as well.
  • Google will provide a $100 coupon after you spend your first $25. It will be sent in an email to the account owner’s address, and it’s not automatic; you need to apply the code.
  • Google will also send an email advertising a free review of your account. I would wait a few weeks before calling, so that the specialists can see some data from your campaigns.
  • Impressions (someone seeing your ad) are free. Clicks (when someone actually clicks on them) are the only thing that costs you money.
  • Campaigns are made up of Ad Groups. Each ad group can advertise a different set of text and point to a different page, and you can add as many ad groups to a single campaign as you like, but budget is allocated at the Campaign level. You need to balance the number of campaigns and ad groups, and keep balancing them as voters’ interests change.
  • Each Campaign can also be targeted to different locations. You can limit some AdWords Campaigns to the political campaign’s region (by district for federal offices; by using zip codes for state and local offices) to focus your advertising, such as issues-based campaigns. Others may be made open to a wider area, maybe even the whole country, such as donation campaigns.
  • Each Ad Group can be made up of keywords which receive mostly-opaque scoring. The better the score, the better the placement. Keywords that do not result in Impressions receive a lower score and can drag down the score of the entire Ad Group. Disable keywords that do not work, or replace them with more specific and helpful keywords, to keep the scoring up. All the scoring is Google magic, and the effectiveness of this will vary quite a bit for every organization. Keep an eye on it.
  • Keywords are words or phrases that you want your ad matched to. You can also add Exclusions. Combine this with Search Terms results, which show the actual search someone used and the keyword or category it matched, to filter out Impressions/Clicks that are not helpful to your campaign. I have seen some really ridiculous search terms, including the amazing imagen you are an environmentalist giving a speech environmental due to population growth in the western united states, which matched the keyword environment. Whoops, way too generic; it was replaced with a more specific phrase. Another was a search including the name of another candidate in an entirely different state, and that one click ate the entire campaign budget for the day. Keeping up with Exclusions can save your campaign a lot of money.
  • Google watches monthly trends to determine when best to spend money. Your daily budget is better thought of as the average daily spend over a 30-day period. For example, Google may determine that you won’t get much out of ads on Saturday and spend close to $0, but that the 2nd Wednesday of the month gets the most impact and spend far more than your budget. You will only ever be charged 2x your specified budget, even if Google “spends” 2.3x your budget (I have never observed them going past 2.0x).
  • Almost everything you do with AdWords can be changed on the fly. However, there are two things to keep in mind:
    • Any new ad or keyword (you cannot edit ads/keywords; you actually make a new one to replace the existing one) must be approved, which can take up to 24 hours. You can add ads/keywords and disable them, then enable them when needed, to ensure they are approved prior to when they are needed.
    • Significant budget changes may flag a fraud alert. If you have a significant event-related campaign coming up, set it up at least 3 days in advance, as it can take 3 days to resolve suspected fraud. If you are spending $10 a day and want to increase that to $300 for a weekend event and your account gets flagged on Friday, it may be Monday before it is unfrozen and your event will be over.
  • You won’t find many AdWords tutorials that speak to political campaigns. It’s strongly associated with businesses. A few articles and charts mentioned Social Advocacy, which is probably the closest, but…
  • Success is difficult to measure. A business may track how many people click versus how many people order something. A donation campaign can track how many people donate, but an issues or awareness campaign cannot correlate visitors to the site with votes in a primary or general election. Conversion rates on their own won’t tell you much.
    • For many candidates, awareness itself is the goal. Many voters do not fill out the whole ballot and only check non-federal boxes if they recognize the name. Responsive Ads (as opposed to Text Ads) are fairly unobtrusive but can display small graphics. Logos are very helpful to start creating brand awareness.
    • Run at least two Ads for each campaign, much like you would do A/B testing at work. Review regularly and tweak the ads over time.
  • Advertising is not an island. Coordinate with the Social Media team and the event planners. If your candidate is going to be at a local festival or state fair in a few weeks, add some keywords for the event. If Social Media is blitzing on a policy, make sure common keywords will drive voters to your candidate. When things happen in the news that affect your voters, make sure their searches will bring them to your candidate. Likewise, you can review search terms people use and inform the other teams that these are some of the things voters are searching for and make sure the candidate speaks to those concerns.
    • This can feel very ghoulish or morbid. Much of the news that drives people to politicians is going to be of a negative nature. We generally don’t call our elected officials when things are looking up. You WILL probably have to capitalize on an event that resulted in harm or death. In my case, unfortunately, there was a school shooting near the district. Ugh. But voters do want to know their candidate’s policies on subjects like school shootings. Be responsible and principled and, above all, caring. Do not let the need to respond compromise your integrity or the candidate’s.
  • Your candidate’s party has other candidates running. Reach out to them for assistance, for ideas, for additional eyes on problems. You can get contact info from your county/state party’s offices, usually.
  • AdWords includes a large number of reports. Working in IT, our tendency is to encourage others to run reports themselves, but everyone on a political campaign is likely already spending all the time they can on their areas of expertise. It can be a huge help if you create/tweak reports for others, and schedule them to email the requester regularly.

Summary

Just because many of us make technology a huge part of our lives does not mean we are one dimensional. If you feel inspired by politics, don’t hide it; become active. I’ve discussed what you might expect if you join a political campaign, some of the work and expertise technologists can offer campaigns, and my experience joining a campaign. Whether you contribute a few hours a month or hours every day, you will be a vital part of your chosen campaign. That’s awesome! Participation is what makes our Republic, and most democratic governments, so strong.

If you have any questions about volunteering, whether it’s technical or about the experience or something else, please reach out. You can drop a comment here, or @rnelson0 on twitter.

Enjoy, and thank you!

Disabling rubocop and upgrading to PDK 1.6.0

As I lamented in my article on converting to the PDK, I really do not like Rubocop and was disappointed I could not turn it off. Thankfully, that was addressed in PDK-998 and the fix was included in time for PDK 1.6.0! Disabling it is pretty simple, and though it’s strictly a fix to pdk-templates, updating the PDK won’t hurt.

First, update to PDK 1.6.0. As I use CentOS 7 and the RPM packaging, it’s as simple as sudo yum update pdk -y; follow the directions that match your system. Next, we need to add the following lines to .sync.yml:

.rubocop.yml:
  selected_profile: off

Finally, run pdk update, or if you weren’t already using pdk-templates, run pdk convert --template-url=https://github.com/puppetlabs/pdk-templates (I will assume the former to keep it simple). You can add --noop (or say n) and review update.txt|convert.txt to see the differences before applying, or, because you are using version control, just run a diff afterward to see the changes.

[rnelson0@build03 domain_join:pdk160]$ pdk update
pdk (INFO): Updating rnelson0-domain_join using the template at https://github.com/puppetlabs/pdk-templates, from master@041eeb2 to 1.6.0

----------Files to be modified----------
metadata.json
.pdkignore
spec/spec_helper.rb
.gitignore
Rakefile
.rubocop.yml
Gemfile

----------------------------------------

You can find a report of differences in update_report.txt.

Do you want to continue and make these changes to your module? Yes
[✔] Installing missing Gemfile dependencies.

------------Update completed------------

7 files modified.

That’s it! Check the contents of .rubocop.yml and you will notice everything is false (just a snippet because it’s loooong):

---
require: rubocop-rspec
AllCops:
  DisplayCopNames: true
  TargetRubyVersion: '2.1'
  Include:
  - "./**/*.rb"
  Exclude:
  - bin/*
  - ".vendor/**/*"
  - "**/Gemfile"
  - "**/Rakefile"
  - pkg/**/*
  - spec/fixtures/**/*
  - vendor/**/*
  - "**/Puppetfile"
  - "**/Vagrantfile"
  - "**/Guardfile"
Bundler/DuplicatedGem:
  Enabled: false
Bundler/OrderedGems:
  Enabled: false
Layout/AccessModifierIndentation:
  Enabled: false
Layout/AlignArray:
  Enabled: false
Layout/AlignHash:
  Enabled: false
Layout/AlignParameters:
  Enabled: false
Layout/BlockEndNewline:
  Enabled: false
Layout/CaseIndentation:
  Enabled: false
Layout/ClosingParenthesisIndentation:
  Enabled: false
Layout/CommentIndentation:
  Enabled: false

Running validation now finds no issues with ruby syntax no matter how much you ignore style guides:

# master, prior to updating

[rnelson0@build03 domain_join:master]$ pdk validate
...
[✖] Checking Ruby code style (**/**.rb).
info: task-metadata-lint: ./: Target does not contain any files to validate (tasks/*.json).
convention: rubocop: spec/spec_helper_acceptance.rb:17:27: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.
convention: rubocop: spec/spec_helper_acceptance.rb:17:49: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.
convention: rubocop: spec/spec_helper_acceptance.rb:19:66: Style/BracesAroundHashParameters: Redundant curly braces around a hash parameter.
convention: rubocop: spec/spec_helper_acceptance.rb:19:68: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.
convention: rubocop: spec/spec_helper_acceptance.rb:19:96: Layout/SpaceAfterComma: Space missing after comma.
convention: rubocop: spec/acceptance/class_spec.rb:6:9: RSpec/ExampleWording: Do not use should when describing your tests.
convention: rubocop: spec/acceptance/class_spec.rb:12:26: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.
convention: rubocop: spec/acceptance/class_spec.rb:13:26: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.
convention: rubocop: spec/classes/domain_join_spec.rb:2:25: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.

# pdk160, after updating, no code changes
[rnelson0@build03 domain_join:pdk160]$ pdk validate
pdk (INFO): Running all available validators...
pdk (INFO): Using Ruby 2.4.4
pdk (INFO): Using Puppet 5.5.2
[✔] Checking metadata syntax (metadata.json tasks/*.json).
[✔] Checking module metadata style (metadata.json).
[✔] Checking Puppet manifest syntax (**/**.pp).
[✔] Checking Puppet manifest style (**/*.pp).
[✔] Checking Ruby code style (**/**.rb).
info: task-metadata-lint: ./: Target does not contain any files to validate (tasks/*.json).

You may have noticed quite a few other files were updated. The other significant change is that a :changelog task via github_changelog_generator is now included, so you can remove that from your .sync.yml if you added it and replace it with the recommended config (via the Rakefile):

---
Gemfile:
  optional:
    ':development':
      - gem: 'github_changelog_generator'
        git: 'https://github.com/skywinder/github-changelog-generator'
        ref: '20ee04ba1234e9e83eb2ffb5056e23d641c7a018'
        condition: "Gem::Version.new(RUBY_VERSION.dup) >= Gem::Version.new('2.2.2')"

The other changes are pretty minor, in some cases cosmetic, but of course review them to make sure they’re OK. Submit a PR or equivalent and make sure the tests pass before merging. You can follow along with today’s blog post in domain_join PR35, too.

Enjoy!

Opinion: Technology is always Political

I’m writing this opinion piece because of ongoing gross abuses of justice taking place in America right now. Don’t worry, as strongly as I feel about this subject, my blog will remain free of specific political advocacy (please follow me on twitter for my politics), but we absolutely need to talk about the relationship between technology and politics. I would love to see your comments here, or you can reach me on twitter if that is more comfortable.

As technical practitioners, we are often under a lot of pressure to focus on tech and minimize other subjects, especially contentious subjects like politics. These subjects can cause conflict, and many of us are taught to avoid, rather than resolve, conflict. We are constantly told that talking about tech is great; maybe talk about your sports teams, music, craft beers – but never talk about politics. That’s divisive! Sure, sometimes politics may cause conflict and even alienate people, but that’s just an aspect of life, of who we are, and not something to box up and hide in a corner. We soon find that everything is political – and if it’s not, it will still be made political for us. Our politics reflect who we are; we strive to grow and change over time, and it stands to reason that our politics grow and change with us. I find Scott Hanselman to be very eloquent on the relation of politics to self, probably because it comes up so often on his timeline.

Once you share your politics, like Scott, you may be told to stick to technology. But if everything is political, then technology must be political, too. It doesn’t exist in a vacuum. It never has and it never will. We must stop pretending that technology exists free from politics. Our community benefits from embracing this truth, not denying it.

Examination of our industry’s early history quickly shows the relationship between technology and politics. Let’s review the history of IBM during World War II. In 1933, IBM started cozying up to the Hitler regime to increase sales, starting with an innocuous sounding census. Having machine-tabulated census data allowed the German government to rapidly increase their prosecution of the Holocaust. Eventually, every Nazi concentration camp used IBM punch card technology to track prisoners – which IBM serviced under contract.

This was not an isolated act of the company or an oddity of IBM’s German subsidiary. It happened in America, too. IBM pursued and acquired the contract for the Japanese internment camps’ punch cards at the same time its equipment was used for US Army and Navy cryptography. IBM was okay with the use of its punch card equipment to identify, round up, and track prisoners, even by a country at war with IBM’s home country, so long as IBM got paid.

Neither of these efforts just “happened.” IBM employees developed the punch card technology. IBM employees had to contact the German and US governments to open sales channels. IBM employees had to pursue and close sales contracts with the German and US governments. IBM employees had to provide support, spare parts, and even enter concentration camps to change the printer paper throughout World War II.

Numerous people were required to prosecute the Holocaust and the Japanese internment. Only a few were technologists, and fewer still worked for IBM. But all of them allowed their personal political and ethical views to be subsumed and harnessed to a political regime that enacted some of the worst atrocities in the history of humankind. We do not know exactly what these people intended, but history has recorded the outcome and judged it. No amount of, “Well, I didn’t mean that to happen,” or, “I didn’t want to take sides,” will ever change that.

“The only thing necessary for the triumph of evil is that good men should do nothing.” – Edmund Burke

And here we are again, in 2018, observing numerous authoritarian efforts, by both governments and public/private companies, to weaponize technology. Again, this does not just “happen.” Someone has to actively develop, sell, provide, and support these weaponized technologies. Each of us must inform ourselves of how the technologies we work on can be weaponized and used for evil. Implicit and subconscious bias will creep into products, but very frequently, efforts are consciously and overtly made to weaponize technologies. Original intent is never remembered, only the horrible outcomes.

Earlier, I stated that we should have and develop our own political views. In accepting that technology is intrinsically political, our use of technology may then become political advocacy. For example, if we are strongly in favor of a legal right, using a technology designed to interfere with or suppress that right would be antithetical to our politics. Thus, our political views should drive our use of technology.

To avoid advocating for specific political views, I suggest that we all commit to a formal code of ethics that closely matches our political views. I highly recommend USENIX’s System Administrators’ Code of Ethics (more in Additional Links). It is straightforward and thorough, is compatible with most political viewpoints, and has stood the test of time within our industry. A chosen code of ethics is only helpful if we stick by it. Not just when it’s easy, but especially when it’s difficult!

So what happens if we do find ourselves involved with a system that can be weaponized, that violates our ethics and politics? How do we advocate our politics and maintain our ethical code when we have significant concerns? “It depends,” of course, on those concerns and how significant they are, on our relationship with the system, and on who we are providing for. We must each determine an inflection point where it passes from, “Can this still be saved?” to, “This cannot be saved.” Everyone’s inflection point will be different – we all have different politics and ethics, finances, health, and family situations, etc. – but we must each determine exactly where that point is for us.

Inflection point in hand, we can proceed to an action plan. We may be able to work fully above board, making cases to stakeholders and management. Or we may have to move below board, purposefully dragging our feet or working against the system. Maybe a change in vendors will address concerns or slow progress, maybe we just don’t do certain things at all. Find the appropriate monkey wrench that fits the gears of the system. We must plan our activities as we would any other technical work, laying out our goals and milestones and alternative plans for when disaster strikes.

We can also lean on each other. Reach out to your coworkers, your colleagues and peers, your friends and family, to voice your concerns and discuss remedies. We are not alone, and we can lean on or be the rock for each other. We may want to confide in just a few trusted people, to organize with our coworkers, or to become whistleblowers. There is so much nuance and possibility here that it is impossible to predict what actions will be required, but together we can determine what those actions are.

There may come a day when we find that we cannot stop the dangerous technology, when we have crossed that inflection point. We have to evaluate, honestly, whether there is still good we can do, or if we have truly passed the point of no return and need to walk away. Many of us will never be close to making these kinds of decisions, but some of us will. We must rely on our ethical codes and our planning to remain true, to see the point of no return, and to walk away, even if it costs us. To stay and actively participate knowing that we are now doing irrevocable harm would be costlier.

This is not a one-time deal. We must always keep our eyes open, evaluating our ever-changing political views against the work in front of us, applying our ethics to keep us on track, and making damn sure that we are never – metaphorically or literally – changing the printer paper in a concentration camp.

We will not build the software for concentration camps or to enable authoritarians. We will not spy on our neighbors or destroy democracies. We will commit to our ethical codes of conduct, and we will use technology to build the shining city on a hill.

Thank you.

Additional Links:

Creating your first Puppet Task for Puppet Enterprise

At PuppetConf 2017, Puppet Tasks were introduced as part of the new project Bolt. A task allows you to run a program on an arbitrary number of nodes. The program can be just about anything, it just needs to be written in a language that the target nodes can run. For Linux, that means pretty much anything – bash, python, perl, ruby, etc. On Windows, you’re a little more limited out of the box – powershell primarily. Bolt is not yet at version 1.0.0, so I suspect language support for Windows will change. You can use Bolt on its own (even without Puppet, apparently), and starting with Puppet Enterprise 2017.3, you can use Bolt at the PE Console as “tasks” in the UI.

For my first task, I simply want to run a single command on a list of nodes. While I can run arbitrary commands with the bolt command line, I want the practice of writing a task. My use case involves an external authentication system that manages users, ssh keys, and sudo configurations. When a change is made, nodes need to pull the changes. Often, a delay there does not matter – the nodes will receive the change soon enough – but sometimes I want the relevant nodes to pick it up immediately. To do so, I need it to run a single perl script, sanitized as /usr/bin/external_auth.pl, and I want to do it on all the nodes with the profile::external_auth class.
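
To give a flavor of what that looks like, here is a minimal sketch of such a task; the module name, task name, and file layout are hypothetical:

#!/bin/bash
# tasks/external_auth.sh in a hypothetical profile module
# Trigger an immediate pull of user/ssh key/sudo changes from the auth system
exec /usr/bin/external_auth.pl

With something like that in a module’s tasks/ directory, the task can be selected in the PE Console, or run from the CLI with something along the lines of puppet task run profile::external_auth --nodes <nodes>.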

Continue reading

Convert a Puppet module from Bundle-based testing to the Puppet Development Kit (PDK)

A few years ago, I set up my modules with a bundle-based test setup and modulesync and wrote a companion blog post. Since that was written, a lot of things have changed with puppet. One of those is the release last year of the Puppet Development Kit (PDK). The goal of the PDK is to simplify development of puppet code and modules by reducing or eliminating all of the headaches of creating a mature ruby/bundler/puppet-lint/etc. setup. There is also a brand new tool called pdksync that combines the PDK with the power of modulesync. I was somewhat involved in the initial efforts toward the PDK, through my work on puppet-lint, but I have not actually used the PDK “in anger” yet, in part because of my previously working modulesync setup. This seems like a great opportunity to switch to PDK and pdksync, starting with the PDK.

Why PDK?

Before we begin, we should look at why we want to use the Puppet Development Kit. My current setup is best described as fragile. Its effectiveness varies based on what version of Ruby I use and the versions of the gems I happen to download via bundler on any given day. I use CentOS 7, which is stuck on Ruby 2.0. Most of the gems in my setup require at least Ruby 2.1 or 2.2, so I have to resort to RVM to provide Ruby 2.3.1. Someday, I’ll need to update that to Ruby 2.4 for a gem, and my setup will break until I fix it.

I am also downloading a bunch of gems that are not pinned and updated versions can bring in subtle bugs or cascading failures not related to changes in my code. Sometimes the gem is directly related to my work, like a puppet-lint version with a bug that I can downgrade and pin. Other times, it’s a very indirect dependency of a dependency of a dependency to puppet-lint, and pinning it only creates more problems for everything that depends on it. Of course, bundler also relies on rubygems.org and internet/mirror access, which sometimes go down when you need them most.

While these are surmountable issues, they always come up while I’m trying to get something done in puppet, and the minutes or hours required to fix the problem prevents me from making the changes I need, when I need.

The PDK resolves this by bundling its own version of Ruby and dependent gems. Puppet vets the setup, so I do not have to. Everything is on disk, so there’s no more required downloading of gems that can be pulled or unavailable because of network issues. This is a huge benefit to all users, whether they pay for Puppet Enterprise or use Puppet Opensource Edition for free. Less time spent worrying about dependency hell and more time getting straight to work. This is important, valuable work, but it’s not my expertise or an actual goal of my job, so I am very content to let someone else handle the setup so I can spend more time managing my systems with Puppet.

The PDK is an installable tool. Install it once and you can use it with all your puppet modules and controlrepos. Upgrading the PDK is simple using your package manager. You can, of course, use the PDK on some modules and stick with the ruby/bundler setup on others. However, it will be more difficult (but not impossible) to switch between the PDK and native bundler on the same module – our CI systems will use native bundler, after all – and some gem dependencies will no longer be pinned, losing the guarantee of pinned gems that work together.

We will see below that you can still modify the setup on each module/controlrepo to some extent, but when using the PDK, the full range of customization bundler offered is unavailable to you. I think most people will not find this to be a problem, but you should definitely read up on the PDK to make sure you understand what you gain and lose before converting to using it. If you change your mind later, switching back from PDK to a bundler-based setup is possible, but it may involve some work to find a working setup of pinned gem versions.

Installing the PDK

The very first thing I need to do is install the PDK. The following is written using PDK v1.5.0.0. The PDK is relatively new and gets frequent updates, so this may become out of date rapidly. If you run into any issues, check the version, read the release notes, and adjust accordingly.

The docs describe how to install on various systems. I use EL7, so I will install the RPM. I also use Puppet Enterprise, not Puppet Opensource, so I have to add the Puppet repository first. rpm and yum can get me there, or I can use puppet apply:

# Manual
sudo rpm -Uvh https://yum.puppet.com/puppet5/puppet5-release-el-7.noarch.rpm
sudo yum install pdk -y
# puppet apply
cat > ~/pdk.pp << EOF
package { 'puppet5-release-el-7':
  ensure   => present,
  provider => 'rpm',
  source   => 'https://yum.puppet.com/puppet5/puppet5-release-el-7.noarch.rpm',
}
-> package { 'pdk':
  ensure => present,
}
EOF
sudo puppet apply ~/pdk.pp

I can now call the pdk command successfully. Be aware that it includes its own bundled ruby, so the first time you run it, it may take a little while to load and cache, which is expected.

[rnelson0@build03 domain_join:master]$ pdk --help
NAME
    pdk - Puppet Development Kit

USAGE
    pdk command [options]

DESCRIPTION
    The shortest path to better modules.

COMMANDS
    build        Builds a package from the module that can be published to the Puppet Forge.
    bundle       (Experimental) Command pass-through to bundler
    convert      Convert an existing module to be compatible with the PDK.
    help         show help
    module       Provide CLI-backwards compatibility to the puppet module tool.
    new          create a new module, etc.
    test         Run tests.
    update       Update a module that has been created by or converted for use by PDK.
    validate     Run static analysis tests.

OPTIONS
    -d --debug                    Enable debug output.
    -f --format=<format>          Specify desired output format. Valid
                                  formats are 'junit', 'text'. You may also
                                  specify a file to which the formatted
                                  output is sent, for example:
                                  '--format=junit:report.xml'. This option
                                  may be specified multiple times if each
                                  option specifies a distinct target file.
    -h --help                     Show help for this command.
       --version                  Show version of pdk.

If you sign up for the puppet-announce mailing list, you will be notified every time there’s a new PDK release. After reading the release notes for edge cases that may impact you, you can easily upgrade to the latest version with your distro’s equivalent of yum update pdk. That is a lot easier than updating ruby/bundler setups.

Converting to the PDK

Next, my existing setup must be converted to PDK. I will walk through my efforts, but you can also review the PuppetConf 2017 video and slides about the PDK in addition to this Converting To PDK doc. I am working on my domain_join module first, starting from the release candidate for v0.5.2, if you want to recreate this effort. The module is part of my modulesync config and has 68 tests. It’s a reasonably mature module but not overly complex, perfect for testing without being too deep or shallow. I am also going to break the modulesync setup, which you can see here. Before beginning, I create a new branch on the module:

[rnelson0@build03 domain_join:master]$ git checkout -b pdk
Switched to a new branch 'pdk'

Because I use modulesync, this will not work out of the box for me, but there’s a very naive default pdk convert that can be used to update the config. It will inform you of the files that will be added/modified and prompt you to continue due to the potential for destruction. As noted, this concern is mitigated by using version control, and if you’ve read my blog before, you’re obviously using version control, right? If not, get that done first! (It’s beyond the scope of this article, but my git 101 article may help.) Here’s what the naive attempt looks like:

[rnelson0@build03 domain_join:pdk]$ pdk convert

------------Files to be added-----------
.pdkignore
.project
spec/default_facts.yml
.gitlab-ci.yml
appveyor.yml

----------Files to be modified----------
metadata.json
spec/spec_helper.rb
.gitignore
.travis.yml
Rakefile
.rubocop.yml
.rspec
Gemfile

----------------------------------------

You can find a report of differences in convert_report.txt.

pdk (INFO): Module conversion is a potentially destructive action. Ensure that you have committed your module to a version control system or have a backup, and review the changes above before continuing.
Do you want to continue and make these changes to your module? Yes

------------Convert completed-----------

5 files added, 8 files modified.

[rnelson0@build03 domain_join:pdk±]$

The git diff is really, really lengthy, but you can find it here. A lot of it is simply re-arranging of stanzas in existing files (.gitignore, .rspec, .travis.yml, metadata.json) and .rubocop.yml updates. The rest is mostly in three files: Rakefile, Gemfile and spec/spec_helper.rb. It also adds five files: .gitlab-ci.yml, .pdkignore, .project, appveyor.yml, and spec/default_facts.yml. Some info on the minor changes:

  • spec/default_facts.yml: If you had default facts in your spec/spec_helper.rb file, you should move them here. The rest is mostly “housekeeping” but it removes my hiera config, which I will explore in a moment.
  • .gitignore, .pdkignore: The former has been updated a bit, and the latter is the exact same thing.
  • .gitlab-ci.yml, .travis.yml, appveyor.yml: Puppet provides some good defaults for a number of external systems. I am sticking with Travis CI right now, but it’s great to have defaults for other services if I branch out. The latter looks targeted at testing Windows systems, too, something that’s often problematic. These are all optional but do not hurt by being present.
  • .project: Looks like some XML for use with the editor Eclipse.
  • .rubocop.yml: I really don’t like rubocop but it’s included. I plan to disable rubocop as quickly as possible. However, this addresses one of my pain points – every version of rubocop changes the names of Cops, and it fails to run if it finds an unknown Cop name in its config. Since Puppet vets the config, I do not have to deal with finding all the new Cop information every time rubocop updates. It’s not enough for me to love it, but it is significant pain reduction.

This leaves the big three files mentioned earlier, which are worthy of more detailed investigation.

Rakefile

This file is MUCH smaller now, I presume thanks to some pdk magic. The conversion removed a changelog task I created, so I need to get it back.

GitHubChangelogGenerator::RakeTask.new :changelog do |config|
  version = (Blacksmith::Modulefile.new).version
  config.future_release = "v#{version}"
  config.header = "# Change log\n\nAll notable changes to this project will be documented in this file.\nEach new release typically also includes the latest modulesync defaults.\nThese should not impact the functionality of the module."
  config.exclude_labels = %w{duplicate question invalid wontfix modulesync}
end

I also have a number of puppet-lint checks I have disabled, like arrow_alignment, that I need to make sure are restored. After restoring my task and disabled checks, I will be okay with this new, slimmer default.

Gemfile

PDK includes its own version of ruby and bundler and is guaranteed to deliver a gemset with all the dependencies needed to work together. You can run pdk bundle exec gem list to see what it includes, if you are curious what those are. I will add the github_changelog_generator gem here soon, but otherwise as long as everything works, I have no need to poke at this file anymore.

spec/spec_helper.rb

Though the diff is fairly long for this file, there is nothing tricky here, it just connects the new default facts and some other common practices. It DOES remove the hiera configuration. There is a more modern version of my hiera_config that we need to add back in:

RSpec.configure do |c|
  c.hiera_config = 'spec/fixtures/hiera/hiera.yaml'
end

The naive conversion is not that bad for my setup, but it does leave me with three changes to make to keep functional parity: add the github_changelog_generator gem, the :changelog rake task, and re-enable hiera lookups.

Updating the PDK Setup

Now that I’ve identified the non-default changes needed, I can do some updates. The PDK can convert and update modules using a template system. The template it used is listed at the bottom of metadata.json. You can find the templates online, or clone that directory and examine the moduleroot contents (the moduleroot_init directory is also used when you run pdk new):

[rnelson0@build03 domain_join:pdk±]$ git diff metadata.json
diff --git a/metadata.json b/metadata.json
index 7730b90..381aff7 100644
--- a/metadata.json
+++ b/metadata.json
@@ -27,5 +27,8 @@
"name": "puppet",
"version_requirement": ">=4.0.0"
}
- ]
+ ],
+ "pdk-version": "1.5.0",
+ "template-url": "file:///opt/puppetlabs/pdk/share/cache/pdk-templates.git",
+ "template-ref": "1.5.0-0-gd1b3eca"
}

The copies on disk are from the RPM, and are almost definitely out of date. The latest templates are on GitHub. I can re-run the conversion with pdk convert --template-url=https://github.com/puppetlabs/pdk-templates. The changes for me are pretty small but will be much larger the further away in time you are from the date of the RPM build. After running it, the template info will also be updated:

+ ],
+ "pdk-version": "1.5.0",
+ "template-url": "https://github.com/puppetlabs/pdk-templates",
+ "template-ref": "heads/master-0-g7b5f6d2"

We can look at the individual templates here or clone the repo locally. The first thing to note is that the .erb templates are frequently dynamic, rather than static. The simplest change is in spec/spec_helper.rb, just adding a single stanza to the RSpec.configure section, which is also dynamic:

RSpec.configure do |c|
  c.default_facts = default_facts
  <%- if @configs['hiera_config'] -%>
  c.hiera_config = "<%= @configs['hiera_config'] %>"
  <%- end -%>
  <%- if @configs['strict_level'] -%>
  c.before :each do
    # set to strictest setting for testing
    # by default Puppet runs at warning level
    Puppet.settings[:strict] = <%= @configs['strict_level'] %>
  end
  <%- end -%>
end

Note the conditional that populates the filename with the contents of configs['hiera_config']. The configs hash is populated by config_defaults.yml. The README has a lot of helpful information on the defaults. There are just a few lines for the spec/spec_helper.rb file:

[rnelson0@build03 pdk-templates:master]$ tail -2 config_defaults.yml
spec/spec_helper.rb:
  strict_level: ":warning"

I need to add to this hash, but I cannot add to the templates since they are upstream. Thankfully, there’s a built-in way to account for this. The contents of the configs hash are combined with the same hash taken from the local .sync.yml file!

Note: if you’d like, you CAN change the templates by forking puppetlabs/pdk-templates and passing in --template-url when you call pdk new, convert, or update. You are then on the hook for updating your templates over time, though.

To make use of the sync file, I just need to add it to the root of my module directory and add the custom config. It is additive, so only differences need to be present. Here is the hiera_config value required:

[rnelson0@build03 domain_join:pdk±]$ cat .sync.yml
spec/spec_helper.rb:
  hiera_config: 'spec/fixtures/hiera.yaml'

With the use of the pdk update command, I can re-apply the templates in --noop mode and see the change:

[rnelson0@build03 domain_join:pdk±]$ pdk update --noop
pdk (INFO): Updating rnelson0-domain_join using the default template, from 1.5.0 to 1.5.0

----------Files to be modified----------
spec/spec_helper.rb

----------------------------------------

You can find a report of differences in update_report.txt.

[rnelson0@build03 domain_join:pdk±]$ cat update_report.txt
/* Report generated by PDK at 2018-05-29 20:08:11 +0000 */


--- spec/spec_helper.rb 2018-05-29 18:53:09.140882197 +0000
+++ spec/spec_helper.rb.pdknew 2018-05-29 20:08:11.819978562 +0000
@@ -28,6 +28,7 @@

 RSpec.configure do |c|
   c.default_facts = default_facts
+  c.hiera_config = spec/fixtures/hiera.yaml
   c.before :each do
   # set to strictest setting for testing
   # by default Puppet runs at warning level

Now that we have proved out the process, I need to make a few more changes. To add the github_changelog_generator gem, I add an array entry under Gemfile: required: ':development'. To add the task, I use Rakefile: extras: with one entry per line (you can also use multi-line content in yaml if you prefer). This is what the file looks like as well as the pending changes:

[rnelson0@build03 domain_join:pdk±]$ cat .sync.yml
spec/spec_helper.rb:
  hiera_config: 'spec/fixtures/hiera.yaml'
Gemfile:
  required:
    ':development':
      - gem: github_changelog_generator
Rakefile:
  default_disabled_lint_checks:
    - 'arrow_alignment'
    - 'class_inherits_from_params_class'
    - 'class_parameter_defaults'
    - 'documentation'
    - 'single_quote_string_with_variables'
  extras:
    - "require 'github_changelog_generator/task'"
    - 'GitHubChangelogGenerator::RakeTask.new :changelog do |config|'
    - '  version = (Blacksmith::Modulefile.new).version'
    - '  config.future_release = "v#{version}"'
    - '  config.header = "# Change log\n\nAll notable changes to this project will be documented in this file.\nEach new release typically also includes the latest modulesync defaults.\nThese should not impact the functionality of the module."'
    - '  config.exclude_labels = %w{duplicate question invalid wontfix modulesync}'
    - 'end'
[rnelson0@build03 domain_join:pdk±]$ pdk update --noop
pdk (INFO): Updating rnelson0-domain_join using the template at https://github.com/puppetlabs/pdk-templates, from master@7b5f6d2 to 1.5.0

----------Files to be modified----------
spec/spec_helper.rb
Rakefile
Gemfile

----------------------------------------

You can find a report of differences in update_report.txt.

[rnelson0@build03 domain_join:pdk±]$ cat update_report.txt
/* Report generated by PDK at 2018-05-29 20:39:41 +0000 */


--- spec/spec_helper.rb 2018-05-29 20:33:16.488401096 +0000
+++ spec/spec_helper.rb.pdknew  2018-05-29 20:39:41.492124202 +0000
@@ -28,6 +28,7 @@

 RSpec.configure do |c|
   c.default_facts = default_facts
+  c.hiera_config = "spec/fixtures/hiera.yaml"
   c.before :each do
     # set to strictest setting for testing
     # by default Puppet runs at warning level


--- Rakefile    2018-05-29 20:33:16.489401137 +0000
+++ Rakefile.pdknew     2018-05-29 20:39:41.492832995 +0000
@@ -3,4 +3,11 @@
 require 'puppet_blacksmith/rake_tasks' if Bundler.rubygems.find_name('puppet-blacksmith').any?

 PuppetLint.configuration.send('disable_relative')
+
+require 'github_changelog_generator/task'
+GitHubChangelogGenerator::RakeTask.new :changelog do |config|
+  version = (Blacksmith::Modulefile.new).version
+  config.future_release = "v#{version}"
+  config.header = "# Change log\n\nAll notable changes to this project will be documented in this file.\nEach new release typically also includes the latest modulesync defaults.\nThese should not impact the functionality of the module."
+  config.exclude_labels = %w{duplicate question invalid wontfix modulesync}
+end


--- Gemfile     2018-05-29 20:16:10.321541394 +0000
+++ Gemfile.pdknew      2018-05-29 20:39:41.494035036 +0000
@@ -34,6 +34,7 @@
   gem "puppet-module-win-default-r#{minor_version}",   require: false, platforms: [:mswin, :mingw, :x64_mingw]
   gem "puppet-module-win-dev-r#{minor_version}",       require: false, platforms: [:mswin, :mingw, :x64_mingw]
   gem "puppet-blacksmith", '~> 3.4',                   require: false, platforms: [:ruby]
+  gem "github_changelog_generator",                    require: false
 end

 puppet_version = ENV['PUPPET_GEM_VERSION']

I now run it without --noop (“yesop” mode) and my changes take effect. A quick check of rake targets confirms it. Note that all pdk bundle output is written to STDERR, not STDOUT.

[rnelson0@build03 domain_join:pdk±]$ pdk bundle exec rake -T 2>&1 | grep change
rake changelog # Generate a Change log from GitHub

I did not add the puppet-lint disable checks back in here. That is because the PDK does not use the Rakefile when running puppet-lint; it relies on the configuration file. I need to create .puppet-lint.rc at the top of the repo so that the settings are available to my CI system. That file looks like this:

[rnelson0@build03 domain_join:pdk±]$ cat .puppet-lint.rc
--no-arrow_alignment-check
--no-class_inherits_from_params_class-check
--no-documentation-check
--no-single_quote_string_with_variables-check

One difference between the Rake target and the config file is that an invalid check name in the config file can cause errors, whereas the Rake setting just doesn’t do anything. I removed the class_parameter_defaults check from the list because it is no longer a valid check.

There are a lot more things you might want to change, especially if you use CI other than Travis, but this should be enough for me to gain parity with my existing setup. Remember that you can poke at the templates online, find the default settings in config_defaults.yml, tweak in your own .sync.yml, re-run pdk update and everything should work out. If the templates cannot be wrangled as is, you can always open a ticket in the PDK project.

Make sure you commit your changes and push them up to version control, eventually to be merged into master.

First Test

Now I need to run my tests. Before I do that, I clean up everything not in git. Since I have developed in this directory, there are bundler files that don’t need to be there and may cause conflicts with the tests. Again, make sure you’ve committed changes first, or some of your uncommitted changes from the conversion will be removed:

[rnelson0@build03 domain_join:pdk±]$ git clean -ffdx
Removing .bundle/
Removing Gemfile.lock
Removing bin/
Removing convert_report.txt
Removing coverage/
Removing pkg/
Removing spec/defines/
Removing spec/fixtures/manifests/
Removing spec/fixtures/modules/
Removing spec/functions/
Removing spec/hosts/
Removing update_report.txt
Removing vendor/

PDK Tests

The first test is really simple – my unit tests via pdk test unit:

[rnelson0@build03 domain_join:pdk±]$ pdk test unit
pdk (INFO): Using Ruby 2.4.4
pdk (INFO): Using Puppet 5.5.1
[✔] Preparing to run the unit tests.
[✔] Running unit tests.
Evaluated 68 tests in 2.321391522 seconds: 0 failures, 0 pending.
[✔] Cleaning up after running unit tests.

I also want to validate linting and syntax and whatnot with pdk validate:

[rnelson0@build03 domain_join:pdk]$ pdk validate
pdk (INFO): Running all available validators...
pdk (INFO): Using Ruby 2.4.4
pdk (INFO): Using Puppet 5.5.1
[✔] Checking metadata syntax (metadata.json tasks/*.json).
[✔] Checking module metadata style (metadata.json).
[✔] Checking Puppet manifest syntax (**/**.pp).
[✔] Checking Puppet manifest style (**/*.pp).
[✖] Checking Ruby code style (**/**.rb).
info: task-metadata-lint: ./: Target does not contain any files to validate (tasks/*.json).
convention: rubocop: spec/spec_helper_acceptance.rb:17:27: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.
convention: rubocop: spec/spec_helper_acceptance.rb:17:49: Style/HashSyntax: Use the new Ruby 1.9 hash syntax.
<more rubocop results>

I have a ton of rubocop results, which I will address below. Everything else works fine, as expected.

CI Tests

The second is a little trickier. Currently, whatever CI system you use relies on ruby/bundler to perform the same checks. That is planned to change (PDK-709 tracks the Travis CI setup). I use Travis CI, which runs the tests defined in .travis.yml. Here are the relevant portions:

bundler_args: --without system_tests
matrix:
  fast_finish: true
  include:
    -
      env: CHECK="syntax lint metadata_lint check:symlinks check:git_ignore check:dot_underscore check:test_file rubocop"
    -
      env: CHECK=parallel_spec
    -
      env: PUPPET_GEM_VERSION="~> 4.0" CHECK=parallel_spec
      rvm: 2.1.9

There are two different kinds of checks that will run. The first is all the syntax and linting, the equivalent of pdk validate. The second and third are the unit tests, run against the latest Puppet 4 and Puppet 5 independently, and equivalent to pdk test unit. Here’s what happens when I run the unit tests first:

[rnelson0@build03 domain_join:pdk]$ pdk bundle exec rake parallel_spec
pdk (INFO): Using Ruby 2.4.4
pdk (INFO): Using Puppet 5.5.1
Cloning into 'spec/fixtures/modules/stdlib'...
2 processes for 2 specs, ~ 1 specs per process
No examples found.

Finished in 0.00032 seconds (files took 0.07684 seconds to load)
0 examples, 0 failures


domain_join
  on redhat-6-x86_64
    with defaults for all parameters
      should not contain Package[samba-common-tools]
      should contain Package[oddjob-mkhomedir]
      should contain Package[krb5-workstation]
      should contain Package[krb5-libs]
      should contain Package[samba-common]
      should contain Package[sssd-ad]
      should contain Package[sssd-common]
      should contain Package[sssd-tools]
      should contain Package[ldb-tools]
      should contain Class[domain_join]
      should contain File[/etc/resolv.conf]
      should contain File[/etc/krb5.conf]
      should contain File[/etc/samba/smb.conf]
      should contain File[/etc/sssd/sssd.conf]
      should contain File[/usr/local/bin/domain-join]
      should contain Exec[join the domain]
    with manage_services false
      should not contain Package[sssd]
      should not contain File[/etc/sssd/sssd.conf]
      should contain File[/etc/resolv.conf]
      should contain File[/usr/local/bin/domain-join]
    with manage_services and manage_resolver false
      should not contain Package[sssd]
      should not contain File[/etc/sssd/sssd.conf]
      should not contain File[/etc/resolv.conf]
      should contain File[/usr/local/bin/domain-join]
    start script syntax
      should contain File[/usr/local/bin/domain-join] with content =~ /sssd status/
    with container
      should contain File[/usr/local/bin/domain-join] with content =~ /net ads join/
      should contain File[/usr/local/bin/domain-join] with content =~ /container_ou='container'/
    with account and password
      should contain File[/usr/local/bin/domain-join] with content =~ /register_account='service_account'/
      should contain File[/usr/local/bin/domain-join] with content =~ /register_password='open_sesame'/
    with join_domain disabled
      should not contain Exec[join the domain]
    with manage_dns disabled
      should not contain File[/usr/local/bin/domain-join] with content =~ /net ads dns register/
      should not contain File[/usr/local/bin/domain-join] with content =~ /update add /
    with manage_dns and ptr enabled
      should contain File[/usr/local/bin/domain-join] with content =~ /net ads dns register/
      should contain File[/usr/local/bin/domain-join] with content =~ /update add .+ addr show fake_interface/
  on redhat-7-x86_64
    with defaults for all parameters
      should contain Package[samba-common-tools]
      should contain Package[oddjob-mkhomedir]
      should contain Package[krb5-workstation]
      should contain Package[krb5-libs]
      should contain Package[samba-common]
      should contain Package[sssd-ad]
      should contain Package[sssd-common]
      should contain Package[sssd-tools]
      should contain Package[ldb-tools]
      should contain Class[domain_join]
      should contain File[/etc/resolv.conf]
      should contain File[/etc/krb5.conf]
      should contain File[/etc/samba/smb.conf]
      should contain File[/etc/sssd/sssd.conf]
      should contain File[/usr/local/bin/domain-join]
      should contain Exec[join the domain]
    with manage_services false
      should not contain Package[sssd]
      should not contain File[/etc/sssd/sssd.conf]
      should contain File[/etc/resolv.conf]
      should contain File[/usr/local/bin/domain-join]
    with manage_services and manage_resolver false
      should not contain Package[sssd]
      should not contain File[/etc/sssd/sssd.conf]
      should not contain File[/etc/resolv.conf]
      should contain File[/usr/local/bin/domain-join]
    start script syntax
      should contain File[/usr/local/bin/domain-join] with content =~ /status sssd.service/
    with container
      should contain File[/usr/local/bin/domain-join] with content =~ /net ads join/
      should contain File[/usr/local/bin/domain-join] with content =~ /container_ou='container'/
    with account and password
      should contain File[/usr/local/bin/domain-join] with content =~ /register_account='service_account'/
      should contain File[/usr/local/bin/domain-join] with content =~ /register_password='open_sesame'/
    with join_domain disabled
      should not contain Exec[join the domain]
    with manage_dns disabled
      should not contain File[/usr/local/bin/domain-join] with content =~ /net ads dns register/
      should not contain File[/usr/local/bin/domain-join] with content =~ /update add /
    with manage_dns and ptr enabled
      should contain File[/usr/local/bin/domain-join] with content =~ /net ads dns register/
      should contain File[/usr/local/bin/domain-join] with content =~ /update add .+ addr show fake_interface/

1 deprecation warning total

Finished in 2.34 seconds (files took 2.17 seconds to load)
68 examples, 0 failures


68 examples, 0 failures

Took 5 seconds
I, [2018-05-29T21:51:29.950455 #16602]  INFO -- : Creating symlink from spec/fixtures/modules/domain_join to /home/rnelson0/modules/domain_join
/opt/puppetlabs/pdk/share/cache/ruby/2.4.0/gems/rspec-core-3.7.1/lib/rspec/core.rb:179:in `block in const_missing': uninitialized constant RSpec::Puppet (NameError)
        from /opt/puppetlabs/pdk/share/cache/ruby/2.4.0/gems/rspec-core-3.7.1/lib/rspec/core.rb:179:in `fetch'
        from /opt/puppetlabs/pdk/share/cache/ruby/2.4.0/gems/rspec-core-3.7.1/lib/rspec/core.rb:179:in `const_missing'
        from /home/rnelson0/modules/domain_join/spec/classes/coverage_spec.rb:1:in `block in '

Deprecation Warnings:

puppetlabs_spec_helper: defaults `mock_with` to `:mocha`. See https://github.com/puppetlabs/puppetlabs_spec_helper#mock_with to choose a sensible value for you


If you need more of the backtrace for any of these deprecations to
identify where to make the necessary changes, you can configure
`config.raise_errors_for_deprecations!`, and it will turn the
deprecation warnings into errors, giving you the full backtrace.
Tests Failed

The backtrace points at an error in spec/classes/coverage_spec.rb. The simple solution for me is to git rm it, rather than add in the right coverage gem again. It’s not particularly important to me, but if it is to you, you need to add it back to Gemfile and spec/spec_helper.rb. The important thing is that a second run does not have the error and completes successfully.
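For the record, the removal is a single command from the module root:

git rm spec/classes/coverage_spec.rb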

The second test is a series of rake targets and causes me some grief out of the gate:

[rnelson0@build03 domain_join:pdk]$ pdk bundle exec rake syntax lint metadata_lint check:symlinks check:git_ignore check:dot_underscore check:test_file rubocop
pdk (INFO): Using Ruby 2.4.4
pdk (INFO): Using Puppet 5.5.1
init.pp
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
rake aborted!
.pp files present in tests folder; Move them to an examples folder following the new convention
/opt/puppetlabs/pdk/share/cache/ruby/2.4.0/gems/puppetlabs_spec_helper-2.7.0/lib/puppetlabs_spec_helper/rake_tasks.rb:231:in `block (2 levels) in '
/opt/puppetlabs/pdk/share/cache/ruby/2.4.0/gems/rake-12.3.1/exe/rake:27:in `'
/opt/puppetlabs/pdk/private/ruby/2.4.4/bin/bundle:23:in `load'
/opt/puppetlabs/pdk/private/ruby/2.4.4/bin/bundle:23:in `'
Tasks: TOP => check:test_file
(See full trace by running task with --trace)

The fix is easy – move the existing files to a new location as the convention has changed, or remove them entirely if they are not valuable – and then I can proceed without further fault:

[rnelson0@build03 domain_join:pdk]$ mkdir examples
[rnelson0@build03 domain_join:pdk]$ git mv -v tests/*pp examples/
‘tests/init.pp’ -> ‘examples/init.pp’
[rnelson0@build03 domain_join:pdk±]$ pdk bundle exec rake syntax lint metadata_lint check:symlinks check:git_ignore check:dot_underscore check:test_file rubocop
pdk (INFO): Using Ruby 2.4.4
pdk (INFO): Using Puppet 5.5.1
Running RuboCop...
Inspecting 0 files


0 files inspected, no offenses detected
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml

As I mentioned earlier, I’d like to disable RuboCop, but I don’t see how right now. If I specify selected_profile: off in .sync.yml for rubocop, pdk update errors out applying the template (PDK-998). However, it seems to pass just fine in that check, though the individual check fails badly (PDK-997). I’m content to let it go so long as it’s passing tests and I don’t have to rewrite anything, but I will find SOME way to get rid of it if it starts causing me problems!
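For reference, this is the .sync.yml stanza I tried; per PDK-998 it currently makes pdk update error out, so treat it as aspirational rather than working config:

# aspirational: errors out during pdk update today (PDK-998)
.rubocop.yml:
  selected_profile: off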

If you use gitlab-ci, appveyor, or some other system for testing, you will want to ensure those tests pass as well. Once done, commit everything to git again.

I am now ready to submit a pull request, and if you are following along, you may be, too. You can review and compare my pull request and the tests if you would like. You will of course notice that I merged it in spite of rubocop failures!

Summary

We have looked at what the PDK is, why we want to use it, how to install it, and how to convert a module to use it. Each module can be customized and we explored the .sync.yml file that controls customization. Once we finalized our conversion, we ran the same tests we had prior to the PDK to make sure they still work and verified the Travis CI tests, too. The next step is to find a replacement for modulesync, which allows us to push the same general configuration to multiple modules. Lucky for us, Puppet just released a potential replacement, pdksync, that I will evaluate soon.

PowerShell in a Post-TLS1.1 World

I was trying to install PowerCLI on a new server in a new environment today and I encountered all sorts of error messages when PowerShell tried to install the required NuGet provider:

PS C:\Windows\system32> Find-Module -Name VMware.PowerCLI
WARNING: Unable to download from URI 'https://go.microsoft.com/fwlink/?LinkID=627338&clcid=0x409' to ''.
WARNING: Unable to download the list of available providers. Check your internet connection.
PackageManagement\Install-PackageProvider : No match was found for the specified search criteria for the provider 'NuGet'. The package provider 
requires 'PackageManagement' and 'Provider' tags. Please check if the specified package has the tags.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:7405 char:21
+ ... $null = PackageManagement\Install-PackageProvider -Name $script:N ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (Microsoft.Power...PackageProvider:InstallPackageProvider) [Install-PackageProvider], Exception
+ FullyQualifiedErrorId : NoMatchFoundForProvider,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackageProvider

PackageManagement\Import-PackageProvider : No match was found for the specified search criteria and provider name 'NuGet'. Try 
'Get-PackageProvider -ListAvailable' to see if the provider exists on the system.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:7411 char:21
+ ... $null = PackageManagement\Import-PackageProvider -Name $script:Nu ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (NuGet:String) [Import-PackageProvider], Exception
+ FullyQualifiedErrorId : NoMatchFoundForCriteria,Microsoft.PowerShell.PackageManagement.Cmdlets.ImportPackageProvider

WARNING: Unable to download from URI 'https://go.microsoft.com/fwlink/?LinkID=627338&clcid=0x409' to ''.
WARNING: Unable to download the list of available providers. Check your internet connection.
PackageManagement\Get-PackageProvider : Unable to find package provider 'NuGet'. It may not be imported yet. Try 'Get-PackageProvider 
-ListAvailable'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:7415 char:30
+ ... tProvider = PackageManagement\Get-PackageProvider -Name $script:NuGet ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Microsoft.Power...PackageProvider:GetPackageProvider) [Get-PackageProvider], Exception
+ FullyQualifiedErrorId : UnknownProviderFromActivatedList,Microsoft.PowerShell.PackageManagement.Cmdlets.GetPackageProvider

Find-Module : NuGet provider is required to interact with NuGet-based repositories. Please ensure that '2.8.5.201' or newer version of NuGet 
provider is installed.
At line:1 char:1
+ Find-Module -Name VMware.PowerCLI
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Find-Module], InvalidOperationException
+ FullyQualifiedErrorId : CouldNotInstallNuGetProvider,Find-Module

I made it very angry, and I didn’t know why! After some searching, I stumbled on a solution on the Microsoft Community site. The issue is that PowerShell 5.1 defaults to only enabling SSL3 and TLS 1.0 for secure HTTP connections. You have probably noticed a lot of recent warnings on various websites about services removing support for TLS 1.0 and 1.1, and SSL3 has been disabled by many for years. Microsoft is no slacker here, and go.microsoft.com has dropped support for SSL3 and TLS 1.0 (probably TLS 1.1, too, but I didn’t check). Thus the provider list at that URL cannot be accessed and the NuGet install fails.

PS C:\ProgramData\Documents> [Net.ServicePointManager]::SecurityProtocol
Ssl3, Tls

You can fix this by specifying Tls12 as the SecurityProtocol, but that only persists in this session, for this user. Thankfully, PowerShell has a well-documented series of profile loads, so you can make the change once for all users on the server. You can choose whichever level works best for you. I chose $PsHome\Profile.ps1, which affects All Users, All Hosts. If you choose a global file like that, launch a PowerShell session as administrator (if you weren’t aware, there’s a Ctrl-modifier to avoid right-clicking!) so that you have the rights to edit the target file. If not, just substitute the file below with your choice.
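For reference, the session-only change is this single line; the profile below simply makes it permanent:

# only lasts for the current session and user
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12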

This snippet will check for the existence of the file and create it if needed, then populate it with our one-line change and a comment explaining why. Finally, it opens the file so you can inspect it and adjust if you need to. Note that running it again will append the same lines, which isn’t harmful but may cause a little confusion for the next person to peek at it. Hello, future self!

$ProfileFile = "${PsHome}\Profile.ps1"

if (! (Test-Path $ProfileFile)) {
  New-Item -Path $ProfileFile -Type file -Force
}
''                                                                                | Out-File -FilePath $ProfileFile -Encoding ascii -Append
'# It is 2018, SSL3 and TLS 1.0 are no good anymore'                              | Out-File -FilePath $ProfileFile -Encoding ascii -Append
'[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12' | Out-File -FilePath $ProfileFile -Encoding ascii -Append

notepad $ProfileFile

If you enter [Net.ServicePointManager]::SecurityProtocol in the current window, you’ll get the same Ssl3, Tls result you saw before. The profile is only loaded at startup. Open a new PowerShell instance on the server – as any user, even – and run it again. You should see the new setting:

PS C:\windows\system32> [Net.ServicePointManager]::SecurityProtocol
Tls12

Now you are ready to use PowerShell to connect to modern web servers, whether it’s to install NuGet, use Invoke-WebRequest, or any other HTTPS connection. Enjoy!

Self-documenting Puppet modules with puppet-strings

Documentation is hard. Anyone who has been in IT long enough will have tales of chasing their tails because of incorrect or outdated docs, or even missing docs. Documentation really benefits from automation and ease of creation. For Puppet modules, there exists a tool called puppet-strings that can help with this. There are probably other tools for this, but puppet-strings is developed by Puppet and will likely be integrated into the Puppet Development Kit, so I have chosen it as my solution.

Around this time last year, November of 2016, Will Hopper wrote a blog post about how to use puppet-strings. There is also some mention of puppet-strings in the Style Guide. At the time, that blog post was most of the documentation available and I didn’t jump on the project, but it turns out it’s really easy to leverage. Let’s give it a shot.

Converting a Module to use puppet-strings

We should be able to convert any module to use puppet-strings, whether it’s small or large, simple or complex. Find a module you’d like to convert and you can follow along with it. I am going to convert my existing module rnelson0/certs, found on GitHub. First, let’s add the new gem to our module by adding two lines to the Gemfile:

gem 'puppet-strings'
gem 'rgen'

I’ve submitted PR 149 to puppet-strings as I believe rgen should be a runtime dependency, at which point you can remove that gem from the file.

Run bundle install or bundle update. You can now run bundle exec puppet strings generate ./manifests/*.pp. It won’t do much now, since we haven’t added strings-compatible metadata to our module, but it does generate the files:

[rnelson0@build03 certs:stringsdocs±]$ bundle exec puppet strings generate ./manifests/*.pp
[warn]: Missing @param tag for parameter 'source_path' near manifests/vhost.pp:59.
[warn]: Missing @param tag for parameter 'target_path' near manifests/vhost.pp:59.
[warn]: Missing @param tag for parameter 'service' near manifests/vhost.pp:59.
Files:                    2
Modules:                  0 (    0 undocumented)
Classes:                  0 (    0 undocumented)
Constants:                0 (    0 undocumented)
Attributes:               0 (    0 undocumented)
Methods:                  0 (    0 undocumented)
Puppet Classes:           1 (    0 undocumented)
Puppet Defined Types:     1 (    0 undocumented)
Puppet Types:             0 (    0 undocumented)
Puppet Providers:         0 (    0 undocumented)
Puppet Functions:         0 (    0 undocumented)
 100.00% documented
[rnelson0@build03 certs:stringsdocs±]$ ls html
ls: cannot access html: No such file or directory
[rnelson0@build03 certs:stringsdocs±]$ ls
CONTRIBUTING.md  doc  Gemfile  Gemfile.lock  manifests  metadata.json  Rakefile  README.md  spec  tests  vendor
[rnelson0@build03 certs:stringsdocs±]$ tree doc/
doc/
├── css
│   ├── common.css
│   ├── full_list.css
│   └── style.css
├── file.README.html
├── frames.html
├── _index.html
├── index.html
├── js
│   ├── app.js
│   ├── full_list.js
│   └── jquery.js
├── puppet_classes
│   └── certs.html
├── puppet_class_list.html
├── puppet_defined_type_list.html
├── puppet_defined_types
│   └── certs_3A_3Avhost.html
└── top-level-namespace.html

4 directories, 15 files

We can view the output in a browser by pulling up doc/index.html and browsing around it. If this is on a remote machine, it needs to be served up somehow. You can also copy it to your local machine and view it in a web browser (reminder that you can download a .ZIP of a branch from GitHub). I will leave this step out in the future for brevity, but don’t forget to do it, especially if you make changes, refresh, and nothing looks different!
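If you need a quick way to serve the docs from a remote build host, this one-liner is a minimal sketch – it assumes Python 2 is installed, which is typical on EL7-era hosts:

# serve the generated docs on port 8000, then browse to http://yourhost:8000/
cd doc && python -m SimpleHTTPServer 8000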

We can add a rake task to make this simpler. In your Rakefile, add require 'puppet-strings/tasks'. If you add the gem to your Gemfile in a group that Travis doesn’t use, you should be sure to guard against failure with something like this:

# These gems aren't always present, for instance
# on Travis with --without development
begin
  require 'puppet_blacksmith/rake_tasks'
  require 'puppet-strings/tasks'
rescue LoadError
end

There are now two new rake tasks. You can generate docs with the much shorter bundle exec rake strings:generate:

[rnelson0@build03 certs:stringsdocs±]$ be rake -T | grep strings
Could not find semantic_puppet gem, falling back to internal functionality. Version checks may be less robust.
rake strings:generate[patterns,debug,backtrace,markup,json,yard_args]  # Generate Puppet documentation with YARD
rake strings:gh_pages:update                                           # Update docs on the gh-pages branch and push to GitHub
[rnelson0@build03 certs:stringsdocs±]$ be rake strings:generate
Could not find semantic_puppet gem, falling back to internal functionality. Version checks may be less robust.
[warn]: Missing documentation for Puppet defined type 'certs::vhost' at manifests/vhost.pp:35.
[warn]: The @param tag for parameter 'title' has no matching parameter at manifests/vhost.pp:35.
Files:                    2
Modules:                  0 (    0 undocumented)
Classes:                  0 (    0 undocumented)
Constants:                0 (    0 undocumented)
Attributes:               0 (    0 undocumented)
Methods:                  0 (    0 undocumented)
Puppet Classes:           1 (    0 undocumented)
Puppet Defined Types:     1 (    0 undocumented)
Puppet Types:             0 (    0 undocumented)
Puppet Providers:         0 (    0 undocumented)
Puppet Functions:         0 (    0 undocumented)
 100.00% documented

Next, we need to make some changes to our modules to document them. We can document manifests, types, providers, and functions, but I don’t have any of my own modules with types/providers/functions and the process is pretty similar, so I will focus on just a manifest today. Here is the header for my certs::vhost defined type before I add puppet-strings metadata:

# == Define: certs::vhost
#
# SSL Certificate File Management
#
# Intended to be used in conjunction with puppetlabs/apache's apache::vhost
# definitions, to provide the ssl_cert and ssl_key files.
#
# === Parameters
#
# [name]
# The title of the resource matches the certificate's name
# e.g. 'www.example.com' matches the certificate for the hostname
# 'www.example.com'
#
# [source_path]
# The location of the certificate files. Typically references a module's files.
# e.g. 'puppet:///site_certs' will search $modulepath/site_certs/files on the
# master for the specified files.
#
# [target_path]
# Location where the certificate files will be stored on the managed node.
# Optional value, defaults to '/etc/ssl/certs'
#
# [service]
# Name of the web server service to notify when certificates are updated.
# Optional value, defaults to 'httpd'
#
# === Examples
#
#  Without Hiera:
#
#    $cname = www.example.com
#    certs::vhost{ $cname:
#      source_path => 'puppet:///site_certificates',
#    }
#
#  With Hiera:
#
#    server.yaml
#    ---
#    certsvhost:
#      'www.example.com':
#        source_path: 'puppet:///modules/site_certificates/'
#
#    manifest.pp
#    ---
#    certsvhost = hiera_hash('certsvhost')
#    create_resources(certs::vhost, certsvhost)
#    Certs::Vhost<| |> -> Apache::Vhost<| |>
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#

And here it is afterward:

# == Define: certs::vhost
#
# SSL Certificate File Management
#
# Intended to be used in conjunction with puppetlabs/apache's apache::vhost
# definitions, to provide the ssl_cert and ssl_key files.
#
# === Parameters
#
# @param name The title of the resource matches the certificate's name, e.g. 'www.example.com' matches the certificate for the hostname 'www.example.com'
# @param source_path The location of the certificate files. Typically references a module's files. e.g. 'puppet:///site_certs' will search $modulepath/site_certs/files on the master for the specified files.
# @param target_path Location where the certificate files will be stored on the managed node. Optional value, defaults to '/etc/ssl/certs'
# @param service Name of the web server service to notify when certificates are updated. Optional value, defaults to 'httpd'
#
# @example
#     Without Hiera:
#    
#     $cname = www.example.com
#     certs::vhost{ $cname:
#       source_path => 'puppet:///site_certificates',
#     }
#    
#     With Hiera:
#    
#     server.yaml
#     ---
#     certsvhost:
#       'www.example.com':
#         source_path: 'puppet:///modules/site_certificates/'
#    
#     manifest.pp
#     ---
#     certsvhost = hiera_hash('certsvhost')
#     create_resources(certs::vhost, certsvhost)
#     Certs::Vhost<| |> -> Apache::Vhost<| |>
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2014 Rob Nelson
#

We can quickly regenerate the html docs and the defined type shows up. Be sure to click the `Defined Types` link in the top left; the left-hand menu does not mix classes and types.

You can see that there’s still some other work to do. The non-strings-ified portions of the comments are left as is, rather than parsed as markdown, so that needs to change. We don’t need most of that leftover crud. The class/defined type name is already known to strings. The Authors section should come from metadata.json (though if there are multiple, I am not sure if that file accepts an array). Copyright isn’t handled by metadata.json, and may not be strictly needed depending on your jurisdiction, but if you do need to keep it, just remove the === Copyright header and leave the text (I have chosen to omit it because US copyright law automatically grants me copyright for 70 years and I’m not that worried about it anyway; I would do something different for work).

I changed some other things:

  • Each @param can take multi-line comments, as long as each trailing line maintains one space of extra indentation.
  • The title of defined types should be documented using @param title (docs), though it will generate a warning like [warn]: The @param tag for parameter 'name' has no matching parameter at manifests/vhost.pp:33
  • The order of metadata should go @summary > freeform text > @example > @param

Here’s the updated header and the resulting html doc:

# @summary Used in conjunction with puppetlabs/apache's apache::vhost definitions, to provide the related ssl_cert and ssl_key files for a given vhost.
#
# @example
#    Without Hiera:
#
#      $cname = www.example.com
#      certs::vhost{ $cname:
#        source_path => 'puppet:///site_certificates',
#      }
#
#    With Hiera:
#
#      server.yaml
#      ---
#      certsvhost:
#        'www.example.com':
#          source_path: 'puppet:///modules/site_certificates/'
#
#      manifest.pp
#      ---
#      certsvhost = hiera_hash('certsvhost')
#      create_resources(certs::vhost, certsvhost)
#      Certs::Vhost<| |> -> Apache::Vhost<| |>
#
# @param title
#  The title of the resource matches the certificate's name, e.g. 'www.example.com' matches the certificate for the hostname 'www.example.com'
# @param source_path
#  Required. The location of the certificate files. Typically references a module's files. e.g. 'puppet:///site_certs' will search $modulepath/site_certs/files on the master for the specified files.
# @param target_path
#  Location where the certificate files will be stored on the managed node.
#  Default: '/etc/ssl/certs'
# @param service
#  Name of the web server service to notify when certificates are updated.
#  Default: 'httpd'

That’s about it! For small modules, this is probably a really simple, really quick change. For larger modules, this may take a while, but it’s tedious, not complicated.

Online Docs

There are two other things you may want to look at. First, the strings docs can be a tad large (212K vs 24K for the actual manifests, for example) but, more importantly, they are NOT guaranteed to be in sync with the rest of your code. If you include doc/ in your git data and you change a parameter’s definition or use without regenerating the docs and committing them at the same time, users may act on outdated documentation. If you go a long while without updating them, you may confuse your users or even yourself.

You can simply add doc/ to your .gitignore file. Now the docs are not stored in the Git repo – unless you add them with `--force` or added them before updating .gitignore, at which point you will definitely want to correct that! This ensures no doc mismatch with published code and can help keep the size of the git repo a little more trim.

Second, GitHub and other providers often do not display HTML docs very well for your users, so even if you include doc/ in your repo, the contents are probably displayed as text files. Whoops! There are a few solutions for this.

  1. Publish through your Git provider’s services, like gh-pages, to a per-project website. For example, GitHub provides gh-pages sites and allows you to configure the publishing source (bonus: the rake task strings:gh_pages:update will push to this easily).
  2. Add a hook to your CI that generates the docs and sends them where necessary. Vox Pupuli is working on this but has not chosen an implementation yet.
  3. There are a few sites that you can add your docs to, some of which automagically update for you. One of these is http://www.puppetmodule.info/. You can easily click the Add Project button in the top right to add your own project to it (voila!). Since this occurs automatically, you never have to do anything else. But, when you are making changes, your docs could get stale until the next automated run occurs.

The Puppet Module Info site, by Dominic Cleal, also offers a badge you can add to your README. Click the About button for more info.

Style Guides

One last thing to mention. As of 12/7/2017, the Style Guide is being updated to add information about puppet-strings. Pay attention to that space! I assume that it will first start with a description of standards and then add some puppet-lint checks to help you enforce them programmatically. As puppet-strings is relatively new, you can expect more changes in the immediate future as it solidifies. If you have strong opinions on documentation, please speak up in the Documents Jira project, in Slack/IRC/mailing lists, or contact me and I’ll help you get your comments to the right person.

Summary

Today we added puppet-strings to a module, replaced the existing documentation with puppet-strings-compatible documentation, and looked at some solutions for automating document updates. It’s a simple process to enable better documentation updates, something everyone needs.

Disabling account lockout on your VCSA 6.5

I recently locked myself out of my vCenter Server Appliance when I was attempting to perform an upgrade through VAMI. The VAMI just says “invalid password”, but logging in on the console displayed a message indicating I had failed authentication 12 times. I had only tried four times! Regardless of whether it was me or someone else, now that I knew I had the right password, I was locked out. I waited 5 minutes but still couldn’t get in, so it looked like it was time to do a password reset. However, I wanted to explore something I had done with vRealize Orchestrator recently: disable the account lockout.

KB2147144 documents the process for booting into a privileged shell without a password. Unlike in 6.0, you hit ‘e’ instead of ‘space’ at the GRUB prompt, but otherwise it’s the same. You do have about half a second to hit ‘e’, so pay attention or you’ll find yourself rebooting a few times! For those who are not locked out already, you can just ssh into the VCSA and make this change without a reboot.

Once you’re in, search for the word tally in the pam setup with grep tally /etc/pam.d/*. You will find these two lines in /etc/pam.d/system-auth.

auth required pam_tally2.so file=/var/log/tallylog deny=3 onerr=fail even_deny_root unlock_time=86400 root_unlock_time=300
auth required pam_tally1.so file=/var/log/tallylog deny=3 onerr=fail even_deny_root unlock_time=86400 root_unlock_time=300

Comment those two lines out (prepend with a #) and save the file:

# cat /etc/pam.d/system-auth
# Begin /etc/pam.d/system-auth

auth required pam_unix.so

# End /etc/pam.d/system-auth
#auth required pam_tally2.so file=/var/log/tallylog deny=3 onerr=fail even_deny_root unlock_time=86400 root_unlock_time=300
#auth required pam_tally1.so file=/var/log/tallylog deny=3 onerr=fail even_deny_root unlock_time=86400 root_unlock_time=300
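If you would rather script the change than edit by hand, a sed one-liner along these lines should work – an untested sketch on my part; it keeps a backup copy and comments out any pam_tally lines:

# comment out the pam_tally lines, keeping a backup at system-auth.bak
sed -i.bak '/pam_tally/ s/^/#/' /etc/pam.d/system-auth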

If you know your password and are just dealing with lockouts, you can type reboot -f now. Otherwise, type passwd and enter the new password twice and then reboot. You can now enter your password wrong a million times – or someone else can – and you will not lose the ability to log in without waiting an extraordinary amount of time or requiring a reboot.

I upgraded from VCSA 6.5U1b to 6.5U1c and this persisted. I assume that when going to vNext (6.6 or 7.0) this change will be reverted, and I am not sure how it will behave when VCSA 6.5U2 is released – it may need to be re-done – so add disabling the lockout to your upgrade checklists alongside disabling the root account expiration.

Upgrading Puppet Enterprise from 2016.4 to 2017.3

Over the past year, there have been some pretty big improvements to Puppet. I am still running PE 2016.4.2 and the current version is 2017.3.2, so there are a lot of changes in there. Most of the changes are backwards-compatible, so an upgrade from last November’s version is not quite as bad as it sounds, and I definitely want to start using the new features and improvements. The big one for me is Hiera version 5 (new in Puppet 4.9 / PE 2016.4.5). It is backwards compatible, so you can start using it right now, but it does require some changes to take advantage of the new features. I have to upgrade the server, upgrade the agents, and then start implementing the new features! Why do I care about Hiera 5 in particular?

Hiera 3 was great, but you could only use one hiera setup on a server, regardless of how many environments were deployed on that server. This could cause problems when you wanted to change the hiera config and test it. You could not test it in a feature branch; it HAD to be promoted, affecting the entire server. If you had multiple masters, you could change the config on just one, but that was about as flexible as hiera 3 would let you be. If it worked, awesome. If it broke, you could break a whole lot that needed to be undone before you could try again.

Hiera 5 introduces independent hierarchy configurations per environment and even per module! If you want to try out a new backend like hiera-eyaml, you can now create a new feature branch eyaml-test, update the configuration in that branch, push it, and ONLY nodes that use that environment will receive the new configuration. This is a huge help in testing changes to hiera without blowing up all your nodes.
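To make that concrete, here is a minimal sketch of an environment-layer hiera.yaml in the version 5 format; the hierarchy names and paths are placeholders, not my actual config:

---
version: 5
defaults:
  datadir: data            # relative to the environment root
  data_hash: yaml_data
hierarchy:
  - name: 'Per-node data'
    path: 'nodes/%{trusted.certname}.yaml'
  - name: 'Common data'
    path: 'common.yaml'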

The per module hierarchy also means that module authors can include defaults that use hiera, rather than the params.pp pattern. This makes it easier for module users to override settings. There are also improvements in the interface for those who want to create their own backends. And, best of all, hiera 5 means the name hiera is here to stay – no more confusion between “legacy” hiera and modern lookup, it’s all called hiera 5 now.

It does mean there are some deprecations to keep in mind, but they won’t actually go away until at least Puppet 6. You can use hiera 5 now and take some time to replace the deprecated bits. This does mean we can use our existing hiera 3 setup and worry about migrating it to hiera 5 later, too, which we will take advantage of.

I always prefer to upgrade to the latest version. If for some reason you’re upgrading to Puppet 4.9, be aware that Puppet 4.10 fixed PUP-7554, which caused failures when a hiera 3 format hiera.yaml was found at the base of a controlrepo or module. I kept a hiera.yaml in the root of my controlrepo for bootstrapping purposes for a long while, and if you do, you could hit that bug. Best to just move to the latest if you can.

I think most people will like Hiera 5, but there are a ton of other features (listed at the end) and even if nothing appeals specifically, it is good to stay up to date so you don’t get stuck with a really nasty upgrade process when you find a feature you really need. Please, don’t let yourself get a full year behind on updates like I did. Sometimes it’s really difficult to get out of that situation!

Puppet Enterprise Server Upgrade

I use Puppet Enterprise at both work and home these days, so I will go through the PE upgrade experience. The Puppet OpenSource install and upgrade instructions are on the same page of the documentation, so it seems pretty easy, but your mileage may vary, of course.

First, take a look at your installation and make sure it’s in a known state – preferably a known good state all the way around, but at least a known one. If you have outstanding issues on the master, you need to resolve them. If some agents are failing to check in, you may want to take the time to fix them, or you could just keep track of the failures. After the upgrade, you don’t want to see an increase in failures. Once everything looks good, take a snapshot of your master(s) and a full application/OS backup if possible. If you have a distributed setup, perform this on all nodes as close to the same time as possible.

Second, download the latest version of PE (KB#0001) onto your master. Expand the tarball, cd into the directory, and run sudo ./puppet-enterprise-installer. You can provide a .pe.conf file with the -c option, or answer a few interactive questions to get started:

=============================================================
 Puppet Enterprise Installer
=============================================================
2017-11-08 20:38:07,432 Running command: cp /opt/puppetlabs/server/pe_build /opt/puppetlabs/server/pe_build.bak
2017-11-08 20:38:07,480 Running command: cp /opt/puppetlabs/puppet/VERSION /opt/puppetlabs/server/puppet-agent-version.bak

## We've detected an existing Puppet Enterprise 2016.5.2 install.

 Would you like to proceed with text-mode upgrade/repair? [Yn]y

## We've found a pe.conf file at /etc/puppetlabs/enterprise/conf.d/pe.conf.

 Proceed with upgrade/repair of 2016.5.2 using the pe.conf at /etc/puppetlabs/enterprise/conf.d/pe.conf? [Yn]y

The install takes a bit of time (30 minutes on my lab install). Once the upgrade is done, you’ll be directed to run puppet agent -t (with sudo of course). If you have additional compile masters or ActiveMQ hubs and spokes, also run the commands in steps 4 and 5.

You should now be able to log into the Console and see the status of your environment. You will hopefully find Intentional Changes on most of your nodes and few or no failures (if both are encountered in a run, Intentional Changes “wins” on the Console; let every node run at least twice to see if it moves back to Green or to Red before continuing).

If you do encounter failures, you will have to analyze each issue to see if it’s related to the upgrade and something you can fix, or if it’s time to roll back. If you do roll back, make sure you roll back ALL the PE components, including the PuppetDB, so you don’t leave cruft somewhere. I experienced one issue in the lab described in PUP-7878, resolved by a reboot of the master after the upgrade.

If everything is good, then it is time to proceed to upgrading the Agents.

Agent Upgrades

The Puppet docs provide instructions for upgrading Agents in a variety of methods. I prefer to use the module puppetlabs/puppet_agent, as I’ve discussed before (OpenSource, PE Linux and PE Windows clients). My setup uses my profile module and hiera data in the controlrepo; Puppet’s instructions use the Console Classifier. It really does not matter how you do this, but I did find an issue with the Puppet docs (DOCUMENT-763) – after classifying, you must set the parameter puppet_agent::package_version or no upgrade occurs for agents already running Puppet 4 or 5. Set it to 5.3.3 (obtained by running puppet --version on the master, which received the latest agent during the upgrade). Here’s how to do that in hiera:

puppet_agent::package_version: '5.3.3'

The next two agent runs will show changes. I ran my tests directly using ssh on a Linux host and it looked like this:

# First run upgrades Puppet 4 to 5
[rnelson0@build03 controlrepo:pe201732]$ sudo puppet agent -t --environment pe201732
Info: Using configured environment 'pe201732'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for build03.nelson.va
Info: Applying configuration version '1510241461'
Notice: /Stage[main]/Puppet_agent::Osfamily::Redhat/Yumrepo[pc_repo]/baseurl: baseurl changed 'https://yum.puppetlabs.com/el/$releasever/PC1/x86_64' to 'https://puppet.nelson.va:8140/packages/2017.3.2/el-7-x86_64'
Notice: /Stage[main]/Puppet_agent::Osfamily::Redhat/Yumrepo[pc_repo]/sslcacert: defined 'sslcacert' as '/etc/puppetlabs/puppet/ssl/certs/ca.pem'
Notice: /Stage[main]/Puppet_agent::Osfamily::Redhat/Yumrepo[pc_repo]/sslclientcert: defined 'sslclientcert' as '/etc/puppetlabs/puppet/ssl/certs/build03.nelson.va.pem'
Notice: /Stage[main]/Puppet_agent::Osfamily::Redhat/Yumrepo[pc_repo]/sslclientkey: defined 'sslclientkey' as '/etc/puppetlabs/puppet/ssl/private_keys/build03.nelson.va.pem'
Notice: /Stage[main]/Puppet_agent::Install/Package[puppet-agent]/ensure: ensure changed '1.9.2-1.el7' to '5.3.3-1.el7'
Notice: /Stage[main]/Puppet_enterprise::Mcollective::Server::Logs/File[/var/log/puppetlabs/mcollective]/mode: mode changed '0750' to '0755'
Notice: Applied catalog in 71.86 seconds

# Second run updates some PE components
[rnelson0@build03 controlrepo:pe201732]$ sudo puppet agent -t --environment pe201732
Info: Using configured environment 'pe201732'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for build03.nelson.va
Info: Applying configuration version '1510241554'
Notice: /Stage[main]/Puppet_enterprise::Pxp_agent/File[/etc/puppetlabs/pxp-agent/pxp-agent.conf]/content:
--- /etc/puppetlabs/pxp-agent/pxp-agent.conf 2017-11-08 21:16:56.713834368 +0000
+++ /tmp/puppet-file20171109-20790-1l6y5bg 2017-11-09 15:32:45.917909748 +0000
@@ -1 +1 @@
-{"broker-ws-uris":["wss://puppet.nelson.va:8142/pcp2/"],"pcp-version":"2","ssl-key":"/etc/puppetlabs/puppet/ssl/private_keys/build03.nelson.va.pem","ssl-cert":"/etc/puppetlabs/puppet/ssl/certs/build03.nelson.va.pem","ssl-ca-cert":"/etc/puppetlabs/puppet/ssl/certs/ca.pem","loglevel":"info"}
\ No newline at end of file
+{"broker-ws-uris":["wss://puppet.nelson.va:8142/pcp2/"],"pcp-version":"2","master-uris":["https://puppet.nelson.va:8140"],"ssl-key":"/etc/puppetlabs/puppet/ssl/private_keys/build03.nelson.va.pem","ssl-cert":"/etc/puppetlabs/puppet/ssl/certs/build03.nelson.va.pem","ssl-ca-cert":"/etc/puppetlabs/puppet/ssl/certs/ca.pem","loglevel":"info"}
\ No newline at end of file

Info: Computing checksum on file /etc/puppetlabs/pxp-agent/pxp-agent.conf
Info: /Stage[main]/Puppet_enterprise::Pxp_agent/File[/etc/puppetlabs/pxp-agent/pxp-agent.conf]: Filebucketed /etc/puppetlabs/pxp-agent/pxp-agent.conf to puppet with sum cad3d2db7a7a912a1734b7e8afa23037
Notice: /Stage[main]/Puppet_enterprise::Pxp_agent/File[/etc/puppetlabs/pxp-agent/pxp-agent.conf]/content: content changed '{md5}cad3d2db7a7a912a1734b7e8afa23037' to '{md5}a19b53e1586a748ba488ee4dcd7afc3c'
Info: /Stage[main]/Puppet_enterprise::Pxp_agent/File[/etc/puppetlabs/pxp-agent/pxp-agent.conf]: Scheduling refresh of Service[pxp-agent]
Notice: /Stage[main]/Puppet_enterprise::Pxp_agent::Service/Service[pxp-agent]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Puppet_enterprise::Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/content: [diff redacted]
Info: Computing checksum on file /etc/puppetlabs/mcollective/server.cfg
Info: /Stage[main]/Puppet_enterprise::Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]: Filebucketed /etc/puppetlabs/mcollective/server.cfg to puppet with sum 7a8d59f271273738a51b4cf05ee6b33a
Notice: /Stage[main]/Puppet_enterprise::Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/content: changed [redacted] to [redacted]
Info: /Stage[main]/Puppet_enterprise::Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]: Scheduling refresh of Service[mcollective]
Notice: /Stage[main]/Puppet_enterprise::Mcollective::Service/Service[mcollective]: Triggered 'refresh' from 1 event
Notice: Applied catalog in 6.02 seconds

I assume the PE component updates are based on facts only present with Puppet 5, facts that would not be present during the first run while the agent is still Puppet 4. Subsequent runs are stable.

I do not have a Windows agent to test with in my lab; I assume it looks similar but cannot verify. Be sure to test at least one Windows agent before releasing this change across your entire Windows fleet.

New Features

I have skipped from 2016.4 to 2017.3, which means I have missed out on the new features of four major versions: 2016.5, 2017.1, 2017.2, and 2017.3. Here are some of the big features from the release notes.

I mentioned Hiera 5 already, which I’ll discuss further in another post. I also want to immediately enable the Package Inventory. As described, I can update the Classification of the PE Agent node group to include puppet_enterprise::profile::agent with package_inventory_enabled set to true and commit the change.

While it takes effect immediately, your agents need two runs to show up: the first changes the setting so package data is collected and the second actually collects the list. Once that happens on at least one node, you’ll start seeing data populate on the Inspect > Packages page.
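If, like me, you would rather manage this through hiera than the Console, the equivalent override is a single key – a sketch assuming the parameter is resolved via automatic class parameter lookup:

# enable package data collection on agents
puppet_enterprise::profile::agent::package_inventory_enabled: true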

I do not have need for High Availability myself, but that seems really cool; in the past, it’s been quite the pain in the behind to automate yourself. I have not used Orchestrator in anger before, and I do Hiera overrides in my control repo, almost ignoring the Console Classifier otherwise, so I probably will not be exploring those features very deeply. However, I’m really excited about Tasks; that’s something I hope to explore during the winter break, perhaps by upgrading bash across all my systems!

Summary

Today we looked at why we want to upgrade to the latest Puppet and upgraded a Puppet Enterprise monolithic master and some linux agents. It’s not that hard! We also staked out features that we want to investigate and turned on the Package Inventory. There are a lot more changes than I listed, along with tons of fixed bugs and smaller improvements, so I recommend reviewing the release notes for each version to see what interests you.

I hope to be able to look into Hiera 5 and Tasks soon, look for new blog posts on those! Let me know if there’s anything else you’d like to see discussed in the comments or on twitter. Thanks!