Puppet Tech Debt Day 2: Adjusting Rspec Tests

Yesterday was our 14th anniversary, so I didn’t have time to write a blog post, but I did look into a tech debt issue: rspec tests. In addition to adding rspec-puppet-facts, I found a program called onceover that offers two concepts I want to look into.

First, there is a rake task to generate fixtures. I have a similar feature in generate-puppetfile, but it’s not as polished as I’d like – it often requires some manual touch-up afterward. Onceover’s rake task does not have that issue. I hope to be able to grab the rake task without being forced to use the rest of onceover or disturbing the existing test setup. Maybe I’ll be interested in the rest of it someday, but not right now, and it’s great when you’re not forced into a forklift upgrade like that.
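
I haven’t dug into onceover’s implementation yet, but the core idea of fixture generation can be sketched in a few lines of Ruby. This is only an illustration of the concept – the parsing and the output format here are my assumptions, not onceover’s actual code:

```ruby
require 'yaml'

# Toy sketch: turn "mod 'author/name'" lines from a Puppetfile into the
# forge_modules section of a .fixtures.yml. Real generators (onceover,
# generate-puppetfile) also handle git sources, refs, and versions.
def fixtures_from_puppetfile(puppetfile_text)
  forge_modules = {}
  puppetfile_text.each_line do |line|
    # Match lines like: mod 'puppetlabs/stdlib', '4.13.1' (or 'puppetlabs-stdlib')
    m = line.match(/^\s*mod\s+['"]([\w-]+)[\/-]([\w-]+)['"]/)
    next unless m
    author, name = m[1], m[2]
    forge_modules[name] = "#{author}/#{name}"
  end
  { 'fixtures' => { 'forge_modules' => forge_modules } }.to_yaml
end
```

Point it at a real Puppetfile with `puts fixtures_from_puppetfile(File.read('Puppetfile'))` and you get a starting `.fixtures.yml` to hand-tune.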

The second item is centralizing rspec-puppet tests in the controlrepo rather than inside each module itself. That will change the relevant portions of my controlrepo layout from:

.
├── dist
│   ├── profile
│   │   ├── files
│   │   ├── lib
│   │   ├── manifests
│   │   ├── metadata.json
│   │   ├── Rakefile
│   │   ├── spec
│   │   │   ├── classes
│   │   │   ├── fixtures
│   │   │   │   ├── hieradata
│   │   │   │   ├── hiera.yaml
│   │   │   └── spec_helper.rb
│   │   ├── templates
│   │   └── tests
│   └── role
│       ├── manifests
│       ├── metadata.json
│       ├── Rakefile
│       ├── README.md
│       ├── spec
│       └── tests
├── environment.conf
├── Gemfile
├── hiera
├── hiera.yaml
├── manifests
│   └── site.pp
├── Puppetfile
└── r10k_installation.pp

To:

.
├── dist
│   ├── profile
│   │   ├── files
│   │   ├── lib
│   │   ├── manifests
│   │   ├── metadata.json
│   │   ├── Rakefile
│   │   ├── templates
│   │   └── tests
│   └── role
│       ├── manifests
│       ├── metadata.json
│       ├── Rakefile
│       ├── README.md
│       └── tests
├── environment.conf
├── Gemfile
├── hiera
├── hiera.yaml
├── manifests
│   └── site.pp
├── Puppetfile
├── r10k_installation.pp
└── spec
    ├── classes
    ├── fixtures
    ├── hieradata
    ├── hiera.yaml
    └── spec_helper.rb

I haven’t made this change yet, but I talked to some others who already run their tests this way and are benefiting from the simplified setup. I’m looking forward to trying it out soon.
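
Once the specs move up a level, the controlrepo needs its own spec/spec_helper.rb. I haven’t written mine yet, so this is only a sketch of what I expect it to look like – the paths and options are assumptions based on my current layout, not onceover’s generated output:

```ruby
# spec/spec_helper.rb – sketch for controlrepo-level rspec-puppet testing.
require 'rspec-puppet'
require 'rspec-puppet-facts'
include RspecPuppetFacts

RSpec.configure do |c|
  # Resolve roles/profiles from dist/ plus whatever r10k deploys to modules/
  c.module_path  = [File.join(__dir__, '..', 'dist'),
                    File.join(__dir__, '..', 'modules')].join(File::PATH_SEPARATOR)
  c.hiera_config = File.join(__dir__, 'fixtures', 'hiera.yaml')
end
```

The key change is that module_path has to cover both dist/ (roles and profiles) and wherever r10k puts Forge modules, so one helper can resolve everything.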

November Goal: Pay down Puppet Tech Debt Part 1

It is getting close to the time of the year when the pace of feature-driven change slows down – people want stability when they are on vacation and especially when they’re holding the pager and others are on vacation, and Lord help anyone who negatively affects a Black Friday sale. This is a great time to work on your technical debt. First, you need to identify where it lies!

I expect to spend most of this week identifying areas at work where there are pain points specifically related to tech debt and whether it is better to keep paying the interest or if it is time to pay the whole thing down. I have identified a few candidates related to Puppet already, mostly from lessons learned at PuppetConf.

  • Convert tests to use rspec-puppet-facts. A long list of custom facts in each spec test becomes untenable pretty quickly. Preliminary tests show that I need to choose whether tests are based on Windows or Linux facts, as mixing and matching in the same tests would break most of them, and I’m leaning toward Linux. This does mean that some tests will not use rspec-puppet-facts and will keep their own fact lists.
  • Convert params patterns to Data in Modules.
  • Try out octocatalog-diff – some unexpected string conversions have been painful before.
  • Get a BitBucket-Jenkins-Puppet workflow working and document it. This looks promising; does anyone else have workflow guides I can follow?
  • Update my Puppet Workflow documentation. This isn’t paying down any actual tech debt, but I think it goes hand-in-hand with the above item, and revisiting it should provide some clarity about what we do and maybe highlight some room for improvement.
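
To illustrate the Linux-only choice above: rspec-puppet-facts exposes the supported OSes as a hash of OS name to facts, which you can filter before iterating. Here is a toy sketch in plain Ruby, with hand-written fact hashes standing in for what the real `on_supported_os` pulls from FacterDB:

```ruby
# Stand-in for rspec-puppet-facts' on_supported_os, which returns a hash
# like { 'centos-7-x86_64' => { kernel: 'Linux', ... }, ... }. These fact
# sets are hand-written examples, not real FacterDB data.
def on_supported_os
  {
    'centos-7-x86_64'        => { kernel: 'Linux',   os: { 'family' => 'RedHat' } },
    'ubuntu-16.04-x86_64'    => { kernel: 'Linux',   os: { 'family' => 'Debian' } },
    'windows-2012r2-x86_64'  => { kernel: 'windows', os: { 'family' => 'windows' } },
  }
end

# Keep only Linux fact sets. In a real spec, the loop body over this hash
# would be `let(:facts) { facts }` plus `it { is_expected.to compile }`.
def linux_only(fact_sets)
  fact_sets.select { |_os, facts| facts[:kernel] == 'Linux' }
end
```

The Windows-based tests then simply stay outside the `linux_only` loop with their own fact lists.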

I’m sure there will be more to come. I will try to blog about my progress throughout the vDM30in30 challenge.

PuppetConf Followup: Upgrading to Puppet 4

Last week, I gave a talk at PuppetConf 2016, “Enjoying the Journey from Puppet 3.x to 4.x,” and received some great feedback. One of the major points is that you wanted to hear more opinionated viewpoints than “it depends,” even when it depends! It can be difficult to fit that into a 45-minute talk – heck, I had a 45-minute talk at the airport about just one slide! – but thankfully, I have a blog where I can keep writing and no one can stop me. Let’s take a look at my slides and go through some of the “it depends” points with some more strongly worded opinions.

Continue reading

PuppetConf 2016 Wrap-Up

Last week, I attended PuppetConf 2016. Spoiler alert: it was pretty awesome! Let’s take a look at what happened and provide some thoughts on what it means for the future of Puppet and IT in general. You can see all my live-tweets using this link, and Storify links are in each section.

Contributor’s Summit

Storify

The day before the conference talks is the Contributor’s Summit. It is a combination of group brainstorming, a hackathon, and face time. It starts out with a few talks on where Puppet and the community are, and a non-keynote-spoilering rough idea of where things are going. After about two hours of talks, the summit breaks out into self-managed brainstorming and hacking. If you have a project or idea you are working on, you are encouraged to step on stage and announce what you plan to work on and where you’re sitting. Others can then join you to contribute to what you’re working on. Or, you could hack away wherever you’re sitting and mingle with other attendees at will.

Continue reading

Puppet 3 End of Life 12/31/2016

I mentioned this at PuppetConf: Puppet 3 support ends 12/31/2016! Hopefully you weren’t surprised, but if you were, you have just over 60 days to get upgraded. My talk at PuppetConf was about the upgrade journey (video), so it may help, and there was a whole track for Puppet 4 on the PuppetConf 2016 video list. Get thee to the upgrade-mobile, pronto!

Started the upgrade and having problems? Ask on the community slack. Need help doing the actual work? If you’re on PE, engage professional services; there are many consultants who will be willing to help you with FOSS.

Some of you have also asked about a reference for this EOS date:

Conference Gadget OpSec

I’m getting ready for PuppetConf shortly, and that got me thinking about how to survive conferences with your gadgets’ operational security (opsec) intact. Here are a few things I’ve learned over the last few years, in no particular order:

  • Charge your devices every night. Check them in the morning to see they actually charged; if not, make sure they’re plugged in while you’re taking a shower and getting breakfast so they can survive the long day. Nothing like sitting down in the keynote and realizing your phone is at 20% and it hasn’t even started. Don’t forget to charge any battery packs you brought.
  • Reduce brightness settings on anything with a screen – your laptop, tablet, phone, watch, etc. It should be very low, somewhere between “no one else can read this” and “I can’t read this.” This serves two purposes:
    • Prevent others from reading your screen. The person behind you probably doesn’t need to read your email, and definitely not your KeePass/LastPass/etc. Nor do they need to be blinded by it during a presentation where the lights are dimmed.
    • Save battery life. You won’t miss as much of the conference and you save yourself from another risky event…
  • Bring your own charging cables/adapters and battery packs. Do not borrow them or use USB charging stations. (If you really must borrow a charge, make sure you trust that person with your digital life.) Most devices use a USB cable of some sort, and in case you haven’t heard, USB security is pretty horrible and opens you up to being rooted and data exfiltration (see BadUSB, Mactans, USB keystroke loggers and plenty of others). It’s just not worth it.
  • Determine if you want to bring your gadgets at all. This is especially true at security-oriented conferences. Hacks abound at these things, including hacking the cell service. If you must bring a device, it might be best to acquire something for use only at that conference and destroy it afterward. That seems harsh, but flashing the device may not remove some infections. Are you willing to risk it?
  • Use a VPN, or at least prefer cell service over wifi. Make sure that any data you transmit is protected from malicious and inadvertent snooping. Most of us are not at security conferences where the cell service is hacked, so if you don’t have a VPN, cell service is probably pretty secure in comparison to wifi – but not always (know the atmosphere). Adding a VPN on top is best, though. If your company doesn’t provide one, find a trustworthy service or set one up at home.
  • Ensure you have good password hygiene. At a minimum, make sure they’re of reasonable quality and aren’t shared between services. Jessy Irwin talks about this on a Digital Underground PodCast.
  • Don’t log into anything you don’t have to. For persistent-access services, like email or file sync, log in at home so you have a working token and do not need to enter the password again. For anything you need to authenticate to every time, it’s probably not a good idea. Every use of credentials potentially exposes them to onlookers. Pay your mortgage before you leave or after you get back, not from the hotel wifi.
  • Have a Two Factor Authentication (TFA) backup plan. TFA is much more secure than Two Step Authentication (TSA), but often has some limitations for certain use cases that you need to understand. TSA codes can usually be sent to a new device, whereas adding a new device to your TFA device list may require the existing TFA device. If the original is lost or hacked, you may have no way to recover your account, or it may take significant effort above your “worth my time” threshold. Understand what services would be affected and make sure you have another way to recover access. This might include disabling TFA for the duration – if so, ask yourself again if you really want to bring that gadget. This is best thought through before converting a service to TFA, but now is the time to double check.
  • Keep your devices with you, or in something more secure than the hotel safe. Those safes are often easily broken, as shown here and here. Especially at those security conferences. Definitely don’t leave your laptop unlocked and unattended at the bloggers table. Same thing with your charging and battery equipment.
  • If you don’t need a particular gadget, leave it at home. This is so important, I’m mentioning it twice. Earlier I talked about devices being hacked, but you also cannot lose something if it’s in the dresser at home. Maybe you need your phone, but the FitBit can stay.
  • Bring non-gadget backups. This is especially true for payments. If your phone is hacked, lost, or falls in the toilet, make sure you have at least one physical credit card with you.
  • Maintain a list of the devices, services, and payment methods you travel with. When something bad does happen, it’s really helpful to have a list of what’s affected. Keep a copy at home in case you lose everything, as well as taking a (modified?) copy with you. The list should help you determine what you need to recover, but not contain information that someone else could use to steal your identity. In other words, “Bank account check card, $phone” is fine; “Bank Of Bad Opsec, $phone, $card_number, $expiration, $ccv” is way too much. If something happens, start making phone calls – and if your copy of the list was lost too, that’s why you keep one at home. Make the calls now; do not wait till you get home and find $20k in charges to dispute or that your entire cloud drive was emptied.
  • Be paranoid. It may not come naturally to all of us, but it is key to good OpSec. If you think something might expose you unnecessarily, don’t do it. It is better to be safe than sorry.

I also have one non-opsec tip for conferences: always call your vendor reps and ask what they have going on at the conference. You can usually arrange some one-on-one time with their engineering team or attend their event, where you can meet others using the same products and compare notes.

If you have your own tips, drop them in the comments or send them to me on twitter!

Getting started with vCheck

If you use vSphere and particularly vCenter, you’re probably at least passingly familiar with PowerCLI, a package of snap-ins and modules for PowerShell. It is my preferred language for interacting with the vSphere/vCenter APIs, since it has (IMO) the best documentation of the available languages and API SDKs. If not, I recommend downloading it and playing with it; it can really help you automate many of your repetitive tasks with less Flash and less right-clicking.

One of the most popular tools built with PowerCLI is vCheck. It’s a framework for running a number of checks against your vSphere infrastructure and determining what operational issues are present – something every Ops team needs. It won’t replace a monitoring system such as vROps or even Nagios, but it augments such systems very well. For example, it can report on VMs that have ISOs attached, or where snapshots have been present for more than 7 days – issues that probably aren’t worth paging anyone out for, but need to be dealt with eventually. Many of us have built homegrown solutions for this, maybe even with PowerCLI, but it is difficult to beat a tool that is designed to meet the needs of a large percentage of vSphere users, is actively developed by VMware employees, and provides a framework you can extend with instance-specific checks. You can always run your own tools and vCheck together, too.

Let’s take a look at vCheck and how to get started with it today. We’ll download it, configure it, schedule it as a daily task, review how to enable and disable checks, and store your configuration in version control. This provides a solid base that you can tweak until it fits your specific needs just right.

Continue reading

vROps/Log Insight Integration and Troubleshooting

Update: Special thanks to Yogita N. Patil and VMware Technical Support for their assistance with the issues below!

Last week, I was trying to integrate vRealize Log Insight with vRealize Operations (vROps) so that I could ‘launch in context’ from vROps. This adds a context-sensitive action to vROps that lets you pull up Log Insight’s Interactive Analysis feature against the alert or object you are currently viewing. This makes it easy to drill down into logs with a lot less clicking around:

Launch in context is a feature in vRealize Operations Manager that lets you launch an external application via URL in a specific context. The context is defined by the active UI element and object selection. Launch in context lets the vRealize Log Insight adapter add menu items to a number of different views within the Custom user interface and the vSphere user interface of vRealize Operations Manager.

The documentation to enable this feature seems pretty simple. I ran into a few problems, though…

The requirements are pretty simple, but were the first thing to trip me up. You want to be on Log Insight 3.6 and vROps 6.3. While Log Insight had been upgraded a day or two earlier, vROps was at 6.1. When performing the upgrade of vROps, it did not register its extension properly! Going into the Managed Object Browser showed there was still a vCOps 6.1 registration instance (yes, the extension is still called vCOps!). In addition, the extension was registered by IP, not by DNS. The extension needs to be in place for the steps below, or you receive even more opaque error messages, so I encourage you to verify it now. You can investigate your own MOB at a link similar to https://vcsa.fqdn.example.com/mob, and specifically look at the vROps extension at https://vcsa.fqdn.example.com/mob/?moid=ExtensionManager&doPath=extensionList%5B“com.vmware.vcops”%5D.client

Continue reading

Deploy your #Puppet Enterprise license key with Puppet

Since I manage my Puppet infrastructure with Puppet itself, I aim for full automation. For Puppet Enterprise, that includes deploying the license key file from the puppet fileserver (profile/files/master/license.key served as puppet:///modules/profile/master/license.key). When upgrading to the latest Puppet Enterprise version, 2016.2.0, I encountered a change that was tricky to resolve – the puppet_enterprise::license class accepted a license_key parameter, which was now marked as deprecated:

Warning: puppet_enterprise::license::license_key is deprecated and will be removed in the next
    PE version. Please use puppet_enterprise::license_key_path. If using the Node Manager, the class
    is located in the PE Infrastructure node group.

Easy, I’ll just use the license_key_path parameter instead! Except it wants the location of a file on the master – and deploying a file to the master is exactly what I’m trying to do!
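
A rough sketch of the direction this points in – keep managing the file with Puppet, then tell PE where it lives. The target path here is my assumption, and in practice puppet_enterprise::license_key_path would be set via Hiera or the PE Infrastructure node group rather than hard-coded:

```puppet
# Sketch only: deploy the license file from the fileserver path mentioned
# above, to a path of our choosing on the master (path is an assumption).
file { '/etc/puppetlabs/license.key':
  ensure => file,
  source => 'puppet:///modules/profile/master/license.key',
}
```

with `puppet_enterprise::license_key_path: '/etc/puppetlabs/license.key'` in the data. The remaining wrinkle, of course, is ensuring the file lands before anything consumes that path.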

Continue reading

Upgrading to Puppet 4 at #PuppetConf 2016

As I did last year, I submitted a proposal for PuppetConf 2016 and it was accepted! As I did last year, I am requesting your help with it.

The talk, Enjoying the Journey from Puppet 3.x to 4.x, will help attendees lay out a plan to get to Puppet 4. I will be sharing my experiences from POSS and PE upgrades, including tools to assist with the migration and some pitfalls to avoid. There are many ways to perform these upgrades and my experiences are limited, so I’d like to hear about yours. If you are interested in sharing your experiences and grant me permission to share them in my talk, you can contact me on twitter/DM or by submitting a PR against my PuppetConf github repo. Let me know if you would like to be credited or keep it anonymous. Thanks!