Deploying your SSH Authorized Key via Puppet

Update: I have since published a forge module rnelson0-local_user that can be used to distribute keys as well. If you are using keys with local users, I highly recommend using the forge module. If you are not managing the users directly (say, for domain-joined nodes), continue to use the solution presented below.

Today, let’s look at deploying SSH authorized keys via puppet. An authorized key is a public key used for public key authentication (not to be confused with an SSH host key, the unique key that verifies a server is who it says it is). By attaching an authorized key to a user, any login attempt for that user that presents the corresponding private key will be authenticated successfully, giving you the ability to log in without a password. This is commonly used for automation, where no user is present to enter a password, or to let a user with a private key access systems without additional steps.

Authorized keys are typically considered more secure than a password, but they rely on protecting the private key. If the private key is not secured, anyone who obtains the private key can impersonate the account. If a non-privileged user’s key is lost, only that user’s access and files are at immediate risk. An attacker would still need to escalate privileges to damage the system. If a privileged user’s key (no one reading this logs in as a privileged user, such as root, right? RIGHT?) or an automation account’s key is lost, the immediate risk is much higher. An attacker might gain access to the entire system or be able to attack other systems. You must absolutely secure private keys and ensure you follow the principle of least privilege for all users, especially automation accounts.

Let’s look at an example of how to use a properly secured authorized key. In past articles, we’ve built a yum repository and a build server. You may be logging into these servers frequently and transferring files between the two. Every time, you need to enter your passwords. That gets old, quickly. With an authorized key in place, you could ssh to both servers by presenting your private key, no password needed. If you copy the private key to the build server or create a new key there, you could scp files from the build server to the yumrepo the same way. This should make life a lot easier for you.

There are lots of ways to generate keys depending on your OS and applications. My workflow is to use PuTTY on a Windows 7 laptop to connect to Linux VMs, then use the Linux OpenSSH client to ssh to other Linux VMs. I’ll cover generating and configuring keys with both PuTTY and OpenSSH.
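As a quick preview of the OpenSSH side, generating a key pair and building an authorized_keys entry is only a couple of commands. This is a sketch in a scratch directory, not the post's actual commands; on a real system the files live under ~/.ssh.

```shell
# Generate a passphrase-less RSA key pair in a scratch directory.
# (Use a passphrase plus ssh-agent for interactive keys; empty passphrases
# are really only appropriate for tightly restricted automation accounts.)
mkdir -p /tmp/sshdemo
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/sshdemo/id_rsa -C 'demo key' -q

# The public half is what lands in ~/.ssh/authorized_keys on the target:
cat /tmp/sshdemo/id_rsa.pub >> /tmp/sshdemo/authorized_keys
head -c 7 /tmp/sshdemo/authorized_keys   # prints "ssh-rsa"
```

The private key (id_rsa) stays with you; only the .pub half is ever copied to remote hosts.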

Continue reading

Creating a Puppet ERB Template

Recently, we looked at converting a module to be hiera friendly. Another task you may have to look forward to is tracking configuration files for different hosts providing the same service. You could have a config for each node, network, environment, etc., all of which need to be updated if some common element changes. Or you could use a Puppet template: a single template populated with node-specific elements from node facts and puppet variables. Each node receives a personalized copy, and any change to a common element is reflected across the board immediately.

As an example, I run some mediawiki servers at work. Each one points to a different database but is otherwise very similar. The search engine is SphinxSearch and it relies on the Sphinx config file /etc/sphinx/sphinx.conf. The config includes the database connection information, which varies from device to device, and a number of other settings standardized across the wikis (minimum search term length, wildcards, and other search settings). Keeping the database connection information accurate across three wikis would normally require three config files. Let’s simplify that with a template.

Puppet templates are written in ERB, a templating language that is part of the Ruby standard library. ERB tags are interpolated into the config where needed, and puppet feeds facts and variables to ERB, which determines the values to populate the config with. We have a few good sources of information on templates: the Ruby docs, the Puppet Labs Using Puppet Templates article, and the Learning Puppet chapter on Templates. I’ll pick out some highlights; reference those docs as needed as we work on our template.
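To make that concrete before we dig in, here is the flavor of what we’re aiming for: the database settings in a sphinx.conf fragment become ERB expressions fed by Puppet variables. The variable names below are illustrative, not the actual wiki config.

```shell
# Write a fragment of a hypothetical sphinx.conf ERB template; each
# <%= ... %> tag is replaced with a Puppet-supplied value when the
# catalog is compiled, so one template serves every wiki node.
cat > /tmp/sphinx.conf.erb <<'EOF'
source wiki
{
  type     = mysql
  sql_host = <%= @db_host %>
  sql_user = <%= @db_user %>
  sql_db   = <%= @db_name %>
}
EOF
grep -c '<%=' /tmp/sphinx.conf.erb   # prints 3
```

Everything outside the ERB tags (the standardized search settings) stays literal, so a change there reaches all three wikis at once.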

Continue reading

Why Puppet?

As we near the end of my scheduled puppet content, I’ve asked the Twitterverse for any other topics people want to see discussed. Jason Shiplett asked a great question: Why Puppet?

This is essentially a two-fold question. First, you must understand what Configuration Management (CM) is and why you need it. Second, of all the CM tools out there, why would you choose Puppet?

Configuration Management

In spite of my telling Jason that the world doesn’t need another “Why CM?” post, here we go 🙂

Plenty of other people have done a great job explaining what Configuration Management is and why you need it. Chief among these is the Information Technology Infrastructure Library (ITIL), a framework for IT Service Management, which describes Configuration Management in its Service Transition volume. Simplified, Configuration Management means describing and managing the state of a configuration throughout a service’s lifecycle.

Continue reading

Refactoring a Puppet class for use with Hiera

For the past few weeks, we have been working on packaging our own software and deploying it with puppet. Before that, we touched on refactoring modules to use hiera. In fact, I grandiosely referred to it as Hiera, R10K, and the end of manifests as we know them. I included a very simple example of how to refactor a per-node manifest into the role/profile pattern and use hiera to assign it to the node. Today, we’ll look at more features of hiera and how you would refactor an existing class to use hiera.

In a legacy implementation of puppet, you’ll likely find plenty of existing modules whose classes have static assignment or lots of conditionals to determine the necessary values to be applied. Even in a greenfield implementation of puppet, you may find yourself writing straight Puppet DSL for your classes before refactoring them to use hiera. Figuring out how to refactor efficiently isn’t always obvious.

First, let’s take a look at Gary Larizza’s When to Hiera and The Problem with Separating Data from Puppet Code articles. Gary covers the when and why much better than I could, so please go read his articles and then come back here. Gary also covers the common pre-hiera pattern and a few patterns that can be used when refactoring to hiera. There is another pattern that is documented indirectly by Gary (under Hiera data bindings in Puppet 3.x.x) and in the Hiera Complete Example at docs.puppetlabs.com. I’m going to explain this pattern and document it directly, adding another hiera pattern to Gary’s list.
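For a taste of the data bindings pattern before the walkthrough: a parameterized class plus a hiera key named classname::parameter is all it takes under Puppet 3. The class and parameter names here are a made-up example, not taken from Gary's posts.

```shell
# A parameterized class; under Puppet 3 data bindings, the default for
# $port is overridden automatically when a matching hiera key exists --
# no explicit hiera() call in the class body.
cat > /tmp/init.pp <<'EOF'
class myapp (
  $port = '8080',
) {
  notify { "myapp on port ${port}": }
}
EOF

# The hiera side: the key is simply <class name>::<parameter name>.
cat > /tmp/common.yaml <<'EOF'
---
myapp::port: '8088'
EOF
grep 'myapp::port' /tmp/common.yaml   # prints the overriding key
```

The class stays pure Puppet DSL with sane defaults; the data lives in hiera, which is exactly the separation Gary argues for.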

Continue reading

FPM and Build Automation

Having created one or more build servers, the next logical step is to start building software. We touched on this briefly a few weeks ago, and with a proper development station, it’s time to expand on it.

If you’re a developer by trade, you can probably skim or skip this article. Remember, this series is aimed at vSphere Admins, not devs. I’d certainly appreciate your insights in the comments or on Twitter, though!

Modifying software build processes for FPM

We’ve used FPM in the past to take a directory and turn it into a package. This works very well when /some/long/path belongs entirely to your application. What if your application drops a binary in /bin, a manpage in /usr/share/man/man5, a config file in /etc, or even just a few files in a directory shared with other packages? Let’s take a look at an extension for mediawiki. This one is very simple: we have a legacy Makefile with two useful targets, dev and prod.
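Scattered files like the ones above are usually handled with a staging directory: build or install into a scratch root, then let fpm's -C flag map that root onto / at install time. A rough sketch follows; the package name and file names are hypothetical.

```shell
# Stage files exactly the way they should land on the target system.
mkdir -p /tmp/pkgroot/bin /tmp/pkgroot/etc /tmp/pkgroot/usr/share/man/man5
touch /tmp/pkgroot/bin/mytool
touch /tmp/pkgroot/etc/mytool.conf
touch /tmp/pkgroot/usr/share/man/man5/mytool.5

# fpm can then package the staging root so each path maps onto /:
#   fpm -s dir -t rpm -n mytool -v 1.0.0 -C /tmp/pkgroot .
find /tmp/pkgroot -type f | wc -l   # prints 3
```

The trick, then, is pointing the legacy build process (make, in this case) at the staging root instead of the live filesystem.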

Continue reading

Puppetize a Build server

The Puppet series so far has really focused on VM builds and has just started to touch on software packaging. We need an appropriate place to do this work, and what better way to set that up than via Puppet itself? Today, we’ll create some roles and profiles for a build server, which could be permanent and shared amongst developers, spun up as needed for the team, or spun up per developer.

Build Profile and Role

The last few examples we have done with FPM were on our “production” servers. That’s less than ideal for a few reasons. You wouldn’t want to mess up the publicly available service while packaging, whether by overwriting a file, exhausting resources, or causing a brief outage when services restart. It is not a good idea to add compilers and development libraries to any server unnecessarily, as they increase the attack surface (additional security risks, additional packages to patch, additional items for auditors to flag, etc.). You also probably do not want your build servers in the same environment as your production servers (unless, as is the case in these examples, your “production” environment is your lab – so just pretend it’s a different environment). Let’s assume that we do not have a good reason to violate these best practices, so our goal is to set up a dedicated build server. It will require all the software we have been using so far, and we will throw on a local user. If you have LDAP or another directory service in your lab, I would suggest using it, but this is a good example since build networks are sometimes restricted.

We have two profiles to create, then. The first is the local users. We’ll call this class ::profile::users::build (matching its file path, and leaving room for other groupings of users later). The second profile is for our build software, and we will call it ::profile::build. Here are the two class files, located at profile/manifests/users/build.pp and profile/manifests/build.pp, respectively.

Continue reading

Deploying your custom application with Puppet

In the past two weeks, we learned how to create packages for our own applications and how to host them in a repository. The next step is to puppetize the application so that you can deploy your application to nodes through automation. We’ll need to add the repo to our base profile, so all nodes receive it, define a profile that requires the application, and a role class and corresponding hiera yaml to apply the configuration to a specified node. Let’s get started!

Add the repo to the base profile

This step is fairly simple. Last week, we defined the repo and applied it manually with:

  yumrepo {'el-6.5':
    descr    => 'rnelson0 El 6.5 - x86_64',
    baseurl  => 'http://yum.nelson.va/el-6.5/',
    enabled  => 'true',
    gpgcheck => 'false',
  }

Add that to your base profile. It should look something like this now:

Continue reading

Create a Yum Repo

In last week’s article, we learned how to build a package with FPM, specifically an RPM. Today, we’ll look at creating a Yum repository to host your packages. The repo can be built with puppet, which can also distribute settings so all your managed nodes can use the repo. By adding the package to the repo, it becomes available to install, again via puppet. This is the first step on the road to the automated software packaging and delivery that is vital for continuous integration.

A repo has a few components.

  • Webserver – Content is served up over http.
  • createrepo – A piece of software that manages the repo’s catalog.
  • RPMs – What it’s serving up.

We don’t need to know how the pieces work, though. We’ll rely on palli/createrepo to manage the repo itself. We just make sure a webserver is available, the directories are there, and that there’s some content available.
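For a feel of what the module manages under the hood, the layout is just a directory of RPMs under the webserver's document root, plus a repodata/ catalog that createrepo generates. The paths and package name below are a sketch matching the baseurl used later in this series, not output from the module.

```shell
# One directory per repo under the document root, full of RPMs;
# createrepo adds a repodata/ subdirectory holding the XML catalog
# that yum clients download first.
mkdir -p /tmp/www/el-6.5/repodata
touch /tmp/www/el-6.5/helloworld-1.0-1.x86_64.rpm

# On the real server, after dropping in new RPMs you (or the module) re-run:
#   createrepo /var/www/html/el-6.5
ls /tmp/www/el-6.5
```

Any time an RPM is added or updated, the catalog must be regenerated, which is exactly the bookkeeping palli/createrepo takes off our hands.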

Continue reading

Creating packages with FPM

In my exploration of Puppet, I’ve found a lot of oblique references to managing software deployments with it, but very few solid guides on how to do so. I need to tackle this for work, so I figured I should start at the top – creating a software deployment. To be clear, I’m speaking of internally developed software, or modifications to public software, not something you can find in your distribution’s packages and install with the puppet package resource type.

Creating Some Software

Going back even further, we need to create some software. I’d wager that most already have something lying around – perhaps a few scripts in a directory along with a Makefile that lets you run “make install” to put them in the final destination, or a tarball and config file that are “installed” by a script that untars the software and copies your customized config in place. If you don’t have something like that, let’s make something. How about a simple PHP application? It’s just a Hello World, nothing special, so you don’t need to know PHP for this.
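Something like this one-file app is all we need to have content worth packaging; the path is arbitrary here and would be your web root on the dev VM.

```shell
# A one-file PHP "application" to package and deploy.
mkdir -p /tmp/helloworld
cat > /tmp/helloworld/index.php <<'EOF'
<?php
echo "Hello, world!\n";
?>
EOF
grep -c 'Hello' /tmp/helloworld/index.php   # prints 1
```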

Spin up a new VM, or requisition one of your existing dev VMs. I’m going to use server01 from the Puppet series. Make sure Apache and PHP are installed, and since this node isn’t managed via our web server role, iptables may block connections, so we will stop it:

[rnelson0@server01 ~]$ sudo puppet apply -e "package {['httpd', 'php']: ensure => present}"
Notice: Compiled catalog for server01.nelson.va in environment production in 0.33 seconds
Notice: /Stage[main]/Main/Package[php]/ensure: created
Notice: Finished catalog run in 20.70 seconds
[rnelson0@server01 ~]$ sudo service httpd restart
Stopping httpd:                                            [FAILED]
Starting httpd:                                            [  OK  ]
[rnelson0@server01 ~]$ sudo service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]

Continue reading

Puppet Scale Up with Apache/Passenger

Welcome back! I hope everyone had a good summer and recharged their batteries. Bonus points if you found time to play with puppet, too! Now that we’ve had a healthy break, let’s get back to it!

When we left the series in July, we had a Puppet master, a few nodes, were implementing the roles and profiles pattern, and used r10k to manage it all. However, we didn’t address scalability. Today, we’ll take a look at addressing this by using Apache and Passenger.

Scaling Up

There are two ways to scale – out and up. If we were to scale out, we’d be concerned with running multiple masters and synchronizing all data between them. That’s something we might look at eventually, but first we want to scale up, which is the process of providing more resources to our master. Since we are vSphere admins, we can easily increase the resources provided to the VM. For instance, our VM has 1 vCPU and 2GB of RAM. It would be easy, and helpful, to increase that, perhaps to 2×4 or 4×8 vCPUxRAM.

Unfortunately, system resources are not the only limitation in our system. Out of the box, Puppet uses WEBrick and scales to about 10 nodes. More than one node trying to talk at the same time will generate conflicts and cause some or all nodes to fail to receive a catalog. No matter the resources available, these limitations persist. The answer is to use a dedicated web server with a Rack-based application stack. While any Rack-capable server will work, if you don’t have a preference, Puppet Labs suggests Apache with the Passenger module. There is a lot of information on Puppet’s site about the limitations and the remedy.
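For a taste of where we're headed, the Passenger vhost for the master looks roughly like the skeleton below. Treat it as a sketch of the config Puppet Labs documents, with the SSL certificate directives trimmed out; the Rack paths are the typical RHEL 6 locations and may differ on your system.

```shell
# Skeleton of an Apache/Passenger vhost for a puppet master (Apache 2.2
# syntax, to match the EL6-era systems in this series).
cat > /tmp/puppetmaster.conf <<'EOF'
Listen 8140
<VirtualHost *:8140>
    SSLEngine on
    # SSLCertificateFile / SSLCertificateKeyFile etc. point at the
    # master's own certs -- omitted here for brevity.
    RackBaseURI /
    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
    <Directory /usr/share/puppet/rack/puppetmasterd/>
        Options None
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>
EOF
grep -c 'VirtualHost' /tmp/puppetmaster.conf   # prints 2
```

Passenger spawns and pools multiple master processes behind Apache, which is what lifts the one-request-at-a-time WEBrick ceiling.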

Continue reading