Change your default Linux shell without root access

A friend recently mentioned his hatred of ksh and his inability to change his default shell: he is not root, and root will not make the change. I concur, ksh is turrible, and that needs fixing. Here’s a little trick for anyone else who has this issue. Let’s say you want to use bash and the shell forced upon you is ksh. The Korn shell uses .profile to set up your environment when you log in, so append these two lines to your profile:

# run bash damnit
[ ! "$(which bash | egrep '^no')" ] && [ -z "$BASH" ] && exec bash --login

We look at the output of which bash and ensure it does not say no bash in …; if that output is empty (meaning bash was found) and the BASH variable is unset (meaning we are not already running bash), we exec bash as a login shell. You can read man exec for details on specifically how it works in a given shell, but it essentially replaces the running shell with the new command. If both checks pass, bash is executed and replaces the ksh process. Once the lines are added to your profile, log in again in a new session. Voila, you now have a bash login shell!

rnelson0@dumbserver ~ $ ps -ef | grep bash
  rnelson0  5927  5298   0 19:56:42 pts/1       0:00 grep bash
  rnelson0  5298  5292   0 19:47:48 pts/1       0:00 bash --login
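You can see exec’s process-replacement behavior for yourself with a quick one-liner using plain sh (a standalone demonstration, separate from the .profile trick above):

```shell
# exec replaces the current shell process with the new command,
# so the PID ($$) is the same before and after
sh -c 'echo "pid before: $$"; exec sh -c "echo pid after: \$\$"'
```

Both lines print the same PID, because exec reuses the existing process rather than forking a child.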

If you’re forced to use a different shell, you may need to locate a different profile file. If you want to use a different shell, just sub out bash for the name of your preferred shell. Enjoy!

Tag-based Veeam Backups

As I teased in Using vCenter tags with PowerCLI, I want to explore how to use tags in a practical application. To refresh our memory, we looked at creating an Ownership category with individual tags in it, and limited VMs to having just one of the tags. We created a little script that defines our schema, in case we need to re-create it. Today we are going to work on a new category for our backups. Specifically, Veeam backups, based on SDDC6282-SPO, “Using vSphere tags for advanced policy-driven data protection,” as presented at VMworld 2015.

Defining Policy and Tags

To create our backup jobs, we need to know a few things that will translate into our tag schema. Our backup policies are defined by a combination of ownership, recovery point objective (RPO), and retention period. For example, our Development group is okay with a 24-hour RPO and backups that are retained for a week. Operations may require a 4- or 8-hour RPO and 30 days of backups. Each of those combinations will require a separate backup job. We can combine these tuples of information into individual tags so that Veeam can make use of them. We also need one more tag for VMs that do not need any backup at all. We can put all of this in a tag category called VeeamPolicy. Here’s what that might look like, again in PowerShell:

New-TagCategory -Name VeeamPolicy -Description "Veeam Backup Policy" -Cardinality Single -EntityType VirtualMachine
New-Tag -Name "NoRPO" -Category VeeamPolicy -Description "This VM has no backups"
New-Tag -Name "Development24h7d" -Category VeeamPolicy -Description "Development VMs with 24 hour RPO, 7 days retention"
New-Tag -Name "Operations8h30d" -Category VeeamPolicy -Description "Operations VM with 8 hour RPO, 30 day retention"
New-Tag -Name "Sales48h30d" -Category VeeamPolicy -Description "Sales VM with 48 hour RPO, 30 day retention"
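With the category and tags defined, each VM gets exactly one VeeamPolicy tag. Here is a sketch of assigning one; the VM name web01 is hypothetical, and an existing Connect-VIServer session is assumed:

```powershell
# look up the policy tag and assign it to a VM
$tag = Get-Tag -Category VeeamPolicy -Name "Development24h7d"
New-TagAssignment -Entity (Get-VM -Name web01) -Tag $tag
```

Because the category has Single cardinality, assigning a different VeeamPolicy tag to the same VM requires removing the old assignment first.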

Continue reading

Using vCenter tags with PowerCLI

I was recently introduced to some practical usage of vCenter tags. I used tags to build a fairly easy and dynamic set of sources for Veeam backup jobs, but before I get into that, I want to explain tags a little bit for those who are not familiar with them.

Tags are pretty easy to understand conceptually. You create a category that contains a few tags. An object can receive one or more tags from a given category. Tags are managed by vCenter, not by the vSphere hosts themselves, so they require a license that provides vCenter for management. That’s a pretty simple explanation and list of requirements.

A Tag Category has a collection of Tags available within it. If the Category is “single cardinality,” an object can only be assigned one tag from the category. This might be good for a category associated with ownership or recovery point objective (RPO). A Category can also be “multiple cardinality,” where a single object can be assigned multiple tags from the category. This would be helpful for associating applications with a VM, or a list of support teams that need to be notified if there is a planned impact or outage.
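The two cardinalities can be sketched in PowerCLI like this (the category and tag names here are hypothetical examples, not part of the schema built elsewhere in this series):

```powershell
# single cardinality: an object may hold only one tag from this category
New-TagCategory -Name OwnerExample -Cardinality Single -EntityType VirtualMachine

# multiple cardinality: an object may hold several tags from this category
New-TagCategory -Name Applications -Cardinality Multiple -EntityType VirtualMachine
New-Tag -Name "nginx" -Category Applications
New-Tag -Name "mysql" -Category Applications
```

A web server VM could then carry both the nginx and mysql tags from Applications, but only a single tag from OwnerExample.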

I’m going to show you how to manage Tags and Tag Categories with PowerCLI (specifically with the PowerShell ISE and a profile that loads PowerCLI), but you can also create and manage them manually through the vSphere Web Client, under the Tags item in the left-hand menu. You can create, delete, and rename tags and categories there all day long, and you can right-click on an object almost anywhere else and select Edit Tags to edit its assigned tags. When you view most objects, you’ll see an area in the Summary tab labeled Tags that displays and lets you manage assignments.

Continue reading

Print the rspec-puppet catalog, courtesy of @willaerk

Sometimes, when you are writing an rspec-puppet test, you’re not sure exactly how the test should be written. You know that you want to test a resource with some extra attribute, but you may be describing the resource wrong, or using a bad regex to test the contents. Rspec-puppet will helpfully tell you when the code and the test do not match, but it does not always tell you where the mismatch is. In those cases, you want to see the whole catalog to see more detail on the resource attributes.
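Within an rspec-puppet example, the compiled catalog is available via the catalogue helper, so a quick-and-dirty dump might look like this sketch (the class name is hypothetical, and to_manifest output is assumed to be readable enough for debugging):

```ruby
describe 'profile::example' do
  # temporary debugging test: print every resource in the compiled catalog
  it 'dumps the catalog' do
    catalogue.resources.each { |r| puts r.to_manifest }
  end
end
```

Remove the debugging example once you have found the mismatched attribute.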

Let’s look at a very simple profile and its corresponding tests and test output:

Continue reading

Rspec fixtures tip: symlink to other modules in your controlrepo

If you are writing rspec tests against your controlrepo, specifically your profile module, you need to set up your .fixtures.yml file to reference the other modules in your controlrepo. For example, here’s a list of the modules in the dist directory of a controlrepo:

dist
├── eyaml
├── profile
└── role

If any of the profile classes reference one of the other modules, that module needs to be referenced as a fixture, or an error is generated saying the class or resource cannot be found:

$ be rspec spec/classes/eyaml_spec.rb

profile::eyaml
  should contain Class[profile::eyaml] (FAILED - 1)
  should contain Class[eyaml] (FAILED - 2)

Failures:

  1) profile::eyaml should contain Class[profile::eyaml]
     Failure/Error: it { should create_class('profile::eyaml') }

     Puppet::PreformattedError:
       Evaluation Error: Error while evaluating a Function Call, Could not find class ::eyaml for build at /home/rnelson0/puppet/controlrepo/dist/profile/spec/fixtures/modules/profile/manifests/eyaml.pp:15:3 on node build

We can reference these local modules with the symlinks type of fixture, like this:

fixtures:
  symlinks:
    profile: "#{source_dir}"
    eyaml: "#{source_dir}/../eyaml"

The paths must be fully qualified, so we use #{source_dir}, a magical value available in .fixtures.yml that expands to the fully-qualified path of the current directory, dist/profile. Follow the symlinks type with repositories and other types of fixtures as you normally would.
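Mixing fixture types, a fuller .fixtures.yml might look like this (the stdlib entry is just an illustrative example of a git-based fixture, not something this controlrepo necessarily needs):

```yaml
fixtures:
  repositories:
    stdlib: "https://github.com/puppetlabs/puppetlabs-stdlib"
  symlinks:
    profile: "#{source_dir}"
    eyaml: "#{source_dir}/../eyaml"
```

Repository fixtures are cloned during spec_prep, while symlink fixtures point back at the live code in your controlrepo.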

When you run rake spec or rake spec_prep, these symlinks will be created underneath dist/profile/spec/fixtures/modules automatically for you. Your rspec tests on profiles that include content from these other modules will no longer error with an inability to find the module in question.

$ be rake spec_prep
$ be rspec spec/classes/eyaml_spec.rb

profile::eyaml
  should contain Class[profile::eyaml]
  should contain Class[eyaml]

Finished in 3.39 seconds (files took 1.61 seconds to load)
2 examples, 0 failures

Enjoy!

Including additional resources in your rspec-puppet tests

I’m a strong advocate of creating unit tests for your puppet code with rspec-puppet. I’ve written a number of articles on tests before, so here’s one more.

When you’re testing a class, sometimes there’s an expectation that it’s used alongside another class or resource. But your test is only against your class, so how do you add the other resource to the catalog? As an example, let’s look at the module jlambert121/puppet and how some of the subclasses in the module are tested, specifically puppet::server::install. The top level describe statement ensures that the class puppet::server::install is present. It does not ensure that some of the other classes it relies upon are present.

This is done through the :pre_condition statements at the beginning of each 2nd-level describe, starting on line 3:

  let(:pre_condition) { 'class {"::puppet": server => true}' }

Other describe blocks instantiate the class with different parameters, as needed for the individual test.

In the :pre_condition, you can add any Puppet DSL you want. I would not suggest going overboard, but including or instantiating another class or a small number of resources can be very helpful. This can be especially helpful with profile classes, where you may want to test behavior when another profile is present, e.g. testing an application profile with multiple webserver profiles.
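For instance, a profile test might pull in a neighboring profile and a supporting resource at once. A sketch, with hypothetical class and resource names:

```ruby
describe 'profile::app' do
  # :pre_condition accepts a string of Puppet DSL added to the catalog
  let(:pre_condition) do
    <<-PP
      include profile::webserver
      user { 'deploy': ensure => present }
    PP
  end

  it { is_expected.to compile }
end
```

Keep the pre_condition minimal; it exists to satisfy dependencies, not to re-test the other class.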

I did not find the :pre_condition statement documented anywhere in the rspec-puppet docs, so I hope this helps others discover it.

Non-conforming in IT

Many people in IT are non-conformists compared to our non-IT brethren. Common themes include being good at math; enjoying games (video, but tabletop games like D&D as well); beginning our day with coffee or other forms of caffeine; being unable to stand that sportsball; being introverted; and spending all day in front of a computer, among a few others. This certainly isn’t an exhaustive list, and not everyone hits all of these, but I imagine most people reading this found a few things that describe them. Some, like coffee and caffeine, aren’t limited to IT people, but they add to the collective list of commonalities. For a bunch of people going against the grain, we seem to be choosing a rather similar direction!

You don’t have to be a non-conformist in IT, though. I’m not! I do enjoy my video games and I’m on the introverted side of the spectrum. However, I don’t enjoy math beyond calculus and even prefer English and related studies, I don’t drink caffeine on a regular basis, I love team sports, especially football, and my main hobby is woodworking in my garage. I’m the guy whining about the lack of just plain water at a conference or causing your eyes to glaze over by using football analogies in meetings.

If there’s any point to this short article, it’s a reminder that we all do really have different interests. It can be a little weird to be made the outsider of the outsider group, so be sure to celebrate those differences instead of using them against each other.

Puppet Enterprise Migration from 3.8.4 to 2015.3.3

I recently completed a PE migration from 3.8.4 to 2015.3.3 (puppetserver 2.2.41 and puppet agent 4.3.2). This was a somewhat painful exercise, as we kept running into issues because we had gotten so far behind on upgrades. If you need to perform the same kind of upgrade, I hope this broad-stroke description of the upgrade steps will help you. Before we get to the upgrade, let’s cover some of the pre-requisites.

Release Notes

Always read the release notes first. I am sure I will cover some of the notes below in the specific problems we ran into, but there’s a lot in there that we did NOT encounter.

Future Parser

When you get to PE 2015.3.3, you’ll be running Puppet 4. Make sure you have the future parser enabled on your master or agents by following these instructions. You’ll likely run into at least one issue if you weren’t doing this before. For example, automatic string/array conversions may not work as you expect. Get your code up to par before moving forward.
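On a 3.8 master, enabling the future parser is a one-line puppet.conf change. A sketch; the file path and section placement may vary in your setup:

```ini
# /etc/puppetlabs/puppet/puppet.conf on the 3.8 master
[main]
  parser = future
```

Run your existing code and tests against the future parser until everything is clean, then proceed with the upgrade.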

Continue reading

Setting up modulesync with multiple Puppet modules

If you maintain more than one Puppet module, you’ve probably spent some time aligning changes in your general setup by hand – rspec helpers, Gemfile, Rakefile, your travis config, etc. Once you have a third or a fourth module, you find that this does not scale. Thankfully, there’s a great tool to help automate this: Modulesync.

How It Works

We’ll discuss how modulesync works first, so that we understand how to set it up. To perform a dry run, you run msync update --noop in a modulesync configuration repo. This creates a directory, modules, and clones the managed modules into it. For each module, it then displays the diffs between the existing contents of the module and the config defined by the modulesync configuration repo. Once the changes are reviewed for accuracy, run msync update -m "Commit message here" and the diffs are applied to the default branch (master) with the commit message specified. By creating a modulesync.yml file, the default namespace and branch can be specified. The use of a different branch name allows you to create a PR to apply the changes.
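The modulesync.yml defaults mentioned above might look like this (the namespace and branch values are hypothetical examples):

```yaml
# modulesync.yml in the configuration repo
namespace: rnelson0
branch: modulesync
```

With a non-default branch configured, msync pushes its commits to that branch, leaving master untouched until you open and merge a PR.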

Continue reading

Create a Least-Privilege account to perform domain joins

I’ve been working to automate joining linux machines to an Active Directory domain lately and I was surprised to find little documentation on creating an AD account just for the domain joins. I was able to piece things together by looking at umpteen documents and lots of trial and error. I’ve compiled what I found in the hopes that others do not have to struggle so much. This is just what I was able to find out – please let me know if you have a better way!


In a Windows Active Directory domain, it’s important to join computers to the domain. When the computer is joined, a computer account is created that lets it do things like send user authentication requests and receive responses, update its IP/name in DNS, and otherwise participate in the AD domain. This is a pretty fundamental and vital requirement, so Microsoft has made it easy for users to perform domain joins, but with some limits.

If a user is enabled, a member of Domain Users, and a member of the local Administrators group, and the correct authentication information is used, they are granted the ability to join a given computer to the domain. The key ms-ds-MachineAccountQuota is defined at the domain level with a default value of 10. Due to the quota, any random user can join 10 computers to the domain, after which they will no longer be able to do so. Enabled members of Domain Administrators are exempt from both the local Administrators membership requirement and the quota, and can join unlimited computers so long as the correct authentication information is used.

This works great with Windows machines, but presents a slightly different problem when joining non-Windows machines to a domain. In these cases, there is likely no local Administrators group, so regular users are never able to satisfy all of the requirements to join a machine to the domain. Domain Administrators can, but that violates the principle of least privilege and is not the best option for production environments. We want a non-administrator account that can join as many computers to the domain as required. I found that two steps were required to create this account.

Active Directory Delegation

In the Active Directory Users and Computers MMC snap-in, you can use the Delegate Control wizard to delegate the ability to create computer accounts to a user account. Unfortunately, I have not found scriptable commands that are an equivalent to this wizard, so we need to describe the GUI process.

  • Create an account, ex: domainjoin, in the appropriate hierarchy of your Active Directory. It is recommended that User cannot change password and Password never expires are selected so the account is always available. It will not have the ability to log into a server or any elevated privileges.
  • Delegate the ability to manage computer objects to the user with the Active Directory Users and Computers snap in (from JSI Tip 8144 with tweaks).
    • Open the Active Directory Users and Computers snap-in.
    • Right click the container under which you want the computers added (ex: Computers) and choose Delegate Control.
    • Click Next.
    • Click Add and supply your user account(s), e.g. domainjoin. Click Next when complete.
    • Select Create custom task to delegate and click Next.
    • Select Only the following objects in the folder and then check Computer objects and Create selected objects in this folder. Click Next.
    • Under Permissions, check Create All Child Objects and Write All Properties. Click Next.
    • Click Finish.

Increase the MachineAccountQuota value

The second step is to increase the quota from the default value of 10. This appears to be done domain-wide, so all users will get the new quota. I somehow doubt that will be a problem, but if it is, you will have to do further research on how to proceed. To increase the quota, we just need a single command entered in an Administrative PowerShell terminal.

Set-ADDomain example.com -Replace @{"ms-ds-MachineAccountQuota"="10000"}
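To confirm the change took, you can read the attribute back. A sketch, assuming the ActiveDirectory PowerShell module is loaded:

```powershell
# read the current quota from the domain object
Get-ADObject -Identity (Get-ADDomain example.com).DistinguishedName `
  -Properties ms-DS-MachineAccountQuota |
  Select-Object ms-DS-MachineAccountQuota
```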

I used 10,000 because we have fewer than 100 nodes ready to join the domain, which leaves plenty of headroom. You can increase the value if your scale is higher. I’m sure there’s a way to reset the quota, too; I just haven’t found it yet.

Joining your node to the domain

You’re now ready to join your node to the domain with your new least-privilege account domainjoin. I have created a puppet module, domain_join, to meet my personal needs. I’d love to hear how you’re tackling this issue, especially if the solution is better than mine!
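For reference, on a Linux node that uses realmd/sssd, the join itself might be as simple as this sketch (substitute your own domain; the tooling on your distribution may differ):

```shell
# join the domain with the least-privilege account created above
sudo realm join --user=domainjoin example.com
```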