To favorite or to retweet

The favorite and retweet buttons serve distinct purposes on Twitter, though of course not everyone uses them as our Twitter gods intended.


Retweeting accomplishes two things: 1) sharing a tweet you found worthy of sharing, and 2) promoting the author’s tweet for more visibility. Retweeting is a way of saying, “Check this out, I like it!”


A favorite is more like a bookmark. Instead of having to scroll through your timeline, a favorite is always available under the Favorites button in your client of choice. You can use this to reference a tweet when you want to, or view a link in the tweet later on a different device, and then unfavorite it when you no longer need it. Favoriting is a way of saying, “I want to look at this tweet later.”

Neither is a Like

Don’t treat Twitter like the book of faces. Favoriting a tweet isn’t a way to tell other people that it’s a great tweet; in fact, it won’t show up on your timeline, and only the author will be notified of your favorite. Someone would have to be stalking you to check out your favorites, and while that’s a possibility, it’s not as noticeable as a retweet. Likewise, if you want to save something for later, retweeting it WILL make it show up on your “Me” page, but it can still get pushed further down the timeline, so it’s not as memorable as a favorite.

This isn’t obvious, and it took me a while to figure out the difference, but there is one. I don’t expect anyone to change their behavior based on this – in fact, the twitterverse will disappoint me if the Twitter announcement of this post isn’t favorited by everyone – but I just had to say something 🙂

Use existing definitions as a baseline

Sometimes we spend way too long trying to define things in our heads when we could pull existing configurations from the system. It’s vital to have a full service definition – otherwise, any promotion of the service through environments will turn up missing components and make your life hell. If you’re building a new service that looks similar to an old one, or evolves the old service, steal the old service’s definition and then modify it.


There are a number of ways to gather existing service definitions. If you’re building a new host and you have Enterprise Plus licenses, use Host Profiles. Export an existing host’s config to a host profile, uncheck the irrelevant portions, change what’s relevant but different, and apply it to the new host. It might take a few tweaks, but you’ll get it right soon. Then export the new host’s config to a host profile and you’re good to go.

If you don’t have Enterprise Plus, take a look at PowerCLI. It will take more legwork, but there are a ton of cmdlets available to capture networking, storage, and other service definitions from existing hosts which you can then apply elsewhere.
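As a rough sketch – the cmdlet names below are standard PowerCLI, but the host names and the choice of which settings to copy are illustrative – you might clone a reference host’s standard vSwitch and port group layout onto a new host like this:

```powershell
# Connect to vCenter first (prompts for credentials)
Connect-VIServer -Server vcenter.example.com

$source = Get-VMHost -Name esxi01.example.com   # reference host
$target = Get-VMHost -Name esxi02.example.com   # new host being built

# Re-create each standard vSwitch and its port groups on the target
foreach ($vswitch in Get-VirtualSwitch -VMHost $source -Standard) {
  $new = New-VirtualSwitch -VMHost $target -Name $vswitch.Name `
           -Nic $vswitch.Nic -Mtu $vswitch.Mtu
  foreach ($pg in Get-VirtualPortGroup -VirtualSwitch $vswitch) {
    New-VirtualPortGroup -VirtualSwitch $new -Name $pg.Name `
      -VLanId $pg.VLanId | Out-Null
  }
}
```

Storage and other services have analogous Get-/New- cmdlet pairs; the pattern is the same – read the definition from the existing host, replay it against the new one.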

Continue reading

The Goal

If you’ve been paying attention in the IT world at all in the last few years, you’ve heard of this thing called DevOps. You’ve probably also heard of The Phoenix Project, an excellent DevOps novel by Gene Kim and others. Phoenix builds upon the foundation of an earlier novel, The Goal: A Process of Ongoing Improvement, by Eli Goldratt, published in 1984. The Goal is a revolutionary novel that changed the manufacturing world but hasn’t quite had the same effect elsewhere. It’s important to understand history and those who came before us, so I decided to dive in and explore that history.

If you’re curious about what I thought of this book, I’ll save you some time – buy a copy right now and start reading. By teaching through the Socratic method, it makes high-level concepts relatable, giving us real-life examples of how those concepts work. Specifically, I read the 30th anniversary edition, which includes Standing on the Shoulders of Giants, some extra material that I think really matters. If you already have an older edition, it’s worth the $16 for this extra piece.

So what does a book about manufacturing have to do with DevOps? Nothing – and everything. Phoenix continually shows how lessons learned from the manufacturing world can help us in IT. On the other hand, IT is very different and blindly applying these lessons could actually be harmful. Thankfully, The Goal focuses on two primary components that guide us in applying our new knowledge. The novel is also the foundation of the Theory of Constraints. Let’s take a look at the two components first.

The Goal

The first component of The Goal is… The Goal. Yep, it is that simple. So, what is the goal? It’s universal – the goal of any company is to make money. It doesn’t matter what industry you’re in, that’s just common sense, right? Take a look at your current job and see if you agree. The Goal attacks this assumption and challenges us to view things differently. Eli, through the character of Jonah, defines the goal in the context of a manufacturing plant.

  • Increase throughput, defined as turning raw materials into cash
  • Decrease inventory, defined as all raw and processed materials that are not sold
  • Decrease operational expenses, defined as all costs of running the plant that aren’t inventory costs

These three fundamentals describe the goal in simple, easily measured terms. Throughput is taking what you consume and selling it – whether it’s metal turned into faucets and fixtures that are sold, or words and ideas turned into a blog post that is published. If the consumables lie around too long, like faucets in a warehouse or a blog post that’s perpetually in draft status, your inventory costs go up instead of down. Operational expenses vary, but your personnel and other operating costs need to trend downward. The book investigates these concepts in far more detail; together, they turn our assumptions on their head.

A Process of Ongoing Improvement

The second part of the story is about the process. Once you’ve come around to a new way of thinking, you don’t just suddenly fix everything. You have to change how you’re doing things to meet the goal, and each change brings you closer. You start to decrease the inventory that’s waiting around, and throughput goes up. Those initial changes have visible immediate effects, but they may also have hidden long-term effects. Inventory may be decreased enough to deal with the current backlog, but once the backlog is out of the way, can you maintain your throughput? If the inventory is too high or too low, new issues may arise. Hence, ongoing improvement.

This is addressed by measuring your throughput, inventory, and operational expenses. However, you need to measure with the goal in mind. Adhering to the previous metrics won’t suffice, as they aren’t aligned with the goal. By getting a better approximation of what is happening in a manufacturing plant, more information is available to drive the ongoing improvements.

Theory of Constraints

Together, these two sections combine to give us the Theory of Constraints. The theory stipulates that there are constraints in your plant and that your effort is best focused on the constraints. Finding these bottlenecks and addressing their limitations (exploiting the constraint) will go the furthest toward increasing your throughput and decreasing inventory and operational expenses. Everything else is subordinated to elevating these constraints. Then, you repeat the process – find a constraint, exploit it, subordinate everything else, elevate it. One additional key is to prevent inertia from becoming a constraint. Don’t do things because that’s how you do them – continue to challenge your assumptions and make whatever changes are required to increase throughput, decrease inventory, and decrease operational expenses.

Standing on the Shoulders of Giants

This bonus story in the 30th anniversary edition describes Toyota’s Lean production system (you may know it as Just-In-Time production) and its strengths and weaknesses. It’s very short, but most importantly, it documents the three stability requirements needed to implement Lean fully, and how unstable businesses can still benefit by leveraging certain parts of Lean. A great example is Hitachi Tool Engineering. HTE attempted for years to implement Lean without success, because it did not enjoy the stability Lean requires. Finally, by attempting to leverage only the appropriate parts of Lean, HTE grew its pre-tax profit ratio from 7.2% in 2002 to 21.9% in 2007. What company wouldn’t want to do that? Knowing when not to do something is just as important as knowing when to do it.

The Future

Obviously, The Goal struck a chord with me. I think it presents a fabulous theory on how to treat business, whether you’re in manufacturing or not. You’ll see some more posts from me in the future related to The Goal and how we can use the Theory of Constraints and the throughput/inventory/operational expenses in many ways, not just directly in our IT industry. I’ll present some of my own theories and attempt to prove them out by implementing them myself. I hope you take the time to read this book and that you will join me on this journey.

Migrating away from Puppet’s deprecated import feature

The import keyword in Puppet has been deprecated and will be removed in Puppet v4. That’s good to know, but what can you do about it if you’re using it? Let’s take a look at how it might be used. All directories below are relative to your environment directories, such as /etc/puppet/environments/production, unless a full path is given (starts with /).

Current Setup

Here’s what your site manifest might look like:

import 'nodes/*.pp'

When Puppet starts, it looks at all of the *.pp files in the nodes directory and loads them. Those files might look like this, one for each node or node-class:
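For illustration – the hostname and class names here are hypothetical – a file such as nodes/yumrepo01.pp might contain:

```puppet
# nodes/yumrepo01.pp - one node definition per file
node 'yumrepo01.example.com' {
  include ::base
  include ::yumrepo
}
```

The eventual migration path is that directory environments allow the manifest setting to point at a directory, in which case every *.pp file in it is loaded automatically – so per-node files like this can move out from under import largely unchanged.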

Continue reading

Entering #vDM30in30 late

Some of you may have seen #vDM30in30 on Twitter recently. It’s based on a 30-blogs-in-30-days challenge by Greg Ferro, intended to get people using blogs – both to encourage people to write more and to treat the blog as a form of social media. I think it’s a great idea. Social media isn’t just Twitter, or Facebook, or LinkedIn, etc. – one of those has some significant character restrictions – and writing always benefits from repetition.

The #vDM30in30 challenge was started to encourage people to write one blog post a day for 30 days. I’m taking it in a slightly different direction. I already write at least one post a week, so to encourage the blog as a form of social media, I’m doing an extra 30 posts in the month of December. They’ll all be in the vDM30in30 category, likely at more than one a day, but much shorter than my normal technical or opinion pieces.

I encourage anyone reading to take this challenge for yourself, in whatever version suits you. If you’d like to participate but don’t have your own blog, find me on Twitter as rnelson0 and I’d be glad to have you as a guest author on my blog. Enjoy!

On Mentoring: “Perfection is an illusion, its pursuit is a pathology”

A while back, my wife, Michelle Block, and I were talking about getting stuff done – actually done, not just part of the way done – when she said something that I think is really profound:

“Perfection is an illusion, its pursuit a pathology.”

I really love this statement. It’s very simple, yet full of meaning. I asked Michelle where this statement came from and she gave me a very good story to tell.

Dr. Michelle Block is an Associate Professor of Anatomy & Cell Biology at Indiana University and an expert in her own field. Michelle takes very seriously the need to foster future generations of scientists and is very proud to be able to mentor some of these future scientists. One of the most inspiring experiences in her own development was reading Rosalyn Yalow’s Nobel Prize Speech, and she hopes to be able to provide similar encouragement to her successors. With that in mind, Michelle had been speaking with a colleague about the best way to explain to the upcoming generation of scientists what is expected of them, what it takes to be a good scientist. Her colleague asked her, “What’s the difference between excellence and perfection?”

Continue reading

Deploying your SSH Authorized Key via Puppet

Update: I have since published a forge module, rnelson0-local_user, that can be used to distribute keys as well. If you are using keys with local users, I highly recommend using the forge module. If you are not managing the users directly (say, for domain-joined nodes), continue to use the solution presented below.

Today, let’s look at deploying SSH authorized keys via Puppet. An authorized key is a public key used for public key authentication (not to be confused with an SSH host key, the unique key that verifies a server is who it says it is). By attaching an authorized key to a user, any login attempt for that user that presents the corresponding private key will be authenticated successfully, giving you the ability to log in without a password. This is commonly used for automation, where no user is present to enter a password, or by a user with a private key to access systems without additional steps.
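Puppet ships with a built-in ssh_authorized_key resource type for exactly this purpose. A minimal sketch – the user name and key material below are placeholders:

```puppet
# Add a public key to ~deploy/.ssh/authorized_keys (placeholder user and key)
ssh_authorized_key { 'deploy@workstation':
  ensure => present,
  user   => 'deploy',
  type   => 'ssh-rsa',
  key    => 'AAAAB3NzaC1yc2EAAAADAQAB...', # key material only: no type prefix, no comment
}
```

The resource title conventionally matches the key’s comment field, and the key attribute takes only the base64 material, not the full authorized_keys line.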

Authorized keys are typically considered more secure than a password, but they rely on protecting the private key. If the private key is not secured, anyone who obtains it can impersonate the account. If a non-privileged user’s key is lost, only that user’s access and files are at immediate risk; an attacker would still need to escalate privileges to damage the system. If a privileged user’s key (no one reading this logs in as a privileged user, such as root, right? RIGHT?) or an automation account’s key is lost, the immediate risk is much higher. An attacker might gain access to the entire system or be able to attack other systems. You must absolutely secure private keys and ensure you follow the principle of least privilege for all users, especially automation accounts.

Let’s look at an example of how to use a properly secured authorized key. In past articles, we’ve built a yum repository and a build server. You may be logging into these servers frequently and transferring files between the two. Every time, you need to enter your passwords. That gets old, quickly. With an authorized key in place, you can ssh to both servers by presenting your private key – no password. If you copy the private key to the build server or create a new key there, you can scp files from the build server to the yumrepo the same way. This should make life a lot easier for you.

There are lots of ways to generate keys, depending on your OS and applications. My workflow is to use PuTTY on a Windows 7 laptop to connect to Linux VMs, then use the Linux OpenSSH client to ssh to other Linux VMs. I’ll cover generating and configuring keys with PuTTY and OpenSSH.
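On the OpenSSH side, the basic flow looks like the following sketch – the user and host names are placeholders:

```shell
# Generate a 4096-bit RSA key pair; you'll be prompted for a passphrase
# (use one for interactive keys; automation keys are often passphrase-less)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa

# Install the public key into ~/.ssh/authorized_keys on the remote server
ssh-copy-id user@yumrepo01.example.com

# Subsequent logins authenticate with the private key instead of a password
ssh user@yumrepo01.example.com
```

ssh-copy-id prompts for your password one last time to install the key; after that, the private key does the work.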

Continue reading