This past week I wrote an opinion piece on the InfoSec community, which included some tips on using social media. I’ve distilled that very long section to a bullet list and added a few items.
- Investigate your company’s social media policies and make sure you comply with them.
- Seek out the proper audience.
- Facebook – Keeping in contact with friends and family, and sharing all of your information with the world
- Twitter – Work communities
- Blogs – Great for introducing yourself to the world and sharing what you have learned
- Google+ – Overlaps with the above, but less popular than the others. Future is in doubt
- Get control. Understand the security/privacy posture of your chosen platform.
- Listen first.
- Share only what you want.
- Check with your spouse and family before sharing info about them!
- Find dissenting voices; don’t let your feed become an echo chamber.
- Respect people.
- You’re going to be wrong sometimes; accept it gracefully.
- Make sure your contributions have meaning. Focus on creating novel, useful content.
- Recognize others and promote their content.
- Retweets, favorites, likes, +1’s, etc. all mean different things. Use the right one.
- Make time for real life.
- Have fun!
While attending CPX 2014, I had a mini-epiphany. This Twitter thread got me thinking, “Why is CPX so much different than VMworld?” There’s an obvious size difference – 1600 attendees vs 28,000 – which leads to fewer sessions and smaller parties, but that’s a given. “Why is the InfoSec community different than the Virtualization community?” This is the real concern: the cultural differences between the two communities that have the most overlap with my job responsibilities and personal interests. One notable difference is that in InfoSec, there aren’t many well-known practitioners of security, though there are heroes and rockstars. It also seems to be a less vocal community, and when it does speak, it’s in generalities and news, such as 5 Common Attack Vectors or Who Was Hacked This Weekend. In Virtualization, there’s a lot of public recognition for people, even on the niche topics, and the community gets down and dirty and shares very practical information in addition to higher-level concepts. So, why this startling difference?
Security Practitioners can be insular
Many of you reading this probably first visited this site for virtualization content – which makes sense, as my first posts were on PowerCLI and Auto Deploy. As such, you’re probably familiar with the drill for conferences: get caught up on your timeline by 7am, then prepare for it to be blown up all day long. Check out the feeds for Storage Field Day 5 (#SFD5), the OpenStack Summit (#openstacksummit), and of course, VMworld (#vmworld, #vmworld2013). Dozens, sometimes hundreds, tweet about each keynote, allowing those not attending the pleasure of knowing what’s going on in near-real time. You can sometimes even convince an attendee to ask your question of the presenter! This extends past the keynotes, which are sometimes streamed, to the individual sessions, which are frequently not streamed and sometimes never recorded or put online. Even if you attend, it’s still interesting to read, because inevitably another attendee caught something you missed or saw it differently, giving you additional insight (who else learned from Twitter that Cisco wasn’t on the NSX announcement slide at VMworld 2013?). These interactions create a lot of content ancillary to, but just as important as, the conference agenda itself.
Welcome back! In our 201 class, we installed r10k, but we still haven’t used it. There are two tasks we need to complete. First, the existing repo is incompatible with r10k’s dynamic management of modules, so we’ll convert its contents to the proper format. Once that is done, we can deploy dynamic environments using r10k.
Convert existing repo
Check out a clone of the existing puppet repo, rnelson0/puppet-tutorial. As mentioned previously, you can do this on the puppet master, as I will do, or you can perform it on another machine. If you’re on the master, you want to clone the repo into a different directory. After cloning it, check out a new branch called production:
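That initial setup might look like the following. This is a sketch, not a prescription: the GitHub URL is an assumption based on the repo name, and the target directory name is a placeholder, so substitute your own fork or local path as needed:

```shell
# Clone the repo into a separate directory so it doesn't collide with
# the copy the puppet master is already serving from.
git clone https://github.com/rnelson0/puppet-tutorial.git puppet-tutorial-r10k
cd puppet-tutorial-r10k

# Create and switch to the new production branch.
git checkout -b production
```

The `-b` flag creates the branch and checks it out in one step, equivalent to `git branch production` followed by `git checkout production`.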
I know you’re probably anxious to get started with managing your infrastructure, but we’re going to stay distracted by Git for a little longer. In the 100 series, we saw some examples of how to migrate your manifests and modules into Git and how to make changes to your manifests through branches. The setup is a little primitive, but acceptable for a lab – everything is either done by root or involves pushing changes as a user and pulling them as root, and changes are tested in production. I’d like to introduce you to a tool called r10k that will help us create dynamic branches for testing and decouple our workflow from direct access to the puppet master. In this 201 class, we’ll work on the first half by migrating our existing repo structure into r10k.
Review and Setup
If we review the puppet-tutorial repo’s master branch, we have a standard directory layout that you should be somewhat familiar with now:
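For reference, a typical layout for a repo like this one looks roughly like the following. The specific module name and files here are assumptions for illustration; your repo’s contents may differ:

```
puppet-tutorial/
├── manifests/
│   └── site.pp        # node definitions
└── modules/
    └── base/          # a module like the one started in the 100 series
        └── manifests/
            └── init.pp
```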
Last week, I attended Check Point Experience 2014 (CPX2014) in Washington, D.C. Here are some quick highlights from the conference:
- There were around 1400 attendees, up from 650 a mere two years ago.
- Security people cannot properly capitalize VMware either.
- They also use ‘on-premise’ and make people twitch.
- There is some conflation between orchestration and automation, and even confusion on what constitutes one or the other.
- Foreign language translations can be fun! This isn’t a slight against the speakers (I certainly cannot speak their language!), I just think it’s healthy to laugh about these things, especially when the correct word is obvious and the meaning stays intact. If we weren’t always so uptight about things…
There were two more significant lessons I learned at CPX 2014, however.
The first is that Check Point has a lot of products that make up what they are calling Software Defined Protection. It’s a neat idea, though some of the products are not GA and hence not usable at this time, leaving the definition somewhat nebulous as far as real-world examples go. However, it does define enforcement, control, and management layers (planes) and lays out products that work at each layer, plus pending integration with other tools and standards (a VMware-compatible virtual firewall, REST APIs, etc). Taken together, SDP has the potential to affect design and implementation, with an end result of not just stronger security policies, but a shorter gap between malware creation and prevention.
In Puppet and Git 101, we looked at how to add our existing puppet code to our repo. We’re going to take a quick look at how to create a branch, add some code, commit it, and push it to our repo.
Create a Branch
For lack of something significant to do right now, we’ll add a notify resource to the node definition for puppet.nelson.va. To do so, we will check out a new branch called, appropriately, notify. You can call your branches whatever you want; I suggest you simply be consistent in your naming scheme. At work, I use a combination of a ticket number and a one- or two-word description of the feature, separated by hyphens. Normally a branch is going to be short-lived and only exist locally (we’re going to make an exception to that for demo purposes), so the name would be moot, but it’s still a good habit to be in.
[root@puppet puppet]# git branch
* master
[root@puppet puppet]# git checkout -b notify
Switched to a new branch 'notify'
[root@puppet puppet]# git branch
  master
* notify
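Once the notify resource has been added to the node definition, the rest of the workflow is staging, committing, and pushing the change. This is a hedged sketch of those steps (the file path and commit message are placeholders, assuming your node definitions live in manifests/site.pp):

```shell
# Stage the edited manifest and record the change with a descriptive message.
git add manifests/site.pp
git commit -m "Add a notify to the puppet.nelson.va node definition"

# Publish the branch to the remote. Normally a branch like this stays
# local and short-lived, but we push it here for demo purposes.
git push origin notify
```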
Now that we’ve set up a puppet master and puppetized template, created a sample manifest, and started creating our own module, it’s time to take a few moments to talk about using Puppet with a version control system (VCS). This article is mainly for those new to VCS in general or new to Git; those very familiar will want to skim or skip this article entirely.
So far, we have only added and removed a few lines in a couple of files, and we’ve treated it as such. But it’s so much more. Writing code that represents an infrastructure state and using software to implement it is the root of two important IT movements: DevOps and the Software Defined Data Center (SDDC). You write code, puppet creates the infrastructure according to your instructions. Need something changed? Update your code, puppet takes care of the rest. What if you mess up? That’s where version control comes into play.
Version control, among other benefits, gives us the option to look at our code at points in time and to track changes over time, usually with some level of audit detail. If I make a change today and everything runs fine for a few days before blowing up, I can use version control to track the changes made to see if someone else made a change in the interval or perhaps go back to the version prior to my change. Without version control, you have no functional ability to audit your changes and revert the state of your code to a particular point in time.
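As a concrete sketch of what that audit trail looks like with Git (the exact history shown will be whatever your repo contains):

```shell
# Show the most recent commits, one line each.
git log --oneline -5

# See exactly what the most recent commit changed.
git diff HEAD~1 HEAD

# Back out the most recent commit with a new commit, preserving history
# rather than rewriting it.
git revert --no-edit HEAD
```

Note that `git revert` undoes a change by adding a new commit, so the audit trail stays intact; both the mistake and its correction remain visible in the log.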
There are a number of different version control systems that you can use. Subversion has been a popular VCS, though it has some long-standing limitations and has been losing favor for a while. Git is a newer distributed version control system (DVCS) that has gained massive popularity by addressing some of the limitations of non-distributed VCSes and encouraging public development via GitHub.com and other cloud DVCS providers. We’re going to focus on Git due to its popularity, the plethora of examples of Puppet + Git available on the internet, and the ability to leverage GitHub.