Linux OS Patching with Puppet Tasks

One of the biggest gaps in most IT security policies is a very basic practice: patching. Specific numbers vary, but most surveys show that a majority of breaches exploit unpatched vulnerabilities. Sadly, in 2018, automatic patching on servers is still out of the grasp of many, especially those running older OSes.

While there are a number of solutions out there from OS vendors (WSUS for Microsoft, Satellite for RHEL, etc.), I manage a number of OSes and the one commonality is that they are all managed by Puppet. A single solution with central reporting of success and failure sounds like a plan. I took a look at Puppet solutions and found a module called os_patching by Tony Green. I really like this module and what it has to offer, even though it doesn’t address all my concerns at this time. It shows a lot of promise and I suspect I will be working with Tony on some features I’d like to see in the future.

Currently, os_patching only supports Red Hat/Debian-based Linux distributions. Support is planned for Windows, and I know someone is looking at contributing to provide SuSE support. The module will collect information on patching that can be used for reporting, and patching is performed through a Task, either at the CLI or using the PE console’s Task pane.

Setup

Configuring your system to use the module is pretty easy. Add the module to your Puppetfile / .fixtures.yml, add a feature flag to your profile, and include os_patching behind the feature flag. Implement your tests and you’re good to go. Your only real decision is whether you default the feature flag to enabled or disabled. In my home network, I will enable it, but a production environment may want to disable it by default and enable it as an override through hiera. Because the fact collects data from the node, it will add a few seconds to each agent’s runtime, so be sure to include that in your calculation.
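
If you default the feature flag to disabled, enabling it for a node or group is a one-line hiera override. A minimal sketch, assuming the profile::base parameter shown later in this post (the file placement depends on your hierarchy):

# data/nodes/build03.nelson.va.yaml (illustrative layer)
profile::base::manage_os_patching: true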

Adding the module is pretty simple. Here are the Puppetfile / .fixtures.yml additions:

# Puppetfile
mod 'albatrossflavour/os_patching', '0.3.5'

# .fixtures.yml
fixtures:
  forge_modules:
    os_patching:
      repo: "albatrossflavour/os_patching"
      ref: "0.3.5"

Next, we need an update to our tests. I will be adding this to my profile::base, so I modify that spec file. Add a test for the default feature flag setting, and one for the non-default setting. Flip the to and not_to if you default the feature flag to disabled. If you run the tests now, you'll get a failure, which is expected since there is no supporting code in the class yet. (There is more to the spec file; I have only included the framework plus the new tests.)

require 'spec_helper'
describe 'profile::base', :type => :class do
  on_supported_os.each do |os, facts|
    let (:facts) {
      facts
    }

    context 'with defaults for all parameters' do
      it { is_expected.to contain_class('os_patching') }
    end

    context 'with manage_os_patching disabled' do
      let (:params) do
        {
          manage_os_patching: false,
        }
      end

      # Disabled feature flag
      it { is_expected.not_to contain_class('os_patching') }
    end
  end
end

Finally, add the feature flag and feature to profile::base (the additions are the $manage_os_patching parameter and the corresponding if block):

class profile::base (
  Hash    $sudo_confs = {},
  Boolean $manage_puppet_agent = true,
  Boolean $manage_firewall = true,
  Boolean $manage_syslog = true,
  Boolean $manage_os_patching = true,
) {
  if $manage_firewall {
    include profile::linuxfw
  }

  if $manage_puppet_agent {
    include puppet_agent
  }
  if $manage_syslog {
    include rsyslog::client
  }
  if $manage_os_patching {
    include os_patching
  }
  ...
}

Your tests will pass now. That’s all it takes! For any nodes where it is enabled, you will see a new fact and some scripts pushed down on the next run:

[rnelson0@build03 controlrepo:production]$ sudo puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Notice: /File[/opt/puppetlabs/puppet/cache/lib/facter/os_patching.rb]/ensure: defined content as '{md5}af52580c4d1fb188061e0c51593cf80f'
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for build03.nelson.va
Info: Applying configuration version '1535052836'
Notice: /Stage[main]/Os_patching/File[/etc/os_patching]/ensure: created
Info: /Stage[main]/Os_patching/File[/etc/os_patching]: Scheduling refresh of Exec[/usr/local/bin/os_patching_fact_generation.sh]
Notice: /Stage[main]/Os_patching/File[/usr/local/bin/os_patching_fact_generation.sh]/ensure: defined content as '{md5}af4ff2dd24111a4ff532504c806c0dde'
Info: /Stage[main]/Os_patching/File[/usr/local/bin/os_patching_fact_generation.sh]: Scheduling refresh of Exec[/usr/local/bin/os_patching_fact_generation.sh]
Notice: /Stage[main]/Os_patching/Exec[/usr/local/bin/os_patching_fact_generation.sh]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Os_patching/Cron[Cache patching data]/ensure: created
Notice: /Stage[main]/Os_patching/Cron[Cache patching data at reboot]/ensure: created
Notice: Applied catalog in 54.18 seconds

You can now examine a new fact, os_patching, which shows tons of information including the pending package updates, the number of packages, which ones are security patches, whether the node is blocked (explained in a bit), and whether a reboot is required:

[rnelson0@build03 controlrepo:production]$ sudo facter -p os_patching
{
  package_updates => [
    "acl.x86_64",
    "audit.x86_64",
    "audit-libs.x86_64",
    "audit-libs-python.x86_64",
    "augeas-devel.x86_64",
    "augeas-libs.x86_64",
    ...
  ],
  package_update_count => 300,
  security_package_updates => [
    "epel-release.noarch",
    "kexec-tools.x86_64",
    "libmspack.x86_64"
  ],
  security_package_update_count => 3,
  blocked => false,
  blocked_reasons => [],
  blackouts => {},
  pinned_packages => [],
  last_run => {},
  patch_window => "",
  reboots => {
    reboot_required => "unknown"
  }
}

Additional Configuration

There are a number of other settings you can configure if you’d like.

  • patch_window: a string descriptor used to “tag” a group of machines, i.e. Week3 or Group2
  • blackout_windows: a hash of datetime start/end dates during which updates are blocked
  • security_only: boolean, when enabled only the security_package_updates packages and dependencies are updated
  • reboot_override: boolean, overrides the task’s reboot flag (default: false)
  • dpkg_options/yum_options: a string of additional flags/options to dpkg or yum, respectively

You can set these in hiera. For instance, my global config has some blackout windows for the next few years:

os_patching::blackout_windows:
  'End of year 2018 change freeze':
    'start': '2018-12-15T00:00:00+1000'
    'end':   '2019-01-05T23:59:59+1000'
  'End of year 2019 change freeze':
    'start': '2019-12-15T00:00:00+1000'
    'end':   '2020-01-05T23:59:59+1000'
  'End of year 2020 change freeze':
    'start': '2020-12-15T00:00:00+1000'
    'end':   '2021-01-05T23:59:59+1000'
  'End of year 2021 change freeze':
    'start': '2021-12-15T00:00:00+1000'
    'end':   '2022-01-05T23:59:59+1000'
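
The other settings can be assigned the same way. A sketch with illustrative values, assuming the module exposes them as class parameters just like blackout_windows:

os_patching::patch_window: 'Week3'
os_patching::security_only: true
os_patching::reboot_override: false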

Patching Tasks

Once the module is installed and all of your agents have picked up the new config, they will start reporting their patch status. You can query nodes with outstanding patches using PQL. A search like inventory[certname] {facts.os_patching.package_update_count > 0 and facts.clientcert !~ 'puppet'} can find all your agents that have outstanding patches (excluding the master itself: kernel patches require reboots, and puppet will have a hard time talking to itself across a reboot). You can also select against a patch_window selection with and facts.os_patching.patch_window = "Week3" or similar. You can then provide that query to the command line task:

puppet task run os_patching::patch_server --query="inventory[certname] {facts.os_patching.package_update_count > 0 and facts.clientcert !~ 'puppet'}"

Or use the Console’s Task view to run the task against the same PQL selection.

Add any other parameters you want in the dialog/CLI args, like setting reboot to true, then run the task. An individual job will be created for each node, all run in parallel. If you are selecting too many nodes for simultaneous runs, use additional filters, like the aforementioned patch_window or other facts (EL6 vs EL7, Debian vs Red Hat), to narrow the node selection [I blew up my home lab, which couldn’t handle the CPU/IO load, when I ran it against all systems the first time, whoops!]. When the job is complete, you will get your status back for each node as a hash of status elements and the corresponding values, including return (success or failure), reboot, packages_updated, etc. You can extract the logs from the Console or pipe CLI output directly to jq to analyze as necessary.
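
For reference, a CLI run that combines task parameters with a narrower PQL query and pretty-prints the JSON output with jq might look like the sketch below. The parameter names follow the settings described above; treat them and the --format json flag as assumptions to verify against your module and PE versions.

# Sketch: security-only patching with reboots allowed, limited to the Week3 patch window
puppet task run os_patching::patch_server security_only=true reboot=true \
  --query="inventory[certname] { facts.os_patching.patch_window = 'Week3' and facts.os_patching.package_update_count > 0 }" \
  --format json | jq .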

Summary

Patching for many of us requires additional automation and reporting. The relatively new puppet module os_patching provides helpful auditing and compliance information alongside orchestration tasks for patching. Applying a little Puppet Query Language allows you to update the appropriate agents on your schedule, or to pull the compliance information for any reporting needs, always in the same format regardless of the (supported) OS. Currently, this is restricted to Red Hat/Debian-based Linux distributions, but there are plans to expand support to other OSes soon. Many thanks to Tony Green for his efforts in creating this module!

Using Puppet Enterprise 2018’s new backup/restore features

I was pretty excited when I read the new features in Puppet Enterprise 2018.1. There are a lot of cool new features and fixes, but the backup/restore feature stood out for me. Even with just 5 VMs at home, I don’t want to rock the boat when rebuilding my master by losing my CA or agent certs, much less with a lot more managed nodes at work, and all the little bootstrap requirements have changed since I started using PE in 2014. Figuring out how to get everything running myself would be possible, but it would take a while and be out of date in a few months anyway. Then there is everything in PuppetDB that I do not want to lose, like collected facts/resources and run reports.

Not coincidentally, I still had a single CentOS 6 VM around because it was my all-in-one puppet master, and migrating to CentOS 7 was not something I looked forward to due to the anticipated work it would require. With the release of this feature, I decided to get off my butt and do the migration. It still took over a month to make it happen, between other work, and I want to share my experience in the hope it saves someone else a bit of pain.

Create your upgrade outline

I want to summarize the plan at a really high level, then dive in a bit deeper. Keep in mind that I have a single all-in-one master using r10k and my plan does not address multi-master or split deployments. Both of those deployment models have significantly different upgrade paths, please be careful if you try and map this outline onto those models without adjusting. For the all-in-one master, it’s pretty simple:

  • Backup old master
  • Deploy a new master VM running EL7
  • Complete any bootstrapping that isn’t part of the backup
  • Install the same version of PE
  • Restore the old master’s backup onto the new master
  • Run puppet
  • Point agents at the new master

I will cover the backup/restore steps at the end, so the first step to cover is deploying a new master. This part sounds simple, but if Puppet is currently part of your provisioning process and you only have one master, you’ve got a catch-22 situation – new deployments must talk to puppet to complete without errors, and if you deploy a new puppet master using the same process, it will either fail to communicate with itself since PE is not installed, or it will talk to a PE installation that does not reflect your production environment. We need to make sure that we have the ability to provision without puppet, or be prepared for some manual effort in the deploy. With a single master, manual effort isn’t that burdensome, but it can still reduce accuracy, which is why I prefer a modified automated provisioning workflow.

A lot of bootstrapping – specifically hiera and r10k/code manager – should be handled by the restore. There were just a few things I needed to do:

  • Run ssh-keygen/install an existing key and attach that key to the git system. You can avoid this by managing the ssh private/public keys via file resources, but you will not be able to pull new code until puppet processes that resource.
  • SSH to your git server and accept the key. You can avoid this with the sshkey resource, with the same restriction.
  • Check your VMs default iptables/selinux posture. I suggest managing security policy via puppet, which should prevent remote agents from connecting before the first puppet run, but it’s also possible to prevent the master from communicating with itself with the wrong default policy.
  • Check that the hostname matches your expectations. All of /etc/hosts, /etc/hostname, and /etc/sysconfig/network should list the short name and FQDN properly, and hostname and hostname -f should return those values (a quick check is sketched below). /etc/resolv.conf may also need the search domain. Fix any issues before installing PE, as certs are generated during install, and having the wrong hostname can cause cascading faults best addressed by starting over.
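
A quick pre-install sanity check, with illustrative hostnames, might look like this:

hostname          # expect the short name, e.g. puppet
hostname -f       # expect the FQDN, e.g. puppet.example.com
grep puppet /etc/hosts /etc/hostname /etc/sysconfig/network
grep search /etc/resolv.conf    # confirm the search domain if you rely on short names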

The restore should get the rest from the PE side of things. If your provisioning automation performs other work that you had to skip, make sure you address it now, too.

Installing PE is probably the one manual step you cannot avoid. You can go to https://support.puppet.com and find links to current and past PE versions. Make sure you get the EL7 edition and not the EL6 edition. I did not check with Support, but I assume you must restore onto the same version you backed up from; I would not risk even a patch release difference.

Skipping past the restore (covered below) brings us to running the agent: a simple puppet agent -t on the master, or waiting up to 30 minutes for a scheduled run to happen on its own.

The final step may not apply to your situation. In addition to refreshing the OS of the master, I switched to a new hostname. If you’re dropping your new master on top of the existing one’s hostname/IP, you can skip this step. I forked a new branch from production called mastermigration. The only change in this branch is to set the server value in /etc/puppetlabs/puppet/puppet.conf. There are a number of ways to do this, I went with a few ini_setting resources and a flag manage_puppet_conf in my profile::base::linux. The value should only be in one of the sections main or agent, so I ensured it is in main and absent elsewhere:

  if $manage_puppet_conf {
    # These settings are very useful during migration but are not needed most of the time
    ini_setting { 'puppet.conf main server':
      ensure => present,
      path => '/etc/puppetlabs/puppet/puppet.conf',
      section => 'main',
      setting => 'server',
      value => 'puppet.example.com',
    }
    ini_setting { 'puppet.conf agent server':
      ensure => absent,
      path => '/etc/puppetlabs/puppet/puppet.conf',
      section => 'agent',
      setting => 'server',
    }
  }

During the migration, I can just set profile::base::linux::manage_puppet_conf: true in hiera for the appropriate hosts, or globally, and they’ll point themselves at the new master. Later, I can set it back to false if I don’t want to continue managing the setting. There is no reason you cannot leave the flag enabled, but by leaving it false normally you ensure that changing the server name here does not take effect unless you purposely flip the flag; you could also parameterize the server name.
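
In hiera that override is a single key; for example, at a global or group layer (file placement is up to your hierarchy):

# common.yaml (illustrative layer): point agents at the new master
profile::base::linux::manage_puppet_conf: true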

Now let’s examine the new feature that makes it go.

Backups and Restores

Puppet’s documentation on the backup/restore feature provides lots of detail. It will capture the CA and certs, all your currently deployed code, your PuppetDB contents including facts, and almost all of your PE config. About the only thing missing are some gems, which you should hopefully be managing and installing with puppet anyway.

Using the new feature is pretty simple: puppet-backup create or puppet-backup restore <filename> will suffice for this effort. There are a few options for more fine-grained control, such as backup/restore of individual scopes with --scope=<scopes>[,<additionalscopes>...], e.g. --scope=certs.
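
For example (the tarball name here is illustrative; use the file the create command actually produces):

# full backup with the default scope
sudo puppet-backup create

# full restore from the resulting tarball
sudo puppet-backup restore pe_backup-2018-08-23.tgz

# restore only a single scope, e.g. the CA and certs
sudo puppet-backup restore --scope=certs pe_backup-2018-08-23.tgz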


The backup will only back up the current PE edition’s files, so if you still have /etc/puppet on your old master from your PE 3 days, that will not be part of the backup. However, files in directories it does back up, like /etc/puppetlabs/puppet/puppet.conf.rpmsave, will persist. This will help reduce cruft, but not eliminate it. You will still need to police on-disk content. In particular, if you accidentally placed a large file in /etc/puppetlabs, say the PE install tarball, it will end up in your backup and can inflate the size a bit. If you feel the backup is exceptionally large, you may want to search for large files in that path.

The restore docs also specify two commands to run after a restore when Code Manager is used. If you use CM, make sure not to forget this step:

puppet access login
puppet code deploy --all --wait 

The backup and restore processes are mostly time-dependent on the size of your PuppetDB. With ~120 agents and 14 days of reports, it took less than 10 minutes for either process and generated a ~1G tarball. Larger environments can expect the master to be offline for a bit longer if they want to retain their full history.

Lab it up

The backup/restore process is great, but it’s new, and some of us have very ancient systems laying around. I highly recommend testing this in the lab. My test looked like this:

  • Clone the production master to a VM on another hostname/IP
  • Run puppet-backup create
  • Fully uninstall PE (sudo /opt/puppetlabs/bin/puppet-enterprise-uninstaller -p -d -y)
  • Remove any remaining directories with puppet in them, excepting the PE 2018 install files, to ensure all cruft is gone
  • Disable and uninstall any r10k webhook or puppet-related services that aren’t provided by PE itself.
  • Reboot
  • Bootstrap (from above)
  • Install PE (sudo /opt/puppetlabs/bin/puppet-enterprise-installer) only providing an admin password for the console
  • Run puppet-backup restore <backup file>
  • Run puppet agent -t
  • Make sure at least one agent can check in with puppet agent -t --server=<lab hostname> (clone an agent too if need be)
  • Reboot
  • Make sure the master and agent can still check in, Console works, etc.
  • If possible, test any systems that use puppet to make sure they work with the new master
  • Identify any missing components/errors and repeat the process until none are observed

I mentioned that I used PE3. My master had been upgraded all the way from version 3.7 to 2018.1.2. I’m glad I tested this, because there were some unexpected database settings that the restore choked on. I had to engage Puppet Support who provided the necessary commands to update the database so I could get a useful backup. This also allowed me to identify all of my bootstrap items and of course, gain familiarity and confidence with the process.

This became really important for me because, during my production migration, I ran into a bug in my provisioning system where the symptom presented itself through Puppet. Because I was very practiced with the backup/restore process, I was able to quickly determine PE was NOT the problem and correctly identify the faulty system. Though it took about 6 hours to do my “very quick” migration, only about an hour of that was actually spent on the Puppet components.

I also found a few managed files on the master where the code presumed the directory structure would already be there, which it turns out was not the case. I must have manually created some directories 4 years ago. I think the most common issues you would find at this point are dependencies and ordering, but there may be others. Either fix the code now or, if it would negatively affect the production server, prep a branch for merging just prior to the migration, with the plan to revert it if you roll back.

I strongly encourage running through the process a few times and building the most complete checklist you can before moving on to production.

Putting it together

With everything I learned in the lab, my final outline looked like this:

  • Backup old master, export to another location
  • Deploy a new master VM running EL7 using an alternative workflow
  • Run ssh-keygen/install an existing key and attach that key to the git system
  • SSH to the git server and accept the key
  • Verify your VMs default iptables/selinux posture; disable during bootstrap if required
  • Validate the hostname is correct
  • Install PE
  • Restore the backup
  • [Optional] Merge any code required for the new server; run r10k/CM to ensure it’s in place on the new master
  • Run puppet
  • Point agents at the new master

Yours may look slightly different. Please, spend the time in the lab to practice and identify any missing steps, it’s well worth it.

Summary

Refreshing any system of significant age is always possible, but often fraught with manual processes that are prone to error. Puppet Enterprise 2018.1 delivered a new backup/restore feature that automates much of this process. We put together a rough outline, refined it in the lab, and then used it to perform the migration in production with high confidence, accounting for any components the backup did not include. I really appreciate this new feature and I look forward to refinements in the future. I hope that soon enough, migrations will be as simple and effective as in-place upgrades.

Contributing to a Political Campaign as a Nerd

As I promised in my previous politics article, I will continue not to advocate for specific politics and remain non-partisan on my blog. I do encourage everyone, regardless of beliefs or party, to participate in politics, because it affects you whether you participate or not. Participation in our democracy can only improve it.

Over the past year or two, I have come to feel far more strongly about my politics. I live in the United States, and it’s impossible to pay attention to the news and not feel some kind of way about our Republic. This year, I decided that I needed to contribute more directly, not just passively partake in politics. If you already follow me on twitter, you know that I wear my politics on my sleeve. Like many readers, I regularly vote. Like many readers, I donate to campaigns. That’s not enough for me anymore. So I decided to do more, and reached out to some local campaigns to find out what I could offer.

I will admit that this is scary. I have spent almost 20 years working on a relatively narrow field of expertise within IT. I had no experience with politics. Going from 20 years of experience to 0 is intimidating – but if I did it, so can you. If you read this and want to contribute, I want you to know that you will do just fine. As divisive and loud and argumentative and nasty as politics seems on the evening news, every group I have worked with over the past year has been welcoming, very graceful about my lack of knowledge and mistakes, and very accommodating to how much time I have available. Please, don’t let your fear keep you out. Reach out if you have any questions!

The expertise we have is sorely needed, though, especially in political campaigns. You will quickly find out that most people involved in political campaigns are not computer experts in any way. Sure, they’re computer savvy as most people are nowadays, but there are significant gaps in that knowledge that need to be filled. All you need to do is read the news and you will quickly hear about campaigns that are hacked, crowdsourced analysis of what’s on a politician’s phone, and overwhelming numbers of Twitter and Facebook shenanigans. Your help is needed and will be welcomed.

I joined a campaign and I had no idea what I was getting into. I did not find many others in tech who have shared their experiences joining their first campaign. I hope this article helps fill this gap a little bit – and if any readers are in the same situation, I would encourage you to blog about it as well! Alright, let’s get volunteering!

As I hit publish on this, there are 85 days until Election Day in the US. It is NOT too late to volunteer! Your assistance will be welcomed up until the final moments on Election Day, and there are always future elections to prepare for.

What to expect

When you click Sign Up on a campaign web site, you’ll be offered some “normal” work – canvassing, phone banking, putting a sign in your yard, etc. All things technical are notably missing from the list. To offer your technical expertise, you will have to reach out to the campaign directly. Many campaigns or candidates have a listed phone number on their website. If not, try looking at the county or state party’s website for a phone number, and inform them that you would like to get in contact with the campaign.

You will have a chance at some point to talk to the candidate or a campaign manager and make sure that’s who you want to work for. Treat it like a job interview! The campaign will ask what you can do, and you get to ask the campaign about what they will do if elected. Be honest! When I first talked to my campaign, I explained that I had 20 years of IT experience but no campaign experience. I was willing to take on things unknown, but I wanted to make sure they knew it would be new to me. I found out they had an experienced webmaster who would be providing me assistance if I joined. I also asked a lot of pointed policy questions to ensure that I would be happy if this candidate was elected. Get your questions answered and let the campaign know whether you want to join and what you can contribute.

General tech tasks

There are so many areas you can contribute to the technology side of a campaign, regardless of where in technology you work. Here’s a very short, very incomplete list of items you can help with:

  • Setting up a free Slack and teaching people how to use it (EVERYONE uses Slack nowadays!)
  • Setting up a website and analytics
  • Configuring multi-factor authentication on all services
  • Setting up apps on phones and tablets
  • Answering questions about how to use a computer, application, or service, even if you’re just functioning as Google as a Service for really busy people
  • Providing a sounding board for anything technical, including how technical people and companies may respond to something

Depending on what your expertise is, you may be able to offer some very specific needs. Surely, what you know can be applied to a campaign, though I may not be able to tell you how. A lot of my expertise is in information security. Here are some examples of InfoSec advice you can provide:

  • Explain threat models. Make sure you know how they apply, too; the threat model of running for US Senator is much different than that of running for a local council seat. Everyone can benefit from making sure they don’t expose their financial details to the world, but fewer are worried about specific attacks by enemy nation-states.
  • Ensure services are registered to a well protected account that belongs to the campaign instead of a random gmail account that belongs to someone who may leave the campaign.
  • Make sure MFA is enabled everywhere possible.
  • Restrict access to services to those who need it, at the lowest permission level necessary

Remember that you are advising a campaign, not running your own business, so you will probably “lose” some arguments in areas where you are objectively the expert, and that’s okay. Make sure everyone acknowledges the trade-offs being made, and do your best to minimize the potential fallout. If there’s a realistic chance of failure, prepare remediation plans so that you are ready if they are needed – the same kinds of thing you do at work to cover your company’s butt.

Be aware of how much you can contribute. If you can only spend 2 hours a week with the campaign at odd times of the day, maybe you are not the best person to run their web site. That’s okay; just make the campaign aware of how you want to help and what your limits are, and surely they can find a way to make it fit. If you can spend more hours, then I encourage you to take on more significant tasks. If your circumstances change, just let the campaign know!

There are tons of small things like this that you can contribute. Don’t worry if you can’t think of something now, if you reach out to the campaign I’m sure you can come up with something together!

Larger projects

In addition to these general tasks, you may be able to contribute to higher level projects to help the campaign. If you are a data scientist, a campaign needs you! Everyone needs to know which voters to target, and they’re hopefully looking for more ethical assistance than we have seen campaigns pursue in the past. Many campaigns can determine what kinds of voters they want to target, but they may lack the skills to find those voters within the mass of voter information available. Those who are great with analytics can help get data from the web sites to the voter analysis teams. Social media experts can help leverage Twitter, Facebook, Instagram, and other services to get messaging out effectively. Online advertising needs your expertise in marketing and advertising. Larger campaigns may need custom applications like HillaryBnB (an AirBnB-style app for canvassers).

Again, I had no experience with campaigns so these are just a few of the efforts I’ve observed recently, it’s a very non-comprehensive list of options. Each campaign’s needs are different, so I suggest checking with the campaign to see what is needed, rather than trying to offer specific projects.

Tackling the unknown

Though you are volunteering because you have expertise, sometimes what a campaign needs does not line up exactly. It’s a good thing we are an industry that is constantly learning! Lean on your existing expertise to get going.

I decided to help out a campaign with Google AdWords, as the lack of such a campaign was identified when I joined. Prior to June 1st, I had never used AdWords or done anything with advertising, online or otherwise. Yeah, it was intimidating. But I believe in my candidate, so I tackled it like any other tech I’ve learned. I found some technical articles about how AdWords works and tips for novices, scrounged up some YouTube videos so I could see it in action, and then set up an account and got to work. After almost 3 weeks, I am starting to figure it out, and the campaign is benefiting from my efforts. Find something and dig in. You can grow your technical skills and help advocate for your politics at the same time!

While I will not pretend to be authoritative on AdWords, I want to share some things I learned, beyond the simple mechanics that you can learn through the documentation and tutorials:

  • Create the AdWords account with a central campaign account. Add additional administrators with campaign-specific emails, rather than personal emails. This makes it more difficult for someone to leave the campaign and take down advertising.
  • There’s a pretty decent iPhone app for AdWords, and it gives you some views not available on the web site, but you cannot edit very much in it. I am sure there is one for Android as well.
  • Google will provide a $100 coupon after you spend your first $25. It will be sent in an email to the account owner’s address, and it’s not automatic; you need to apply the code.
  • Google will also send an email advertising a free review of your account. I would wait a few weeks before calling, so that the specialists can see some data from your campaigns.
  • Impressions (someone seeing your ad) are free. Clicks (when someone actually clicks on them) are the only thing that costs you money.
  • Campaigns are made up of Ad Groups. Each ad group can advertise a different set of text and point to a different page. You can add as many ad groups to a single campaign as you like, but budget is allocated at the campaign level. You need to balance the number of campaigns and ad groups, and keep balancing them as voters’ interests change.
  • Each Campaign can also be targeted to different locations. You can limit some AdWords Campaigns to the political campaign’s region (by district for federal offices; by using zip codes for state and local offices) to focus your advertising, such as issues-based campaigns. Others may be made open to a wider area, maybe even the whole country, such as donation campaigns.
  • Each Ad Group can be made up of keywords which receive mostly-opaque scoring. The better the score, the better the placement. Keywords that do not result in Impressions receive a lower score and can drag down the score of the entire Ad Group. Disable keywords that do not work, or replace them with more specific and helpful keywords, to keep the scoring up. All the scoring is Google magic, and the effectiveness of this will vary quite a bit for every organization. Keep an eye on it.
  • Keywords are words or phrases that you want your ad matched to. You can also add Exclusions. Combine this with Search Terms results, which show the actual search someone used and the keyword or category it matched, to filter out Impressions/Clicks that are not helpful to your campaign. I have seen some really ridiculous search terms, including the amazing “imagen you are an environmentalist giving a speech environmental due to population growth in the western united states”, which matched the keyword environment. Whoops, way too generic; it was replaced with a more specific phrase. Another was a search including the name of another candidate in an entirely different state, and that one click ate the entire campaign budget for the day. Keeping up with Exclusions can save your campaign a lot of money.
  • Google watches monthly trends to determine when best to spend money. Your daily budget is better thought of as the average daily spend over a 30 day period. For example, Google may determine that you won’t get much out of ads on Saturday and spend close to $0, but that the 2nd Wednesday of the month gets the most impact, and spend far more than your daily budget that day. You will only ever be charged 2x your specified budget, even if Google “spends” 2.3x your budget (I have never observed them going past 2.0x).
  • Almost everything you do with AdWords can be changed on the fly. However, there are two things to keep in mind:
    • Any new ad or keywords (and you cannot edit ads/keywords, you actually make a new one to replace the existing one) must be approved. It can take up to 24 hours to approve. You can add ads/keywords and disable them, then enable them when needed, to ensure they are approved prior to when they are needed.
    • Significant budget changes may flag a fraud alert. If you have a significant event-related campaign coming up, set it up at least 3 days in advance, as it can take 3 days to resolve suspected fraud. If you are spending $10 a day and want to increase that to $300 for a weekend event and your account gets flagged on Friday, it may be Monday before it is unfrozen and your event will be over.
  • You won’t find many AdWords tutorials that speak to political campaigns. It’s strongly associated with businesses. A few articles and charts mentioned Social Advocacy, which is probably the closest, but…
  • Success is difficult to measure. A business may track how many people click versus how many people order something. A donation campaign can track how many people donate, but an issues or awareness campaign cannot correlate visitors to the site with votes in a primary or general election. Conversion rates on their own won’t tell you much.
    • For many candidates, awareness itself is the goal. Many voters do not fill out the whole ballot and only check non-federal boxes if they recognize the name. Responsive Ads (as opposed to Text Ads) are fairly unobtrusive but can display small graphics. Logos are very helpful to start creating brand awareness.
    • Run at least two Ads for each campaign, much like you would do A/B testing at work. Review regularly and tweak the ads over time.
  • Advertising is not an island. Coordinate with the Social Media team and the event planners. If your candidate is going to be at a local festival or state fair in a few weeks, add some keywords for the event. If Social Media is blitzing on a policy, make sure common keywords will drive voters to your candidate. When things happen in the news that affect your voters, make sure their searches will bring them to your candidate. Likewise, you can review search terms people use and inform the other teams that these are some of the things voters are searching for and make sure the candidate speaks to those concerns.
    • This can feel very ghoulish or morbid. Much of news that drives people to politicians is going to be of a negative nature. We generally don’t call our elected officials when things are looking up. You WILL probably have to capitalize on an event that resulted in harm or death. In my case, unfortunately, there was a school shooting near the district. Ugh. But voters do want to know their candidate’s policies on subjects like school shootings. Be responsible and principled and above all, caring. Do not let the need to respond compromise your integrity or the candidate’s.
  • Your candidate’s party has other candidates running. Reach out to them for assistance, for ideas, for additional eyes on problems. You can get contact info from your county/state party’s offices, usually.
  • AdWords includes a large number of reports. Working in IT, our tendency is to encourage others to run reports themselves, but everyone on a political campaign is likely already spending all the time they can on their areas of expertise. It can be a huge help if you create/tweak reports for others, and schedule them to email the requester regularly.

Summary

Just because many of us make technology a huge part of our lives does not mean we are one dimensional. If you feel inspired by politics, don’t hide it; become active. I’ve discussed what you might expect if you join a political campaign, some of the work and expertise technologists can offer campaigns, and my experience in joining a campaign. Whether you contribute a few hours a month or hours every day, you will be a vital part of your chosen campaign. That’s awesome! Participation is what makes our Republic, and most democratic governments, so strong.

If you have any questions about volunteering, whether it’s technical or about the experience or something else, please reach out. You can drop a comment here, or @rnelson0 on twitter.

Enjoy, and thank you!