Creating a Puppet ERB Template

Recently, we looked at converting a module to be hiera friendly. Another task you may have to look forward to is tracking configuration files for different hosts providing the same service. You could keep a config for each node, network, environment, etc., all of which need to be updated when some common element changes. Or you could use a single Puppet template, populated with node-specific elements from node facts and puppet variables. Each node receives a personalized copy, and any change to a common element is reflected across the board immediately.

As an example, I run some mediawiki servers at work. Each one points to a different database but is otherwise very similar. The search engine is SphinxSearch and it relies on the Sphinx config file /etc/sphinx/sphinx.conf. The config includes the database connection information, which varies from device to device, and a number of other settings standardized across the wikis (minimum search term length, wildcards, and other search settings). Keeping the database connection information accurate across three wikis would normally require three config files. Let’s simplify that with a template.

Puppet templates are written in ERB, a templating language that is part of the Ruby standard library. ERB commands are interpolated into the config where needed, and puppet feeds facts and variables to ERB, which determines what values to populate the config with. We have a few good sources of information on templates: the Ruby docs, the Puppet Labs Using Puppet Templates article, and the Learning Puppet chapter on Templates. I’ll pick out some highlights; reference those sources as needed as we work on our template.
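ERB needs nothing more than Ruby to render. Here’s a minimal sketch in plain Ruby, outside of puppet entirely; the variable and value are just an illustration:

require 'erb'

@sql_db  = 'wikidb'                      # stands in for a puppet-provided variable
template = 'sql_db = <%= @sql_db %>'     # a one-line ERB template
puts ERB.new(template).result(binding)   # prints: sql_db = wikidb

Puppet does exactly this for us at catalog compilation time, binding facts and variables into the template’s scope.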

Original Config File

Before we do anything else, let’s look at the original config file that belongs at /etc/sphinx/sphinx.conf. This file was based on the sample provided by the SphinxSearch extension. The full content is on github. The part we care about is the top block; the rest of the config file will be the same for every wiki that uses SphinxSearch:

source src_wiki_main
{
    # data source
    type            = mysql
    sql_host        = 192.168.1.15
    sql_user        = wikiuser
    sql_pass        = password
    sql_db          = wikidb

    # pre-query, executed before the main fetch query
    sql_query_pre   = SET NAMES utf8

    # main document fetch query - change the table names if you are using a prefix
    sql_query = SELECT page_id, page_title, page_namespace, page_is_redirect, old_id, old_text FROM page, revision, text WHERE rev_id=page_latest AND old_id=rev_text_id

    # attribute columns
    sql_attr_uint   = page_namespace
    sql_attr_uint   = page_is_redirect
    sql_attr_uint   = old_id

    # uncomment next line to collect all category ids for a category filter
    #sql_attr_multi  = uint category from query; SELECT cl_from, page_id AS category FROM categorylinks, page WHERE page_title=cl_to AND page_namespace=14

    # optional - used by command-line search utility to display document information
    sql_query_info  = SELECT page_title, page_namespace FROM page WHERE page_id=$id
}

Lines 17-21 of our gist are the variable part of our config, where the sql_* settings for a data source are specified. The first three attributes may be shared among multiple nodes, but the sql_db attribute is unique to each one. You don’t want the wiki serving up the infosecdb to present search results from wikidb. Even though some of those attributes are shared, we’ll make them all variables in the template for maximum flexibility; that way you won’t have to revisit the template every time an attribute stops being shared.

Template Syntax

An ERB template mixes plain text with ruby code. The ruby code is delimited by paired bracket/percent tags, <% like this %>. A plain tag, like the one just shown, runs ruby code but does not display any output in the interpolated template. A <%= printing tag %> prints out the result of the ruby code. Within the tags, facts, global variables, and current-scope variables are available by prepending them with an at sign, i.e. <%= @fqdn %> will print the fully qualified domain name of the node. For variables outside of the current scope, the scope.lookupvar method is used, i.e. <%= scope.lookupvar('wiki::sql_db') %> will print the value of $wiki::sql_db. Note that no dollar sign is used and the leading :: is left off (I have not found a style guide explanation for this, but it’s pretty consistent in documentation).
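Putting those pieces together, here’s a quick sketch; the wikiname variable is purely illustrative. The first line runs but prints nothing, while the second and third print:

<% wikiname = @fqdn.split('.').first %>
# data source for <%= wikiname %>
sql_db = <%= scope.lookupvar('wiki::sql_db') %>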

You can also access hiera directly with scope.function, i.e. <%= scope.function_hiera(["hiera::var"]) %>, but puppet does not recommend this. It clutters up the template, making it hard to read and tightly coupled to hiera. Assigning a local variable the value of a hiera lookup and using that variable in your template is a much better pattern.
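For example, here’s a sketch of that pattern with a hypothetical minimum-word-length setting (the key name is invented for illustration):

  # in the manifest:
  $min_word_len = hiera('profile::wiki::min_word_len', 4)

  # in the template, class-local variables are reachable like any other:
  min_word_len = <%= @min_word_len %>

The template never knows hiera was involved, and the manifest documents every external value in one place.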

We won’t use them today, but there are two other tags. <%# comments start with a pound %>, and if you have multiline code, using a hyphen in your tags will strip out leading or trailing space. This is great for conditionals and loops:

<% if @something -%>
server  <%= @server %>
<% end -%>

This would print ‘server’ followed by the value of @server on its own line, without leaving blank lines where the if and end tags were.
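Loops look much the same; here’s a sketch that assumes a hypothetical @servers array passed in from the manifest:

<% @servers.each do |server| -%>
server  <%= server %>
<% end -%>

Each element prints on its own line, and again the hyphens keep the each and end lines from leaving blanks.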

ERB is itself fairly simple. Let’s take a look at the converted template, focusing on lines 17-21 again:

    # data source
    type = mysql
    sql_host = <%= @sql_host %>
    sql_user = <%= @sql_user %>
    sql_pass = <%= @sql_pass %>
    sql_db = <%= @sql_db %>

We’re simply going to take the value of the $sql_* variables from the calling class and interpolate them. If we set $sql_host to ‘192.168.1.15’, we’ll have the same value as our original config file had, and so on for the other values. So, where do we do that? In the module. In this case, I have a class profile::wiki in the profile module that will put this file on the node.

This is again taken from a production environment, so the content is not available on github in its entirety. I’ll post the relevant config snippets with some anonymization; you can see more in a previous article.

Module Modification

The most relevant portion is the package list:

  # Packages
  Yumrepo['epel'] -> Package<| |>
  $packages = [
    'mediawiki',
    'mediawiki-extensions',
    'gatekeeper',
    "mediawiki-${wikienvironment}-config",
    'nfs-utils',
  ]
  package { $packages:
    ensure => latest,
  }

In this list of packages is mediawiki-extensions, which deploys the SphinxSearch extension and its prerequisite, sphinx. After that is installed, we will overwrite the config file (if we do it beforehand, the package could overwrite it or fail to install). Let’s add a file resource with a template for the content attribute and ensure it follows the mediawiki-extensions package. We’ll go ahead and manage the searchd service and make sure it is notified of config changes.

  file { 'sphinx.conf':
    ensure  => file,
    path    => '/etc/sphinx/sphinx.conf',
    content => template('profile/wiki/sphinx.conf.erb'),
    require => Package['mediawiki-extensions'],
    notify  => Service['searchd'],
  }
  service { 'searchd':
    ensure => running,
  }

The template() function parses profile/wiki/sphinx.conf.erb: the first path segment names the module, and the remainder is relative to that module’s templates directory, so the file belongs at modules/profile/templates/wiki/sphinx.conf.erb in your environment. Create that file and populate it. Deploy the environment and run the agent. You’ll notice that it works… kinda. When you look at the changes to the file, you end up with this (just showing the adds, not the deletes):

+       # data source
+       type            = mysql
+       sql_host        =
+       sql_user        =
+       sql_pass        =
+       sql_db          =

We don’t have an sql_host variable in our manifest, so we got an empty string. Whoops! Let’s go back to the top of our wiki profile and add some parameters to the class. As we did last time, we set local variables with defaults in the class. Here’s the diff:

$ git diff
diff --git a/manifests/wiki.pp b/manifests/wiki.pp
index 58514cc..7c819b5 100644
--- a/manifests/wiki.pp
+++ b/manifests/wiki.pp
@@ -30,8 +30,16 @@ class profile::wiki (
   $nfs_server            = 'localhost',
   $DeletedFileStoreMount = '/nfs/share/wiki/deleted',
   $ImagesMount           = '/nfs/share/wiki/images',
-  $database              = 'wikidev',
+  $sql_host              = undef,
+  $sql_user              = 'wikiuser',
+  $sql_pass              = 'password',
+  $sql_db                = 'wikidev',
 ) {
+  # Verify required values
+  if ($sql_host == undef) {
+    fail('No sql_host has been specified. Provide the name/ip of the database server hosting the specified wiki db.')
+  }
+
   # Global settings
   $docroot  = '/srv/www' # Built into the packages, cannot be changed.

@@ -132,14 +140,14 @@ class profile::wiki (
   }
   cron { 'sphinx_main':
     ensure  => 'present',
-    command => "/usr/bin/indexer --quiet --config /opt/sphinx/sphinx.conf ${database}_main --rotate >/dev/null 2>&1",
+    command => "/usr/bin/indexer --quiet --config /opt/sphinx/sphinx.conf wiki_main --rotate >/dev/null 2>&1",
     hour    => 3,
     minute  => 0,
     user    => 'sphinx',
   }
   cron { 'sphinx_incrementals':
     ensure  => 'present',
-    command => "/usr/bin/indexer --quiet --config /opt/sphinx/sphinx.conf ${database}_incremental --rotate >/dev/null 2>&1",
+    command => "/usr/bin/indexer --quiet --config /opt/sphinx/sphinx.conf wiki_incremental --rotate >/dev/null 2>&1",
     minute  => 5,
     user    => 'sphinx',
   }

$database has been replaced with $sql_db, which also lets us fix a subtle bug in the sphinx cronjobs (the config file in the package was static and only held ‘wiki_main’, so any other database name broke it… silently). The other variables required by the ERB template are provided along with defaults. We set $sql_host to a default of undef and then check whether it has been given a value; if it remains undef, we error out and fail the catalog compilation. Deploy this change and you’ll see the error message:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: No sql_host has been specified. Provide the name/ip of the database server hosting the 
specified wiki db. at /etc/puppet/environments/v1_23_5/modules/profile/manifests/wiki.pp:40 on node wikidev01.nelson.va

Let’s go back to the YAML for this host (this could be done at another hierarchy level if you have multiple servers connecting to the same database). Add the three missing values and rename the one existing value. IPs and passwords are anonymized, of course:

-profile::wiki::database              : 'wikitesting'
+profile::wiki::sql_host              : '192.168.1.15'
+profile::wiki::sql_user              : 'wikiuser'
+profile::wiki::sql_pass              : 'password'
+profile::wiki::sql_db                : 'wikitesting'

Let’s run the agent again:

+       # data source
+       type            = mysql
+       sql_host        = 192.168.1.15
+       sql_user        = wikiuser
+       sql_pass        = password
+       sql_db          = wikitesting

You should also see some entries about searchd having a scheduled refresh. Congratulations, you’ve just created a template.

Summary

Alongside a module that we’ve converted from hardcoded values to values from hiera, we’ve taken a hardcoded config file and created a template that populates the config with hiera data. This article went through one file for one profile. You’ll probably find more files that would benefit from template conversion (for example, mediawiki’s LocalSettings.php also contains data source information). If those config files are in packages (hint: like my mediawiki-${environment}-config package), the process might be fairly lengthy, stripping files out of packages to turn them into templates populated from hiera, but you now have all the tools you need.
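As a parting example, here’s a sketch of what the data source block of a LocalSettings.php template could look like, reusing the same class parameters (the $wg* names are MediaWiki’s own settings, shown only as an illustration):

$wgDBtype     = "mysql";
$wgDBserver   = "<%= @sql_host %>";
$wgDBname     = "<%= @sql_db %>";
$wgDBuser     = "<%= @sql_user %>";
$wgDBpassword = "<%= @sql_pass %>";

Go forth and template!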

Why Puppet?

As we near the end of my scheduled puppet content, I’ve asked the Twitterverse for any other topics people want to see discussed. Jason Shiplett asked a great question: Why Puppet?

This is essentially a two-fold question. First, you must understand what Configuration Management (CM) is and why you need it. Second, of all the CM tools out there, why would you choose Puppet?

Configuration Management

In spite of my telling Jason that the world doesn’t need another “Why CM?” post, here we go :)

Plenty of other people have done a great job explaining what Configuration Management is and why you need it. Chief among these is the Information Technology Infrastructure Library, or ITIL, a framework for IT Service Management, which describes Configuration Management in its Service Transition volume. We can simplify the meaning to: describing and managing the state of a configuration throughout a service’s lifecycle.

Continue reading

Refactoring a Puppet class for use with Hiera

For the past few weeks, we have been working on packaging our own software and deploying it with puppet. Before that, we touched on refactoring modules to use hiera. In fact, I grandiosely referred to it as Hiera, R10K, and the end of manifests as we know them. I included a very simple example of how to refactor a per-node manifest into the role/profile pattern and use hiera to assign it to the node. Today, we’ll look at more features of hiera and how you would refactor an existing class to use hiera.

In a legacy implementation of puppet, you’ll likely find plenty of existing modules whose classes have static assignment or lots of conditionals to determine the necessary values to be applied. Even in a greenfield implementation of puppet, you may find yourself writing straight Puppet DSL for your classes before refactoring them to use hiera. Figuring out how to refactor efficiently isn’t always obvious.

First, let’s take a look at Gary Larizza’s When to Hiera and The Problem with Separating Data from Puppet Code articles. Gary covers the when and why much better than I could, so please, go read his articles and then come back here. Gary also covers the common pre-hiera pattern and a few patterns that can be used when refactoring to hiera. There is another pattern that is documented indirectly by Gary (under Hiera data bindings in Puppet 3.x.x) and in the Hiera Complete Example at docs.puppetlabs.com. I’m going to explain and document this pattern directly, adding another hiera pattern to Gary’s list.

Continue reading

FPM and Build Automation

Having created one or more build servers, the next logical step is to start building software. We touched on this briefly a few weeks ago, and with a proper development station, it’s time to expand on it.

If you’re a developer by trade, you can probably skim or skip this article. Remember, this series is aimed at vSphere Admins, not devs. I’d certainly appreciate your insights in the comments or on twitter, though!

Modifying software build processes for FPM

We’ve used FPM in the past to take a directory and turn it into a package. This works very well when /some/long/path belongs entirely to your application. What if your application drops a binary in /bin, a manpage in /usr/share/man/man5, a config file in /etc, or even just a few files in a directory that’s shared with other packages? Let’s take a look at an extension for mediawiki. This one is very simple: we have a legacy Makefile with two useful targets, dev and prod:

Continue reading

Puppetize a Build server

The Puppet series so far has really focused on VM builds and has just started to touch on software packaging. We need an appropriate place to do this work, and what better way to set that up than via Puppet itself? Today, we’ll create some roles and profiles for a build server, which could be permanent and shared amongst developers, spun up as needed for the team, or spun up per developer.

Build Profile and Role

The last few examples we have done with FPM were on our “production” servers. That’s less than ideal for a few reasons. You wouldn’t want to mess up the publicly available service while packaging, whether by overwriting a file, exhausting resources, or causing a brief outage when services restart. It is not a good idea to add compilers and development libraries to any server unnecessarily, as it increases the attack surface (additional security risks, additional packages to patch, additional items for auditors to flag, etc.). You also probably do not want your build servers in the same environment as your production servers (unless, as is the case in these examples, your “production” environment is your lab, so just pretend it’s a different environment). Let’s assume that we do not have a good reason to violate these best practices, so our goal is to set up a dedicated build server. It will require all the software we have been using so far, and we will throw on a local user. If you have LDAP or another directory service in your lab, I would suggest using it, but this is a good example as sometimes the build network is restricted.

We have two profiles to create, then. The first is the local users. We’ll call this class ::profile::build_users, in case we create another grouping of users later. The second profile is for our build software, and we will call it ::profile::build. Here are the two class files, located at profile/manifests/build_users.pp and profile/manifests/build.pp, respectively.

Continue reading

Deploying your custom application with Puppet

In the past two weeks, we learned how to create packages for our own applications and how to host them in a repository. The next step is to puppetize the application so that you can deploy it to nodes through automation. We’ll need to add the repo to our base profile so all nodes receive it, define a profile that requires the application, and create a role class and corresponding hiera yaml to apply the configuration to a specified node. Let’s get started!

Add the repo to the base profile

This step is fairly simple. Last week, we defined the repo and applied it manually with:

  yumrepo {'el-6.5':
    descr    => 'rnelson0 El 6.5 - x86_64',
    baseurl  => 'http://yum.nelson.va/el-6.5/',
    enabled  => 'true',
    gpgcheck => 'false',
  }

Add that to your base profile. It should look something like this now:

Continue reading

Create a Yum Repo

In last week’s article, we learned how to build a package with FPM, specifically an RPM. Today, we’ll look at creating a Yum repository to host your packages. The repo can be built with puppet, which can also distribute settings so all your managed nodes can use the repo. By adding the package to the repo, it becomes available to install, again via puppet. This is the first step on the road to the automated software packaging and delivery that is vital for continuous integration.

A repo has a few components.

  • Webserver – Content is served up over http.
  • createrepo – A piece of software that manages the repo’s catalog.
  • RPMs – What it’s serving up.

We don’t need to know how the pieces work, though. We’ll rely on palli/createrepo to manage the repo itself. We just make sure a webserver is available, the directories are there, and that there’s some content available.
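If the module isn’t in your Puppetfile or module path yet, it’s on the Forge. A sketch of a manual install (adjust for your r10k workflow):

[rnelson0@puppet ~]$ puppet module install palli-createrepo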

Configure a host

I’m going to start moving faster now, because we’ve done this part a few times already. Please let me know on twitter if I’m going too fast, and I can update the page and make sure future articles keep the same pace.

The first thing is to choose a node. Since our series so far relies on the hostname for a role, spin up a new VM called ‘yumrepo01’ or something similar. You don’t need to configure anything afterward, except possibly updating Puppet (as I’m writing this, v3.7.0 was just released). Of course, you should run the agent and accept the cert so that the node can communicate with the master before continuing.
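If you need a refresher on that exchange, it looks something like this (hostnames from this lab):

[rnelson0@yumrepo01 ~]$ sudo puppet agent -t
# on the master, sign the new node's cert, then run the agent again:
[rnelson0@puppet ~]$ sudo puppet cert sign yumrepo01.nelson.va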

Next up is the manifest. Start with the profile.

[rnelson0@puppet profile]$ cat manifests/yumrepo.pp
class profile::yumrepo {
  include '::profile::apache'

  apache::vhost {'yum.nelson.va':
    docroot    => '/var/www/html/puppetrepo',
  }
}

Our existing ::profile::apache ensures the web server is installed and running and creates firewall rules to allow traffic in. The apache::vhost definition creates the virtual host for our web server. We will place our repo underneath /var/www/html/puppetrepo but you can choose anywhere you want. The next piece is the role.

[rnelson0@puppet role]$ cat manifests/yumrepo.pp
class role::yumrepo {
  include profile::base  # All roles should have the base profile
  include profile::yumrepo

  $repodirs = hiera('repodirs')
  file { $repodirs :
    ensure => 'directory',
  }
  create_resources('::createrepo', hiera_hash('yumrepos'), {require => File[$repodirs]} )
}

After including the base and yumrepo profiles, we need to create the directories for the repo itself: the vhost doesn’t actually create the specified directory, and there’s also a cache directory to account for. We also create some ::createrepo defines. All of this content comes from hiera:

[rnelson0@puppet hiera-tutorial]$ cat puppet_role/yumrepo.yaml
---
classes:
  - role::yumrepo
repodirs:
  - '/var/www/html/puppetrepo'
  - '/var/cache/puppetrepo'
yumrepos:
  'el-6.5':
    repository_dir       : '/var/www/html/puppetrepo/el-6.5'
    repo_cache_dir       : '/var/cache/puppetrepo/el-6.5'
    suppress_cron_stdout : true

Our repo has the name el-6.5 (based on our CentOS 6.5 base image) and exists one level underneath our docroot and cache directory (there’s a good reason why). There’s a cron job that runs every minute, rebuilding the index. Unless you like getting new mail every minute, I suggest suppressing output. Here’s the cron job it will create:

# Puppet Name: update-createrepo-el-6.5
*/1 * * * * /usr/bin/createrepo --cachedir /var/cache/puppetrepo/el-6.5 --changelog-limit 5 --update /var/www/html/puppetrepo/el-6.5 1>/dev/null

There are other options you can feed ::createrepo, but this will suffice to start. With the role and profile manifest and hiera yaml in place on the master, you can run the agent on yumrepo01 and everything should install properly. You can test access by visiting http://yum.nelson.va/ afterward. You should see the el-6.5/repodata directory structure, and the bottom dir will have a few files. Great success!
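If you prefer the command line to a browser, a quick curl against the same URL works too (hostname from this lab):

[rnelson0@puppet ~]$ curl -s http://yum.nelson.va/el-6.5/repodata/repomd.xml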

Populating the repo

This step is easy: copy files to the repo. Where packages go is up to you; you can toss them in the top level, or create an RPMS or Packages directory. Wherever you throw them, createrepo will find them. We’ll put our files at the top level (/var/www/html/puppetrepo/el-6.5/ in this case). The cron job runs once a minute, so after a brief wait the packages will be indexed and ready. You can again browse to the repo at http://yum.nelson.va/el-6.5/, using the correct hostname for your environment.

If you do use subdirectories, be aware that ::createrepo does not create or manage them, so you will have to do so yourself. Take care not to manage the same directory twice, once via ::createrepo and again via a File resource. You could set manage_repo_dirs to false in your ::createrepo definition if you’d rather manage all the directories yourself, as shown in the sketch below.
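That tweak is just one more key in the hash we already feed to create_resources; a sketch building on the yaml above (assuming the define accepts manage_repo_dirs, as described):

yumrepos:
  'el-6.5':
    repository_dir       : '/var/www/html/puppetrepo/el-6.5'
    repo_cache_dir       : '/var/cache/puppetrepo/el-6.5'
    suppress_cron_stdout : true
    manage_repo_dirs     : false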

Using the repo

Finally, we need a way to make use of the repo. A yumrepo resource type is native to puppet, so let’s look at the definition for our new repo:

  yumrepo {'el-6.5':
    descr    => 'rnelson0 El 6.5 - x86_64',
    baseurl  => 'http://yum.nelson.va/el-6.5/',
    enabled  => 'true',
    gpgcheck => 'false',
  }

We are defining the repo el-6.5, giving it a description and the URL to use. Adding a repo just creates the repo file, and it defaults to disabled, so we set enabled to true. The default for gpgcheck, which ensures signatures on RPMs are valid, is true; since we haven’t signed our package, we’ll set that to false. I’ll leave configuration of gpg, or lack thereof, up to you. Let’s go back to server01 and use puppet apply to attach the repo. Afterward, search for helloworld.

[rnelson0@server01 ~]$ yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * extras: mirror.steadfast.net
 * updates: mirror.anl.gov
repo id                                   repo name                                                     status
base                                      CentOS-6 - Base                                               6,367
extras                                    CentOS-6 - Extras                                                15
puppetlabs-deps                           Puppet Labs Dependencies El 6 - x86_64                           68
puppetlabs-products                       Puppet Labs Products El 6 - x86_64                              422
updates                                   CentOS-6 - Updates                                            1,467
repolist: 8,339
[rnelson0@server01 ~]$ sudo puppet apply
  yumrepo {'el-6.5':
    descr    => 'rnelson0 El 6.5 - x86_64',
    baseurl  => 'http://yum.nelson.va/el-6.5/',
    enabled  => 'true',
    gpgcheck => 'false',
  }
Notice: Compiled catalog for server01.nelson.va in environment production in 0.11 seconds
Notice: /Stage[main]/Main/Yumrepo[el-6.5]/ensure: created
Notice: Finished catalog run in 0.07 seconds
[rnelson0@server01 ~]$ yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * extras: mirror.steadfast.net
 * updates: mirror.anl.gov
el-6.5                                                                                 | 2.9 kB     00:00
el-6.5/primary_db                                                                      | 2.3 kB     00:00
repo id                                   repo name                                                     status
base                                      CentOS-6 - Base                                               6,367
el-6.5                                    rnelson0 El 6.5 - x86_64                                          1
extras                                    CentOS-6 - Extras                                                15
puppetlabs-deps                           Puppet Labs Dependencies El 6 - x86_64                           68
puppetlabs-products                       Puppet Labs Products El 6 - x86_64                              422
updates                                   CentOS-6 - Updates                                            1,467
repolist: 8,342
[rnelson0@server01 ~]$ yum search helloworld
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * extras: mirror.steadfast.net
 * updates: mirror.anl.gov
========================================== N/S Matched: helloworld ===========================================
helloworld.x86_64 : no description given

  Name and summary matches only, use "search all" for everything.

Go ahead and try and install it with yum (abbreviated output):

[rnelson0@server01 ~]$ sudo yum install helloworld
...
Resolving Dependencies
--> Running transaction check
---> Package helloworld.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution
...
Is this ok [y/N]: y
Downloading Packages:
http://yum.nelson.va/el-6.5/helloworld-1.0-1.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"

If you check the permissions on the file, you might wonder why a world-readable file isn’t available. Our CentOS template enforces selinux, so let’s take a look at the context:

[root@yumrepo01 el-6.5]# ls -laZ /var/www/html/puppetrepo/el-6.5/helloworld-1.0-1.x86_64.rpm
-rw-rw-r--. rnelson0 rnelson0 unconfined_u:object_r:user_home_t:s0 /var/www/html/puppetrepo/el-6.5/helloworld-1.0-1.x86_64.rpm
[root@yumrepo01 el-6.5]# ls -laZ /var/www/html/puppetrepo/el-6.5/repodata/repomd.xml
-rw-r--r--. root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/puppetrepo/el-6.5/repodata/repomd.xml

Here’s our issue: the new rpm is not in the httpd_sys_content_t context but in the user_home_t context. This occurred because I scp’ed the file from server01 as rnelson0 to yumrepo01 as rnelson0, then on yumrepo01 moved the file to /var/www/html/puppetrepo as root. A move preserves the file’s existing context, whereas a copy creates a new file that inherits the default context of the destination directory. Let’s move the file back, then try copying it as root:

[root@yumrepo01 el-6.5]# mv /var/www/html/puppetrepo/el-6.5/helloworld-1.0-1.x86_64.rpm ~rnelson0/
[root@yumrepo01 el-6.5]# cp ~rnelson0/helloworld-1.0-1.x86_64.rpm /var/www/html/puppetrepo/el-6.5/
[root@yumrepo01 el-6.5]# ls -laZ /var/www/html/puppetrepo/el-6.5/helloworld-1.0-1.x86_64.rpm
-rw-r--r--. root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/puppetrepo/el-6.5/helloworld-1.0-1.x86_64.rpm

That’s better! You can also use the restorecon command to restore the default context on files, i.e. restorecon /var/www/html/puppetrepo/el-6.5/helloworld-1.0-1.x86_64.rpm, or have puppet enforce the context for you (see the sketch after the install output below). Now try and install it again:

[rnelson0@server01 ~]$ sudo yum install helloworld
...
Is this ok [y/N]: y
Downloading Packages:
helloworld-1.0-1.x86_64.rpm                                                            | 1.7 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : helloworld-1.0-1.x86_64                                                                    1/1
  Verifying  : helloworld-1.0-1.x86_64                                                                    1/1

Installed:
  helloworld.x86_64 0:1.0-1

Complete!
[rnelson0@server01 ~]$ rpm -qa | grep helloworld
helloworld-1.0-1.x86_64
[rnelson0@server01 ~]$ rpm -ql helloworld
/var/www/html/index.php
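
One last note on selinux: rather than fixing contexts by hand, you can have puppet enforce them. Here’s a sketch using the file resource’s built-in selinux parameters; the path matches our repo layout:

  file { '/var/www/html/puppetrepo/el-6.5/helloworld-1.0-1.x86_64.rpm':
    ensure  => file,
    seltype => 'httpd_sys_content_t',
  }

You probably wouldn’t manage every rpm individually like this; it just demonstrates the parameter.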

Summary

You now have a package, a repository that hosts it, and a client that can install packages from the repo. Go ahead and spin up server02 through server10 and you should be able to easily define the repo and install the rpm. As we haven’t declared any dependencies yet, the lack of httpd will prevent our app from working; if you install apache manually or via puppet, your hello world app will work. We’ll work on some more details of FPM, including dependencies, and the integration of packages and repos with our puppet manifests next time.
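As a preview of that dependency work, fpm’s -d flag declares a package dependency at build time; a sketch, assuming the same helloworld layout as last week:

fpm -s dir -t rpm -n helloworld -v 1.1 -d httpd /var/www/html/index.php

With that in place, yum would pull in httpd automatically when installing helloworld.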
