Tips for configuring a new vRO 6 Appliance

I just configured a brand new vRealize Orchestrator Appliance v6.0.3 with a vCenter Server (not appliance) v6.0U1. The deployment of the OVF is pretty simple, but configuration was trickier than I expected. VMware’s guide is accurate if everything works well but painfully inadequate if you require any troubleshooting. Take a run through the guide first; I’m not going to repeat what it does cover. If you have problems, maybe one of these tips will help you.

Authentication

Any time you change authentication, you MUST restart the vRO service. You may see all the status icons go from green to red to blue and back to green, which makes it appear that some services are restarting, but they aren’t. If you’re not sure, click the restart button as shown below. Bonus: when the page responds and says the service is restarted, the service really is ready to use, unlike some other VMware products *cough*vSphereWebClient*cough*
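
If you prefer to restart from the appliance’s console instead of the configuration page, something along these lines should work. This is a sketch, and the service names (vco-server and vco-configurator) are my recollection of the vRO 6.x appliance, so verify them on your appliance first.

# Restart the Orchestrator server service on the vRO appliance
service vco-server restart

# If you changed settings in the configurator, restart it as well
service vco-configurator restart

# Confirm the server service came back up
service vco-server status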

Continue reading

Discovering Puppet module dependencies for use with r10k

Currently, r10k does not perform dependency resolution. It’s on the roadmap, but not here yet (RK-3 tracks the effort). Until then, it’s up to you to determine the module dependencies. You can do this by visiting a module’s forge page, then clicking on dependencies, then clicking on each dependency’s forge page and clicking on dependencies until you reach the bottom. That’s pretty boring and a recipe for mistakes. Let’s look at how to automate this.

If you use puppet module install, it will install the module and all its dependencies for you. Give it a module name and a fresh temporary directory, so that a dependency already on your module path doesn’t get skipped, and you’ll end up with the full dependency list.

Puppet module dependencies fig 1
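
The invocation looks roughly like this; the module name and target directory are placeholders, and if your Puppet version complains about the flags, check puppet help module install and puppet help module list.

# Create a scratch directory so no existing modules hide a dependency
mkdir /tmp/modules

# Install the module (and everything it depends on) into the scratch directory;
# puppetlabs/apache is just an example module name
puppet module install puppetlabs/apache --target-dir /tmp/modules

# List everything that was pulled in
puppet module list --modulepath /tmp/modules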

In the past, I’ve shown how you can then use puppet module list and some search and replace patterns to convert to a Puppetfile, but it was all manual and it was scattered through the blog article. Here are some of the patterns you need, in vi format:

:%s/\s*[└┬├─]*\s/mod "/
:%s/ (v/", "/
:%s/)/"/
:%s/-/\//g
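
Applied to a line of puppet module list output, the four patterns turn the tree listing into Puppetfile syntax. For example (the module name and version here are made up):

├── puppetlabs-concat (v1.2.4)

becomes

mod "puppetlabs/concat", "1.2.4"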

Vi is interactive, though; we can do better. Using the magic of sed, I wrote a tool called generate-puppetfile that will install the specified modules in temp directories and use the module listing to build a Puppetfile for you.

Puppet module dependencies fig 2

I renamed the utility from depconvert to generate-puppetfile for clarity.

Download the utility (git clone https://github.com/rnelson0/puppet-generate-puppetfile.git) and run it, passing the names of the modules you want to use. There’s no real error checking yet, so if you enter an invalid name or no name at all you won’t get what you want, but any errors will be printed to the screen. I hope this helps!
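
A quick sketch of the workflow; the script name and module arguments below are assumptions based on the repository name, so check the repo’s README for the exact invocation.

# Grab the utility
git clone https://github.com/rnelson0/puppet-generate-puppetfile.git
cd puppet-generate-puppetfile

# Build a Puppetfile covering the listed modules and their dependencies
./generate-puppetfile puppetlabs/apache puppetlabs/ntp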

NFS Export Settings for vSphere

Over the past few days, I’ve had to set up an NFS export for use with vSphere. I found myself frustrated with the vSphere docs because they only seem to cover the vSphere side of the house – when discussing the export that vSphere will use, the docs simply said, “Ask your vendor.” If you’re enabling NFS on your SAN and the vendor provides a one-click setup or a document with settings to enable, great; but if you’re setting up a bare metal server or a VM running Linux, like I was, you don’t have a vendor to ask.

Since I didn’t have a guide, I took a look at what happens when you enable NFS on a Synology. I don’t know if this is optimal, but this works for many people with the defaults. You can replicate this in Control Panel -> Shared Folders -> Highlight and Edit -> NFS Permissions. Add a new rule, add a hostname or IP entry, and hit OK. Here’s what the defaults look like:

NFS Exports fig 1

Let’s take a look at what happened in the busybox shell. SSH to your Synology as root with the admin password. Take a look at the permissions on the mount path (e.g. /volume1/rnelson0) and the contents of /etc/exports.

NFS Exports fig 2

(there is no carriage return at the end of the file; the ‘ds214>’ is part of the prompt, not the exports contents)

A working mode for the directory is ‘0777’ and there’s a long string of NFS options. They are described in detail in the exports(5) man page. Here’s a high-level summary of each, with a reconstructed export line after the list:

  • rw: Enable writes to the export (default is read-only).
  • async: Allow the NFS server process to accept additional writes before the current writes are written to disk. This is very much a preference and has potential for lost data.
  • no_wdelay: Do not delay writes even if the server suspects (how? I don’t know) that another related write is coming. This is the default behavior when async is set, so it has no specific effect here unless you remove async. It can have performance impacts; check whether wdelay is more appropriate for you.
  • no_root_squash: Do not map requests from uid/gid 0 (typically root) to the anonymous uid/gid.
  • insecure_locks: Do not require authentication of locking requests.
  • sec=sys: There are a number of security modes; sys means no cryptographic security is used.
  • anonuid/anongid: The uid/gid for the anonymous user. On my Synology these are 1025/100 and match the guest account. Most Linux distros use 99/99 for the nobody account. vSphere will be writing as root so this likely has no actual effect.
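
Putting that together, the Synology’s export line looks roughly like the following. This is a reconstruction from the options above; the client spec (10.0.0.0/24) is just a placeholder for whatever hostname or network you entered in the rule.

ds214> cat /etc/exports
/volume1/rnelson0 10.0.0.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)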

I changed the netmask and anon(u|g)id values, as it’s most likely that a Linux box with a nobody user would be the only non-vSphere client. Those should be the only values you need to change; async and no_wdelay are up to your preference.

If you use Puppet, you can automate the setup of your NFS server and exports. I use the echocat/nfs module from the forge (don’t forget the dependencies!). With the assumption that you already have a /nfs mount of sufficient size in place, the following manifest will create a share with the correct permissions and export flags for use with vSphere:

node 'server' {
  include ::nfs::server

  # Directory backing the export; vSphere writes as root, so 0777 plus
  # no_root_squash keeps permissions out of the way
  file { '/nfs/vsphere':
    ensure => directory,
    mode   => '0777',
  }

  # Export the directory with the Synology-derived flags discussed above
  ::nfs::server::export { '/nfs/vsphere':
    ensure  => 'mounted',
    nfstag  => 'vsphere',
    clients => '10.0.0.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys,anonuid=99,anongid=99)',
    require => File['/nfs/vsphere'],
  }
}

To add your new datastore to vSphere, launch the vSphere Web Client. Go to the Datastores page, select the Datacenter you want to add the export to, and click on the Related Objects tab. Select Datastores; the first icon adds a new datastore. Select NFS and the NFS version (new in vSphere 6), give your datastore a name, and provide the name of the export (/nfs/vsphere) and the hostname/IP of the server (nfs.example.com). Continue through the wizard, checking each host that will need the new datastore.

NFS Exports fig 3

Click Finish and you have a datastore that uses NFS! You may need to tweak the export flags for your environment, but this should be a good starting point. If you are aware of a better set of export flags, please let me know in the comments. Thanks!
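
As an aside, if you’d rather skip the wizard, the same export can be mounted from an ESXi host’s shell with esxcli. A sketch, reusing the example export path and hostname above; the volume name is my own placeholder.

# Mount the NFS export as a datastore named "vsphere-nfs" on this host
esxcli storage nfs add --host=nfs.example.com --share=/nfs/vsphere --volume-name=vsphere-nfs

# Confirm the mount
esxcli storage nfs list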

vCenter Web Client and OVF deployment issues with Teredo

Tonight I was deploying an OVF on a newly upgraded vCenter version 6.0U1 through the vSphere Web Client. This is the Windows server (VCS), not the appliance (VCSA). I ran into this wonderful error at the storage selection step:

VCS Storage Issue Fig 1

There is KB 2053229 that says this was resolved in vCenter 5.1U2. However, the underlying issue is that the client attempts to communicate with the vCenter service in a new session to validate the storage, so the problem may remain in certain environments. I found that a Teredo tunneling address was being used during storage selection rather than the IPv4 address I had connected to in my web browser. These addresses are provided for “free” with Windows Server 2008 R2. The (magic?) design of Teredo is that it should not be used when an IPv4 A record exists and should only be used for IPv6 URLs. I’m not sure why VCS would use it, just that I observed it being used. I discovered this with the help of two artifacts: 1) another node on a remote network was able to make it past storage selection; and 2) I saw a deny entry in syslog from my problematic management node to the VCS’s IPv4 addresses, with IPv6 as the service:

VCS Storage Issue Fig 2

The KB article points to DNS, so I checked and found a AAAA record for vCenter. Delete that record, wait for it to replicate throughout Active Directory and for cached lookups to expire, and reload the vSphere Web Client. You should no longer see a log failure for IPv6, and your OVF deployment should at least make it past the storage selection stage. I hope this helps!
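
To check for the two culprits from the management node, something along these lines should do it. The vCenter hostname is a placeholder, and these are standard Windows commands rather than anything vCenter-specific.

# Is Teredo active on this machine?
netsh interface teredo show state

# Does the vCenter name resolve to an IPv6 address?
nslookup -type=AAAA vcenter.example.com

# After deleting the AAAA record, flush the local cache and re-check
ipconfig /flushdns
nslookup -type=AAAA vcenter.example.com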

Platform9 and vSphere Integration

Platform9 recently announced a vSphere Integration product. If you haven’t heard of Platform9 before, they offer OpenStack-as-a-Service for management of private cloud installations. Platform9 manages the OpenStack platform and you manage your virtualized infrastructure. You don’t have to know or keep up with the inner workings of OpenStack to have a working platform. This new product announcement expands the platform from the KVM hypervisor to the vSphere hypervisor.

OpenStack is a very complicated platform. Gaining the knowledge needed to design, set up, and maintain an OpenStack system is time and effort not spent on fulfilling your business goals. As the platform grows, more time and effort is required to stay current and upgrade your implementation. An externally managed system saves you that time and effort, which can then be put directly toward your business goals.

Disclaimer: Platform9 provided me with an extended free trial account for the purposes of this article.

How does it work?

Platform9 deploys an OpenStack controller just for you. They install, monitor, troubleshoot, and upgrade it for you. With vSphere integration, a local VM called the vSphere Gateway Appliance (VGA) is deployed in your vSphere environment; it communicates with your vCenter server and with Platform9’s hosted controller, eliminating the need for a VPN or other private communication channel between the controller and vCenter.

Continue reading