Hello Node.js on Rancher

In this post I’m going to show how to go through the Kubernetes Hello World Walkthrough but using Rancher instead of Google’s Cloud Platform. One of the reasons I wanted to install Kubernetes on my own resources instead of in the cloud is so that I don’t have to pay additional costs while I’m experimenting/learning.

You’ll need to have done the following before proceeding:

Create the Docker image

I’m going to build the Docker image on my Rancher machine (a VM), but you can build it anywhere. If you decide to build the Docker image somewhere other than the Rancher machine, you’ll need to push your image up to Docker Hub. You may be able to add a container registry to Rancher, but I haven’t explored that. You can also use my image.

Run the following on the Rancher machine.

mkdir hello-node
cd hello-node

Create a file named Dockerfile with the following contents:

FROM node:4.4
COPY server.js .
CMD node server.js

Create a file named server.js with the following contents:

var http = require('http');
var handleRequest = function(request, response) {
  response.end("Hello World!");
};
var www = http.createServer(handleRequest);
www.listen(8080);
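If Node.js is installed on your build machine, you can sanity-check the server before baking it into an image. This assumes the server listens on port 8080, the port the deployment uses later:

```shell
# Start the server in the background, hit it once, then stop it.
node server.js &
SERVER_PID=$!
sleep 1
curl http://localhost:8080
kill $SERVER_PID
```

The curl command should print Hello World! if the server is working.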

Build the Docker image (that’s a period after v1):

docker build -t chrisgreene/hello-node:v1 .

If you need to upload the image to Docker Hub (make sure to change the image name to include your own Docker Hub username or it will conflict with mine and fail):

docker login
docker push <your-username>/hello-node:v1

Create the deployment

Now that we have our Docker image ready to go we can create the deployment. To do so:

  1. Make sure you’re in the Kubernetes (k8s) environment.
  2. Select Kubernetes
  3. Select Kubectl
  4. You’ll now have a shell to enter commands


Enter the following command (replace the image name if you’re using your own):

kubectl run hello-node --image=chrisgreene/hello-node:v1 --port=8080


If the deployment was successfully created, you’ll see the following:


Verify our deployment:
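From the Kubectl shell, this is:

```shell
kubectl get deployments
```

The hello-node deployment should be listed with one replica.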


If we get all of the pods, we will see the hello-node pod:
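Listing the pods is:

```shell
kubectl get pods
```

Pods created by the deployment get generated names along the lines of hello-node-<id>.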


You can also view the replica sets by running kubectl get rs.

Select Infrastructure > Containers and you’ll see the hello-node containers:


Expose the Node.js service to the outside

In order to reach the Node.js service, we need to expose it. We can do this with the following command:

kubectl expose deployment hello-node --port=80 --target-port=8080 --external-ip=<rancher-ip>

Here <rancher-ip> is the IP of my Rancher machine; you'll most likely need to change it. If you're copying and pasting from this example, make sure there are two dashes in front of port, target-port and external-ip. Sometimes those get lost during copy/paste and the command won't work.


Let’s verify our service:
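The service can be checked with:

```shell
kubectl get services hello-node
```

This shows the cluster IP, external IP and port mapping for the service.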


Now I can access my Node.js app:
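With the service exposed, the app is a plain HTTP request away (substitute the external IP you used in the expose command):

```shell
curl http://<external-ip>/
```

This should return Hello World! from the Node.js server.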


Scale the app

Let’s scale the app to 4 replicas instead of one:
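Scaling is a single command against the deployment, using the hello-node name from earlier:

```shell
kubectl scale deployment hello-node --replicas=4
```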


Verify that we now have 4 pods:


Upgrade the app

I’m not going to show the steps to upgrade the app, but they are exactly as described in Roll out an upgrade to your website.

I performed the steps and was able to see the new site:


Deploying Kubernetes with Rancher

In a previous post I showed how to deploy Rancher. In this post I want to show how to deploy Kubernetes with Rancher and then deploy a simple application on top of Kubernetes. Please refer to the previous post for which OS to use and how to install Docker. You can read up to "Start the Rancher Server" in that post and then come back to this one.

Starting the Rancher Server

You can skip this step if you’ve already started the Rancher server from the previous post. If not, let’s grab the latest version of Rancher and start the container. If you’re copying and pasting, make sure that there are two dashes before restart in the command below. Sometimes when I was copying and pasting the dashes would get converted into a single dash and the command would fail:

sudo docker run -d --restart=always -p 8080:8080 rancher/server

Now you should be able to access the Rancher application by opening a web browser and hitting the IP/URL of the VM where the Rancher container was launched.

We need to create a new environment so let’s:

  1. Highlight Default Environment
  2. Select Manage Environments

Create the kubernetes environment


Select Add Environment


  1. For Container Orchestration select kubernetes
  2. Provide a name
  3. Press Create


To access the Kubernetes environment,

  1. Highlight Environment Default
  2. Select k8s


Let’s go ahead and add the first host by selecting Add Host:


On the next screen I’m going to use the IP address of the VM running my Rancher container to make things simpler by not having to worry about name resolution.


On the next screen:

  1. Leave the host type as custom
  2. Select the clipboard to copy the command
  3. Press close
  4. Paste the command into the CLI of your VM running the Rancher container


Docker should pull down the Rancher agent container:


Kubernetes is now starting:


If you want to see more details or troubleshoot an issue, select Infrastructure > Containers:


Select Kubernetes > System to view all of the Kubernetes services:


Launching a web server on Kubernetes

Now we are going to run a simple nginx server. Let’s first start by creating a new Replication Controller by selecting:

  1. Kubernetes
  2. Replication Controllers
  3. Add RC


Paste in the following:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

To find out more about replication controllers, I’d suggest reading about them here, but I’ll cover a few things:

  • replicas: 2 states that we want two containers running nginx.
  • We apply the label app: nginx. This can be used to select the containers later.
  • image: nginx specifies the name of the Docker image to pull down.
  • We are going to expose port 80 on the container.

It shouldn’t take too long for both of the containers to be running. Notice the IP addresses. These IPs most likely won’t be accessible from your machine so you won’t have a way of accessing the nginx web server.


We can access our nginx web servers by exposing them via a Kubernetes Service.

To create the service select:

  1. Kubernetes
  2. Services
  3. Add Service


Paste in the following and press Create:

kind: Service
apiVersion: v1
metadata:
  name: "nginx-service"
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - ""


The IP is the IP of the VM running the Rancher/Kubernetes services.

If we expand our nginx-service, we can see that it's associated with the two nginx containers. How did it do this? It used the selector app: nginx defined in the service to find all containers with the label app: nginx. This is an important concept in Kubernetes.


Now if I open a web browser and go to the external IP, I'll see that nginx is running:


Getting started with Rancher

A few days ago Rancher Labs released Rancher 1.0, so I thought I'd take it for a test drive. This is the first time I've worked with a product like this, so this post will be really basic, but if this is the way things are going, it's pretty amazing. Take a moment to check out their site and watch the "See Rancher in Action" video. The speaker sounds like a cowboy, so you can imagine me talking like a cowboy for the rest of this post. I'm going to show how to deploy an application named Rocket Chat, which is like Slack. In my next post on Rancher I'll show how to deploy Kubernetes using Rancher and then deploy an application on Kubernetes.


I'm going to mainly be following the Quick Start Guide.

I started with a Ubuntu 12.04.5 LTS VM running on ESXi 5.1. My VM’s name is rancher1a.vmware.local with an IP of

First let’s update the OS:

sudo apt-get update
sudo apt-get upgrade

Install the latest version of Docker by using the following commands or by following Docker's instructions. Check out the docker-install.sh script to see what it's doing before running it.

curl https://get.docker.com/ > docker-install.sh
chmod 700 docker-install.sh
sudo ./docker-install.sh

Adding myself into the docker group:

sudo usermod -aG docker chris

Start the Rancher server

We will run the Rancher server in a container by running:

sudo docker run -d --restart=always -p 8080:8080 rancher/server

Verify that the container is running:
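A quick way to check is from the VM's shell:

```shell
sudo docker ps
```

The rancher/server container should be listed with a status of Up.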


Now I can access Rancher by going to http://rancher1a.vmware.local:8080. You’ll be logged in automatically and will see the screen where you can add your first host:


Go ahead and select Add Host and we see that the VM that is running Rancher has been pre-populated. For this demonstration I’m going to leave things as is and press Save.


On the next screen, I’ll perform the following:

  1. Enter the IP of the VM where Rancher is running.
  2. Select the copy button
  3. Paste the copied text into the terminal running your Rancher container.
  4. Select Close


Let’s verify that the Rancher agent container is running:


Now go to:

  1. Infrastructure
  2. Hosts
  3. View the newly added host


You can click on the hostname and view a bunch of info:


Launching an App

We can view the built-in catalog by going to Catalog > All


Let’s deploy the RocketChat app by selecting View Details:


I’m going to leave everything at the defaults and select Launch:


You should be redirected to Applications > Stacks where you can see the application starting up:


At this point I like to switch over to the terminal that’s running the Rancher container and run the sudo watch docker ps command so that I can see the containers coming online.


It shouldn’t take long for everything to become active:


Now I can access the Rocket Chat instance at http://rancher1a.vmware.local:3000. From here you need to register for a new account:


Fill in some info. The email address doesn’t have to be real:


Acknowledge the warning that pops up and select a username:


You’ll be logged in where you can begin using the application:


The WordPress app is also simple to deploy so you may want to try that as well.








Using pfSense for a VMware home lab: Part 2


This post is a continuation of part 1 where we installed and performed the initial setup of pfSense for use in a VMware home lab. In this post we will finish up by creating VLANs, firewall rules, NAT and verify connectivity.

VLAN Creation

In the first post we configured the LAN interface (em0) with an IP address. Now we will create VLANs 20 & 21, create an additional interface on em0 and assign the VLANs.

Select Interfaces > Assign > VLANs and select the plus sign.


Make sure the parent interface is set to em0, set the VLAN tag to 20, provide a description and save:


The VLAN should look like this:


Select Interfaces > Assign > Interface assignments and set WAN to VLAN 20.



Select the WAN link under Interfaces and perform the following

  1. Enable the interface.
  2. Fill in the description.
  3. Change the IPv4 Configuration Type to Static IPv4.
  4. Verify the IP address.
  5. Uncheck Block private networks.
  6. Select Save.
  7. Select Apply changes.


Interfaces > Assign > Interface assignments should now look like this:


Now we will create VLAN 21.  It’s a little bit different from VLAN 20 because we will be creating a new interface that lives on em0.  You can think of this as a sub interface.  For each new VLAN you want to create, you will create a new sub interface on em0 and assign the VLAN to it.  You can create additional em# devices by adding additional NICs to the VM in vCenter.

Select Interfaces > Assign > VLANs and select the plus sign and create VLAN 21 on em0.

Select Interfaces > Assign > Interface assignments

  1. Change the Available network ports drop-down box to VLAN21.
  2. Press the add button.



Select the OPT1 link under interfaces to configure it.

  1. Enable the interface.
  2. Provide a description.
  3. Change the IPv4 Configuration Type to Static IPv4.
  4. Set the IP of the interface.  This will be the gateway for all of the VMs that connect to this network.
  5. Select Save.
  6. Apply changes


Interface assignments should look like this:


Firewall Rules

We need to establish some firewall rules in order for traffic to be allowed. For this demonstration I’ll just create a rule that allows all traffic, but you can modify the rule set as you see fit. If I’m working on an issue where I suspect a firewall is preventing functionality, I’ll jump into my lab, add a firewall rule to block specific traffic, verify functionality then back out the firewall change and verify again. Let’s work on VLAN20 first:

Go to Firewall > Rules > VLAN20:


Set the action to Pass and the protocol to any:


Save and apply changes. The firewall rules should look like this and all traffic will be allowed:


Do the same for VLAN21.

Testing Connectivity

To test that everything is working correctly I created a VM and placed it on the vlan20-blog portgroup:


Once the VM is up and running we can do the following:

  1. Set a valid IP address on VLAN 20.
  2. Verify the IP
  3. Ping the gateway, which is an interface on our pfSense VM:


You can then spin up another VM on VLAN 21 and verify connectivity between the two VMs. The VMs should be able to communicate within your network regardless of which ESXi host they are on. If they aren’t, one thing you want to make sure is that the switches that your ESXi hosts are on are trunking our VLANs (20-25).

If you need to add the default gateway, run:

ip route add default via

If you need to remove a previous default gateway, run:

ip route delete default via

My ESXi hosts are connected to a Cisco SG-300-10. Here is how I created the VLANs on it.

Go to VLAN Management > Create VLAN  and enter:

  • VLAN ID: 20
  • VLAN Name: vlan20-blog


Go to VLAN Management > Port to VLAN  and:

  • Change the VLAN ID equals to drop-down to 20 and press Go.
  • Set each port that your ESXi hosts are connected to as Tagged.
  • Press Apply.


Do the same thing for VLAN 21.

Static routes and NAT

At this point your VMs should be able to communicate within your network. However, let's say a VM on the VLAN 20 network tries to ping a host on your home LAN. The request may arrive looking like it came from the VM's VLAN 20 address, which may be okay, but the target host needs to know how to send the reply back to the VLAN 20 network. If it doesn't have a route back to that network, the request will fail.

We can either add a static route so that hosts on the LAN know how to reach VMs on the VLAN 20 network, or create NAT rules so that requests coming from the VLAN 20 network look like they are coming from the pfSense device's LAN address. Since that address is on the same network as the target host, the host will be able to reply to it.

On Windows, here is how we would add a static route to the VLAN 20 network via the pfSense device (substitute your VLAN network, its netmask and the pfSense LAN IP for the placeholders):

route add <network> mask <netmask> <pfsense-ip>

This says that if you need to reach anything on that network, talk to the pfSense device and it will take care of it.

Let’s go to Firewall > NAT > Outbound and look at our NAT rules.

This section looks different than what I'm used to because I'm running an older version of pfSense.

We see that the NAT mode is set to automatic, and in the automatic rules we see that there are rules in place for our VLAN networks:


If I ping a VM on my local network from my lab VM and view the traffic with tcpdump, I see:


Here we see that the request is coming from the pfSense device's LAN address, so we know the automatic NAT rules above are working.


If you look around the pfSense interface, you’ll see a lot of features that could be useful in a lab environment. It’s certainly been useful for me throughout the years. While I’m not a networking expert and didn’t intend to thoroughly cover things like firewall rules and NAT, if you have any questions, let me know and I’ll try to help.


Listing all key pairs in OpenStack

I'm pretty new to OpenStack, but I've noticed that pre-Liberty, if you wanted to list all key pairs for a user, you needed to be logged in as that user. If this isn't correct, please let me know. We are working on a project at work where I needed to retrieve key pairs for specific users while acting as an admin user. In this post I'll show how to do just that. All commands are run as an admin user.

In my Kilo lab my nova client is version 2.22.0

nova –version

Displaying help for the keypair-list command results in:

nova help keypair-list
usage: nova keypair-list

Print a list of keypairs for a user

Note that there are no options available for specifying other users, tenants/projects, etc so it only acts on the user who is running the command.

I then found the following bug report: keypair-list should allow you to specify a user or all-users. To test this out I installed a DevStack instance of Liberty. Let's see what version of the nova client is provided:

nova –version

Now for the options:

nova help keypair-list
usage: nova keypair-list [–user <user-id>]

Print a list of keypairs for a user (Supported by API versions '2.0' -
'2.latest') [hint: use '--os-compute-api-version' flag to show help message
for proper version]

Optional arguments:
--user <user-id>  List key-pairs of specified user ID (Admin only).

Notice the new --user argument.

Let's see if we can view the key pairs of the demo user. First we will get the demo user's ID since the nova --user argument specifies that it only accepts an ID:

openstack user list 
| ID                               | Name     |
| 0f170c032ff74a1f9e5548c16bd76dcc | nova     |
| 2848a301af4e4b6faec536102b3d292b | glance   |
| 290e7f84f951426a9c5d63fa67aa506d | admin    |
| 5d1a93152efb4b00af59b3620bfd8cc3 | alt_demo |
| 6fc465ae0e944dc3b08eb661c43ba922 | demo     |
| d961ddcad066415f96a44fa8c7349166 | cinder   |
nova keypair-list --user 6fc465ae0e944dc3b08eb661c43ba922
| Name | Type | Fingerprint                                     |
| demo | ssh  | 8d:e2:65:ec:8c:91:52:bb:40:22:55:2e:9b:1f:f0:45 |

I’d probably do it differently, but for something quick, if you want to list all users/key pairs, you could do something like this.

for user in $(openstack user list -f value -c ID); do nova keypair-list --user ${user} | grep -P "\|\s(([a-f0-9]{2}:)?){15}[a-f0-9]{2}\s\|$"; done

| admin | ssh  | 2e:93:fd:9b:45:30:e1:47:fe:93:4e:4a:21:74:40:d0 |
| demo | ssh  | 8d:e2:65:ec:8c:91:52:bb:40:22:55:2e:9b:1f:f0:45 |

Using pfSense for a VMware home lab: Part 1


In this post I’m going to show how to use a pfSense virtual router in your VMware home lab.  If you’re using a single flat network in your home lab, this post will introduce additional complexity, but I believe that this is a good thing in this case.  It’s not often that you’re going to find an environment that only has a flat network using only the native VLAN.  By using a virtual router such as pfSense, you’ll be able to mimic an environment that you’re likely to see in the real world.  I’ve been using this set up for about five years now and while I occasionally run into issues because of the complexity, I can identify the issue and learn how to resolve it.  Later I almost always see the same behavior in my job and am able to recognize it.

Some of the features of pfSense that I use are:

  • VLANs
  • Firewalling
  • Spin up networks quickly
  • Virtual IPs
  • Load balancing
  • Source and destination NAT
  • DHCP

Here is an overview of pfSense from their website:

The pfSense project is a free network firewall distribution, based on the FreeBSD operating system with a custom kernel and including third party free software packages for additional functionality. pfSense software, with the help of the package system, is able to provide the same functionality or more of common commercial firewalls, without any of the artificial limitations. It has successfully replaced every big name commercial firewall you can imagine in numerous installations around the world, including Check Point, Cisco PIX, Cisco ASA, Juniper, Sonicwall, Netgear, Watchguard, Astaro, and more.

pfSense software includes a web interface for the configuration of all included components. There is no need for any UNIX knowledge, no need to use the command line for anything, and no need to ever manually edit any rule sets. Users familiar with commercial firewalls catch on to the web interface quickly, though there can be a learning curve for users not familiar with commercial-grade firewalls.

Here is an image of what I’ll show how to accomplish.


Download pfSense

Download pfSense and select the latest stable version (version 2.2.6 at the time of this post). Choose the appropriate architecture and select the Live CD Installer:


Portgroup creation

Before we install pfSense we will create three portgroups in vCenter.  You can of course name the portgroups whatever you’d like.

  1. blog-trunk
    • VLAN type: VLAN trunking
    • VLAN trunk range: 20 – 25 (or 20 – 21 is good enough for this exercise)
  2. vlan20-blog
    • VLAN type: VLAN
    • VLAN ID: 20
  3. vlan21-blog
    • VLAN type: VLAN
    • VLAN ID: 21


Create a VM in vCenter of type Other Linux (64-bit). I gave mine 1 vCPU and 384 MB of memory, but you can use whatever you think is appropriate. Give the VM two NICs. Connect the first NIC to the blog-trunk portgroup and the second to the Virtual Machines portgroup, which in my case is my LAN. Use the diagram above for reference.  For reasons we will see later, set the NICs to not connect at power on. Lastly, configure the CD-ROM to use the pfSense ISO:


Boot the VM and open a console. Once you're prompted, choose I(nstaller) to install pfSense to the hard drive.


Select Accept these settings > Quick/Easy Install > Standard Kernel > Reboot

Here we can see why we booted the VM with the NICs disconnected. By default the LAN interface is set to an IP that is the same as my LAN's default gateway.


Select 2) Set interface(s) IP address. For this demonstration I set the following on the LAN (em1) interface:

  • IP:
  • Subnet:
  • Gateway:

Choose to not enable the DHCP server on the LAN interface.


We won't be configuring the WAN (em0) interface at this time. If prompted, choose not to reconfigure the web configurator port to use HTTP unless you want to.

You can now reconnect the VM’s portgroups.

Configuring pfSense

My pfSense's web configuration page is now available. The default username and password are pfsense / pfsense.


From here on out we will be setting up basic configuration values.  Please substitute values as necessary.



Configure the WAN interface

The WAN will actually be the internal side of the router, where your VMs will reside. In my lab I actually have this reversed, but this is easier for demonstration purposes. Feel free to swap your interfaces around to whatever makes the most sense to you.

Set the interface type to Static and set the IP. Again, please consult the diagram above. This will be the first of possibly many "internal" interfaces/IPs assigned to the pfSense router, and these networks are where you will place your VMs.


The LAN interface will already be set but you can change it here if you’d like:


The last option is to change the default admin password:


Once you’re finished you’ll be sent to the pfSense dashboard:


In part two I’ll show how to configure VLANs, firewalling and get some test VMs communicating on the network.

List all vCenter VMs with Go (govmomi)

A few months ago I saw that VMware's govmomi was updated, and since I've recently started learning the Go programming language, I thought it would be a good exercise to do something simple like listing all virtual machines in a vCenter server.

I’m going to use VMware’s Photon 1.0 TP2 as a development environment since the full install comes with tools such as git and go.  I built a Photon VM using these instructions and performed a full install.  To keep things simple for this post I enabled ssh for root but you’ll probably want to create a user for yourself.

Installing govmomi

Let’s make a directory where we will install govmomi.

mkdir -p $GOPATH/src/github.com/vmware

Change into this directory and clone the govmomi repo:

cd $GOPATH/src/github.com/vmware
git clone https://github.com/vmware/govmomi.git

I’m not sure if I’m doing something wrong but I needed to install these otherwise I received errors when running the examples:

go get golang.org/x/tools/cmd/vet
go get golang.org/x/tools/cmd/goimports
go get golang.org/x/net/context

I found these in a travis-ci job, but I no longer have the link.

Running an example

Change into the directory that has the only example that is currently in the repo:

cd govmomi/examples/datastores

There is a single file in this directory named main.go and it will list all the datastores in your vCenter.  Actually if I remember correctly, it will only process the first datacenter in your vCenter.  The program needs several parameters to run and they can be retrieved from shell environment variables or by passing them into the program.   If you want to use shell environment variables, you’ll need to define them.  Here is what mine look like:

export GOVMOMI_URL='https://vc5c.vmware.local/sdk'
export GOVMOMI_USERNAME='vmware\api'
export GOVMOMI_PASSWORD='vmware123'
export GOVMOMI_INSECURE='true'

Now I can run the program as such:

go run main.go

My output looks like this:

Name:                   Type:  Capacity:  Free:
nfs-ds412-momentus1     NFS    683.1GB    148.0GB 
nfs-ds412-5400          NFS    453.9GB    293.6GB 
nfs-ds412-hybrid0       NFS      1.8TB      1.0TB 
nfs-ds412-hybrid1       NFS      1.8TB   1002.9GB 
iscsi-ds412-momentus1-0 VMFS   749.8GB    132.3GB 
local-esx-5-1           VMFS   144.0GB     61.8GB 
diskstation-nfs         NFS    908.0GB    308.7GB
nfs-async               NFS    908.0GB    308.7GB

Alternatively, I could have passed in the parameters:

go run main.go --url https://vmware\\api:vmware123@vc5c.vmware.local/sdk --insecure true

Listing all VMs

Since I’m just learning Go and don’t know much about govmomi, I’m going to modify the datastore program instead of writing one from scratch.  I copied the main.go file to list-vms.go. Most of the changes are pretty straightforward.  Instead of searching for DataStores you need to search for VirtualMachines, etc.  Here are few of the changes that I made:

Line 148:

// Find datastores in datacenter
dss, err := f.DatastoreList(ctx, "*")


// Find virtual machines in datacenter
vms, err := f.VirtualMachineList(ctx, "*")

Line 162:

// Retrieve summary property for all datastores
var dst []mo.Datastore
err = pc.Retrieve(ctx, refs, []string{"summary"}, &dst)
if err != nil {


// Retrieve name property for all vms
var vmt []mo.VirtualMachine
err = pc.Retrieve(ctx, refs, []string{"name"}, &vmt)
if err != nil {

Line 171:

fmt.Fprintf(tw, "Name:\tType:\tCapacity:\tFree:\n")
for _, ds := range dst {
    fmt.Fprintf(tw, "%s\t", ds.Summary.Name)
    fmt.Fprintf(tw, "%s\t", ds.Summary.Type)
    fmt.Fprintf(tw, "%s\t", units.ByteSize(ds.Summary.Capacity))
    fmt.Fprintf(tw, "%s\t", units.ByteSize(ds.Summary.FreeSpace))
    fmt.Fprintf(tw, "\n")
}


fmt.Println("Virtual machines found:", len(vmt))
for _, vm := range vmt {
    fmt.Fprintf(tw, "%s\n", vm.Name)
}

I can retrieve a listing of all VMs as such:

go run list-vms.go

Here is the truncated output from my lab:

Virtual machines found: 84
Linux Minimal Template2

Looks good but they aren’t sorted.  Let’s fix that.

Sorting the VMs

I really just wanted to do this to see how to sort in Go.  I used the example here as a reference.

First we need to add “sort” to the import statement.

Next we need to implement the sort.Interface for ByName:

type ByName []mo.VirtualMachine

func (n ByName) Len() int           { return len(n) }
func (n ByName) Swap(i, j int)      { n[i], n[j] = n[j], n[i] }
func (n ByName) Less(i, j int) bool { return n[i].Name < n[j].Name }

Now before we loop through and print out the VM names we sort them first:


Now when I run the program the truncated results look like this:

Virtual machines found: 84
Linux Minimal Template2

You can view the script on github.