PowerCLI script to monitor recent vSphere Tasks

I haven’t had much time to look at vSphere 6.5 yet, but apparently the C# client is no more. I used to always have the C# client up as a diagnostic tool so that I could watch events trigger in real time, as opposed to having to refresh the vSphere web client. Normally I’d open the C# client in full screen and then arrange my browsers so that they would cover the C# client but leave room for the recent tasks pane. This would allow me to work in another product like vRealize Automation and see events appear in vCenter immediately.

If I can’t use the C# client to do this going forward, I thought I’d create a little PowerCLI script to provide me with the same information. I’m not sure how well it will work out yet, but you can find it on GitHub.

The result looks like this:


Simple PowerShell Script to Monitor vRealize Automation

Lately I’ve been learning vRealize Automation (vRA), and it has involved bringing the environment up and down frequently, as well as breaking and fixing various services to see how the system would respond. I got tired of going into the VAMI (port 5480 of the vRA appliance) and clicking the Refresh button to get the status of all of the various services, so I created a little PowerShell script that will display the status of the vRA services.

The simplest way to invoke the script is with:

get-vRAHealth vra71.vmware.local

Where vra71.vmware.local is my load balancer VIP for vRA. By default the script will continuously refresh every 5 seconds.

You can disable the looping like so:

get-vRAHealth vra71.vmware.local -loop $false

And control the refresh interval:

get-vRAHealth vra71.vmware.local -refresh 10

Here is the output:


The script can be found on GitHub and below:

function get-vRAHealth() {
  <#
    .SYNOPSIS
      Displays health status of vRA components
    .DESCRIPTION
      Displays health status of vRA components
    .EXAMPLE
      get-vRAHealth vra71.vmware.local
    .EXAMPLE
      get-vRAHealth https://vra71.vmware.local -loop $true
    .EXAMPLE
      get-vRAHealth https://vra71.vmware.local -loop $true -refresh 2
  #>
  param(
    [Parameter(Mandatory=$true)][string]$url,
    [bool]$loop = $true,
    [int]$refresh = 5
  )

  $uri = [System.Uri] $url

  # A bare hostname parses with an empty Host; rebuild it as an https URL
  if ($uri.Host -eq $null -and $uri.OriginalString) {
    $uri = [System.Uri] "https://$($uri.OriginalString)"
  }

  # vRA only answers on https
  if ($uri.Scheme -eq 'http') {
    $uri = [System.Uri] "https://$($uri.Host)"
  }

  # Append the component registry status endpoint if it isn't there already
  if ($uri.LocalPath -ne '/component-registry/services/status/current') {
    $uri = [System.Uri] "$($uri.AbsoluteUri)component-registry/services/status/current"
  }

  while ($true) {
    Write-Host "Checking $($uri.AbsoluteUri)"

    try {
      $content = Invoke-WebRequest $uri.AbsoluteUri

      if ($content.StatusCode -eq 200) {
        $json = $content.Content | ConvertFrom-Json
        $json.content | select serviceName,
          @{N='Registered';E={ $_.serviceStatus.serviceInitializationStatus }},
          @{N='Available';E={ if (!$_.notAvailable) {'True'} else {'False'} }},
          lastUpdated,
          statusEndPointUrl | ft -auto
        if ($loop -eq $false) { break }
      } else {
        Write-Host "Unable to access vRA Component Registry. Error: $($content.StatusCode)"
      }
    } catch {
      Write-Host "Unable to access vRA Component Registry. Error: $($_.Exception.Message)"
    }

    sleep $refresh
  }
}

HAProxy as a Load Balancer for vRealize Automation

In this post I’m going to show how to use HAProxy as a load balancer for vRealize Automation. I used Ubuntu 14.04 LTS for the OS.

Install HAProxy

 sudo apt-get install haproxy

Add sub interfaces to VM


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

auto eth0:0
iface eth0:0 inet static

auto eth0:1
iface eth0:1 inet static

auto eth0:2
iface eth0:2 inet static

HAProxy Config

Much of this config is the default. I added a section so you can enable the LB stats page. The bottom has my edits for vRA: there are three sections, for the appliance, IaaS Web, and IaaS Manager. I’m not an HAProxy expert, so there are probably things that could be improved. I tried to add all of the recommendations (persistence, load balancing policy, timeouts, etc.) described in the vRA LB Guide.


global
 log /dev/log local0
 log /dev/log local1 notice
 chroot /var/lib/haproxy
 stats socket /run/haproxy/admin.sock mode 660 level admin
 stats timeout 30s
 user haproxy
 group haproxy
 maxconn 2048
 ssl-server-verify none

 # Default SSL material locations
 ca-base /etc/ssl/certs
 crt-base /etc/ssl/private
 tune.ssl.default-dh-param 2048

 # Default ciphers to use on SSL-enabled listening sockets.
 # For more information, see ciphers(1SSL). This list is from:
 # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
 ssl-default-bind-options no-sslv3

defaults
 log global
 mode http
 option forwardfor
 option httplog
 option dontlognull
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 errorfile 400 /etc/haproxy/errors/400.http
 errorfile 403 /etc/haproxy/errors/403.http
 errorfile 408 /etc/haproxy/errors/408.http
 errorfile 500 /etc/haproxy/errors/500.http
 errorfile 502 /etc/haproxy/errors/502.http
 errorfile 503 /etc/haproxy/errors/503.http
 errorfile 504 /etc/haproxy/errors/504.http

listen stats
 mode http
 log global
 stats enable
 stats uri /stats
 stats realm Haproxy\ Statistics
 stats auth admin:VMware1!

# vRA 7.1 Distributed

# vRA VA

frontend vra71-va
 bind ssl crt /etc/ssl/private/wildcard.pem
 mode http
 default_backend vra71-va-backend

backend vra71-va-backend
 mode http
 balance roundrobin
 stick on src table vra71-va-backend
 stick-table type ip size 200k expire 30m
 default-server inter 3s
 timeout check 10s
 option httpchk GET /vcac/services/api/health
 http-check expect status 204

 server vra71c check ssl verify none
 server vra71d check ssl verify none

# vRA IaaS Web

frontend vra71-iaas-web
 bind ssl crt /etc/ssl/private/wildcard.pem
 mode http
 default_backend vra71-iaas-web-backend

backend vra71-iaas-web-backend
 mode http
 balance roundrobin
 stick on src table vra71-iaas-web-backend
 stick-table type ip size 200k expire 30m
 default-server inter 3s
 timeout check 10s
 option httpchk GET /wapi/api/status/web
 http-check expect string REGISTERED

 server vra71c-web check ssl verify none
 server vra71d-web check ssl verify none

# vRA IaaS Mgr

frontend vra71-iaas-mgr-https
 bind ssl crt /etc/ssl/private/wildcard.pem
 mode http
 default_backend vra71-iaas-mgr-backend

backend vra71-iaas-mgr-backend
 mode http
 balance roundrobin
 stick on src table vra71-iaas-mgr-backend
 stick-table type ip size 200k expire 30m
 default-server inter 3s
 timeout check 10s
 option httpchk GET /VMPSProvision
 http-check expect rstring ProvisionService

 server vra71c-mgr check ssl verify none
 server vra71d-mgr check ssl verify none
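Before pointing HAProxy at the backends, it can help to reason about the three health checks directly. This is a small Python sketch of the matching rules the three `http-check expect` lines above apply; it illustrates the logic only, not how HAProxy itself is implemented:

```python
import re

# Mirror the three "http-check expect" rules from the backends above.

def va_healthy(status_code):
    # vRA appliance: GET /vcac/services/api/health must return HTTP 204
    return status_code == 204

def iaas_web_healthy(body):
    # IaaS Web: GET /wapi/api/status/web must contain the string REGISTERED
    return "REGISTERED" in body

def iaas_mgr_healthy(body):
    # IaaS Manager: GET /VMPSProvision must match the regex ProvisionService
    return re.search(r"ProvisionService", body) is not None

print(va_healthy(204))                         # True
print(iaas_web_healthy("state: REGISTERED"))   # True
print(iaas_mgr_healthy("<ProvisionService/>")) # True
```

You can exercise the same endpoints by hand with curl against each backend server before trusting HAProxy's view of them.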


This is the wildcard cert I’m using for vRA. You just need to include the cert and private key in the proper order.
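If you want to sanity-check that a combined PEM file follows that ordering before handing it to HAProxy, here is a small sketch. The inline `bundle` string is a stand-in for reading your own file (e.g. a hypothetical wildcard.pem):

```python
# Sketch: check that a combined PEM bundle lists the certificate before
# the private key, per the ordering note above.

def pem_order_ok(pem_text):
    cert_pos = pem_text.find("BEGIN CERTIFICATE")
    key_pos = pem_text.find("PRIVATE KEY")  # matches RSA/EC/PKCS#8 headers
    return cert_pos != -1 and key_pos != -1 and cert_pos < key_pos

bundle = (
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
    "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
)
print(pem_order_ok(bundle))  # True
```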



Stats Page

Here is an example of what the HAProxy stats page looks like:




NetApp ONTAP 9 Simulator Guide by Neil Anderson

I want to take a moment to mention the great work Neil Anderson did in his NetApp ONTAP 9 Simulator Guide. This incredibly detailed (117-page) guide not only shows you how to set up a NetApp ONTAP 9 simulator for your lab, but you’ll also learn how to install/configure:

Storage is critical to any infrastructure, and while you’re learning it’s easy to quickly install a storage solution that’s not representative of what you’ll see in the real world. I try to make my lab environment match what I see in real production environments as much as possible. That means using segregated networks with VLANs, firewalls, load balancers, and CA-signed certificates, and with the help of this guide you’ll be able to implement a real-world storage solution.

Kubernetes 1.3 HA Walkthrough – NGINX

Table of Contents

You can find all of the config files on the GitHub page.


Let’s test out the environment by installing NGINX.

Create the NGINX deployment

We are going to create an NGINX deployment with an NGINX replica on each of our Kubernetes worker nodes. Each replica will be a Kubernetes pod:

kubectl run nginx --image=nginx --port=80 --replicas=3

If successful, you will see: deployment "nginx" created

Let’s get a listing of each of the pods:

kubectl get pods -o wide

NAME                   READY STATUS  RESTARTS AGE IP          NODE
nginx-2032906785-44ndh 1/1   Running 2        16d kube-worker2
nginx-2032906785-mvpft 1/1   Running 2        16d kube-worker1
nginx-2032906785-tyfmu 1/1   Running 2        16d kube-worker0

Here we can see that there is a pod on each of the worker nodes. Notice the IP address: each of the pods (not the Docker containers) will get an IP address from a network range handed out by etcd. Each worker node is given a unique network range by etcd.
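To make the idea of per-node ranges concrete, here is a small sketch. The 10.200.0.0/16 cluster CIDR and the /24 per-node slice are assumed example values, not taken from this lab:

```python
import ipaddress

# Sketch of per-node pod networking: carve one subnet out of the cluster
# CIDR for each worker, so every pod IP falls in its node's slice.
# 10.200.0.0/16 and the /24 slice size are assumed example values.
cluster_cidr = ipaddress.ip_network("10.200.0.0/16")
workers = ["kube-worker0", "kube-worker1", "kube-worker2"]

for worker, subnet in zip(workers, cluster_cidr.subnets(new_prefix=24)):
    print(worker, subnet)  # e.g. kube-worker0 10.200.0.0/24
```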

Create the NGINX service

Now we need to create a Kubernetes Service to expose our NGINX deployment so we can reach it. I’m going to make the service of type NodePort, which will make the service accessible on any node using the same port.

kubectl expose deployment nginx --port=80 --type=NodePort

View the service’s details

kubectl describe services nginx

Name:             nginx
Namespace:        default
Labels:           run=nginx
Selector:         run=nginx
Type:             NodePort
Port:              80/TCP
NodePort:          31153/TCP
Session Affinity: None

Notice the NodePort and Endpoints. The Endpoints are our Kubernetes pods. If we launch a web browser and go to one of our worker nodes and use port 31153, we should see NGINX:


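Because the service is of type NodePort, that same high port answers on every worker. A small sketch of the URLs you could try (the node names and port 31153 come from the output above):

```python
# The same NodePort is opened on every node; any of these URLs should
# reach an NGINX pod. Node names and 31153 are from the output above.
node_port = 31153
workers = ["kube-worker0", "kube-worker1", "kube-worker2"]

urls = [f"http://{node}:{node_port}/" for node in workers]
for url in urls:
    print(url)
```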
Lastly, let’s check out one of the pod’s details:

kubectl describe pod nginx-2032906785-44ndh

Name: nginx-2032906785-44ndh
Namespace: default
Node: kube-worker2/
Start Time: Thu, 22 Sep 2016 20:20:45 -0600
Labels: pod-template-hash=2032906785
Status: Running
Controllers: ReplicaSet/nginx-2032906785
 Container ID: docker://19ffedded8de834da2e072f012c5081655b7149172d2c00d31944c7fe2499766
 Image: nginx
 Image ID: docker://sha256:ba6bed934df2e644fdd34e9d324c80f3c615544ee9a93e4ce3cfddfcf84bdbc2
 Port: 80/TCP
 State: Running
 Started: Sat, 08 Oct 2016 14:59:03 -0600
 Last State: Terminated
 Reason: Completed
 Exit Code: 0
 Started: Sun, 02 Oct 2016 14:38:18 -0600
 Finished: Mon, 03 Oct 2016 08:27:44 -0600
 Ready: True
 Restart Count: 2
 Environment Variables: <none>
 Type Status
 Initialized True
 Ready True
 PodScheduled True
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-jeqk5
QoS Tier: BestEffort
No events.

I’m not going to go into all of the details here, but you can read the Kubernetes Documentation.

Kubernetes 1.3 HA Walkthrough – SkyDNS

Table of Contents

You can find all of the config files on the GitHub page.


Kubernetes uses a DNS server based off of SkyDNS to provide name resolution for services inside the cluster. You can read more about it here.
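To illustrate what that buys you: every Service ends up with a predictable DNS name of the form `<service>.<namespace>.svc.<cluster-domain>`. A small sketch (cluster.local is the default cluster domain; substitute yours if it differs):

```python
# Sketch: compose the cluster DNS name Kubernetes gives a Service.
# "cluster.local" is the default cluster domain assumed here.

def service_dns_name(service, namespace="default", cluster_domain="cluster.local"):
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_dns_name("kube-dns", "kube-system"))  # kube-dns.kube-system.svc.cluster.local
print(service_dns_name("nginx"))                    # nginx.default.svc.cluster.local
```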

You probably want to perform the actions below on the same machine where you installed kubectl.

Create the SkyDNS Kubernetes service

Download the service definition file

curl -O https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/skydns-svc.yaml

Edit skydns-svc.yaml and change clusterIP to an address in your etcd IP pool. I believe this IP needs to be in your certificate as a SAN entry. Before I did this, the SkyDNS containers would fail and the Kubernetes controller node reported certificate errors from the skydns containers.


apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Create the service

kubectl create -f skydns-svc.yaml

Which should result in:

service "kube-dns" created

Create the skydns deployment

kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/kubedns.yaml

deployment "kube-dns-v19" created

kubectl --namespace=kube-system get pods

NAME                           READY     STATUS    RESTARTS   AGE
kube-dns-v19-965658604-p2js8   3/3       Running   0          22h
kube-dns-v19-965658604-ru5ac   3/3       Running   0          22h










Kubernetes 1.3 HA Walkthrough – kubectl

Table of Contents

You can find all of the config files on the GitHub page.

Install kubectl

kubectl is the program that we will use to interact with the Kubernetes environment. In my environment I installed it on my Windows 10 desktop running bash for Windows.

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin

Set the active cluster

Note that I’m setting the server to my HAProxy load balancer. This will most likely be different in your environment.

kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=/var/lib/kubernetes/ca.pem \
--embed-certs=true \

Set the credentials

kubectl config set-credentials admin --token 'VMware1!'

Set the default context

kubectl config set-context default-context \
--cluster=kubernetes-the-hard-way \
kubectl config use-context default-context

Get the component status and verify everything is okay

kubectl get componentstatuses

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}