PowerCLI script to monitor recent vSphere Tasks

I haven’t had much time to look at vSphere 6.5 yet, but apparently the C# client is no more. I used to always have the C# client up as a diagnostic tool so that I could watch events trigger in real time instead of having to refresh the vSphere web client. Normally I’d open the C# client in full screen and then arrange my browsers so that they covered it but still left the recent tasks pane visible. This let me work in another product like vRealize Automation and see events appear in vCenter immediately.

If I can’t use the C# client to do this going forward, I thought I’d create a little PowerCLI script to provide me with the same information. I’m not sure how well it will work out yet, but you can find it on GitHub.
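
Here is a minimal sketch of the idea, assuming you’ve already run Connect-VIServer: it just polls Get-Task every few seconds and prints the most recent entries. The property list and refresh interval are my assumptions, not necessarily how the GitHub version is written.

# Minimal polling loop; assumes an existing Connect-VIServer session
while ($true) {
  Clear-Host
  Get-Task |
    Sort-Object StartTime -Descending |
    Select-Object -First 15 Name, State, PercentComplete, StartTime, FinishTime |
    Format-Table -AutoSize
  Start-Sleep -Seconds 5
}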

The result looks like this:

2016-11-29_20-03-58.png


Simple PowerShell Script to Monitor vRealize Automation

Lately I’ve been learning vRealize Automation (vRA), which has involved bringing the environment up and down frequently as well as breaking and fixing various services to see how the system would respond. I got tired of going into the VAMI (port 5480 of the vRA appliance) and clicking the Refresh button to get the status of all of the various services, so I created a little PowerShell script that will display the status of the vRA services.

The simplest way to invoke the script is with:

get-vRAHealth vra71.vmware.local

Where vra71.vmware.local is my load balancer VIP for vRA. By default the script will continuously refresh every 5 seconds.

You can disable the looping like so:

get-vRAHealth vra71.vmware.local -loop $false

And control the refresh interval:

get-vRAHealth vra71.vmware.local -refresh 10

Here is the output:

2016-11-07_22-10-21.png

The script can be found on GitHub and below:

function get-vRAHealth() {
  <#
  .SYNOPSIS
    Displays health status of vRA components
  .DESCRIPTION
    Displays health status of vRA components
  .EXAMPLE
    get-vRAHealth vra71.vmware.local
  .EXAMPLE
    get-vRAHealth https://vra71.vmware.local -loop $true
  .EXAMPLE
    get-vRAHealth https://vra71.vmware.local -loop $true -refresh 2
  #>

  param(
    [Parameter(Mandatory=$true,Position=0)]
    [string]$url,

    [Parameter(Mandatory=$false,Position=1)]
    [bool]$loop=$true,

    [Parameter(Mandatory=$false,Position=2)]
    [Int32]$refresh=5
  ) 

  $uri = [System.Uri] $url

  if ($uri.Host -eq $null -and $uri.OriginalString) {
    $uri = [System.Uri] "https://$($uri.OriginalString)"
  }

  if ($uri.Scheme -eq 'http') {
    $uri = [System.Uri] "https://$($uri.Host)"
  }

  if ($uri.LocalPath -ne '/component-registry/services/status/current') {
    $uri = [System.Uri] "$($uri.AbsoluteUri)component-registry/services/status/current"
  }

  while ($true) {
    clear
    Write-Host "Checking $($uri.AbsoluteUri)"

    try {
      $content = Invoke-WebRequest $uri.AbsoluteUri

      if ($content.StatusCode -eq 200) {
        $json = $content.Content | ConvertFrom-Json
        $json.content | select serviceName, `
          @{N='Registered';E={ $_.serviceStatus.serviceInitializationStatus }}, `
          @{N='Available';E={ if (!$_.notAvailable) {'True'} else {'False'} }}, `
          lastUpdated, `
          statusEndPointUrl | ft -auto
        if ($loop -eq $false) { break }
      } else {
          Write-Host "Unable to access vRA Component Registry. Error: $content.StatusCode"
      }
    } catch {
      Write-Host "Unable to access vRA Component Registry. Error: $($_.Exception.Message)"
    }
    sleep $refresh
  }
}

HAProxy as a Load Balancer for vRealize Automation

In this post I’m going to show how to use HAProxy as a load balancer for vRealize Automation. I used Ubuntu 14.04 LTS for the OS.

Install HAProxy

 sudo apt-get install haproxy

Add sub-interfaces to the VM

/etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

auto eth0:0
iface eth0:0 inet static
address 192.168.3.7
netmask 255.255.255.0

auto eth0:1
iface eth0:1 inet static
address 192.168.3.10
netmask 255.255.255.0

auto eth0:2
iface eth0:2 inet static
address 192.168.3.27
netmask 255.255.255.0
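
After saving the file, the sub-interfaces can be brought up without a reboot. Something like this should work on Ubuntu 14.04 with ifupdown (adjust the interface names to your environment):

# Bring up each alias interface defined above
sudo ifup eth0:0
sudo ifup eth0:1
sudo ifup eth0:2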

HAProxy Config

Much of this config is the default. I added a section so you can enable the LB stats page, and the bottom has my edits for vRA. There are three sections: the appliance, IaaS Web, and IaaS Manager. I’m not an HAProxy expert, so there are probably things that could be improved. I tried to add all of the recommendations (persistence, load balancing policy, timeouts, etc.) described in the vRA LB Guide.

/etc/haproxy/haproxy.cfg


global
 log /dev/log local0
 log /dev/log local1 notice
 chroot /var/lib/haproxy
 stats socket /run/haproxy/admin.sock mode 660 level admin
 stats timeout 30s
 user haproxy
 group haproxy
 daemon
 debug

 maxconn 2048
 ssl-server-verify none

# Default SSL material locations
 ca-base /etc/ssl/certs
 crt-base /etc/ssl/private
 tune.ssl.default-dh-param 2048

# Default ciphers to use on SSL-enabled listening sockets.
 # For more information, see ciphers(1SSL). This list is from:
 # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
 ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
 ssl-default-bind-options no-sslv3

defaults
 log global
 mode http
 option forwardfor
 option httplog
 option dontlognull
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 errorfile 400 /etc/haproxy/errors/400.http
 errorfile 403 /etc/haproxy/errors/403.http
 errorfile 408 /etc/haproxy/errors/408.http
 errorfile 500 /etc/haproxy/errors/500.http
 errorfile 502 /etc/haproxy/errors/502.http
 errorfile 503 /etc/haproxy/errors/503.http
 errorfile 504 /etc/haproxy/errors/504.http

listen stats
 bind 192.168.3.201:80
 mode http
 log global
 stats enable
 stats uri /stats
 stats realm Haproxy\ Statistics
 stats auth admin:VMware1!

# vRA 7.1 Distributed

# vRA VA

frontend vra71-va
 bind 192.168.3.7:443 ssl crt /etc/ssl/private/wildcard.pem
 mode http
 default_backend vra71-va-backend

backend vra71-va-backend
 mode http
 balance roundrobin
 stick on src table vra71-va-backend
 stick-table type ip size 200k expire 30m
 default-server inter 3s
 timeout check 10s
 option httpchk GET /vcac/services/api/health
 http-check expect status 204

 server vra71c 192.168.3.11:443 check ssl verify none
 server vra71d 192.168.3.12:443 check ssl verify none

# vRA IaaS Web

frontend vra71-iaas-web
 bind 192.168.3.10:443 ssl crt /etc/ssl/private/wildcard.pem
 mode http
 default_backend vra71-iaas-web-backend

backend vra71-iaas-web-backend
 mode http
 balance roundrobin
 stick on src table vra71-iaas-web-backend
 stick-table type ip size 200k expire 30m
 default-server inter 3s
 timeout check 10s
 option httpchk GET /wapi/api/status/web
 http-check expect string REGISTERED

 server vra71c-web 192.168.3.13:443 check ssl verify none
 server vra71d-web 192.168.3.14:443 check ssl verify none

# vRA IaaS Mgr

frontend vra71-iaas-mgr-https
 bind 192.168.3.27:443 ssl crt /etc/ssl/private/wildcard.pem
 mode http
 default_backend vra71-iaas-mgr-backend

backend vra71-iaas-mgr-backend
 mode http
 balance roundrobin
 stick on src table vra71-iaas-mgr-backend
 stick-table type ip size 200k expire 30m
 default-server inter 3s
 timeout check 10s
 option httpchk GET /VMPSProvision
 http-check expect rstring ProvisionService

 server vra71c-mgr 192.168.3.25:443 check ssl verify none
 server vra71d-mgr 192.168.3.26:443 check ssl verify none
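
With the config in place, it’s worth validating it and then restarting HAProxy. On Ubuntu 14.04 the commands below should do it (paths and service name are the defaults assumed here):

# Check the config for syntax errors, then restart the service
sudo haproxy -f /etc/haproxy/haproxy.cfg -c
sudo service haproxy restart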

Certificate

This is the wildcard cert I’m using for vRA. You just need to include the cert and private key in the proper order.
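
Building the PEM is just a matter of concatenating the files. Here is a sketch, assuming your certificate and key are named wildcard.crt and wildcard.key (the filenames are my assumption):

# Certificate first, then the private key, matching the layout shown below
cat wildcard.crt wildcard.key | sudo tee /etc/ssl/private/wildcard.pem > /dev/null
sudo chmod 600 /etc/ssl/private/wildcard.pem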

/etc/ssl/private/wildcard.pem 

-----BEGIN CERTIFICATE-----
MIIF7zCCA9egAwIBAgICEAkwDQYJKoZIhvcNAQELBQAwgYsxCzAJBgNVBAYTAlVT
MREwDwYDVQQIDAhDb2xvcmFkbzEMMAoGA1UECgwDTGFiMQwwCgYDVQQLDANMYWIx
...
w5HvHhi/K6f1qeeBr+xKxTEvz3gfQvxEgSxMmMRbffqGM4UbMHkDuJq4H4yrow48
XavIE+zwl1EiDvzEcz5ThbAWSL5fRu6SB0eeYldr4uEGJ/8=
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAwekJRhfC3NHja9waE5W0lxA3HebfThF9nMbUpoYUK+TvFKz7
Mkl9mUp/RS/YDYsVnQ3cUNx83bITDmc3EbIVYzF8rMv1BjQCM4ewrhhbQuBnivoI
...
7XcWYfeZuFz2GJ+3+Wt6EzEaV3DmoU0nuULRkoOSFi7FXCxsFLPVzzuZZgRWXFiN
q6p+3O9rYgelJ0P4a5mtPlWdJJZ2bAe9A0tB/px+xdFtuEuzyed0gbA=
-----END RSA PRIVATE KEY-----

Stats Page

Here is an example of what the HAProxy stats page looks like:

2016-10-10_21-09-59.png


NetApp ONTAP 9 Simulator Guide by Neil Anderson

I want to take a moment to mention the great work Neil Anderson did in his NetApp ONTAP 9 Simulator Guide. This incredibly detailed 117-page guide not only shows you how to set up a NetApp ONTAP 9 simulator for your lab, but also walks you through installing and configuring:

Storage is critical to any infrastructure, and while you’re learning it’s easy to quickly stand up a storage solution that’s not representative of what you’ll see in the real world. I try to make my lab environment match real production environments as much as possible. That means using segregated networks with VLANs, firewalls, load balancers, and CA-signed certificates, and with the help of this guide you’ll be able to implement a real-world storage solution.


Kubernetes 1.3 HA Walkthrough – NGINX

Table of Contents

You can find all of the config files on the GitHub page.

Overview

Let’s test out the environment by installing NGINX.

Create the NGINX deployment

We are going to create an NGINX deployment with an NGINX replica on each of our Kubernetes worker nodes. Each replica will be a Kubernetes pod:

kubectl run nginx --image=nginx --port=80 --replicas=3

If successful, you will see: deployment "nginx" created

Let’s get a listing of each of the pods:

kubectl get pods -o wide

NAME                   READY STATUS  RESTARTS AGE IP          NODE
nginx-2032906785-44ndh 1/1   Running 2        16d 172.16.43.3 kube-worker2
nginx-2032906785-mvpft 1/1   Running 2        16d 172.16.19.2 kube-worker1
nginx-2032906785-tyfmu 1/1   Running 2        16d 172.16.23.2 kube-worker0

Here we can see that there is a pod on each of the worker nodes. Notice the IP addresses. Each pod (not each Docker container) gets an IP address from a range handed out by etcd; each worker node is given its own unique network range.

Create the NGINX service

Now we need to create a Kubernetes Service to expose our NGINX deployment so we can reach it. I’m going to make the service of type NodePort, which will make the service accessible on every node using the same port.

kubectl expose deployment nginx --port=80 --type=NodePort

View the service’s details

kubectl describe services nginx

Name:             nginx
Namespace:        default
Labels:           run=nginx
Selector:         run=nginx
Type:             NodePort
IP:               172.16.242.11
Port:              80/TCP
NodePort:          31153/TCP
Endpoints:        172.16.19.2:80,172.16.23.2:80,172.16.43.3:80
Session Affinity: None

Notice the NodePort and Endpoints. The Endpoints are our Kubernetes pods. If we launch a web browser and go to one of our worker nodes and use port 31153, we should see NGINX:

2016-10-09_16-47-19.png
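
If you’d rather check from the command line, a quick curl against any worker node on the NodePort should return the NGINX welcome page (the hostname below is from my lab and the port comes from the service output above; yours will differ):

curl -I http://kube-worker0:31153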

Lastly, let’s check out one of the pod’s details:

kubectl describe pod nginx-2032906785-44ndh

Name: nginx-2032906785-44ndh
Namespace: default
Node: kube-worker2/192.168.3.184
Start Time: Thu, 22 Sep 2016 20:20:45 -0600
Labels: pod-template-hash=2032906785
 run=nginx
Status: Running
IP: 172.16.43.3
Controllers: ReplicaSet/nginx-2032906785
Containers:
 nginx:
 Container ID: docker://19ffedded8de834da2e072f012c5081655b7149172d2c00d31944c7fe2499766
 Image: nginx
 Image ID: docker://sha256:ba6bed934df2e644fdd34e9d324c80f3c615544ee9a93e4ce3cfddfcf84bdbc2
 Port: 80/TCP
 State: Running
 Started: Sat, 08 Oct 2016 14:59:03 -0600
 Last State: Terminated
 Reason: Completed
 Exit Code: 0
 Started: Sun, 02 Oct 2016 14:38:18 -0600
 Finished: Mon, 03 Oct 2016 08:27:44 -0600
 Ready: True
 Restart Count: 2
 Environment Variables: <none>
Conditions:
 Type Status
 Initialized True
 Ready True
 PodScheduled True
Volumes:
 default-token-jeqk5:
 Type: Secret (a volume populated by a Secret)
 SecretName: default-token-jeqk5
QoS Tier: BestEffort
No events.

I’m not going to go into all of the details here, but you can read the Kubernetes Documentation.


Kubernetes 1.3 HA Walkthrough – SkyDNS

Table of Contents

You can find all of the config files on the GitHub page.

Overview

Kubernetes uses a DNS server based on SkyDNS to provide name resolution for services inside the cluster. You can read more about it here.

You probably want to perform the actions below on the same machine where you installed kubectl.

Create the SkyDNS Kubernetes service

Download the service definition file

curl -O https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/skydns-svc.yaml

Edit skydns-svc.yaml and change clusterIP to 172.16.0.10, or anything else that is in your etcd IP pool (mine is 172.16.0.0/16). I believe this IP needs to be in your certificate as a SAN entry; before I added it, the SkyDNS containers would fail and the Kubernetes controller nodes reported certificate errors from them.

skydns-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 172.16.0.10
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP

Create the service

kubectl create -f skydns-svc.yaml

Which should result in:

service "kube-dns" created

Create the skydns deployment

kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/kubedns.yaml

deployment "kube-dns-v19" created

kubectl --namespace=kube-system get pods

NAME                           READY     STATUS    RESTARTS   AGE
kube-dns-v19-965658604-p2js8   3/3       Running   0          22h
kube-dns-v19-965658604-ru5ac   3/3       Running   0          22h
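
With the pods running, it’s worth a quick sanity check that cluster DNS actually resolves. A throwaway busybox pod works well for this; the image and service name below are just the usual defaults, not something specific to this walkthrough, and the kubectl run flags may vary slightly between versions:

kubectl run dnstest --image=busybox --restart=Never -- nslookup kubernetes.default
kubectl logs dnstest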


Kubernetes 1.3 HA Walkthrough – kubectl

Table of Contents

You can find all of the config files on the GitHub page.

Install kubectl

kubectl is the program that we will use to interact with the Kubernetes environment. In my environment I installed it on my Windows 10 desktop running bash for Windows.

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
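
To confirm the binary is on the PATH and runs, a quick version check helps; the --client flag just skips contacting a cluster, which we haven’t configured yet:

kubectl version --client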

Set the active cluster

Note that I’m setting the server to my HAProxy load balancer. This will most likely be different in your environment.

kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=/var/lib/kubernetes/ca.pem \
--embed-certs=true \
--server=https://kube-controller13.vmware.local:6443

Set the credentials

kubectl config set-credentials admin --token 'VMware1!'

Set the default context

kubectl config set-context default-context \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context default-context

Get the component status and verify everything is okay

kubectl get componentstatuses

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}