NetApp ONTAP 9 Simulator Guide by Neil Anderson

I want to take a moment to mention the great work Neil Anderson did in his NetApp ONTAP 9 Simulator Guide. This incredibly detailed guide (117 pages) not only shows you how to set up a NetApp ONTAP 9 simulator for your lab, but also teaches you how to install and configure:

Storage is critical to any infrastructure, and while you’re learning it’s easy to quickly install a storage solution that isn’t representative of what you’ll see in the real world. I try to make my lab environment match what I see in real production environments as much as possible. That means using segregated networks with VLANs, firewalls, load balancers, and CA-signed certificates, and with the help of this guide you’ll be able to implement a real-world storage solution.


Kubernetes 1.3 HA Walkthrough – NGINX

Table of Contents

You can find all of the config files on the GitHub page.

Overview

Let’s test out the environment by installing NGINX.

Create the NGINX deployment

We are going to create an NGINX deployment with an NGINX replica on each of our Kubernetes worker nodes. Each replica will be a Kubernetes pod:

kubectl run nginx --image=nginx --port=80 --replicas=3

If successful, you will see: deployment “nginx” created

Let’s get a listing of each of the pods:

kubectl get pods -o wide

NAME                   READY STATUS  RESTARTS AGE IP          NODE
nginx-2032906785-44ndh 1/1   Running 2        16d 172.16.43.3 kube-worker2
nginx-2032906785-mvpft 1/1   Running 2        16d 172.16.19.2 kube-worker1
nginx-2032906785-tyfmu 1/1   Running 2        16d 172.16.23.2 kube-worker0

Here we can see that there is a pod on each of the worker nodes. Notice the IP addresses: each pod (not each Docker container) gets an IP address from a network range assigned to its worker node. Each worker node is given a unique range, which flannel records in etcd.
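If you are curious where those ranges come from, you can query etcd directly for flannel’s subnet reservations. This is just a sanity check and assumes the same etcd endpoints and CA file that are used in the workers post:

curl --cacert /var/lib/kubernetes/ca.pem -L https://kube-etcd0.vmware.local:2379/v2/keys/coreos.com/network/subnets

Each key under subnets is the range reserved for one worker node (a /24 by default).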

Create the NGINX service

Now we need to create a Kubernetes Service to expose our NGINX deployment so we can reach it. I’m going to make the service of type NodePort, which makes the service accessible on any worker node using the same port.

kubectl expose deployment nginx --port=80 --type=NodePort

View the service’s details

kubectl describe services nginx

Name:             nginx
Namespace:        default
Labels:           run=nginx
Selector:         run=nginx
Type:             NodePort
IP:               172.16.242.11
Port:              80/TCP
NodePort:          31153/TCP
Endpoints:        172.16.19.2:80,172.16.23.2:80,172.16.43.3:80
Session Affinity: None

Notice the NodePort and Endpoints. The Endpoints are our Kubernetes pods. If we launch a web browser and go to one of our worker nodes and use port 31153, we should see NGINX:

[Screenshot: the NGINX welcome page served from a worker node on port 31153]
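If you prefer the command line, a quick curl against any worker node on the NodePort should return the same page. The address here is kube-worker2 from my lab (192.168.3.184), so substitute one of your own nodes:

curl -I http://192.168.3.184:31153

You should get an HTTP/1.1 200 OK back with a Server: nginx header.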

Lastly, let’s check out one of the pod’s details:

kubectl describe pod nginx-2032906785-44ndh

Name:           nginx-2032906785-44ndh
Namespace:      default
Node:           kube-worker2/192.168.3.184
Start Time:     Thu, 22 Sep 2016 20:20:45 -0600
Labels:         pod-template-hash=2032906785
                run=nginx
Status:         Running
IP:             172.16.43.3
Controllers:    ReplicaSet/nginx-2032906785
Containers:
  nginx:
    Container ID:   docker://19ffedded8de834da2e072f012c5081655b7149172d2c00d31944c7fe2499766
    Image:          nginx
    Image ID:       docker://sha256:ba6bed934df2e644fdd34e9d324c80f3c615544ee9a93e4ce3cfddfcf84bdbc2
    Port:           80/TCP
    State:          Running
      Started:      Sat, 08 Oct 2016 14:59:03 -0600
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 02 Oct 2016 14:38:18 -0600
      Finished:     Mon, 03 Oct 2016 08:27:44 -0600
    Ready:          True
    Restart Count:  2
    Environment Variables: <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-jeqk5:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-jeqk5
QoS Tier:       BestEffort
No events.

I’m not going to go into all of the details here, but you can read the Kubernetes Documentation.


Kubernetes 1.3 HA Walkthrough – SkyDNS

Table of Contents

You can find all of the config files on the GitHub page.

Overview

Kubernetes uses a DNS server based on SkyDNS to provide name resolution for services inside the cluster. You can read more about it here.

You probably want to perform the actions below on the same machine where you installed kubectl.

Create the SkyDNS Kubernetes service

Download the service definition file

curl -O https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/skydns-svc.yaml

Edit skydns-svc.yaml and change clusterIP to 172.16.0.10, or another address from the IP range your cluster uses (mine is 172.16.0.0/16). I believe this IP needs to be in your certificate as a SAN entry; before I added it, the SkyDNS containers would fail and the Kubernetes controller nodes reported certificate errors from them.
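If you want to confirm the IP really is in the certificate, openssl will show the SAN list. Run this on a node where the kubernetes.pem from earlier in the series lives:

openssl x509 -in /var/lib/kubernetes/kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'

Look for IP Address:172.16.0.10 in the output.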

skydns-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 172.16.0.10
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP

Create the service

kubectl create -f skydns-svc.yaml

Which should result in:

service “kube-dns” created
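You can confirm the service picked up the clusterIP you set:

kubectl --namespace=kube-system get svc kube-dns

The CLUSTER-IP column should show 172.16.0.10.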

Create the skydns deployment

kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/kubedns.yaml

deployment “kube-dns-v19” created

kubectl --namespace=kube-system get pods

NAME                           READY     STATUS    RESTARTS   AGE
kube-dns-v19-965658604-p2js8   3/3       Running   0          22h
kube-dns-v19-965658604-ru5ac   3/3       Running   0          22h
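To confirm that cluster DNS actually resolves names, one option is a throwaway busybox pod. Depending on your kubectl version the run flags may differ slightly, so treat this as a rough sketch:

kubectl run busybox --image=busybox --restart=Never -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default
kubectl delete pod busybox

The lookup should be answered by 172.16.0.10 and return the cluster IP of the kubernetes service.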


Kubernetes 1.3 HA Walkthrough – kubectl

Table of Contents

You can find all of the config files on the GitHub page.

Install kubectl

kubectl is the program we will use to interact with the Kubernetes environment. In my environment I installed it on my Windows 10 desktop running Bash on Windows.

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin

Set the active cluster

Note that I’m setting the server to my HAproxy load balancer. This will most likely be different in your environment.

kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=/var/lib/kubernetes/ca.pem \
--embed-certs=true \
--server=https://kube-controller13.vmware.local:6443

Set the credentials

kubectl config set-credentials admin --token 'VMware1!'

Set the default context

kubectl config set-context default-context \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context default-context
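At this point you can sanity check what kubectl wrote out and make sure it can reach the API through the load balancer:

kubectl config view
kubectl cluster-info

cluster-info should report the master running at the server address you configured above.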

Get the component status and verify everything is okay

kubectl get componentstatuses

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

Kubernetes 1.3 HA Walkthrough – HAproxy

Table of Contents

You can find all of the config files on the GitHub page.

Install HAproxy

I’m using HAproxy as a load balancer for my controller nodes. There isn’t much to it. I used Ubuntu 14.04 LTS and installed it with the following command:

apt-get install haproxy

I used the default config and added my changes after the # My edits comment below.

/etc/haproxy/haproxy.cfg


global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon

maxconn 2048

# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
tune.ssl.default-dh-param 2048

# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3

defaults
log global
mode http
option forwardfor
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

# My edits

stats enable
stats uri /stats
stats realm Haproxy\ Statistics
stats auth admin:VMware1!

frontend kube-api-http
bind 192.168.3.201:80
reqadd X-Forwarded-Proto:\ http
default_backend www-backend

frontend kube-api-https
bind 192.168.3.201:6443 ssl crt /etc/ssl/private/kubernetes.pem
reqadd X-Forwarded-Proto:\ https
default_backend www-backend

backend www-backend
redirect scheme https if !{ ssl_fc }
option httpchk GET /healthz
http-check expect string ok
server kube-controller0 192.168.3.176:8080 check
server kube-controller1 192.168.3.177:8080 check
server kube-controller2 192.168.3.178:8080 check
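Once HAproxy is running, a couple of quick checks confirm the health probe and the frontend are behaving. The first command goes against a controller node's insecure port directly and should return ok; the second goes through the load balancer (adjust the hostname and CA path for your lab):

curl http://192.168.3.176:8080/healthz
curl --cacert /var/lib/kubernetes/ca.pem https://kube-controller13.vmware.local:6443/healthz

Both should print ok.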


Kubernetes 1.3 HA Walkthrough – workers

Table of Contents

You can find all of the config files on the GitHub page.

Install flannel

The following must be performed on each worker node.

mkdir -p /var/lib/kubernetes
mv /root/ca.pem /root/kubernetes.pem /root/kubernetes-key.pem /var/lib/kubernetes/
yum install -y flannel

/etc/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
Wants=etcd.service
After=etcd.service
After=network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=https://kube-etcd0.vmware.local:2379,https://kube-etcd1.vmware.local:2379,https://kube-etcd2.vmware.local:2379 -etcd-prefix=/coreos.com/network/ -etcd-cafile=/var/lib/kubernetes/ca.pem
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service


Enable and start the service on each worker node

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld


Verify that the service started successfully

systemctl status flanneld --no-pager


Verify you can get the etcd flanneld info

curl --cacert /var/lib/kubernetes/ca.pem -L https://kube-etcd0.vmware.local:2379/v2/keys/coreos.com/network/config

{"action":"get","node":{"key":"/coreos.com/network/config","value":"{\"Network\": \"172.16.0.0/16\"}","modifiedIndex":39,"createdIndex":39}}



Kubernetes 1.3 HA Walkthrough – controllers

Table of Contents

You can find all of the config files on the GitHub page.

Overview

This post will cover installing the Kubernetes API server, manager, scheduler and kubectl.

Install the Kubernetes API server, manager, scheduler and kubectl

Perform the following steps on each controller node.

mkdir -p /var/lib/kubernetes
mv /root/ca.pem /root/kubernetes.pem /root/kubernetes-key.pem /var/lib/kubernetes/

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kube-apiserver

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kube-controller-manager

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kube-scheduler

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl

mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/

Set up our authorization files

curl -O https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/token.csv

Change the password if you want. I’m changing it to VMware1!

sed -i 's/chAng3m3/VMware1!/g' token.csv
mv token.csv /var/lib/kubernetes/

curl -O https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/authorization-policy.jsonl
mv authorization-policy.jsonl /var/lib/kubernetes/
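A quick check that the binaries downloaded earlier are the release you expect:

kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version

Each should report v1.3.0.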
