You can find all of the config files on the GitHub page.
Install flannel
The following must be performed on each worker node.
mkdir -p /var/lib/kubernetes
mv /root/ca.pem /root/kubernetes.pem /root/kubernetes-key.pem /var/lib/kubernetes/
yum install -y flannel
/etc/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
Wants=etcd.service
After=etcd.service
After=network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld \
  -etcd-endpoints=https://kube-etcd0.vmware.local:2379,https://kube-etcd1.vmware.local:2379,https://kube-etcd2.vmware.local:2379 \
  -etcd-prefix=/coreos.com/network/ \
  -etcd-cafile=/var/lib/kubernetes/ca.pem
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Enable and start the service on each worker node
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
Verify that the service started successfully
systemctl status flanneld --no-pager
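If flanneld started cleanly, the ExecStartPost step will have written the Docker network options to /run/flannel/docker, and flannel will have created an overlay interface on the host. Optionally, you can check both; the subnet will differ on each node, and the interface is flannel0 with the default UDP backend (flannel.1 if you configured VXLAN):
cat /run/flannel/docker
ip addr show flannel0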
Verify you can get the etcd flanneld info
curl --cacert /var/lib/kubernetes/ca.pem -L https://kube-etcd0.vmware.local:2379/v2/keys/coreos.com/network/config
{"action":"get","node":{"key":"/coreos.com/network/config","value":"{\"Network\": \"172.16.0.0/16\"}","modifiedIndex":39,"createdIndex":39}}
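The /coreos.com/network/config key should already exist from when the etcd cluster was built. If the curl above returns a "key not found" error instead, a command along these lines, run with etcdctl against any etcd member, would recreate it (the flags assume the same CA file and v2 keyspace used above):
etcdctl --ca-file=/var/lib/kubernetes/ca.pem --endpoints=https://kube-etcd0.vmware.local:2379 set /coreos.com/network/config '{"Network": "172.16.0.0/16"}'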
Install Docker
The following must be performed on each worker node.
yum -y install docker
/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket
Wants=docker-storage-setup.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
ExecStart=/usr/bin/docker-current daemon \
  --exec-opt native.cgroupdriver=systemd \
  $OPTIONS \
  $DOCKER_STORAGE_OPTIONS \
  $DOCKER_NETWORK_OPTIONS \
  $ADD_REGISTRY \
  $BLOCK_REGISTRY \
  $INSECURE_REGISTRY
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
MountFlags=slave
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
Enable and start the service on each node
systemctl daemon-reload
systemctl enable docker
systemctl start docker
Verify docker is running
docker version
Client:
 Version:         1.10.3
 API version:     1.22
 Package version: docker-common-1.10.3-46.el7.centos.10.x86_64
 Go version:      go1.6.3
 Git commit:      d381c64-unsupported
 Built:           Thu Aug 4 13:21:17 2016
 OS/Arch:         linux/amd64

Server:
 Version:         1.10.3
 API version:     1.22
 Package version: docker-common-1.10.3-46.el7.centos.10.x86_64
 Go version:      go1.6.3
 Git commit:      d381c64-unsupported
 Built:           Thu Aug 4 13:21:17 2016
 OS/Arch:         linux/amd64
Install bridge utils on each host
You don’t have to do this, but I like being able to see the Linux bridges and the interfaces connected to them.
yum -y install bridge-utils
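With Docker running, brctl should show the docker0 bridge, and docker0 should have an address inside the flannel 172.16.0.0/16 range (the exact subnet varies per node, and veth interfaces only appear once containers are running):
brctl show
ip addr show docker0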
Verify Networking
You don’t have to do this, but I like to verify that Docker containers get etcd-provided IP addresses and can communicate over the flannel overlay network. This image starts a small container with networking tools such as ping. Run the following command on each worker node and verify that the containers can ping each other across the worker nodes:
docker run -it joffotron/docker-net-tools
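From the shell inside the container, note the eth0 address (it should come from the flannel range) and ping the addresses you see on the containers running on the other worker nodes. The IP below is just an example from the 172.16.0.0/16 range; substitute the actual addresses from your nodes:
ip addr show eth0
ping -c 3 172.16.63.2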
Note that when starting containers via Kubernetes, the individual containers won’t get etcd-provided IP addresses; the Kubernetes pods will.
Install kubelet
The following must be performed on each worker node.
Note: kube-controller13.vmware.local is an HAProxy VM in my lab that’s acting as a load balancer in front of the controller nodes. If you don’t want to use a load balancer, you could just enter a single controller node here instead, but obviously that won’t be redundant.
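The steps below also assume the CNI plugins tarball referenced by the tar command is already in your working directory. If you still need to grab it, it comes from the kubernetes-release bucket (URL inferred from the file name, so double-check it):
curl -O https://storage.googleapis.com/kubernetes-release/network-plugins/cni-c864f0e1ea73719b8f4582402b0847064f9883b0.tar.gz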
mkdir -p /opt/cni
tar -xvf cni-c864f0e1ea73719b8f4582402b0847064f9883b0.tar.gz -C /opt/cni
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kube-proxy
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubelet
chmod +x kubectl kube-proxy kubelet
mv kubectl kube-proxy kubelet /usr/bin/
mkdir -p /var/lib/kubelet/
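As a quick sanity check that the binaries landed in /usr/bin and are executable, you can ask each for its version; these commands only touch the local binaries, so they work before any services are configured:
kubectl version --client
kubelet --version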
/var/lib/kubelet/kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://kube-controller13.vmware.local:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    token: VMware1!
/etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet \
  --allow-privileged=true \
  --api-servers=https://kube-controller0.vmware.local:6443,https://kube-controller1.vmware.local:6443,https://kube-controller2.vmware.local:6443 \
  --cluster-dns=172.16.0.10 \
  --cluster-domain=cluster.local \
  --configure-cbr0=false \
  --container-runtime=docker \
  --docker=unix:///var/run/docker.sock \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --serialize-image-pulls=false \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Enable and start the service on each worker node
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
Verify that the service started successfully
systemctl status kubelet --no-pager
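Once the kubelet is running on all of the workers, each node should register itself with the API servers. A quick way to confirm this (reusing the kubeconfig created above) is to list the nodes; all of the workers should eventually report Ready:
kubectl --kubeconfig=/var/lib/kubelet/kubeconfig get nodes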
Verify that you can talk to the Kubernetes API
curl --cacert /var/lib/kubernetes/ca.pem -L https://kube-controller13.vmware.local:6443 -u 'admin:VMware1!'
{ "paths": [ "/api", "/api/v1", "/apis", "/apis/apps", "/apis/apps/v1alpha1", "/apis/autoscaling", "/apis/autoscaling/v1", "/apis/batch", "/apis/batch/v1", "/apis/batch/v2alpha1", "/apis/extensions", "/apis/extensions/v1beta1", "/apis/policy", "/apis/policy/v1alpha1", "/apis/rbac.authorization.k8s.io", "/apis/rbac.authorization.k8s.io/v1alpha1", "/healthz", "/healthz/ping", "/logs/", "/metrics", "/swagger-ui/", "/swaggerapi/", "/ui/", "/version" ] }
Install kube proxy
The following must be performed on each worker node.
/etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-proxy \
  --master=https://kube-controller13.vmware.local:6443 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --proxy-mode=iptables \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Enable and start the service on each worker node
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
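Verify that the service started successfully and that it’s programming iptables. kube-proxy in iptables mode creates a KUBE-SERVICES chain in the nat table; it will be mostly empty until you define services:
systemctl status kube-proxy --no-pager
iptables -t nat -L KUBE-SERVICES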