Kubernetes and Rancher on Ubuntu from Scratch
Ever wanted to set up a kubernetes cluster with a scalable and intuitive management dashboard?
I first came across Rancher in one of Techno Tim's videos about a year ago, and it removed a number of the knowledge barriers to getting a cluster up and running.
After working with a single-node deployment for a few months, I decided to try moving to a full K8s deployment when Rancher released support for it in version 2.5. There are, however, some gotchas in Rancher's own installation guides, so this post contains everything you need to go from a fresh Ubuntu Server 20.04 LTS installation to a working Rancher cluster on top of native K8s.
Sources
I used the following resources extensively when composing this guide; credit goes to the original authors:
- Hackernoon - How to Set Up a Kubernetes Cluster on Ubuntu 20.04/18.04/16.04 in 14 Steps
- Tecmint - How to Install a Kubernetes Cluster on CentOS 8
- Rancher - Kubernetes Cluster installation Guide
- Kubernetes NGINX Ingress Controller Docs
- MetalLB Docs
Architecture
We're going to use 3 nodes for this cluster:
- 1x Kubernetes Master Node
- 2x Kubernetes Worker Nodes
Use your virtualization software of choice to set up these machines. I recommend 2 vCPUs and 4 GB of RAM per node if you can spare it, but you should be able to get by with 1 vCPU and 1-2 GB of RAM. Ideally you'll want at least 15 GB of disk for each machine. Assign each machine a static IP; for the purposes of this article I'll use the 192.168.0.0/24 range, assigned as follows:
- Kubernetes Master Node: 192.168.0.100
- Kubernetes Worker Nodes: 192.168.0.101 and 192.168.0.102
The first thing we need to do is update the /etc/hosts file on each machine to include the following entries:
192.168.0.100 kubernetes-master
192.168.0.101 kubernetes-worker-1
192.168.0.102 kubernetes-worker-2
Then do a quick test to ensure that each machine can ping the others using the hostname specified.
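For example, from kubernetes-master:
ping -c 3 kubernetes-worker-1
ping -c 3 kubernetes-worker-2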
Installing Docker and Kubernetes
Run the following commands on kubernetes-master, kubernetes-worker-1 and kubernetes-worker-2 to install Docker and Kubernetes.
Start by installing Docker
sudo apt-get update
sudo apt install docker.io
sudo systemctl enable docker
sudo systemctl start docker
Check that Docker is running using sudo systemctl status docker
Next we need to disable two things which interfere with Kubernetes: the firewall and swap.
sudo ufw disable
sudo swapoff -a
NB: I don't condone disabling the firewall for production use in an internet-facing environment; I've left it disabled in this case for simplicity. Check the Rancher Port Requirements documentation for more information.
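Also note that swapoff -a only disables swap until the next reboot. To make it permanent you can comment out the swap entry in /etc/fstab. On a default Ubuntu 20.04 install something like the command below should do it, but check your fstab first as the exact entry can vary:
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab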
Install apt-transport-https and set up the Kubernetes repository
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo bash -c 'echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list'
Now install Kubernetes
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet
sudo systemctl start kubelet
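Optionally, you may also want to hold these packages at their current versions so an unattended apt upgrade doesn't bump the cluster components unexpectedly:
sudo apt-mark hold kubelet kubeadm kubectl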
Initialising the Kubernetes cluster
Commands in this section should only be run on kubernetes-master
Run the following command to initialise the cluster.
sudo kubeadm init --control-plane-endpoint kubernetes-master
The --control-plane-endpoint option is especially important here because it protects you from having to perform major surgery on your cluster if IP addresses change - learn from my pain!
Running the command above should generate an output which looks something like:
kubeadm join kubernetes-master:6443 --token nu06lu.xrsux0ss0ixtnms5 \
--discovery-token-ca-cert-hash sha256:f996ea35r4353d342fdea2997a1cf8caeddafd6d4360d606dbc82314683478hjmf7
Keep this command safe; we'll be using it shortly to join our worker nodes to the cluster. Sometimes the \ and the line break can cause problems, so I normally remove them so the command is on one long line, like this:
kubeadm join kubernetes-master:6443 --token nu06lu.xrsux0ss0ixtnms5 --discovery-token-ca-cert-hash sha256:f996ea35r4353d342fdea2997a1cf8caeddafd6d4360d606dbc82314683478hjmf7
Next, copy the kube config file into the current user's home directory so that the kubectl command can be used:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
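At this point kubectl should be talking to the cluster. Running kubectl get nodes will show kubernetes-master, although it will report NotReady until the pod network is installed in the next step:
kubectl get nodes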
Finally we need to set up the Pod Network. I've tried Calico and Flannel but have had the most success with Weave, so let's apply that config:
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
Join worker nodes to the cluster
Joining the worker nodes to our cluster only requires running the join command we saved earlier on kubernetes-worker-1 and kubernetes-worker-2:
sudo kubeadm join kubernetes-master:6443 --token nu06lu.xrsux0ss0ixtnms5 --discovery-token-ca-cert-hash sha256:f996ea35r4353d342fdea2997a1cf8caeddafd6d4360d606dbc82314683478hjmf7
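If you've lost the join command or the token has expired (tokens are only valid for 24 hours by default), you can generate a fresh one on kubernetes-master:
sudo kubeadm token create --print-join-command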
And there you have it, one fully working Kubernetes cluster. Running kubectl get nodes on kubernetes-master should list all 3 servers with Ready status.
Preparing for Rancher
Before we install Rancher, we need to install an Ingress Controller to help us expose the services running in our Kubernetes cluster (like the Rancher web UI) to the outside world. To achieve that, we're going to use the NGINX Ingress Controller combined with MetalLB, a pure-software load balancer which will give us maximum flexibility when it comes to exposing services.
Start by installing Helm, a popular package manager for Kubernetes
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
Then install the NGINX Ingress Controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
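At this point the ingress controller's LoadBalancer service will sit with its EXTERNAL-IP pending, because nothing is handing out addresses yet - that's what MetalLB is for. Assuming the default service name from the chart and release name above, you can check with:
kubectl get svc ingress-nginx-controller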
Next install the MetalLB load balancer
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
and, on first install only, create the memberlist secret it requires
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Lastly we need to give MetalLB an IP address so that it can expose the NGINX Ingress Controller to the outside world. Create a metallb-config.yaml file with the following contents:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.184/32
If you wish to assign a range of IP addresses, you can replace 192.168.0.184/32 with a range, e.g. 192.168.0.184-192.168.0.185.
This will allow the Kubernetes cluster (and in turn Rancher) to expose ingresses on the specified IP address(es).
Now apply the config file to the cluster
kubectl apply -f metallb-config.yaml
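If everything has worked, the ingress controller's service should now pick up an address from the pool (192.168.0.184 in this example). Again assuming the default service name:
kubectl get svc ingress-nginx-controller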
We're now ready to install Rancher.
Installing Rancher
Compared to the steps we've taken to get to this point, installing Rancher is an almost trivial affair. I thoroughly recommend perusing the Rancher Installation Guide for Kubernetes Clusters before continuing, paying particular attention to the options around SSL configuration. We'll be using Rancher self-signed certificates in this example.
Because we're using self-signed certificates, we need to install cert-manager. BEWARE: at the time of writing, the Rancher instructions for installing cert-manager are incomplete, so always check the official cert-manager documentation.
Follow these steps to install cert-manager
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.3.1 \
  --set installCRDs=true
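Before moving on, it's worth confirming that the cert-manager pods are up; you should see cert-manager, cainjector and webhook pods in a Running state:
kubectl get pods --namespace cert-manager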
Now we can finally install Rancher!
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
kubectl create namespace cattle-system
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
Replace rancher.my.org with the publicly accessible hostname of your choice.
Check on the status of the Rancher deployment:
kubectl -n cattle-system rollout status deploy/rancher
Once the success message has been received, point your replacement for rancher.my.org at 192.168.0.184, either via DNS or via the hosts file on your local machine (/etc/hosts for Linux and macOS, or C:\Windows\System32\Drivers\etc\hosts for Windows). Then browse to that hostname in your web browser of choice and you should be greeted with the Rancher welcome screen inviting you to set an initial administrator password.
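If you go the hosts-file route, the entry looks something like this (using the MetalLB address from earlier and the placeholder hostname):
192.168.0.184 rancher.my.org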
When prompted, set the admin password and select the option to manage a single cluster. All being well, you should now have a fully functioning Rancher installation. Enjoy.