So I am guessing that whoever clicked through to this article has heard of Kubernetes and wants to run something at home, whether that is for testing new tools, trying Kubernetes the Hard Way, or running code you have written to see whether it is a good fit for a K8s cluster.
I previously had a MacBook Pro running Ubuntu 18.04 Server in VirtualBox, which hosted a few Docker containers via docker-compose, but I didn't believe it was fit for what I wanted or needed. So I decided to get some extra hardware (a refurbished HP EliteDesk 800 G1), though any machine with a good amount of RAM would fit the bill if you don't want to buy new hardware.
Just to add: this is not for production use, although there are some aspects I am following that will help with scenarios you may encounter in production.
- HP ELITEDESK 800 G1 SFF (4th gen i5, 16GB RAM, 120GB SSD, 500GB HD)
- Synology DS411J — Used as an NFS Server
Getting the initial set up running
So the hardware arrived and I got the ball rolling by installing Proxmox on the PC. I won't go into how to install Proxmox; the installer is super simple and there are plenty of blogs and official docs to help with that. Here is a link to the official guide.
With Proxmox set up, I stored the VM images on the 500GB HD and ran Proxmox itself from the SSD, and I was ready to move on.
Kubernetes Cluster Creation
Create 3 VMs on Proxmox with Ubuntu 18.04 (you can create more if you want, but there is no need to start with more than 3). Name them kubernetes-master, kubernetes-node-1 and kubernetes-node-2.
And then follow this blog post to get kubernetes running on them and then come back to this article.
You should now have a 3 node kubernetes cluster.
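Before going any further, it is worth a quick sanity check. Assuming kubectl is already configured on the master (as per the blog post above), you should see all three VMs reporting Ready:

```shell
# List the cluster members; all three should report a Ready status
kubectl get nodes -o wide

# Confirm the control-plane components in kube-system are running
kubectl get pods -n kube-system
```

If any node is NotReady at this point, it is worth fixing that before installing anything else on the cluster.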
To make things simpler on my home lab, I decided to use Helm to install everything I possibly could. Helm is essentially a package manager for Kubernetes: it lets you create charts for your deployments and then manage them with super simple commands.
Helm will only be installed on the master; to be fair, most if not all of what you do on the K8s cluster from here on will be done via the master, or you can run kubectl commands from your own machine. To install Helm, run:
```shell
curl -LO https://git.io/get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
```
Helm is now installed, but we need to secure the Helm server-side component, Tiller. We can do this with SSL certs; as that would add a lot of content to this article, I will keep it simple and give you the link to the official docs. Follow the steps there and come back to me after.
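Once Tiller is secured, a quick way to confirm the TLS setup works (this assumes you kept the default certificate locations from the official docs) is:

```shell
# With Tiller secured, a plain `helm version` will fail to reach the server;
# --tls makes the Helm client present its certificate to Tiller
helm version --tls
```

If this prints both the client and server versions, the TLS handshake with Tiller is working.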
Since writing this, Helm 3 has been released, which is very different to Helm 2; at some point I will write a guide on how to upgrade.
Let's recap what we have done so far:
- Found or bought some hardware
- Installed ProxMox on the hardware
- Created 3 Ubuntu VMs and named them accordingly
- Installed the required packages to get a 3 node cluster up
- Installed Helm/Tiller and set up SSL for Tiller
We should now have a working cluster. The next steps will be to get MetalLB, NFS-Client, Flannel, Hashicorp Consul/Vault set up and Traefik.
- MetalLB will be our Load Balancer, explanation to follow.
- NFS for Cluster Storage
Part 2 will include the below; I realised it is better to split this into 2 parts, as a single post would probably be too long.
- Flannel will be our CNI plugin
- Consul for Key Value store and Vault for secrets.
- Traefik will be both our ingress controller and an SSL cert generator using LetsEncrypt/ACME
I'll group these two into one section as they are quite important. When you use a managed cloud offering, e.g. EKS or AKS, you can request external load balancers or storage via the Kubernetes API, and the provider abstracts this by pulling in other services it already runs. This makes things a little more complex when building a cluster on physical hardware.
To fill the gaps we have by not using a cloud provider we will use MetalLB and the NFS-Client-Provisioner.
MetalLB is in essence a software load balancer. When you set up an on-prem Kubernetes cluster you don't have the luxury of what cloud providers give you, so when you deploy an application with a Service of type LoadBalancer, it will sit in an everlasting Pending state because there is no load balancer available to claim it.
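To see the problem MetalLB solves, here is a hypothetical example (the deployment name is just for illustration): exposing a deployment with a LoadBalancer Service on bare metal, without MetalLB installed, leaves the external IP stuck:

```shell
# Hypothetical example: expose an nginx deployment via a LoadBalancer Service
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80

# Without a load-balancer implementation in the cluster,
# the EXTERNAL-IP column stays <pending> forever
kubectl get svc nginx-test
```

Once MetalLB is installed later in this article, the same Service would be handed an IP from the pool we configure.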
As per its README, this is an automatic provisioner for Kubernetes that uses an already-configured NFS server to automatically create persistent volumes. This is another thing you don't get out of the box when you create your own cluster on hardware.
Install and configure NFS-Client-Provisioner
We will be using Helm charts for everything we deploy, so let's start by cloning the latest Helm charts repo:
```shell
git clone https://github.com/helm/charts.git ~/helmcharts
```
You can install a Helm chart without cloning the whole repo, but it is a bit safer to keep a static copy of the chart. This stops you deploying something you have no control over and that may introduce bugs or vulnerabilities.
In the home directory of your user, create a new directory: `mkdir ~/kubernetes-on-tin`
Now let's copy the Helm chart we require into this new directory:
```shell
cp -r ~/helmcharts/stable/nfs-client-provisioner/ ~/kubernetes-on-tin/
```
Now let's change into that directory: `cd ~/kubernetes-on-tin/nfs-client-provisioner`. You will see a values.yaml file and a templates directory. The values.yaml file is where we change or configure specific values of the chart, and the templates directory holds the templates of what is needed to deploy this service.
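For orientation, you can inspect the chart layout before touching anything (the exact file list may vary slightly between chart versions):

```shell
# values.yaml holds the configurable values; templates/ holds the
# manifests that Helm renders from those values at install time
ls ~/kubernetes-on-tin/nfs-client-provisioner
```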
Let's now add the necessary values. Open values.yaml and amend the following to your specific values:
```yaml
nfs:
  server: <host_ip>
  path: </path/to-nfs-mount>
  mountOptions:

# For creating the StorageClass automatically:
storageClass:
  create: true

  # Set StorageClass as the default StorageClass
  # Ignored if storageClass.create is false
  defaultClass: true
```
The first three should be self-explanatory; the one worth calling out is defaultClass. This tells the chart to set the NFS provisioner as the default Kubernetes storage class, which allows dynamic provisioning rather than having to configure storage per pod/deployment.
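With a default StorageClass in place, a PersistentVolumeClaim no longer needs to name a storageClassName at all. A minimal sketch (the claim name here is just an example):

```shell
# Create a PVC with no storageClassName; the default class (nfs-client)
# is used, and the provisioner creates a matching PV on the NFS share
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should move to Bound once the PV has been provisioned
kubectl get pvc test-claim
```

This is exactly the convenience the cloud providers give you, recreated on your own NFS box.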
With that set, let's deploy the chart:

```shell
helm install --name nfs-client --namespace nfs-client ~/kubernetes-on-tin/nfs-client-provisioner --tls
```
We are calling the release nfs-client and putting it in its own namespace for a bit of separation. Also note the --tls at the end: because we configured TLS for Helm/Tiller earlier, this command will simply time out if you do not tell the Helm client to use TLS.
That should have deployed; we can check it deployed correctly by describing the Kubernetes storage class:
```shell
kubectl describe sc
Name:                  nfs-client
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           cluster.local/nfs-client-nfs-client-provisioner
Parameters:            archiveOnDelete=true
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
```
That shows the chart was deployed and is now set to be the default storage class; you will not have to define it for every deployment from now on, it will just default to this one.
Now for MetalLB. Again, let's copy the Helm chart into our new directory:
```shell
cp -r ~/helmcharts/stable/metallb/ ~/kubernetes-on-tin/
```
Let's now edit the values file for this chart. Below are the configuration options I changed:
```yaml
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 192.168.1.50/32
```
Here we are simply defining which IP we want load balancers to be given; in this case I have chosen .50. But you could specify something like 192.168.1.0/24 and MetalLB will choose an IP from that range.
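For reference, MetalLB's address pools also accept explicit ranges as well as CIDR notation; a sketch of the same pool written both ways (pick one, these addresses are just examples):

```yaml
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        # A contiguous range of addresses...
        - 192.168.1.50-192.168.1.60
        # ...or CIDR notation for a whole block
        # - 192.168.1.0/24
```

Whichever form you use, make sure the addresses are outside your router's DHCP range so nothing else on the network claims them.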
Let's deploy it the same way we did the nfs-client-provisioner:

```shell
helm install --name metallb --namespace metallb ~/kubernetes-on-tin/metallb --tls
```
To check it is working, deploy something that requires a load balancer; it should pick up an IP from the range you gave MetalLB. In my case:
```shell
kubectl get svc -A
NAMESPACE     NAME      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                     AGE
kube-system   traefik   LoadBalancer   10.107.34.150   192.168.1.50   80:32358/TCP,443:30706/TCP,8080:32402/TCP   52d
```
So you should now have a cluster with 2 worker nodes plus a master (which you can also use as a node if needed), "load balancers" on demand, and dynamic storage provisioning via NFS. Part 2 will cover more pieces to get you up and running with a few services.
P.S. I started writing this blog post a few months ago, left it in draft for quite a while, and decided to finish it today. I then realised I needed a part 2 detailing how to get apps running, the next few stages, and the gotchas I hit when starting my own K8s cluster.