Build your own Kubernetes (K8S) Cluster at home on a budget — Part 2

The second and final part of this guide.


In Part 1 we:

  • Got some hardware
  • Installed Proxmox
  • Installed Kubernetes
  • Installed Helm 2 (if you want, you can follow the Helm 2 to Helm 3 migration guide Helm has; it is simple and really quick to do on a cluster with very few apps running —
  • Installed and configured MetalLB for load balancer creation
  • Installed and configured persistent storage for our Kubernetes cluster

Part 2 Starts Here

In the last post we got a basic k8s cluster running, with enough in place to start deploying apps to it. In this post I'm going to walk step by step through getting Traefik, Consul and Flannel set up, and then deploy a simple app such as the Grafana/Prometheus monitoring stack, which will use all the bits we configured before (hopefully by the end of this you will feel comfortable enough to deploy it yourself).


Let's get Flannel out of the way first. Flannel is probably the easiest thing to get installed on the cluster; just run the following (GitHub repo here —

kubectl apply -f
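
The manifest URL was lost above; as a sketch, at the time of writing the upstream manifest lived in the coreos/flannel repo (the exact path and pod label here are assumptions, so check the repo for the version you want to pin):

```shell
# Apply the Flannel CNI manifest straight from the upstream repo
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Verify the Flannel pods come up on every node
kubectl -n kube-system get pods -l app=flannel
```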


Before getting Traefik installed we need to get Consul up, so it can share key/values between all the nodes (for a HA setup).

Let's head over to the official source and pull the latest Helm chart.
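
The link to the chart source was lost here; assuming HashiCorp's consul-helm GitHub repository (the repo name is an assumption based on the directory used below), pulling it would look roughly like:

```shell
# Clone HashiCorp's Consul Helm chart; the cp command below copies it
# from this clone into the working directory
git clone https://github.com/hashicorp/consul-helm.git
```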

cp -R consul-helm ~/kubernetes-on-tin/
cd ~/kubernetes-on-tin/consul-helm
vim values.yaml

Let's edit values.yaml and change some of the values to what we need.

server:
  replicas: 3
  # you need an odd number of instances, and ideally one on each host
  storageClass: "<ClassName>"
  # see below for how to get the storage class name

ui:
  ingress:
    enabled: true
    annotations: {}
    labels: {}
    hosts:
      - consul.domain
    tls: []

# This ingress config will make sense when we get Traefik up and running

Get the storage class name with the command below:

kubectl get sc -A
NAME                   PROVISIONER                                       AGE
nfs-client (default)   cluster.local/nfs-client-nfs-client-provisioner   180d

And now we can go ahead and deploy it with Helm

helm install -n consul consul ./

This should give you a couple of Consul pods. One point worth noting is the internal URL for the service; in this case it is consul.consul.svc.cluster.local:8500.


You will need this for the Traefik setup.


You can grab the Traefik Helm chart from the stable Helm charts directory.

Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. This chart bootstraps…
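
The link was lost here too; assuming the old helm/charts monorepo, where stable/traefik lived at the time (now archived), grabbing the chart would look something like:

```shell
# Clone the helm/charts monorepo; the cp command below copies the
# traefik chart out of its stable/ directory
git clone https://github.com/helm/charts.git
cd charts/stable
```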

cp -R traefik ~/kubernetes-on-tin/
cd ~/kubernetes-on-tin/traefik
vim values.yaml

Before starting, make sure you have an available IP address in MetalLB for Traefik. You can add one by editing MetalLB's values.yaml and appending one like below, then running helm upgrade -n metallb metallb ~/kubernetes-on-tin/metallb/
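
The example that belonged here was lost; as a sketch, a Layer 2 address pool in the MetalLB chart's values.yaml looks roughly like this, where the pool name and the address range are assumptions you should adjust to a free range on your own network:

```yaml
# MetalLB values.yaml fragment (layer2 mode); the range below is just an
# example for a typical home LAN
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 192.168.1.240-192.168.1.250
```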


Now edit/add the following values in the traefik values.yaml

loadBalancerIP: "<metallb-ip>" # IP you gave to MetalLB

ssl:
  enabled: true
  enforced: true
  permanentRedirect: false
  upstream: false
  insecureSkipVerify: false
  generateTLS: false

kvprovider:
  storeAcme: true
  acmeStorageLocation: traefik/acme/account
  importAcme: false
  consul:
    endpoint: consul.consul.svc.cluster.local:8500
    watch: true
    prefix: traefik

acme:
  keyType: RSA4096
  enabled: true
  email: [email protected]
  onHostRule: true
  staging: false
  logging: true
  domains:
    enabled: true
    domainsList:
      - main: "*.domain"
      - sans:
        - "domain"
  challengeType: dns-01
  delayBeforeCheck: 90
  resolvers: []
  dnsProvider:
    name: cloudflare
    existingSecretName: ""
    cloudflare:
      CLOUDFLARE_EMAIL: "cloudflare_email"
      CLOUDFLARE_API_KEY: "cloudflare_api_key"

dashboard:
  enabled: true
  domain: traefik.domain

metrics:
  prometheus:
    enabled: true

If you get any errors deploying after changing only the above, let me know; it could be that I missed a value somewhere. The settings are, I believe, self-explanatory, but again, any questions, let me know.

Let’s deploy Traefik

helm install -n kube-system traefik ./

After running that, you should see all the pods up and running, and you should be able to reach the Traefik dashboard on the URL we set in the values file (traefik.domain). If that works, you should then be able to hit the Consul URL (consul.domain), which will let you see the shared key/values stored there.

And hopefully we now have a nice and shiny ingress controller with Traefik. Traefik will request your certs if they don't already exist and auto-renew them, and it will also enforce HTTPS redirection for all apps. On that note, you should probably look at installing the Prometheus/Grafana stack.

You can find the charts in the stable Helm charts directory; just follow the same process as the ones we deployed here, making sure the ingress host is set, and also storage. The rest is all configuration in values.yaml.
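
As a sketch of that same process (the chart names come from the old stable directory; the namespace, hostnames and storage class here are assumptions for illustration):

```shell
# Copy the charts out of the helm/charts clone, as before
cp -R charts/stable/prometheus charts/stable/grafana ~/kubernetes-on-tin/

# Edit each values.yaml: set the ingress host (e.g. grafana.domain) and
# point persistence at your storage class (e.g. nfs-client)
vim ~/kubernetes-on-tin/prometheus/values.yaml
vim ~/kubernetes-on-tin/grafana/values.yaml

# Deploy both into a monitoring namespace
helm install -n monitoring prometheus ~/kubernetes-on-tin/prometheus/
helm install -n monitoring grafana ~/kubernetes-on-tin/grafana/
```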

This is a short post; I just wanted to finish off the part about deploying apps with Helm charts. I will make one more note on how I lay out my Helm charts: each Helm chart sits in its own directory and has its own git repo. This allows separation from one chart to another and also makes life easier when you get to the CI/CD bit, which I will be writing a post on soon.

Thanks for reading.