How I built my Kubernetes Homelab – Part 3
Hello again, welcome to part 3 of my little Kubernetes Homelab series. In this part we will set up the Kubernetes cluster itself. I want to use volumes provisioned by the VMware vSphere provider, and the first steps for this need to be done at cluster initialization.
First of all, we need to create a new file called vsphere.conf in the /etc/kubernetes folder. This configuration file tells Kubernetes which settings to use to connect to the vCenter and discover the VMware-specific information.
marco@lab-kube-m1:~# sudo tee /etc/kubernetes/vsphere.conf >/dev/null <<EOF
[Global]
user = "administrator@vsphere.local"
password = "XXXXXXXX"
port = "443"
insecure-flag = "1"

[VirtualCenter "lab-vc01.homelab.horstmann.in"]
datacenters = "Homelab"

[Workspace]
server = "lab-vc01.homelab.horstmann.in"
datacenter = "Homelab"
default-datastore = "QNAP:nfs-vms"
resourcepool-path = "Cluster/Resources"
folder = "k8s"

[Disk]
scsicontrollertype = pvscsi
EOF
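This is not strictly required, but since vsphere.conf contains the vCenter password in plain text, I would suggest restricting the file permissions so that only root can read it:

marco@lab-kube-m1:~# sudo chown root:root /etc/kubernetes/vsphere.conf
marco@lab-kube-m1:~# sudo chmod 600 /etc/kubernetes/vsphere.conf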
I used this article as a starting point for my config file, but had to modify it and migrate it to the new format/version because the instructions are outdated. If you need to migrate a config file from one Kubernetes version to another, you can use the following command to do the conversion.
kubeadm config migrate --old-config /etc/kubernetes/kubeadminitmaster.yaml --new-config /etc/kubernetes/kubeadminitmasternew.yaml
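If you are curious what the migration actually changed, a plain diff of the old and new file (using the file names from the command above) is enough to review it:

marco@lab-kube-m1:~# diff -u /etc/kubernetes/kubeadminitmaster.yaml /etc/kubernetes/kubeadminitmasternew.yaml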
I used the following configuration file for creating my Kubernetes cluster.
marco@lab-kube-m1:~# sudo tee /etc/kubernetes/kubeadminit.yaml >/dev/null <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: y7yaev.9dvwxx6ny4ef8vlq
  ttl: 0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.30.50
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    cloud-config: /etc/kubernetes/vsphere.conf
    cloud-provider: vsphere
  name: lab-kube-m1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  extraArgs:
    cloud-config: /etc/kubernetes/vsphere.conf
    cloud-provider: vsphere
  extraVolumes:
  - hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    name: cloud
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    cloud-config: /etc/kubernetes/vsphere.conf
    cloud-provider: vsphere
  extraVolumes:
  - hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    name: cloud
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
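Before running the real initialization, it can be worth letting kubeadm do a dry run with the same config file; it should catch typos and obviously invalid settings without touching the node (the --dry-run flag should be available in this kubeadm version):

marco@lab-kube-m1:~# sudo kubeadm init --config /etc/kubernetes/kubeadminit.yaml --dry-run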
Now I reload the services to make sure everything is fine.
marco@allthreenodes:~# sudo systemctl daemon-reload
marco@allthreenodes:~# sudo systemctl restart kubelet
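If you want to verify the kubelet after the restart, a quick look at its status and recent logs should be enough. Don't be surprised if the kubelet is in a restart loop at this point; as far as I know, it keeps waiting for the configuration that kubeadm init writes later.

marco@allthreenodes:~# sudo systemctl status kubelet
marco@allthreenodes:~# sudo journalctl -u kubelet --no-pager -n 50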
Before I can initialize the first (master) node, kubeadm needs to pull several images from the Google Container Registry, which it later uses to deploy the cluster services. These images must be downloaded on every cluster node.
marco@allthreenodes:~# sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.2
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1
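If you only want to see which images kubeadm would pull for a given target version, without actually downloading them, there is also a list subcommand:

marco@allthreenodes:~# sudo kubeadm config images list --kubernetes-version v1.20.6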
OK, now comes the rather uneventful creation of the cluster. This command runs on the master node and creates the whole cluster.
marco@lab-kube-m1:~# sudo kubeadm init --config /etc/kubernetes/kubeadminit.yaml
W0313 00:45:46.625707   18261 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [lab-kube-m1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.30.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [lab-kube-m1 localhost] and IPs [192.168.30.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [lab-kube-m1 localhost] and IPs [192.168.30.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0313 00:46:19.803820   18261 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0313 00:46:19.804774   18261 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 71.503600 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node lab-kube-m1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node lab-kube-m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: y7yaev.9dvwxx6ny4ef8vlq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

[...]

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.50:6443 --token y7yaev.9dvwxx6ny4ef8vlq \
    --discovery-token-ca-cert-hash sha256:e49d5d5b84fadea7affed6382e834bb3cba42a21cb14e544b8832668e317c37e
Because I want to manage the cluster with my normal user, I copied admin.conf into the .kube folder of my home directory. After this I was able to use the kubectl command as my standard user.
marco@lab-kube-m1:~# mkdir -p $HOME/.kube
marco@lab-kube-m1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
marco@lab-kube-m1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
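A quick sanity check that the copied kubeconfig works for the normal user (the node will still show up as NotReady until the pod network is deployed in the next step):

marco@lab-kube-m1:~$ kubectl cluster-info
marco@lab-kube-m1:~$ kubectl get nodes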
Before I join my worker nodes, I need to set up the pod networking (CNI). I used Calico to provide connectivity between the pods. As a first step we need to download the Calico manifest (YAML) and modify it to match our pod network. Otherwise you will stay up until the night is nearly over, wondering why the later steps don't work. Especially annoying when the kids get up early in the morning. ;-)
marco@lab-kube-m1:~# wget https://docs.projectcalico.org/manifests/calico.yaml
After downloading the file, we need to edit it with e.g. vim and search for these two lines. By default they are commented out. I changed the value to match the podSubnet configured in my kubeadminit.yaml.
marco@lab-kube-m1:~# vi calico.yaml
[...]
            - name: CALICO_IPV4POOL_CIDR
              value: "10.10.0.0/16"
[...]
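To double-check the edit without opening the file again, a simple grep should show the now uncommented block:

marco@lab-kube-m1:~# grep -n -A1 "CALICO_IPV4POOL_CIDR" calico.yaml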
Now we need to apply the Calico manifest. If you get something like a syntax error, you probably only removed the "#" in front of these two lines. You need to remove "#<space>" to get the YAML indentation right (YAML sucks sometimes).
marco@lab-kube-m1:~# kubectl apply -f calico.yaml
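It takes a moment until all Calico pods are up; you can watch them come up in the kube-system namespace (press Ctrl+C to stop watching):

marco@lab-kube-m1:~$ kubectl get pods --namespace=kube-system -w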
On both worker nodes I ran the join command that kubeadm init printed at the end of its output on the master node, to join them to the Kubernetes cluster.
marco@lab-kube-nX:~# sudo kubeadm join 192.168.30.50:6443 --token y7yaev.9dvwxx6ny4ef8vlq --discovery-token-ca-cert-hash sha256:e49d5d5b84fadea7affed6382e834bb3cba42a21cb14e544b8832668e317c37e
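In my config the token TTL is set to 0s, so it should never expire, but if you join your workers later and your token has expired in the meantime, you can print a fresh join command on the master:

marco@lab-kube-m1:~# sudo kubeadm token create --print-join-command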
You can check whether your cluster is running with this command.
marco@lab-kube-m1:~$ kubectl get pods --namespace=kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-69496d8b75-bm2v4   1/1     Running   0          20h
calico-node-jp89s                          1/1     Running   0          20h
calico-node-pkmhl                          1/1     Running   0          20h
calico-node-v7hdf                          1/1     Running   0          20h
coredns-74ff55c5b-nhtw6                    1/1     Running   0          20h
coredns-74ff55c5b-tmmpl                    1/1     Running   0          20h
etcd-lab-kube-m1                           1/1     Running   0          20h
kube-apiserver-lab-kube-m1                 1/1     Running   0          20h
kube-controller-manager-lab-kube-m1        1/1     Running   0          20h
kube-proxy-7xchf                           1/1     Running   0          20h
kube-proxy-9cnk9                           1/1     Running   0          20h
kube-proxy-cpxpn                           1/1     Running   0          20h
kube-scheduler-lab-kube-m1                 1/1     Running   0          20h
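Besides the system pods, it is also worth confirming that the master and both workers report a Ready status:

marco@lab-kube-m1:~$ kubectl get nodes -o wide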
With these commands I got my cluster up and running. I hope this write-up of my journey is useful for you. In the next part of this series I will deploy all the components needed to use vSphere Cloud Native Storage (CNS) with the vSphere CSI provider.