We are running a Kubernetes cluster on top of CloudStack.
User documentation is here: https://docs.cloud.csclub.uwaterloo.ca/kubernetes/
Enable the Kubernetes plugin from the CloudStack UI. This will require a restart of the management servers.
We currently have one control plane node and three worker nodes. Each node uses the same compute offering (8 CPUs, 16 GB of RAM). Autoscaling is enabled, so CloudStack will automatically create more worker nodes if necessary.
The admin kubeconfig has been installed on biloba and chamomile.
Note that we cannot use LoadBalancers because we are basically running our own load balancer (NGINX) outside of Kubernetes which accepts external traffic. To expose services, use Ingresses or NodePorts instead.
Read this first: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml
Get the NodePort:
kubectl -n ingress-nginx get svc
Create an upstream block in /etc/nginx/nginx.conf which points to the IPs of one or more of the Kubernetes node VMs, using the HTTP port from the NodePort service. Then reload NGINX on biloba and chamomile.
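As a sketch, the NGINX config might look like this (the node IPs, NodePort, and server_name below are placeholders; substitute the real node IPs and the HTTP NodePort reported by `kubectl -n ingress-nginx get svc`):

```nginx
# /etc/nginx/nginx.conf (inside the http block)
# Placeholder addresses: replace with real node IPs and the HTTP NodePort.
upstream kubernetes_ingress {
    server 172.19.134.150:30080;
    server 172.19.134.151:30080;
}

server {
    listen 80;
    server_name example.csclub.uwaterloo.ca;

    location / {
        proxy_pass http://kubernetes_ingress;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Passing the X-Forwarded-* headers here is what makes the `use-forwarded-headers` setting in the ingress-nginx ConfigMap useful, since the controller can then see the original client IP and scheme.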
Mark the NGINX IngressClass as the default IngressClass:
kubectl edit ingressclass nginx
This will open the resource in Vim; add the annotation ingressclass.kubernetes.io/is-default-class: "true" to the annotations section.
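After the edit, the IngressClass should look roughly like this (a sketch; managed fields and labels omitted):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # Ingresses without an explicit ingressClassName will use this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```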
Edit the global ConfigMap:
kubectl -n ingress-nginx edit configmap ingress-nginx-controller
Add the following to the 'data' section:
allow-backend-server-header: "true"
use-forwarded-headers: "true"
proxy-buffer-size: 128k
server-snippet: |
  proxy_http_version 1.1;
  proxy_pass_header Connection;
  proxy_pass_header Upgrade;
We are using a CSI driver for PersistentVolume storage.
UPDATE: don't apply the manifest directly; you'll need to download and edit it first. It seems like the labels on the control plane node changed starting from v1.24.
After downloading the manifest, open it in an editor and change
node-role.kubernetes.io/master: "" to
node-role.kubernetes.io/control-plane: "" (the control plane node label was renamed in v1.24).
If you already applied the manifest and need to edit it, just run
kubectl -n kube-system edit deployment cloudstack-csi-controller.
wget https://github.com/apalia/cloudstack-csi-driver/releases/latest/download/manifest.yaml
# Make the necessary edits
vim manifest.yaml
kubectl apply -f manifest.yaml
To make this the default StorageClass, clone the repo, and edit examples/k8s/0-storageclass.yml so that it looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.cloudstack.apache.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
parameters:
  csi.cloudstack.apache.org/disk-offering-id: 0da1f706-fd2e-4203-8bae-1b740aef9886
Change the disk-offering-id to the ID of the 'custom' disk size offering in CloudStack. Apply the YAML file once you are done editing it.
Note: only a single writer is allowed, so do NOT use ReadWriteMany on any PersistentVolumeClaims.
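For example, a claim against the default StorageClass should use ReadWriteOnce (the name and size here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce   # NOT ReadWriteMany: the CSI driver only supports a single writer
  resources:
    requests:
      storage: 1Gi
  # storageClassName may be omitted to use the default (cloudstack-storage)
```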
Create a PersistentVolumeClaim and bind it to a Pod just to make sure that everything's working:
kubectl apply -f ./examples/k8s/pvc.yaml
kubectl apply -f ./examples/k8s/pod.yaml
Run kubectl get pv to make sure that a PersistentVolume was dynamically provisioned.
Once you're done testing, delete the resources:
kubectl delete -f ./examples/k8s/pvc.yaml
kubectl delete -f ./examples/k8s/pod.yaml
SSH'ing into a node
If you need to SSH into one of the Kubernetes nodes, get the node's IP from the CloudStack UI and run e.g. (the default login user on CKS node templates is typically cloud):
ssh -i /var/lib/cloudstack/management/.ssh/id_rsa cloud@<node IP>
(Do this from biloba or chamomile.)
The original CloudStack Kubernetes ISO which we used (v1.22) used Docker as the container engine, which is no longer supported; after upgrading to v1.24, all hell broke loose because kubelet tried to use containerd instead. As a workaround, we are using cri-dockerd on the control plane and the worker nodes. Each VM should have this in /var/lib/kubelet/kubeadm-flags.env:
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/cri-dockerd.sock --pod-infra-container-image=k8s.gcr.io/pause:3.7"
The solution was found here.
Enabling a feature gate
UPDATE: starting from v1.23, the PodSecurity feature gate is enabled by default, so there is no need to manually enable it. The rest of this section was kept for historical purposes only.
In v1.22, the PodSecurity feature gate is an Alpha feature and must be enabled in kube-apiserver (https://kubernetes.io/docs/concepts/security/pod-security-admission/).
SSH into the control node, and edit /etc/kubernetes/manifests/kube-apiserver.yaml so that the 'command' list has the following flag:
--feature-gates=PodSecurity=true
(If --feature-gates is already present, add the gate after a comma, e.g. --feature-gates=SomeOtherGate=true,PodSecurity=true.)
This will automatically restart the kube-apiserver; wait a minute and run kubectl -n kube-system get pods to check.
The kubeconfig in the CloudStack UI will only last one year (as of this writing, it is expired, so don't use it). If it expires again, here's how you can renew it:
SSH into the control plane VM (see instructions above), then create a file called e.g. kubeadm-config.yaml with this content:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: "kubernetes"
controlPlaneEndpoint: "172.19.134.149:6443"
certificatesDir: "/etc/kubernetes/pki"
Generate a new admin kubeconfig which will last ten years:
kubeadm kubeconfig user --config=kubeadm-config.yaml --client-name=kubernetes-admin --org=system:masters --validity-period=87600h0m0s
Copy the output into /root/.kube/config on biloba and chamomile.
Kubelet certificate rotation
SSH into the control plane and make sure that rotateCertificates: true is set in /var/lib/kubelet/config.yaml.
What to do if the certificates expire
SSH into the control plane VM. To check which certificates are expired, run kubeadm certs check-expiration. If the admin kubeconfig has not expired, you can also run kubectl -n kube-system get cm kubeadm-config -o yaml from biloba or chamomile.
To renew the expired certificates, run
kubeadm certs renew all
You now need to restart kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. Unfortunately, I was never able to figure out how to do this: deleting the pods doesn't seem to work, and we might need to restart all of the Docker containers running on the control plane. You can check the logs of the kube-apiserver pod to see if it's still having certificate expiry issues.
Failing that, the safe but slow option is to just restart all of the Kubernetes VMs from the CloudStack web UI.
To be sure that everything is working again, make sure that you can create a temporary pod successfully.
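A minimal smoke test (the image and names here are arbitrary) is to apply a throwaway pod, wait for it to reach Completed, then delete it:

```yaml
# smoke-test.yaml: if this pod gets scheduled and completes, the API
# server, scheduler, and kubelet are all working again.
apiVersion: v1
kind: Pod
metadata:
  name: smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: smoke-test
      image: busybox
      command: ["echo", "ok"]
```

Apply with kubectl apply -f smoke-test.yaml, check with kubectl get pod smoke-test, and clean up with kubectl delete -f smoke-test.yaml.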
ceo manages the creation of new Kubernetes namespaces for members. See here to see how this works.
We are also using OPA Gatekeeper to restrict the Ingresses which members can create. See here and here for details.
Certificate Signing Requests
We're going to set the max. CSR signing duration to 10 years so that members don't have to worry about their kubeconfig cert expiring (at least, not for a long time).
SSH into the control node and edit /etc/kubernetes/manifests/kube-controller-manager.yaml so that the 'command' list has the following flag:
--cluster-signing-duration=87600h0m0s
The controller will automatically restart after you save and close the file.