Kubernetes
We are running a Kubernetes cluster on top of CloudStack.
User documentation is here: https://docs.cloud.csclub.uwaterloo.ca/kubernetes/
CloudStack setup
Enable the Kubernetes plugin from the CloudStack UI. This will require a restart of the management servers.
We currently have one control node and 3 worker nodes. Each node is using the same Compute offering (8 CPUs, 16GB of RAM). Autoscaling is enabled, so CloudStack will automatically create more worker nodes if necessary.
The admin kubeconfig has been installed on biloba and chamomile.
Note that we cannot use Services of type LoadBalancer, because we are basically running our own load balancer (NGINX) outside of Kubernetes which accepts external traffic. To expose services, use Ingresses or NodePorts instead.
NGINX Ingress
Read this first: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
Then run:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml
Get the NodePort:
kubectl -n ingress-nginx get svc
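If you only need the HTTP port, a jsonpath query along the following lines should also work (ingress-nginx-controller is the Service name that the deploy manifest above normally creates; adjust if yours differs):
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'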
Create an upstream in /etc/nginx/nginx.conf which points to the IPs of one or more Kubernetes VMs, with the HTTP port from the NodePort. Then, reload NGINX on biloba and chamomile.
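As a rough sketch, the upstream might look like the following (the IPs and port below are placeholders; substitute the node IPs from the CloudStack UI and the NodePort from the previous step, and point the existing server block at it with proxy_pass http://k8s_ingress;):
upstream k8s_ingress {
    # Kubernetes node IPs with the ingress-nginx HTTP NodePort (placeholder values)
    server 172.19.134.150:30080;
    server 172.19.134.151:30080;
}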
Mark the NGINX IngressClass as the default IngressClass:
kubectl edit ingressclass nginx
This will open up Vim; add the annotation ingressclass.kubernetes.io/is-default-class: "true" to the annotations section.
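The metadata section should end up looking roughly like this excerpt (spec is left unchanged). Alternatively, kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true" does the same thing without opening an editor:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"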
Edit the global ConfigMap:
kubectl -n ingress-nginx edit configmap ingress-nginx-controller
Add the following to the 'data' section:
allow-backend-server-header: "true"
use-forwarded-headers: "true"
proxy-buffer-size: 128k
server-snippet: |
  proxy_http_version 1.1;
  proxy_pass_header Connection;
  proxy_pass_header Upgrade;
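With the controller installed and nginx set as the default IngressClass, a service can be exposed with an ordinary Ingress. A minimal sketch (name, namespace, host, and backend Service are placeholders; no ingressClassName is needed since nginx is the default):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
spec:
  rules:
    - host: my-app.csclub.uwaterloo.ca
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80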
CSI Driver
We are using a CSI driver for PersistentVolume storage.
Installation:
kubectl apply -f https://github.com/apalia/cloudstack-csi-driver/releases/latest/download/manifest.yaml
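To sanity-check the installation, the driver's controller and node pods should come up shortly afterwards; something like the following should list them (the pod names and namespace depend on the release manifest):
kubectl get pods --all-namespaces | grep cloudstack-csi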
To make this the default StorageClass, clone the repo, and edit examples/k8s/0-storageclass.yml so that it looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.cloudstack.apache.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
parameters:
  csi.cloudstack.apache.org/disk-offering-id: 0da1f706-fd2e-4203-8bae-1b740aef9886
Change the disk-offering-id to the ID of the 'custom' disk size offering in CloudStack. Apply the YAML file once you are done editing it.
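(The offering ID can be found on the disk offerings page of the CloudStack UI; CloudMonkey's list diskofferings should also work if you have cmk configured.) Applying the edited file is then just:
kubectl apply -f ./examples/k8s/0-storageclass.yml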
Note: only a single writer is allowed, so do NOT use ReadWriteMany on any PersistentVolumeClaims.
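For reference, a PersistentVolumeClaim against this StorageClass might look like the following sketch (name and size are placeholders); note the ReadWriteOnce access mode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cloudstack-storage
  resources:
    requests:
      storage: 5Gi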
Testing
Create a PersistentVolumeClaim and bind it to a Pod just to make sure that everything's working:
kubectl apply -f ./examples/k8s/pvc.yaml
kubectl apply -f ./examples/k8s/pod.yaml
Run kubectl get pv to make sure that a PersistentVolume was dynamically provisioned.
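The claim itself should also show as Bound once the Pod is scheduled (with volumeBindingMode: WaitForFirstConsumer, the volume is only provisioned after a Pod actually uses the claim):
kubectl get pvc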
Once you're done testing, delete the resources:
kubectl delete -f ./examples/k8s/pvc.yaml
kubectl delete -f ./examples/k8s/pod.yaml
SSH'ing into a node
If you need to SSH into one of the Kubernetes nodes, get the IP from the CloudStack UI, and run e.g.
ssh -i /var/lib/cloudstack/management/.ssh/id_rsa core@172.19.134.149
(Do this from biloba or chamomile.)
Enabling a feature gate
In v1.22, the PodSecurity feature gate is an Alpha feature and must be enabled in kube-apiserver (https://kubernetes.io/docs/concepts/security/pod-security-admission/).
SSH into the control node, and edit /etc/kubernetes/manifests/kube-apiserver.yaml so that the 'command' list has the following flag:
--feature-gates=PodSecurity=true
(If the flag is already present, add the gate after a comma, e.g. --feature-gates=Feature1=true,PodSecurity=true.)
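The relevant part of the manifest would end up looking roughly like this excerpt (all other existing flags stay as they are):
spec:
  containers:
    - command:
        - kube-apiserver
        - --feature-gates=PodSecurity=true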
This will automatically restart the kube-apiserver; wait a minute and run kubectl -n kube-system get pods to check.
Members
ceo manages the creation of new Kubernetes namespaces for members. See here for how this works.
We are also using OPA Gatekeeper to restrict the Ingresses which members can create. See here and here for details.
Certificate Signing Requests
We're going to set the maximum CSR signing duration to 10 years (87600 hours) so that members don't have to worry about their kubeconfig cert expiring (at least, not for a long time).
SSH into the control node and edit /etc/kubernetes/manifests/kube-controller-manager.yml so that it has the following CLI flag:
--cluster-signing-duration=87600h
The controller will automatically restart after you save and close the file.
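To confirm that the controller picked up the new flag, something like this should show it in the running pod spec (kubeadm normally labels the static pod with component=kube-controller-manager; adjust if the label differs):
kubectl -n kube-system get pods -l component=kube-controller-manager -o yaml | grep cluster-signing-duration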