CloudStack

Revision as of 23:42, 29 December 2021

We are using Apache CloudStack to provide VMs-as-a-service to members. Our user documentation is here: https://docs.cloud.csclub.uwaterloo.ca

Prerequisite reading:

Official CloudStack documentation: http://docs.cloudstack.apache.org/en/latest/

Building packages

While CloudStack does provide .deb packages for Ubuntu, unfortunately these don't work on Debian (the 'qemu-kvm' dependency is a virtual package on Debian, but not on Ubuntu). So we're going to build our own packages instead.

We're going to perform the build in a Podman container to avoid polluting the host machine with unnecessary packages. There's a container called cloudstack-build on biloba which you can re-use. If you create a new container, make sure to use the same Podman image as the release for which you're building (e.g. 'debian:bullseye').
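Creating such a container might look like this (the container name and image come from this page; the exact flags are an assumption):

```shell
# Create a throwaway build container matching the target release.
# '--name cloudstack-build' matches the existing container on biloba.
podman run -it --name cloudstack-build debian:bullseye bash

# To get back into it later:
podman start -ai cloudstack-build
```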

The instructions below are adapted from http://docs.cloudstack.apache.org/en/latest/installguide/building_from_source.html

Inside the container, install the dependencies:

apt install maven openjdk-11-jdk libws-commons-util-java libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java genisoimage devscripts debhelper python3-setuptools

Install Node.js 12 as well (Debian bullseye's version happens to be 12):

apt install nodejs npm

Build the node-sass module (see this issue for why this is necessary):

cd ui && npm install && npm rebuild node-sass && cd ..

The python3-mysql.connector package is not available in bullseye, so we're going to download and install it from the sid release:

curl -LOJ http://ftp.ca.debian.org/debian/pool/main/m/mysql-connector-python/python3-mysql.connector_8.0.15-2_all.deb
apt install ./python3-mysql.connector_8.0.15-2_all.deb

Download the CloudStack source code:

curl -LOJ http://mirror.csclub.uwaterloo.ca/apache/cloudstack/releases/4.16.0.0/apache-cloudstack-4.16.0.0-src.tar.bz2
tar -jxvf apache-cloudstack-4.16.0.0-src.tar.bz2
cd apache-cloudstack-4.16.0.0-src

Download the Maven dependencies:

mvn -P deps

Now open debian/control and perform the following changes:

  • Replace 'qemu-kvm (>=2.5)' with 'qemu-system-x86 (>= 1:5.2)' in the dependencies of cloudstack-agent
  • Remove dh-systemd as a build dependency of cloudstack (it's included in debhelper)

Now open debian/rules and add the following flags to the mvn command:

-Dmaven.test.skip=true -Dclean.skip=true -Dcheckstyle.skip

Now open debian/changelog and change 'unstable' to 'bullseye'.
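If you prefer to script the debian/control and debian/changelog edits, sed commands along these lines should work (a sketch; the exact spacing and version constraints in debian/control may differ, so inspect the files afterwards — the mvn flags in debian/rules are easiest to add by hand):

```shell
# Swap the qemu-kvm dependency for qemu-system-x86
# (parentheses are literal characters in sed basic regexes)
sed -i 's/qemu-kvm (>=2.5)/qemu-system-x86 (>= 1:5.2)/' debian/control

# Drop dh-systemd from the build dependencies; it may carry a version
# constraint and sit mid-list, hence the permissive pattern
sed -i 's/dh-systemd[^,]*, *//' debian/control

# Point the changelog entry at bullseye instead of unstable
sed -i '1s/unstable/bullseye/' debian/changelog
```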

As of this writing, there is a bug in libvirt which prevents VMs with more than 4GB of RAM from being created on hosts with cgroups2. Until that issue is fixed, we're going to need to modify the source code. Since we're already building a custom CloudStack package, it's easier to patch CloudStack than to patch libvirt, so paste something like the following into debian/patches/fix-cgroups2-cpu-weight.patch:

Description: Workaround for libvirt trying to write a value to the cgroups v2
  cpu.weight controller which is greater than the maximum (10000). The
  libvirt developers are currently discussing a solution.
Forwarded: not-needed
Origin: upstream, https://gitlab.com/libvirt/libvirt/-/issues/161
Author: Max Erenberg <merenber@csclub.uwaterloo.ca>
Last-Update: 2021-12-03
Index: apache-cloudstack-4.16.0.0-src/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
===================================================================
--- apache-cloudstack-4.16.0.0-src.orig/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
+++ apache-cloudstack-4.16.0.0-src/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
@@ -1483,6 +1483,10 @@ public class LibvirtVMDef {
         static final int MAX_PERIOD = 1000000;
 
         public void setShares(int shares) {
+           // Clamp the value to the cgroups v2 cpu.weight maximum until
+           // upstream libvirt gets fixed:
+           // https://gitlab.com/libvirt/libvirt/-/issues/161
+           shares = Math.min(shares, 10000);
             _shares = shares;
         }

I think you have to manually modify that LibvirtVMDef.java file to incorporate those changes (I could be wrong on this, but that's how I did it).

Then paste the following into debian/patches/00list:

fix-cgroups2-cpu-weight

Finally, import your GPG key into the container (make sure to delete it afterwards!), and build the packages:

debuild -k<YOUR_GPG_KEY_ID>
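The import/build/cleanup sequence might look like this (the key file name is a placeholder; `--delete-secret-keys` will prompt for confirmation):

```shell
# Import the private key into the container's keyring
gpg --import key.asc            # 'key.asc' is a placeholder file name

# Build and sign the packages
debuild -k<YOUR_GPG_KEY_ID>

# Remove the key from the container afterwards
gpg --delete-secret-keys <YOUR_GPG_KEY_ID>
gpg --delete-keys <YOUR_GPG_KEY_ID>
```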

There should already be a .dupload.conf in the /root directory in the cloudstack-build container; if you need another copy, ask a syscom member. Open /root/.ssh/config and change the User parameter to your username. Finally, go to /root and upload the packages to potassium-benzoate (replace the version number):

dupload cloudstack_4.16.0.0+1_amd64.changes

Database setup

We are using master-master replication between two MariaDB instances on biloba and chamomile. See here and here for instructions on how to set this up.

To avoid split-brain syndrome, mariadb.cloud.csclub.uwaterloo.ca points to a virtual IP shared by biloba and chamomile via keepalived. This means that only one host is actually handling requests at any moment; the other is a hot standby.
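A minimal keepalived.conf sketch for the shared virtual IP (the interface name, router ID, priority and the VIP itself are placeholders; use the values actually configured on biloba and chamomile):

```
vrrp_instance mariadb {
    state BACKUP            # both hosts start as BACKUP; priority elects the master
    interface eth0          # placeholder interface name
    virtual_router_id 51    # placeholder VRID; must match on both hosts
    priority 100            # give one host a higher priority than the other
    advert_int 1
    virtual_ipaddress {
        172.19.168.10/27    # placeholder VIP for mariadb.cloud.csclub.uwaterloo.ca
    }
}
```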

Also add the following parameters to /etc/mysql/my.cnf on the hosts running MariaDB:

[mysqld]
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'

Also comment out (or remove) the following line in /etc/mysql/mariadb.conf.d/50-server.cnf:

bind-address = 127.0.0.1

Now restart MariaDB.

Management server setup

Install the management server from our Debian repository:

apt install cloudstack-management

Run the database scripts:

cloudstack-setup-databases cloud:password@localhost --deploy-as=root

(Replace 'password' with a strong password.)

Open /etc/cloudstack/management/db.properties and replace all instances of 'localhost' with 'mariadb.cloud.csclub.uwaterloo.ca'.

Open /etc/cloudstack/management/server.properties and set 'bind-interface' to 127.0.0.1 (CloudStack is being reverse proxied behind NGINX).

Run some more scripts:

cloudstack-setup-management

Mount the cloudstack-secondary CephFS volume at /mnt/cloudstack-secondary:

mkdir /mnt/cloudstack-secondary
mount -t nfs4 -o port=2049 ceph-nfs.cloud.csclub.uwaterloo.ca:/cloudstack-secondary /mnt/cloudstack-secondary
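To make the mount persist across reboots, an /etc/fstab entry along these lines should work (a sketch; the `_netdev` option is an assumption to delay mounting until the network is up):

```
ceph-nfs.cloud.csclub.uwaterloo.ca:/cloudstack-secondary /mnt/cloudstack-secondary nfs4 port=2049,_netdev 0 0
```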

Now download the management VM template:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/cloudstack-secondary/ -u https://download.cloudstack.org/systemvm/4.16/systemvmtemplate-4.16.0-kvm.qcow2.bz2 -h kvm -F

The management server will run on port 8080 by default, so reverse proxy it from NGINX:

location / {
  proxy_pass http://localhost:8080;
}
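If you want the proxy to pass client information along, a slightly fuller location block might look like this (the header lines are standard NGINX directives, not something CloudStack specifically requires):

```nginx
location / {
  proxy_pass http://localhost:8080;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
}
```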

Compute node setup

Install packages:

apt install cloudstack-agent libvirt-daemon-driver-storage-rbd qemu-block-extra

Create a new user for CloudStack:

useradd -s /bin/bash -d /nonexistent -M cloudstack
# set the password
passwd cloudstack

Add the following to /etc/sudoers:

cloudstack ALL=(ALL) NOPASSWD:ALL     
Defaults:cloudstack !requiretty

(There is a way to restrict this, but I was never able to get it to work.)

Network setup

The /etc/network/interfaces file should look something like this (taking ginkgo as an example):

auto enp3s0f0
iface enp3s0f0 inet manual

auto ens1f0np0
iface ens1f0np0 inet manual

# csc-cloud management
auto enp3s0f0.529
iface enp3s0f0.529 inet manual

auto br529
iface br529 inet static
    bridge_ports enp3s0f0.529
    address 172.19.168.22/27
iface br529 inet6 static
    bridge_ports enp3s0f0.529
    address fd74:6b6a:8eca:4902::22/64

# csc-cloud provider
auto ens1f0np0.425
iface ens1f0np0.425 inet manual

auto br425
iface br425 inet manual
    bridge_ports ens1f0np0.425

# csc server network
auto ens1f0np0.134
iface ens1f0np0.134 inet manual

auto br134
iface br134 inet static
    bridge_ports ens1f0np0.134
    address 129.97.134.148/24
    gateway 129.97.134.1
iface br134 inet6 static
    bridge_ports ens1f0np0.134
    address 2620:101:f000:4901:c5c::148/64
    gateway 2620:101:f000:4901::1

Add/modify the following lines to /etc/cloudstack/agent.properties:

private.network.device=br529
guest.network.device=br425
public.network.device=br425

libvirtd setup

Add/modify the following lines in /etc/libvirt/libvirtd.conf:

listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"
mdns_adv = 0

Uncomment the following line in /etc/default/libvirtd:

LIBVIRTD_ARGS="--listen"

Make sure the following lines are present in /etc/libvirt/qemu.conf:

security_driver="none"
user="root"
group="root"

Now run:

systemctl mask libvirtd.socket
systemctl mask libvirtd-ro.socket
systemctl mask libvirtd-admin.socket
systemctl restart libvirtd
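To confirm libvirtd is actually listening on TCP, a connection check like this should succeed (the URI follows from the tcp_port and auth_tcp settings above):

```shell
# Prints libvirt/QEMU version info if the TCP listener is up
virsh -c qemu+tcp://127.0.0.1:16509/system version
```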

Management server setup (cont'd)

Now start the cloudstack-management systemd service and visit the web UI (https://cloud.csclub.uwaterloo.ca). The login credentials are 'admin' for both the username and password. Start the setup walkthrough (you will be prompted to change the password).

The walkthrough is almost certainly going to fail (at least, it did for me). Don't panic when this happens; just abort the walkthrough and set up everything else manually. Once primary and secondary storage have been set up, enable the Pod, Cluster and Zone (there should be only one of each).

Primary Storage

  • Type: RBD
  • IP address: ceph-mon.cloud.csclub.uwaterloo.ca
  • Scope: zone
  • Use the credentials you created in Ceph#CloudStack_Primary_Storage

Secondary Storage

  • Type: NFS
  • Host: ceph-nfs.cloud.csclub.uwaterloo.ca:2049
  • Path: /cloudstack-secondary