Cloud: Compute Node Setup

UPDATE: this page is deprecated, as it is for the old cloud which used OpenStack. We are now using CloudStack instead.

Machine setup

Disk configuration

A block device with lots of disk space (to be mounted at /var/lib/nova/instances)
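
For example, assuming the extra disk shows up as /dev/sdb (the device name and filesystem here are placeholders; mounting by UUID is also an option), it could be prepared as root along these lines:

mkfs.ext4 /dev/sdb
mkdir -p /var/lib/nova/instances
echo '/dev/sdb /var/lib/nova/instances ext4 defaults 0 2' >> /etc/fstab
mount /var/lib/nova/instances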

Networking configuration

Do not register the IPv6 address in DNS. This may cause issues with OpenStack services.

We will be using 2 interfaces on the machine:

  • 1gbps for management (VLAN 529 (CSC Cloud Management))
  • 10gbps for VMs (VLAN 134 (MSO), 425 (CSC Cloud))

Fix ebtables

update-alternatives --config ebtables

and choose "ebtables-legacy". This is necessary to work around missing features in ebtables-nft.
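
To confirm the switch took effect, check which alternative is now active (it should point at ebtables-legacy):

update-alternatives --display ebtables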

/etc/sysctl.d/10-ipv6.conf

# Disable autoconf
net.ipv6.conf.all.autoconf=0
net.ipv6.conf.default.autoconf=0

# Stop accepting router advertisements
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.default.accept_ra=0

# Do not use temporary addresses
net.ipv6.conf.all.use_tempaddr=0
net.ipv6.conf.default.use_tempaddr=0
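
Apply the settings immediately (they will also be picked up on the next boot):

sudo sysctl --system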

/etc/network/interfaces

Configure the switch port with VLAN 529 as the untagged VLAN, and VLANs 134 and 425 as the tagged VLANs.

# Management interface
auto $INTERFACE
iface $INTERFACE inet static
   address        172.19.168.XX
   netmask        255.255.255.224
   gateway        172.19.168.1

iface $INTERFACE inet6 static
   address fd74:6b6a:8eca:4902::XX
   netmask 64
   gateway fd74:6b6a:8eca:4902::1

#################
# VM NETWORKING #
#################

auto $INTERFACE.134
iface $INTERFACE.134 inet manual
iface $INTERFACE.134 inet6 manual
   vlan-raw-device $INTERFACE

auto $INTERFACE.425
iface $INTERFACE.425 inet manual
iface $INTERFACE.425 inet6 manual
   vlan-raw-device $INTERFACE
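
The vlan-raw-device stanzas rely on 802.1q support in ifupdown, which on a stock Debian install typically comes from the vlan package. A quick way to bring the new subinterfaces up and sanity-check them:

sudo apt install vlan
sudo ifup $INTERFACE.134 $INTERFACE.425
ip -br link show | grep $INTERFACE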

Compute service

Prerequisites

  • debian.csclub APT repository configured

Installation

Configure virtualization

sudo apt install qemu qemu-kvm libvirt-bin bridge-utils
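
Before going further, it is worth verifying that hardware virtualization is available and libvirt is working:

egrep -c '(vmx|svm)' /proc/cpuinfo   # should print a number >= 1
sudo virsh list --all                # should print an (empty) domain list without errors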

Install Nova Compute

Based on the official OpenStack documentation:

sudo apt install nova-compute neutron-linuxbridge-agent

Now configure:

/etc/nova/nova.conf
[DEFAULT]
state_path=/var/lib/nova
enabled_apis=osapi_compute,metadata
transport_url=rabbit://$USER:$PASS@rabbit.cloud.csclub.uwaterloo.ca
auth_strategy=keystone
my_ip=172.19.168.XX
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
default_availability_zone = csc-mc
compute_monitors = cpu.virt_driver,numa_mem_bw.virt_driver

[oslo_concurrency]
lock_path=/var/lock/nova

[database]
connection=mysql+pymysql://$USER:$PASS@db.cloud.csclub.uwaterloo.ca/nova_api

[libvirt]
use_virtio_for_bridges=True
inject_password=true
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
block_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, VIR_MIGRATE_NON_SHARED_INC
cpu_mode = custom
cpu_model = Broadwell

[keystone_authtoken]
auth_uri = https://auth.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
memcached_servers = memcache1.cloud.csclub.uwaterloo.ca:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = $USER
password = $PASS

[vnc]
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = https://console.cloud.csclub.uwaterloo.ca/vnc_auto.html

[glance]
api_servers = https://image.cloud.csclub.uwaterloo.ca

[neutron]
url = https://network.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
auth_type = password
project_domain_name = Default 
user_domain_name = Default
project_name = service
region_name = csc-mc
username = $USER
password = $PASS

[placement]
os_region_name = csc-mc
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = https://admin.cloud.csclub.uwaterloo.ca/v3
username = $USER
password = $PASS

/etc/neutron/neutron.conf
[DEFAULT]
# ...
transport_url=rabbit://$USER:$PASS@rabbit.cloud.csclub.uwaterloo.ca
auth_strategy=keystone

[keystone_authtoken]
auth_uri = https://auth.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
memcached_servers = memcache1.cloud.csclub.uwaterloo.ca:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = $USER
password = $PASS

/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings=mso-internet:$INTERFACE.134, mso-intranet:$INTERFACE.425

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
enable_vxlan=true
local_ip=172.19.168.XX
l2_population=true
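
The Linux bridge agent also expects bridged traffic to pass through iptables so that security groups take effect; per the upstream install guide this means loading br_netfilter and enabling the bridge-nf sysctls. A sketch (the file names below are only suggestions):

echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo modprobe br_netfilter
printf 'net.bridge.bridge-nf-call-iptables=1\nnet.bridge.bridge-nf-call-ip6tables=1\n' | sudo tee /etc/sysctl.d/20-bridge-nf.conf
sudo sysctl --system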

Then:

sudo systemctl restart nova-compute neutron-linuxbridge-agent
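
If either service misbehaves, check its status and recent logs:

systemctl status nova-compute neutron-linuxbridge-agent
sudo journalctl -u nova-compute -u neutron-linuxbridge-agent --since "15 min ago"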

Add mapping

On controller1.cloud:

. ~/cloud-admin
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
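
With the admin credentials still sourced, the new node should now appear both as a compute service and as a Linux bridge agent (exact output will vary):

openstack compute service list --service nova-compute
openstack network agent list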

Setup migration

  • Add SSH keys for root and nova on each compute node
  • Change nova user shell to /bin/sh
  • Enable the --listen flag for libvirtd (/etc/default/libvirtd)
  • Enable listen_tcp and set auth_tcp = "none" in /etc/libvirt/libvirtd.conf (see the sketch below)
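
A rough sketch of the libvirt changes, assuming the Debian packaging of libvirtd (the exact variable name in /etc/default/libvirtd differs between releases, and newer libvirt versions use systemd socket activation instead of --listen, so treat this as illustrative):

# /etc/default/libvirtd
libvirtd_opts="--listen"

# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

Then restart libvirtd:

sudo systemctl restart libvirtd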