Cloud: Compute Node Setup

From CSCWiki
Latest revision as of 20:58, 29 December 2021

UPDATE: this page is deprecated, as it is for the old cloud which used OpenStack. We are now using [[CloudStack]] instead.

== Machine setup ==

=== Disk configuration ===

A block device with lots of disk space is required, to be mounted at /var/lib/nova/instances.
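
As a sketch, the corresponding <code>/etc/fstab</code> entry might look like the following. The device name and filesystem are assumptions for illustration, not from this page; substitute the real block device:

<pre># /dev/sdb1 is a placeholder for the actual instance-storage device
/dev/sdb1  /var/lib/nova/instances  ext4  defaults  0  2</pre>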

=== Networking configuration ===

'''Do not register the IPv6 address in DNS. This may cause issues with OpenStack services.'''

We will be using 2 interfaces on the machine:

* 1gbps for management (VLAN 529 (CSC Cloud Management))
* 10gbps for VMs (VLAN 134 (MSO), 425 (CSC Cloud))

==== Fix ebtables ====

<pre>update-alternatives --config ebtables</pre>

and choose "ebtables-legacy". This is necessary to work around missing features in ebtables-nft.
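
For unattended setup, the same selection can be made non-interactively. The alternative path below assumes Debian's standard location for the legacy binary:

<pre>update-alternatives --set ebtables /usr/sbin/ebtables-legacy</pre>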

==== <code>/etc/sysctl.d/10-ipv6.conf</code> ====

<pre># Disable autoconf
net.ipv6.conf.all.autoconf=0
net.ipv6.conf.default.autoconf=0

# Stop accepting router advertisements
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.default.accept_ra=0

# Do not use temporary addresses
net.ipv6.conf.all.use_tempaddr=0
net.ipv6.conf.default.use_tempaddr=0</pre>
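
These settings only take effect at boot; to apply the new file immediately without rebooting:

<pre>sysctl -p /etc/sysctl.d/10-ipv6.conf</pre>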

==== <code>/etc/network/interfaces</code> ====

Configure the switch port with VLAN 529 as the untagged VLAN, and VLANs 134 and 425 as the tagged VLANs.

<pre># Management interface
auto $INTERFACE
iface $INTERFACE inet static
   address        172.19.168.XX
   netmask        255.255.255.224
   gateway        172.19.168.1

iface $INTERFACE inet6 static
   address fd74:6b6a:8eca:4902::XX
   netmask 64
   gateway fd74:6b6a:8eca:4902::1

#################
# VM NETWORKING #
#################

auto $INTERFACE.134
iface $INTERFACE.134 inet manual
iface $INTERFACE.134 inet6 manual
   vlan-raw-device $INTERFACE

auto $INTERFACE.425
iface $INTERFACE.425 inet manual
iface $INTERFACE.425 inet6 manual
   vlan-raw-device $INTERFACE</pre>
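
Once written, the configuration can be brought up with ifupdown (substitute the real interface name for <code>$INTERFACE</code>):

<pre>ifup $INTERFACE $INTERFACE.134 $INTERFACE.425</pre>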

== Compute service ==

=== Prerequisites ===

* debian.csclub APT repository configured

=== Installation ===

==== Configure virtualization ====

<ol style="list-style-type: decimal;">
<li><p>Allow syscom access to libvirt.</p>
<p><code>/etc/polkit-1/localauthority/50-local.d/libvirt.pkla</code></p>
<pre class="conf">[Allow syscom to libvirt]
Identity=unix-group:syscom
Action=org.libvirt.unix.manage
ResultAny=yes</pre></li>
<li><p><code>sudo apt install qemu qemu-kvm libvirt-bin bridge-utils</code></p></li></ol>
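
Before going further, it is worth checking that the CPU exposes hardware virtualization; a non-zero count means VT-x/AMD-V is available:

<pre>egrep -c '(vmx|svm)' /proc/cpuinfo</pre>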

==== Install Nova Compute ====

Based on the official OpenStack document:

* https://docs.openstack.org/ocata/install-guide-ubuntu/nova-compute-install.html

<code>sudo apt install nova-compute neutron-linuxbridge-agent</code>

Now configure:

===== <code>/etc/nova/nova.conf</code> =====
<pre class="conf">[DEFAULT]
state_path=/var/lib/nova
enabled_apis=osapi_compute,metadata
transport_url=rabbit://$USER:$PASS@rabbit.cloud.csclub.uwaterloo.ca
auth_strategy=keystone
my_ip=172.19.168.XX
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
default_availability_zone = csc-mc
compute_monitors = cpu.virt_driver,numa_mem_bw.virt_driver

[oslo_concurrency]
lock_path=/var/lock/nova

[database]
connection=mysql+pymysql://$USER:$PASS@db.cloud.csclub.uwaterloo.ca/nova_api

[libvirt]
use_virtio_for_bridges=True
inject_password=true
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
block_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, VIR_MIGRATE_NON_SHARED_INC
cpu_mode = custom
cpu_model = Broadwell

[keystone_authtoken]
auth_uri = https://auth.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
memcached_servers = memcache1.cloud.csclub.uwaterloo.ca:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = $USER
password = $PASS

[vnc]
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = https://console.cloud.csclub.uwaterloo.ca/vnc_auto.html

[glance]
api_servers = https://image.cloud.csclub.uwaterloo.ca

[neutron]
url = https://network.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
region_name = csc-mc
username = $USER
password = $PASS

[placement]
os_region_name = csc-mc
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = https://admin.cloud.csclub.uwaterloo.ca/v3
username = $USER
password = $PASS</pre>

===== <code>/etc/neutron/neutron.conf</code> =====
<pre class="conf">[DEFAULT]
# ...
transport_url=rabbit://$USER:$PASS@rabbit.cloud.csclub.uwaterloo.ca
auth_strategy=keystone

[keystone_authtoken]
auth_uri = https://auth.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
memcached_servers = memcache1.cloud.csclub.uwaterloo.ca:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = $USER
password = $PASS</pre>

===== <code>/etc/neutron/plugins/ml2/linuxbridge_agent.ini</code> =====
<pre class="conf">[linux_bridge]
physical_interface_mappings=mso-internet:$INTERFACE.134, mso-intranet:$INTERFACE.425

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
enable_vxlan=true
local_ip=172.19.168.XX
l2_population=true</pre>

Then:

<code>sudo systemctl restart nova-compute neutron-linuxbridge-agent</code>

=== Add mapping ===

On controller1.cloud:

<pre>. ~/cloud-admin
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova</pre>
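
To confirm the new hypervisor was discovered, the compute service list can be checked from any host with admin credentials loaded (a suggested check, not part of the original procedure); the new node should appear with state "up":

<pre>openstack compute service list --service nova-compute</pre>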

=== Setup migration ===

* Add SSH keys for root and nova on each compute node
* Change the nova user's shell to /bin/sh
* Enable the <code>--listen</code> flag for libvirt (/etc/default/libvirtd)
* Enable <code>listen_tcp</code> and set <code>auth_tcp = "none"</code> in /etc/libvirt/libvirtd.conf
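
The last two bullets correspond to roughly the following fragments. This is a sketch: the option names come from libvirt's stock libvirtd.conf, the default-file variable name assumes Debian's packaging, and <code>listen_tls = 0</code> is added because <code>--listen</code> with TLS enabled requires certificates. Note that TCP with <code>auth_tcp = "none"</code> is unauthenticated and belongs only on the trusted management VLAN:

<pre># /etc/default/libvirtd
libvirtd_opts="--listen"

# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"</pre>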