Cloud: Compute Node Setup
Revision as of 20:32, 24 March 2018
NOTE: These instructions are a WIP
Machine setup
Disk configuration
/var/lib/nova/instances should be mounted somewhere with plenty of space (after nova-compute is installed)
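As a sketch, assuming the large volume is an LVM logical volume named /dev/mapper/vg0-instances (a hypothetical name; substitute the actual device), the mount could be declared in /etc/fstab:

```
# /etc/fstab — hypothetical device name and filesystem; adjust to the real volume
/dev/mapper/vg0-instances  /var/lib/nova/instances  ext4  defaults  0  2
```

After `mount /var/lib/nova/instances`, make sure the mount point is owned by nova:nova so the compute service can write to it.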
Networking configuration
We will use two interfaces on the machine:
- 1gbps for management (VLAN 529 (CSC Cloud Management))
- 10gbps for VMs (VLAN 134 (MSO), 425 (CSC Cloud))
/etc/sysctl.d/10-ipv6.conf
# Disable autoconf
net.ipv6.conf.all.autoconf=0
net.ipv6.conf.default.autoconf=0
# Stop accepting router advertisements
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.default.accept_ra=0
# Do not use temporary addresses
net.ipv6.conf.all.use_tempaddr=0
net.ipv6.conf.default.use_tempaddr=0
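Once the file is in place, the settings can be applied without a reboot and spot-checked (requires root):

```shell
# Reload all sysctl.d fragments, then verify one of the new values
sudo sysctl --system
sysctl net.ipv6.conf.all.accept_ra    # should print: net.ipv6.conf.all.accept_ra = 0
```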
/etc/network/interfaces
# Management interface
auto eno1
iface eno1 inet static
    address 172.19.168.23
    netmask 255.255.255.224
    gateway 172.19.168.1

iface eno1 inet6 static
    address fd74:6b6a:8eca:4902::23
    netmask 64
    gateway fd74:6b6a:8eca:4902::1

#################
# VM NETWORKING #
#################
auto enp94s0.134
iface enp94s0.134 inet manual
iface enp94s0.134 inet6 manual
    vlan-raw-device enp94s0

auto enp94s0.425
iface enp94s0.425 inet manual
iface enp94s0.425 inet6 manual
    vlan-raw-device enp94s0
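After bringing the interfaces up, a quick sanity check with iproute2 confirms the addresses and VLAN tags took effect:

```shell
ip -br addr show eno1           # management IPv4/IPv6 addresses should be listed
ip -d link show enp94s0.134     # output should include "vlan ... id 134"
ip -d link show enp94s0.425     # output should include "vlan ... id 425"
```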
Compute service
Prerequisites
- debian.csclub APT repository configured
Installation
Configure virtualization
Allow syscom access to libvirt.
/etc/polkit-1/localauthority/50-local.d/libvirt.pkla
[Allow syscom to libvirt]
Identity=unix-group:syscom
Action=org.libvirt.unix.manage
ResultAny=yes
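To confirm the rule works, a member of the syscom group (not root) should be able to talk to the system libvirt daemon without being prompted for a password:

```shell
# Run as a syscom member after libvirt is installed; should list domains
# (an empty list on a fresh node) rather than fail with "authentication failed"
virsh -c qemu:///system list --all
```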
sudo apt install qemu qemu-kvm libvirt-bin bridge-utils
Install Nova Compute
From:
- https://docs.openstack.org/ocata/install-guide-ubuntu/nova-compute-install.html
- https://docs.openstack.org/ocata/install-guide-ubuntu/neutron-compute-install.html
sudo apt install nova-compute neutron-linuxbridge-agent
Now configure:
/etc/nova/nova.conf
[DEFAULT]
state_path=/var/lib/nova
enabled_apis=osapi_compute,metadata
transport_url=rabbit://$USER:$PASS@rabbit.cloud.csclub.uwaterloo.ca
auth_strategy=keystone
my_ip=$IP
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
default_availability_zone = csc-mc
compute_monitors = cpu.virt_driver,numa_mem_bw.virt_driver

[oslo_concurrency]
lock_path=/var/lock/nova

[database]
connection=mysql+pymysql://$USER:$PASS@db.cloud.csclub.uwaterloo.ca/nova_api

[libvirt]
use_virtio_for_bridges=True
inject_password=true
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
block_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, VIR_MIGRATE_NON_SHARED_INC
cpu_mode = custom
cpu_model = Broadwell

[keystone_authtoken]
auth_uri = https://auth.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
memcached_servers = memcache1.cloud.csclub.uwaterloo.ca:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = $USER
password = $PASS

[vnc]
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = https://console.cloud.csclub.uwaterloo.ca/vnc_auto.html

[glance]
api_servers = https://image.cloud.csclub.uwaterloo.ca

[neutron]
url = https://network.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
region_name = csc-mc
username = $USER
password = $PASS

[placement]
os_region_name = csc-mc
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = https://admin.cloud.csclub.uwaterloo.ca/v3
username = $USER
password = $PASS
/etc/neutron/neutron.conf
[DEFAULT]
# ...
transport_url=rabbit://$USER:$PASS@rabbit.cloud.csclub.uwaterloo.ca
auth_strategy=keystone

[keystone_authtoken]
auth_uri = https://auth.cloud.csclub.uwaterloo.ca
auth_url = https://admin.cloud.csclub.uwaterloo.ca
memcached_servers = memcache1.cloud.csclub.uwaterloo.ca:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = $USER
password = $PASS
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = mso-internet:enp94s0.134,mso-intranet:enp94s0.425

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
enable_vxlan = true
local_ip = 172.19.168.23
l2_population = true
Then:
sudo systemctl restart nova-compute neutron-linuxbridge-agent
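To confirm the services came up cleanly, check them locally and then verify they registered with the controller (the `openstack` client commands assume admin credentials are loaded in the shell):

```shell
# On the compute node: both units should be active (running)
systemctl status nova-compute neutron-linuxbridge-agent

# On a node with admin credentials loaded:
# openstack compute service list      # nova-compute should appear, state "up"
# openstack network agent list       # the linuxbridge agent should be alive
```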
Add mapping
On controller1.cloud:
. ~/cloud-admin
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Set up migration
- Add SSH keys for root and nova on each compute node
- Enable the --listen flag for libvirtd (/etc/default/libvirtd)
- Enable listen_tcp and set auth_tcp = "none" in /etc/libvirt/libvirtd.conf
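Concretely, the libvirt side of this looks roughly like the following (the exact variable name in /etc/default/libvirtd can vary between Debian releases, so check the existing file; listen_tls is disabled here on the assumption that no TLS certificates are deployed):

```
# /etc/default/libvirtd
libvirtd_opts="--listen"

# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
```

Restart libvirtd afterwards. Note that auth_tcp = "none" means unauthenticated access to the hypervisor, so this listener must only be reachable on the management network.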