Virtualization (LXC Containers)
Revision as of 18:35, 5 June 2021
As of Fall 2009, we use Linux containers to maintain virtual machines, most notably caffeine, which is hosted on glomag. The various commands to manipulate Linux containers are prefixed with "lxc-"; see their individual manpages for usage.
Management Quick Guide
To manage containers, use the lxc-* tools, which require root privilege. Some examples (replace caffeine with the appropriate container name):
# check if caffeine is running
lxc-info -n caffeine

# start caffeine in the background
lxc-start -d -n caffeine

# stop caffeine gracefully
lxc-halt -n caffeine

# stop caffeine forcefully
lxc-stop -n caffeine

# launch a TTY console for the container
lxc-console -n caffeine
To install Linux container support on a recent Debian (squeeze or newer) system:
- Install the lxc and bridge-utils packages.
- Create a bridged network interface (this can be configured in /etc/network/interfaces as though it were a normal Ethernet device, with the additional bridge_ports parameter). This is usually called br0 (it can be created manually with brctl). LXC will create a virtual Ethernet device and add it to the bridge when each container starts.
To start caffeine, run the following command as root on glomag:
lxc-start -d -n caffeine
Containers are stored on the host filesystem in /var/lib/lxc (root filesystems are symlinked to the appropriate directory on /vm).
ehashman's Guide to LXC on Debian
Configuring the host machine
First, install all required packages:
# apt-get install lxc bridge-utils
Setting up ethernet bridging
Next, create an ethernet bridge for the container. Edit /etc/network/interfaces:
# The primary network interface
#auto eth0
#iface eth0 inet static
#    address 129.97.134.200
#    netmask 255.255.255.0
#    gateway 129.97.134.1

# Bridge ethernet for containers
auto br0
iface br0 inet static
    bridge_ports eth0
    address 129.97.134.200
    netmask 255.255.255.0
    gateway 129.97.134.1
    dns-nameservers 129.97.2.1 129.97.2.2
    dns-search wics.uwaterloo.ca uwaterloo.ca
Cross your fingers and restart networking for your configuration to take effect!
# ifdown br0 && ifup br0
// press enter in your shell afterwards to see if you lost connectivity and have to make a machine room trip
Note: !!! Do not use !!!
# service networking restart
The init scripts are broken and this likely will result in a machine room trip (or IPMI power cycle).
Setting up storage
Last, allocate some space in your volume group to put the container root on:
// Find the correct volume group to put the container on
# vgdisplay

// Create the volume in the appropriate volume group
# lvcreate -L 20G -n container vg0

// Find it in the dev mapper
# ls /dev/mapper/

// Create a filesystem on it
# mkfs.ext4 /dev/mapper/vg0-container

// Add a mount point
# mkdir /vm/container
Then add it to /etc/fstab:
/dev/mapper/vg0-container /vm/container ext4 defaults 0 2
Test the entry with mount:
# mount /vm/container
Now you're done!
Creating a new container
Create a new container using lxc-create:
// Create new container "container" with root fs located at /vm/container
# lxc-create --dir=/vm/container -n container --template download
This will prompt you for distribution, release, and architecture. (Architecture must match host machine.)
Take this time to review its config in /var/lib/lxc/container/config, and tell it to auto-start if you like:
# Auto-start the container on boot
lxc.start.auto = 1
You'll also want to set up networking (if applicable):
# Networking
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.hwaddr = de:ad:be:ef:ba:be # or something sensible
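The hwaddr above is just a placeholder; any MAC that is unique on your bridge will do. A minimal sketch for picking the last three octets at random (keeping the de:ad:be prefix from the example above; this is an illustration, not part of the original guide):

```shell
# Generate a random MAC suffix for lxc.network.hwaddr.
# RANDOM is a bash-ism; in plain sh it expands to 0, which is still valid.
printf 'de:ad:be:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
```

Paste the result into the lxc.network.hwaddr line of the container config.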
Now,
// List containers, -f for fancy
# lxc-ls -f
to ensure that your container has been successfully created; it should be listed. You can also list its root directory if you like. To start it in the background and obtain a root shell, do
// Start and attach a root shell
# lxc-start -d -n container
# lxc-attach -n container
Migrating a container between hosts
Start by shutting the container down:
root@container:~# halt
Then make a tarball of the container's filesystem:
# tar --numeric-owner -czvf container.tar.gz /vm/container
Copy it to its target destination, along with the configs:
$ scp container.tar.gz new-host:
$ scp -r /var/lib/lxc/container/ new-host:/var/lib/lxc/
Now carefully extract it. If you haven't already, provision storage and ethernet per the container creation section.
Yes, we really do want to stick it directly into /:
# tar --numeric-owner -xzvf container.tar.gz -C /
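As a toy illustration of the tarball round-trip (scratch paths under /tmp stand in for /vm/container and /; all names below are made up for the demo):

```shell
# Build a fake container tree to stand in for /vm/container
mkdir -p /tmp/vm-demo/container/etc
echo demo-host > /tmp/vm-demo/container/etc/hostname

# --numeric-owner records raw UID/GID numbers instead of user/group names,
# so ownership survives even if the new host's passwd entries differ
tar --numeric-owner -czf /tmp/container-demo.tar.gz -C /tmp/vm-demo container

# Extract on the "new host" side (here, another scratch directory)
mkdir -p /tmp/migrated
tar --numeric-owner -xzf /tmp/container-demo.tar.gz -C /tmp/migrated
cat /tmp/migrated/container/etc/hostname   # prints demo-host
```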
If you are migrating from an old version of LXC onto a newer one (e.g. migrating onto xylitol), update the config:
# lxc-update-config -c /vm/container/config
This will also create a config.backup; you should inspect the new config file to make sure the migration was successful.
Verify the container's existence:
# lxc-ls -f
NAME       STATE    IPV4  IPV6  AUTOSTART
-----------------------------------------
container  STOPPED  -     -     YES
Now just start it on up:
# lxc-start -d -n container
And test by trying an ssh in!
merenber's guide to unprivileged LXC containers
Prerequisite reading: https://wiki.debian.org/LXC#Privileged_Vs._Unprivileged_Containers
With unprivileged containers, UIDs and GIDs in the container map to a different set of UIDs/GIDs on the host. This is very important if you wish to use nested virtualization (i.e. container inside a container), because it is dangerous to use nested virtualization in a privileged container. The following is a guide to setting up unprivileged containers with cgroup delegation, i.e. processes inside the container can create new cgroups. This is useful if, for example, you wish to run Docker in an LXC container. If you do not need cgroup delegation, just ignore the cgroup-specific steps.
First, we need to enable unprivileged user namespaces, which are disabled by default on Debian. Add the following line to /etc/sysctl.conf:
kernel.unprivileged_userns_clone = 1
Then run:
sysctl -p
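Equivalently, the setting can live in its own drop-in file rather than /etc/sysctl.conf, assuming your host reads /etc/sysctl.d/ (standard on Debian); the filename below is made up:

```
# /etc/sysctl.d/80-unprivileged-userns.conf (hypothetical filename)
kernel.unprivileged_userns_clone = 1
```

Apply it with sysctl --system (or a reboot).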
Now we are going to create a dummy user under which the unprivileged containers will run. Note that it is possible in principle to create unprivileged containers as root; however, I wasn't able to get it to work: lxc-start hit some kind of permissions error when it tried to mount the rootfs. If you do find a way, please add the instructions here.
useradd -s /bin/bash -m lxcuser0
Make sure that the newly created user has subuid and subgid entries:
cat /etc/subuid
cat /etc/subgid
For example, /etc/subuid could look like the following:
lxcuser0:100000:65536
This means that UIDs 0-65535 in the container will be mapped to UIDs 100000-165535 on the host.
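The mapping above is simple arithmetic: host UID = start of the subordinate range + container UID. A quick shell check, using the example /etc/subuid entry (the www-data UID here is just an illustrative value):

```shell
# Sketch of the subuid arithmetic from the example entry lxcuser0:100000:65536
subuid_start=100000       # start of the subordinate range
container_uid=33          # e.g. www-data inside the container (hypothetical)
echo "host UID: $(( subuid_start + container_uid ))"

# The last UID in the range is start + count - 1:
echo "last host UID: $(( subuid_start + 65536 - 1 ))"
```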
Next, make sure that the user is allowed to create new veth interfaces:
echo 'lxcuser0 veth br0 10' >> /etc/lxc/lxc-usernet
Create a new logical LVM volume, as shown in the guide above (replace 'gitlabrunner' by the container name):
lvcreate -L 10G -n gitlabrunner xylitol-raidten
mkfs.ext4 /dev/mapper/xylitol--raidten-gitlabrunner
mkdir /vm/gitlabrunner
Add the following to /etc/fstab:
/dev/mapper/xylitol--raidten-gitlabrunner /vm/gitlabrunner ext4 defaults 0 2
Next, we need the root of the volume to be owned by the UID and GID which map to root inside the container (here, 100000). We will use debugfs to do this:
debugfs -w -R 'set_inode_field . uid 100000' /dev/mapper/xylitol--raidten-gitlabrunner
debugfs -w -R 'set_inode_field . gid 100000' /dev/mapper/xylitol--raidten-gitlabrunner
See this post if you're interested in knowing what these commands are doing.
Now we're ready to mount the volume:
mount /vm/gitlabrunner
Use ls to make sure that the volume was indeed mounted as the subuid root, not the real root:
ls -ld /vm/gitlabrunner
Now switch to the dummy user and copy the default LXC conf file:
su - lxcuser0
cp /etc/lxc/default.conf .
Add the following lines to your copy of default.conf (replace the values with whatever you found in /etc/subuid and /etc/subgid, respectively):
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
This is necessary to create the rootfs with the correct file ownerships.
Now, as the dummy user, open a tmux or screen session. This is necessary to avoid some weird TTY permission error.
Inside the tmux session, run the following:
lxc-create -f default.conf -t download -n gitlabrunner --dir=/vm/gitlabrunner -- -d debian -r buster -a amd64
Now exit from the tmux session, and open ~/.local/share/lxc/gitlabrunner/config. Add the following lines to it:
lxc.include = /usr/share/lxc/config/nesting.conf
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
lxc.apparmor.profile = unconfined
lxc.start.auto = 1
Now switch back to the root user and install the cgroup tools:
apt install cgroup-tools
The idea is to create a new cgroup under which the unprivileged container will run, so that Docker (inside the container) can create new cgroups as necessary. Note: this method uses cgroups v1, which are going away soon. I tried to use cgroups v2 but I kept on running into some cgroup permissions error. If you figure out a way to use cgroups v2, please update the instructions here.
Paste the following into /root/bin/lxc-unprivileged-autostart.sh:
#!/bin/bash

CGROUP_OWNER=lxcuser0
CGROUP_NAME=lxcgroup0
CGROUP_CONTROLLERS=rdma,cpuset,memory,perf_event,devices,pids,blkio,freezer,net_cls,net_prio,cpu,cpuacct
NUM_CPUS=2

cgcreate \
    -t $CGROUP_OWNER:$CGROUP_OWNER \
    -a $CGROUP_OWNER:$CGROUP_OWNER \
    -g "$CGROUP_CONTROLLERS:$CGROUP_NAME"

# cpuset controller needs to be initialized
echo 0 > /sys/fs/cgroup/cpuset/$CGROUP_NAME/cpuset.mems
echo 0-$(( $NUM_CPUS - 1 )) > /sys/fs/cgroup/cpuset/$CGROUP_NAME/cpuset.cpus

su -c "cgexec -g '$CGROUP_CONTROLLERS:$CGROUP_NAME' lxc-autostart" $CGROUP_OWNER
Change NUM_CPUS to however many CPU cores you wish to be available inside the container. You may have noticed that hugetlb is missing from the list of controllers; this is because the hugetlb controller is not mounted on xylitol as of this writing. Feel free to add it to the list if this is no longer the case.
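Note that the cpuset.cpus range the script writes is inclusive, so NUM_CPUS cores means CPUs 0 through NUM_CPUS - 1. A quick sketch of the values the script would write:

```shell
# cpuset.cpus ranges produced by the script for a few NUM_CPUS settings
for NUM_CPUS in 1 2 4; do
    echo "NUM_CPUS=$NUM_CPUS -> cpuset.cpus=0-$(( NUM_CPUS - 1 ))"
done
# prints:
# NUM_CPUS=1 -> cpuset.cpus=0-0
# NUM_CPUS=2 -> cpuset.cpus=0-1
# NUM_CPUS=4 -> cpuset.cpus=0-3
```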
Make the script executable:
chmod +x /root/bin/lxc-unprivileged-autostart.sh
Now paste the following into /etc/systemd/system/lxc-unprivileged-autostart.service:
[Unit]
Description=Autostart unprivileged LXC Containers
Requires=lxc.service
After=lxc.service

[Service]
Type=oneshot
ExecStart=/root/bin/lxc-unprivileged-autostart.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
Then run:
systemctl daemon-reload
systemctl enable lxc-unprivileged-autostart.service
systemctl start lxc-unprivileged-autostart.service
Now switch to the dummy user and make sure that the container is running:
su - lxcuser0
lxc-ls -f
To run Docker in the unprivileged container:
Attach to the container and install fuse-overlayfs. We need this because unprivileged users cannot mount overlay directories on Debian. Make sure to read https://docs.docker.com/engine/security/rootless/ first. Next, install Docker by following https://docs.docker.com/engine/install/debian/.
Then run docker run --rm hello-world to make sure that everything is working correctly.