Virtualization (LXC Containers)

From CSCWiki
Revision as of 16:03, 1 June 2016

As of Fall 2009, we use Linux containers to maintain virtual machines, most notably caffeine, which is hosted on glomag. The various commands to manipulate Linux containers are prefixed with "lxc-"; see their individual manpages for usage.

Management Quick Guide

To manage containers, use the lxc-* tools, which require root privilege. Some examples (replace caffeine with the appropriate container name):

# check if caffeine is running
lxc-info -n caffeine

# start caffeine in the background
lxc-start -d -n caffeine

# stop caffeine gracefully
lxc-halt -n caffeine

# stop caffeine forcefully
lxc-stop -n caffeine

# launch a TTY console for the container
lxc-console -n caffeine

To install Linux container support on a recent Debian (squeeze or newer) system:

  • Install the lxc and bridge-utils packages.
  • Create a bridged network interface, usually called br0 (it can be created manually with brctl). This can be configured in /etc/network/interfaces as though it were a normal Ethernet device, with the additional bridge_ports parameter. LXC will create a virtual Ethernet device and add it to the bridge when each container starts.
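The manual route mentioned above looks roughly like this — a sketch only, run as root on the host, assuming the usual br0/eth0 names (the interfaces-file approach below survives reboots and is what we actually use):

```
// Create the bridge and attach the physical interface to it
# brctl addbr br0
# brctl addif br0 eth0
// Bring the bridge up
# ifconfig br0 up
```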

To start caffeine, run the following command as root on glomag:

lxc-start -d -n caffeine

Containers are stored on the host filesystem in /var/lib/lxc (root filesystems are symlinked to the appropriate directory on /vm).
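The layout described above can be mimicked on scratch paths to see its shape — everything under /tmp/lxc-layout is illustrative; the real paths are /var/lib/lxc and /vm:

```shell
# Mimic the host layout: a root filesystem under /vm, symlinked
# from the container's directory under /var/lib/lxc.
# Scratch paths are used so this runs unprivileged.
mkdir -p /tmp/lxc-layout/vm/container /tmp/lxc-layout/var/lib/lxc/container
ln -sfn /tmp/lxc-layout/vm/container /tmp/lxc-layout/var/lib/lxc/container/rootfs
readlink /tmp/lxc-layout/var/lib/lxc/container/rootfs
```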

ehashman's Guide to LXC on Debian

Configuring the host machine

First, install all required packages:

# apt-get install lxc bridge-utils

Setting up ethernet bridging

Next, create an ethernet bridge for the container. Edit /etc/network/interfaces:

# The primary network interface
#auto eth0
#iface eth0 inet static
#       address 129.97.134.200
#       netmask 255.255.255.0
#       gateway 129.97.134.1

# Bridge ethernet for containers
auto br0
iface br0 inet static
    bridge_ports eth0
    address 129.97.134.200
    netmask 255.255.255.0
    gateway 129.97.134.1
    dns-nameservers 129.97.2.1 129.97.129.10
    dns-search wics.uwaterloo.ca uwaterloo.ca

Cross your fingers and restart networking for your configuration to take effect!

# service networking restart
// press Enter a few times to see if you lost connectivity and have to make a machine room trip

Setting up storage

Last, allocate some space in your volume group to put the container root on:

// Find the correct volume group to put the container on
# vgdisplay

// Create the volume in the appropriate volume group
# lvcreate -L 20G -n container vg0

// Find it in the dev mapper
# ls /dev/mapper/

// Create a filesystem on it
# mkfs.ext4 /dev/mapper/vg0-container

Then, add it to /etc/fstab:

/dev/mapper/vg0-container /vm/container        ext4    defaults        0       2

Test the entry with mount:

# mount /vm/container

Now you're done!

Creating a new container

Create a new container using lxc-create:

// Create new container "container" with root fs located at /vm/container
# lxc-create --dir=/vm/container -n container --template download

This will prompt you for distribution, release, and architecture. (The architecture must match the host machine's.)

Take this time to review its config in /var/lib/lxc/container/config, and tell it to auto-start if you like:

# Auto-start the container on boot
lxc.start.auto = 1
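While you're in the config, an earlier revision of this page also carried a sample of the container-side network settings worth reviewing here. A sketch — substitute your own MAC and address (the XXX stays yours to fill in):

```
# Network configuration
lxc.network.type = veth
lxc.network.flags = up

# the host-side bridge defined in the host's interfaces file
lxc.network.link = br0

# name of the network device inside the container (defaults to eth0)
# lxc.network.name = lxcnet0

# your favourite fake MAC
lxc.network.hwaddr = DE:AD:BE:EF:70:10

# the IP may be set to 0.0.0.0/24, or skip this line
# if you want to use a DHCP client inside the container
lxc.network.ipv4 = 129.97.134.XXX/24

# define a gateway to have access to the internet
lxc.network.ipv4.gateway = 129.97.134.1
```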

Now,

// List containers, -f for fancy
# lxc-ls -f

to ensure that your container has been successfully created; it should be listed. You can also list its root directory if you like. To start it in the background and obtain a root shell, do

// Start and attach a root shell
# lxc-start -d -n container
# lxc-attach -n container

Migrating a container between hosts

Start by shutting the container down:

root@container:~# halt

Then make a tarball of the container's filesystem:

# tar --numeric-owner -czvf container.tar.gz /vm/container

Copy it to its target destination, along with the configs:

$ scp container.tar.gz new-host:
$ scp -r /var/lib/lxc/container/ new-host:/var/lib/lxc/

Now carefully extract it. If you haven't already, provision storage and ethernet per the container creation section.

Yes, we really do want to stick it directly into /:

# tar --numeric-owner -xzvf container.tar.gz -C /
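The archive-and-extract round trip can be rehearsed on scratch paths before doing it for real — everything under /tmp/lxc-demo is illustrative, with src standing in for the old host's / and dst for the new host's /:

```shell
# Rehearse the tar round trip unprivileged on scratch paths.
mkdir -p /tmp/lxc-demo/src/vm/container /tmp/lxc-demo/dst
echo "container" > /tmp/lxc-demo/src/vm/container/hostname
# -C keeps vm/container as a relative path, just as tar strips the
# leading / when archiving /vm/container for real
tar --numeric-owner -czf /tmp/lxc-demo/container.tar.gz -C /tmp/lxc-demo/src vm/container
tar --numeric-owner -xzf /tmp/lxc-demo/container.tar.gz -C /tmp/lxc-demo/dst
cat /tmp/lxc-demo/dst/vm/container/hostname
```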

Verify the container's existence:

# lxc-ls -f
NAME       STATE    IPV4  IPV6  AUTOSTART  
-----------------------------------------
container  STOPPED  -     -     YES   

Now just start it on up:

# lxc-start -d -n container

And test by trying to ssh in!