Podman
Podman is a very neat Docker-compatible container solution. Some of the advantages it has over Docker are:
* no daemon (uses a fork-and-exec model)
* systemd can run inside containers very easily
* containers can become systemd services on the host
* non-root users can run containers
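As a quick illustration of the last point, any unprivileged user with entries in /etc/subuid and /etc/subgid can run a throwaway container:

<pre>
$ podman run --rm -it docker.io/library/alpine sh
</pre>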
== Installation ==
As of bullseye, podman is available in the official Debian repositories. I suggest installing it from the unstable distribution, since podman 3.2 has many useful improvements over previous versions:
<pre>
apt install -t unstable podman podman-docker
</pre>
The podman-docker package provides a wrapper script so that running the command 'docker' will invoke podman. Recent versions of podman also provide API compatibility with Docker, which means that docker-compose will actually work out of the box. (For non-root users, you will need to set the DOCKER_HOST environment variable to <code>unix://$XDG_RUNTIME_DIR/podman/podman.sock</code>.)
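Here is a minimal sketch of what that looks like for a non-root user, assuming the user-level podman.socket unit shipped by recent packages:

<pre>
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker-compose up -d
</pre>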
I suggest adding the following to /etc/containers/registries.conf so that podman automatically pulls images from docker.io instead of quay.io:
<pre>
[registries.search]
registries = ['docker.io']
</pre>
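Note that this is the v1 syntax; newer podman releases use the v2 registries.conf format, where (if I remember the key correctly) the equivalent is:

<pre>
unqualified-search-registries = ['docker.io']
</pre>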
== Networking ==
As of this writing (2024-08-16), the latest network backend in Podman is [https://github.com/containers/netavark netavark]. Hosts which are still using the legacy CNI backend should switch to netavark as soon as possible, because support for CNI will be removed in Podman 5.0. Unfortunately, the officially recommended way to migrate from CNI to netavark is to run <code>podman system reset</code>, which deletes '''everything''' (containers, images, networks, etc.). This is usually undesirable. Here's what I suggest instead (assuming you don't have custom Podman networks):
<ol>
<li>Stop all running containers.</li>
<li>Run <code>echo -n netavark > /var/lib/containers/storage/defaultNetworkBackend</code>.</li>
<li>Restart the stopped containers.</li>
</ol>
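To confirm the switch took effect, you can ask podman which backend it is now using (the NetworkBackend field appears in podman 4.x info output):

<pre>
podman info --format '{{.Host.NetworkBackend}}'
</pre>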
If you had custom networks before, this is trickier. You will need to manually convert the CNI JSON file into the netavark JSON format (under /etc/containers/networks).
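For reference, a netavark bridge definition under /etc/containers/networks/ looks roughly like this (a sketch from memory, with placeholder name, id, and addresses; the safest way to get a template is to run <code>podman network create</code> on a netavark host and copy the JSON it writes):

<pre>
{
   "name": "mynet",
   "id": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
   "driver": "bridge",
   "network_interface": "podman1",
   "subnets": [
      {
         "subnet": "10.89.0.0/24",
         "gateway": "10.89.0.1"
      }
   ],
   "ipv6_enabled": false,
   "internal": false,
   "dns_enabled": true
}
</pre>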
=== Directly exposing a container to a public network ===
The easiest way to do this, in my opinion, is with a macvlan network. Here's an example of how this was done for [[BigBlueButton]] on xylitol:
<pre>
podman network create \
    --driver=macvlan \
    --ipv6 \
    --opt parent=br0 \
    --subnet=129.97.134.0/24 \
    --gateway=129.97.134.1 \
    --subnet=2620:101:f000:4901:c5c::0/64 \
    --gateway=2620:101:f000:4901::1 \
    bbbnet
</pre>
Then create a pod in which the containers will be run:
<pre>
podman pod create \
    --name bbbpod \
    --network bbbnet \
    --share net \
    --ip=129.97.134.173 \
    --ip6=2620:101:f000:4901:c5c::173
</pre>
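To double-check what podman generated, inspect both objects:

<pre>
podman network inspect bbbnet
podman pod inspect bbbpod
</pre>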
== Systemd ==
Podman integrates with systemd in both directions: systemd can run in podman, and podman can run in systemd.
=== Systemd in podman ===
To run systemd in podman, just create a Dockerfile like the following:
<pre>
FROM ubuntu:bionic
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y systemd
RUN passwd -d root
CMD [ "/bin/systemd" ]
</pre>
Then run:
<pre>
podman build --privileged -t ubuntu-systemd:bionic -f ubuntu-bionic-systemd.Dockerfile
</pre>
If you're running this as root, I suggest using the --privileged flag. I am pretty sure there are some specific capabilities you can add instead to make it work (via the --cap-add flag), but this is easier.
Then, to run a container with this image:
<pre>
podman run -it --privileged ubuntu-systemd:bionic
</pre>
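If you want to experiment with the capability route mentioned above, something like the following is a starting point (a sketch only; SYS_ADMIN is the capability systemd most obviously needs, and I have not verified that it is sufficient on its own):

<pre>
podman run -it --cap-add SYS_ADMIN ubuntu-systemd:bionic
</pre>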
=== Podman in systemd ===
Podman has a built-in command to generate systemd service files to start containers and pods. For example, let's say we have a pod named bbbpod. Run the following:
<pre>
podman generate systemd --files --name bbbpod
</pre>
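Two flags worth knowing about in recent podman releases: --new makes the generated units create fresh containers on every start instead of reusing existing ones, and --restart-policy controls the Restart= setting in the unit. For example:

<pre>
podman generate systemd --files --name --new --restart-policy=always bbbpod
</pre>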
This will create .service files for the pod and the containers inside it. Now you just need to enable them:
<pre>
mv *.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable pod-bbbpod.service
</pre>
If you now run <code>systemctl start pod-bbbpod</code>, the pod and its containers will start.
== Pods ==
Podman pods are similar to Kubernetes pods: the containers in a pod can share namespaces, such as network namespaces and UTS namespaces. In this example, we will use a network namespace.
First, we create a pod in the network we previously created:
<pre>
podman pod create --network bbbnet --name bbbpod --share net
</pre>
Then run a container inside the pod:
<pre>
podman run -it --name bbb --hostname bbb --pod bbbpod --privileged ubuntu-systemd:bionic
</pre>
You can add more containers to the pod:
<pre>
podman run -d --name greenlight --pod bbbpod --env-file $PWD/env bigbluebutton/greenlight:v2
</pre>
The bbb and greenlight containers can now communicate with each other over localhost.
<b>Important</b>: Make sure to edit /etc/hostname and /etc/network/interfaces (or whichever network manager you decide to use) in each container.
== Volumes ==
Unfortunately, podman does not currently have functionality to put each container's filesystem on its own dedicated volume. Instead, I suggest creating one volume per container and bind-mounting each root-level folder of the container from it.

Let's say you created a new LVM volume mounted at /vm/bigbluebutton. Then create your container like the following:
<pre>
podman run ... --name bbb \
    -v /vm/bigbluebutton/bin:/bin \
    -v /vm/bigbluebutton/boot:/boot \
    -v /vm/bigbluebutton/etc:/etc \
    -v /vm/bigbluebutton/home:/home \
    -v /vm/bigbluebutton/lib:/lib \
    -v /vm/bigbluebutton/lib64:/lib64 \
    -v /vm/bigbluebutton/media:/media \
    -v /vm/bigbluebutton/mnt:/mnt \
    -v /vm/bigbluebutton/opt:/opt \
    -v /vm/bigbluebutton/root:/root \
    -v /vm/bigbluebutton/sbin:/sbin \
    -v /vm/bigbluebutton/srv:/srv \
    -v /vm/bigbluebutton/usr:/usr \
    -v /vm/bigbluebutton/var:/var \
    ubuntu-systemd:bionic
</pre>
It is also a good idea to mount /var/lib/containers in a separate LVM volume to avoid running out of space on the host.
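A sketch of the LVM side, assuming a volume group named vg0 (the name and sizes are placeholders; add matching /etc/fstab entries so the mounts survive a reboot):

<pre>
# dedicated volume for one container's filesystem
lvcreate -L 50G -n bigbluebutton vg0
mkfs.ext4 /dev/vg0/bigbluebutton
mkdir -p /vm/bigbluebutton
mount /dev/vg0/bigbluebutton /vm/bigbluebutton

# separate volume for podman's own storage
lvcreate -L 100G -n containers vg0
mkfs.ext4 /dev/vg0/containers
mount /dev/vg0/containers /var/lib/containers
</pre>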