Migrating from Docker to Podman
Introduction
I have a small Hetzner server (2 vCPU / 4 GB RAM) that I use for all my personal projects. I’ve been running Docker on it without any problems for more than two years. I knew there were alternatives, but I never really bothered to explore them.
That changed when I stumbled upon this article. It convinced me that Podman was worth investigating, and I immediately saw a few benefits:
- Podman lets you run containers much like regular Linux binaries (fork/exec model), using your own user instead of root.
- There is no centralized daemon.
- Replacing Docker with Podman is relatively easy, since their APIs are compatible.
- Containers running multiple processes are supported out of the box, because Podman fully supports running a process manager (systemd) inside a container.
Although Podman and Docker are similar, I didn’t want to start the migration until I understood Podman better, so I also read this book. After that, I was convinced the migration would be worthwhile and decided to give it a try.
The migration plan
My server runs Ubuntu 24.04 LTS. It currently hosts a containerized Nginx instance, which serves static pages and acts as a reverse proxy for other web applications. Nginx also manages SSL and the certificate renewal process.
The server is deployed using scripts (PowerShell and Bash). The same script can set up a server in the dev environment (Multipass) or in production (Hetzner Cloud API). Once everything works in the dev environment, deploying to production usually takes only a few minutes.
The deploy script first creates a server on the chosen provider (Multipass or Hetzner), installs all the packages defined in the cloud-init configuration, and then runs a script called setup.sh as root on the server. That script applies additional configuration (for example, configuring packages and setting up cron jobs).
The migration plan was:
- Modify the existing deploy script to install Podman instead of Docker.
- Migrate the Nginx container to the new environment.
- Migrate other applications (static web pages and .NET applications).
Performing the migration
Podman can be installed by simply adding the Podman package to the cloud-init packages list:
#cloud-config
packages:
- podman
On Ubuntu 24.04 this installs Podman 4.9.3, while the current Podman release is 5.7.1. This is a notable difference from Docker: with Docker you are not tied to an old release, even on an LTS, and can usually get the latest version. Podman is still under very active development, and the differences between versions are substantial, so being stuck on an older version can cause quite a few headaches down the road.
I wanted to use Podman in rootless mode. This meant that the network layer would use user-mode networking. On Ubuntu 24.04 this relies on the slirp4netns package.
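A quick way to see what the rootless stack looks like on a given host is the following sanity check (these commands are not part of my setup scripts, just a way to inspect the environment):

```shell
# Run as the unprivileged user that owns the containers.
podman info --format '{{.Host.NetworkBackend}}'   # netavark or cni
slirp4netns --version                             # the user-mode networking helper
podman unshare cat /proc/self/uid_map             # the rootless UID mapping in effect
```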
Migrating s6-overlay to systemd to manage container processes
Docker does not officially support running multi-process containers. To manage both Nginx and cron (used for certificate renewals), I use s6-overlay as the process manager. This works perfectly with Docker.
One of Podman’s selling points is that multi-process containers are officially supported. Podman natively supports running systemd inside the container to manage processes.
Since systemd is officially supported, I wanted to migrate my s6-overlay setup to systemd, but after a while I gave up:
- The examples I found all used the ubi8-init base image. This image is based on Red Hat's UBI, and I really didn't want to use that for all my containers.
- My existing images use Alpine as a base; unfortunately, Alpine uses a different init system (OpenRC).
- I experimented with creating a base image from Ubuntu and adding the systemd package. This, however, does not work out of the box, because the default systemd setup installs services that make no sense inside a container.
I’m sure that if I spent enough time on this I could make it work, but since my current solution with s6-overlay works without problems, I decided to keep it.
No Dockerfile modifications were needed
Because I decided to keep s6-overlay, no modifications to the Dockerfile were needed.
Replacing Docker Compose with the Podman CLI
Podman has an optional tool, Podman Compose, which can emulate the behavior of Docker Compose. My projects usually only run a single service, so I decided I didn’t need this extra tool and that I would use Podman CLI commands directly to manage my containers. Here is an example of a start.sh script for my Nginx container:
#!/bin/bash
set -eo pipefail
# Here we are explicitly setting the network subnet, something that was not
# needed with Docker. The reason for that will be explained later.
podman network exists nginx || podman network create --subnet 11.89.0.0/24 nginx
podman build -t gregor/nginx ./docker/nginx
podman run -d --replace --restart=always --name nginx --network nginx \
-p 8080:80 -p 8443:443 \
-v "$(pwd)/logs:/var/log/s6" \
-v "$(pwd)/logs/letsencrypt:/var/log/letsencrypt" \
-v "$(pwd)/settings:/app/settings" \
-v "$(pwd)/sites:/app/sites" \
-v "$(pwd)/cache:/cache" \
-v "$(pwd)/settings/letsencrypt:/etc/letsencrypt" \
-e S6_KEEP_ENV=1 \
-e S6_KILL_GRACETIME=500 \
localhost/gregor/nginx
One benefit of Docker Compose was that container configuration could be easily reused to run different containers from the same image—for example, to apply database migrations or rebuild FTS indexes. When using the Podman CLI directly, configuration can only be shared if you script the podman run command.
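A sketch of what such a script could look like (the image name, network, mounts, and migrate command below are made up for illustration; PODMAN can be overridden, e.g. with PODMAN=echo, to print the composed command instead of running it):

```shell
#!/bin/bash
# Share one set of 'podman run' options between the long-running container
# and one-off jobs from the same image (e.g. database migrations).
set -euo pipefail

PODMAN="${PODMAN:-podman}"                  # override with PODMAN=echo for a dry run
IMAGE=localhost/gregor/app1                 # hypothetical image name
COMMON_ARGS=(--network nginx -v "$PWD/settings:/app/settings")

start_app() {
  # The long-running service container.
  $PODMAN run -d --replace --restart=always --name app1 \
    "${COMMON_ARGS[@]}" "$IMAGE"
}

run_migrations() {
  # One-off container reusing the same mounts and network, removed on exit.
  $PODMAN run --rm "${COMMON_ARGS[@]}" "$IMAGE" /app/migrate.sh
}
```

A wrapper can then call start_app for normal deployments and run_migrations for maintenance jobs, with both guaranteed to see the same network and volumes.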
Containers should survive a host restart
The Nginx container should be automatically started when the host server boots. After switching to Podman, that didn’t happen, even though the container was started with the --restart=always flag.
I found out that rootless containers are not automatically restarted unless Podman is properly configured. I needed to add this section to my setup.sh script on the server:
configure_podman()
{
# Enable the podman auto-restart service for user containers.
# For root containers it's automatically enabled.
systemctl --user -M gregor@ enable podman-restart.service
# Enable 'lingering' for the gregor user; otherwise their containers
# are only started upon login. Without a user argument, running this
# as root would enable lingering for root instead.
loginctl enable-linger gregor
}
After running that, the containers started automatically on host boot.
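The result can be verified with two quick checks (assuming the same gregor user as above):

```shell
# Lingering must report 'Linger=yes', otherwise user services stop at logout.
loginctl show-user gregor --property=Linger
# The restart unit must be enabled in the user's systemd instance.
systemctl --user -M gregor@ is-enabled podman-restart.service
```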
Resolving containers in the same network by name
I also use Nginx as a reverse proxy. All containers are joined to a network named nginx (see the section above). In the Docker setup I referenced all containers by name:
set $app1 app1:8080;
location / {
proxy_pass http://$app1;
}
For Nginx to be able to resolve the name app1, a resolver IP address must be defined. With Docker, you can use the IP of the internal Docker DNS:
resolver 127.0.0.11;
With Podman this is a bit different. DNS is not global, but tied to the network to which the container is attached. The IP address of the resolver is the network gateway. That is why I previously specified a subnet when creating the nginx network using --subnet 11.89.0.0/24, so that the gateway would predictably be at 11.89.0.1:
resolver 11.89.0.1;
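Rather than hard-coding the gateway, it can also be read back from Podman; this is a sketch assuming the Subnets/Gateway fields of Podman 4's network inspect output:

```shell
# Prints the gateway of the first subnet of the nginx network, e.g. 11.89.0.1
podman network inspect nginx --format '{{ (index .Subnets 0).Gateway }}'
```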
Solving network timeouts when an upstream container gets restarted
Using Podman exposed another problem that I hadn’t seen with Docker.
When an app1 container (sitting behind the Nginx proxy from the previous example) was restarted—and therefore got a new IP and MAC address—external clients couldn’t reach it for a while. Connectivity was restored immediately if I ran a ping command (for example, ping app1) inside the Nginx container. That behavior led me to suspect stale DNS entries.
While looking for a solution, I noticed that the problem did not occur if I ran a small script at the end of each application’s start.sh script:
#!/bin/bash
# reset-network.sh
# The following line calls 'nginx -s reload' inside the container.
podman exec nginx /app/reload-nginx.sh
podman network reload nginx
This workaround is not perfect, though, because it does not help if a container is restarted by other means—for example, via podman stop or if conmon restarts it.
I decided to spend a little bit more time researching this. Podman uses the aardvark-dns server, which stores the DNS mapping table for each network in a file. The file is named after the network (nginx); in my case, it was:
/run/user/1000/containers/networks/aardvark-dns/nginx
I performed a few container restarts and checked whether the IP changes were reflected in the DNS entry table:
cat /run/user/1000/containers/networks/aardvark-dns/nginx
11.89.0.1
435abbc22669eb87da881ccc5f5be18545cac35c13bc3178d982f8b6b3f6f4dd 11.89.0.83 nginx,435abbc22669
b57c7803e4be4007b5f0d6302c0d4a6260973b3b414174ca31556c3d558e5b16 11.89.0.84 mindthegames,b57c7803e4be
The DNS entries were updated promptly, which led me to suspect that the real issue was the TTL published by aardvark-dns. After some research, I found an issue that confirms this; this also explains why reloading Nginx helped.
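The published TTL can also be checked directly by querying aardvark-dns from inside a container on the nginx network (assuming a DNS tool such as dig is available in the image):

```shell
# The second column of the answer line is the TTL that aardvark-dns publishes.
podman exec nginx dig +noall +answer app1 @11.89.0.1
```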
To work around this, I explicitly set the TTL in the Nginx config:
resolver 11.89.0.1 valid=1s;
This improved the situation considerably, so the reset-network.sh script was no longer needed.
How to listen on privileged ports when using a rootless Podman container
Rootless containers cannot bind to privileged ports on the host. This means that a container running Nginx cannot listen directly on ports 80 and 443.
This can be solved in various ways. My first attempt was redirecting traffic from ports 80 and 443 to ports 8080 and 8443:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
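Note that plain iptables rules do not survive a reboot. One way to persist them, shown here only as a sketch in the same style as setup.sh and assuming the iptables-persistent package is installed, would be:

```shell
configure_port_redirect()
{
    # Redirect privileged host ports to the rootless container's ports.
    iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
    iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
    # Persist the rules across reboots (provided by iptables-persistent).
    netfilter-persistent save
}
```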
This solution worked, but I later replaced it because of another problem.
The problem with slirp4netns and source IPs
On some of my personal pages I use basic access authentication. To prevent brute-force attacks I use fail2ban. Fail2ban checks the logs of a service for failed authentication attempts and, after too many failures, bans the offending IP address by adding a firewall rule.
After migrating to Podman, this no longer worked. At first I thought the problem was related to fail2ban not adding the correct rule to the correct routing table (which I had struggled with when setting this up in the Docker environment). But upon closer inspection I realized that the IP address being banned was not the IP of my Hyper-V network (Multipass uses Hyper-V on Windows) but the IP of the Podman internal network.
Reading the Podman documentation more carefully, I found a section explaining how slirp4netns replaces client IPs with the IP of the network gateway. To work around this, several options are suggested:
- use port_handler=slirp4netns (which prevents using user-defined networks),
- use host networking (with security implications),
- upgrade to Podman 5 and use the pasta network driver (not possible on the current LTS),
- set up a proxy server on the host (for example, HAProxy), then use the PROXY protocol to forward the correct source IP.
I chose the last option. I added the HAProxy package to my host server and a small script to my setup.sh:
configure_haproxy()
{
# Not complete configuration, just an example!
cat << EOF > /etc/haproxy/haproxy.cfg
frontend fe_http
bind *:80
mode tcp
default_backend bk_http
frontend fe_https
bind *:443
mode tcp
default_backend bk_https
backend bk_http
mode tcp
server nginx 127.0.0.1:8080 send-proxy
backend bk_https
mode tcp
server nginx 127.0.0.1:8443 send-proxy
EOF
}
Since this solution also redirects ports 80 and 443 to 8080 and 8443, I removed the redirect rules from the previous section. In addition, I needed to configure Nginx to correctly handle the PROXY protocol:
- configure the http block to read the source IP from the PROXY protocol,
- add the proxy_protocol keyword to each of my Nginx server blocks.
http {
real_ip_header proxy_protocol;
set_real_ip_from 11.89.0.0/24;
}
server {
listen 443 ssl proxy_protocol;
http2 on;
}
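Whether Nginx really expects the PROXY protocol on the container port can be checked with curl, which can speak the protocol itself via the --haproxy-protocol flag (available since curl 7.60):

```shell
# Should print an HTTP status code; a plain 'curl http://127.0.0.1:8080/'
# would now fail, because Nginx expects a PROXY protocol header first.
curl --haproxy-protocol -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/
```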
In my opinion this workaround is quite complicated, and hopefully once the server is upgraded to the next LTS (with Podman 5) it will no longer be necessary.
Certbot Nginx plugin no longer works
I use Certbot to manage SSL certificates. Enabling the PROXY protocol caused the Certbot Nginx plugin to stop working (the plugin does not register a listener with proxy_protocol support).
There were two alternatives:
- Certbot Webroot plugin
- Certbot DNS plugin
I already use the DNS plugin for local servers, so I decided to switch to it on my Hetzner server as well.
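For illustration, a DNS-plugin invocation looks roughly like this; the provider plugin, domain, and credentials path below are made up, and the actual plugin depends on where the domain's DNS is hosted:

```shell
# Hypothetical example using the Cloudflare DNS plugin.
certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    -d example.com -d '*.example.com'
```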
Conclusion
Looking back, the migration was not as easy as I expected. The issue with source IPs can be avoided on newer Podman versions by using pasta networking mode instead of slirp4netns; unfortunately, this is not possible on the current Ubuntu LTS.
On the positive side, I really like the approach of not having a central daemon anymore, and the fact that my existing images just work.
The systemd integration looks promising on paper, but after experimenting with it, s6-overlay still seems better suited to running inside a container—although it does require you to learn another syntax for describing your services.