WSL2, openSUSE, rootless podman and enabling cgroups v2 limits
Diving straight in..
Running rootless podman with cgroups v2 limits doesn’t work out of the box; it needs a fair amount of config, and none of it is obvious. However! Ta-da!
Configuring WSL2
Fortunately there is a tool these days, “WSL Settings”, but here is my .wslconfig, which lives in the root of %userprofile%:
[wsl2]
kernelCommandLine=cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1
memory=32GB
processors=12
networkingMode=mirrored
dnsTunneling=true
autoProxy=true
Importantly, the kernel command line enforces cgroups v2. My machine has 64 GB of memory and an i7-13700K, so I’ve limited WSL to stop any unwanted greed from occurring. The network parameters are also important if you want container-to-container network chat within your pods and with the local host, particularly for bridge/DNS lookups.
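Once WSL has been restarted (wsl --shutdown from Windows), it’s worth confirming the kernel actually picked this up; these two checks from inside the distribution are a quick sanity test:
stat -fc %T /sys/fs/cgroup    # should print cgroup2fs when the unified v2 hierarchy is active
cat /proc/cmdline             # the cgroup options above should appear here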
Tell WSL Linux kernel about some WSL options
Although I can run as rootless, I still need to set some things up as root =( We need to make sure systemd is in use, a sensible automount as recommended by Microsoft, and a mount to make podman happy. Edit /etc/wsl.conf as root.
[automount]
options = "metadata,umask=22,fmask=11"
[boot]
systemd=true
command="mount --make-rshared /"
A small aside: podman-compose
I use openSUSE as my distribution, and the highest packaged version of podman-compose is 1.2. The current release is 1.4, which brings quite a few functional additions, so I install it into a Python venv instead:
python3 -m venv ~/mydev
source ~/mydev/bin/activate
pip install --upgrade pip
pip install podman-compose
Just remember to source your venv every time you’re doing podman work. However, a little more is needed..
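A quick way to confirm the venv copy is the one on your PATH:
which podman-compose       # should point into ~/mydev/bin
podman-compose --version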
Local containers.conf
We are running rootless, therefore the containers config must live locally. Create the directory if it doesn’t exist, and the file ~/.config/containers/containers.conf
Very important here - the firewall driver must be overridden, as nftables will not work correctly rootless. I think it is due to AppArmor not running under WSL; anyway, it is easily fixed by dropping back in time to iptables. Also note I’ve put a 50 MB log limit (log_size_max is in bytes) - this is just a backstop and should be configured for your own run-time environment, including rotation. I’ve also added a tz, which will override most container timezones; it’s not fool-proof but handy enough to add here. The config also removes the podman-compose warning message.
[containers]
log_size_max=50000000
tz="Europe/London"
[engine]
compose_providers = ["/home/xxxx/mydev/bin/podman-compose"]
compose_warning_logs=false
[network]
firewall_driver="iptables"
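Note the network backend itself is still netavark; iptables is just the firewall driver it uses underneath. A quick check that podman has parsed the file, assuming a netavark-based podman as shipped by recent openSUSE packages:
podman info --format '{{.Host.NetworkBackend}}'   # should print netavark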
Configuring cgroups v2 limits
To use limits in a podman container a fair bit of configuration is required, at least for WSL; I’ve not actually checked a native openSUSE Tumbleweed installation.
As root, create /etc/systemd/system/user-0.slice
[Unit]
Before=systemd-logind.service
[Slice]
Slice=user.slice
[Install]
WantedBy=multi-user.target
As root, create a directory and file /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=cpu cpuset io memory pids
As root, create a directory and file /etc/systemd/system/user-.slice.d/override.conf
[Slice]
Slice=user.slice
CPUAccounting=yes
MemoryAccounting=yes
IOAccounting=yes
TasksAccounting=yes
Now this is done, reload systemd to check it doesn’t choke..
sudo systemctl daemon-reload
and if that was fine, use a normal Windows cmd/PowerShell to issue a wsl --shutdown, then re-open your WSL terminal.
Check it took effect:
cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
You should see a line like this: cpuset cpu io memory pids
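If delegation is working, a rootless container will now accept and report a memory cap. Here is a throwaway test (alpine is just a convenient small image):
podman run --rm --memory=256m alpine cat /sys/fs/cgroup/memory.max
# expect 268435456 rather than "max"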
CPU and memory limit
Limits can now be imposed in a compose file along these lines: cpus: 0.5 means half a CPU, and memory: 1GB caps the container at 1 GB.
my_service:
environment:
...
cpus: 0.5
memory: 1GB
volumes:
...
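Once the stack is up, podman stats shows whether the caps actually landed; the MEM USAGE / LIMIT column should read 1GB rather than the full machine memory:
podman stats --no-stream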
Final note
I am not an expert at cgroups but have read enough to work it out. Like most things there are consequences.. this kind of accounting/control costs CPU cycles, which is probably why it is disabled by default. However, I am certain I don’t want a SQL Server container running wild on me!