I recently got a small fix merged into metrics-server, which powers kubectl top and is used for horizontal pod autoscaling in Kubernetes. It’s not exactly a core component, but most production clusters have it installed.
I’ve been using ArgoCD at work lately for deploying Helm charts through a GitOps flow, and I noticed that the metrics-server APIService resource kept showing as “OutOfSync” even though nothing had actually changed.
The issue was that the Helm template always rendered the insecureSkipTLSVerify field explicitly, but Kubernetes omits it from live resources when it’s false (the API default). This caused ArgoCD to see a constant diff.
The fix was to conditionally render the field using {{- with .Values.apiService.insecureSkipTLSVerify }} so it only appears when set to true. This is the same approach other projects like KEDA have used.
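A sketch of the pattern — abbreviated, and not the chart's exact template:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  {{- with .Values.apiService.insecureSkipTLSVerify }}
  # only rendered when the value is truthy, matching what the
  # API server stores for the live resource
  insecureSkipTLSVerify: {{ . }}
  {{- end }}
```

Because with skips its block entirely for false or empty values, the rendered manifest now matches the live resource when the field is left at its default, and the diff disappears.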
It’s a tiny fix, but it’s satisfying to have a change merged into something as widely deployed as metrics-server.
Earlier this year, I wanted to help seed Linux distribution torrents using cheap VPSes that offer terabytes of monthly bandwidth for less than $20/year. To maximize cost efficiency, I needed something as memory-efficient as possible.
After trying qBittorrent, rTorrent, and Transmission (via Docker) on a 1GB RAM VPS, I kept running into OOM issues and configuration headaches. Those clients are great, but running in 1GB of RAM doesn't seem to be one of their design goals. It's probably still possible with enough tweaking.
I ended up building distro-seed, a lightweight Go-based BitTorrent seeder using anacrolix/torrent. It’s a Go library that handles all the BitTorrent protocol details while letting you build exactly what you need without the overhead of a full client.
So far, it’s seeded 1.25 TB of Linux distributions.
The project includes an Ansible playbook to set up a fresh Ubuntu VPS for automatic seeding. The whole setup is simple: configure your torrent sources in a YAML file, run the playbook, and let it seed.
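For illustration, a sources file might look something like this — the field name and URLs here are hypothetical, so check the distro-seed README for the real schema:

```yaml
# Torrent files to download and seed.
torrents:
  - https://cdimage.debian.org/debian-cd/current/amd64/bt-cd/debian-12.5.0-amd64-netinst.iso.torrent
  - https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso.torrent
```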
Today, I came across a bug where a Celery worker wasn't shutting down gracefully, which was causing odd "Connection refused" errors from requests made inside the task the worker was running. The worker was also exiting before it could send errors to Rollbar/Sentry, so the team wouldn't know there was anything to address.
This was happening because the entrypoint had a script that effectively did this:
echo "starting celery worker"
celery -A tasks worker
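Because the shell script runs as PID 1 in the container, and PID 1 ignores SIGTERM unless a handler is installed, docker stop's SIGTERM does nothing and the worker is eventually SIGKILLed after the grace period. The usual fix for this shape of entrypoint is to exec the final command so Celery itself becomes PID 1:

```shell
#!/bin/sh
echo "starting celery worker"
# exec replaces the shell with the celery process, so celery becomes PID 1
# and receives SIGTERM directly on `docker stop`
exec celery -A tasks worker
```

With this in place, Celery gets the SIGTERM, performs its warm shutdown (letting in-flight tasks finish), and has a chance to report errors before exiting.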
I was recently troubleshooting some packet loss with ping on Linux, and I noticed that by default ping won't explicitly show lost packets:
$ ping x.x.x.x
64 bytes from x.x.x.x: icmp_seq=8 ttl=52 time=18.1 ms
64 bytes from x.x.x.x: icmp_seq=11 ttl=52 time=21.2 ms
(notice the jump from icmp_seq=8 to 11; packets 9-10 were lost)
To fix this, you can run ping with -O:
$ ping -O x.x.x.x
64 bytes from x.x.x.x: icmp_seq=11 ttl=52 time=19.0 ms
no answer yet for icmp_seq=12
no answer yet for icmp_seq=13
64 bytes from x.x.x.x: icmp_seq=14 ttl=52 time=18.9 ms
Relatedly, if you're also using traceroute to investigate, running traceroute -I makes it use ICMP ECHO instead of UDP for its probes, which tends to give results more consistent with ping.
If you want better statistics about packet loss, you can also use a tool called mtr, which combines traceroute and ping. On Ubuntu, you can install it with sudo apt-get install mtr.
If you have an old computer with spare hard drives, it might be useful to use it to share files with computers on your network by turning it into a NAS (network attached storage). There are several popular DIY NAS options.
I ended up going with OpenMediaVault due to it being Debian based and super popular. To set it up, I followed this incredibly thorough guide: link
I had a few issues that caused it to stop working after restarting.
The first issue was related to the disk being encrypted. OpenMediaVault started in emergency mode because it couldn't access the encrypted disk, and emergency mode unfortunately doesn't allow SSH access, so I had to plug a monitor and keyboard into the machine. The fix is to make sure nofail is in the options section of the /etc/fstab lines starting with /dev/disk, and, for the mergerfs lines starting with /srv/dev-disk-by-uuid, to add nonempty and remove x-systemd.requires=<disk> from the options. I've had to redo the x-systemd.requires removal every time I apply settings from the web UI.
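With hypothetical UUIDs and mount points, the relevant /etc/fstab entries end up looking roughly like this:

```
# encrypted data disk: nofail lets boot continue even if it can't be unlocked
/dev/disk/by-uuid/1234-abcd  /srv/dev-disk-by-uuid-1234-abcd  ext4  defaults,nofail  0  2

# mergerfs pool: nonempty added, x-systemd.requires=<disk> removed
/srv/dev-disk-by-uuid-1234-abcd:/srv/dev-disk-by-uuid-5678-efgh  /srv/mergerfs/pool  fuse.mergerfs  defaults,nonempty  0  0
```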
The second issue involved losing the network adapter configuration after restarting. I had to run omv-firstaid and configure the network adapter with a static IP to resolve this. This may have something to do with an empty configuration from the web UI overwriting the existing working configuration.
I recently learned that Docker ignores ufw (Uncomplicated Firewall) rules by default: Docker manages iptables rules directly, and traffic to published container ports bypasses the chains ufw uses, so those ports are still exposed even when ufw blocks them.
The fix involved adding this to /etc/docker/daemon.json:
{
"iptables": false,
"ip6tables": false
}
Then I restarted the docker daemon with sudo systemctl restart docker.
I recently had a lot of trouble getting my Raspberry Pi Zero 2 W to connect to WiFi on my Ubiquiti UniFi AP AC Lite after a firmware upgrade to 5.43.46. It's a 2.4GHz-only device, and in fact none of my 2.4GHz-only devices would connect.
Apparently something I did had set a setting called PMF (Protected Management Frames) to "Required", and the Pi Zero 2 W doesn't support that. In theory, this setting helps prevent clients from being disconnected by de-auth packets from an attacker, but not all devices support it.
To fix the issue, find the PMF setting under Settings -> WiFi -> <your network> -> Advanced -> Security and change it to "Optional".