

Again?!




I’ve finally pinned down my backup automation:

- All container units are PartOf= a custom containers.target.
- A snapshot unit has Conflicts=containers.target and creates read-only snapshots of the relevant subvolumes.
- It Wants=borgmatic.service, which creates a borg backup of the snapshots on a removable drive. It also starts containers.target on success or failure, since the containers no longer need to be stopped at that point.
- The backup is rclone synced to an S3-compatible storage.

What I’m not super happy about is starting containers.target via the systemd unit’s OnSuccess= mechanism, but I couldn’t find an elegant way to stop the target while the snapshots were being created and then restart it through the other dependency mechanisms.
I also realize it’s a bit fragile, since subsequent backup steps are started even if previous steps fail. But in the worst case that should just lead to either no data being written (if the mount is missing) or backing up the same data twice (not a problem due to deduplication).
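For illustration, a minimal sketch of what such a snapshot unit could look like. This assumes btrfs, and the unit, subvolume and snapshot paths are made up:

```ini
# snapshot.service (hypothetical names; adjust paths and subvolumes)
[Unit]
Description=Create read-only snapshots of the data subvolumes
# Stops the containers while the snapshot is taken
Conflicts=containers.target
# Kicks off the borg backup of the snapshots
Wants=borgmatic.service
Before=borgmatic.service
# Restart the containers either way
OnSuccess=containers.target
OnFailure=containers.target

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs subvolume snapshot -r /data /snapshots/data
```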


What I’m reading is that you want site-to-site connectivity. Wireguard + possibly dynamic DNS makes this pretty easy (assuming you can open ports and configure NAT at your sites). Or you could set up some other VPN solution like OpenVPN.
There’s also tailscale (a paid service) for facilitating the wireguard setup, NAT traversal and relaying. headscale is a self hosted solution that aims to provide something similar (but more limited in scope).
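As a rough sketch of the WireGuard side, a site-to-site peer config could look like this (all keys, hostnames and addresses below are placeholders):

```ini
# /etc/wireguard/wg0.conf on site A (placeholder values throughout)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <site-A-private-key>

[Peer]
# Site B, reachable via a dynamic DNS name
PublicKey = <site-B-public-key>
Endpoint = siteb.example.com:51820
# Tunnel address plus site B's LAN subnet
AllowedIPs = 10.0.0.2/32, 192.168.2.0/24
PersistentKeepalive = 25
```

With matching routes and NAT/port forwarding at each site, `wg-quick up wg0` brings the tunnel up.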


You can use man <command> (in this case man cut) to read a program’s manual page. Appending --help (without any other arguments) will often print at least a short description of the program and list the available options.
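For example, the options documented in cut(1) can be tried out directly:

```shell
# -d sets the field delimiter, -f selects which fields to print
echo "a:b:c" | cut -d: -f2
# prints: b

# A short usage summary without opening the manual
cut --help | head -n 1
```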


I’m also using Caddy with desec and get the same result when adding a new subdomain. It fixes itself after a while though (10+ minutes). Maybe try waiting a little longer.


I’m also using caddy with desec.io. When first triggering the challenge for an entry, it can fail a couple of times. I think it just takes a while for the DNS entry to be available.
Another thing that I’ve experienced is that I can’t use wildcard subdomain entries. My guess is that it’s somehow because I only have public IPv6 addresses (but I don’t remember the details). I have configured an internal DNS with the wildcard entry since I’m only ever connecting to that host via wireguard from outside my network. For the host itself I’ve created a regular AAAA record.
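For reference, my setup is roughly this Caddyfile sketch. It assumes the caddy-dns/desec plugin is compiled into the binary, and the domain, port and token variable are placeholders:

```
sub.example.dedyn.io {
	tls {
		dns desec {env.DESEC_TOKEN}
	}
	reverse_proxy localhost:8080
}
```

The DNS challenge can fail on the first few attempts until the TXT record has propagated.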


kmail is in the Arch repository (in extra); the package is called kmail.


Then maybe the downloaded packages are actually corrupted. You could check if they have plausible file sizes. IIRC pacman will ask you if you want to delete the non-matching files but I’m not entirely sure. They should end up in /var/cache/pacman/pkg.


Update only archlinux-keyring and try again.
# pacman -S archlinux-keyring
# pacman -Syu
In some cases you may need to re-populate the keyring.
# pacman-key --init
# pacman-key --populate


Thanks for reminding me to finish reading that book!


SourceForge has stopped distributing adware installers since it changed ownership a few years ago.

TL;DR:
The problem is growing leafy plants like lettuce and spinach in space can come with a side dish of bacteria, according to a new study from a team at the University of Delaware. In tests on plants grown in simulated microgravity, they were shown to actually be more susceptible than normal to the Salmonella enterica pathogen.


IIRC you have to have the app installed on your phone and be in the same network, but it’s been a while.


I did use franz at some point but eventually switched to ferdium, which doesn’t require an account. That being said I haven’t used either for a while now.


Hi, your link (the actual link, not the link text) is to https://www.reddit.comwww.myvote.wi.gov .
Do you (not you personally) though?


Was also going to suggest KeePass and syncthing, it’s been working flawlessly for a long time. In case of conflicts, at least keepassxc allows you to easily merge databases.
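If you prefer the command line, KeePassXC also ships a merge subcommand in its CLI; a sketch with placeholder file names:

```
# Merge the entries from the conflicted copy into the main database
keepassxc-cli merge Passwords.kdbx "Passwords (sync conflict).kdbx"
```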
All my services run in podman containers managed by systemd (using quadlets). They usually point to the :latest tag and I’ve configured the units to pull on start when there is a new version in my repository. Since I’m using opensuse microos, my server (and thus all services) restart regularly.
For the units that are configured differently, I update the versions in their respective ansible playbooks and redeploy (though I guess I could optimize this a bit, I’ve only scratched the surface of ansible).
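As a sketch of such a quadlet, a .container file could look like this (image name and paths are made up; AutoUpdate=registry is one way to get new :latest images pulled, the exact pull-on-start behavior depends on your podman version and options like Pull=):

```ini
# ~/.config/containers/systemd/myapp.container (hypothetical names)
[Container]
Image=registry.example.com/myapp:latest
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```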