One issue with infrastructure-focused repositories is what to do with sensitive data. Keys, tokens, and other secrets shouldn’t be committed to git repos, but they have to go somewhere. In some cases you can put them in your CI/CD system and import them as variables, but that gets complicated quickly. One way to address it is to use git-crypt. It’s not a standard git extension, but it’s been around for quite a while.
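A typical git-crypt setup looks something like this. This is just a sketch: the `secrets/**` path and the email address are placeholders, not anything from a real repo.

```
# One-time setup in the repo: generate the repo's symmetric key.
git-crypt init

# Mark which paths should be encrypted at commit time.
echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes

# Grant a collaborator access via their GPG key.
git-crypt add-gpg-user you@example.com

# On a fresh clone, decrypt the files with:
git-crypt unlock
```

Files matching the `.gitattributes` pattern are transparently encrypted on commit and decrypted on checkout, so the working tree looks normal while the repo contents stay opaque.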
In [yesterday’s post][past] the past repo was described. The first step to getting that to work is to correctly configure the history files in the first place. Some are easy, but some are more involved. For [zsh][zsh] and [MySQL][mysql] it’s rather easy. Just put something like this in your config files.
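For example, the zsh and MySQL settings might look like the following. This is a sketch: the `~/.history.d` layout and per-host naming follow the setup described below, but the exact file names are up to you.

```
# In ~/.zshrc — zsh sets $HOST itself, so this keeps one history per machine:
HISTFILE="$HOME/.history.d/zsh_history.$HOST"

# The mysql client reads its history location from an environment variable:
export MYSQL_HISTFILE="$HOME/.history.d/mysql_history.$HOST"
```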
I’ve written a few articles on using vcsh for tracking your home dir. Unlike previous options, vcsh lets me use multiple repositories. My first experiment with this was a past repository. Lots of Unix tools use the GNU readline library, so there are a number of history files to collect. I was already collecting all of them in ~/.history.d. In addition, due to problems with NFS-mounted home dirs, I’d long ago put the hostname in the names of history files as a way to prevent file corruption.
This is why I avoid zfs, at least on Linux. Lots of people say I’m paranoid and that the issue has been decided, but it clearly hasn’t been. I get that it has a lot of benefits. I’m currently working on a FreeBSD-based project where zfs will be really beneficial. And I’ve used it before on Solaris. And the alternatives on Linux (primarily btrfs) still seem too unstable for my liking.
People don’t think of the Unix command line as a UI, but it is, and it has its own idioms. Nearly all of them are conventions, not hard and fast rules, so sometimes the same symbol takes on a few meanings. The first meaning of the dash, "-", is to mark a command line flag, as in ls -l or mkdir -p. Another pretty well known, if less common, meaning is stdin/stdout: many commands accept a bare "-" in place of a filename to mean standard input or standard output.
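A quick illustration of both meanings; the file and directory names here are just examples.

```shell
# '-' as an input filename: cat reads standard input instead of a file.
echo "hello" | cat -

# '-' as an output filename: tar writes the archive to stdout so it can
# be piped straight into gzip.
mkdir -p demo && echo "data" > demo/file.txt
tar -cf - demo | gzip > demo.tar.gz
```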
At geek brekkie yesterday P— mentioned the idea of archiving the links you use with the Internet Archive. This seemed like a great idea to use in deploying my blog. I’ve wanted to add a general link checker to look for broken links. This isn’t quite the same thing, but it would be an option for remediating link rot when found. Plus it seemed simple to do. My proof of concept for this also provides an excellent answer to a common question: when have you gone too far with a shell script and should switch to a “real language”? This script has gotten past that line, so I thought I’d share it.
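As a minimal sketch of the idea: the Wayback Machine’s /save/ endpoint is what triggers a snapshot, though the function name here is mine, not from the actual script.

```shell
# Build the Wayback Machine "save" URL for a link; requesting that URL
# asks the Internet Archive to take a fresh snapshot of the page.
save_url() {
    printf 'https://web.archive.org/save/%s' "$1"
}

# To actually trigger the snapshot, something like:
#   curl -s -o /dev/null "$(save_url https://example.com/)"
```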
I use batch to rerun failed cron jobs. I also use it for webhooks. There are three reasons for doing this, and why eventually I’ll end up changing even first runs of cron jobs to use batch: it handles load issues, it handles locking issues, and it returns quickly. The batch command is usually implemented as part of cron and at, but it runs jobs at a certain load rather than a certain time. The load threshold can be changed when the system is configured, but the idea is that batch runs jobs one at a time when the system load is “low”.
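For example, a crontab entry might hand the job to batch instead of running it directly. The script path here is hypothetical.

```
# Fire at 03:00, but queue the job with batch so it only starts once the
# load average drops below atd's threshold (configurable via atd -l):
0 3 * * * echo '/usr/local/bin/nightly-report.sh' | batch
```

Because batch runs its queue one job at a time, jobs queued this way also can’t trample each other the way overlapping cron runs can.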
Cron jobs sometimes fail, and the old way of getting emails from the cron daemon doesn’t really scale. For instance, you might have a job that fails from time to time and that’s OK, but if it fails for too long it’s a problem. Generally, email as an alerting tool is a Bad Thing and should be avoided. Since I have Prometheus set up for everything, the easiest thing is to use the textfile collector from node_exporter to dump some basic stats.
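A wrapper along these lines can drop a success timestamp where node_exporter’s textfile collector (its `--collector.textfile.directory`) will pick it up. This is a sketch: the metric name and the function are assumptions, not the collector’s API.

```shell
# record_success JOB DIR
# Writes a success-timestamp metric for JOB into DIR. The file is written
# to a temp path first and then renamed, so node_exporter never scrapes a
# half-written file.
record_success() {
    job="$1"
    dir="$2"
    tmp="$(mktemp)"
    printf '# HELP cron_last_success_timestamp_seconds Unix time of last success.\n' > "$tmp"
    printf '# TYPE cron_last_success_timestamp_seconds gauge\n' >> "$tmp"
    printf 'cron_last_success_timestamp_seconds{job="%s"} %s\n' "$job" "$(date +%s)" >> "$tmp"
    mv "$tmp" "$dir/cron_${job}.prom"
}

# Usage from a cron wrapper (paths hypothetical):
#   /usr/local/bin/nightly-report.sh && \
#       record_success nightly_report /var/lib/node_exporter
```

An alert can then fire on `time() - cron_last_success_timestamp_seconds` exceeding however long the job is allowed to stay broken.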
Yesterday I covered the overview of how this gets deployed. Now for the details. The script for testing is more aspirational than functional. It runs a spellcheck on the blog but doesn’t die if it finds any mistakes. I’m supposed to read the output. I never do. I should though. Someday I’ll add consequences to the spellcheck run, but for now at least it tries.

```shell
#!/bin/sh
set -e
make spell
```

Next up is the script that does the build and deployment to pages.