I’ve written a few articles on using vcsh for tracking your home dir. Unlike previous options vcsh lets me use multiple repositories. My first experiment with this was a past repository.
Lots of Unix tools use the GNU readline library, so there are a number of history files to collect. I was already collecting all of them in ~/.history.d. In addition, due to problems with NFS-mounted home dirs, I'd long ago put the hostname in the names of the history files as a way to prevent file corruption.
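The setup amounts to a couple of lines of shell config. A sketch, using bash's HISTFILE as the example (the exact variable differs per tool):

```shell
# per-host history files under ~/.history.d, so two hosts sharing an
# NFS-mounted home dir never write to the same file at once
mkdir -p "$HOME/.history.d"
export HISTFILE="$HOME/.history.d/bash_history.$(hostname -s)"
```

Other readline-using tools get the same treatment via their own history-file settings.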
This is why I avoid zfs - at least on Linux. Lots of people say I'm paranoid and that the issue has been decided, but it clearly hasn't been.
I get that it has a lot of benefits. I’m currently working on a FreeBSD based project where zfs will be really beneficial. And I’ve used it before on Solaris.
And the alternatives on Linux (btrfs primarily) still seem too unstable for my liking.
People don’t think of the unix command line as a UI, but it is, and it has its own idioms. Nearly all of them are conventions rather than hard and fast rules, so sometimes a symbol takes on a few meanings.
The first meaning of the dash, "-", is to mark a command line flag, as in ls -l or mkdir -p. It comes up less often, but another pretty well known meaning is to stand in for stdin or stdout.
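A couple of quick examples of that second meaning:

```shell
# "-" in place of a filename tells cat to read standard input
printf 'one\ntwo\n' | cat -

# with tar, "-f -" writes the archive to stdout so it can go down a pipe
tar -cf - . | gzip | wc -c
```

Which of stdin or stdout the dash means depends on whether the tool would have read from or written to the file it replaces.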
At geek brekkie yesterday P— mentioned the idea of archiving links that you use with the Internet Archive. This seemed like a great idea to use in deploying my blog.
I’ve wanted to add a general link checker to look for broken links. This isn’t quite the same thing but it would be an option for remediating link rot when found. Plus it seemed simple to do.
My proof of concept for this also provides an excellent answer to a common question: when have you gone too far for a shell script and should switch to a “real language”?
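The core of it really is simple: the Internet Archive's Wayback Machine will archive a page when you hit its save endpoint. A sketch - links.txt here is a stand-in for however you extract the blog's outbound links:

```shell
# ask the Wayback Machine to archive each outbound link
while read -r url; do
  curl -s -o /dev/null "https://web.archive.org/save/$url"
  sleep 5   # be polite; don't hammer the archive
done < links.txt
```

Anything much fancier than this - parsing HTML, tracking state, retrying - is where the "real language" question starts to bite.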
I use batch to rerun failed cron jobs. I also use it for webhooks. There are three reasons for doing this, and why eventually I’ll end up switching even first runs of cron jobs to batch: it handles load issues, it handles locking issues, and it returns quickly.
The batch command is usually implemented as part of cron and at, but it runs jobs at a certain load rather than at a certain time. The load threshold can be changed when the system is configured, but the idea is that batch runs jobs one at a time when the system load is “low”.
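Using it is just a matter of piping the command in (the job path here is a placeholder):

```shell
# queue the job; atd starts it once the load average drops below its
# configured threshold (see atd's -l option), one batch job at a time
echo '/usr/local/bin/nightly-report' | batch

# show what's still waiting in the queue
atq
```

Because batch serialises jobs, it also acts as a crude lock: two queued runs of the same job won't trample each other.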
Cron jobs sometimes fail, and the old way of getting emails from the cron daemon doesn’t really scale. For instance you might have a job that fails from time to time and that’s ok - but if it fails for too long, that’s a problem. Generally, email as an alerting tool is a Bad Thing and should be avoided.
Since I have prometheus set up for everything, the easiest thing is to use the textfile-collector from node exporter to dump some basic stats.
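A sketch of the wrapper idea - the job name is illustrative, and in real use TEXTFILE_DIR would point at node exporter's --collector.textfile.directory rather than /tmp:

```shell
# run the cron job, then write its exit status and run time where the
# textfile collector can scrape them
TEXTFILE_DIR="${TEXTFILE_DIR:-/tmp}"
name=mybackup   # assumed job name
true            # stand-in for the real job command
status=$?
{
  echo "cronjob_exit_status{job=\"$name\"} $status"
  echo "cronjob_last_run_seconds{job=\"$name\"} $(date +%s)"
} > "$TEXTFILE_DIR/$name.prom.tmp"
# write-then-rename so the collector never reads a half-written file
mv "$TEXTFILE_DIR/$name.prom.tmp" "$TEXTFILE_DIR/$name.prom"
```

With the timestamp metric, "fails for too long" becomes a simple alert rule on how stale the last successful run is.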
Yesterday I covered the overview of how this gets deployed. Now for the detail.
The script for testing is more aspirational than functional. It runs a spellcheck on the blog, but doesn’t die if it finds any mistakes. I’m supposed to read the output. I never do. I should, though. Someday I’ll add consequences to the spellcheck run, but for now at least it tries.
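The whole thing amounts to only a few lines. A sketch of it, assuming aspell and the blog's markdown living under content/ (both assumptions on my part):

```shell
#!/bin/sh
# aspirational spellcheck: print misspelled words, but never fail the build
find content -name '*.md' 2>/dev/null |
while read -r f; do
  aspell list < "$f"
done | sort -u
# adding "consequences" would mean exiting non-zero when this finds words
```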
But that still means it needs to be deployed.
You could deploy just to s3, but I already have my own server, so I just deploy there.
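Deploying to your own box is then a one-liner; a sketch, with the host and path as placeholders:

```shell
# build the site, then mirror the output into the web root on the server
hugo && rsync -az --delete public/ myserver:/var/www/blog/
```

The --delete keeps the server from accumulating pages that no longer exist locally.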
For some reason a few weeks back I was wondering about using ssh keys to encrypt/decrypt files. Seems like a thing that should be possible, why not? And sure enough, it’s been done.
This won’t be as good as using gpg keys. Specifically, without the web of trust it can be hit with MITM attacks, but I think it would be “good enough” for most people in most uses. And in my experience getting people to use gpg is like pulling teeth.
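The recipe I found boils down to converting the ssh public key into a PEM that openssl understands. A sketch using a throwaway RSA key - in real use you'd grab someone's existing ~/.ssh/id_rsa.pub, and note that plain RSA can only encrypt something smaller than the key, so for big files you'd encrypt a symmetric key instead:

```shell
# throwaway key for the demo (in practice: an existing ssh keypair)
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -b 2048 -m PEM -N '' -f /tmp/demo_key

# export the ssh public key as PKCS8 PEM so openssl can encrypt to it
ssh-keygen -e -m PKCS8 -f /tmp/demo_key.pub > /tmp/demo_key.pem

echo 'a small secret' > /tmp/plain.txt
openssl pkeyutl -encrypt -pubin -inkey /tmp/demo_key.pem \
  -in /tmp/plain.txt -out /tmp/secret.enc

# only the private key holder can read it back
openssl pkeyutl -decrypt -inkey /tmp/demo_key -in /tmp/secret.enc
```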
While playing with the Twitter API via a Go lib I saw someone call on people to troll reply to a tweet with a specific piece of text.
This seemed like a really easy thing to search for and block. So I modified the little Go program from “starting with ephemeral” and put this in as the function run from main().
It searches for the string it’s given and then if the tweet consists solely of that text (compared by turning the string lowercase and then stripping out everything that’s not a letter) it blocks the user.
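That normalisation - lowercase, then drop anything that isn't a letter - can be sketched in a couple of lines of shell, so variant spellings of the troll text all compare equal (the Go version does the equivalent with its string functions):

```shell
# lowercase the text, then delete every character that isn't a letter
normalize() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cd '[:alpha:]'
}

normalize 'TROLL Text!!1'   # -> trolltext
```

Both "TROLL Text!!1" and "troll text" reduce to the same string, so the block decision is a plain equality check.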
One nice side effect of using vcsh was developing more complex scripts to help me do things. I didn’t have to worry that a script or tool would get lost when a machine inevitably died.
However before writing a script, sometimes it’s not a bad idea to check and see if someone else already has. Lately many of those that I’ve found have been in Go.
Originally I did these with update, but it made update take a long time to run and it would sometimes die if a rarely used Go util was broken.
A while back I switched to vcsh. I’ve written a few articles on using it but since then I’ve migrated machines a number of times.
The big issue I’ve found is having to manually install software on each machine. There are things my scripts depend on and things I just expect to have, and manually installing them each time is annoying.
So the solution, obviously, is a script. It actually gets used all the time, as I might create new dependencies or find new tools I need, and I want those installed on all machines.
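A sketch of the idea - the tool list here is illustrative, not my actual set - is a check-and-report loop whose output you can feed to whatever package manager the machine uses:

```shell
# report which of the expected tools are missing on this machine
check_tools() {
  missing=''
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  [ -z "$missing" ] || echo "need to install:$missing"
}

check_tools git vcsh curl jq   # pipe into apt/dnf/brew as appropriate
```

Because it only reports what's missing, it's safe to rerun any time a new dependency appears.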
So now ephemeral was working on my current tweets, but not handling the “historical” ones, which Twitter deems to be anything older than the 3,200 most recent.
There are APIs that you can use to access those, but they require a paid license. So instead I went into my settings on Twitter and requested an archive of my tweets. This takes a number of hours, but eventually you get a download link, and a while after using that you end up with a huge zip file.
I’ve set up Vicky Lai’s ephemeral to make my tweets, well, ephemeral. But if you follow her README you’ll see the initial config is all manual. I’d rather have that be scripted so I can more easily replicate it in the future.
So I wrote a script to create the AWS lambda instance. The prerequisites for this are covered in Vicky’s README but you’ll also need the aws cli, a go install and I find it’s also useful to use named profiles.
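The heart of the script is the create-function call. A sketch - the profile name, role ARN, and zip path are all placeholders, and it assumes the Go binary has already been built and zipped:

```shell
# create the lambda from the zipped Go binary, under a named aws profile
aws lambda create-function \
  --profile personal \
  --function-name ephemeral \
  --runtime go1.x \
  --handler main \
  --zip-file fileb://main.zip \
  --role arn:aws:iam::123456789012:role/ephemeral-lambda
```

Named profiles keep personal credentials separate from any work ones, which is the main reason I find them useful here.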
tl;dr my year in vim
Gource is a neat tool for visualising the history of a software project. In a way it’s kind of a fun combination of this scene from Jurassic Park and version control.
Reading up on it I learned it could also visualise multiple repositories so I decided it would be kind of fun to do just that. I use vcsh to manage my home directory, pass to manage passwords, Hugo for my website and slack for managing my personal servers.
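The way this works is gource's custom log format: dump one log per repo, merge them by timestamp, and play back the result. A sketch, with the repo paths as placeholders:

```shell
# export each repo's history in gource's custom log format
gource --output-custom-log home.log ~/src/home
gource --output-custom-log pass.log ~/.password-store

# custom log lines begin with a unix timestamp, so a numeric sort merges them
cat home.log pass.log | sort -n > combined.log

gource combined.log
```

Prefixing each repo's file paths with a directory name before merging keeps the repos visually separate in the playback.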
Work had a bake off thingy and I managed to come in second. The winner did a bread pudding that included whisky - a reminder that, just like in writing, you need to know your audience! The cookbook for my selection of cookies is based on my Thanksgiving cookbook template.
A few years back I linked a number of drabbles and twabbles I had written. So a new one for the holiday season and the relaunch of the Drabblecast:
“All the reindeer finally loved him. They all shouted with glee. Rudolph grinned at them all. He released his tentacles.”
Also in writing this I reread the old contributions and realised “Different” would be better if I changed just one word (see if you can spot it):
I had this idea I would start doing more regular blog posts last year but that seems to have failed. Maybe next year!
I had help with this year’s troff adventures, so thanks to Catherine for this year’s 2018 cookbook. There are some recipe errata I need to fix - but those are all mine. Essentially I winged (sorry) the turkey burritos on the day, and what I actually did vs what I vaguely considered doing early Saturday morning did not line up.
Another adventure with troff generated a menu. By all reports people had a good time. I was a bit more adventurous this year and did biscuits - a variation on Allen’s. This meant I had to cook one dish 10 minutes before dinner so timing of everything else became a bit more strict.
It worked however and people liked them. The rice cooker and the slow cookers really did make the timing issue less stressful this time.
Following up on the cube root trick post, I wrote a version that speaks the number in Chrome. It’s an experimental API and I doubt I’m choosing the voice very well. But it trains you in the trick better than reading the cubed number on screen does.
On browsers that don’t support this API it degrades to the way the previous version worked.
I also updated the old tool to take input once the second number is typed in - with that I got down to a sustained 3.