Tools


Secrets in git repos

One issue with infrastructure-type repositories is what to do with sensitive data. Keys, tokens and other secrets shouldn’t be committed to git repos, but they have to go somewhere. In some cases you can put them in your CI/CD system and import them as variables, but that gets complicated quickly. One way to address it is to use git-crypt. It’s not a standard git extension, but it’s been around for a fair amount of time.
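A minimal sketch of git-crypt in action (the path pattern and GPG identity are placeholders, not from the post):

```sh
# one-time setup inside the repo
git-crypt init

# mark which paths get transparently encrypted
echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes

# grant a collaborator access via their GPG key
git-crypt add-gpg-user alice@example.com

# decrypt after a fresh clone
git-crypt unlock
```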

Configuring history

[Yesterday’s post][past] described the past repo. The first step to getting that to work is to correctly configure the history files in the first place. Some are easy, but some are more complex.

For [zsh][zsh] and [MySQL][mysql] it’s rather easy. Just put something like this in your ~/.zshrc:
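The snippet itself didn’t survive in this excerpt; based on the ~/.history.d layout described in the vcsh post below, it was presumably something along these lines (the sizes and options are my assumptions):

```sh
# per-host history files under ~/.history.d
HISTFILE=~/.history.d/zsh_history.$(hostname -s)
HISTSIZE=10000
SAVEHIST=10000
setopt INC_APPEND_HISTORY   # append entries as they happen, not at exit

# the MySQL client honors an environment variable for the same trick
export MYSQL_HISTFILE=~/.history.d/mysql_history.$(hostname -s)
```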


Saving history with vcsh

I’ve written a few articles on using vcsh for tracking your home dir. Unlike the previous options, vcsh lets me use multiple repositories. My first experiment with this was a past repository. Lots of Unix tools use the GNU readline library, so there are a number of history files to collect. I was already collecting all of them in ~/.history.d. In addition, due to problems with NFS-mounted home dirs, I’d long ago put the hostname in the names of history files as a way to prevent file corruption.
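Getting those files under vcsh is then just a matter of a dedicated repo; a sketch of the obvious commands:

```sh
# a separate vcsh repo just for history files
vcsh init past
vcsh past add ~/.history.d/*
vcsh past commit -m 'import history files'
```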

ZFS on Linux

This is why I avoid zfs - at least on Linux. Lots of people say I’m paranoid and that the issue has been decided, but it clearly hasn’t been. I get that it has a lot of benefits. I’m currently working on a FreeBSD-based project where zfs will be really beneficial, and I’ve used it before on Solaris. And the alternatives on Linux (primarily btrfs) still seem too unstable for my liking.

Dashing git

People don’t think of the unix command line as a UI, but it is, and it has its own idioms. Nearly all of them are conventions, not hard and fast rules. Because of this, things sometimes take on a few meanings. The first meaning of the dash ("-") is to mark a command line flag, as in ls -l or mkdir -p. It comes up less often, but another pretty well known meaning is stdin/stdout.
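For instance (a hedged illustration, not from the post):

```sh
# "-" as stdout: tar writes the archive to standard output for the pipe
tar -cf - ./src | gzip > src.tar.gz

# "-" as stdin: tar reads the archive from standard input
gunzip -c src.tar.gz | tar -tf -
```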

Using batch to avoid problems

I use batch to rerun failed cron jobs. I also use it for webhooks. There are three reasons for doing this, and why eventually I’ll end up changing even first runs of cron jobs to use batch: it handles load issues, it handles locking issues, and it returns quickly. The batch command is usually implemented as part of cron and at, but it runs at a certain load rather than at a certain time. The load threshold can be tuned when the system is configured, but the idea is that batch runs jobs one at a time when the system load is “low”.
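As a minimal illustration (the job path is hypothetical), batch takes the commands to run on stdin:

```sh
# queue a job to run once the load average drops below the
# threshold atd was started with (commonly 0.8)
echo '/usr/local/bin/rebuild-index' | batch
```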

How this blog gets deployed (part 2)

Yesterday I covered the overview of how this gets deployed. Now for the details. The script for testing is more aspirational than functional. It runs a spellcheck on the blog, but doesn’t die if it finds any mistakes. I’m supposed to read the output. I never do. I should though. Someday I’ll add consequences to the spellcheck run, but for now at least it tries.

```sh
#!/bin/sh
set -e
make spell
```

Next up is the script that does the build and deployment to pages.

How this blog gets deployed (part 1)

This website is maintained with hugo, which is a static site generator. That means the source is parsed, and all of the html, css and javascript are generated and saved as files. Deployment therefore just requires a plain, basic web server. But the site still does need to be deployed somehow. You could deploy it to s3, but I already have my own server, so I just deploy it there.
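In its simplest form that deployment can be a build plus an rsync; a sketch, with a placeholder host and path rather than the post’s actual script:

```sh
# generate the static site into public/ and push it to the server
hugo
rsync -av --delete public/ example.com:/var/www/blog/
```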

Go utilities in ~/bin

One nice side effect of using vcsh was developing more complex scripts to help me do things. I didn’t have to worry that a script or tool would get lost when a machine inevitably died. However, before writing a script, sometimes it’s not a bad idea to check and see if someone else already has. Lately many of those that I’ve found have been in Go. Originally I handled these with update, but that made update take a long time to run, and update would sometimes die if a rarely used Go util was broken.
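With a modern Go toolchain, grabbing such a utility looks something like this (the package is just an example, and the post likely predates module-based installs):

```sh
# install a Go utility straight into ~/bin
GOBIN=~/bin go install github.com/junegunn/fzf@latest
```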

Moving home - updates to vcsh usage

A while back I switched to vcsh. I’ve written a few articles on using it, but since then I’ve migrated machines a number of times. The big issue I’ve found is having to manually install software on each machine. There are things my scripts depend on and things I just expect to have, and manually creating them each time is annoying. So the obvious solution is a script. It actually gets used all the time, since I keep creating new dependencies or finding new tools I need, and I want those installed on all machines.
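A minimal sketch of the kind of bootstrap script meant here, assuming a Debian-ish system and an illustrative package list:

```sh
#!/bin/sh
# install anything my other scripts and habits depend on
set -e
for pkg in git vcsh mosh tmux; do
    # only install what's missing, so reruns are cheap
    command -v "$pkg" >/dev/null 2>&1 || sudo apt-get install -y "$pkg"
done
```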

Why vim was opening markdown files slowly.

A few months back I upgraded my vim configuration to use pathogen for managing my vim plugins. Since adding plugins was now a doddle, I found a few lists of “super-duper useful vim plugins you must have” and just installed them blindly. Along the way I also tweaked how various plugins and the like were configured. This included a one-line change to associate .md files with markdown (by default vim associates them with Modula-2, which I haven’t written code in for 20+ years).
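That one-line change is the standard association fix:

```vim
" treat .md as markdown rather than the Modula-2 default
autocmd BufNewFile,BufRead *.md set filetype=markdown
```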

Continuous Integration repo reminders

For a number of projects I work on, I pull in third party tools. Sometimes they’re straight copies - that’s what I inherited in some PHP projects I work on. But in others I use git subtree to pull them in. However, there’s a problem: I need a way to remind myself to check for updates. And generally I like things related to a project to live in the project. For my home projects I use GitLab and their CI system.
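Pulling upstream changes into a subtree is a single command (the prefix and URL are placeholders):

```sh
# pull upstream changes into the vendored directory
git subtree pull --prefix vendor/sometool https://example.com/sometool.git master --squash
```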

UUCP mail

I’m doing a bit more open source development, and some projects insist on updating code bases via patches on mailing lists. They generally react badly to html email, so using mutt works best with them. And that’s fine; it’s not too hard to get mutt working with gmail. And you can use pass to pull the password into your .muttrc like so:

```
password=`pass gmail/acct/mutt`
```

So there’s no need to keep a password resting in cleartext in your homedir.
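In context that backtick expansion typically feeds mutt’s IMAP password setting; a hedged sketch of the relevant .muttrc lines (the account name is a placeholder):

```
set imap_user = "someone@gmail.com"
set imap_pass = `pass gmail/acct/mutt`
```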

The Less Scary Guide To Google Authenticator and PAM

Modifying low level authentication is a worrisome thing. If you do it wrong, the fear is that you can’t log back in to fix it. So unlike some other guides out there, I’ll point out the danger points here along with some ideas on how to address them. This is kind of long, so here’s a high level overview: install client software, install server software, activate server software, generate key, done!
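A Debian-flavored sketch of those steps (package and file names vary by distro, and keeping a working root session open while you test is the key safety net):

```sh
# install the server-side PAM module
sudo apt-get install libpam-google-authenticator

# activate it for ssh: add this line to /etc/pam.d/sshd
#   auth required pam_google_authenticator.so
# and enable challenge-response in /etc/ssh/sshd_config:
#   ChallengeResponseAuthentication yes

# generate a key as the user and scan the QR code with the phone app
google-authenticator
```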

The vcsh write-gitignore command

I’ve been using vcsh for a few months now and am very happy with it. Currently I’m using two repos: a home repo, which is really just a continuation of my old mercurial (previously subversion) home dir; and a past repo, which is where all my history files are stored. One issue I had was that while vcsh st worked fine, vcsh home st really didn’t - it showed me all the files that weren’t tracked by git.
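The fix is the command in the title, run per repo:

```sh
# generate ignore rules so untracked files stop showing up in status
vcsh write-gitignore home
```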

Version control for ~ (v3.0)

For a long time I used NFS for my home dir. That worked great at home and at work, where I’d have a desktop and a server. But then I got a laptop and that stopped working. For a while I’d rsync things, but then I came across a “version control your home dir” article (this one?) and was hooked.


Docker Versus a Fossil

Docker is essentially “container tooling 2.0”, following the 1.0 attempts of LXC. And it now has a number of competitors - including the original LXC project. All of them look interesting, and rapid feedback loops are making them better.

Containers themselves aren’t really magical. They’re built on a number of newer namespace features in the Linux kernel. If you’re curious, Julia Evans has written some great pieces on how containers work, with Running containers without Docker being a really good starting point.

Her articles show an interest in exploring and learning why things work as they do. A new tool shows up that could be useful, and she dissects it to see how it works. A good, positive approach to a rapidly changing industry.


Hugo

I had been using Pelican to manage this site, but switched to Hugo this week. My main reason for this is that I want to learn Go, and Hugo is written in Go and uses Go templates in themes and a few other places. After having played with it for a few weeks, I thought I’d share my impressions so far.


SRI hashes for CDN js and css files

Subresource Integrity (SRI) is a nifty idea: hashes are used to verify that external resources your web app depends on haven’t been compromised. Using content delivery networks (CDNs) for common web resources (javascript and css) makes pages load faster, since chances are those things have been loaded by other sites and are cached by the browser. It also means bandwidth generally gets used better, which is a good thing. But it does mean you’re trusting the CDN.
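Generating the hash is a one-liner; a hedged example with a placeholder file and URL:

```sh
# base64-encoded sha384 digest for the integrity attribute
openssl dgst -sha384 -binary jquery.min.js | openssl base64 -A

# which slots into the tag like:
# <script src="https://cdn.example.com/jquery.min.js"
#         integrity="sha384-<digest-from-above>"
#         crossorigin="anonymous"></script>
```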

More filesystem fun...

So as a followup, the flaw in my plan was that fat32/vfat doesn’t grok users and groups - or their associated permissions. Therefore both cp and tar emit loads of errors when copying to such a filesystem. Which is annoying. So I went the tarfile route. While tarring to the raw device is tempting, I imagined walking someone through extracting that over the phone and got frustrated before even having the conversation.
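Since tar records owners and modes inside the archive itself, the filesystem underneath doesn’t need to understand them. A sketch with placeholder paths:

```sh
# archive to a file on the vfat disk; ownership and permissions
# travel inside the tarball rather than in the filesystem
tar -czf /mnt/usbdisk/backup.tar.gz -C /srv/data .
```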

Disk formatting

I currently need to prime a backup. There’s around 1.5TB of data on a Linux server in the cloud, and a client wants regular backups of it to an OS X backup server they use for their media backups. I have a local copy, so I thought I’d do the modern version of a station wagon full of tapes to reduce the bandwidth used. Unfortunately, this brings us to filesystem fun.

| Filesystem | Linux    | OS X     |
| ---------- | -------- | -------- |
| ext[234]   | yes      | via fuse |
| hfs+       | not well | yes      |
| fat32/vfat | yes      | yes      |
| ufs        | kinda    | kinda    |

Of these, UFS is most like the fs I’m copying from (ext4).
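Given that table, fat32/vfat is the only option both sides handle well, and formatting the transfer disk that way is simple (the device name is an example; double-check it before running mkfs):

```sh
# create a fat32 filesystem on the transfer disk's partition
mkfs.vfat -F 32 /dev/sdb1
```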

Wiping disks

I’m returning a server to a hoster. I generally trust them and have no reason to believe they’d go snooping through my disk, but it’s always nice to clean things up. There are a lot of tools for this: wipe, secure-delete and several others. But none really fit my use case: I was trying to clean up free space as I backed up and deleted personal data on the server.
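The classic trick for scrubbing already-freed space, as opposed to wiping a whole disk, is to fill the filesystem with zeros and then delete the fill file. A hedged sketch, not necessarily the tool the post settled on:

```sh
# fill free space with zeros; dd exits nonzero once the disk fills up
dd if=/dev/zero of=/tmp/zerofill bs=1M || true
sync
# removing the file frees the now-zeroed blocks again
rm -f /tmp/zerofill
```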

A Year of Moshing

I’ve been using mosh for around a year now and find it very handy for interactive ssh sessions from my laptop. It’s even handy from desktops or servers if you have a spotty network connection. However, I have noticed one issue: you’ll get a buildup of mosh-server processes on the machine(s) you mosh into if your mosh sessions tend to end uncleanly - that is, if the client mosh dies while not connected to the mosh server.
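One way to clean those up, assuming you run it from inside your current mosh session on a Linux host (a sketch, not an official mosh feature):

```sh
# kill every mosh-server except the one hosting this session;
# inside a mosh session our shell's parent is that mosh-server
pgrep mosh-server | grep -vx "$(ps -o ppid= -p $$ | tr -d ' ')" | xargs -r kill
```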