At geek brekkie yesterday P— mentioned the idea of archiving links you use with the Internet Archive. This seemed like a great fit for my blog’s deploy process. I’ve wanted to add a general link checker to look for broken links. This isn’t quite the same thing, but it would be an option for remediating link rot when found. Plus it seemed simple to do. My proof of concept also provides an excellent answer to a common question: when has a shell script gone too far and should be rewritten in a “real language”? This script has crossed that line, so I thought I’d share it.
Cron jobs sometimes fail, and the old way of getting emails from the cron daemon doesn’t really scale. For instance, you might have a job that fails from time to time and that’s OK - but if it fails for too long it’s a problem. Generally, email as an alerting tool is a Bad Thing and should be avoided. Since I have Prometheus set up for everything, the easiest thing is to use the textfile collector from node_exporter to dump some basic stats.
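The idea can be sketched as a small wrapper around the job - the collector directory and metric names here are illustrative assumptions, not necessarily what my setup uses:

```shell
#!/bin/sh
# Wrap a cron job and record its result for Prometheus' node_exporter
# textfile collector. TEXTFILE_DIR is an assumption: point it at
# whatever you pass to node_exporter's --collector.textfile.directory.
TEXTFILE_DIR="${TEXTFILE_DIR:-/var/lib/node_exporter/textfile_collector}"

run_and_record() {
    job="$1"; shift
    "$@"; code=$?
    tmp=$(mktemp)
    cat > "$tmp" <<EOF
# HELP cron_job_exit_code Exit code of the last run of the job.
# TYPE cron_job_exit_code gauge
cron_job_exit_code{job="$job"} $code
# HELP cron_job_last_run_seconds Unix time the job last finished.
# TYPE cron_job_last_run_seconds gauge
cron_job_last_run_seconds{job="$job"} $(date +%s)
EOF
    # mv is atomic, so node_exporter never reads a half-written file.
    mv "$tmp" "$TEXTFILE_DIR/cron_$job.prom"
    return $code
}
```

A crontab entry then calls `run_and_record nightly-backup /usr/local/bin/backup.sh`, and an alert like `time() - cron_job_last_run_seconds{job="nightly-backup"} > 86400` catches the “failing for too long” case without caring about the occasional blip.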
For some reason a few weeks back I was wondering about using ssh keys to encrypt and decrypt files. Seems like a thing that should be possible, why not? And sure enough, it’s been done. This won’t be as good as using gpg keys - specifically, without the web of trust it’s vulnerable to MITM attacks - but I think it would be “good enough” for most people in most uses. And in my experience, getting people to use gpg is like pulling teeth.
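As a sketch of the general approach (not necessarily the exact tool linked above): openssl can encrypt to an RSA ssh key once the public key is converted to PEM. Note that raw RSA can only encrypt data smaller than the key, so real tools wrap a random symmetric key instead; a throwaway key pair is used here for demonstration:

```shell
# Generate a demo RSA key in PEM format (-m PEM so openssl can read
# the private half directly).
ssh-keygen -q -t rsa -b 2048 -m PEM -N '' -f demo_key

# Convert the ssh public key to a PEM that openssl understands.
ssh-keygen -e -m PKCS8 -f demo_key.pub > demo_key.pem

# Encrypt with the public key...
echo 'a secret' > secret.txt
openssl pkeyutl -encrypt -pubin -inkey demo_key.pem \
    -in secret.txt -out secret.enc

# ...and only the private key can decrypt it.
openssl pkeyutl -decrypt -inkey demo_key -in secret.enc
```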
While playing with the Twitter API via a Go library, I saw someone call on people to troll-reply to a tweet with a specific piece of text. This seemed like a really easy thing to search for and block. So I modified the little Go program from my earlier ephemeral setup, making this the function run in main(). It searches for the string it’s given, and if a tweet consists solely of that text (compared by lowercasing the string and then stripping out everything that’s not a letter), it blocks the user.
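The comparison step can be sketched in shell (the function names are mine, not from the Go program):

```shell
# Lowercase the text, then strip everything that isn't a letter, so
# "OK, Boomer!!" and "ok boomer" normalise to the same string.
normalise() {
    printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cd '[:lower:]'
}

# True when the tweet consists solely of the search text.
matches_exactly() {
    [ "$(normalise "$1")" = "$(normalise "$2")" ]
}
```

So `matches_exactly 'OK, Boomer!!' 'ok boomer'` succeeds, while a tweet with any extra words doesn’t match and is left alone.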
So now ephemeral was working on my current tweets, but not handling the “historical” ones, which Twitter deems to be those older than the 3,200 most recent. There are APIs you can use to access those, but they require a paid license. So instead I went into my Twitter settings and requested an archive of my tweets. This takes a number of hours, but eventually you get a download link, and a while after using that you end up with a huge zip file.
I’ve set up Vicky Lai’s ephemeral to make my tweets, well, ephemeral. But if you follow her README you’ll see the initial config is all manual. I’d rather have that scripted so I can more easily replicate it in the future, so I wrote a script to create the AWS Lambda instance. The prerequisites are covered in Vicky’s README, but you’ll also need the aws CLI, a Go install, and I find it’s also useful to use named profiles.
tl;dr my year in vim. Gource is a neat tool for visualising the history of a software project. In a way it’s kind of a fun combination of this scene from Jurassic Park and version control. Reading up on it, I learned it could also visualise multiple repositories, so I decided it would be fun to do just that. I use vcsh to manage my home directory, pass to manage passwords, Hugo for my website and slack for managing my personal servers.
Following up on the cube root trick post, I wrote a version that speaks the number in Chrome. It’s an experimental API and I doubt I’m choosing the voice very well, but it trains the trick better than reading the cubed number on screen. On browsers that don’t support the API it degrades to the way the previous version worked. I also updated the old tool to accept input as soon as the second number is typed - with that I got down to a sustained 3.5 second response time.
After reading how to get cube roots in your head in a particular set of circumstances, I learned the trick while stuck in traffic. But the article says you have to practice a lot, so I wrote a tool in C to do that. Then I realised a web version might be a bit more accessible, so here’s one. Made the trainer work more nicely on mobiles thanks to [Kae]’s suggestion.
Since switching to vcsh I’ve been writing more personal scripts, since they’re pretty easy to ship around to each machine. Plus more things have REST APIs, and Python’s slumber makes it dead easy to talk to them. However, I then have to make sure modules like slumber are installed, since it’s not in the Python standard library. This adds a level of awkwardness to the scripts in my ~/bin. While looking for something else I came across this answer on Stack Overflow, and it fit what I wanted to do.
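I won’t reproduce the answer verbatim, but the general shape of the trick - a script that bootstraps its own virtualenv on first run - looks something like this (paths and names are placeholders):

```shell
# Sketch: a ~/bin script bootstraps its own virtualenv the first time
# it runs, so third-party modules like slumber never need a
# system-wide install on each machine.
ensure_venv() {
    venv="$1"; shift
    if [ ! -x "$venv/bin/python" ]; then
        python3 -m venv "$venv"
        if [ $# -gt 0 ]; then
            "$venv/bin/pip" install --quiet "$@"
        fi
    fi
}

# A wrapper in ~/bin would then look like (names are placeholders):
#   ensure_venv ~/.cache/myscript-venv slumber
#   exec ~/.cache/myscript-venv/bin/python ~/bin/myscript.py "$@"
```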
This is just a simple thing, but it makes working in Go’s source tree way easier, particularly since I use repos from three different sites that start with “git.” In zsh there’s a thing called cdpath which zsh will use to complete a cd command. For the longest time mine was set to cdpath=(~ ~/src), so if I typed cd foo and there wasn’t a foo in the current directory, zsh would go look in ~ and then in ~/src.
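For Go work the tweak is just adding the GOPATH source tree to that array (the exact path is an assumption - use whatever your GOPATH is):

```shell
# ~/.zshrc
# With the Go tree in cdpath, `cd github.com/<user>/<repo>` works from
# anywhere, and completion covers all three of those "git." sites.
cdpath=(~ ~/src ~/go/src)
```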
Quite often I find it useful to push to more than one repo. If a repo is used for system configuration, I might have a central repo on github.com but also have it on the servers it’s used to manage. It can also be useful for some code review.
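One way to set this up (URLs below are placeholders) is to give origin more than one push URL, so a single git push updates every copy:

```shell
# Give origin extra push URLs; `git push origin` then pushes to all
# of them. Gotcha: the first --add --push overrides the implicit push
# URL, so re-add the original URL if you still want to push there too.
git remote set-url --add --push origin git@github.com:me/config.git
git remote set-url --add --push origin ssh://myserver/srv/git/config.git
git remote -v   # shows one fetch URL and two push URLs
```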
So PHP isn’t the greatest thing ever. Nonetheless its tooling has evolved, and there are a number of things you can do to make it less awful. The goal of this post is to cover setting up a PHP project so that you can pull in dependencies and provide an easy way for users to install your software.
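The starting point is a composer.json at the project root - the names and versions below are placeholders:

```json
{
    "name": "myvendor/myproject",
    "description": "An example package definition.",
    "require": {
        "php": ">=7.4",
        "guzzlehttp/guzzle": "^7.0"
    },
    "autoload": {
        "psr-4": { "MyVendor\\MyProject\\": "src/" }
    }
}
```

`composer install` then pulls the dependencies into vendor/ and generates vendor/autoload.php, and anyone who wants your software can just `composer require myvendor/myproject`.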
Been bad at updating this, but oddly I was inspired by another good file system article (there have been a lot of them lately). This one essentially covers what POSIX says, how file systems implement it, and what developers misunderstand about that interaction. Link to the original article, which I got to via this LWN post.
As the number of filesystems - local and remote - and filesystem-like things increases, the subtle and not so subtle changes in semantics continue to grow. What’s the fastest way to copy a file varies even on local, POSIXy filesystems.
Static analysis is one way to root out more complex bugs in C and C++ programs, and clang offers a static code analyzer. To make use of it as an analyze target in an autoconf’d project, just add this snippet to your Makefile.am. Put the filenames you want analyzed in the analyze_srcs variable, and if you have any local include dirs, add them as -I flags on the clang lines.
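A sketch of what such an analyze target can look like (the library variable name is a placeholder; adjust the flags to taste):

```makefile
# Static analysis.
analyze_srcs = $(mylib_la_SOURCES)
.PHONY: analyze
analyze:
	@for f in $(analyze_srcs); do \
		clang --analyze $(DEFAULT_INCLUDES) $(AM_CPPFLAGS) $$f || exit 1; \
	done
```

Running `make analyze` then produces a .plist report per source file, and the `|| exit 1` makes the target fail loudly if the analyzer can’t process a file.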
This xkcd comic explains Heartbleed in 6 easy panels. Nifty idea, anthropomorphising the heap as a thought balloon. Are you still there, server? It’s me, Margaret.
Some info on how the OpenBSD libc’s malloc could have detected, neutered or reduced the impact of Heartbleed. Further info on OpenSSL’s broken free list implementation. Essentially, don’t implement your own memory handling routines; use the system ones or obvious alternatives.
The Heartbleed OpenSSL bug has been in the news a lot, and like many security stories there have been a few conspiracy theories floating around. Since OpenSSL is open source software, anyone can view the history of the project and see how the bug came about - but it does require understanding some tools. In this post I hope to help explain them. Step 1: Find the OpenSSL source. A good way to do this is to search for the project name plus git or svn or hg.
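From there, a handful of git commands cover most of the archaeology. The clone URL is the obvious GitHub mirror, and the file path and function name below reflect the tree as it stood around the Heartbleed era - treat the specifics as illustrative:

```shell
# Clone the source and dig into its history.
git clone https://github.com/openssl/openssl.git
cd openssl

# Search commit messages:
git log --oneline --grep=heartbeat

# The "pickaxe": find commits whose diffs add or remove a string,
# e.g. the function the bug lived in:
git log --oneline -S dtls1_process_heartbeat

# See who last touched each line of a file:
git blame ssl/d1_both.c
```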
When writing C code you often end up with code like this:
One of the annoying things about doing docs in doxygen is forgetting to type make between edits when checking what they render like. Luckily there are inotify-tools which, when combined with a simple shell loop, let you rebuild your docs whenever their source files change. By the time I’d switched to the browser and hit refresh, the docs had been rebuilt. Not exactly WYSIWYG editing, but it skips the “forget to run make” step, which is annoyingly tedious.
Recently I was working on a low-level library which I wanted to add a dependent library to. For things like this I really wish they had better docs. So if I was going to contribute to the world of low-level libraries, I was going to at least try to make some docs for it. First step was to pick a tool; in the end I went with doxygen, which seems to integrate well with autoconf.
I use an awful lot of terminal windows, and I’ve always tweaked my fonts to fit the most terminals on screen while staying readable. For a long time that font was Anonymous, but it appears the original author has made a new one with some nice tweaks. Note that this font works just fine on both Linux and OS X. Anonymous Pro in Action
First, a link on how to track website visitors without using cookies. Second, a quick tutorial on how to use curl to do a host of web requests - very useful for testing REST APIs from the shell.
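A few of the invocations I reach for most when poking at an API - httpbin.org is a public echo service, used here purely for illustration:

```shell
# Plain GET, silently (-s suppresses the progress meter):
curl -s https://httpbin.org/get

# POST a JSON body with the right Content-Type:
curl -s -X POST -H 'Content-Type: application/json' \
     -d '{"name":"demo"}' https://httpbin.org/post

# -i includes the response headers, handy for checking status codes:
curl -s -i https://httpbin.org/status/404

# HTTP basic auth:
curl -s -u user:passwd https://httpbin.org/basic-auth/user/passwd
```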