How to avoid losing your cron jobs one day

In my 14 years of working with Linux I’ve never done that… But there comes a time in every man’s life when he accidentally issues crontab -r instead of crontab -e.

Yes, that removes the current crontab. Yes, the "e" and "r" keys are conveniently located next to each other. No, it doesn’t prompt for any sort of confirmation. And yes, like the old saying goes, *nix is very user-friendly – it’s just choosy about who its friends are.

What to do:

  • Backups, obviously (a minimal example follows this list);
  • If you’re messing with the crontab, chances are, you’ve already edited it recently. Use a terminal emulator with an “instant replay” feature, like iTerm2, to go back in time slightly;
  • Grab the commands from the syslog. On Debian, this is a good starting point:
    grep CRON /var/log/syslog | awk '{$1=$2=$3=$4=$5=$6=$7=""; print $0}' | sort | uniq
  • Add an alias to your favorite shell: alias crontab="crontab -i"
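
On the backup point, a minimal sketch is to let cron back up its own crontab. The directory and schedule here are assumptions, adjust them to taste; note that % has to be escaped inside a crontab entry:

# Dump the current crontab to a dated file every night at 03:00
# (assumes ~/crontab-backups already exists)
0 3 * * * crontab -l > "$HOME/crontab-backups/crontab-$(date +\%F)"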

Shell scripting: “while read” loops breaking early

I stumbled upon this issue a couple of days ago while trying to debug a deployment script. The script would read entries from a CSV file and provision VMs one by one. The issue was that only the first entry from the file would get processed, after which the script would exit normally as if there were no more entries.

As it turns out, if you are using a “while read” construct to read lines from a file and something in the loop body also reads from stdin… well, that command will eat up the rest of the lines from the file, and the while loop will end early with nothing left to read.
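
You can see the effect with a throwaway example, where a plain cat stands in for whatever is reading stdin inside the loop:

# Prints only "got: line1": cat swallows the remaining lines from stdin
printf 'line1\nline2\nline3\n' | while read -r line; do
  echo "got: $line"
  cat > /dev/null
done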

In my case I had an ssh command in the loop. All it took to fix it was to stop ssh from reading stdin by adding the “-n” flag:

while IFS="," read -r newhost newipaddr; do
  ssh -n "$newhost" "my remote commands"
done < newhosts.csv
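
If the command in the loop doesn’t have an equivalent of -n, there are two generic ways to get the same effect. These are sketches only, with a hypothetical provision_vm standing in for the real command:

# Option 1: starve the command of stdin explicitly
while IFS="," read -r newhost newipaddr; do
  provision_vm "$newhost" "$newipaddr" < /dev/null
done < newhosts.csv

# Option 2 (bash): read the file on a separate file descriptor,
# leaving stdin untouched for whatever runs inside the loop
while IFS="," read -r -u 3 newhost newipaddr; do
  provision_vm "$newhost" "$newipaddr"
done 3< newhosts.csv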